Dataset schema (column: type, value range):

forum_id: stringlengths, 9 to 20
forum_title: stringlengths, 3 to 179
forum_authors: sequencelengths, 0 to 82
forum_abstract: stringlengths, 1 to 3.52k
forum_keywords: sequencelengths, 1 to 29
forum_decision: stringclasses, 22 values
forum_pdf_url: stringlengths, 39 to 50
forum_url: stringlengths, 41 to 52
venue: stringclasses, 46 values
year: stringdate, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
reviews: sequence
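The `reviews` column packs each forum's entire discussion thread into one nested record, with every individual note (meta-review, official review, comment) JSON-encoded inside `structured_content_str`. A minimal sketch of loading and unpacking such a record follows; the dataset identifier is a placeholder, not a published name, and the exact nesting may differ between exports.

```python
import json
from datasets import load_dataset  # Hugging Face `datasets` library

# Placeholder identifier: substitute the actual dataset path.
ds = load_dataset("username/openreview-forums", split="train")

record = ds[0]
print(record["forum_title"], "|", record["venue"], "|", record["forum_decision"])

reviews = record["reviews"]
if isinstance(reviews, str):  # some exports store the whole thread as one JSON string
    reviews = json.loads(reviews)

# Each entry of structured_content_str is itself a JSON-encoded note body.
for note_type, raw in zip(reviews["note_type"], reviews["structured_content_str"]):
    note = json.loads(raw)
    title = note.get("title", "(untitled)")
    score = note.get("rating") or note.get("recommendation") or ""
    print(f"{note_type}: {title} {score}")
```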
rJgvf3RcFQ
On Inductive Biases in Deep Reinforcement Learning
[ "Matteo Hessel", "Hado van Hasselt", "Joseph Modayil", "David Silver" ]
Many deep reinforcement learning algorithms contain inductive biases that sculpt the agent's objective and its interface to the environment. These inductive biases can take many forms, including domain knowledge and pretuned hyper-parameters. In general, there is a trade-off between generality and performance when we use such biases. Stronger biases can lead to faster learning, but weaker biases can potentially lead to more general algorithms that work on a wider class of problems. This trade-off is relevant because these inductive biases are not free; substantial effort may be required to obtain relevant domain knowledge or to tune hyper-parameters effectively. In this paper, we re-examine several domain-specific components that modify the agent's objective and environmental interface. We investigated whether the performance deteriorates when all these fixed components are replaced with adaptive solutions from the literature. In our experiments, performance sometimes decreased with the adaptive components, as one might expect when comparing to components crafted for the domain, but sometimes the adaptive components performed better. We then investigated the main benefit of having fewer domain-specific components, by comparing the learning performance of the two systems on a different set of continuous control problems, without additional tuning of either system. As hypothesized, the system with adaptive components performed better on many of the tasks.
[ "inductive biases", "components", "performance", "adaptive components", "deep reinforcement", "agent", "objective", "system", "many deep reinforcement" ]
https://openreview.net/pdf?id=rJgvf3RcFQ
https://openreview.net/forum?id=rJgvf3RcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1xQ0nIMl4", "H1eGa3Z5R7", "BJlyABDhaQ", "HJg5KSw26Q", "SJx1tmP3pQ", "rJg_hF_TnQ", "HJgVj1k327", "SJlnow4qh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544871115351, 1543277754417, 1542383047377, 1542382978169, 1542382454629, 1541405103639, 1541300124272, 1541191588372 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1271/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1271/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1271/Authors" ], [ "ICLR.cc/2019/Conference/Paper1271/Authors" ], [ "ICLR.cc/2019/Conference/Paper1271/Authors" ], [ "ICLR.cc/2019/Conference/Paper1271/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1271/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1271/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies inductive biases in DRL, by comparing with different reward shaping, and curriculums. The authors performed comparative experiments where they replace domain specific heuristics by such adaptive components.\\n\\nThe paper includes very little (new) scientific contributions, and, as such, is not suitable for publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not enough novel technical content nor insights\"}", "{\"title\": \"Did not misunderstand but also not a big factor in review\", \"comment\": \"Thanks for clarifying question: I did not misunderstand the empirical efforts. As you can see, my review mentions single \\\"environment\\\". I believe you conduct 57 games experiment in Arcade environment. Perhaps when I used Atari environment, it confused you.\\n\\nIn any case, the major issue of no substantial technical contributions and/or theoretical analysis still stands and hence my score also stands. With these two ingredients, the current empirical evaluation may (not necessarily) have been adequate but the paper needs more work in terms of contributions before it can be accepted.\"}", "{\"title\": \"Thanks for the comments and suggestions\", \"comment\": \"We thank the reviewer for the many positive comments.\\nWe will add a figure for each of the 3 motivating examples in the Appendix, thanks for the suggestion!\"}", "{\"title\": \"On \\\"inductive biases\\\", \\\"generality\\\" and other questions.\", \"comment\": \"Some of the questions raised by the reviewer suggest that there may have been a misunderstanding of the term \\u201cinductive bias\\u201d, possibly interpreted as referring to some form of statistical bias. \\u201cInductive Bias\\u201d is a well defined concept from the Machine Learning and Neuroscience literature and refers to the set of assumptions that go into a learning system (such as domain knowledge and heuristics). In the context of this paper we define and classify the various types of inductive biases under consideration in Section 2.\\n\\nRegarding how to measure \\\"generality\\\": in this paper we propose to measure the \\\"generality\\\" of an RL algorithm as the degree to which such algorithm can be ported to a different domain from the one it was proposed for, without forcing the practitioner to revisit the inductive biases that were incorporated in the original agent. Our experiments on Continuous Control show that adaptive solutions perform better in this respect than other heuristic inductive biases. 
\\n\\nAs always, the Actor-Critic update in equation 2 of Section 1 subsumes the tabular case, which can be seen by noting that in a tabular representation the gradient would only update the corresponding entry in the table.\"}", "{\"title\": \"On the empirical evidence provided in the paper\", \"comment\": \"In addition to 3 grid-world domains (designed specifically to highlight specific properties of the inductive biases considered in the paper), we also provide extensive experiments at scale on 57 Atari games and 28 continuous control tasks. This is a larger set of non-trivial environments than in the vast majority of deep RL papers. Perhaps the reviewer interpreted the Atari experiments (on 57 games) as having been run on a single Atari game?\"}", "{\"title\": \"Review for the paper: \\\"On Inductive Biases in Deep Reinforcement Learning\\\"\", \"review\": \"This paper focuses on deep reinforcement learning methods and discusses the presence of inductive biases in existing RL algorithms. Specifically, they discuss biases that take the form of domain knowledge or hyper-parameter tuning. The authors state that such biases raise a trade-off between generality and performance, wherein strong biases can lead to efficient performance but deteriorate generalization across domains. Further, it motivates that most inductive biases have a cost associated with them, and hence it is important to study and analyze the effect of such biases.\\n\\nTo support their insights, the authors investigate the performance of a well-known actor-critic model in the Atari environment after replacing domain-specific heuristics with adaptive components. The authors consider two ways of injecting biases: i) sculpting the agent's objective and ii) sculpting the agent's environment. They show empirical evidence that replacing carefully designed heuristics to induce biases with more adaptive counterparts preserves performance and generalizes without additional fine-tuning.\\n\\nThe paper focuses on an important concept and problem of inductive biases in deep reinforcement learning techniques. \\nAnalysis of such biases and methods to use them judiciously is an interesting future direction. The paper covers a lot of related work in terms of various algorithms and corresponding biases.\\nHowever, this paper only discusses such concepts at a high level and provides limited empirical evidence in a single environment to support its arguments. Further, both the heuristics used in practice and the adaptive counterparts that the paper uses to replace those heuristics are all available in existing approaches, so there is no novel contribution in that direction either.\\nFinally, the adaptive methods based on parallel environments and RNNs have several limitations, by the authors' own admission.\\n\\nOverall, the paper does not have any novel technical contributions or theoretical analysis of the effect of such inductive biases, which makes it very weak. Further, there is nothing surprising about the authors' claims, and many of the outcomes from the analysis are expected. 
The authors are recommended to consider this task more rigorously and provide stronger, concrete analysis of the effects of inductive biases on a variety of algorithms and environments.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper contains various numerical experiments to see the effects of some heuristics in reinforcement learning, but no definite answers are given.\", \"review\": \"This paper contains various numerical experiments to see the effects of some heuristics in reinforcement learning. Those heuristics include reward clipping, discounting for effective learning, repeating actions, and different network structures. However, since the training algorithms also greatly affect the performance of RL agents, it seems hard to draw any quantitative conclusions from this paper.\", \"detailed_comments\": \"1. It seems that actor-critic algorithms are defined for RL with function approximation. What is the tabular A2C algorithm? A reference in Section 3.1 would be better.\\n\\n2. This paper claims to study \\\"inductive biases\\\", which are not clearly defined. How to quantify those biases and how to measure \\\"generality\\\"?\\n\\n3. Are there any quantitative conclusions that can be drawn from the experiments?\\n\\n4. The performance of RL agents also relies on initialization and the training algorithms, and there are a lot of optimization tricks for deep learning. How to measure the \\\"inductive biases\\\" by ruling out the effects of training algorithms?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good summary and experimental evaluation of various inductive biases in deep reinforcement learning\", \"review\": \"The paper presents and evaluates different common inductive biases in Deep RL. These are systematically evaluated in different experimental settings.\\n\\nThe paper is easy to read and the authors explain the setting and their findings well. The comparisons and evaluations are well conducted and a valuable contribution to the literature. I would have liked some more details on the motivating example in Section 3.1, maybe with a figure supporting the explanation of the example.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
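The record above (and reviewer 2's comments in particular) revolves around fixed, domain-specific components such as reward clipping versus adaptive replacements. As a loose illustration only, not the authors' implementation, the sketch below contrasts the standard Atari clipping heuristic with a PopArt-flavoured running normalizer of the kind the adaptive-components literature proposes.

```python
import numpy as np

def clip_reward(r, bound=1.0):
    """Fixed inductive bias: the common Atari heuristic of clipping rewards to [-1, 1]."""
    return float(np.clip(r, -bound, bound))

class RunningReturnNormalizer:
    """Adaptive alternative (sketch): rescale returns by running statistics.

    Illustrative only; the paper's actual adaptive components are specified
    in the paper itself, not here.
    """

    def __init__(self, step_size=1e-3, eps=1e-8):
        self.mean, self.sq_mean = 0.0, 1.0
        self.step_size, self.eps = step_size, eps

    def update(self, g):
        # Exponential moving averages of the return and its square.
        self.mean += self.step_size * (g - self.mean)
        self.sq_mean += self.step_size * (g * g - self.sq_mean)

    def normalize(self, g):
        std = np.sqrt(max(self.sq_mean - self.mean ** 2, self.eps))
        return (g - self.mean) / std
```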
S1lIMn05F7
A Direct Approach to Robust Deep Learning Using Adversarial Networks
[ "Huaxia Wang", "Chun-Nam Yu" ]
Deep neural networks have been shown to perform well in many classical machine learning problems, especially in image classification tasks. However, researchers have found that neural networks can be easily fooled, and they are surprisingly sensitive to small perturbations imperceptible to humans. Carefully crafted input images (adversarial examples) can force a well-trained neural network to provide arbitrary outputs. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. In this paper, we propose a new defensive mechanism under the generative adversarial network (GAN) framework. We model the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game. We show empirically that our adversarial network approach works well against black box attacks, with performance on par with state-of-the-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.
[ "deep learning", "adversarial learning", "generative adversarial networks" ]
https://openreview.net/pdf?id=S1lIMn05F7
https://openreview.net/forum?id=S1lIMn05F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1lmqBBmxV", "BJli_EaR07", "B1x2VA5jCQ", "Bye9qmg80m", "rylMb7gLRX", "SygmUMgI07", "BkxjO3X5a7", "HJlkCrUa3Q", "BJeFNgN5nm", "rylwmWGq2X", "B1xHSDgVo7", "SJlj7BhF5X", "SJlz9Ez2tm" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "comment", "comment" ], "note_created": [ 1544930699166, 1543586930656, 1543380531903, 1543009170146, 1543009018223, 1543008843204, 1542237298831, 1541395911449, 1541189680852, 1541181727222, 1539733308668, 1539061027401, 1538167945776 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1270/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1270/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1270/Authors" ], [ "ICLR.cc/2019/Conference/Paper1270/Authors" ], [ "ICLR.cc/2019/Conference/Paper1270/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1270/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1270/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1270/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1270/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposed a GAN approach to robust learning against adversarial examples, where a generator produces adversarial examples as perturbations and a discriminator is used to distinguish between adversarial and raw images. The performance on MNIST, SVHN, and CIFAR10 demonstrate the effectiveness of the approach, and in general, the performance is on par with carefully crafted algorithms for such task.\\n\\nThe architecture of GANs used in the paper is standard, yet the defensive performance seems good. The reviewers wonder the reason behind this good mechanism and the novelty compared with other works in similar spirits. In response, the authors add some insights on discussing the mechanism as well as comparisons with other works mentioned by the reviewers. \\n\\nThe reviewers all think that the paper presents a simple scheme for robust deep learning based on GANs, which shows its effectiveness in experiments. The understanding on why it works may need further explorations. Thus the paper is proposed to be borderline lean accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A GAN approach to robust learning against adversarial examples.\"}", "{\"title\": \"In the Appendix\", \"comment\": \"The results for the wide resnet is in the Appendix due to space limitations.\\n\\nThe tables in the paper itself are also updated after tuning of the weight decay parameter. Most of the numbers are close to the original.\"}", "{\"comment\": \"\\\" We will update the results table accordingly. Using a deeper or wider network can push the results to 95% or above.\\\"\\n\\nI cannot find the updated results in the revised manuscript.\", \"title\": \"Where are your updated results?\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you for your valuable comments.\\n\\nFor an explanation of why we think our adersarial network approach works well in training robust neural networks, we have provided more intuitions on our design in the Methods section on why it works well against black box attacks and also more observations in the Discussions section. 
These are by no means a definitive explanation, but more like a sharing of our intuitions and experiences with the models that hopefully can stimulate further discussions. \\n\\nAs for the suggestions on visualizaiton, we have examined some of the perturbed images generated by our method and other methods. Qualitatively they don't look much different for the same perturbation size \\\\epsilon, even if they have different losses. So we decide not to include such visualizations because they are not particularly information in this case. But during the revision of this paper we have expanded various sections and provided extra experiments in the appendix through the reviewers' feedbacks.\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"Thank you for your valuable comments.\\n\\nThank you for pointing out the ambiguity in the argmax. It is supposed to be referring to the maximum index in the last layer of the discriminative network (1,2,..., k), corresponding to the k classes. We have added a sentence to clarify this. \\n\\nFor the usefulness of the generator, it is true that it learns to attack primarily the discriminative network it is trained against. However, the generator learned can sometimes be transferred to attack other models, although not necessarily as effective. In Table 5, the generator learned to attack model A, and were then used to attack other models A', B', and C'. It is very effective against A' trained using standard training but with a different random seed, somewhat effective against our adversarial network (C'), taking away close to 10% of accuracy. But it is not effective at all against adversarial PGD (B'). How well these learned generators can transfer to attack other models is an interesting question for further investigation. \\n\\nThank you for pointing out the issues with table captions. We have replaced the captions with 'classification accuracies under white box and black box attacks' to avoid ambiguites.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for your valuable comments.\\n\\nWe have already included Xiao et al. 2018 in the last paragraph of our related works section. \\nXiao et al. 2018 focused on using GAN as an attack method against a fixed pre-trained discriminative network, with another discriminative network co-trained to make sure the perturbed images look like the original. On the other hand our work is more focused on the defense using GAN, and we co-train the adversarial noise generator together with the discriminative network for classification (not the network for ensuring noisy images look like the original). We didn't include Lee et al. 2018 in our original version, and it has been included in the updated veresion. Lee et al. 2017 also tried to use GAN to defend against adversarial attacks, but there are two main difference to our work. First, the inputs to their generative network is the gradient of the discriminative network with respect to the image x, not just the image x as in our current work. This causes complex dependence of the gradient of the generative network parameters to the discriminative network parameters, and makes the parameter updates for the generative network more complicated. Second, there doesn't seem to be a single minimax objective that they are solving for in their work; the update rules for networks D and G optimize related but different objectives. Our work on the other hand has only one minimax objective, and the updates on D and G networks are directly derived from it. 
\\n\\nAs for the attacks suggested by the reviewers, we didn't include them for various reasons. \\nTargeted FGS is more useful when there are many close categories, as in ImageNet. Since we are mostly dealing with datasets with a small number of classes (10) in this paper, we believe the marginal benefit of including T-FGS is small since we already have FGS. The C&W attack is costly to run, and Madry et al. 2018 showed that it is no more powerful than PGD attacks, so we stick to PGD attacks in this paper. For GAN attacks, the results we have in Table 5 using generative networks to attack undefended models are equivalent to the GAN attacks used in earlier work by Xiao et al. 2018 and Baluja & Fischer 2017. We have also included extra results on GAN attacks on adversarial PGD and our adversarial networks in the Appendix. \\n\\nFor the question of why undefended models sometimes work better than defended models under black box attacks, it really depends on how the black box attacks are constructed. What we have observed across experiments is that black box attack examples constructed (with FGS or PGD) from models trained using the same method but different random seeds are the most effective in attacking the same type of models. Therefore it is possible for the undefended model to work well against adversarial examples constructed from the adversarial PGD (B') or our adversarial network (C') approach, as those adversarial examples might not transfer well to the undefended model. The undefended model is weak in the sense that it has low white box attack accuracies, and also it has the lowest black box accuracies when models of the same type (A') are used to attack it.\"}", "{\"comment\": \"This paper makes several black-box claims but no attacks that query the model were tried (e.g., the Decision Attack from ICLR'18 or SPSA from Uesato et al. 2018 at ICML'18). Could the authors try either of these attacks?\", \"title\": \"No black-box query attacks were tried\"}", "{\"title\": \"Proposes the use of GANs to improve robustness to adversarial instances; extensive results but lacks references and positioning relative to recent relevant arXiv papers\", \"review\": \"Summary: The paper proposes a GAN-based approach for dealing with adversarial instances, with the training of a robust discriminator that is able to distinguish adversaries from clean samples, and a generator that produces adversarial noise for a given clean input image in order to mislead the discriminator. In contrast to the state-of-the-art \\u201censemble adversarial training\\u201d approach, which relies on several pre-trained neural networks for generating adversarial examples, the authors introduce a way of dynamically generating adversarial examples on-the-fly using a generator; these examples, along with their clean counterparts, are then consumed for training the discriminator.\", \"quality\": \"The paper is relatively well-written, although a little sketchy, and its motivations are clear. The authors compare their proposed approach with a good variety of strong defenses such as \\u201censemble adversarial training\\u201d and \\u201cPGD adversarial training\\u201d, supporting their approach with convincing experiments.\", \"originality\": \"Xiao et al. (2018) used a very similar technique for generating new adversarial examples (generator attack), which are then used for training a robust discriminator. Likewise, Lee et al. (2018) also used GANs to produce perturbations for making images misclassified. 
Given this, what is the main novelty of this approach compared to (Xiao et al., 2018) and (Lee et al., 2018)? These references should be discussed in detail in the paper.\\n\\nMoreover, there is limited comparison with different attacks: why did the authors not compare against targeted attacks such as T-FGS, C&W, or GAN-attack?\\n\\nIt is really surprising that the undefended network works better (showing more robustness) than the defended network \\u201cadversarial PGD\\u201d on black-box attacks; why is this happening?\", \"references\": [\"Xiao, C., Li, B., Zhu, J. Y., He, W., Liu, M., & Song, D. (2018). Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610.\", \"Lee, H., Han, S., & Lee, J. (2017). Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN. arXiv preprint arXiv:1705.03387.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting method for robust deep learning\", \"review\": \"The paper \\\"A Direct Approach to Robust Deep Learning Using Adversarial Networks\\\" proposes a GAN solution for deep classification models faced with white and black box attacks. It defines an architecture where a generator network seeks to produce slight perturbations that succeed in fooling the discriminator. The discriminator is the targeted classification model.\\n\\nThe paper is globally well written and easy to follow. It presents related works well, and the approach is well justified. Though the global idea is rather straightforward from my point of view, it looks to be a novel - and effective - application of GANs. The implementation is well designed (it notably uses recent GAN stabilization techniques). The experiments are quite convincing, since the method looks to produce rather robust models without a loss of performance on clean data (which appears crucial to me and is not the case for its main competitors).\", \"minor_comments\": [\"eq1 : I do not understand the argmax (the support is missing). It corresponds to the class with the highest probability, I suppose, but...\", \"The authors say that GANs are usually useful for the generator (this is not always the case, by the way), while in their case both the obtained discriminator and generator have value. I do not understand in what way the generator could be useful here, since it is only fitted to attack its own model (so what is the interest; are its attacks transferable to other models?)\", \"Tables 1 and 2 are described as giving attack accuracies. But the scores reported are classification accuracies, right? These are rather defense accuracies, so...\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Robust defensive design using adversarial networks\", \"review\": [\"The paper proposed a defensive mechanism against adversarial attacks using GANs. The general network structure is very similar to a standard GAN -- generated perturbations are used as adversarial examples, and a discriminator is used to distinguish between them. 
The performance on MNIST, SVHN, and CIFAR10 demonstrates the effectiveness of the approach, and in general, the performance is on par with carefully crafted algorithms for such tasks.\", \"pros\", \"the presentation of the approach is clean and easy to follow.\", \"the proposed network structure is simple, but it surprisingly works well in general.\", \"descriptions of training details are reasonable, and the experimental results across several datasets are extensive\", \"cons\", \"the network structure may not be novel, though the performance is very nice.\", \"there are algorithms that are carefully crafted to perform the network defense mechanism, such as Samangouei et al., 2018. However, the method described in this paper, despite being simple, works very well. It would be great if the authors could provide more insights on why it works well (though not the best, but still reasonable), besides only demonstrating the experimental results.\", \"it would also be nice if the authors could visualize the behavior of their design by showing some examples from the datasets they are working on, and provide side-by-side comparisons against other approaches.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Reply to questions on experimental results\", \"comment\": \"1. Thank you for pointing out this issue. We found that the weight decay we use (1E-5) was too small for CIFAR10. By changing the weight decay to 1E-4, the models can achieve accuracies of about 92% on clean data. We will update the results table accordingly. Using a deeper or wider network can push the results to 95% or above.\\n\\n2. FGSM training, depending on how it's done, could lead to label leakage during test time. Also, it's susceptible to the more powerful PGD attack. Due to space and training time constraints, we focus on adversarial training with the powerful PGD attack in this work.\"}", "{\"comment\": \"The paper writes prior defenses are \\\"... defensive distillation Papernot et al. (2016b), using randomization at inference time Xie et al. (2018), and thermometer encoding (Buckman et al., 2018), etc.\\\" These might not be the best examples to pick, since these have been shown to be broken: https://arxiv.org/abs/1607.04311 and https://arxiv.org/abs/1802.00420\", \"title\": \"Prior defenses mentioned\"}", "{\"comment\": \"I have a few concerns about the experiments on CIFAR10:\\n1. Your reported accuracy on clean data is relatively low. In contrast, ResNet achieves an accuracy of 93%~95%. See, for example, https://github.com/bearpaw/pytorch-classification.\\n2. In Table 3, your method achieves ~75% accuracy against FGSM and PGD adversarial samples in the blackbox setting. However, I implemented FGSM adversarial training and obtained ~85% accuracy under the exact same setting. FGSM is much simpler, while it yields better results. Am I missing anything?\", \"title\": \"Questions on Experimental Results\"}" ] }
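For orientation while reading the thread above: the paper's central object is a single minimax game in which a generator G crafts bounded perturbations of the input and the classifier D is trained on both clean and perturbed images. The sketch below shows one such joint training step; the tanh bounding, the loss choices, and eps are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def minimax_defense_step(G, D, x, y, opt_G, opt_D, eps=0.3):
    """One step of jointly training a perturbation generator G against a classifier D.

    G maps an image batch x to a perturbation bounded in [-eps, eps] and tries to
    maximize D's classification loss; D is then fit on clean and perturbed inputs.
    """
    # Generator step: ascend D's loss on perturbed inputs (descend its negation).
    delta = eps * torch.tanh(G(x))
    loss_G = -F.cross_entropy(D(x + delta), y)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Classifier step: fit clean and (detached) perturbed inputs.
    delta = eps * torch.tanh(G(x)).detach()
    loss_D = F.cross_entropy(D(x), y) + F.cross_entropy(D(x + delta), y)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()
```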
ryxLG2RcYX
Learning Abstract Models for Long-Horizon Exploration
[ "Evan Zheran Liu", "Ramtin Keramati", "Sudarshan Seshadri", "Kelvin Guu", "Panupong Pasupat", "Emma Brunskill", "Percy Liang" ]
In high-dimensional reinforcement learning settings with sparse rewards, performing effective exploration to even obtain any reward signal is an open challenge. While model-based approaches hold promise of better exploration via planning, it is extremely difficult to learn a reliable enough Markov Decision Process (MDP) in high dimensions (e.g., over 10^100 states). In this paper, we propose learning an abstract MDP over a much smaller number of states (e.g., 10^5), which we can plan over for effective exploration. We assume we have an abstraction function that maps concrete states (e.g., raw pixels) to abstract states (e.g., agent position, ignoring other objects). In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP). Concurrently, we learn a worker policy to travel between abstract states; the worker deals with the messiness of concrete states and presents a clean abstraction to the manager. On three of the hardest games from the Arcade Learning Environment (Montezuma's, Pitfall!, and Private Eye), our approach outperforms the previous state-of-the-art by over a factor of 2 in each game. In Pitfall!, our approach is the first to achieve superhuman performance without demonstrations.
[ "Reinforcement Learning", "Hierarchical Reinforcement Learning", "Model-based Reinforcement Learning", "Exploration" ]
https://openreview.net/pdf?id=ryxLG2RcYX
https://openreview.net/forum?id=ryxLG2RcYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1e-jkTBxV", "Hye9J5a1k4", "BkgV4LeC0m", "B1eAD7KoAX", "BkeBNpfsCm", "HJeSXCh9Cm", "rJeSWZK9A7", "r1eQOgK90m", "BklWPRFqpQ", "SygvApF5TQ", "ryl096Y5pm", "rylm4WhLTQ", "B1ehFbba37", "SJl2_N-q2m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545093016986, 1543653857665, 1543534123544, 1543373670272, 1543347500672, 1543323164806, 1543307517468, 1543307370646, 1542262360675, 1542262223499, 1542262166340, 1542009130828, 1541374340378, 1541178484347 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1269/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/Authors" ], [ "ICLR.cc/2019/Conference/Paper1269/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1269/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1269/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a novel approach to exploration in long-horizon / sparse reward RL settings. The approach is based on the notion of abstract states, a space that is lower-dimensional than the original state space, and in which transition dynamics can be learned and exploration is planned. A distributed algorithm is proposed for managing exploration in the abstract space (done by the manager), and learning to navigate between abstract states (workers). Empirical results show strong performance on hard exploration Atari games.\\n\\nThe paper addresses a key challenge in reinforcement learning - learning and planning in long horizon MDPs. It presents an original approach to this problem, and demonstrates that it can be leveraged to achieve strong empirical results. \\n\\nAt the same time, the reviewers and AC note several potential weaknesses, the focus here is on the subset that substantially affected the final acceptance decision. First, the paper deviates from the majority of current state of the art deep RL approaches by leveraging prior knowledge in the form of the RAM state. The cause for concern is not so much the use of the RAM information, but the comparison to other prior approaches using \\\"comparable amounts of prior knowledge\\\" - an argument that was considered misleading by the reviewers and AC. The reviewers make detailed suggestions on how to address these concerns in a future revision. Despite initially diverging assessments, the final consensus between the reviewers and AC was that the stated concerns would require a thorough revision of the paper and that it should not be accepted in its current stage.\\n\\nOn a separate note, a lot of the discussion between R1 and the authors centered on whether more comparisons / a larger number of seeds should be run. The authors argued that the requested comparisons would be too costly. A suggestion for a future revision of the paper would be to only run a large number (e.g., 10) of seeds for the first 150M steps of each experiment, and presenting these results separately from the long-running experiments. 
This should be a cost efficient way to shed light on a particularly important range, and would help validate claims about sample efficiency.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"innovative approach and strong results, concerns about comparison to baselines\"}", "{\"title\": \"Summary of Updated Draft\", \"comment\": \"We thank all the reviewers for their feedback. We uploaded a significantly improved draft at the end of the rebuttal period to incorporate the reviewer\\u2019s feedback. Here is a summary of the main changes:\\n\\n1) To measure the significance of our prior knowledge (RAM state information), we evaluated AbstractStateHash on all 3 games (details in Section 6). AbstractStateHash (a variant of SmartHash) combines the prior state-of-the-art method of count-based exploration with the exact RAM state information used by our approach. While AbstractStateHash performs comparably to the prior state-of-the-art on two games, it performs poorly on one and is outperformed by our approach by >2x on each game. This suggests that the RAM state information does not trivialize the games and that prior state-of-the-art methods do not effectively leverage this information.\\n\\n2) We simplified the presentation of our algorithm to highlight the key ideas (Section 2, 3, 4). The key idea behind our approach is to construct the abstract Markov Decision Process (MDP), a representation of the task (the concrete MDP) meeting several critical properties:\\n - Its state space is low-dimensional, making it computationally tractable to plan (e.g., via value iteration). Planning\\n enables our approach to perform targeted exploration.\\n - We maintain accurate estimates of its transition dynamics and rewards, enabling us to plan without compounding\\n errors.\\n - At any point in time, it is an accurate representation on a subset of the abstract states, and eventually, our\\n approach grows it to cover all abstract states. This enables our approach to learn a high-reward policy on the\\n abstract MDP, which is simple because the abstract MDP is so small. Then we use that policy to obtain a\\n high-reward policy on the concrete MDP.\\nWe incrementally grow the abstract MDP by learning a worker policy, which learns the subtask of navigating between pairs of nearby abstract states in the abstract MDP.\\n\\n3) We updated our related works (Section 7) to better compare with the hierarchical reinforcement learning (HRL) literature and Roderick et al., 2017. The key difference between our work and many prior HRL works is our construction of the abstract MDP. While other HRL works also operate in latent abstract state spaces and learn skills / options, they do not enforce that the latent abstract state space forms a MDP with the learned skills, which prevents them from exploiting properties of MDPs as our approach does: i.e., planning and avoiding exponentially many state histories. Roderick et al., 2017 also forms an abstract MDP like our approach, but it differs on crucial design decisions, (e.g., how to grow the abstract MDP), which causes it to perform nearly an order of magnitude worse than our approach (see Section 7 for details).\\n\\n4) We improved the rigor of our theoretical results (Section 5 and Appendix C). Our results concern sample complexity: the number of samples required to learn a near-optimal policy with high-probability. 
Prior algorithms (e.g., R-MAX) provably guarantee learning a near-optimal policy, but require so many samples that the guarantee is vacuous. In contrast, for a subclass of MDPs, our approach provably learns a near-optimal policy in an exponentially smaller number of samples.\"}", "{\"title\": \"Our approach is relatively sample efficient\", \"comment\": \"We are definitely sympathetic to this concern and will note this in future drafts. Importantly, our approach is *more sample efficient* than the prior state-of-the-art (SOTA), and achieves SOTA results when compared with other approaches at 150M frames (the number of frames used by Ostrovski, et al., 2017).\\n\\nConcretely, our approach achieves higher reward than the prior SOTA at every point along the training curves in Montezuma\\u2019s Revenge and Private Eye, except from ~85M to ~95M frames of training, where SmartHash achieves roughly the same reward as our approach on Montezuma's Revenge. Directly comparing sample complexity with SOORL (Keramati et al., 2018) on Pitfall is not possible, because they save samples by manually specifying an extremely simplified model class and manually extracting all objects.\", \"specifically_at_150m_frames_of_training\": \"- On Montezuma's Revenge, our approach achieves a mean reward of 4875, \\n compared to SmartHash (4645), DQN-CTS (3705), and DQN-PixelCNN (2514).\\n - On Pitfall, our approach achieves a mean reward of 332 compared to 80 from \\n SOORL. Notably, no prior approaches have achieved positive reward on Pitfall \\n without prior knowledge much stronger than ours (e.g., demonstrations or \\n knowing a simplified model class and extracting all objects as in SOORL).\\n - On Private Eye, our approach achieves a mean reward of 35897, compared to DQN- \\n PixelCNN (15806). Note that DQN-PixelCNN achieves a mean reward of 15806 in the \\n middle of training; by 150M frames, its performance actually drops to 7787.\\n\\nReporting results on more frames enables us to differentiate our approach, which continues to learn even after 150M frames of training, from other approaches that plateau (e.g., SmartHash, DQN-PixelCNN):\\n - On Montezuma's Revenge, SmartHash barely improves after 150M frames, \\n achieving only 5001 reward after 2B frames of training. In contrast, our \\n approach continues to learn and achieves a mean reward of 11020.\\n - On Pitfall, our approach surpasses the average reward of a strong learning from \\n demonstrations approach, ApeX DQfD (Pohlen et al., 2018), by ~800M frames \\n (~4000 reward). By ~1.6B frames, our approach surpasses human performance \\n (6464) and achieves a mean reward of 9959 after 2B frames of training. Out of \\n curiosity, we ran a single seed on Pitfall for even longer. This single seed achieved a \\n reward of 29000 by 5B frames of training, and a reward of 35000 by 20B frames of \\n training. We did not run multiple seeds for so many frames due to computational \\n resources.\\n - On Private Eye, our approach improves more slowly. However, if we change a single \\n hyperparameter (Appendix B.4), our approach achieves a mean reward of >60000 \\n by 200M frames of training, nearing superhuman performance. 
All other results are \\n from running the same set of minimally tuned hyperparameters across all games, \\n where those hyperparameters were exclusively tuned on Montezuma's Revenge.\\n\\nFinally, our theoretical results (Section 5 and Appendix C) provide sample complexity guarantees that require exponentially fewer samples than prior algorithms with sample complexity guarantees (e.g., R-MAX, MBIE-EB).\"}", "{\"comment\": \"Thanks for the response, making that change will alleviate most of my concern. As a side note, I just noticed that in Figure 2 your Montezuma agents have been trained for 2 billion frames of experience. For fairness, you should probably note that this is about 10x more experience than Ostrovski et al. and Bellemare et al.'s agents were trained on. (Or expand the table to include results after 150m frames if you're going to put the figures side-by-side.) There's a worrying trend in recent work on hard exploration games to amp up the training time without mentioning it. (The RND paper that appeared concurrently with this one underplays sample efficiency too.)\", \"title\": \"Probably worth mentioning training time too\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the comment. We compare with the non-demonstration approach achieving the highest *mean* reward, averaged across multiple seeds, because an algorithm's *max* reward across multiple seeds is typically not fully representative of its performance (Henderson et al., 2017). While DQN-PixelCNN (Ostrovski et al., 2017) and DQN-CTS (Bellemare et al., 2016) match the *max* performance of SmartHash, which we will note in future drafts, they perform much worse on *average*. At the 150M frames reported by these works, SmartHash achieves a mean reward of 4645, while DQN-CTS achieves a mean reward of 3705, and DQN-PixelCNN achieves a mean reward of 2514.\"}", "{\"comment\": \"Benchmarking against SmartHash on Montezuma's Revenge seems a bit arbitrary to me, given that Ostrovski et al. also reported a maximum score of 6,600 for their best seed and they *don't* exploit access to the game's RAM. (The assumptions that they do make are certainly not \\\"comparable\\\" to assuming RAM access).\\n\\nI'm not saying this is definitely the case, but choosing to cite SmartHash instead looks like a deliberate attempt to support your argument about prior knowledge, especially given that you're aware of Ostrovski et al.'s work. At the very least, you should give Ostrovski et al. credit for reaching 6,600 too.\", \"title\": \"Why not compare to Ostrovski et al. in Montezuma's Revenge?\"}", "{\"title\": \"Reply to Reviewer 1 [2/2]\", \"comment\": \"Responding to R1's additional feedback:\\n\\nR1 asks if our method applies to continuous spaces. Our method applies to continuous spaces with no changes, we can just discretize the abstract state (not the concrete state). In particular, our method may be well-suited for many robotics tasks, which often have the full state (e.g., joint angles and object positions) available. For example, in a task like stacking blocks with a robotic arm, a good state abstraction function would be the position of the end effector and blocks, which are directly available in the state (e.g., in Stacker from DM Control Suite).\\n\\nR1 says that the randomized exploration used by the discoverer is underwhelming. We view the simplicity of the discoverer as advantageous. 
Fundamentally, exploration requires some degree of randomness, and we were already able to achieve state-of-the-art results without overcomplicating the discoverer. We note that this random exploration is only for locally discovering nearby abstract states. Globally, we drive exploration by incrementally growing the safe set (renamed known set in the updated draft).\\n\\nR1 asks for experiments that do not use RAM state information. We clarify that we use the RAM state information for the state abstraction function, which is a fundamental component of our work, so it is not possible to run experiments without this RAM information. However, we explore the robustness of our method to the exact chosen abstraction in section 7.4 and find that our method achieves state-of-the-art results over a wide range of state abstraction functions, suggesting that alternate state abstraction functions could be used. We also note that our experiments compare with state-of-the-art approaches, which also use prior knowledge comparable to our usage of RAM state information.\"}", "{\"title\": \"Reply to Reviewer 1 [1/2]\", \"comment\": \"We thank Reviewer 1 for their detailed comments and feedback. Reviewer 1\\u2019s main concerns are 1) that the related works section does not sufficiently frame our work with previous literature, 2) that the proofs of theoretical guarantees are not sufficiently rigorous, and 3) that the experiments section is not comprehensive enough. We have posted a significantly updated new draft to address these concerns.\\n\\n-------------------------------------\\n\\nExperiments\\n\\nReviewer 1 claims that we do not sufficiently compare with enough other methods, and specifically asks for comparisons with Feudal Networks (FuN) and Roderick et al., 2017. We already comprehensively compare with the prior non-demonstration state-of-the-art, which use a comparable amount of prior knowledge, in each game. Since we already compare with the prior state-of-the-art approaches, and other approaches perform significantly worse than the prior state-of-the-art approaches, we do not compare with the many other deep RL approaches. In particular, FuN and Roderick et al., 2017 both report results on Montezuma\\u2019s Revenge. The prior state-of-the-art approach we compare against, SmartHash, outperforms these approaches by 1.75x and 4x respectively, at the number of frames they report (200M and 50M respectively). Our approach further outperforms SmartHash by over 2x.\\n\\nReviewer 1 further asks for evaluation on more games. We believe that we have already demonstrated a significant improvement over the prior state-of-the-art, and additional experiments could be prohibitively expensive. In particular, we follow Aytar et al., 2018, and evaluate on 3 of the hardest exploration games from the Arcade Learning Environment. We do not evaluate on many of the simpler other games (e.g., Breakout), because they do not require sophisticated exploration and can already be solved with current state-of-the-art methods. We use the same set of minimally tuned hyperparameters (tuned only on Montezuma\\u2019s Revenge) and obtain new state-of-the-art results by over 2x, suggesting that our approach can generalize to new tasks. Our results are not cherry-picked as R1 suggests: following many recent deep RL works, e.g., Ostrovski et al., 2017, Tang et al., 2017, we run 4 seeds on each task, and obtain statistically significant results. 
Even our *worst seed* outperforms or is competitive with the prior state-of-the-art *best seed*.\\n\\nWe note that running 10 seeds would cost approximately $30,000 per additional game in compute. Renting the appropriate equipment (e.g., via Google Cloud) to run a single seed to completion costs ~$1,500. To run 20 seeds (10 for our approach, 10 for the prior state-of-the-art) would cost 20 x $1,500 = $30,000, or roughly the median US annual salary.\\n\\n---------------------------------------\\n\\nRelated Works\\n\\nWe\\u2019ve updated the related works section in our recently posted draft to more carefully compare with prior work. Please see Sections 1 and 7 for the updated related work. The main critical difference between our work and other HRL works is that we build an abstract MDP, which enables us to plan for targeted exploration; other works also learn skills and operate in latent abstract state spaces, but not necessarily in a way that satisfies the properties of an MDP, which can make effectively using the learned skills difficult.\\n\\n--------------------------------------\\n\\nTheory\\n\\nIn the updated draft of our paper, we have improved the rigor of the theory section: please see Section 5 and Appendix C for the updated theory. To summarize: we\\u2019re interested in the sample complexity of RL algorithms, i.e., the number of samples required for the learned policy to become near-optimal (achieve reward at most epsilon less than the optimal policy). Standard results (e.g., MBIE-EB, R-MAX) can guarantee a near-optimal policy, but they require so many samples (polynomial in the size of the state space) in deep RL settings that the guarantees are effectively vacuous. In contrast, for a subclass of MDPs, our approach provably learns a near-optimal policy in a number of samples polynomial in the size of the *abstract* MDP.\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for their comments. Reviewer 3 points out the strong state-of-the-art performance of our approach as a strength and mentions prior knowledge (our use of RAM state information) as a minor weakness. To clarify, in our experiments, we outperform previous non-demonstration state-of-the-art approaches that use a comparable amount of prior knowledge. We discuss our usage of prior knowledge in greater detail in the section titled \\u201cPrior Knowledge\\u201d in our response to Reviewer 2.\"}", "{\"title\": \"Reply to Reviewer 2 [2 / 2]\", \"comment\": \"Complexity\\n\\nWhile our approach has many pieces, it consists of three highly modularized components with simple interfaces: the manager, worker, and discoverer. These components can be (and in our case were) developed and improved separately, significantly limiting the effective complexity of working with the system. For example, the worker can use any state-of-the-art RL algorithm to learn its goal-conditioned policy. In addition, in contrast to most end-to-end deep RL methods whose metrics (e.g., Q-values, loss functions) are hard to interpret, the metrics in the framework are interpretable and make debugging and improving the system easier. For example, the growth of the safe set indicates good exploration, and the number of episodes required for the worker to learn each transition indicates how well the worker\\u2019s RL algorithms are learning. We plan to release our code to further aid reproducibility efforts.\\n\\nReviewer 2 notes that our approach has many (19) hyperparameters. 
We used the same hyperparameters to achieve state-of-the-art performance on all games and only tuned (exclusively on Montezuma\\u2019s Revenge) 4 hyperparameters total, suggesting that applying our approach to new tasks may not require heavy hyperparameter tuning. In addition, while our approach does have many hyperparameters, the total number of hyperparameters is comparable to other approaches, e.g. DQN-CTS has 14 hyperparameters. \\n\\n----------------------------------\\n\\nMinor\\n\\nWe thank Reviewer 2 for pointing out these minor issues and will address them in newer drafts, which we will post shortly.\"}", "{\"title\": \"Reply to Reviewer 2 [1 / 2]\", \"comment\": \"We would like to thank Reviewer 2 for their detailed and thoughtful feedback! Reviewer 2 raises two main concerns: 1) that our approach requires prior knowledge and 2) that our approach is complicated, which we address in the two sections below:\\n\\n----------------------------------------\\n\\nPrior Knowledge\\n\\nIn this work, we assume access to prior knowledge (i.e., RAM state information) in the form of the state abstraction function. However, in our experiments, we compare with state-of-the-art approaches that use a comparable amount of prior knowledge (these approaches use more prior knowledge in 1 game, the same prior knowledge in 1 game, and less prior knowledge in 1 game). In each game, we compare with the highest scoring non-demonstration approach and we achieve new state-of-the-art results in each game, by over 2x:\\n\\n- In Montezuma\\u2019s Revenge, we compare with SmartHash, which requires RAM state information equivalent to the prior \\nknowledge used by our approach. Our approach achieves over 2x as much reward as SmartHash on average.\\n- In Pitfall!, we compare with SOORL, which requires parsing out all the relevant objects on the screen, prior knowledge much stronger than that used by our approach. Our approach achieves over 10x as much reward on average. In addition, we also compare with Apex DQfD, which uses expert demonstrations, even stronger prior knowledge. Our approach achieves about 2.5x the reward of Apex DQfD on average. We note that no prior approach has ever achieved >0 reward on Pitfall! with only RAM state information (our approach achieves ~10K reward).\\n- In Private Eye, we compare with DQN-CTS, which encodes the prior knowledge that semantically different states tend to have very different pixels. DQN-CTS uses weaker prior knowledge than our approach, but we compare with DQN-CTS because it achieves the best performance out of all non-demonstration prior approaches. Our approach achieves over 2x as much reward as DQN-CTS on average.\\n\\nTo further understand what portion of the performance of our method is due to just prior knowledge, we\\u2019ve run additional experiments with AbstractStateHash, an approach (described in greater detail in the paper) which uses the same prior knowledge as our approach and uses this prior knowledge to do count-based exploration (count-based exploration methods have achieved the prior state-of-the-art results in the hardest exploration games). In the initial submission, we already reported results of AbstractStateHash on Montezuma\\u2019s Revenge, which achieves results competitive with the prior state-of-the-art; our approach achieves >2x the reward of AbstractStateHash. We will soon submit an updated draft with results of AbstractStateHash on Pitfall! and Private Eye and we provide a summary below. 
\\n\\n- On Pitfall!, AbstractStateHash achieves 0 reward (comparable with many strong approaches, e.g., DQN-PixelCNN and Rainbow), whereas our approach achieves ~10K reward.\\n- On Private Eye, our approach achieves >100x the reward of AbstractStateHash.\\n\\nThese results suggest that while the RAM state prior knowledge does provide our approach valuable signal, prior state-of-the-art methods do not effectively leverage this prior knowledge.\\n\\nIn addition, in Section 7.5, we analyze the effect of varying the state abstraction function to answer the question: how hard is it to find a state abstraction function that works well with our method? We find that our approach significantly outperforms the prior state-of-the-art under many abstract state representations. This alleviates the burden of selecting the perfect state abstraction function for new tasks (in our case, for each game, we selected an abstraction function and never changed or tuned it), and suggests that future work could find different state abstraction functions requiring less prior knowledge. In other domains, it may also be possible to easily extract abstract states from the state. For example, many robotics tasks have fully observable states (e.g., consisting of joint angles of a robotic arm and positions of objects). In these tasks, a good state abstraction function might just extract the dimensions corresponding to the position of the gripper and the positions of the objects.\"}", "{\"title\": \"Relevant topic, poor evaluation, unclear related work\", \"review\": \"This paper deals with learning abstract MDPs for planning in tasks that require long horizons due to sparse rewards.\\nThis is an extremely important and timely topic in the RL community.\\n\\nThe paper is generally clear and well written.\\n\\nThe proposed algorithm seems reasonable and it is conceptually simple to understand. In the current experimental results presented, it also seems to outperform the alternative baselines.\\n\\nNonetheless, the paper has a few flaws that significantly impact the stated contributions and reduce my rating.\\n1) A stated contribution is the set of theoretical guarantees about the performance of the algorithm. This analysis is not currently included in the main body of the manuscript, but rather in the appendix, which I find rather annoying. Moreover, said analysis is in my opinion not sufficiently rigorous, with hand-wavy arguments, no formal proofs, and unclear terms (e.g. how do you define near-optimal?). Moreover, as observed by the authors, this analysis currently relies on strong assumptions that might make it rather unrealistic. Overall, if you want to claim theoretical guarantees you will have to significantly improve the manuscript.\\n2) Related work, although extensive in terms of the number of references, does not help to place this work in the literature. Listing related work is not the same as describing similarities and differences compared to previous methods. For example, a paper that obviously comes to mind is \\\"FeUdal Networks for Hierarchical Reinforcement Learning\\\". What are the differences to your approach? Also, please place the related work earlier on in the paper. Otherwise, it is impossible for a reader to correctly and objectively relate your proposed approach to previous literature.\\n3) In its current form, the experimental results are extremely cherry-picked, with a very small number of tasks evaluated, and for each task a single selected baseline used. 
This needs to be changed: a) you should run all the baselines for each of the current tasks b) you should also expand the experiments evaluated to include tasks where it is not obvious that a hierarchy would help/is necessary c) you should include more baselines. feudal RL should be one, Roderick et al 2017 should be another one (especially considering your discussion in Sec 8)\", \"additional_feedback\": [\"The paper is currently oriented towards discrete states. What can you say about continuous spaces?\", \"The use of random exploration for the discoverer is underwhelming. Have you tried different approaches? Would more advanced exploration techniques work or improve the performance?\", \"Using only 4 seeds seems too little to provide accurate standard deviations. Please run at least 10 experiments.\", \"The use of RAM is a fairly serious limitation of your experimental setting in my view. You should include results also for the pixel space, even if negative. Otherwise, this choice is incomprehensible.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Effective but complex method which achieves good exploration performance conditioned on substantial prior knowledge\", \"review\": \"This paper considers how to effectively perform exploration in the setting where a difficult, high-dimensional MDP can be mapped to a simpler, lower-dimensional MDP. They propose a hierarchical approach where a model of the abstract MDP is incrementally learned, and then used to train sub-policies to transition between abstract states. These sub-policies are trained using intrinsic rewards for transitioning to the correct state, and the transition probabilities in the abstract MDP reflect how well a sub-policy can perform the transition.\\n\\nThe approach is evaluated on three difficult Atari games, which all require difficult exploration: Montezuma's Revenge, Pitfall and Private Eye, and is shown to achieve good performance in all of them. Furthermore, the model can be used to generalize to new tasks by changing the rewards associated with different transitions. \\n\\nThe main downside with this paper is that the mapping from original state (i.e. pixels) to the abstract state is assumed to be known beforehand, which requires prior knowledge. The authors hardcode this mapping for each of the games by fetching the relevant bits of information from RAM. This prevents fair comparison to many other methods which only use pixels, and makes this paper borderline rather than strong accept.\", \"quality\": \"the method is evaluated on difficult problems and shown to perform well. The experiments are thorough and explore a variety of dimensions such as robustness to stochasticity, granularity of the abstract state and generalization to new tasks. The approach does strike me as rather complicated though - it requires 19 (!) different hyperparameters as shown in table 2. The authors do mention that many of these did not require much tuning and they intend on making their code public. Still, this suggests that re-implementation or extensions by others may be challenging. 
Are all of these moving parts necessary?\", \"clarity\": \"the paper is well-written, for the most part clear, and the details are thoroughly described in the appendix.\", \"originality\": \"this approach in the context of modern deep learning is to my knowledge novel.\", \"significance\": \"This paper provides a general approach for hierarchical model-based planning when the mapping from the hard MDP to the easy one is known, and in this sense is significant. It is limited by the assumption that the mapping to abstract states is known. I suspect the complexity of the approach may also be a limiting factor.\", \"pros\": [\"good results on 3 challenging problems\", \"effective demonstration of hierachical model-based planning\"], \"cons\": [\"requires significant prior knowledge for state encoding\", \"complicated method\"], \"minor\": \"- in the intro, last paragraph: \\\"Our approach significantly outperforms previous non-demonstration SOTA approaches in all 3 domains\\\". Please specify that you use extra knowledge extracted from RAM, otherwise this is misleading. \\n- Algorithm 1: nagivate -> navigate\\n- Section 4, last sentence: broken appendix link. \\n- Bottom of page 6: \\\"Recent work on contextual MDPs...as we do here\\\" is not a sentence. \\n- In related work, it would be nice to mention some relevant early work by Schmidhuber on subgoal generation: http://people.idsia.ch/~juergen/subgoals.html\\n\\n\\n\\n*** Updated ***\\n\\nAfter reading the updated paper, responses, other reviews, and looking at related works more closely, I have changed my score to a 5. This is due to several factors. \\n\\nAlthough the paper's core idea is definitely interesting, the fact that they use hardcoded features, rather the standard setup which uses pixels, makes comparison to other methods much more complicated. In particular, I think that the comparison to DQN-PixelCNN is unfair, as this other method makes very few assumptions about the inputs (only that they are pixels). The authors sort of point this out in the main text, but this is somewhat misleading. They say \\\"PixelCNN uses less prior knowledge than our approach\\\". In fact, it uses as much prior knowledge as any RL method which operates on pixels. Granted, this is nonzero, but it's vastly less than what this paper's method assumes. The other comparison is to SOORL (which uses a different state encoding altogether). The comparison to SmartHash is fairer, although the variant of SmartHash they compare against is not the main method the paper proposes (a generic autoencoder-based state encoding which makes minimal assumptions about the input). It would have been better if the authors included experiments for their method using such a learned state encoding.\\n\\nReporting SOTA results on very hard tasks using extra hardcoded features or other domain knowledge is potentially misleading to the community as to how far along we are in solving these tasks, and extra care should be taken to put these results in context. Otherwise, for those not familiar with the subtleties, this makes it seem like these tasks are being solved when in fact they are not. My concern is that other works may then be asked to be compared against these artificially high results. Having many different task setups also makes comparison between different published works confusing in general. 
Other works (such as Ostrovski et al) have been able to make progress on these tasks while staying within the standard pixel-based framework.\\n\\nThese concerns would have been partially mitigated had the authors made it *very* clear that they were assuming substantial prior knowledge, which makes their method non-comparable to others which do not make this assumption. This could have been done in the introduction (which was one of my comments, but this was not included in the updated draft). I.e., something to the effect of \\\"We emphasize that our approach assumes substantially more prior knowledge than other approaches which operate only on pixels, and as such is not directly comparable with these approaches\\\". In addition, I would have liked if the authors had followed the suggestion of Reviewer 1 to include results in pixel space, even if negative, but this was not done either (using a simple autoencoder-based representation, like the one in the SmartHash paper, would have also been fine). As it is, statements such as \\\"Our approach achieves more than 2x the reward of prior non-demonstration SOTA approaches\\\" and \\\"our approach relies on some prior knowledge in the state abstraction function, although we compare against SOTA methods using a similar amount of prior knowledge in our experiments\\\" are quite misleading and unfair to other methods which do not assume access to prior knowledge (the second statement is untrue for the case of DQN-PixelCNN). \\n\\nAnother point which I had not noticed previously is the very high sample complexity (2 billion). One of the motivations behind model-based approaches is that they are supposed to be more sample efficient, but that does not seem to be the case here.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The proposed algorithm outperforms the state of the art algorithms on three very hard games\", \"review\": \"This paper considers reinforcement learning tasks that have high-dimensional space, long-horizon time, sparse-rewards. In this setting, current reinforcement learning algorithms struggle to train agents so that they can achieve high rewards. To address this problem, the authors propose an abstract MDP algorithm. The algorithm consists of three parts: manager, worker, and discoverer. The manager controls the exploration scheduling, the worker updates the policy, and the discoverer purely explores the abstract states. Since there are too many state, the abstract MDP utilize the RAM state as the corresponding abstract state for each situation.\\n\\nThe main strong point of this paper is the experiment section. The proposed algorithm outperforms all previous state of the art algorithms for Montezuma\\u2019s revenge, Pitfall!, and Private eye over a factor of 2. \\n\\nIt is a minor weak point that the algorithm can work only when the abstract state is obtained by the RAM state. In some RL tasks, it is not allowed to access the RAM state. \\n\\n================================\\nI've read all other reviewers' comments and the response from authors, and decreased the score. Although this paper contains interesting idea and results, as other reviewers pointed out, it is very hard to compare with other algorithm. I agree to other reviewers. 
The algorithm assumptions are strong.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
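A minimal sketch of the count-based exploration over abstract states discussed in the record above (the AbstractStateHash-style baseline): maintain visit counts over a hand-coded abstraction and emit a 1/sqrt(n) bonus. Here `abstract_state_fn` is a hypothetical stand-in for the RAM-derived abstraction (e.g. agent coordinates and room id), and the bonus scale `beta` is an illustrative assumption, not the paper's setting:

```python
from collections import defaultdict
import math

class AbstractCountBonus:
    """Count visits to abstract states and emit a 1/sqrt(n) exploration bonus."""

    def __init__(self, abstract_state_fn, beta=0.1):
        self.abstract_state_fn = abstract_state_fn
        self.counts = defaultdict(int)
        self.beta = beta

    def bonus(self, observation):
        # The abstraction must map an observation to something hashable,
        # e.g. (agent_x, agent_y, room) parsed from the RAM state.
        s = self.abstract_state_fn(observation)
        self.counts[s] += 1
        return self.beta / math.sqrt(self.counts[s])
```

During training, the bonus would simply be added to the environment reward for the visited state, e.g. `reward += explorer.bonus(obs)`.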
H1lIzhC9FX
Learning to remember: Dynamic Generative Memory for Continual Learning
[ "Oleksiy Ostapenko", "Mihai Puscas", "Tassilo Klein", "Moin Nabi" ]
Continuously trainable models should be able to learn from a stream of data over an undefined period of time. This becomes even more difficult in a strictly incremental context, where data access to previously seen categories is not possible. To that end, we propose making use of a conditional generative adversarial model where the generator is used as a memory module through neural masking to emulate neural plasticity in the human brain. This memory module is further associated with a dynamic capacity expansion mechanism. Taken together, this method facilitates a resource efficient capacity adaption to accommodate new tasks, while retaining previously attained knowledge. The proposed approach outperforms state-of-the-art algorithms on publicly available datasets, overcoming catastrophic forgetting.
[ "Continual Learning", "Catastrophic Forgetting", "Dynamic Network Expansion" ]
https://openreview.net/pdf?id=H1lIzhC9FX
https://openreview.net/forum?id=H1lIzhC9FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1g6df1XgE", "SJeLjug5R7", "HyxZlDe9A7", "rJgfByXdRQ", "BygjNpyHAm", "Hkg19YkSAm", "rkxwLL6eA7", "S1laMLagAQ", "H1eAyvBa2Q", "rJggAvorhQ", "HkgsUIdM3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544905332773, 1543272605553, 1543272169235, 1543151417679, 1542942002851, 1542941063185, 1542669902595, 1542669845075, 1541392101975, 1540892615841, 1540683346680 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1268/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1268/Authors" ], [ "ICLR.cc/2019/Conference/Paper1268/Authors" ], [ "ICLR.cc/2019/Conference/Paper1268/Authors" ], [ "ICLR.cc/2019/Conference/Paper1268/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1268/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1268/Authors" ], [ "ICLR.cc/2019/Conference/Paper1268/Authors" ], [ "ICLR.cc/2019/Conference/Paper1268/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1268/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1268/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The authors propose to tackle the problem of catastrophic forgetting in continual learning by adopting the generative replay strategy with the generator network as an extendable memory module.\\n\\nWhile acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by AC as critical issues:\\n(1) poor presentation clarity of the manuscript and incremental technical contribution in light of prior work by Serra et al. (2018); (2) rigorous experiments and in-depth analysis of the baseline models in terms of accuracy, number of parameters, memory demand and model complexity would significantly strengthen the evaluation \\u2013 see R1\\u2019s and R3\\u2019s suggestions how to improve; (3) simple strategies such as storing a number of examples and memory replay should not be neglected and evaluated to assess the scope of the contribution. \\nAdditionally R1 raised a concern that preventing the generator from forgetting should be supported by an ablation study on both, the discriminator and the generator, abilities to remember and to forget.\\n\\nR1 and R3 provided very detailed and constructive reviews, as acknowledged by the authors. R2 expressed similar concerns about time/memory comparison of different methods, but his/her brief review did not have a substantial impact on the decision.\\n\\nAC suggests in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Thank you for your feedback.\", \"comment\": \"We updated the paper with the CIFAR results as well as cite the mentioned papers on capacity growth.\", \"considering_the_comparison_to_progressive_networks\": \"Similarly to Progressive Neural Networks [1] and its evolution [2] our method addresses the challenge of knowledge transfer by ensuring the reusability of parameters across the tasks. Our method does it naturally since it only keeps a single network for long and short-term memory with different neurons assigned to different memory types. Using binary masking allows keeping both memory types in a single network without forgetting. 
DGM neither requires keeping a pool of networks (columns) used for previous tasks (as in [1]) nor utilizes separate long and short-term memory networks (as in [2]). \\n\\nOverall, we thank the reviewer again for the constructive feedback, which we will consider in our future work.\\n\\n[1] Progressive Neural Networks, Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell, https://arxiv.org/abs/1606.04671\\n[2] Progress & Compress: A scalable framework for continual learning, Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell, https://arxiv.org/abs/1805.06370\"}", "{\"title\": \"Why \\u201cstrict\\u201d is not only privacy.\", \"comment\": \"We thank the reviewer again for his/her extensive response.\\n\\nWe believe that simply storing real samples of previous classes does not comply with the fundamental vision of how a continually trainable system should work (e.g. compared to natural intelligence). Further, the challenge of scalability in continual learning cannot be addressed by simply storing real samples (at least not in a large-scale context). \\u201cStrictness\\u201d is an increasingly important issue in the literature and has been addressed by other works such as [1,2,3]. We, therefore, stick to the \\u201cstrictness\\u201d requirement and prohibit storing real samples, which naturally leads to using the generative memory. \\n\\nAs opposed to DGR-based approaches [1], DGM replays a 'complete' learned representation of previous tasks - meaning no information is lost due to continuous retraining of the G on samples generated by the previous generator.\\n\\n\\n[1] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2990\\u20132999, 2017.\\n[2] C. Wu, L. Herranz, X. Liu, Y. Wang, J. van de Weijer, and B. Raducanu. Memory Replay GANs: learning to generate images from new categories without forgetting. In Advances In Neural Information Processing Systems, 2018.\\n[3] Seff, Ari, et al. \\\"Continual learning in generative adversarial nets.\\\" arXiv preprint arXiv:1705.08395 (2017).\"}", "{\"title\": \"We thank the reviewer for their work. We address the comments of the reviewer as follows.\", \"comment\": \"1. We first want to point out the main contributions of the paper.\\nFirst, we address the catastrophic forgetting problem in continual learning. Thereby we introduce Dynamic Generative Memory (DGM) - an adversarially trainable generative network endowed with neuronal plasticity through efficient learning of a sparse attention mask for layer activations. Hereby we extend the idea of [2] to generative networks. We highlight the differences to DGR [3] in Sec. 2 of our work. \\n\\n2. Equations (5) and (6) are taken from [2] one-to-one. Equations (3) and (4) are adopted from [2]: equation (3) describes the annealing of the parameter s; we anneal it globally over the course of epochs, whereas [2] anneals it for each epoch over the number of batches; equation (4) is a simplified version of the one used by [2].\\n\\n3. 
To avoid the misunderstanding that the proposed method utilizes techniques of DGR [3] to prevent forgetting in the G, we kindly ask the reviewer to refer to our response (2) to Reviewer 1.\\n\\nIn the proposed work we adopt generative replay not in order to avoid storing previous samples, but in order to prevent forgetting in the discriminator (which is used as the final classification model). Data synthesized by the generator is replayed to the discriminator during the training of the subsequent tasks. There is no replay applied to the generator network. In order to avoid storing previous data, we utilize a parameter-level attention mechanism similar to HAT [2].\\n\\nConcerning the time comparison, there is no reason why our approach should be less time efficient than DGR-based approaches [1, 3], as our method does not require retraining the generator from scratch at each time step.\\n\\n4. Why does our method not outperform joint training on SVHN?\\nUsing generated samples accommodates for better performance than joint training only in the case of tasks of relatively low complexity such as MNIST. Indeed, such a result has been shown in other works, e.g. [1]. As explained in Sec. 5.2, this can be attributed to a potentially higher diversity with a steady quality of the generated samples. Clearly, the performance of the classifier trained on the generated samples highly depends on the complexity of the task and the quality of the generated samples. Thus, this effect cannot be observed in either the SVHN or the CIFAR10 benchmarks.\\n\\n5. Grammar mistakes and typos.\\nThese will be fixed in the updated version of the paper.\\n\\n6. No guarantee to work for any task or scenario.\\nAs pointed out by the reviewer, and as is true for many machine learning methods, there is no guarantee that the proposed method will work for any task or scenario. \\n\\n[1] C. Wu, L. Herranz, X. Liu, Y. Wang, J. van de Weijer, and B. Raducanu. Memory Replay GANs: learning to generate images from new categories without forgetting. In Advances In Neural Information Processing Systems, 2018.\\n[2] J. Serr\\u00e0, D. Sur\\u00eds, M. Miron, and A. Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. CoRR, abs/1801.01423, 2018. URL http://arxiv.org/abs/1801.01423.\\n[3] H. Shin, J. K. Lee, J. Kim, and J. Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2990\\u20132999, 2017.\"}", "{\"title\": \"Experimental methodology\", \"comment\": \"Thanks again for the responses.\\n\\n> Furthermore, using generated samples accommodates for better performance than simply storing instances \\n> only in case of tasks of relatively low complexity such as MNIST. \\n\\n Sure, but 1) this makes it less surprising than it's presented in the paper (there are a large number of papers that essentially use generative models as data augmentation, and you could do this for your joint training methods as well) and 2) I was saying that I'm surprised that methods such as iCARL or simply replaying a small number of examples wouldn't do well on these tasks.\\n\\n> The CIFAR results will be provided in the Tab. 1 alongside with other datasets in the next version. \\n\\n If you have these available, can you post them on openreview?\\n\\n In terms of experimental methodology, I don't believe growing the generator is fair, or at least it brings in other competitors that do the same. 
Specifically, replay methods that use real samples typically reduce the number of samples per task as the number of tasks grows, in order to keep the memory constant. Here, while you are not growing the discriminator, you are still growing the amount of memory you use. Again, a simple baseline would be to take the same amount of memory your method uses (including expansion) and replay those examples during training. In all of these comparisons, a table is needed that shows exactly how much memory is used for all of the baselines/competitors that use replay, and how much memory your method uses. Note the other reviewer asked for the same, and included time complexity as well.\\n\\n Further, if you are going to use capacity expansion there are a number of methods that aren't cited in your work, including progressive networks [1,2], the latter of which uses distillation as a mechanism to avoid large-scale growth in the networks. \\n\\n This paper and its results do have promise, but given that it essentially uses HAT for the generative model, this is largely an empirical paper. As such, there should be precise experiments that make it much easier to discern the advantage of the method over both the state of the art and much simpler baselines. \\n\\n[1] Progressive Neural Networks, Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell, https://arxiv.org/abs/1606.04671\\n\\n[2] Progress & Compress: A scalable framework for continual learning, Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell, https://arxiv.org/abs/1805.06370\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"I appreciate the authors' detailed response. \\n\\n In terms of contribution, it would be great to make this much more clear in the revision. However, you seem to agree that essentially it is to extend HAT to the generative network in addition to adding (a very simple) capacity expansion, neither of which adds a great amount of novelty or advances our understanding of continual learning. For example, as far as I can tell most of the technical description in section 4 is for HAT.\\n\\n I also still do not understand why the method is claimed to be in such stark contrast to the previous work. When you say that your method only loses information through \\\"natural forgetting\\\" what does this mean precisely? What does \\\"'complete' learned representation\\\" mean? These are vague terms and should be precisely defined. To me both of these are again just achieved by using HAT. \\n\\n Finally, I am still not convinced why a \\\"strict\\\" incremental setting rules out the storage of real samples. If storing a number of examples equivalent to the amount of memory used by your method achieves better performance (of course, equalizing for the capacity growth you have), why is that an issue? The only reason I agree with might be privacy, but that can be addressed through other privacy methods (e.g. random perturbation). \\n\\n Overall, I think generative models are great and can be useful for addressing catastrophic forgetting, but such methods have other limitations (complexity of training, etc.) and should be compared to simpler replay baselines. Further, just the idea of using HAT for a generative model plus capacity expansion seems to me not a significant enough contribution. 
Given the additional concerns about methodology (see next comment), I do not believe these comments sufficiently address all of the concerns to warrant acceptance.\"}", "{\"title\": \"Response to Reviewer #1 (part 1)\", \"comment\": \"We thank the reviewer for their constructive comments. We address them as follows.\\n\\n1. We first would like to point out the contributions of our work.\\n\\nFirst, we address the catastrophic forgetting problem in continual learning. Thereby we introduce Dynamic Generative Memory (DGM) - an adversarially trainable generative network endowed with neuronal plasticity through efficient learning of sparse attention mask for layer activations. Hereby we extend the idea of HAT[2] to generative networks. \\n\\nSecondly, we address the scalability problem in continual learning. To ensure sufficient model capacity to accommodate for new tasks, we propose an adaptive network expansion mechanism in which newly added capacity is derived from the learnable neuron masks.\\n\\n\\n2. We further we would like to clarify a possible confusion of the proposed method to be a combination of Deep Generative Replay (DGR)[6] and HAT[2].\\n\\nAs pointed out in the Sec. 2 of our work, Deep Generative Replay (DGR) tries to prevent forgetting in the generator by retraining it from scratch every time a new data chunk becomes available. Thus, in DGR the generator would lose information at each replay step since the quality of generated samples highly depends on the quality of samples generated by the prior generator causing \\\"semantic drift\\\". This contrasts our method, which effectively retains the knowledge in the generator using HAT like neuron masking and only loses information through \\u201cnatural\\u201d forgetting. This allows us to use \\u201ccomplete\\u201d learned representation during learning and inference of the subsequent tasks as well as speed up the training (no replay of G is involved).\\n\\n3. We are not simply shifting the forgetting problem into G. \\n\\nOur work tackles the problem of class incremental learning. As opposed to task-incremental setup and shown in previous work, e.g. [3,4,5], models in class incremental setup (with single-head architecture) require a replay of previously seen categories when learning new ones. The reason for using G is not having access to samples of previous classes in the \\u201cstrict\\u201d incremental setup and using generated samples instead. As pointed out in our work, restricting storage of real samples represents a more realistic setup, since in real-world applications such an \\u201cepisodic memory\\u201d with real samples is often impossible due to memory and privacy restrictions.\"}", "{\"title\": \"Response to Reviewer #1 (part 2)\", \"comment\": \"4. Our approach has 2 important hyperparameters: scaling parameter s used for calculating binary mask from the embedding matrix as well as \\u03bb_RU, that controls the size accuracy trade-off (see Sec. 4.1 \\u201cjoint training\\u201d). 
We add a table analyzing the sensitivity of the parameter \\u03bb_RU observing the expected behavior: higher values of \\u03bb_RU lead to a smaller model size, however, reduced G size is positively correlated with the final classification performance of D (smaller G -> lower accuracy of D).\\n+---------+---------+-------+\\n| \\u03bb_RU | Acc.5 | Size |\\n+---------+---------+-------+\\n| 2E-06 | 98.16 | 660 |\\n+---------+--------+--------+\\n| 0.002 | 98.22 | 638 |\\n+---------+--------+--------+\\n| 0.2 | 98.02 | 598 |\\n+---------+--------+--------+\\n| 0.75 | 97.36 | 577 |\\n+---------+--------+--------+\\n| 2 | 86.82 | 522 |\\n+---------+--------+--------+\\n\\n5. We use the baseline presented by [1], that tackles identical scenario. To our knowledge [1] provides the state of the art performance in \\\"strict\\\" class incremental setup without using real samples.\\n\\n We consider a joint training (JT, classical training) of the discriminator as the upper performance bound. Joint training features a setup in which the discriminator is trained on ALL real samples of the previous tasks. The reviewer proposes to simulate information loss and use a random subset of real samples to train the upper bound model. However, this would certainly give a worse performance than when using all real samples. We, therefore, think that used JT upper bound is appropriate.\\n\\nFurthermore, using generated samples accommodates for better performance than simply storing instances only in case of tasks of relatively low complexity such as MNIST. Indeed, such a result has been shown in other works, e.g. [1]. As explained in Sec. 5.2, this can be attributed to a potentially higher diversity with steady quality of the generated samples. Clearly, the performance of the classifier trained on the generated samples highly depends on the complexity of the task and quality of the generated samples. Thus, this effect can be observed neither in the SVHN nor the CIFAR10 benchmarks.\\n\\n6. The CIFAR results will be provided in the Tab. 1 alongside with other datasets in the next version. \\n\\nTo ensure a fair comparison with the benchmark methods that do not use any network expansion strategy for the generator (e.g. [1,6]), we initialize our G to be approximately 50% of the size of the G used in the benchmarks. Also a study on network growth dynamics is provided in Fig. 5 (Sec. 5.3), showcasing a lower network capacity than the worst case scenario. Growing the generator is an essential part of our method that addresses the scalability problem in continual learning, e.g. with always growing amount of data model\\u2019s capacity will be exhausted at a certain point. Noteworthy, the discriminator is not affected by the proposed dynamic network expansion mechanism and features the same architecture as in the benchmark methods.\\n\\nWe believe the comparison to the joint training is fair because DGM only grows the capacity of the generator. In the discriminator, only the last classification layer is expanded with the growing model\\u2019s output space as new classes are added. Thus, for k-th task we compare the accuracy of a discriminator with identical architecture trained on real samples of all k tasks (JT) with one trained on DGM-synthesized samples of k-1 tasks+reals of k-th tasks. Thus DGM\\u2019s discriminator has no advantages over the joint training generator.\\n\\n8. Finally, we will address typos, writing and presentation issues in the updated version of the paper.\\n\\n\\n[1] C. Wu, L. Herranz, X. Liu, Y. 
Wang, J. van de Weijer, and B. Raducanu. Memory Replay GANs: learning to generate images from new categories without forgetting. In Advances in Neural Information Processing Systems, 2018.\\n\\n[2] J. Serr\\u00e0, D. Sur\\u00eds, M. Miron, and A. Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. CoRR, abs/1801.01423, 2018. URL http://arxiv.org/abs/1801.01423.\\n\\n[3] H. Shin, J. K. Lee, J. Kim, and J. Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2990\\u20132999, 2017.\\n\\n[4] S. Rebuffi, A. Kolesnikov, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. CoRR, abs/1611.07725, 2016. URL http://arxiv.org/abs/1611.07725.\\n\\n[5] A. Chaudhry, P. K. Dokania, T. Ajanthan, and P. H. S. Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. CoRR, abs/1801.10112, 2018. URL http://arxiv.org/abs/1801.10112.\\n\\n[6] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2990\\u20132999, 2017.\"}", "{\"title\": \"Interesting combination of previous methods, but contributions are not clear and experiments need more rigor\", \"review\": \"The proposed method tackles class-incremental continual learning, where new categories are incrementally exposed to the network but a classifier across all categories must be learned. The proposed method seems to be essentially a combination of generative replay (e.g. Deep Generative Replay) with AC-GAN as the model and attention (HAT), along with a growing mechanism to support saturating capacity. Quantitative results are shown on MNIST and SVHN while some analysis is provided on CIFAR.\\n\\nPros\\n\\nThe method combines the existing works in a way that makes sense, specifically AC-GAN to support a single generator network with attention-based methods to prevent forgetting in the generator.\\n\\nThe method results in good performance, although see caveats below.\\n\\nAnalysis of the evolution of mask values over time is interesting.\\n\\nCons\\n\\nThe method is very confusingly presented and requires both knowledge of HAT as well as more than one reading to understand. The fact that HAT-like masks are used for a generative replay approach is clear, but the actual mechanism of \\\"growing capacity\\\" is not made clear at all, especially in the beginning of the paper. Further, the contributions are not clear at all, since a large part of the detailed approach/equations relates to the masking, which was taken from previous work. The authors should be clear on the claimed contributions. Is it a combination of DGR and HAT with some capacity expansion?\\n\\nIt is not clear whether pushing the catastrophic forgetting problem into the generator is the best approach. Clearly, replaying data accurately from all tasks will work well, but why is it harder to guard against the generative forgetting problem than the discriminative one?\\n\\nThe approach also seems to add a lot of complexity and heuristics/hyper-parameters. It also adds capacity, and it is not at all made clear whether the comparison is fair since no analysis of the number of parameters is shown.\\n\\nRelatedly, better baselines should be used; for example, if the memory used by the generative model is merely put to storing randomly chosen instances from the tasks, how will the results compare? 
Clearly storing instances bypasses the forgetting problem completely (as memory size approaches the dataset size it turns into the joint problem) and it's not clear how many instances are really needed per task, especially for these simpler problems. As such, I find it surprising that simply storing instances would do as poorly as stated in this paper which says cannot provide enough diversity.\", \"It also seems strange to say that storing instances \\\"violates the strictly incremental setup\\\" while generative models do not. Obviously there is a tradeoff in terms of memory usage, privacy, performance, etc. but since none of these methods currently achieve the best across all of these there is no reason to rule out any of the methods. Otherwise you are just defining the problem in a way that excludes other simple approaches which work.\", \"There are several methodological issues: Why are CIFAR results not shown in a table as is done for the other dataset? How many times were the experiments run and what were the variances? How many parameters are used (since capacity can increase?) It is for example not clear that the comparison to joint training is fair, when stating: \\\"Interestingly, DGM outperforms joint training on the MNIST dataset using the same architecture. This suggests that the strictly incremental training methodology indeed forced the network to learn better generalizations compared to what it would learn given all the data.\\\" Doesn't DGM grow the capacity, and therefore this isn't that surprising? This is true throughout; as stated before it is not clear how many parameters and how much memory these methods need, which makes it impossible to compare.\"], \"some_other_minor_issues_in_the_writing_includes\": \"1) The introduction makes it seem the generative replay is new, without citing approaches such as DGR (which are cited in the related work). The initial narrative mixes prior works' contributions and this paper's contributions; the contributions of the paper itself should be made clear, \\n\\n 2) Using the word \\\"task\\\" in describing \\\"joint training\\\" of the generative, discriminative, and classification networks is very confusing (since \\\"task\\\" is used for the continual learning description too, \\n\\n 3) There is no legend for CIFAR; what do the colors represent?\\n\\n 4) There are several typos/grammar issues e.g. \\\"believed to occurs\\\", \\\"important parameters sections\\\", \\\"capacity that if efficiently allocated\\\", etc.).\\n\\n In summary, the paper presents what seems like an effective strategy for continual learning, by combining some existing methods together, but does not make it precise what the contributions are and the methodology/analysis make it hard to determine if the comparisons are fair or not. More rigorous experiments and analysis is needed to make this a good ICLR paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The expanded generator will also raise the storing problem as that in episodic memory strategy\", \"review\": \"This paper attempts to mitigate catastrophic problem in continual learning. 
Different from the previous works where episodic memory is used, this work adopts the generative replay strategy and improves the work in (Serra et al., 2018) by extending the output neurons of the generative network when facing the significant domain shift between tasks.\\n\\nHere are my detailed comments:\\n\\nCatastrophic forgetting is the most severe problem in continual learning since, when learning more and more new tasks, the classifier will forget what it learned before and will no longer be an effective continual learning model. Considering that episodic memory will cost too much space, this work adopts the generative replay strategy where old representative data are generated by a generative model. Thus, at every time step, the model will receive data from every task so that its performance on old tasks will be retained. However, if the differences between tasks are significant, the generator cannot reserve vacant neurons for new tasks, or in other words, the generator will forget the old information from old tasks when overwritten by information from new tasks. Therefore, this work tries to tackle this problem by extending the output neurons of the generator to keep vacant neurons available to receive new information. As far as I am concerned, this is the main contribution of this work.\\n \\nNevertheless, I think there are some deficiencies in this work.\\n \\nFirst, this paper is not easy to follow. The main reason is that from the narration, I cannot figure out what is the idea or technique of other works and what is the contribution of this paper. For example, in Section 4.1, I am not sure whether equations (3), (4), (5), (6) are contributions of this paper, since a large number of citations appear.\\n \\nSecond, the authors mention that to avoid storing previous data, they adopt generative replay and continuously enlarge the generator to tackle the significant domain shift between tasks. However, in this way, when more and more tasks come, the generator will become larger and larger. The storing problem still exists. Generative replay also brings the time complexity problem since it is time consuming to generate previous data. Thus, I suggest the authors show the space and time comparisons with the baseline methods to show the effectiveness of the proposed method.\\n \\nThird, the datasets used in this paper are rather limited. Three datasets cannot make the experiments convincing. In addition, I observe that in Table 1, the proposed method does not outperform the Joint Training in SVHN with A_10. I hope the authors can explain this phenomenon. Furthermore, I do not see a legend in Figure 3 and thus I cannot figure out what the curves represent.\\n \\nFourth, there are some grammar mistakes and typos. For example, there are two \\\"the\\\" in the end of the third paragraph in Related Work. In the last paragraph in Related Work, \\\"provide\\\" should be \\\"provides\\\". On page 8, the double quotation marks of \\\"short-term\\\" are not correct.\\n \\nFinally, and importantly, though a large number of works have been proposed to try to solve this problem, especially catastrophic forgetting, most of these works are heuristic and lack mathematical proof, and thus have no guarantee on new tasks or scenarios. 
The proposed method is also heuristic and lacks a provable guarantee.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Good work on how to prioritize the use of neurons in memory\", \"review\": \"As a paper on how to prioritize the use of neurons in a memory this is an excellent paper with important results.\\n\\nI am confused by the second part of the paper: an attached GAN of unlimited size. It may start out small but there is nothing to limit its size over increased learning. It seems to me in the end it becomes the dominant structure. You start the abstract with \\\"able to learn from a stream of data over an undefined period of time\\\". I think it would be an improvement if you can move this from an undefined time/memory size to a limited size for the GAN and then see how far that takes you.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
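The neuron-masking mechanism debated in the record above follows HAT (Serra et al., 2018): a per-task embedding passed through a scaled sigmoid gates each layer's activations, and the scale s is annealed so the gate hardens toward a binary mask. A minimal PyTorch sketch under that reading; the "global" annealing the authors describe is simplified to a generic training-progress fraction, and `s_max` and the module names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HATGatedLinear(nn.Module):
    """A linear layer whose outputs are gated by a per-task attention mask."""

    def __init__(self, in_features, out_features, n_tasks, s_max=400.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One real-valued embedding per task; sigmoid(s * e) approaches a
        # binary mask as the scale s grows.
        self.embedding = nn.Embedding(n_tasks, out_features)
        self.s_max = s_max

    def forward(self, x, task_id, progress):
        # progress in [0, 1]: anneal s from ~1/s_max (soft, trainable gate)
        # to s_max (nearly hard, binary gate) over the course of training.
        s = 1.0 / self.s_max + (self.s_max - 1.0 / self.s_max) * progress
        idx = torch.tensor(task_id, device=x.device)
        mask = torch.sigmoid(s * self.embedding(idx))
        return self.linear(x) * mask
```

Gating activations per task rather than duplicating networks is what lets a single generator hold several tasks' "memories" while leaving unmasked units free for later tasks, which is the property the capacity-expansion debate above turns on.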
SyfIfnC5Ym
Improving the Generalization of Adversarial Training with Domain Adaptation
[ "Chuanbiao Song", "Kun He", "Liwei Wang", "John E. Hopcroft" ]
By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models. However, most existing adversarial training approaches are based on a specific type of adversarial attack. It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks. Moreover, during the adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets. This work is mainly focused on adversarial training with the efficient FGSM adversary. In this scenario, it is difficult to train a model with great generalization due to the lack of representative adversarial samples, i.e., the samples are unable to accurately reflect the adversarial domain. To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation (ATDA) method. Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target domain samples. The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain. Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets. To show the transfer ability of our method, we also extend ATDA to adversarial training on iterative attacks such as PGD-Adversarial Training (PAT), and the defense performance is improved considerably.
[ "adversarial training", "domain adaptation", "adversarial example", "deep learning" ]
https://openreview.net/pdf?id=SyfIfnC5Ym
https://openreview.net/forum?id=SyfIfnC5Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxqizaT1E", "BJen5fN40X", "rJgbbg44AQ", "ryl2YCmVR7", "HkeAenmERX", "SJehLY7VAm", "B1ln4ILcaX", "SkgT7hy9a7", "SyeI-MS1TX", "Hygu-dLc37", "BJx8VSaIhQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544569506187, 1542894227826, 1542893561415, 1542893187863, 1542892534134, 1542891859982, 1542247988209, 1542220837065, 1541521917652, 1541199872442, 1540965678468 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1267/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1267/Authors" ], [ "ICLR.cc/2019/Conference/Paper1267/Authors" ], [ "ICLR.cc/2019/Conference/Paper1267/Authors" ], [ "ICLR.cc/2019/Conference/Paper1267/Authors" ], [ "ICLR.cc/2019/Conference/Paper1267/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1267/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1267/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1267/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents an interesting idea for increasing the robustness of adversarial defenses by combining with existing domain adaptation approaches. All reviewers agree that the paper is well written and clearly articulates the approach and contribution.\\n\\nThe main areas of weakness is that the experiments focus on small datasets, namely CiFAR and MNIST. That being said, the algorithm is reasonably ablated on the data explored and the authors provided valuable new experimental evidence during the rebuttal phase and in response to the public comment.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Using domain adaptation for robust adversarial learning\"}", "{\"title\": \"Addressed by extending our ATDA method to the method of Madry et al.\", \"comment\": \"Thanks for your interest in our paper.\\n\\nSince (the noisy) PGD can samples more sufficient adversarial examples in adversarial domain, adversarial training on it yields more robust models than adversarial training on FGSM. However, PGD-Adversarial Training (PAT) [1] is challenging to scale to deep or wide neural networks, as it increases the training time by a factor that is roughly equal to the number of PGD steps.\\n\\n[1] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations(ICLR), 2018.\\n\\nOur work is mainly focused on the adversarial training yet efficient FGSM adversary due to its scalability. The empirical evidence shows that our ATDA method has great generalization ability to various adversaries as compared to SAT (adversarial training on FGSM). \\n\\nTo address your concerns, in subsection 4.5, we extend the ATDA method to PGD-Adversarial Training by combining adversarial training on the noisy PGD with domain adaptation. Thus, we implement an extension of ATDA for PAT, called PATDA. The evaluation results suggest that PATDA has stronger robustness on various attacks as compared to PAT. The results indicate that domain adaptation can be applied flexibly to adversarial training on other adversaries to improve the robustness of the model. This experiment exhibits a good transfer ability of our domain adaptation method. 
Thank you for the nice suggestion.\"}", "{\"title\": \"More experiments and analysis are added. Thank you for your helpful review.\", \"comment\": \"We deeply appreciate your positive comments. In the revision, we reorganized the content of the Experimental Section, improved the writing quality and organization, and add more quantitative analysis to strengthen our work.\\n\\n1) We add subsection 4.3 and evaluate the robustness of defenses in terms of the local loss sensitivity to perturbations and the shift of adversarial data distribution with respect to the clean data distribution. ATDA performs the best in terms of the two metrics. \\n\\n2) We add subsection 4.4 and conduct a series of ablation experiments to tease out the benefit of each of the various terms added to the loss functions. Both UDA and SDA can improve the generalization of SAT on various attacks. By combining UDA and SDA together with SAT, the final ATDA can exhibits stable improvements on the standard adversarial training.\\n\\n3) We add subsection 4.5 and verify the scalability and flexibility of ATDA by combining adversarial training on the noisy PGD with domain adaptation. The results indicate domain adaptation can be applied flexibly to adversarial training on other adversaries to improve the robustness of the model.\\n\\nWe have tried to address all concerns of the reviewers and changed the abstract, introduction and conclusion accordingly. We have also revised the paper to address the minor typos. We hope our effort can earn your support and convince you the quality of our work.\"}", "{\"title\": \"More experiments and analysis are added. Thank you for your helpful review.\", \"comment\": \"We deeply appreciate your positive comments, as well as the thorough and constructive suggestions. Towards your suggestions on further experimental analysis and the writing quality of experimental section, we have performed plenty of revisions and improvements on the experimental section.\\n\\nFor the large dataset of ImageNet, due to the time limit and the resource limit, we are sorry that we could not provide experiments on it, instead, we add three more subsections and provide more quantitative analysis on the current four datasets. We also extend the ATDA method to the PGD-Adversarial Training (PAT) to show how our method can be transferred to iterative attacks.\\n \\n1. In subsection 4.3, we performed additional experiments to measure defense efficiency of ATDA in terms of the local loss sensitivity (defined in [1]) to perturbations and the shift of adversarial data distribution with respect to the clean data distribution. Especially on the quantitative analysis on the embeddings, besides plotting the t-SNE embeddings, we report the detailed MMD distances across domains. The results are shown in Table 3. We observe that: SAT and EAT actually increase the MMD distance across domains of the clean data and the adversarial data as compared with NT. In contrast, PRT and ATDA can learn domain invariance between the clean domain and the adversarial domain. Furthermore, our learned logits representation achieves the best performance on distribution discrepancy. \\n[1] https://arxiv.org/abs/1706.05394 (ICML 2017). \\n\\n2. Following your valuable suggestion, in subsection 4.4, we provide a more thorough analysis to tease out the benefit of each of the various terms added to the loss functions. The results are shown in Figure 3. Thank you.\\n\\n3. 
Although our work is mainly focused on adversarial training with the efficient FGSM adversary, we show the scalability and flexibility of ATDA in subsection 4.5. We consider extending the ATDA method to PGD-Adversarial Training (PAT): adversarial training on the noisy PGD (iterative attack). Meanwhile, we implement an extension of ATDA (called PATDA) by combining adversarial training on the noisy PGD with domain adaptation. As the evaluation in Table 4 shows, PATDA exhibits stronger robustness against various bounded adversaries as compared to PAT.\\n\\n\\nWe thank you again for the valuable feedback and comments, which have improved the manuscript considerably. We have tried to address most of your concerns and changed the abstract, introduction and conclusion accordingly in the revision. We hope our effort can convince you of the quality of our work.\"}", "{\"title\": \"More experiments and analysis are added. Thank you for the helpful review.\", \"comment\": \"We deeply appreciate the reviewer's positive remarks and constructive suggestions. Your suggestion of further experiments is a great idea, and towards this we have performed the following revisions.\\n\\n1. We have reorganized the content of the Experimental Section, clarified the writing and reduced the duplication of expression, so as to leave more space to describe further quantitative analysis on ATDA.\\n\\n2. We added subsection 4.3 and performed additional experiments to measure the defense efficiency of ATDA in terms of the local loss sensitivity (defined in [1]) to perturbations and the shift of adversarial data distribution with respect to the clean data distribution.\\n The results in Table 2 on local loss sensitivity suggest that the adversarial training methods do increase the smoothness of the learned model as compared with normal training, and ATDA performs the best in terms of the sensitivity of the loss function.\\n We also quantify the distribution discrepancy of the clean data and the adversarial data. Besides the illustration of the t-SNE embeddings (Figure 2), we report the detailed MMD distances across domains in Table 3 for quantitative analysis (the lower the better). The results suggest that SAT and EAT actually increase the MMD distance across domains of the clean data and the adversarial data. In contrast, PRT and ATDA can learn domain invariance between the clean domain and the adversarial domain, and ATDA achieves the best performance in terms of the distribution discrepancy.\\n[1] https://arxiv.org/abs/1706.05394 (ICML 2017). \\n\\n3. To tease out the benefit of each of the various terms added to the loss functions, as Reviewer 2 suggested, we study the benefit of the different components in ATDA:\\n\\t1) Standard Adversarial Training (SAT)\\n\\t2) Unsupervised Domain Adaptation (UDA)\\n\\t3) Supervised Domain Adaptation (SDA)\\n We added subsection 4.4 and conducted a series of ablation experiments. The results are shown in Figure 3. We observe that by aligning the covariance matrix and mean vector of the clean and adversarial examples, UDA plays a key role in improving the generalization of SAT on various attacks.\\n\\tIn general, the margin-aware loss in SDA can also improve the defense quality of standard adversarial training, but the effectiveness is not very stable over all datasets. By combining UDA and SDA together with SAT, our final algorithm ATDA can exhibit stable improvements over standard adversarial training. In general, the performance is slightly better than SAT+UDA. \\n\\n4. 
Although our work is mainly focused on adversarial training with the efficient FGSM adversary, we add subsection 4.5 to extend our domain adaptation method to PGD-Adversarial Training (PAT) and obtain an extension of ATDA (called PATDA).\\n We performed experiments to evaluate the robustness of PAT and PATDA. The results suggest that PATDA has stronger robustness on various attacks as compared to PAT. This indicates that domain adaptation can be applied to adversarial training on other adversaries to improve the robustness of the model.\\n\\nWe thank you again for the valuable feedback and comments, which have improved the manuscript considerably. We have addressed all the concerns and changed the abstract, introduction and conclusion accordingly in the revision of the paper. We hope the additional quantitative results can convince you of the quality of our work.\"}", "{\"title\": \"Paper revision 1\", \"comment\": \"Dear Reviewers,\\n\\nWe deeply appreciate all reviewers for the thorough comments and valuable suggestions, which definitely help improve our paper! We would like to briefly summarize our modifications here and leave specific concerns to individual comments to each of you.\\n\\nOur main modifications are as follows:\\n\\n1) For the Experimental Section (section 4), we have reorganized the content to reduce the duplication of expression and further investigate the defense performance of ATDA.\\nA new subsection 4.3 is added to show the defense efficiency of ATDA in terms of the local loss sensitivity (defined in [1]) to perturbations and the shift of adversarial data distribution with respect to the clean data distribution, as suggested by Reviewers 1 and 2;\\n [1] https://arxiv.org/abs/1706.05394 (ICML 2017). \\n\\n2) A new subsection 4.4 is added to show the ablation studies on ATDA so as to tease out the benefit of each of the various terms added to the loss functions, as suggested by Reviewer 2;\\n\\n3) To address an anonymous comment, a new subsection 4.5 is added to show the scalability and flexibility of ATDA and the performance of the extension of adversarial training on the noisy PGD (called PATDA) as compared with the original adversarial training on the noisy PGD (called PAT);\\n\\nWe hope all our effort can make our paper more comprehensive and address your concerns. Thank you very much!\\n\\nBests,\\nAuthors.\"}", "{\"title\": \"Good idea and experimental evidence but lacking in rigor and more empirical analysis\", \"review\": \"The paper casts the problem of learning from adversarial examples, to make models resistant to adversarial perturbations, as a domain adaptation problem. The proposed method, Adversarial Training with Domain Adaptation (ATDA), learns a representation that is invariant to clean and adversarial data, achieving state-of-the-art results on CIFAR.\\n\\nquality - The paper is well written, explanations of the mathematical parts are good, experimental quality can be much better.\\nclarity - the problem motivation as well as the methodology is clearly explained. 
however, the lessons from the experiments are unclear and need more work.\noriginality - The casting of the problem as domain adaptation is original, but from the experiments it was not conclusive as to how much benefit we get. \nsignificance of this work - Current models being sensitive to adversarial perturbations is quite a big problem, so the particular problem the authors are trying to address is very significant.\n\npros\n\nA good idea, and enough experiments that indicate the benefit of casting this as a domain adaptation problem.\n\ncons \n\nI feel the authors should have extended the experiments to ImageNet, which is a much larger dataset, to validate that the findings still hold. I also feel the discussion section and comparison to other methods need to be reworked to be more thorough and to tease out the benefit of each of the various terms added to the loss functions, as currently all we have is final numbers without much explanation and detail. The t-SNE embeddings part is also very qualitative, and while the plots indicate a better separation for ATDA, I feel the authors should do more quantitative analysis on the embeddings instead of just qualitative plots.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A new adversarial training with domain adaptation demonstrating fair performance improvement\", \"review\": \"The authors propose a new adversarial training with domain adaptation method to overcome the weak generalisation problem in adversarial training for adversarial examples from different attacks. The authors treat adversarial training as a domain adaptation task with a limited number of labeled target data. They demonstrate that by combining unsupervised and supervised domain adaptation with adversarial training, the generalisation ability on adversarial examples from various attacks can be improved for efficient defence. The experimental results on several benchmark datasets suggest that\nthe proposed approach achieves significantly better generalisation results in most cases, when compared to current\ncompeting adversarial training methods. The paper is clearly written and well structured. The novelty of the proposed technique is fair, and the originality alike. The results are not very conclusive, therefore I think more experiments are needed and possibly further adjustments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper\", \"review\": \"This paper addresses the generalization of adversarial training by proposing a new domain adaptation method. In order to have a robust defense against adversarial examples, they combine supervised and unsupervised learning for domain adaptation. The idea of the domain adaptation is to increase the similarity between clean and adversarial examples. For this purpose, in their objective, they minimize the domain shift by aligning the covariance matrices and mean vectors of the clean and adversarial examples (a minimal sketch of this alignment term is given below).\n\nFrom an experimental viewpoint, they have lower performance than almost all competitors on clean data, but they beat them under both white-box and black-box threats. This means their method gives good generalization.
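For readers unfamiliar with this kind of alignment, the term described above amounts roughly to the following. This is my own illustrative PyTorch-style snippet, not the authors' code; `clean_feats` and `adv_feats` stand for batches of features from the clean and adversarial domains:

```python
import torch

def alignment_loss(clean_feats, adv_feats):
    # Match the first- and second-order statistics (mean vector and covariance
    # matrix) of the clean and adversarial feature distributions.
    mu_c, mu_a = clean_feats.mean(dim=0), adv_feats.mean(dim=0)
    cc, ca = clean_feats - mu_c, adv_feats - mu_a
    cov_c = cc.t() @ cc / (clean_feats.size(0) - 1)
    cov_a = ca.t() @ ca / (adv_feats.size(0) - 1)
    mean_term = (mu_c - mu_a).abs().mean()   # mean (domain shift) alignment
    cov_term = (cov_c - cov_a).abs().mean()  # CORAL-style covariance alignment
    return mean_term + cov_term
```

Returning to the experiments: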
On CIFAR-100 they do not have this trade-off between accuracy and generalization; they beat the other competitors on clean data as well.\n\nThe paper is clear and well-written. The introduction and background give useful information. \n\nIn general, I think the paper has potential for acceptance, but I have to mention that I am not an expert in the adversarial networks area.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
B1gIf305Ym
NSGA-Net: A Multi-Objective Genetic Algorithm for Neural Architecture Search
[ "Zhichao Lu", "Ian Whalen", "Vishnu Boddeti", "Yashesh Dhebar", "Kalyanmoy Deb", "Erik Goodman", "Wolfgang Banzhaf" ]
This paper introduces NSGA-Net, an evolutionary approach for neural architecture search (NAS). NSGA-Net is designed with three goals in mind: (1) a NAS procedure for multiple, possibly conflicting, objectives, (2) efficient exploration and exploitation of the space of potential neural network architectures, and (3) output of a diverse set of network architectures spanning a trade-off frontier of the objectives in a single run. NSGA-Net is a population-based search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step that is based on prior-knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures and finally an exploitation step that applies the entire history of evaluated neural architectures in the form of a Bayesian Network prior. Experimental results suggest that combining the objectives of minimizing both an error metric and computational complexity, as measured by FLOPS, allows NSGA-Net to find competitive neural architectures near the Pareto front of both objectives on two different tasks, object classification and object alignment. NSGA-Net obtains networks that achieve 3.72% (at 4.5 million FLOP) error on CIFAR-10 classification and 8.64% (at 26.6 million FLOP) error on the CMU-Car alignment task.
[ "neural architecture search", "evolutionary algorithms" ]
https://openreview.net/pdf?id=B1gIf305Ym
https://openreview.net/forum?id=B1gIf305Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkxUHfabeV", "Bklu2VIdyE", "SyeXH4a3RX", "ByxFddn3AX", "H1x7Vdnn0m", "SJxYRA5307", "Hyl0mlAch7", "BklhkyjYnX", "Skx7x8jNsm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544831549784, 1544213679741, 1543455803045, 1543452785418, 1543452715321, 1543446224663, 1541230629628, 1541152484449, 1539778027481 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1266/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1266/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1266/Authors" ], [ "ICLR.cc/2019/Conference/Paper1266/Authors" ], [ "ICLR.cc/2019/Conference/Paper1266/Authors" ], [ "ICLR.cc/2019/Conference/Paper1266/Authors" ], [ "ICLR.cc/2019/Conference/Paper1266/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1266/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1266/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"an explicitly multi-objective approach to neural architecture search\", \"multiple datasets\", \"ablation experiments\"], \"cons\": [\"lack of baselines like hyperparameter search\", \"ill-justified increase in capacity after search\", \"ineffective use of the multiple objectives in assessment\", \"not clearly beating random search baseline\", \"The reviewers adjusted their scores upward after the rebuttal, but serious concerns remain, and the consensus is still to (borderline) reject the paper.\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"Post-rebuttal comments\", \"comment\": \"1. Unfortunately, despite thinking a lot about it, I am unconvinced that the paper currently demonstrates a clear utility of multi-objective architecture search. I think it is clear in theory that multi-objective search for architectures can have a utility. The question is: do we know how to do it right? The answer to this question is not clear from the experiments with FGSM robustness as the secondary objective either, although there are certainly good ideas here that my rating already takes into account.\\n\\nThe FGSM attack is not considered a \\u2018serious\\u2019 attack at all under any threat model. This is known in recent work on adversarial attacks, but I can also point you to a direct comment from the author of FGSM [1].\\nIn fact, I\\u2019d argue that from the current state of adversarial ML research, it is not clear at all whether architecture search would lead to models that are secure under realistic threat models. I\\u2019d be happy to be proven wrong by new results using NSGANet.\\n\\nIt is likely that there are other objectives where the utility of multi-objective architecture search can be more clearly demonstrated, but at present these experiments are unconvincing to me.\\n\\n2. When considering the complexity-performance trade-off, it does not appear correct to change the complexity of the network after optimisation (since the computational complexity and performance do not have a simple relationship). I can only comment on the paper under review, but perhaps this is not a major issue when the search method does not actively seek this trade-off. 
I\u2019m sympathetic that computational requirements can be a hurdle in having the \u2018right\u2019 setup for the method.\n\n[1] https://openreview.net/forum?id=SkgVRiC9Km&noteId=rkxYnt8JpQ\"}", "{\"title\": \"Responses to review comments\", \"comment\": \"1. The primary objective of our paper is to demonstrate the utility of NSGA-based multi-objective NAS, as opposed to outperforming previous methods. Our choice of FLOPS as the second objective does not fully showcase the utility of NSGA-Net. In Appendix A we consider another objective, robustness to adversarial attacks, as our second objective. To the best of our knowledge, hand tuning or automatically tuning networks that are robust to adversarial attacks has not been studied. Our experiments in Appendix A allow us to draw some interesting conclusions, namely: 1) there exist trade-offs between classification accuracy and robustness to adversarial attack, and NSGA-Net is capable of identifying a set of network architectures that provide an efficient trade-off between accuracy and robustness that would not have easily been obtained by hand; 2) \u201cwide\u201d networks (like ResNeXt (Xie et al., 2016) or Inception blocks (Szegedy et al., 2015)) appear to provide good accuracy on standard benchmark images, but are fragile to the FGSM attack. On the other hand, \u201cdeep\u201d networks (akin to ResNet (He et al., 2016a) or VGG (Simonyan & Zisserman, 2015)) are more robust to the FGSM attack, while having lower accuracy; 3) the skip connection provided by the modification we made to the original encoding proposed by Xie & Yuille (2017) appears to be critical in obtaining a network that is robust to adversarial attacks. \n\n2. The extrapolation step in NSGA-Net is adopted from other NAS approaches [Zoph et al., (2017); Real et al., (2018)], and it\u2019s not a contribution of NSGA-Net. Having said that, the only reason we use a lower number of filters during the architecture search is computational tractability. Moreover, in principle, the training process during architecture search cannot be exactly the same as when we train the best network, since only a subset of the training dataset is used during architecture search (to prevent over-fitting to the test set) while the entire training set is used for training the best network.\n\nWe agree with the reviewer that hyper-parameter tuning cannot be fully decoupled from architecture search. However, even without hyper-parameter tuning, NSGA-Net finds network architectures with competitive performance (in terms of both accuracy and complexity) compared to hand-designed and hyper-parameter-tuned networks (ResNet and DenseNet). \n\n3. Tuning hyperparameters (such as learning rates, regularization parameters, etc.) cannot always address complex objectives, such as robustness to adversarial attacks, as shown in Appendix A. Furthermore, hyperparameter search is its own separate field of research and is complementary to the methods proposed here. Ideas from both fields could be combined, though hyperparameter tuning is not the goal of this study.\n\n4. The SVHN and MNIST datasets were only used for the ablation study on the effect of crossover. We did not run full experiments on those datasets.\", \"references\": \"Zoph, B., Vasudevan, V., Shlens, J. and Le, Q.V., 2017. Learning transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2(6).\n\nReal, E., Aggarwal, A., Huang, Y. and Le, Q.V., 2018. Regularized evolution for image classifier architecture search.
arXiv preprint arXiv:1802.01548.\"}", "{\"title\": \"Responses to review comments\", \"comment\": \"\", \"small_concerns\": \"1. The choice of FLOPS as the second objective for our experiments perhaps does not showcase the full utility of multi-objective NAS, since the \u201cFlop-objective is cheap to compute and does not require simulation, one could expect to tune this offline before initialization\u201d.\n\nFor a different choice of the second objective, \u201ca better initialization\u201d, as suggested by the reviewer, may not be feasible. For example, consider robustness against adversarial attack as the second objective. \u201cA better initialization\u201d is not readily apparent in this case. An example of NSGA-Net applied to find neural architectures for classification accuracy (first objective) and robustness against adversarial attack (second objective) is provided in Section A in the Appendix.\n\n2. We agree with the reviewer that noisy evaluations could potentially affect elitist dominance schemes. However, all NAS methods potentially suffer from the same problem.\n\n3. We ran the ablation study suggested by the reviewer, where we reduced the population size from 40 to 4 and ran for 300 generations instead of 30. The empirical result, shown in Figure 14 (a) in Appendix B, suggests that running with a low population size for longer generations reaches reasonable performance but is still worse than evolving a large population for fewer generations. The outcome of this approach is, however, critically dependent on the initialization. For other tasks, such as robustness to adversarial attacks, such an initialization is not readily apparent.\n\n4. We revised the language to reflect the reviewer\u2019s suggestions.\"}", "{\"title\": \"Responses to review comments\", \"comment\": \"\", \"main_concerns\": \"1. The reviewer\u2019s criticism of the use of CD, as opposed to HV-contribution, is incorrect in the following respects; more details and examples are provided in Appendix C of the revised submission. \n\nThe reviewer claims that using CD for selecting offspring results in a non-monotonic increase of the HV metric over iterations. We argue the same is true when the HV-contribution is used for selecting offspring, as suggested by the reviewer. Finding a subset of solutions (greedily) from the non-dominated set is a combinatorial problem (N choose k), and in general, selection based on each solution\u2019s HV-contribution is not the optimal solution to this problem. In Appendix C, we provide an example to show that a subset selected using CD can have a higher HV metric value than a subset selected using HV-contribution. \nAdditionally, when performance needs to be assessed under multiple competing objectives, there is no single metric, including HV, that perfectly characterizes both the convergence and the diversity of the obtained solution set. Researchers have shown that algorithms solely driven by HV metric maximization can result in parts of the Pareto front being completely excluded [Ishibuchi et al. (2018)].
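To make the object of this discussion concrete, the crowding-distance computation that drives our survivor selection can be sketched as follows. This is a minimal NumPy illustration in the spirit of NSGA-II, not our exact implementation; `F` is assumed to hold the objective values of a single non-dominated front:

```python
import numpy as np

def crowding_distance(F):
    # F: (N, M) array; row i holds the M objective values of solution i
    # within a single non-dominated front.
    N, M = F.shape
    d = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        d[order[0]] = d[order[-1]] = np.inf  # boundary points are always kept
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            continue
        # interior points accumulate the normalized gap between their neighbours
        d[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return d  # survivors are taken greedily by descending distance
```

Selection by descending distance favors evenly spread solutions, which is exactly the diversity property discussed next.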
Appendix C also provides an example showing that, given a set of non-dominated points, a subset with a higher HV metric value can be less evenly distributed than another subset with a lower HV metric value.\n\nMoreover, fundamentally, the goal of multi-objective optimization is NOT to improve the HV metric (or any other performance metric), but rather to obtain a set of converged, diverse and evenly distributed solutions along the Pareto front.\n\n2. The BN is an effective method to identify the relationships between the phases. Examples are provided in these Bayesian Optimization Algorithm (BOA) papers [Pelikan et al. (1999), Pelikan et al. (2005)]. In the context of network architectures, we believe that the optimal phase (computational block) structure at later phases depends on the structure of the previous phases.\n\nThe genetic operations in the exploration stage aim to find good computational block structures that preserve efficient trade-offs between objectives by inheriting common substructures between parents via crossover and injecting diversity via mutation. After evaluating a number of computational blocks (by the end of exploration), we now want to search for better ways to connect these blocks to form the entire network structure. The BN helps exploit the relations between promising combinations of the blocks.\n\nAs suggested by the reviewer, we ran an experiment that compares NSGA-Net with only genetic operations and NSGA-Net with both genetic operations and the BN. In both setups, we use the same total search budget. The median hypervolumes from three runs of each setting are provided in Figure 14(b) in Appendix B. The empirical results show that NSGA-Net is able to achieve a better hypervolume metric value with the BN. \n\n3. The \u201csmall scale\u201d crossover experiment in the paper uses a population of size 20 and 5 iterations as the total search budget.\n\nWe have revised our ablation study, and more details are provided to address the concerns raised by the reviewer. We implemented and evaluated all the methods suggested by the reviewer: i) mutation alone with a higher probability of mutation, and ii) allowing up to two mutations with no crossover. The empirical results in Appendix B (Figure 14(a)) clearly show that mutation alone is worse than crossover followed by low-probability mutation for our NAS setup. \n\nConceptually, we argue that crossover followed by low-probability mutation is better than mutation alone with a large mutation probability, because 1) with a large mutation probability, the search method will tend to behave similarly to random search, as a large portion of the variables are randomly perturbed at every iteration; and 2) mutation has no respect for variable linkages; in the context of neural architectures, crossover is capable of exchanging sub-structures between two architectures while keeping those sub-structures locally unchanged.\n\nWith an unlimited search budget, both with and without crossover will likely produce similar results. However, with a limited search budget, our experiments suggest that using crossover yields better performance than not using crossover. Figure 12(a) in Appendix B shows the results of our experiment.\", \"references\": \"Ishibuchi, H., Imada, R., Setoguchi, Y., and Nojima, Y. How to specify a reference point in hypervolume calculation for fair performance comparison. Evol. Comput. 2018.\n\nPelikan, M., Goldberg, D.E. and Cantu-Paz, E., 1999, July. BOA: The Bayesian optimization algorithm.
In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation.\n\nPelikan, M., 2005. Hierarchical Bayesian optimization algorithm. In Hierarchical Bayesian Optimization Algorithm (pp. 105-129). Springer, Berlin, Heidelberg.\"}", "{\"title\": \"Responses to review comments\", \"comment\": \"Below we address each of the concerns raised by the reviewer.\n\n- The contribution of our paper is that we present an EA-based neural architecture search framework, named NSGA-Net, that is designed to find a competitive neural network architecture under a single-criterion requirement, or a set of neural network architectures representative of the trade-offs between multiple competing criteria. We argue that NSGA-Net is flexible and effective.\", \"the_flexibility_of_nsga_net_can_be_visualized_from_the_following_aspects\": \"1) NSGA-Net is capable of handling different tasks. \n (a) An example of NSGA-Net applied to find neural architectures for CIFAR-10 image \n classification is provided in Figure 8 and Table 1. \n (b) An example of NSGA-Net applied to find neural architectures for the object alignment task \n (CMU-Cars dataset) is provided in Section G.1 in the Appendix. \n\n2) NSGA-Net is capable of handling multiple objectives. \n (a) An example of NSGA-Net applied to find neural architectures for classification accuracy and \n network complexity is provided in Figure 8 and Table 1. \n (b) An example of NSGA-Net applied to find neural architectures for classification accuracy and \n robustness against adversarial attack is provided in Section A in the Appendix.\n\n3) NSGA-Net is capable of handling different datasets. \n (a) Examples of NSGA-Net applied to MNIST, SVHN and CIFAR-10 are provided in Figure 9 (b) as \n an ablation study. Due to space constraints, we are only able to show the hypervolume \n comparison in the paper.\", \"the_effectiveness_of_nsga_net_can_be_visualized_from_the_following_aspects\": \"1) NSGA-Net finds network architectures with similar performance (in terms of both classification accuracy and network complexity) to both hand-crafted network architectures (like ResNet, DenseNet) and search-assisted network architectures (like NASNet). See Figure 8 for details. \n2) The overall search expense of NSGA-Net is significantly lower than that of other RL-based or EA-based neural architecture search methods. See Table 2 and Table 4 in the Appendix for details. \n\n\n- FLOPs is a more comprehensive estimator of a network\u2019s computational complexity; it is a composite measure of the number of parameters, the number of connections among the nodes and layers, and the operations taking place inside each node. For example, a DenseNet has a higher computational complexity than a ResNet of similar depth (and number of parameters and layers), which can be captured by FLOPs but not by the number of parameters, as the dense connections accumulate hardly any extra parameters but require significantly more FLOPs.\n\n\n- The detailed procedure of our Bayesian network is as follows:\", \"prerequisites\": \"i) our network architectures consist of three phases (computational blocks) and pooling operations in-between phases, and the phases are not repeated;
ii) we have collected an archive of evaluated network architectures from the previous \u201cexploration\u201d step.\", \"step_1\": \"From the archive of all previously evaluated network architectures, we select the top 50% of the networks by non-dominated sorting followed by crowding-distance on both classification error and computational complexity.\", \"step_2\": \"We sample the \u201cphase1\u201d block from the empirical frequency distribution of \u201cphase1\u201d blocks. This occurrence frequency is estimated from the top 50% of networks in the archive.\", \"step_3\": \"We sample the \u201cphase2\u201d block from the empirical frequency distribution of \u201cphase2\u201d blocks, conditioned on the \u201cphase1\u201d block sampled in the previous step. This occurrence frequency is also estimated from the same top 50% of networks in the archive.\", \"step_4\": \"Similarly, we sample the \u201cphase3\u201d block from the empirical frequency distribution of \u201cphase3\u201d blocks, conditioned on the \u201cphase2\u201d block sampled in the previous step. This occurrence frequency is also estimated from the same top 50% of networks in the archive.\", \"step_5\": \"We then form our new network architecture by connecting \u201cphase1\u201d, \u201cphase2\u201d, and \u201cphase3\u201d with pooling operations.\n\n\n- This was already included in Appendix D of the submission.\"}", "{\"title\": \"A combination of architecture search ideas\", \"review\": \"This paper proposes a search method for neural network architectures such that two (potentially) conflicting objectives, maximization of final performance and minimization of computational complexity, can be pursued simultaneously. The motivation for the approach is that a principled multiobjective search procedure (NSGA-II) makes it unnecessary to manually find the right trade-off between the two objectives, and simultaneously finds several solutions spanning the tradeoff. It is also capable of finding solutions from the concave regions of the Pareto front. Multiobjective search for architectures has been explored in recent work, so the primary contribution of this paper is to show its utility in a more general and perhaps more powerful setting.\n\nThe paper is clearly written and is easy to understand, except that the parenthetical citations used appear to differ from the ICLR style and cause confusion. The authors delve into the details of the approach, though many aspects are from past work. I think that this makes the paper more self-contained and easy to understand, even if it makes the paper longer than the suggested length of 8 pages. I also found the comparisons and ablations shown in Figures 8 and 9 to be useful and informative. \n\nHowever, based on the presented results on the CIFAR-10 dataset (which can be compared to past work), I am not convinced of the utility of multiobjective optimisation for architecture search. There are a few reasons for this:\n\n1. The best architectures found by previous methods in the literature are already at a similar or better accuracy. It appears that NSGA-Net did not succeed in finding architectures that a) outperform past results with higher FLOPs, or b) match past results with fewer FLOPs. I understand that in principle, a benefit of NSGA-Net is that other solutions with lower accuracy and fewer FLOPs are also found simultaneously, but these models are not discussed or analysed much in detail. What precisely is the utility of the proposed method then?
This consideration is also complicated by the next point.\n\n2. For the evaluation in the paper, the network with the lowest accuracy is extrapolated: the number of filters in each layer is increased and the network is retrained. Is this procedure justified in general? How does one know the best increasing factor? \nSince lowering the computational cost is an objective of the search, changing the cost of an obtained solution does not seem principled. Moreover, changing network sizes will affect any ordering of networks by accuracy, since optimal hyperparameters for both optimization and regularization may change. In general, it is rather difficult to decouple hyperparameter search from architecture search.\n\n3. A baseline that is missing in the paper is hyperparameter search, which can often yield very good performance for a given architecture. Tuning regularization in particular is often crucial. Since NSGA-Net trains 1200 networks, a comparable search would consider a known architecture, e.g., DenseNet, and allocate 200 trials each to 6 architectures of different FLOPS (or 100 each to 12 architectures). How effective is this simple procedure at obtaining a good tradeoff front?\n\nDue to these concerns, I am presently unconvinced by the results in this paper, though I think that in general multiobjective optimization of architectures should be a fruitful direction.\", \"minor_question\": \"Figure 9(b) indicates that experiments were also conducted on the SVHN and MNIST datasets. Why are these results not reported?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The proposed method is an interesting and promising approach, but the contribution is not clear.\", \"review\": [\"Summary\", \"This paper proposes an evolutionary method for multi-objective neural architecture search, where the proposed method aims at minimizing two objectives: an error metric and the number of FLOPS. The proposed method consists of an exploration step and an exploitation step. In the exploration step, architectures are sampled by using genetic operators such as crossover and mutation. In the exploitation step, architectures are generated by a Bayesian Network. The proposed method is evaluated on object classification and object alignment tasks.\", \"Pros\", \"The performance of the proposed method is better than that of the existing multi-objective architecture search methods in the object classification task.\", \"The effect of each proposed technique is appropriately evaluated.\", \"Cons\", \"The contribution of the proposed method is not clear to me. The proposed method is compared with the existing multi-objective methods in terms of classification accuracy, but if we focus on that point, the performance (i.e., error rate and FLOPs) of the proposed method is almost the same as that of random search, judging from Table 4. It would be better to compare the proposed method to the existing multi-objective methods in terms of classification accuracy and other objectives.\", \"This paper argues that the choice of the number of parameters is sub-optimal and ineffective in terms of computational complexity. Please provide more details about this point.
For example, what are the drawbacks of the number of parameters, and what are the advantages of FLOPs for multi-objective optimization?\", \"Please elaborate on the procedure and settings of the Bayesian network used in this paper.\", \"It would be better to provide discussions of recent neural architecture search methods solving the single-objective problem.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Poorly justified approach\", \"review\": \"After rebuttal, I adapted the score. See below for original review.\n--------------------------------------------\n\n\nThe authors implement a two-stage multi-objective optimization scheme to optimize neural network architectures with several conflicting goals.\nI cannot accept the paper in its current form.\n\nIn short, I have the following main criticisms:\n1. Use of crowding distance (CD) instead of hypervolume-contribution.\nCD is not consistent with the HV estimator; in particular, CD might remove solutions that have a large HV-contribution, and thus HV will not increase monotonically. The effect is even visible in Figure 8c), as in iteration 22 HV is decreasing because crowding distance removes a good offspring. In short: crowding distance should not be used as long as the number of objectives does not prohibit computing the HV-contribution.\n\n2. No good justification of the BN. It is unclear to me why the BN should be used instead of more iterations at stage 1. In 4.4 the BN is only compared to the uniform initialization, but this comparison has no meaning given that we already have an optimized front that improved on the uniformly sampled distribution. To be honest, the samples shown from the BN do not look very convincing, as a lot of very poor architectures are created.\n\nA proper comparison would be comparing the 2-step approach with only the first step under the same budget. Then we could compare samples from both distributions (either sampling from the front using mutation/crossover or sampling from the BN). We would also have a fair comparison of the obtained fronts and HV-values.\n\n3. Ablation study on cross-over.\nI am not convinced by the results presented. The paper says this is a \"small scale\" study but does not give the number of iterations/samples. It is clear that in the setup of the mutation operator cross-over might help, simply because it can change many more connections in a single iteration than mutation alone, which is limited to at most 1 change. Allowing up to two mutations and no crossover could already prove to be better (or a smaller offspring population size, see below).\", \"smaller_concerns\": \"1. The results suggest that the uniform distribution might not be tuned well, as it only covers the \"expensive\" networks but not the \"cheap\" networks. A better initialization scheme that covers the x-axis better might already show vastly different results. As the Flop-objective is cheap to compute and does not require simulation, one could expect to tune this offline before initialization.\n\n2. No handling of noise.\nDuring optimization, the chosen starting point and the SGD algorithm will introduce noise into the process. Thus, the final test accuracy will be noisy. As an elitist dominance scheme is used, one might easily end up with an architecture that has a large variance when trained, i.e. when performing a final training pass on the full dataset, the performance might be very different.
Moreover, the algorithm might stop converging towards the true Pareto front as it is held back by noisy \"good\" results. This should be discussed in the paper.\n\n3. A single-offspring approach might be better than sampling a full population (or an offspring size on the order of the number of parallel instances one can afford to run). 40 sounds excessive given that the sampling distribution is only improved through selection and given that the Pareto front approximation appears to include fewer than 40 elements. This might also affect the results in the ablation study for cross-over: more iterations with a reduced offspring size allow for more mutations of successful offspring.\n\n4. Some unclear or wrong wordings:\", \"page_4\": \"\"As a consequence[...] the best solution encountered [...] will always be present in the final population. \" What do you consider \"best\" in a 2-objective problem? Do you mean: the best in each objective?\npage 6, footnote 1: this is not true. Even without crossover the selection operator ties the solutions together; an offspring has to beat any point in the population, not necessarily its direct parent.\n\n5. Figure 8a) does not include the state-of-the-art result for CIFAR-10, see for example\", \"http\": \"//rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SkeUG30cFQ
The Expressive Power of Deep Neural Networks with Circulant Matrices
[ "Alexandre Araujo", "Benjamin Negrevergne", "Yann Chevaleyre", "Jamal Atif" ]
Recent results from linear algebra stating that any matrix can be decomposed into products of diagonal and circulant matrices have led to the design of compact deep neural network architectures that perform well in practice. In this paper, we bridge the gap between these good empirical results and the theoretical approximation capabilities of deep diagonal-circulant ReLU networks. More precisely, we first demonstrate that deep diagonal-circulant ReLU networks of bounded width and small depth can approximate a deep ReLU network in which the dense matrices are of low rank. Based on this result, we provide new bounds on the expressive power and universal approximation capabilities of this type of network. We support our theoretical results with thorough experiments on a large, real-world video classification problem.
[ "deep learning", "circulant matrices", "universal approximation" ]
https://openreview.net/pdf?id=SkeUG30cFQ
https://openreview.net/forum?id=SkeUG30cFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Sygd0qmJeN", "Hygqlw3G07", "Bkeezu4n6X", "B1xQrvG_pX", "S1leyKeua7", "r1l27pBshX", "B1gyINsFhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1544661711552, 1542797042181, 1542371336158, 1542100794561, 1542093016123, 1541262627632, 1541153862854 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1265/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1265/Authors" ], [ "ICLR.cc/2019/Conference/Paper1265/Authors" ], [ "ICLR.cc/2019/Conference/Paper1265/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1265/Authors" ], [ "ICLR.cc/2019/Conference/Paper1265/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1265/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper conveys interesting study but the reviewers expressed concerns regarding the difference of this work compared to existing approaches and pointed a room for more thorough empirical evaluation.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}", "{\"title\": \"justification for using diagonal-circulant networks as compact representations.\", \"comment\": \"We would like to answer your comment \\\"I do not see how the result can be seen as a justification for using diagonal-circulant networks as compact representations\\\" in more precise terms.\\n\\nThere are plenty of matrix decomposition methods aiming at compact representation. For e.g., Givens-matrix decompositions, DCT-diagonal products, product of Toeplitz matrices, Diagonal-Hadamard products, low rank decomposition, etc..\\n\\nUp to know, there was no theoretical argument supporting the choice of one of these compact representations.\\n\\nWe argue that the most basic requirement for a compact representation is that it must allow to compactly represent low-rank matrices. Products of Circulant-Diagonal matrices do satisfy this requirement. It is unknown for the others decompositions, and we conjecture that it does not hold for Toeplitz.\\n\\nMoreover, we argue that circulant-diagonal products are MORE EXPRESSIVE than low-rank decomposition, because:\\n- Any rank-k matrix (represented by 2nk parameters) can be represented by a circulant-diagonal products involving O(nk) parameters\\n- The converse is not true: there exists full-rank circulant matrices, so these circulant matrices (represented by n parameters) are rank-n matrices requiring n^2 parameters\"}", "{\"title\": \"Expressivity of circulant matrices\", \"comment\": \"Thanks for the review, this is an interesting comment and will address it in the next revision of the paper.\\n\\nThe circulant diagonal decomposition is in fact more general than the low rank matrix factorization in the sense that any low rank matrix can be represented using a circulant matrix, but converse is not necessarily true.\\nIt is true that the circulant diagonal generally requires more parameters, however it only requires linearly more parameters. This explain the circulant diagonal decomposition generally performs better in practice as demonstrated in [1], (table 1).\\n\\nAnother important difference is the computational complexity of the matrix-vector multiplication. With low rank decomposition the matrix vector multiplication O(nk^2) wheras with a circulant diagonal decomposition it is O(k n log(n))\\n\\n\\n[1] Moczulski, Marcin, et al. 
\\\"ACDC: A structured efficient linear layer.\\\" ICLR (2016).\"}", "{\"title\": \"Claims not sufficiently justified\", \"review\": \"The experiments in the paper are similar to those explored in previous work! The main contribution claimed in the paper is the theoretical formulation for compact design of neural networks using circulant matrices instead of fully connected matrices.\\n\\nI do not think the claim is sufficiently justified by the theoretical results provided. \\n\\nEarlier result already shows how any matrix fully connected matrices can be approximated by 2n-1 circulant matrices. As the authors themselves point out, this theoretical result does not necessarily imply reduction in number of parameters since the for a depth l network, the equivalent diagonal-circulant-ReLU network will now require (2n-1)l depth, or 2n(2n-1)l parameters. \\n\\nThe main results (Proposition 3, 4) show that if the fully connected networks of depth l network are parameterized by (approximately) rank k matrices, then the resultant depth of diagonal-circulant network required to approximate the original network is (4k+1)l, which results in a total of 8n(4k+1)l parameters. Similar to the case of full rank fully connected networks (proposition 2), this result does not necessarily indicate a compression of number of parameters either. In particular, if fully connected networks are indeed rank k, then we only need nkl parameters parameters to represent the matrix, which is lower than the number of parameters required by the diagonal-circulant network. \\n\\nSo I do not see how the result can be seen as a justification for using diagonal-circulant networks as compact representations.\", \"writing\": \"\", \"theorem_1\": \"The statement about approximability with B_1B_2\\u2026B_{2n-1} is independent of p and S.\", \"proposition_3\": \"The expression for depth should be \\\\sum_{i=1}^l (4k_i+1) \\u2014 sum should go from i=1 to l and there should be no multiplicative factor l\", \"other_non_critical_comments\": \"Multiplication by circulant matrices amounts to circular convolution with full dimensional kernel. In this sense, replacing a fully connected layers by circulant matrices is similar to replacing it with convolutional layers. May be this connection can be explicitly stated in the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"About the impact of our contribution\", \"comment\": \"We would like to thank both reviewers for a valuable feedback and we apologize for the typos and grammatical errors. We have double checked the current version so hopefully there won't be anymore in the next revision. \\n\\nYou (Reviewer 2) have expressed some doubts about the significance of our theoretical contributions. We would like to clarify and emphasize some points hereafter.\\n\\nFirstly a comparison between the decomposition we use and other decompositions such as TensorTrain has already been published in ICLR [2]. As one can see in Table 1 and Figure 4 from [2], the circulant matrix decomposition compares favorably to TensorTrain. For completeness, we will add a comparison between TensorTrain and our decomposition on the YouTube dataset (in Table 2) but this should not change the essence of our contribution.\\n\\nSecondly, our main contribution is of importance: we provide a similar approximation result as [3] did for classical networks, but for circulant networks. 
Specifically, we provide a bound on the approximation error of such a circulant neural network with bounded width and depth. We believe that this result can be of interest to anyone trying to build compact networks using circulant matrices. \n\nMore importantly, without the result we provide in our paper, the good results reported in [1,2] were difficult to explain with the existing theory: [9] states that any linear operator can be decomposed into a product of at least n diagonal and circulant factors (where n can be as big as 1024), but in practice good results have been observed in [1,2] with as few as 1 factor. So in a sense, the situation is analogous to that of neural networks based on fully connected layers *before* the first (celebrated) results on approximation with bounded nets [3,4].\n\nWe also believe that this paper brings results with a larger scope than the specific problem of designing compact neural networks. Circulant matrices deserve particular attention in deep learning because of their strong ties with convolutions: a circulant matrix operator is equivalent to the convolution operator with circular padding (as shown in [5]). This fact makes any contribution to the area of circulant matrices particularly relevant to the field of deep learning, with impacts beyond the problem of designing compact models. \nFor instance, it is currently not known whether convolutional neural networks are universal approximators. Our work proves that a particular type of convolutional neural net is a universal approximator. We believe that this is a strong first result that paves the way to more general results about error bounds in general CNNs. \n\nFinally, regarding the architecture, we chose the Deep Bag-of-Frames (DBoF) and Mixtures of Experts (MoE) architectures since they are state of the art in the computer vision area, as discussed in [6, 7, 8]. \n\n \n[1] Cheng, Yu, et al. \"An exploration of parameter redundancy in deep networks with circulant projections.\" Proceedings of the IEEE International Conference on Computer Vision. 2015.\n\n[2] Moczulski, Marcin, et al. \"ACDC: A structured efficient linear layer.\" ICLR (2016).\n\n[3] Barron, A. R. (1993). \"Universal approximation bounds for superpositions of a sigmoidal function.\" IEEE Transactions on Information Theory, 39(3), 930-945.\n\n[4] Hanin, Boris. \"Universal function approximation by deep neural nets with bounded width and relu activations.\" arXiv preprint arXiv:1708.02691 (2017).\n\n[5] Xiao et al. \"Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks.\" ICML 2018\n\n[6] Abu-El-Haija et al. \"YouTube-8M: A Large-Scale Video Classification Benchmark\", arXiv preprint arXiv:1609.08675\n\n[7] Miech et al. \"Learnable pooling with Context Gating for video classification\", Proc. of the CVPR Workshop on YouTube-8M Large-Scale Video Understanding (2017)\n\n[8] Paul Natsev, \"Context-Gated DBoF Models for YouTube-8M\", https://static.googleusercontent.com/media/research.google.com/fr//youtube8m/workshop2018/natsev.pdf\n\n[9] Huhtanen, M., & Per\u00e4m\u00e4ki, A. (2015). Factoring matrices into the product of circulant and diagonal matrices.
Journal of Fourier Analysis and Applications, 21(5), 1018-1033.\"}", "{\"title\": \"An important contribution.\", \"review\": \"In this paper, the authors prove that bounded-width diagonal-circulant ReLU networks (I will call them DC-ReLU henceforth) are universal approximators (this was shown previously without the bounded width condition). They also show that bounded-width and small-depth DC-ReLUs can approximate deep ReLU nets with low-rank parameter matrices. This explains the observed success of such networks. The authors also provide experiments to demonstrate the compression one can achieve without sacrificing accuracy.\n\nPros: The authors provide strong approximation results that explain the observed success of DC-ReLUs.\n\nCons: Too many grammatical errors (mainly improper pluralization of verbs and punctuation errors), typos and stylistic inconsistencies seriously affect the readability of the paper. The authors should pay more attention to these.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"neither original nor thorough enough [title no longer appropriate after rebuttals]\", \"review\": \"The paper proposes using structured matrices, specifically circulant and diagonal matrices, to speed up computation and reduce memory requirements in NNs. The idea has been previously explored by a number of papers, as described in the introduction and related work. The main contribution of the paper is to do some theoretical analysis, which is interesting but of uncertain impact.\n\nThe experiments compare performance against Deep Bag-of-Frames (DBoF) and Mixtures of Experts (MoE). However, there are other algorithms that are both more competitive and more closely related. I would like to see head-to-head comparisons with tensor-based algorithms such as Novikov et al: https://papers.nips.cc/paper/5787-tensorizing-neural-networks, which achieves huge compression ratios (~200 000x), and other linear-algebra based approaches. \n\nAFTER READING REBUTTAL\nI've increased my score because the authors point out previous work comparing their decomposition and tensor trains (although note that the comparisons in Moczulski are on different networks and thus hard to interpret) and make a reasonable case that their work contributes to improving understanding of why circulant networks are effective.\n\nI strongly agree with the authors when they state: \"We also believe that this paper brings results with a larger scope than the specific problem of designing compact neural networks. Circulant matrices deserve particular attention in deep learning because of their strong ties with convolutions: a circulant matrix operator is equivalent to the convolution operator with circular padding\". I would broaden the topic to structured linear algebra more generally. I hope to someday see a comprehensive investigation of the topic.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SJMBM2RqKQ
Uncertainty-guided Lifelong Learning in Bayesian Networks
[ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ]
Sequential learning of tasks arriving in a continuous stream is a complex problem and becomes more challenging when the model has a fixed capacity. Lifelong learning aims at learning new tasks without forgetting previously learnt ones, as well as freeing up capacity for learning future tasks. We argue that identifying the most influential parameters in a representation learned for one task plays a critical role in deciding \textit{what to remember} for continual learning. Motivated by the statistically-grounded uncertainty defined in Bayesian neural networks, we propose a Bayesian lifelong learning framework, \texttt{BLLL}, that addresses two lifelong learning directions: 1) completely eliminating catastrophic forgetting using weight pruning, where a hard selection mask freezes the most certain parameters (\texttt{BLLL-PRN}), and 2) reducing catastrophic forgetting by adaptively regularizing the learning rates using the parameter uncertainty (\texttt{BLLL-REG}). While \texttt{BLLL-PRN} is by definition a zero-forgetting guaranteed method, \texttt{BLLL-REG}, despite exhibiting some small forgetting, is a task-agnostic lifelong learner, which does not require knowing when a new task arrives. This feature makes \texttt{BLLL-REG} a more convenient candidate for applications such as robotics or on-line learning in which such information is not available. We evaluate our Bayesian learning approaches extensively on diverse object classification datasets in short and long sequences of tasks, and they perform better than or on par with the existing approaches.
[ "lifelong learning", "continual learning", "sequential learning" ]
https://openreview.net/pdf?id=SJMBM2RqKQ
https://openreview.net/forum?id=SJMBM2RqKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJlBJw6bxN", "HkgRrpYcAX", "SylXWhF5CX", "SklZQcuqR7", "B1e-kOu90X", "Syl6wT6qnm", "BJx-HPCY3m", "B1l2REPYh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544832733357, 1543310662335, 1543310330730, 1543305753077, 1543305176977, 1541229924882, 1541166904644, 1541137619914 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1264/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1264/Authors" ], [ "ICLR.cc/2019/Conference/Paper1264/Authors" ], [ "ICLR.cc/2019/Conference/Paper1264/Authors" ], [ "ICLR.cc/2019/Conference/Paper1264/Authors" ], [ "ICLR.cc/2019/Conference/Paper1264/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1264/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1264/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Reviewers are in a consensus and recommended to reject. However, the reviewers did not engage at all with the authors, and did not acknowledge whether their concerns have been answered. I therefore lean to reject, and would recommend the authors to resubmit. Please take reviewers' comments into consideration to improve your submission should you decide to resubmit.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Paper decision\"}", "{\"title\": \"Responses to individual comments from R#3\", \"comment\": \"We would like to thank you again for your time and your feedback. Please see first our meta response to major comments shared between reviewers. Here we refer to separate comments/questions we received from you.\\n\\nWe also found it behaving inconsistent through the new experimental setting we adapted per reviewers' request. Therefore, as an alternative, we used a simple regularization trick on the learning rates instead of directly minimizing the changes on network parameters. Please see our updated draft regarding the altered regularization method. \\n\\nWe have tried to address this feedback provided by all reviewers by replicating the experimental setting used in the literature to be able to make fair comparisons. Specifically asked by R#1, we have a full comparison with 7 other baselines and 8 datasets. Please see the updated version of the paper regarding this matter.\"}", "{\"title\": \"Responses to individual comments from R#2\", \"comment\": \"We would like to thank you again for your time and your feedback. Please see first our meta response to major comments shared between reviewers. Here we refer to separate comments/questions we received from you.\\n\\nWe acknowledge all your concerns regarding our proposed regularization technique. We also found it behaving inconsistent through the new experimental setting we adapted per reviewers' request. Therefore, as an alternative, we used a simple regularization trick on the learning rates instead of directly minimizing the changes on network parameters. Please see our updated draft regarding the altered regularization method.\", \"pruning_percentage\": \"this is a valid concern which we have now addressed in Figure 1 and a subsection under section 6.1. We briefly explain here that in the continual learning regime we do not know how many tasks we yet have to learn. Therefore, we can only decide based on the current performance of our model on a held-out validation set. 
As this sketch suggests, computing validation accuracy as a function of the pruning percentage lets us set a threshold beyond which we do not wish to degrade our performance. Figure 1 shows such a plot for the Split MNIST dataset when learned incrementally as two tasks.\n\nYour concerns regarding dataset size and the number of baselines are now addressed in the updated draft, which includes 7 baselines on 8 short and long sequences of tasks.\n\nThe listed typos and minor issues are now fixed.\"}", "{\"title\": \"Responses to individual comments from R#1\", \"comment\": \"Please see the addressed comments shared between reviewers first. Here we refer to the separate comments/questions we received from you:\n\n(1) We have corrected this in the manuscript (page 2, paragraph 2).\n\n(2) Fixed.\n\n(3) We would like to thank you again for suggesting the mentioned paper by Serr\u00e0, Joan, et al. (2018). Per your request, we have changed our experimental setting in accordance with it and included a full comparison with the datasets provided in [*]. The paper is fully updated with the obtained results.\", \"questions_you_raised\": \"(1) We have modified our regularization approach, as explained in our shared meta response. Instead of minimizing the changes between current parameters and updated values, we scale their learning rate up or down conditioned on how important they are, i.e. how big their STD is. Hence equation 6 no longer exists in the paper. \n\n(2) This is a valid point, and we agree that the memory size overhead was not clearly explained, so let us clarify by explaining how much encoding a mask and writing it to memory will cost us. The overhead memory per parameter in encoding the mask, as well as saving it on disk, is as follows. Assuming we have $n$ tasks to learn using a single network, the total number of required bits to encode an accumulated mask for a parameter is at most $\\log_2{n}$ bits, assuming a parameter was found to be important from task $1$ onward and kept being encoded in the mask. Saving the binary mask for a typical model with $n$ tasks results in a mask size of $1/n^2$ with respect to the initial model size. \n\n\n*** Per your request on improving the text, we have re-written large parts of the text.\", \"addressing_comments_shared_between_reviewers\": \"We thank all the reviewers for their constructive feedback and time. We would like to address some common concerns across all the reviewers first, before going to the individual responses.\n\n1- All reviewers fairly asked for more experiments and baselines, the usage of larger datasets, and deeper analysis. \n \nWe believe this was a valid point, which we have tried to address as much as we can. Upon Reviewer #1\u2019s request we have used the experimental setting introduced in (Serr\u00e0, Joan, et al. 2018) and compared against 7 baselines, including HAT, EWC, PathNet, PNN, LWF, LFL and IMM, on short and long sequences of 8 datasets in total, including Split MNIST, Permuted MNIST, alternating incremental CIFAR-10/100, FaceScrub, NotMNIST, SVHN, TrafficSigns, and FashionMNIST. \nWe have also included reference baselines such as fine-tuning and feature extraction, as well as joint training, using both Bayesian and non-Bayesian networks.\n\nUpon Reviewer #3\u2019s request we have also compared against VCL, as well as GEM and IS, on Permuted MNIST. Due to the extensive evaluations, we fully switched to the tasks provided in Serrà, Joan, et al.
(2018) and abandoned the fine-grained classification datasets we had in the initial version. \\n\\n2. All reviewers had comments and questions regarding the regularization method. While experimenting with the new datasets with our Bayesian approach, we came to realize that the regularization method initially introduced in our paper exhibits inconsistent behavior in overcoming forgetting on different datasets, leading us to believe it is not a promising approach. Instead, we were able to find an alternative, simpler regularization technique that is also easier to comprehend and empirically performs better.\", \"the_change_in_the_regularization_method_is_as_follows\": \"instead of minimizing the change in both the mean and variance of the parameters' distributions, we now control the gradient updates for the means of the distributions based on the predicted uncertainty we have for them. This means that we begin with the usual constant learning rate for all parameters, and as we train for more epochs, we compute sigma (the standard deviation) associated with each mean. We simply used the STD as an indicator of their uncertainty. The more uncertain a parameter is (the higher its STD), the more it should be allowed to be updated in future epochs. Therefore, we use uncertainty (STD) as a scalar to scale up or down the learning rate of all Mu parameters. The intuition behind this is that we wish to minimize any further changes to the means by simply imposing a small learning rate on them while allowing the variances to change. This allows the model to learn more concepts while preserving the critical information obtained in the past. We used this intuitive regularization trick throughout the paper when we were not using pruning.\\nThe key benefit of such an approach is that we do not need to wait for a task to finish to find the most important parameters. We do not even need to know when task switching occurs. By simply modifying our optimizer to adjust the learning rate based on the computed uncertainty, we regularize at every epoch, resulting in a model that is less prone to forgetting. \\n\\nThe paper has been updated with all these changes and the added experiments.\"}", "{\"title\": \"Good motivation, minor contributions in terms of algorithms\", \"review\": \"Motivated by leveraging the uncertainty information in Bayesian learning, the authors propose two algorithms to prevent forgetting: Pruning and Regularization. Experiments on several sequential learning tasks show improved performance.\", \"quality\": \"The description of the related work is comprehensive. The proposed algorithms seem easy to follow.\", \"clarity\": \"Low\\n\\nThe contributions in terms of algorithms are clearly presented. However, the writing can be largely improved.\\n\\n(1) Some claims are improper: I don't think it's accurate to say that most of lifelong learning is non-Bayesian (in the introduction), as EWC is derived from a Bayesian perspective and Variational Continual Learning is a very Bayesian approach.\\n\\n(2) Please proofread the submission:\", \"typos\": \"e.g., \\\"Beysian\\\", \\\"citestochastic methods\\\";\", \"style\": \"x is occasionally not bold, but has the same meaning given the context.\", \"originality\": \"It seems to be the first work that leverages the variance in Bayesian Neural Nets (BNN) to prevent forgetting.
My understanding is that EWC also considers the variance, but in a different way.\", \"significance\": \"It is good to consider variance/uncertainty for lifelong learning, and this should be encouraged.\\nHowever, comparison to representative algorithms or the state of the art is missing from this submission; for example, EWC/IS, or the method in [*]. Is it possible to run the experiments on more standard datasets, such as those in [*]?\\n\\n[*] Overcoming Catastrophic Forgetting with Hard Attention to the Task, ICML 2018\", \"questions\": \"1. In (6), there are three terms on the right side; it seems the 2nd term includes the 3rd term, so why do we need to add the 3rd term again?\\n\\n2. \\\"Once a task is learned, an associated binary mask is saved which will be used at inference to recover key parameters to the desired task. The overhead memory caused by saving the binary mask (less than 20MB for ResNet18), is negligible given the fact it completely eliminates the forgetting\\\"\\n\\nTo me, saving a binary mask means saving a \\\"partial\\\" model. First, this requires saving additional parameters. Second, in the inference stage, one can recover the corresponding best model using the mask; how close is this to cheating? (Perhaps I am not an expert in lifelong learning.) \\nCan you state the model size of ResNet18, so that readers can see that 20MB is small/negligible compared to the full model?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"The paper addresses the problem of lifelong learning of neural networks - a setting where learning is performed on continuously arriving new tasks without access to previously encountered data.\\nThe authors propose a method that prevents the catastrophic forgetting typical of naive application of stochastic gradient descent by preventing supposedly important weights from changing (in either a soft or hard manner), where a weight's importance is assessed by its signal-to-noise ratio estimated from the corresponding (approximate) posterior distribution.\\nThe authors evaluate their approach on a set of image classification datasets and find it superior to the PackNet baseline as well as a few simpler ones.\\n\\nThe idea of using uncertainty estimates obtained from Bayesian training to adjust weight updates is natural and potentially very promising. \\nHowever, to me this paper does not seem to investigate the idea sufficiently deeply.\\n\\nThe weight pruning or hard masking variant of the method depends on a very important hyperparameter p (the size of the mask), and it is unclear how to set it beforehand. \\n\\nI also struggle with understanding the weight regularisation or soft masking variant. \\nThe authors seem to get their inspiration from the idea of assumed density filtering, where the posterior for 1:T-1 is approximated and used as a prior for task T (last sentence on page 5).\\nAt the same time, in Algorithm 2, line 6, the prior is defined as the standard BBB mixture prior and not the approximate posterior from the previous task.\\nQuite oddly, parameters of the _approximate posterior_ are being quadratically regularised to not deviate from parameters of the _approximate posterior_ from the previous task. \\nThis deviates from the original idea and requires additional justification.\\nBesides that, I find the way this regularisation is applied potentially problematic for the variance parameter (last term in eq.
6).\\nHere the authors apply the regularisation to the parameter of the softplus transformation they use, but scale it with the inverse std deviation, which belongs to the \\u201cclassical\\u201d parametrisation. The choice of parametrisation was not discussed; however, different parametrisations may clearly lead to very different results. \\n\\nOn the experimental side, I have two major issues:\\n1. The datasets considered are very small; the authors could consider using ImageNet, especially given that they already work with 224x224 images.\\n2. The only prior work used as a baseline is PackNet, while there is no reason why other established methods such as EWC are not applicable.\", \"minor_comments\": \"The middle expression in eq. 5 seems to miss the -log p(D_T | D_{1:T-1}) term, which does not change the latter expression (since it does not depend on the parameters theta).\", \"page_3\": \"\\u201ccitestochastic methods\\u201d, a citation seems to be missing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice combination of ideas, but requires more development.\", \"review\": \"In this paper, a framework for lifelong learning based on Bayesian neural networks is proposed. The key idea is to combine iterative pruning for multi-task learning with weight regularization. The idea of iterative pruning was first considered by Mallya et al., 2018, and weight regularization was considered for Bayesian neural networks by Nguyen et al., 2018.\", \"pros\": [\"The combination of the two ideas seems novel. I like the idea of considering the weight parameters as the \\\"global\\\" random variables and the mask parameters as the task-specific random variables.\"], \"cons\": [\"In general, there is a lack of explanation/justification for the combination of the two ideas. Especially, there is a lack of explanation of how to apply the whole algorithm (e.g., the text states that the complete algorithm is in Algorithm 3, but there is no Algorithm 3 in the paper).\", \"I do not understand how equation (6) is developed, and why hyper-parameters are needed for \\\"regularization of weights\\\", compared with Variational Continual Learning (VCL, Nguyen et al., 2018). More explanation seems necessary to justify the algorithm.\", \"Stronger baselines need to be considered for the experiments. Why is there no comparison with existing continual learning algorithms? At the very least, comparison with VCL or Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017) seems necessary, since one of the key ideas is weight regularization.\", \"In general, I think it is a nice idea to combine two existing approaches. However, the algorithm lacks justification in general and the experimental results are not very persuasive.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
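The learning-rate regularization trick described in the author responses above (scaling each mean's update by its posterior uncertainty) can be sketched in a few lines. This is our reading of the described method, not the authors' code; the names mu, rho, and base_lr, and the max-normalization of sigma, are illustrative assumptions.

import torch
import torch.nn.functional as F

def uncertainty_scaled_step(mu, rho, grad_mu, base_lr=1e-3):
    # In a mean-field Bayesian layer each weight has mean mu and standard
    # deviation sigma = softplus(rho).
    sigma = F.softplus(rho)
    # Scale each mean's learning rate by its normalized uncertainty:
    # confident weights (small sigma) are nearly frozen, while uncertain
    # weights keep close to the full base learning rate.
    scale = sigma / sigma.max()
    return mu - base_lr * scale * grad_mu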
S1gBz2C9tX
Importance Resampling for Off-policy Policy Evaluation
[ "Matthew Schlegel", "Wesley Chung", "Daniel Graves", "Martha White" ]
Importance sampling is a common approach to off-policy learning in reinforcement learning. While it is consistent and unbiased, it can result in high-variance updates to the parameters of the value function. Weighted importance sampling (WIS) has been explored to reduce variance for off-policy policy evaluation, but only for linear value function approximation. In this work, we explore a resampling strategy to reduce variance, rather than a reweighting strategy. We propose Importance Resampling (IR) for off-policy learning, which resamples experience from the replay buffer and applies a standard on-policy update. The approach avoids using importance sampling ratios directly in the update, instead correcting the distribution over transitions before the update. We characterize the bias and consistency of our estimator, particularly compared to WIS. We then demonstrate in several toy domains that IR has improved sample efficiency and reduced parameter sensitivity, as compared to several baseline WIS estimators and to IS. We conclude with a demonstration showing that IR improves over IS for learning a value function from images in a racing car simulator.
[ "Reinforcement Learning", "Off-policy policy evaluation", "importance resampling", "importance sampling" ]
https://openreview.net/pdf?id=S1gBz2C9tX
https://openreview.net/forum?id=S1gBz2C9tX
ICLR.cc/2019/Conference
2019
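As a reading aid before the discussion below, here is a minimal sketch of the Importance Resampling idea as the abstract describes it: transitions are drawn from the replay buffer in proportion to their importance ratios, and an ordinary on-policy TD(0) update is applied with no ratio in the update itself. The linear value function, the variable names, and the step-size handling are illustrative assumptions, not the paper's implementation.

import numpy as np

def ir_td0_step(buffer, rho, w, phi, alpha=0.1, gamma=0.99, k=32, rng=np.random):
    # buffer: list of (s, r, s_next) transitions gathered under a behavior
    # policy mu; rho[i] = pi(a_i|s_i) / mu(a_i|s_i) is the importance ratio
    # stored with transition i; w are linear value-function weights.
    p = rho / rho.sum()                                   # resampling distribution
    for i in rng.choice(len(buffer), size=k, p=p):
        s, r, s_next = buffer[i]
        delta = r + gamma * w @ phi(s_next) - w @ phi(s)  # TD(0) error
        w = w + (alpha / k) * delta * phi(s)              # note: no IS ratio here
    return w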
{ "note_id": [ "ryeb3X8-lV", "r1gTd5V214", "BkeiWN7t0X", "H1lq6XQFRX", "S1lNgEC1CQ", "H1edRZXkR7", "Hkxm5TG1RX", "ryeGVsf1RX", "ryekoEmc6m", "SkeKENrLTX", "B1xhJIFQpm", "BJeFeZSG6m", "B1gXCgBfp7", "BJek6RxA3X" ], "note_type": [ "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1544803240800, 1544469108614, 1543218178906, 1543218113755, 1542607851953, 1542562256445, 1542561162579, 1542560553693, 1542235286770, 1541981232580, 1541801444149, 1541718257063, 1541718218714, 1541439159207 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1263/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper1263/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/Authors" ], [ "ICLR.cc/2019/Conference/Paper1263/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes to use importance resampling (IR) as an alternative to the more popular importance sampling (IS) approach to off-policy RL. The hope is to reduce variance, as shown in experiments. However, there is no analysis why/when IR will be better than IS for variance reduction, and a few baselines were suggested by reviewers. While the authors rebuttal was helpful in clarifying several issues, the overall contribution does not seem strong enough for ICLR, on both theoretical and empirical sides.\\n\\nThe high variance of IS is known, and the following work may be referenced for better 1st order updates when IS weights are used: Karampatziakis & Langford (UAI'11).\\n\\nIn section 3, the paper says that most off-policy work uses d_mu, instead of d_pi, to weigh states. This is true, but in the current context (infinite-horizon RL), there are more recent works that should probably be referenced:\", \"http\": \"//proceedings.mlr.press/v70/hallak17a.html\", \"https\": \"//papers.nips.cc/paper/7781-breaking-the-curse-of-horizon-infinite-horizon-off-policy-estimation\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Nice work with potential, but contributions need to be strengthened\"}", "{\"comment\": \"I have gone through the reviews and found the explanations reasonable. I also agree that theoretical comparisons between the variance of IR, IS (and/or WIS) is non-trivial, and there seems to be no known analysis on this.\\n\\nRe comparisons, although the above explanations (on ABQ, Vtrace, Impala) are very reasonable, I still think comparing with at least one modern off-policy RL methods (such as V-trace) would be useful to convince readers on the idea of applying IR to off-policy RL, especially empirically showing how this method can better utilize replay buffer in learning. 
\\n\\nTherefore, I would keep my current scores.\", \"title\": \"Thank you for the clarifications\"}", "{\"title\": \"Please see the revision\", \"comment\": \"We found your argument hard to refute, and thus performed experiments in the Markov chain random walk setting in the current revision. Please review the revision and let us know if there are any further concerns.\"}", "{\"title\": \"Author Revision\", \"comment\": \"We have submitted a minor revision to the current paper, primarily including results for V-trace and Sarsa in the appendix and pointing to these results in the main paper. From these results our initial hypotheses seem correct. To recap here, V-trace does well, but there is a clear variance-bias trade-off as the clipping parameter (\\\\bar{\\\\rho}) becomes more aggressive (it also has issues similar to ER+IS in the hardest policy settings). Sarsa performed well for the first two policy settings (the easiest) in the Markov chain, while not learning in the final setting. We believe Sarsa breaks down in the hardest case for reasons similar to those for ER+IS (not sampling enough of the needed experience to learn). We decided to again exclude ABQ, as with the trace parameter set to 0 (which is what is considered here because of the replay buffer) the algorithm reduces to TD(0).\\n\\nFrom these preliminary results we don\\u2019t feel the two added algorithms (V-trace and Sarsa) add to the comparison meaningfully. We stand by keeping the current results in the paper as is, with the added algorithms tested in the appendix.\\n\\nWe decided to forgo changes to the theory section for the reasons mentioned in our responses to the individual reviews.\\n\\nWe hope you get a chance to review the additions, and we look forward to any further comments you may have.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your review! We will address your concerns below.\\n\\nYou are correct: we don\\u2019t show concrete evidence of IR having lower variance than IS or WIS. This is actually quite a tricky thing to do in general, and the WIS* estimator we use as a lower bound for the variance of IR also doesn\\u2019t have a convenient form. Unfortunately, we are unaware of any techniques to show this more concretely, but we provided the bounds presented in the paper for completeness. In fact, it is not generally true that WIS has lower variance than IS, and proving more generally that IR (or even WIS) has lower variance than IS is not possible here.\\n\\nWe don\\u2019t compare to Retrace [4] and ABQ [3] for two reasons. The first is that we focus on state values in this paper, while ABQ and Retrace are only derived for state-action values. The other reason (specifically for ABQ) is that we cannot use a trace in the experience replay setting. ABQ [3] was primarily designed as an online algorithm, requiring the trace to learn off-policy. When sampling from a replay buffer, an eligibility trace doesn\\u2019t make sense, as the temporal structure of the data is broken. Finally, V-trace wasn\\u2019t compared because we care about getting accurate values for the target policies as exactly specified, rather than for policies in between the target and behavior [1]. This minor point is inconsequential for the use of V-trace in IMPALA as long as the ordering of policies remains consistent (i.e. the critic is still valid), but our main goal is to (as exactly as possible) evaluate policies towards creating many GVF predictions (a Horde-like architecture [2]). 
We give some hypotheses below about how V-trace would perform; we don\\u2019t feel it would add much beyond the comparisons already made. We felt that focusing on the fundamental approaches (IS, WIS) to importance resampling was fair here, and didn't feel V-trace would have added any more useful comparisons.\", \"hypothesis_for_how_v_trace_will_perform\": \"\", \"markov_chain_random_walk\": \"\", \"easiest_policy\": \"It will perform as well as IS, assuming the clipping parameter c_i is set well (i.e. above 2), making the algorithm equivalent to importance sampling (IS).\", \"hardest_policy\": \"It will be less sensitive than IS (depending on what the clipping parameter c_i is set to), but will still face the same issues as ER+IS when sampling from the experience replay buffer. Because there is no prioritization of the \\u201cimportant\\u201d samples, we expect the RMSVE to closely follow that of ER+IS in Figure 3 (left).\", \"four_rooms\": \"We expect V-trace to have problems similar to those in the hardest policy setting, as IS does, where we get a broader range of useful learning rates but are still hampered by the experience sampled.\", \"torcs\": \"It is hard to know here. We expect V-trace to work well, but there is a lot of play in how the learning rate is tuned (RMSProp), which makes this problem hard to predict.\\n\\n\\n[1] Espeholt, Lasse, et al. \\u201cIMPALA: Scalable distributed Deep-RL with importance weighted actor-learner architectures.\\u201d arXiv preprint arXiv:1802.01561 (2018).\\n[2] Sutton, Richard S., et al. \\u201cHorde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction.\\u201d The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2. International Foundation for Autonomous Agents and Multiagent Systems, 2011.\\n[3] Mahmood, Ashique Rupam, Huizhen Yu, and Richard S. Sutton. \\u201cMulti-step off-policy learning without importance sampling ratios.\\u201d arXiv preprint arXiv:1702.03006 (2017).\\n[4] Munos, R\\u00e9mi, et al. \\u201cSafe and efficient off-policy reinforcement learning.\\u201d Advances in Neural Information Processing Systems. 2016.\"}", "{\"title\": \"Agree, but can we compute V(s) if we already have Q(s,a)?\", \"comment\": \"Totally agree, and that's why I think FQI can be a simple baseline: TD learning of V(s) requires the IS ratio in the off-policy case, but for learning Q(s,a) we may not need that, so we are not even bothered by the problem of IS ratios in state-value function learning.\\n\\nAfter we learn the state-action value function, we can compute the state value function since we know the policy. From that point of view, I guess it should be directly comparable in experiments? \\n\\nIn general, I do agree there are many cases where we need the IS ratio, so this method is useful and not directly comparable with FQI; e.g., we can replace the original IS (just IS itself) approach in OPPE with this IR. \\n\\nAlso, one thing I might be missing is that this generalized value function learning problem may be more general than the FQI case, where we need to assume an MDP? I'm not familiar with the GVF setting described in the paper.\"}", "{\"title\": \"Only possible for state-action value functions\", \"comment\": \"Oh right. In the state-action value function case we can do this. But I'm pretty sure this isn't possible for the state value function case, which is what is considered here.\\n\\nIf you have evidence to the contrary, let us know! 
:)\\n\\nThank you for the clarification!\"}", "{\"title\": \"Thanks for your response; some clarification about FQI\", \"comment\": \"Thanks for your response! Just some clarification about what I meant by fitted Q:\\n\\nIt is not exactly the original FQI algorithm, but I believe it is a simple variant of FQI for the policy evaluation case. (Maybe it has another name?) Note that fitted Q iteration uses (s,a,s') pairs from any behavior policy to learn an (optimal) Q function by fitting the following optimality Bellman equation:\\nQ(s,a) = r(s,a) + \\\\gamma*max_a' Q(s',a')\\n\\nIf we want to do policy evaluation, we can simply change the optimality Bellman equation to the policy Bellman equation:\\nQ(s,a) = r(s,a) + \\\\gamma* E_{a' \\\\sim \\\\pi(s')} Q(s',a')\\nAnd this does not require the IS ratio either, just like FQI for policy optimization.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for the review! Your comments were insightful. We answer below:\\n\\n1.\\nWe can bound the variance if we assume bounds on all the individual quantities, but the specific bound would not be important since we only need the TD update to have finite variance. We could mention the specific quantities that need bounding in the final version (i.e. the variance of rewards, gradients, ...).\\n\\n2.\\nFrom what we gather, the reviewer wants a more careful comparison of the variances for the same number of samples. Unfortunately, there is no simple expression for the variance of the WIS estimator. The expressions that do exist use various approximations to estimate the variance of WIS. We also don't believe this would be a useful comparison, as one is a batch (WIS) algorithm while the other is an online mini-batch (IR) algorithm. A potentially better comparison would be the progress in terms of the objective function over the same number of samples. Unfortunately, we don't think this would be possible, as off-policy TD has no convergence guarantees with function approximation.\\n\\n3.\\nOur understanding is that FQI is a control algorithm, as it is defined by [1]. We don't think it is an applicable competitor for two reasons. The first is that we aren't doing control here, and our algorithm was designed specifically for off-policy policy evaluation. All of our prediction tasks have set policies that we want to evaluate; we aren't controlling to maximize a signal. Another point: we are not learning state-action value functions but only state-value functions (off-policy algorithm papers typically focus on either state values [2][3] or state-action values [4][5]).\\n\\nThis does not mean we are unable to extend the algorithm to the control case (see our comments for reviewer 2) or to state-action value functions, but we decided to focus on state-value functions for this paper.\\n\\n4.\\nWe will consider including this in the final paper or in the appendix, but feel as though the sensitivity curve is a relatively good proxy for measuring the variance of the updates. If the curve is wide, the updates have low variance; if the curve is narrow, the updates have high variance and small learning rates must be used. We also think the empirical reduction in variance is obvious from the removal of the IS ratio from the update (the major contributor to high variance in IS).\\n\\n5 & 6.\\nRight! The variance of the second term (E[Var(X_IR)]) is dependent on the IS ratios, because of the sampling distribution. 
We can clarify this and point 6 in the final version.\\n\\n7.\\nWhile we are focusing on off-policy policy evaluation, we do not feel that off-policy learning is restricted to the control case. Instead we view off-policy learning as the more general notion of learning from off-policy data, which could be policy evaluation or policy improvement.\\n\\n\\n[1] Ernst, Damien, Pierre Geurts, and Louis Wehenkel. \\\"Tree-based batch mode reinforcement learning.\\\" Journal of Machine Learning Research 6.Apr (2005): 503-556.\\n[2] Sutton, Richard S., et al. \\\"Fast gradient-descent methods for temporal-difference learning with linear function approximation.\\\" Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.\\n[3] Mahmood, A. Rupam, Hado P. van Hasselt, and Richard S. Sutton. \\\"Weighted importance sampling for off-policy learning with linear function approximation.\\\" Advances in Neural Information Processing Systems. 2014.\\n[4] Munos, R\\u00e9mi, et al. \\\"Safe and efficient off-policy reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2016.\\n[5] Mahmood, Ashique Rupam, Huizhen Yu, and Richard S. Sutton. \\\"Multi-step off-policy learning without importance sampling ratios.\\\" arXiv preprint arXiv:1702.03006 (2017).\"}", "{\"title\": \"Good idea on off-policy learning, but with limited analysis and experiments\", \"review\": \"In this work, the authors study the technique of importance resampling (IR) for off-policy evaluation in RL, which tends to have low bias (and is unbiased in the bias-corrected version) and low variance. Different from existing methods such as importance sampling (IS) and weighted importance sampling (WIS), which correct the distribution over policy/transitions by an importance sampling ratio, in IR one stores the offline data in a buffer and resamples the experience data (in the form of state, action, and next state) for on-policy RL updates. This approach avoids using importance sampling ratios directly, which potentially alleviates the variance issue in TD estimates. The authors further analyze the bias and consistency of IR, discuss the variance of IR, and demonstrate the effectiveness of IR by comparing it with IS/WIS on several benchmark domains.\\n\\nOverall, I think this paper presents an interesting idea of IR for off-policy learning. In particular, it hinges on designing the sampling strategy in the replay buffer to handle the distribution discrepancy problem in off-policy RL. Through this simple off-policy estimator, the authors are able to show improvements when compared with other state-of-the-art off-policy methods such as IS and WIS, which are both known to have high-variance issues. The authors also provide bias and consistency analyses of these estimators, which are reasonable theoretical contributions. The major theoretical question/concern that I have is in terms of the variance comparisons between IR and IS/WIS. While I see some discussion in Sec 3.3, is there a concrete result showing that the IR estimator has lower variance when compared to IS and WIS (even under certain technical assumptions)? This is an important missing piece for IR, as the original motivation for not using IS/WIS estimators is their variance issues. 
\\n\\nIn terms of experiments, while the authors have done a reasonably good job evaluating IR on several domains based on the MSE of policy evaluation, to make the work more complete can the authors also show the efficiency of IR when compared to state-of-the-art algorithms such as V-trace, ABQ or Retrace (which are cited in the introduction section)?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple and interesting method; some questions with the main theoretical results; would like to see the comparison with FQI\", \"review\": \"This paper introduces the concept of Sampling Importance Resampling (SIR) and gives a simple method to adjust the off-policyness in the TD update rule of (general) value function learning, as an alternative to importance sampling. The authors argue that this resampling technique has several advantages over IS, especially the stability with respect to the step size when optimizing based on the reweighted/resampled samples. In the experiments section they show that the learning-rate sensitivity of IR TD learning is closer to that of on-policy TD learning, compared with using IS or WIS.\\n\\nMain comments:\\n\\nThe proposed IR technique is simple and definitely interesting in RL settings. The advantage regarding sensitivity to the step-size choice in the optimization algorithm looks appealing to me, since that is a very common practical issue with IS-weighted objectives. However, I feel both the theoretical analysis and empirical results would be more convincing to me if a more complete analysis were presented. Especially considering that importance resampling itself is well known in another field, from my point of view the main contribution/duty of this paper would be introducing it to RL, comparing the pros/cons with popular OPPE methods in RL, and characterizing the scenarios best suited to this method. I think the paper could potentially do a better job. See detailed comments:\\n\\n1. The assumption of Thm 3.2 in the main body looks a little bit unnatural to me. Why can we assume that the variance is bounded instead of proving an upper bound on the variance in terms of the MDP parameters? I believe there exists an upper bound so that the result would be correct, but I\\u2019m just saying that this should be part of the proof to make the theorem complete.\\n2. If my understanding of section 3.3 is correct, the variance of IR here is the variance of IR for just one minibatch. Then this variance analysis also seems a little bit weird to me. Since IR/IR-BC is computed online (actually in minibatches), I think a fairer comparison with IS/WIS might be giving them the same number of computations over samples. E.g. I would like to see the result of an averaged IR/IR-BC estimator (over n/k minibatches) with either a sliding window (changing every time) or a fully offline buffer, where n is the number of samples used in IS/WIS and k is the minibatch size. I think that would be more informative than just viewing WIS as an (upper bound) benchmark, since it uses more samples than IR.\\n3. At a higher level, this paper considers the problem of learning a policy's value function with off-policy data. I think in addition to TD learning with IS adjustment, fitted Q iteration might be a natural baseline to compare with. It is also pretty widely used and simple. Unlike TD, FQI does not need off-policy adjustment since it learns values for each action. 
I think that can be a fair and necessary baseline to compare to, at least in the experiments section.\\n4. A relatively minor issue: I\\u2019m glad to see the authors show how sensitive each method is to the choice of learning rate. I think it would be better to show some results that directly support the argument in the introduction -- \\u201cthe magnitude of the updates will vary less\\u201d -- and maybe some more visualizable results on how stable the optimization is using IS and IR. I really think that is the most appealing point of IR to me.\\n\\nMinor comments:\\n\\n5. The authors suggest that the second part of Var(IR), stated in the fifth line from the bottom on page 5, is some variability not related to the IS ratios but just to the update values themselves. I think that seems not to be the case, since the k samples (\\\\delta_ij\\u2019s, j=1 to k) actually (heavily) depend on the IS ratios, unless I missed something here. E.g. in two extreme cases where the IS weights are all ones, or the IS weights are all zero except for one (s,a) in the buffer, the variance is very different, and that is because of the IS ratios, not the variance of the updates themselves. \\n6. Similar to (5), in the two variance expressions at the top of page 6, it would be better to point out that the distributions of the k samples are actually different in the two equations. In one of them the samples are drawn uniformly from the buffer, and in the other proportionally to the IS ratios.\\n7. I think it is a little bit confusing to readers when both off-policy learning and off-policy policy evaluation are sometimes used to describe the same setting. I would personally prefer to use off-policy (policy) learning only in the \\u201ccontrol\\u201d setting: learning the optimal policy or the optimal value function, and to use the term off-policy policy evaluation to refer to estimating a given policy\\u2019s value function. Though I understand that sometimes we may say \\u201clearning a policy value function for a given policy\\u201d, I think it might be better to clarify the setting and then use the same term throughout the paper.\\n\\nOverall, I think there are certainly some interesting points about the IR idea in this paper. However, the issues above weaken my confidence in the clarity and completeness of the analysis (in both theory and experiments) in this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Author response part 2\", \"comment\": \"Other concerns:\\n\\nThe two domains in which we do more concrete empirical studies are simple, but the simplicity allows us to make concrete statements about the effects of our algorithm. One example is in the Four Rooms domain, where competitors could not learn given 500,000 examples from the environment, whereas IR performed well given the same amount of data, almost learning the entire value function in unfavorable conditions. We also provide a demonstration in TORCS, and theoretical contributions about the approach's bias and variance.\\n\\nWe are unsure exactly what you mean by \\\"In particular, since the target policy and the behavior policy are fixed, the bigger issue seems to be that the distribution itself will not change over time\\\", but will respond to the best of our ability. Any clarifications for future discussion will be helpful.\\n\\nThere is no need for the behaviour policy to be fixed, but we follow many of the experimental designs from prior off-policy work. 
We also provide examples in the appendix with a learned behaviour policy in Mountain Car. A changing target policy (as in the control setting) could also be handled as mentioned above.\\n\\nThe replay buffer is not being completely resampled (i.e. as in SIR [3]). Correct, although we still gain the benefits of this type of full resample. We decided not to do a full resample (as in SIR) because the problem for an RL agent is a bit different. It seems better to keep the old replay buffer as is and resample only portions non-destructively, so the buffer can be shared with other parts of the full autonomous agent. In a sense you could imagine us constructing a resampled buffer, but not storing it.\\n\\nFinally, the replay buffer is changing over time through a moving window of past experience, as is typical in reinforcement learning applications, in both the TORCS domain and the Four Rooms domain. We will work to make clear in the final version that only the Markov chain has a fixed set of experience (primarily so we can do clearer studies).\\n\\n[1] Sutton, Richard S., et al. \\\"Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction.\\\" The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2. International Foundation for Autonomous Agents and Multiagent Systems, 2011.\\n[2] Art B. Owen. 2013. Monte Carlo theory, methods and examples. Chapter 9: http://statweb.stanford.edu/~owen/mc/\\n[3] Rubin, Donald B. \\\"Using the SIR algorithm to simulate posterior distributions.\\\" Bayesian statistics 3 (1988): 395-402.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your review; we look forward to having a discussion about your comments and concerns!\\n\\nFrom reading your review, you have two main concerns, and these concerns are quite strongly linked: what is the applicability of this algorithm, and how can it be used for control. We focus on these two concerns in the main body and address other concerns below.\\n\\nWhile a large portion of the RL community is primarily concerned with control, there is also interest in pure policy evaluation, or prediction, where our algorithm is highly applicable. Concretely, we look towards the Horde architecture [1], which is a large collection of general value functions (GVFs). This type of architecture could benefit from variance reduction techniques designed for static target policies, especially if the predictive units are using a shared representation. Our algorithm provides variance reduction, and also prioritizes samples important for learning off-policy value functions. Another application is in the autonomous car domain, as represented by the experiments in TORCS, where predictions are made about certain pre-defined policies that the car cannot execute due to safety concerns but can learn about off-policy.\\n\\nYou comment that our algorithm works best when policies are very different. We agree, and find this an especially appealing property of resampling. If our goal is to learn a large collection of GVFs with a potentially diverse set of target policies, we want an algorithm which can be applied no matter the behavior policy. You may also have concerns about the computational complexity of using this algorithm for a large collection of GVFs with many target policies. Instead of keeping a PMF for each new target policy, we could instead keep a single PMF for a policy that is well supported for all the target distributions (i.e. 
a uniform random policy). We may be able to choose a sampling policy that produces a lower-variance value function than the target policy would (see [2] on the best proposal distribution q(x) for a statistic f(x) and target distribution p(x) being q(x) \\\\propto f(x)p(x)).\\n\\nApplying resampling to control is possible, as one could apply importance sampling to the change in the target policy, new_tp(x)/old_tp(x), compared to what the target policy was when the sample was first stored in the replay buffer. This is computationally efficient and should address the concerns that our method cannot be applied to the control setting efficiently. The benefits of IR in the control setting remain to be seen, but we think we should see reduced variance in the updates, which could be quite beneficial. This extension would be interesting for follow-up work. There is also no need to use the current target policy as the sampling policy, as mentioned above.\\n\\n(Continues below)\"}", "{\"title\": \"Interesting approach, but unclear how far it is applicable\", \"review\": \"The authors propose to use importance resampling (IR) in place of importance sampling (IS) for policy evaluation tasks. The method proposed by the authors definitely seems valid, but it isn\\u2019t quite clear when this is applicable.\\n\\nIR is often used in the case of particle filters and other SMC methods to combat the so-called \\u201cdegeneracy problem\\u201d, where a collection of particles (or trajectories) degenerates such that all the mass is concentrated onto a single particle. This does not seem to be the case here, as the set of data (the replay buffer) does not seem to be changing over time. In particular, since the target policy and the behavior policy are fixed, the bigger issue seems to be that the distribution itself will not change over time.\\n\\nFinally, the results are given for somewhat simple problems. The first two settings show that the difference between IR/IS can be very stark, but it seems like this is the case when the distributions are very different and hence the ESS is very low. The IR methods seem like they can eliminate this deficiency by only sampling from this limited subset, but it is also unclear how to extend this to the policy optimization setting.\\n\\nOverall I have questions about where these results are applicable. And finally, as stated a moment ago, it is unclear how these results could be extended to the setting of off-policy policy optimization, where now the resulting policies are changing over time. This would necessitate updating the requisite sampling distributions as the policies change, which does seem like it would be difficult or computationally expensive (unless I am missing something). Note that this is not an issue with IS-based methods, because they can still be sampled and re-weighted upon sampling.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
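The fitted-Q exchange above can be made concrete. The sketch below implements the reviewer's suggested policy Bellman target Q(s,a) = r + gamma * E_{a'~pi} Q(s',a'), which needs no importance ratio, and then recovers V(s) = sum_a pi(a|s) Q(s,a) as the commenter proposes. The tabular representation, step size, and sweep count are illustrative assumptions.

import numpy as np

def fitted_q_evaluation(transitions, pi, n_states, n_actions,
                        gamma=0.99, lr=0.1, sweeps=200):
    # transitions: (s, a, r, s_next) tuples from ANY behavior policy;
    # pi[s, a]: target-policy probabilities. Tabular Q for clarity.
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s_next in transitions:
            target = r + gamma * pi[s_next] @ Q[s_next]  # E_{a'~pi} Q(s',a')
            Q[s, a] += lr * (target - Q[s, a])           # no IS ratio needed
    V = (pi * Q).sum(axis=1)                             # V(s) = E_{a~pi}[Q(s,a)]
    return Q, V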
SygHGnRqK7
Probabilistic Federated Neural Matching
[ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Nghia Hoang", "Yasaman Khazaeni" ]
In federated learning problems, data is scattered across different servers and exchanging or pooling it is often impractical or prohibited. We develop a Bayesian nonparametric framework for federated learning with neural networks. Each data server is assumed to train local neural network weights, which are modeled through our framework. We then develop an inference approach that allows us to synthesize a more expressive global network without additional supervision or data pooling. We then demonstrate the efficacy of our approach on federated learning problems simulated from two popular image classification datasets.
[ "Bayesian nonparametrics", "Indian Buffet Process", "Federated Learning" ]
https://openreview.net/pdf?id=SygHGnRqK7
https://openreview.net/forum?id=SygHGnRqK7
ICLR.cc/2019/Conference
2019
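Before the reviews below, a toy sketch of the neuron-matching idea from the abstract may help: each server's local neurons are matched to global neurons so that a compact global network can be assembled without pooling data. The squared-distance assignment problem here is a simplified stand-in for the paper's Beta-Bernoulli posterior matching, not its actual inference, and all names are illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_local_to_global(global_w, local_w):
    # global_w: (K, D) weight vectors of current global neurons;
    # local_w: (L, D) weight vectors of one server's neurons, with L <= K.
    # Match each local neuron to its closest global neuron by solving an
    # assignment problem on squared distances.
    cost = ((local_w[:, None, :] - global_w[None, :, :]) ** 2).sum(-1)
    local_idx, global_idx = linear_sum_assignment(cost)
    return dict(zip(local_idx, global_idx))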
{ "note_id": [ "r1gDEO5NxV", "B1lO8WT_p7", "HJx1mbpd67", "rJgnheTOaX", "rJgwugpdaQ", "B1x1LTanhQ", "SJekcov5nQ", "S1lgPnb_n7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545017391194, 1542144336201, 1542144279266, 1542144180049, 1542144110895, 1541360966689, 1541204871211, 1541049432461 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1262/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1262/Authors" ], [ "ICLR.cc/2019/Conference/Paper1262/Authors" ], [ "ICLR.cc/2019/Conference/Paper1262/Authors" ], [ "ICLR.cc/2019/Conference/Paper1262/Authors" ], [ "ICLR.cc/2019/Conference/Paper1262/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1262/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1262/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"While there was some support for the ideas presented, unfortunately this paper was on the borderline. Significant concerns were raised as to whether the setting studied was realistic, among others.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewers for their feedback. We've uploaded the revised draft to resolve reviewers' concerns. Individual responses follow below.\"}", "{\"title\": \"Authors response\", \"comment\": \"We thank the reviewer for their time and interesting suggestions. We have added additional experiments to the draft (first paragraph of Section 4) to help address your concerns and we provide additional comments below.\\n\\nTo address concerns regarding performance, compression rates, comparison to McMahan et al. (2017), and local neural networks we conducted an additional experiment (please see first paragraph of Section 4 and Fig. 2). In our previous experiments all `baselines' had some kind of intrinsic advantage over our setting and our goal was to achieve comparable performance, rather than outperform them. For example, improving upon an ensemble by averaging models in the parameter space seems very challenging -- if possible at all -- especially for higher number of batches and smaller batch sizes. A more fair comparison would be against approaches fully compatible with the federated learning problem studied in this work. \\n\\nIn particular, we concur that the baselines you suggested are well suited as \\\"fair\\\" baselines, i.e. the performance of local neural networks and Federated Averaging of McMahan et al. (2017) with a single communication. McMahan et al. (2017) (Figure 1 in their paper) empirically observed that sharing initialization across neural networks improves the performance of naive averaging of weights and used this idea to propose Federate Averaging. To implement Federated Averaging we shared initialization across batches and trained local neural networks with one hidden layer and 300 neurons for 10 epochs and then performed weighted averaging to obtain the global model. In Fig. 2 (main text) we see that even with shared initialization averaging of weights does not perform well for higher number of batches and/or heterogeneous data partitioning. In this experiment, for PFNM we trained local neural networks with 100 neurons each (and no constraints on how they were initialized). 
We considered two values of $\\\\gamma_0$: one that results in a larger model but better performance, and a degenerate one resulting in a model of the size of the local net, but sacrificing performance quality. PFNM with $\\\\gamma_0=1$ (truncated at 700 neurons; other hyperparameters were fixed to $\\\\sigma^2_0=10$ and $\\\\sigma^2_j=1$ for each j) consistently outperforms all baselines. In this experiment compression is relatively significant, i.e. when J=100 the maximum possible size is 10000 neurons, whereas PFNM used around 500 neurons in the global model.\\n\\nWe also note that when it is permissible to do multiple communication rounds to improve the performance, PFNM may serve as a good initialization for Federated Averaging (setting $\\\\gamma_0$ very small to force the global model to be of the same size as the local models), without the need to share a common initialization across local neural networks.\", \"regarding_hyperparameters\": \"when comparing to baselines satisfying the constraints (Fig. 2) we set $\\\\sigma_j=1$ for every j and $\\\\sigma^2_0=10$ across all experiments for a fair comparison. When comparing to baselines with extra resources (Fig. 3) we set $\\\\sigma^2_0=10$, and the value of $\\\\sigma_j$ was shared across $j$ and selected based on training-data performance. Please see the section \\\"Parameter sensitivity analysis for PFNM\\\" in the Supplementary (Section 4.3, Fig. 4 and Fig. 5) for more details. In summary, we observed that $\\\\sigma_j$ does not have much effect when the partition is homogeneous, and causes minor fluctuations in performance for heterogeneous cases.\"}", "{\"title\": \"Authors response\", \"comment\": \"We thank the reviewer for their time and interesting suggestions. We have added additional experiments to the draft (first paragraph of Section 4) to help address the concerns, and we provide additional comments below.\\n\\nRegarding distillation from an ensemble and DP-means: We think that our method can be considered complementary to knowledge distillation when it is possible to obtain some amount of additional data. In particular, to train a distillation network, one would need to decide on the number of hidden neurons and the weight initialization for this network. Since the local networks' weights are available, it is desirable to reuse them. However, a naive strategy of model averaging in the parameter space does not work well. McMahan et al. (2017) (see Figure 1 of their paper) empirically observed that sharing initialization across neural networks improves the performance of naive averaging of weights and used this idea to propose Federated Averaging. DP-means, as you suggested, could be another option. It is also possible to simply pick one of the local models at random for initialization. In the added experiment (Fig. 2), we show that our PFNM method provides the best model pooling solution among all of these alternatives. These alternative methods are the more appropriate baselines for our method. In our previous experiments all `baselines' had some kind of intrinsic advantage over our setting, and our goal was to achieve comparable performance rather than to outperform them. For example, improving upon an ensemble by pooling models in the parameter space seems very challenging, if at all possible, especially in the regime of many batches with small batch sizes.\\n\\nRegarding dropout: The important difference between our setting and dropout (when viewed as model pooling) is that we aggregate networks trained independently, while dropout may be viewed as implicitly aggregating networks trained sequentially, i.e. 
each new network is initialized from the previous one. The permutation invariance phenomenon motivating PFNM implies that a neural network with L hidden neurons has at least $L!$ equivalent permuted neural networks that are equivalent local optima. When neural networks are trained independently, it is possible that they converge to similar solutions, up to a permutation. That is one of the reasons naive averaging of weights (since it ignores permutations) performs poorly and matching-based model pooling is more appropriate. The dropout case is the opposite of this: when networks are trained sequentially, it seems much less likely that a neural network will jump from one permutation-invariant local optimum to another, making naive averaging of the weights obtained throughout training with dropout work well.\\n\\nRegarding the choice of $\\\\sigma_j$ for smaller batch sizes: Currently we set $\\\\sigma_j$ to be the same for all $j$. In Section 4.3, Fig. 4 and 5 of the Supplementary material we present some sensitivity analysis. Empirically, $\\\\sigma_j$ does not have much effect when the partition is homogeneous, and causes minor fluctuations in performance for heterogeneous cases. From the modeling perspective, a higher $\\\\sigma_j$ implies a smaller global model size, since local neurons assume higher variation and become \\\"more willing\\\" to be matched to existing global neurons.\\n\\nThank you for the suggestions regarding notation - we will revise the manuscript accordingly.\", \"reference\": \"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas.\\nCommunication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273\\u20131282, 2017.\"}", "{\"title\": \"Authors response\", \"comment\": \"We thank the reviewer for the feedback and provide answers to the raised concerns below.\\n\\nRegarding experiments: In our experiments each of the baselines violates at least one of the constraints of the federated learning problem we are studying, i.e. a single round of communication, no access to data after training the local models, and a compressed global model. All baselines considered have a significant advantage by violating some of those conditions. Therefore the goal of the experiments was to show that we can achieve comparable performance with our method (PFNM) while adhering to all constraints, not to outperform the baselines. Indeed, outperforming an ensemble by performing model averaging in the space of weights is extremely challenging, especially for many batches with fewer data points. We have added an additional experiment to compare with baselines satisfying all of the problem constraints. This experiment (Fig. 2) shows that PFNM outperforms all \\\"fair\\\" baselines by a good margin.\\n\\nRegarding the Hierarchical Beta Process (HBP): We do not learn the parameters of the second-level Beta processes (except in the streaming case). Instead, those are integrated out and do not have any negative effect on the learning. We agree that it is possible to consider one global Beta process and a Bernoulli process per batch; however, the group structure introduced by the second-level Beta processes is important in the streaming case for inferring the heterogeneity of the global atom distributions across groups (Section 3.3; see Fig. 3b in the Supplement for an experimental evaluation of the streaming case).\"}", "{\"title\": \"Unclear advantage\", \"review\": \"The paper uses the beta process to do federated neural matching. 
The brief experimental results show worse performance than the other techniques compared against. Also, the motivation for the hierarchical beta process isn't clear, since each group has a single Bernoulli process. This makes learning each second-level beta process a meaningless task. Why not have a single beta-Bernoulli process?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting idea, but slightly contrived and lacking empirical support\", \"review\": \"The paper develops a novel solution for federated learning under three constraints, i.e. no data pooling (which distillation violates), infrequent communication (which iterative distributed learning violates), and a modest-sized global model (which an ensemble model violates). This is admittedly a rather unique setting, and the proposed solution does fit it well.\\n\\nHowever, I have the following two main concerns.\\n1. The major criticism of distillation from an ensemble is that it needs to pool data across all sources, which raises cost and privacy concerns. However, I'm not entirely convinced this \\\"data pooling\\\" is really necessary. One could argue distillation might as well be performed with simply an extra dataset that could be collected (sampled) elsewhere.\\nPlus, even though the proposed solution doesn't need to do \\\"data pooling\\\", it is effectively doing \\\"model pooling\\\", which may have its own costs and issues, e.g. the assumptions that one has access to all the parameters of the local models, and that all those local models should be more or less homogeneous to allow such pooling to happen, might not hold.\\n\\n2. The idea of applying the Beta-Bernoulli process to uncover the underlying global model from a pool of local models is interesting. But I would very much like to see comparisons to some other simpler baselines, e.g. using dictionary learning to extract the common set of bases shared among the local models, or perhaps the slightly fancier DP-means (Kulis & Jordan, 2012)? Especially the lack of a meaningful improvement over the compared baselines in the empirical studies makes me wonder whether the BBP is indeed fit for purpose or even necessary for this task.\\n\\nSome other questions/comments:\\n1. I'd be interested to see what the authors think about the connection between their proposed PFNM and Hinton's dropout, which could also be interpreted as performing an implicit \\\"model pooling\\\" over an ensemble of local models sharing weights among each other.\\n\\n2. After introducing the notation for \\\"-j\\\", I'd suggest not abusing \\\"j\\\" to keep denoting (dummy) indices in summations (e.g. Eq.(7), (8), etc.) - I might prefer swapping it with e.g. \\\"j'\\\" in $B^{j'}_{i,l}$, $v_{j'l}$ and $\\\\sigma^2_{j'}$ to avoid confusion.\\n\\n3. When the number of batches J gets larger, which means a smaller batch size and therefore also a larger variance among the local models, would it be beneficial to also increase the noise variances $\\\\sigma_j$ accordingly to allow a better fit?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"interesting way to combine neural networks trained locally on federated data using a Beta-Bernoulli process\", \"review\": [\"Summary: The paper considers federated learning of neural networks, i.e. 
data are distributed on multiple machines and the allocation of data points is potentially inhomogeneous and unbalanced. The paper proposes a method to combine multiple networks trained locally on different data shards and form a global neural network. The key idea is to identify and reuse neurons that can be used for all shards and add new neurons to the global network if necessary. The matching and combination process is done via MAP inference of a model using a Beta-Bernoulli process. Some experiments on federated learning demonstrate the performance of the proposed method.\", \"General evaluation (+ pro/ - con, more specific comments/questions below):\", \"the paper is very well-written -- the BBP presentation is light but very accessible. The experimental setup seems sound.\", \"the matching procedure is novel for federated training of neural networks, as far as I know, but might not be if you are a Bayesian nonparametric person, as the paper pointed out similar techniques have been used for topic models.\", \"the results seem to back up the claim that the proposed method is a good candidate for combining networks at the end of training, but the performance is very similar or inferior to naive combination methods, and the global network is much larger than an individual local network and nearly as large as simply aggregating all neurons together.\", \"the comparison to recent federated learning methods is lacking (e.g. McMahan et al, 2017) (perhaps less communication efficient than the proposed method, but more accurate).\", \"Specific comments/questions/suggestions:\", \"the MAP update for the weights given the assignment matrix is interesting and resembles exactly how the Bayesian committee machine algorithm of Tresp (2000) works, except that the variances are not learnt for each parameter but fixed for each neuron. On this, there are several hyperparameters for the model, e.g. variance sigma_j -- how are these tuned/selected?\", \"the local neural networks are very small (only 50 neurons per layer). How do they perform on the test set in the homogeneous case? Is there a performance loss from combining these networks together?\", \"the compression rate is not that fantastic, i.e. the global network tends to add a new neuron for each local neuron considered. Is this because it is in general very hard to identify similar neurons and group them together? In the homogeneous case, surely there are some neurons that might be similar. Or is it because of the MAP inference procedure/local optima?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
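The permutation-matching argument debated in the record above is easy to make concrete. The sketch below is purely illustrative and is not the paper's BBP inference procedure: it aligns the hidden units of two locally trained layers with the Hungarian algorithm before averaging, which is the basic reason matching-based pooling can work where naive coordinate-wise averaging fails. All names and sizes here are hypothetical.

import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_average(w_a, w_b):
    # Align the hidden units of w_b (units x inputs) to those of w_a
    # before averaging: independently trained networks may agree only
    # up to a permutation of their hidden units.
    cost = -w_a @ w_b.T                    # negative unit-to-unit similarity
    _, cols = linear_sum_assignment(cost)  # Hungarian matching
    return 0.5 * (w_a + w_b[cols])

# Naively averaging a permuted copy of a layer would scramble it;
# matched averaging recovers the layer exactly in this toy case.
rng = np.random.default_rng(0)
w = rng.normal(size=(5, 32))
print(np.allclose(matched_average(w, w[rng.permutation(5)]), w))  # True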
S1eBzhRqK7
Evolutionary-Neural Hybrid Agents for Architecture Search
[ "Krzysztof Maziarz", "Andrey Khorlin", "Quentin de Laroussilhe", "Andrea Gesmundo" ]
Neural Architecture Search has recently shown potential to automate the design of Neural Networks. The use of Neural Network agents trained with Reinforcement Learning can offer the possibility to learn complex patterns, as well as the ability to explore a vast and compositional search space. On the other hand, evolutionary algorithms offer the greediness and sample efficiency needed for such an application, as each sample requires a considerable amount of resources. We propose a class of Evolutionary-Neural hybrid agents (Evo-NAS), that retain the best qualities of the two approaches. We show that the Evo-NAS agent can outperform both Neural and Evolutionary agents, both on a synthetic task, and on architecture search for a suite of text classification datasets.
[ "Evolutionary", "Architecture Search", "NAS" ]
https://openreview.net/pdf?id=S1eBzhRqK7
https://openreview.net/forum?id=S1eBzhRqK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlU4O2WeV", "SygXI27VTQ", "BklviPgZ6Q", "Syxf0zQch7", "B1lCfqNOh7" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544828973643, 1541844043173, 1541633950661, 1541186250381, 1541061141785 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1261/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1261/Authors" ], [ "ICLR.cc/2019/Conference/Paper1261/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1261/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1261/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Reviewers are in a consensus and recommended to reject after engaging with the authors. Please take reviewers' comments into consideration to improve your submission should you decide to resubmit.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper decision\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you so much for your feedback!\\n\\nWe tried to write everything as clearly as possible, could you please tell us exactly which things were unclear? We would be especially interested in hearing what you meant in points 2 and 3 in your review.\"}", "{\"title\": \"Simple and intuitive idea, insufficiently convincing results\", \"review\": \"Review:\\nThe paper introduces a novel way to do architecture search that uses an RNN to guide the mutation operation. The method and the motivation of the idea as long with the related work are all clearly described. However, the experiments section does not show a big uplift of the method versus the baselines and the number of types of tasks is relatively small (artificial and text).\", \"cons\": [\"No image task\", \"No large scale task to show the scalability\", \"No baselines that are not coming from AUTO-ML to show the relative performance of a classical method\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting method. However, empirical results are not convincing enough.\", \"review\": \"Summary:\\n\\nThe paper proposes a hybrid approach which combines evolution and RL. The key idea is to conduct tournament selection over a population of architectures with learned mutations. The mutations are defined as the output of an RNN controller which either reuses or alters the sequence descriptor of the parent at each step. The proposed hybrid architect is evaluated on both synthetic and text classification tasks and then compared against pure evolutionary and RL-based agents.\", \"pros\": [\"The method can be viewed as a generalization of conventional evolution by replacing the handcrafted (uniform) distribution of mutations with a learned one. On the one hand, this should hopefully improve the sample efficiency of pure genetic methods since the population can evolve towards more meaningful directions, assuming useful patterns can be learned by the mutation controller. On the other hand, mutating existing architectures seem a easier task than sampling the entire architecture from scratch.\", \"The synthetic experiment is interesting, though it's hard to draw any conclusions based a single task.\"], \"cons\": [\"To my knowledge, all text classification tasks used in 5.2 are quite small. 
There is no evidence that the method can scale to and work well on large-scale tasks, where improving the sample efficiency becomes truly crucial and challenging.\", \"It is good to see comparisons against pure evo and RL within the authors' own search space. However, the advantage of the proposed evo-NAS, especially when evaluated on real-world text classification tasks, does not seem significant enough. In particular, there is a clear overlap between the performance of architectures found by NAS, evo and evo-NAS (Figure 4). The advantage of evo-NAS is even smaller if we compare the very best model (as can be read from Figure 4) instead of the average among the top 10 (as reported in Table 2). In my opinion, performance of the strongest model is arguably more interesting than the averaged one in practice.\", \"Since no results on CIFAR or ImageNet are provided as in most prior works in the literature, it is impossible to empirically compare the method with the state-of-the-art. The experiments would be more convincing if a comparison could be provided on those benchmarks. Otherwise, it is possible that the current search space & hyperparameters are tailored towards evo-NAS, and it remains unclear whether the method can generalize well to other domains and/or search spaces.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Intuitive idea, unclear explanation.\", \"review\": \"The paper proposes a class of Evolutionary-Neural hybrid agents (Evo-NAS) to take advantage of both evolutionary algorithms and reinforcement learning algorithms for efficient neural architecture search.\\n\\n1. The paper doesn't explain how exactly the mutation action is learned, and is missing an explanation of how the RL component modifies NAS (Evo-NAS). \\n2. Very poor explanation of the LEARN TO COUNT experiment. The experiment involves a complicated setup on toy data, which makes it difficult to repeat. In Figure 3, the paper says that in sample efficiency the Evo-NAS agent strongly outperforms both the evolutionary and the neural agent. However, where the strength comes from is not discussed in detail. In Figure 2, the paper claims that PQT outperforms Reinforce for both the Neural and the Evo-NAS agent. For the Evo-NAS agent, the gain is especially pronounced at the beginning of the experiment. Thus, the paper concludes that PQT can provide a stronger training signal than Reinforce. However, how much stronger a training signal the proposed method obtains is not discussed. Because the experiments in 5.1 are set up on toy data with complicated parameters, the conclusions based on this dataset are not convincing. It would be better to add comparative results on the CIFAR and ImageNet data for convenient comparisons with the state-of-the-art. \\n3. Confusing notation and experimental setup. In 5.1, the sequence a is first defined as <a1, a2, .., an>. Then, after eq.2, the sequence a is given as a=<1, 2, ..., n>. It would be better to use different symbols here.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
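For readers skimming these reviews, the hybrid scheme they describe reduces to a small change in a standard tournament-selection loop: the mutation is sampled from a trainable controller instead of a uniform distribution. The sketch below is a schematic reading of that idea, not the authors' implementation; the aging-based replacement and all names are illustrative assumptions.

import random

def tournament_search(population, fitness, mutate, steps, k=5):
    # Schematic evolutionary loop (assumes len(population) >= k):
    # sample k candidates, mutate the fittest, replace the oldest
    # member. Swapping `mutate` between a uniform edit and one
    # sampled from a trained RNN controller is the only difference
    # between the pure-evolutionary and Evo-NAS-style agents
    # described in these reviews.
    for _ in range(steps):
        parent = max(random.sample(population, k), key=fitness)
        population.pop(0)                  # age-based removal (a design choice)
        population.append(mutate(parent))
    return max(population, key=fitness)

def uniform_mutate(arch, choices=(16, 32, 64)):
    # Uniform baseline: resample one architecture choice at random.
    i = random.randrange(len(arch))
    return arch[:i] + (random.choice(choices),) + arch[i + 1:]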
BylBfnRqFm
CAML: Fast Context Adaptation via Meta-Learning
[ "Luisa M Zintgraf", "Kyriacos Shiarlis", "Vitaly Kurin", "Katja Hofmann", "Shimon Whiteson" ]
We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), our method can be scaled up to larger networks without overfitting on a single task, is easier to implement, and saves memory writes during training and network communication at test time for distributed machine learning systems. We show empirically that this approach outperforms MAML, is less sensitive to the task-specific learning rate, can capture meaningful task embeddings with the context parameters, and outperforms alternative partitionings of the parameter vectors.
[ "caml", "context parameters", "fast context adaptation", "parameters", "test time", "maml", "fast adaptation", "model parameters", "parts", "additional input" ]
https://openreview.net/pdf?id=BylBfnRqFm
https://openreview.net/forum?id=BylBfnRqFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJla2RJ-eN", "HyeF5PNARQ", "Byl9ch4FR7", "SyeqYsIyCQ", "SJxzujUyCX", "HklgSqLJA7", "rkgcmqIkR7", "ByeMWqLy0X", "S1eoNpUQaQ", "B1l_Q1cgTX", "B1xX4-Eq3m", "H1g5yNXKhX", "HyxEtN6Ohm", "Skxk5LPL27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544777397474, 1543550865435, 1543224466479, 1542577025832, 1542577002325, 1542576695886, 1542576673793, 1542576634070, 1541791026797, 1541607200411, 1541189930531, 1541120994342, 1541096572242, 1540941446610 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1260/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1260/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "ICLR.cc/2019/Conference/Paper1260/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1260/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1260/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1260/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1260/Authors" ], [ "~Ali_Janalizadeh_Choobbasti1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a meta-learning algorithm that performs gradient-based adaptation (similar to MAML) on a lower dimensional embedding. The paper is generally well-written, and the reviewers generally agree that it has nice conceptual properties. The method also draws similarities to LEO. The main weakness of the paper is with regard to the strength of the experimental results. In a future version of the paper, we encourage the authors to improve the paper by introducing more complex domains or adding experiments that explicitly take advantage of the accessibility of the task embedding.\\nWithout such experiments that are more convincing, I do not think the paper meets the bar for acceptance at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Thank you for clarifications, still have concerns\", \"comment\": \"Thank you for the detailed replies, particularly regarding how CAML relates to prior work.\\n\\nI still have concerns about novelty and strength of experiments. Rusu et al. learn an embedding that can also be interpreted as a task encoding, and it\\u2019s not clear from the results whether the choice of parameter regression (LEO) versus feature fusion (CAML) matters much. While CAML is admirably simpler, the experiments don\\u2019t convincingly make the case that this simple change gives significant benefits. \\nExperiments on more complex few-shot learning problems might illuminate these benefits.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewers for their time to evaluate our paper, and their valuable feedback.\\n\\nWe uploaded a revision of the paper. Besides editorial changes, we added a section of practical tips to the Appendix as suggested by Reviewer 3. 
We updated our related work section to better reflect how CAML differs from existing work, in particular MT-Nets [1] and LEO [2], in response to Reviewer 4.\", \"we_summarise_these_differences_here\": \"MT-Nets [1] learn which parameters to update in MAML [3]. To this end, they learn an M-net, which is a mask (sampled from a learned probability distribution for each new task) that determines which parameters are updated in the inner loop. In the outer loop, all parameters are updated. Hence the task-specific and shared parameters are not disjoint as in CAML, and no task embedding emerges. Additionally, they learn a T-net, which learns the update direction and step size of the parameters; this makes MT-Nets robust to the inner loop learning rate. CAML adjusts the inner loop learning rate automatically via the magnitude of the gradient, and can handle a wider range of initial learning rates compared to MT-Nets. This is possible because the parameter sets are disjoint, and the context parameters are inputs to the model (i.e., gradients do not get backpropagated further).\\n\\nLEO [2] learns an embedding which generates the weights of the last layer (for classification) or the entire parameter vector of the network (in the general case). This embedding is computed via an embedding network and a relation network. At test time, the gradient steps are done in this embedding space (with additional fine-tuning of the generated network for the Mini-Imagenet experiments). In contrast, we learn an embedding that modulates a fixed network, and we do so via backpropagation through that same network. Hence our method has fewer hyperparameters / architecture choices that have to be made.\\n\\nSome reviewers raised concerns about the experimental evaluation of CAML. We deliberately chose to show that CAML works well on a broad range of problems, instead of focusing on a single setting.\\n\\nCAML can be scaled up to achieve better performance on the Mini-Imagenet benchmark, but we see this as an orthogonal problem and a question of compute and hyperparameter search. After the reviewers\u2019 feedback, we ran an additional experiment using a Resnet-18 model to test how feasible it is to scale CAML up. We tested the same hyperparameters that we used for the CNN-based experiments. The implementation is easy: we used the ResNet readily available in PyTorch and added the context parameters / FiLM layer in-between the second and third residual block, together with a few lines of extra functionality (like resetting the context parameters to zero). For MAML / MT-Nets, we would have to manually access all network parameters to set up the computation graph. We get 52.16% (+/- 0.32) and 66.33% (+/- 0.26) accuracy on the 1 and 5 shot problem respectively, which is higher than our best CNN-based results in the paper and outperforms MT-Nets. Again, these results are achieved by adjusting only 100 context parameters at test time. This indicates that CAML can indeed be scaled up further, and we leave it to future work to try larger ResNets and do a full hyperparameter search. 
(Note that these numbers might not be directly comparable to the SOTA scores of LEO [2], who get their final score by training on the training and validation set.)\\n\\nOverall, we believe our paper is interesting to the ICLR community, since compared to the popular algorithm MAML [3] and several papers that build on it (like Meta-SGD [4] and MT-Nets [1]) we explicitly learn task embeddings, which are separated from the network that is shared across tasks. We interpret the inner loop of meta-learning as a task identification step, and show that we only need to adapt a few parameters at test time, instead of the entire network. Our paper is the first to show that this is possible for a wide range of problems using a simple backpropagation operation on contextual input parameters. Compared to MAML, our method has the advantage of being robust to overfitting. It is also easier to implement (we do not need to access all weights of the network manually), needs fewer memory writes, and can be useful for distributed machine learning systems.\\n\\n[1] \u201cGradient-based meta-learning with learned layerwise metric and subspace\u201d Lee et al. (2018)\\n[2] \u201cMeta-Learning with Latent Embedding Optimisation\u201d Rusu et al. (2018)\\n[3] \u201cModel-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\u201d Finn et al. (2017)\\n[4] \u201cMeta-SGD: Learning to learn quickly for few shot learning\u201d Li et al. (2017)\"}", "{\"title\": \"Reply (Part 2)\", \"comment\": \"\u201cI am confused by the comparison between adapting input parameters versus subsets of nodes at each layer or entire layers for the sinusoid regression task. Adapting subsets of nodes at each layer roughly corresponds to Lee and Choi, yet the reported numbers are quite different?\u201d\\n- Yes, adapting subsets of nodes at each layer corresponds roughly to Lee and Choi, except that we choose which subset of nodes is adapted for this ablation study of alternative partitioning schemes. The MSE reported by Lee and Choi is higher than ours, and also their MSE scores for MAML are higher than in the original paper. We assume that this is due to differences in the implementation.\\n\\n\u201cIn Table 3, which CAML is a fair comparison (in terms of network size and architecture) to MT-NET?\u201d\\n- MT-Nets use the same architecture as MAML (32 filters), so the expressiveness during the forward pass is the same as our smallest (32 filter) architecture. MT-Net additionally learns parameters to generate T and M, which for this network are around 4,000 parameters. To outperform MT-Nets, we need to scale up the number of filters in our convolutions - trading off implementation complexity (higher in MT-Nets) for a larger network (necessary for CAML) and having a separate task embedding (given in CAML).\"}", "{\"title\": \"Reply (Part 1)\", \"comment\": \"Thank you for the time to evaluate our paper, and the thorough review. We address your raised points below.\\n\\n\u201cRusu et al (LEO) optimize a context vector, which is used to generate model parameters. Reducing the generative model to a point estimate, how is this different from generating the FiLM parameters as a function of context as done in CAML?\u201d\\n- The outputs of the FiLM layer can be seen as parameters of the network, but this differs from the approach in LEO as follows. The FiLM outputs scale and shift entire feature maps in convolutional layers (but have no influence on the FiLM layer parameters, or convolution parameters, themselves). 
LEO generates the weights of the last layer of the neural network (for classification) or the entire parameter vector of the network (in the general case). We view the context parameter in CAML as modulating the activations in a fixed network, whereas LEO generates the parameters themselves.\\n\\n\\u201cLee and Choi (MT-nets) propose a general formulation for learning which model parameters to adapt. CAML is simpler in that the model parameters to adapt are chosen beforehand to be inputs.\\u201d\\n- Lee and Choi learn which parameters to adapt, but they do not consider having additional inputs that can be adapted. Additionally, MT-Nets do not partition the network parameters into disjoint sets (a new mask is drawn for each new task, from a learned probability distribution; all parameters are updated in the outer loop). We introduce context parameters and show that this is sufficient compared to the more complex approach of MT-Nets, and that by this we can learn a task embedding via backpropagation.\\n\\n\\u201cSnell et al. / Oreshkin et al. are prototype-based methods infer context via a neural network rather than optimizing for it.\\u201d\\n- Both these methods are specific to few-shot classification, whereas CAML can also be applied to regression and reinforcement learning. Methods for few-shot classification often rely on learning class embeddings, whereas we directly learn an embedding for the current task (i.e., all classes) which modulates the classification network.\\n\\n\\u201cCAML is robust to the adaptation learning rate, but isn\\u2019t this true of any scheme that separates meta-learned and adapted parameters into disjoint sets? (e.g. also true of Lee and Choi?) \\u201c\\n- We don\\u2019t think that\\u2019s necessarily true. If the task-specific parameters depend on shared parameters in earlier layers of the network, regulating the learning rate via the magnitude of the gradient would have an influence on the outer loop update (since those gradients would be backpropagated further). This can be countered to some extent, but might still be less flexible than CAML, where the context parameters are leaves of the computation graph. \\nMT-Nets (Lee and Choi) learn an M and a T net. The M-net is responsible for selecting which parameters to update, and is sampled for each new task (from a learned probability distribution). If we understand correctly, all parameters are updated in the outer loop - hence the parameter sets are not entirely disjoint. The T-net learns the update direction and step size of the parameters, which is why MT-Nets are robust to the inner loop learning rate. For the regression task we show that CAML is robust within a learning rate range of 10^(-7) to 100 (see updated Figure 5), whereas MT-Nets cannot successfully scale to a learning rate of 10 (see their paper).\\n\\n\\u201cThe visualizations of the context parameters are nice, but interpreting much higher dimensional context vectors (which would be necessary for harder tasks) is more difficult, so I\\u2019m not sure what to take away from this?\\u201d\\n- We show the visualisations in the 2-D context to confirm that we indeed learn a task embedding via backpropagation, which corresponds to (and is smooth with respect to) the true task differences. This illustrates that the inner loop of meta-learning algorithms can be seen as a task embedding step, and that we successfully do this via backpropagation. For higher dimensional context vectors, visualisation methods such as t-SNE could be used. 
Few-shot classification is a special case, since the embedding is not disentangled (with respect to the different classes). This makes visualisation for separate classes difficult. If a disentangled representation is of interest, it might be possible to train CAML with this in mind, e.g. by updating only a part of the context vector per class. \\n\\n[continued below]\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your time and review of our paper.\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your time to read and evaluate our paper.\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your review, and the time to assess our paper.\\n\\nWe added a section with practical tips for implementation and hyperparameter selection to the Appendix. In general, we think choosing the hyperparameters in CAML can be guided by domain knowledge: since we separate the task-specific and shared parameters, the choice of both is more intuitive for the human designer than in MAML. The context parameters of CAML can be added on top of any network architecture, and they are updated only via backpropagation (unlike, e.g., LEO [1] which requires a separate network to encode the training data). Additionally, our method is not sensitive to network architecture (not prone to overfit like MAML), the inner loop learning rate, and can handle overparameterisation of the context parameters (as shown in the regression experiment, see updated Table 1).\\n\\n[1] \\u201cMeta-Learning with Latent Embedding Optimisation\\u201d Rusu et al. (2018)\"}", "{\"title\": \"incremental idea, weak experimental evidence\", \"review\": \"Summary\\nCAML is a gradient-based meta-learning method closely related to MAML. It divides model parameters into disjoint sets of task-specific parameters $\\\\phi$ which are adapted to each task and task-independent parameters $\\\\theta$ with are meta-learned across tasks. $\\\\phi$ are then interpreted as an embedding and fed as input to the model (parameterized by $\\\\theta$). Experiments demonstrate that this approach performs on par with MAML while adapting far fewer parameters. An additional benefit is that this approach is less sensitive to the adaptation learning rate and is easier to implement and faster to compute.\\n\\nStrengths\\nWhile not really explained in the paper, this work connects gradient-based to embedding-based meta-learning approaches. Adaptation is via gradient descent, but the adapted parameters are then re-interpreted as an embedding.\\nThe method has the potential to perform on par with MAML while being simpler and faster.\\nThe paper is well-written.\\n\\nWeaknesses\\nThe field of meta-learning variants is crowded, and this paper struggles to carve out its novelty. \\nRusu et al (LEO) optimize a context vector, which is used to generate model parameters. Reducing the generative model to a point estimate, how is this different from generating the FiLM parameters as a function of context as done in CAML? \\nLee and Choi (MT-nets) propose a general formulation for learning which model parameters to adapt. CAML is simpler in that the model parameters to adapt are chosen beforehand to be inputs. \\nSnell et al. / Oreshkin et al. are prototype-based methods infer context via a neural network rather than optimizing for it.\\n\\nIn this context, CAML appears to be yet another point drawn from the convex hull of choices already explored in episodic meta-learning (these choices can be broadly grouped into task encoding and conditional inference). 
The paper must then rest on its experimental results, which are at present unconvincing.\\n\\nOn the whole, the experimental results seem weak and the analysis results largely uninformative. The method is benchmarked on the toy tasks of sinusoid regression and a 2-D point mass, as well as mini-ImageNet few-shot classification. The sinusoid and point mass navigation are toy and compared only to MAML, so it is hard to draw conclusions from those experiments. For mini-ImageNet, while CAML outperforms MAML, it seems that the pertinent comparison is with MT-NET (which CAML does not outperform) and LEO (missing fair comparison?).\\n\\nQuestions regarding experiments\\n - CAML is robust to the adaptation learning rate, but isn\u2019t this true of any scheme that separates meta-learned and adapted parameters into disjoint sets? (e.g. also true of Lee and Choi?) \\n - The visualizations of the context parameters are nice, but interpreting much higher dimensional context vectors (which would be necessary for harder tasks) is more difficult, so I\u2019m not sure what to take away from this? It\u2019s very unsurprising that the 2-D context vector encodes x and y position in the point mass experiment, for example. \\n - I am confused by the comparison between adapting input parameters versus subsets of nodes at each layer or entire layers for the sinusoid regression task. Adapting subsets of nodes at each layer roughly corresponds to Lee and Choi, yet the reported numbers are quite different? \\n - In Table 3, which CAML is a fair comparison (in terms of network size and architecture) to MT-NET? \\n\\nEditorial Notes\", \"intro_paragraph_3\": \"fine-tuning image classification features for a semantic segmentation task is not a good example of task-independent parameters, since fine-tuning end-to-end gives significant improvements.\", \"related_work_paragraph_2\": \"Initializing context parameters to zero is not the only difference with Rei et al (2015), and seems a strange thing to highlight?\", \"tables_1_and_2\": \"state what the task is in the caption\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"interesting idea, falling short on experimental evidence\", \"review\": \"The paper talks about Meta-Learning where some of the parameters of the models adapt to the new task (context parameters) and the rest of the parameters are kept fixed (shared parameters). The authors propose a more general approach and show how CAML works for supervised learning and reinforcement learning paradigms.\\n\\nquality - The paper is written with good mathematical notation and in general is of high quality. The references to related work and the motivation of the problem are good.\\n\\nclarity - While the paper is clear in many parts, it can be a lot better. Specifically, it is unclear why the authors chose regression, classification and RL to make their point without landing either one of them fully confidently.\\n\\noriginality - the idea is good and general enough to be applicable to many situations. While variants of this idea have been tried with fine-tuning for transfer learning, I still think this work can classify as original and novel.\\n\\nsignificance of this work - The significance of meta learning is good but based on the experiments the authors conducted I am worried it has little significance. 
\\n\\npros and cons - Overall, while I am supportive of a weak accept because of the idea and its broad applicability, I feel the authors should maybe choose one of the tasks and show much more value in using the CAML framework. The three tasks they chose are all toy problems and do not instill confidence in the validity of CAML for either large-scale experiments or setups where the distribution is changing but the tasks remain the same. It would be great to strengthen the paper with a cleaner story in the experiments section and show that CAML achieves SOTA convincingly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper in general\", \"review\": [\"They are proposing a meta-learning method inspired by a previous method, MAML. Their idea is separating the parameters into two groups of context and shared parameters. The context parameters are learned through back-propagation in the inner loop and represent an embedding for an individual task. The shared parameters, on the other hand, are shared between all tasks and are learned in the outer loop.\", \"Compared to MAML, the pros of their method are as follows:\", \"Less sensitive to learning rate: thus more robust to hyperparameters.\", \"Is not prone to overfitting as MAML is.\", \"It is easier to implement, more efficient from a memory viewpoint.\", \"Cons in general:\", \"On the Mini-ImageNet dataset, although they beat MAML, they are not able to beat other competitors in 5-shot classification.\", \"They could have explored applying their method to deep residual networks and comparing their results.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"An interesting meta-learning algorithm\", \"review\": \"CAML seems an interesting meta-learning algorithm. I like the idea that the context parameters are used to modulate the whole network during the inner loop of meta-learning, while the rest of the network parameters are adapted in the outer loop and shared across tasks. Also, it is good to see that CAML is competitive on few-shot tasks with CNNs.\\n\\nThe paper is very well presented. Experiments are reasonably solid.\\n\\nIf I understood correctly, although CAML has achieved better accuracy, it seems CAML still requires a decent amount of parameter/network structure optimisation. It would be good if the paper had a section talking about practical tricks for how to find the best CAML hyperparameters quickly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Two-Stream Architecture\", \"comment\": \"Thank you for the kind feedback and your comment.\\n\\nYes, splitting the context parameters and the network architecture up into separate streams like that is possible with CAML: given the two forward streams, there would also be two backward streams through which the gradient gets propagated, for the respective parts of the context parameters. 
It would be interesting to see whether the network can make use of the opportunity to propagate information through the separate streams to speed learning.\"}", "{\"comment\": \"A very nice read; the work is very admirable.\\n\\nI was wondering if there was a way to split the context parameters used in learning the policy into two separate streams: a linear and a nonlinear stream. Something like in nonlinear control theory, where the linear stream would stabilize the local dynamics, and the nonlinear stream would handle global control. \\nIn a reinforcement learning setup, this would bring the benefits of both linear and nonlinear policies, which would, in turn, lead to greater generalization and more scalability.\", \"title\": \"Making the model more scalable\"}" ] }
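The mechanism argued over throughout this record fits in a few lines of PyTorch. The sketch below is a minimal reading of the rebuttal's description (context parameters as extra inputs, adapted alone in the inner loop by backpropagation through the fixed shared network); it is not the authors' code, and the concatenation-based conditioning, dimensions, and names are illustrative assumptions. FiLM-style modulation would be an alternative conditioning choice.

import torch

class ContextNet(torch.nn.Module):
    # Shared network that receives the per-task context as an extra input.
    def __init__(self, in_dim=1, ctx_dim=2, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim + ctx_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x, context):
        ctx = context.expand(x.shape[0], -1)  # one context vector per task
        return self.net(torch.cat([x, ctx], dim=-1))

def adapt_context(model, x, y, loss_fn, ctx_dim=2, lr=1.0, steps=5):
    # Inner loop: start from a zero context and update it alone; the
    # shared weights stay fixed. create_graph=True keeps the graph so an
    # outer loop can differentiate through these updates (drop it at test time).
    context = torch.zeros(1, ctx_dim, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x, context), y)
        (grad,) = torch.autograd.grad(loss, context, create_graph=True)
        context = context - lr * grad
    return context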
HklSf3CqKm
Subgradient Descent Learns Orthogonal Dictionaries
[ "Yu Bai", "Qijia Jiang", "Ju Sun" ]
This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under mild statistical assumption on the data. This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among other applications. Preliminary synthetic and real experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
[ "Dictionary learning", "Sparse coding", "Non-convex optimization", "Theory" ]
https://openreview.net/pdf?id=HklSf3CqKm
https://openreview.net/forum?id=HklSf3CqKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxUGIdXlV", "SJgbLGX96Q", "rkg-4f75pQ", "rkgvlzm96Q", "H1gObZmc6m", "ByeePKFwaX", "r1gomAzrpQ", "Bkxa-CzHpX", "HJeARTGrpX", "BklN3TzB6X", "SkxX-tyrT7", "Byl9W4UmTQ", "H1lW-zgChm", "r1g5pn1527" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544943117617, 1542234697434, 1542234665177, 1542234607446, 1542234367940, 1542064472212, 1541905954833, 1541905924767, 1541905878376, 1541905835675, 1541892346954, 1541788673572, 1541435897444, 1541172418276 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1259/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/Authors" ], [ "ICLR.cc/2019/Conference/Paper1259/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1259/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1259/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1259/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies non smooth and non convex optimization and provides a global analysis for orthogonal dictionary learning. The referees indicate that the analysis is highly nontrivial compared with existing work.\\n\\nThe experiments fall a bit short and the relation to the loss landscape of neural networks could be described more clearly. \\n\\nThe reviewers pointed out that the experiments section was too short. The revision included a few more experiments. The paper has a theoretical focus, and scores high ratings there. \\n\\nThe confidence levels of the reviewers is relatively moderate, with only one confident reviewer. However, all five reviewers regard this paper positively, in particular the confident reviewer.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good ratings, strong theory\"}", "{\"title\": \"Update: paper revised\", \"comment\": \"We have expanded our synthetic experiment section, added an experiment with real data, and added a conclusion section which discusses some connections to shallow neural nets. Please feel free to take a look at our revision.\"}", "{\"title\": \"Update: paper revised\", \"comment\": \"We have expanded the synthetic experiments in Section 5 and added a real data experiments in Appendix H. Please feel free to take a look at our revision.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your valuable feedback!\\n\\nWe have expanded our synthetic experiment section, added an experiment with real data, and added a conclusion section which discusses some connections to shallow neural nets. Please feel free to take a look at our revision.\"}", "{\"title\": \"Revision: expanded synthetic experiments + real data experiments + conclusion\", \"comment\": \"We have made a revision of our paper. 
The major changes are summarized as follows:\\n\\n(1) The synthetic experiment (Section 5) is slightly expanded with results on different sparsity (\\\\theta = 0.1, 0.3, 0.5). Recovery is easier when the sparsity is higher (i.e. \\\\theta is lower), but in all cases we get successful recovery when m >= O(n^2).\\n\\n(2) We added an experiment on real images (Appendix H), which shows that complete dictionaries offer a reasonable sparsifying basis for real image patches.\\n\\n(3) We have added a conclusion section (Section 6) with discussions of our contributions and future directions.\"}", "{\"title\": \"Nice work on nonconvex nonsmooth theory, needs more work on experiments and relation to loss landscape of neural networks mentioned in abstract\", \"review\": \"The paper provides a very nice analysis for the nonsmooth (l1) dictionary learning minimization in the case of orthogonal complete dictionaries and linearly sparse signals. They utilize a subgradient method and prove a non-trivial convergence result.\\n\\nThe theory provided is solid and expands on the earlier works of Sun et al. for the nonsmooth case. Also interesting is the use of a covering number argument with the d_E metric.\\n\\nA big plus is that, unlike previous methods, the subgradient-descent-based scheme presented is independent of the initialization.\\n\\nDespite the solid theory developed, the lack of numerical experiments reduces the quality of the paper. Additional experiments with random data to illustrate the theory would be beneficial, and it would also be nice to find applications with real data.\\n\\nIn addition, as mentioned in the abstract, the authors suggest that the methods used in the paper may also aid in the analysis of shallow non-smooth neural networks, but they need to continue and elaborate with more explicit connections.\\n\\nMinor typos near the end of the paper and perhaps a few missing definitions and notations are also a small concern.\\n\\nThe paper is a very nice work and still seems significant! Nonetheless, fixing the above will elevate the quality of the paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Thanks & will have more detailed experiments\", \"comment\": \"Thank you for the positive feedback!\\n\\nWe are performing some more experiments as well as expanding the experiments section in more detail. Please stay tuned and we will let you know when it\u2019s done.\"}", "{\"title\": \"Will expand experiments and add conclusion/discussion\", \"comment\": \"Thank you for the thoughtful feedback!\\n\\nOur preliminary experiments do show the effect of sample complexity -- in particular, empirically the subgradient descent algorithm almost always succeeds as long as m = O(n^2), which is even better than the O(n^4) suggested by our theory.\\n\\nWe are working on additional experiments comparing different sparsity levels, and real-data experiments. (The experiments are indeed a bit time-consuming and would require days.) \\n\\nWe are also working on adding a conclusion section and revising the paper a bit. Please stay tuned and we will let you know when it\u2019s done.\"}", "{\"title\": \"Response on SQW and potential generalizations\", \"comment\": \"Thank you for the positive feedback! 
We respond to the specific questions in turn.\\n\\n\u201cChallenge of extending SQW to non-smooth case\u201d --- The high-level ideas of obtaining the two results are the same: characterizing the nice global landscape of the respective objectives on the sphere, and then designing specific optimization algorithms taking advantage of the particular landscapes. Characterization of the landscape is through the use of first-order (and second-order) derivatives. For our nonsmooth setting, we have to use the subdifferential to describe the first-order geometry, which involves dealing with set-valued functions and random sets (due to the randomness in the data assumption)---very different from dealing with the gradient and Hessian in the smooth calculus, as in SQW. Moreover, the traditional argument of uniform convergence of random quantities to their expectation often relies on a Lipschitz property of the quantities of interest. For random sets, the notion of concentration is unconventional, and the desired Lipschitz property also fails to hold. We introduce tools from random set theory and construct a novel concentration argument getting around the Lipschitz requirement. This in turn implies that the first-order geometry of the sample objective is close to the benign population objective, from which the algorithmic guarantee follows.\\n\\n\u201cPotential generalizations\u201d --- We believe that our theory has the potential to generalize to the overcomplete case. There, a natural generalization of the orthogonality assumption is that the dictionary A is a well-conditioned tight frame (n x L \u201cfat\u201d matrix with orthonormal rows and suitably widespread columns in the n-dim space). Although the \\\"sparse vectors in a linear subspace\\\" intuition fails there, we would still expect the columns a_i of A to minimize the population objective ||a^T Y||_1 = ||a^T A X||_1: due to the widespread nature of columns of A, a_i^T A would be an \u201capproximately 1-sparse\u201d vector (i.e., with one dominant entry and others having small magnitudes) and so vectors a_i^T AX are expected to be noisy versions of rows of X, which are the sparsest (in a soft sense) vectors among all vectors of the form a^T AX. Figuring out the precise optimization landscape in that case would be of great interest.\"}", "{\"title\": \"Extension to overcomplete case; role of nonsmoothness\", \"comment\": \"Thank you for the positive feedback! We respond to the questions in the following.\\n\\n\u201cExtending to overcomplete DL\u201d --- We believe that our theory has the potential to generalize to the overcomplete case. There, a natural generalization of the orthogonality assumption is that the dictionary A is a well-conditioned tight frame (n x L \u201cfat\u201d matrix with orthonormal rows and suitably widespread columns in the n-dim space). Although the \\\"sparse vectors in a linear subspace\\\" intuition fails there, we would still expect the columns a_i of A to minimize the population objective ||a^T Y||_1 = ||a^T A X||_1: due to the widespread nature of columns of A, a_i^T A would be an \u201capproximately 1-sparse\u201d vector (i.e., with one dominant entry and others having small magnitudes) and so vectors a_i^T AX are expected to be noisy estimates of rows of X, which are the sparsest (in a soft sense) vectors among all vectors of the form a^T AX. Figuring out the precise optimization landscape in that case would be of great interest. \\n\\n\u201cNonsmooth approach vs. 
(randomized) smoothing\\u201d --- We wonder whether you\\u2019re referring to the smoothed *objective*, or applying smoothing *algorithms* on our non-smooth objective. We will discuss both as follows.\\n\\nA smoothed objective was analyzed in Sun et al.\\u201815. Smoothing therein helped to make conventional calculus tools and expectation-concentration style argument readily applicable conceptually, but the smoothed objective and its low-order derivatives led to involved technical analysis---the smoothed objective loses the simplicity of the L1 function. This tends to be the case for several natural smoothing schemes. Also, L1 function is the regularizer people use in practical dictionary learning. This paper directly works with the non-smooth L1 objective and is able to obtain stronger results with a substantially cleaner argument, using unconventional yet highly accessible tools from nonsmooth analysis, set-valued analysis, and random set theory. \\n\\nSmoothing algorithms on non-smooth objective is an active area of ongoing research. For example, Jin et al. \\u201818 showed that randomized smoothing algorithms succeed on minimizing non-smooth objectives as long as it is point-wise close to a smooth objective, which is often chosen to be its expected version. However, in our case, even the expected objective is non-smooth (see e.g. Section 3.1), so it is not readily applicable. Moreover, the result there is based on a zero-th order method, which is a conservative algorithmic choice when the (sub)gradient information is readily available---this is the case for us. In this paper, we are able to show the convergence of subgradient descent (i.e., a first-order method) directly on the non-smooth objective. It would be of interest to see whether first-order smoothing algorithms work as well.\\n\\n\\u201cNonsmoothness in neural networks\\u201d --- It depends on what perspective we take. \\n\\nIf we are interested in the landscape (i.e. the global geometry of the loss function), then the nonsmoothness matters a lot as the nonsmooth points are scattered everywhere in the space, and if one initializes the model adversarially near the highly nonsmooth parts, intuitively the performance can be hurt by the nonsmoothness.\\n\\nHowever, if we are more interested in the trajectory of some particular algorithms (say, SGD), then maybe the non-smoothness won\\u2019t hurt a lot --- as long as nice properties on the trajectory can be established. Such a trajectory-specific analysis has been done recently in, e.g., Du et al. \\u201818. Even in this kind of results, there is no formal theory or statement saying that the nonsmooth points won\\u2019t be encountered. \\n\\nBesides our work, there are other recent papers showing why nonsmoothness should and can be handled on a rigorous basis, e.g., Laurent & von Brecht \\u201917, Kakade & Lee \\u201918.\", \"reference\": \"Sun, J., Qu, Q., & Wright, J. (2015). Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture. arXiv preprint arXiv:1511.03607.\\n\\nJin, C., Liu, L. T., Ge, R., & Jordan, M. I. (2018). Minimizing Nonconvex Population Risk from Rough Empirical Risk. arXiv preprint arXiv:1803.09357.\\n\\nDu, S. S., Zhai, X., Poczos, B., & Singh, A. (2018). Gradient Descent Provably Optimizes Over-parameterized Neural Networks. arXiv preprint arXiv:1810.02054.\\n\\nLaurent, T., & von Brecht, J. (2017). The Multilinear Structure of ReLU Networks. arXiv preprint arXiv:1712.10132.\\n\\nKakade, S., & Lee, J. D. (2018). 
Provably Correct Automatic Subdifferentiation for Qualified Programs. arXiv preprint arXiv:1809.08530.\"}", "{\"title\": \"A good paper\", \"review\": \"This paper studies the dictionary learning problem via a non-convex constrained l1 minimization. By using a subgradient descent algorithm with random initialization, they provide a non-trivial global convergence analysis for the problem. The result is interesting, which does not depend on the complicated initializations used in other methods.\\n\\nThe paper could be better if the authors could provide more details and results on numerical experiments. This could be used to confirm the proved theoretical properties in practical algorithms.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"solid analysis and new insights\", \"review\": \"This paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning. The analysis is highly nontrivial compared with existing work. Also, for dictionary learning, nonconvex $\\\\ell_1$ minimization is very important due to its robustness properties.\\n\\nI am wondering how extendable this approach is to overcomplete dictionary learning. It seems that an overcomplete dictionary would break the key observation of \\\"sparsest vector in the subspace\\\". \\n\\nIs it possible to circumvent the difficulty of nonsmoothness using (randomized) smoothing, and then apply the existing theory to the transformed objective? My knowledge is limited but this seems to be a more natural thing to try first. Could the authors compare this naive approach with the one proposed in the paper?\\n\\nAnother minor question is about the connection with training deep neural networks. It seems that in practical training algorithms we often ignore the fact that ReLU is nonsmooth since it only has one nonsmooth point \u2014 it affects the dynamics of SGD only with diminishing probability, which makes subgradient descent seemingly unnecessary. Could the authors elaborate more on this connection?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Non-smooth non-convex optimization approach to complete dictionary learning\", \"review\": \"This paper is a direct follow-up on the Sun-Qu-Wright non-convex optimization view on the Spielman-Wang-Wright complete dictionary learning approach. In the latter paper the idea is to simply realize that with Y=AX, X being nxm sparse and A a nxn rotation, one has the property that for m large enough, the rows of X will be the sparsest element of the subspace in R^m generated by the rows of Y. This leads to a natural non-convex optimization problem, whose local optima are hopefully the rows of X. This was proved in SWW for *very* sparse X, and then later improved in SQW to the linear sparsity scenario. The present paper refines this approach, and obtains slightly better sample complexity by studying the most natural non-convex problem (ell_1 regularization on the sphere).\\n\\n\\nI am not an expert on SQW so it is hard to evaluate how difficult it was to extend their approach to the non-smooth case (which seems to be the main issue with ell_1 regularization compared to the surrogate loss of SQW).\\n\\n\\nOverall I think this is a solid theoretical contribution, at least from the point of view of non-smooth non-convex optimization. 
I have some concerns about the model itself. Indeed *complete* dictionary learning seemed like an important first step in 2012 towards more general and realistic scenarios. It is unclear to this reviewer whether the insights gained for this complete scenario are actually useful more generally.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Relevant problem, incomplete paper\", \"review\": \"The paper proposes a subgradient descent method to learn orthogonal, square/complete n x n dictionaries under l1 norm regularization. The problem is interesting and relevant, and the paper, or at least the first part, is clear.\\n\\nThe most interesting property is that the solution does not depend on the dictionary initialization, unlike many other competing methods. \\n\\nThe experiments section is disappointingly short. Could the authors play with real data? How does sparsity affect the results? How does it change with different sample complexities? Also, it would be nice to have a final conclusion section. I think the paper contains interesting material but, overall, it gives the impression that the authors rushed to submit the paper before the deadline!\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
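The formulation these reviews keep referring to is compact enough to state in code. The sketch below is an illustrative projected (Riemannian) subgradient iteration for min over the unit sphere of ||q^T Y||_1 with random initialization; the step-size schedule, sizes, and the toy check are assumptions for the sake of the example, not the paper's exact algorithm or experiments.

import numpy as np

def find_sparse_direction(Y, steps=500, lr0=0.1, seed=0):
    # min_{||q||=1} (1/m) * ||q^T Y||_1: the minimizers are (signed)
    # dictionary columns when Y = A X with orthogonal A and sparse X.
    n, m = Y.shape
    rng = np.random.default_rng(seed)
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    for t in range(steps):
        g = Y @ np.sign(Y.T @ q) / m   # subgradient of the objective
        g -= (g @ q) * q               # project onto the tangent space at q
        q -= lr0 / np.sqrt(t + 1) * g  # diminishing step size
        q /= np.linalg.norm(q)         # retract back to the sphere
    return q

# Toy check with A = I: the iterate should align with one coordinate
# axis, i.e. with one row of the sparse X (soft expectation, not a proof).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3000)) * (rng.random((30, 3000)) < 0.3)
q = find_sparse_direction(X)
print(np.max(np.abs(q)))  # expected to be close to 1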
HyGEM3C9KQ
Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control
[ "Robert Csordas", "Juergen Schmidhuber" ]
The Differentiable Neural Computer (DNC) can learn algorithmic and question answering tasks. An analysis of its internal activation patterns reveals three problems: Most importantly, the lack of key-value separation makes the address distribution resulting from content-based look-up noisy and flat, since the value influences the score calculation, although only the key should. Second, DNC's de-allocation of memory results in aliasing, which is a problem for content-based look-up. Thirdly, chaining memory reads with the temporal linkage matrix exponentially degrades the quality of the address distribution. Our proposed fixes of these problems yield improved performance on arithmetic tasks, and also improve the mean error rate on the bAbI question answering dataset by 43%.
[ "rnn", "dnc", "memory augmented neural networks", "mann" ]
https://openreview.net/pdf?id=HyGEM3C9KQ
https://openreview.net/forum?id=HyGEM3C9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxq-1b2JE", "rJxK_pj6TQ", "H1epWZcnpQ", "B1eovy5nam", "BklGVJ92pX", "SkxQaAFh6m", "Hkg0R50bpQ", "rygRU5E5nm", "H1g7-dMz3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544453889916, 1542466928988, 1542394117508, 1542393698804, 1542393642341, 1542393530700, 1541692117967, 1541192277908, 1540659195014 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1258/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1258/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1258/Authors" ], [ "ICLR.cc/2019/Conference/Paper1258/Authors" ], [ "ICLR.cc/2019/Conference/Paper1258/Authors" ], [ "ICLR.cc/2019/Conference/Paper1258/Authors" ], [ "ICLR.cc/2019/Conference/Paper1258/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1258/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1258/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"\", \"pros\": [\"Identification of several interesting problems with the original DNC model: masked attention, erasion of de-allocated elements, and sharpened temporal links\", \"An improved architecture which addresses the issues and shows improved performance on synthetic memory tasks and bAbI over the original model\", \"Clear writing\"], \"cons\": \"- Does not really show this modified DNC can solve a task that the original DNC could not and the bAbI tasks are effectively solved anyway. It is still not clear whether the DNC even with these improvements will have much impact beyond these toy tasks.\\n\\nOverall the reviewers found this to be a solid paper with a useful analysis and I agree. I recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper; solid improvements to DNC\"}", "{\"title\": \"rebuttal\", \"comment\": \"Thanks for addressing the main concerns of my review, I have updated my score accordingly.\"}", "{\"title\": \"Update\", \"comment\": [\"Following the suggestions of the reviewers, we updated our paper. We made the following changes:\", \"Clarified the abstract\", \"Added mean/std loss curves for the associative recall task for many models\", \"Added mean/std error curves for the bAbI task in the appendix\", \"Highlighted our modifications compared to DNC equations in Appendix A\", \"Fixed missing definitions/variables/etc.\"]}", "{\"title\": \"Reply to reviewer 1\", \"comment\": \"Thank you for your thoughtful feedback!\"}", "{\"title\": \"Reply to reviewer 2\", \"comment\": \"Thank you for your thoughtful and helpful comments.\\n\\nFollowing the suggestions, we added additional results for the associative recall task for many network variants. We also report mean and variance of losses for different seeds. This shows that masking improves performance on this task especially when combined with improved de-allocation, while sharpness enhancements negatively affect performance in this case. From the variance plots it can be seen that some seeds of DNC-M and DNC-MD converge significantly faster than plain DNC.\\n\\nIn our experimental section, we added requested references to methods performing better on bAbI, and point out that our goal is not to beat SOTA on bAbI, but to exhibit and overcome drawbacks of DNC.\\n\\nComparison to Sparse DNC is an interesting idea, and we are currently running experiments in this direction. 
We intend to make the results available in the near future.\\n\\nWe are unable to provide a fair comparison for the lowest bAbI scores, having reported 8 seeds compared to the 20 seeds reported by Graves et al. Indeed, the high variance of DNC (Table 1) suggests that it may benefit a lot from exploring additional seeds.\\n\\nWe incorporated all of the smaller notes, including a comparison to the original DNC equations in Appendix A.\"}", "{\"title\": \"Reply to reviewer 3\", \"comment\": \"Thank you for your careful consideration and feedback. Following your request, we updated the paper to include mean learning curves for different models in Figure 6 in Appendix C. Our models converge faster than DNC. Some of them (especially DNC-MD) also have significantly lower variance than DNC.\"}", "{\"title\": \"Solid improvements to DNC.\", \"review\": \"Summary:\\n\\nThis paper is built on top of the DNC model. The authors observe a list of issues with the DNC model: issues with the deallocation scheme, issues with the blurring of forward and backward addressing, and issues in content-based addressing. The authors propose changes in the network architecture to solve all three of these issues. With toy experiments, the authors demonstrate the usefulness of the proposed modifications to DNC. The improvements are also seen on the more realistic bAbI tasks.\", \"major_comments\": \"The paper is well written and easy to follow. The proposed improvements seem to result in very clear gains. The proposed improvements also improve the convergence of the model. I do not have any major concerns about the paper. I think the contributions of the paper are good enough to accept it.\\n\\nI also appreciate that the authors have submitted the code to reproduce the results.\\n\\nI am curious to know if the authors observe similar convergence gains on the bAbI tasks as well. Can you please provide the mean learning curve on the bAbI task for DNC vs. the proposed modifications?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Promising modifications to the Differentiable Neural Computer (DNC) architecture, but needs stronger empirical evidence\", \"review\": \"\", \"overview\": \"This paper proposes modifications to the original Differentiable Neural Computer architecture in three ways. First, by introducing a masked content-based addressing which dynamically induces a key-value separation. Second, by modifying the de-allocation system to also multiply the memory contents by a retention vector before an update. Finally, the authors propose a modification of the link distribution through renormalization. They provide some theoretical motivation and empirical evidence that it helps avoid memory aliasing. \\nThe authors test their approach on some algorithmic tasks from the DNC paper (Copy, Associative Recall and Key-Value Retrieval), and also on the bAbI dataset.\", \"strengths\": \"Overall I think the paper is well-written, and proposes simple adaptations to the DNC architecture which are theoretically grounded and could be effective for improving general performance. 
Although the experimental results seem promising when comparing the modified architecture to the original DNC, in my opinion there are a few fundamental problems in the empirical section (see the weakness discussion below).\", \"weaknesses\": \"Not all model modifications are studied in all the algorithmic tasks. For example, in associative recall and key-value retrieval only DNC and DNC + masking are studied.\\n\\nFor the bAbI task, although there is a significant improvement (43%) in the mean error rate compared to the original DNC, it's important to note that performance on this task has improved a lot since the DNC paper was released. Since this is the only non-toy task in the paper, in my opinion, the authors have to discuss the current SOTA on it, and have to cite, for example, the Universal Transformer [1], EntNet [2], and relational nets [3], among other architectures that have shown recent advances on this benchmark. \\nMoreover, the sparse DNC (Rae et al., 2016) already performs much better on this task (mean error DNC: 16.7 \\pm 7.6, DNC-MD (this paper) 9.5 \\pm 1.6, sparse DNC 6.4 \\pm 2.5). Although the authors mention in the conclusion that it's future work to merge their proposed changes into the sparse DNC, it is hard to know how relevant the improvements are, knowing that there are much better baselines for this task.\\nIt would also be good if, besides the mean error rates, they reported best runs chosen by performance on the validation task, and the number of tasks solved (with < 5% error), as is standard for this dataset.\\n\\n\\nSmaller Notes. \\n1) In the abstract, I find the message motivating the masking from the sentence \\\"content based look-up results... which is not present in the key and need to be retrieved.\\\" hard to understand by itself. When I first read the abstract, I couldn't understand what the authors wanted to communicate with it. Later in 3.1 it became clear. \\n\\n2) Page 3: beta in that equation is not defined.\\n\\n3) The first paragraph on page 5 uses the acronyms DNC-MS and DNC-MDS before they are defined.\\n\\n4) Table 1: the difference between DNC and DNC (DM) is not clear. I am assuming it's the numbers reported in the paper vs. the authors' implementation? \\n\\n5) In sections 3.1-3.3, for completeness, I think it would be helpful to explicitly compare the equations from the original DNC paper with the newly proposed ones. \\n\\n--------------\", \"post_rebuttal_update\": \"I think the authors have addressed my main concern points and I am updating my score accordingly.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Well written, has implications beyond DNC\", \"review\": \"The authors propose three improvements to the DNC model: masked attention, erasion of de-allocated elements, and sharpened temporal links --- and show that this allows the model to solve synthetic memory tasks faster and with better precision. They also show the model performs better on average on bAbI than the original DNC.\\n\\nThe negatives are that the paper does not really show this modified DNC can solve a task that the original DNC could not. As the authors also admit, there have been other DNC improvements that have had more dramatic improvements on bAbI.\\n\\nI think the paper is particularly clearly written, and I would vote for it being accepted as it has implications beyond the DNC. 
The fact that masked attention works so much better than the standard cosine-weighted content-based attention is pretty interesting in itself. The insights (e.g. Figure 5) are interesting and show the study is not just trying to be a benchmark paper for some top-level results, but actually cares about understanding a problem and fixing it. Although most recent memory architectures do not seem to have incorporated the DNC's slightly complex memory de-allocation scheme, any resurgent work in this area would benefit from this study.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
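To make the masking mechanism these reviews single out concrete, here is a minimal NumPy sketch of masked content-based addressing in the spirit of the paper's DNC-M variant; the shapes, names, and toy example are illustrative rather than taken from the authors' code:

```python
import numpy as np

def masked_content_addressing(memory, key, mask, beta):
    """Content-based addressing where a learned mask selects which
    components of each memory cell take part in the match score.

    memory: (N, W) array of N memory cells of width W
    key:    (W,) lookup key emitted by the controller
    mask:   (W,) soft mask in [0, 1], also emitted by the controller; it
            lets the score depend on the 'key' part of a cell while
            ignoring the stored 'value' part
    beta:   positive key strength controlling the softmax sharpness
    """
    eps = 1e-8
    masked_key = mask * key
    masked_mem = memory * mask                       # mask broadcast over cells
    sim = masked_mem @ masked_key / (
        np.linalg.norm(masked_mem, axis=1) * np.linalg.norm(masked_key) + eps)
    logits = beta * sim
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()                   # address distribution

# Toy usage: 4 cells of width 6; the mask hides the last 3 "value" slots,
# so the lookup matches on the first 3 "key" slots only.
mem = np.random.randn(4, 6)
query = np.concatenate([mem[2, :3], np.zeros(3)])    # key part of cell 2
mask = np.array([1., 1., 1., 0., 0., 0.])
print(masked_content_addressing(mem, query, mask, beta=5.0))
```

The point the reviews make is visible here: without the mask, the stored "value" components contaminate the cosine similarity, flattening the address distribution.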
HklVMnR5tQ
Exploring the interpretability of LSTM neural networks over multi-variable data
[ "Tian Guo", "Tao Lin" ]
In learning a predictive model over multivariate time series consisting of target and exogenous variables, the forecasting performance and interpretability of the model are both essential for deployment and for uncovering knowledge behind the data. To this end, we propose the interpretable multi-variable LSTM recurrent neural network (IMV-LSTM), capable of providing accurate forecasting as well as both temporal- and variable-level importance interpretation. In particular, IMV-LSTM is equipped with a tensorized hidden state and update process, so as to learn variable-wise hidden states. On top of this, we develop a mixture attention mechanism and associated summarization methods to quantify the temporal and variable importance in the data. Extensive experiments using real datasets demonstrate the prediction performance and interpretability of IMV-LSTM in comparison to a variety of baselines. It also shows promise as an end-to-end framework for both forecasting and knowledge extraction over multi-variate data.
[ "Interpretability", "recurrent neural network", "attention" ]
https://openreview.net/pdf?id=HklVMnR5tQ
https://openreview.net/forum?id=HklVMnR5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxzgItblN", "Skegf_JRkV", "SJgW5KK3y4", "BkedufG31E", "H1gY3VZhJ4", "B1ezdEgeJE", "B1ez2no9hX", "SJxactWc3X", "SklwRJb52m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544816105970, 1544579079645, 1544489353209, 1544458863775, 1544455345112, 1543664745594, 1541221546265, 1541179797018, 1541177295073 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1257/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1257/Authors" ], [ "ICLR.cc/2019/Conference/Paper1257/Authors" ], [ "ICLR.cc/2019/Conference/Paper1257/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1257/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1257/Authors" ], [ "ICLR.cc/2019/Conference/Paper1257/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1257/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1257/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers appreciated the clarity of writing, and the importance of the problem being addressed. There was a moderate amount of discussion around the paper, but the two reviewers who responded to the author discussion were split in their opinion, with one slightly increasing their score to a 6, and the other remaining unconvinced. The scores overall are borderline for ICLR acceptance, and given that, no reviewer stepped forward to champion the paper.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Borderline, with no clear reviewer endorsement\"}", "{\"title\": \"Thanks for the reply!\", \"comment\": \"Dear Reviewer,\\n\\nThanks for updating the rating!\\n\\nWe are continuously working on improving the manuscript both theoretically and experimentally. \\n\\nFeel free to post comments if you have additional advice.\\n\\nThanks!\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"About RETAIN\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your reply to the revision! Maybe we did not explain clearly in the previous response. \\n\\nWe understand and fully agree that RETAIN is innovative in calculating the contribution of each variable in each timestep. \\n\\n-- regarding RETAIN \\n\\nWhat we tried to explain is that the derivation of the attention at each time step (i.e. \\u201cStep 3\\u201d in the paper of RETAIN) and using the attention value to weight input (i.e. \\u201cStep 4\\u201d) are problematic. \\n\\nWe would like to draw the attention of the community and inspire some insights into designing attention or contribution measures on multi-variable data. \\n\\nIn particular, in the RETAIN paper, in \\u201cStep 3\\u201d, $\\\\beta_j$ over variables is derived from the hidden states of RNN_{beta}. RNN_{beta} consumes multi-variable data in a conventional way and thus the hidden states mix information from all variables. Each element of $\\\\beta_j$ is then derived from hidden states with mixed information and opaque data flows.\\n\\nOur hypothesis is that it is improper to use each element of $\\\\beta_j$ to represent the contribution or importance measure of corresponding variables at each timestep, since each element includes the mixed contribution of all input variables. 
\\n\\nAs for \\u201cStep 4\\u201d, using the attention value to directly weight the input data could be problematic as well if we take into account the correlation direction and domain of the input variables.\\n\\n-- \\u201cTherefore, when the authors were conducting the experiment in 4.5 using RETAIN, I wonder if the authors selected the variables by correctly calculating the contribution of each variable, or simply used the attention that each variable received.\\u201d\\n\\nThe experiments in 4.5 are strictly in accordance with the RETAIN paper and use the \\u201ccontribution coefficient\\u201d defined in Eq. (5) of the RETAIN paper to select variables.\"}", "{\"title\": \"Retaining the rating\", \"comment\": \"After reading the authors' feedback, I must say that I am still not convinced of the strong novelty of this work.\\nThe proposed method for deriving the importance (variable-wise or time-wise) is still, in essence, averaging the attention values. \\nAnd the authors' feedback suggests that the authors may not have a clear understanding of RETAIN. What separates RETAIN from other attention-based models is that RETAIN provides a way to precisely calculate the contribution of each variable in each timestep, which is not the same as calculating the variable importance by the attention each variable receives. \\nTherefore, when the authors were conducting the experiment in 4.5 using RETAIN, I wonder if the authors selected the variables by correctly calculating the contribution of each variable, or simply used the attention that each variable received.\\nWith that said, I still think the paper proposes a decent approach, and the overall quality of the paper calls for a 6, so I retain my rating.\\nHowever, if this paper is accepted, I suggest that the authors clarify the points I raised regarding RETAIN, as an imprecise description of baselines could lower the credibility of the entire paper (even though the paper's idea itself is nice).\"}", "{\"title\": \"I revised my rating on the basis of the improvements brought to the paper.\", \"comment\": \"In light of the improvements brought to the paper to address some of the concerns initially raised, I believe the paper will be of interest to the ICLR community.\"}", "{\"title\": \"Comments on the updated version\", \"comment\": \"Dear reviewers,\\n\\nIf there are still concerns not addressed in our response and the updated version, we can provide further explanation in this forum.\\n\\nThanks!\"}", "{\"title\": \"Nice work, but claims are a bit much\", \"review\": \"Summary:\\nThe authors propose IMV-LSTM, which can handle multi-variate time series data in a manner that enables accurate forecasting and interpretation (importance of variables across time, and importance of each variable). The authors use one LSTM per variable, and propose two implementations: IMV-Full explicitly tries to capture the interaction between the variables before mixing the LSTM hidden layers with attention. IMV-Tensor uses separate LSTMs for each variable that remain separate, and mixes the hidden layers of the LSTMs using attention. 
The proposed model outperforms popular interpretable models on three different datasets, and the experiments regarding variable importance are convincing.\", \"pros\": [\"The paper is clearly written and easy to understand.\", \"IMV-LSTM outperforms many baselines including popular interpretable models on three different datasets, and the interpretation part is not super rigorous, but convincing enough.\", \"Multi-variate time-series data are very common; therefore interpretable, accurate models such as IMV-LSTM can have a big practical impact.\", \"I like the idea of using the important variables to train another model for testing how accurately the models can choose important variables\"], \"issues\": [\"In the introduction: the claim that the attention mechanism can unveil the effect of a variable on the target is tricky, potentially dangerous: Attention is attention. It is not causal, nor even correlational. Coefficients in logistic regression are correlated with the prediction target. Variables with high attention have \\\"some relationship\\\" with the prediction target.\", \"The methodological novelty of IMV-LSTM is limited. Using an attention mechanism on an RNN to provide interpretation has been explored quite often. This paper is not so different from other works [1,2,3].\", \"The claim that this is the first work to derive temporal-level & variable-level importance is not convincing: the importance calculation of this paper boils down to averaging the attention values. This can be easily done in the previous works [1,2,3], or any model that uses attention on each input channel and on the temporal axis.\", \"Can't follow Eq. 10. How is this justified?\", \"[1] Choi, E., Bahadori, M.T., Sun, J., Kulas, J., Schuetz, A. and Stewart, W., 2016. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems (pp. 3504-3512).\", \"[2] Zhang, J., Kowsari, K., Harrison, J.H., Lobo, J.M. and Barnes, L.E., 2018. Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record. IEEE Access.\", \"[3] Xu, Y., Biswal, S., Deshpande, S.R., Maher, K.O. and Sun, J., 2018, July. RAIM: Recurrent Attentive and Intensive Model of Multimodal Patient Monitoring Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2565-2573). ACM.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting and potent interpretable LSTM without an actual interpretation in terms of the problem: the model claims to provide variable/temporal variable importance, but without actually interpreting its quality.\", \"review\": \"This paper describes a recurrent model (LSTM specifically, but generalizable) which can produce variable-wise hidden states that can be further used for two types of attention: 1) variable importance for the importance of each variable (not accounting for time), and 2) temporal importance of each variable for the importance of each variable over time. The proposed NN model (IMV-LSTM) does not seem to directly provide such importance. 
Rather, the outputs are \\u201cdecomposed\\u201d for each variable/time, which allows probabilistic inference on top of them.\\n\\nOne of my main concerns (described in Cons/Comments below) is that it is not straightforward to grasp the quality of the variable importance and temporal variable importance results, despite this being the key strength of this paper. If this comes from my lack of understanding, I would appreciate it if the authors could provide a little more explanation.\", \"pros\": \"1.\\tThe overall quality of the paper is decent and mostly clear.\\n2.\\tThe experiments are quite extensive.\\n3.\\tThe fact that each variable should have a different level of importance is interesting and practical.\", \"cons/comments\": \"1.\\tThe term \\u201ctensor\\u201d is used throughout the paper to describe the stacked matrices. While this is not technically wrong for describing >2-dimensional structures, this term could potentially imply (and make the readers expect) tensor-based schemes such as tensor decomposition. This is not necessarily bad, but to me, \\u201ctensor\\u201d and \\u201cvariable-wise correspondence\\u201d do not seem to be associated too deeply, since the \\u201ctensor\\u201d used in IMV-LSTM is a stack of matrices that are also used independently of each other.\\n\\n2.\\tThe variable importance experiments seem quite extensive and thorough, especially the lists of variable-wise temporal importance matrices provided in the appendix. However, if the authors could provide the significance or relevance of the findings with respect to domain knowledge or literature, it may help readers further appreciate and interpret the quality of the variable importance, which is quite subjective to non-experts. Such information may not even need to be in the main paper; a short description in the appendix would suffice.\\n\\n3.\\tRelated to comment (2), the difference between IMV-Full and IMV-Tensor is hard to interpret since neither one is always better than the other (i.e., IMV-Full > IMV-Tensor in some experiments, and vice versa). While the key difference is speculated to come from how the LSTM handles the variables, I am curious how this relates to the differences in the results, and how the differences in the variable importance results (i.e., Fig. 3) can at least be speculated about.\", \"questions\": \"1.\\tShould \\tilde{h}_t in Figure 1 (a) be \\tilde{h}_{t-1}, since this hidden state is from t-1? The figure itself currently implies that the hidden state for t is used, but this is computed from x_t using U_j. With \\tilde{h}_{t-1}, it follows Eq. (1).\\n\\n2.\\tIn Equation set 2 for IMV-Tensor, are W and U (not W_j and U_j) also in tensor form, so that each variable and hidden state get transformed correspondingly (i.e., W_1 for h^1_{t-1}, U_1 for x^1_t)?\\n\\n3.\\tThe IMV-Tensor version of IMV-LSTM (related to the question above) can be considered as a set of parallel LSTMs, one for each variable. Such independence could also be inferred from Figure 1. If that\\u2019s the case, where do the variables \\u201cinteract\\u201d with each other? Is this happening in the later stage where the hidden states across variable/time are aggregated in the attention stage (Eq. (8) and on)?\\n\\n4.\\tUp until Eq. (8), n was used for the variable index where n = 1,\\u2026,N. In Eq. (8), it seems to still be used as the variable index (i.e., h_T^n and g^n), but it is also a set of possible values for a random variable z_{T+1}. Is n used the same way for z_{T+1} as well? I am slightly confused about how z is used. 
Also, (just to clarify) if we use N variables, we are using y_t as well (i.e., [x_t^1,\\u2026,x_t^{N-1}, y_t])?\\n\\n5.\\tf_agg: Is this for aggregating over instances? For \\bar{\\alpha}^n, I\\u2019m guessing this is aggregated over instances for variable n for t=1,\\u2026,T_1.\\n\\n6.\\tI am not too familiar with the notion of \\u201ctime-lag\\u201d in the experiments. If the authors could explain this a little bit, I would appreciate it.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper explores the interpretability of LSTM with multivariable data while providing accurate forecasts in the context of time series. The paper is interesting and addresses a relevant topic. But it has several drawbacks that need to be addressed.\", \"review\": \"The contributions of this paper are in the field of LSTM, where the authors explore the interpretability of LSTM with multivariate data obtained from various and disparate applications. To this end, the authors endow their approach with tensorized hidden states and an update process in order to learn the hidden states. Furthermore, the authors develop a mixture attention mechanism and a summarization method to quantify the temporal and variable importance in the data. They validate the forecasting and interpretability performance of their approach with experiments.\\n\\nThe paper is interesting, well structured, and clearly written. Also, the addressed topic of interpretability is pertinent. However, I have several concerns.\\n\\n1. In the related work the authors state that \\u201cIn time series analysis, prediction with exogenous variables is formulated as an auto-regressive exogenous model\\u201d. This is not always right - it is not imperative to add the auto-regressive terms; this is optional and depends on the way we want to formulate our time series forecasting approach and the known constraints. \\n2. In section 3 \\u2014 Interpretable Multi-Variable LSTM, by stacking exogenous time series and target series, the authors implicitly formulate their algorithms in a way that considers auto-regression. And I have several concerns with this for time series forecasting. This is because the past is not always a predictor of the future - particularly in a time series context and in industrial settings. And on the occasions where the past allows us to predict the future, we do not necessarily need to use an LSTM to forecast (the notion of persistence in forecasting is enough). Therefore, the power of LSTM in forecasting would have been more convincing had the target series been omitted from the multi-variable input.\\n3. In the Network Architecture section, the authors develop a tensorized hidden state and an update scheme. This idea is interesting; I think it would also be good to know the algorithmic complexity of this approach. \\n4. In section 3.3 the authors state that \\\"In the present paper, we choose the simple normalized summation function eq.(9). \\\" Could the authors justify the reason behind this choice? I am not convinced of the reason behind this, especially since the authors mention, right after, that \\\"It is flexible to choose alternative functions for f_{agg}\\\"\\n\\n5. 
In the experiment section, concerning the prediction performance, the authors present a table showing their results; I believe it would have been more compelling to present the prediction results with graphs showing, for example, the normalized cumulative errors.\\n\\n6. With regard to the interpretation of the results, the authors show the variable importance as a function of the epoch number; it would be equally important to correlate the same figure with the associated prediction results/normalized cumulative errors as a function of the epoch number - this would allow one to assess the importance of the interpretability.\\n\\nI think it would be important to further justify the pertinence of this work in terms of interpretability (the statement in the introduction \\\"the interpretability of prediction models is essential for deployment and\\nknowledge extraction\\\" seems to be limited) - for example, what does it bring to know the variable importance as a function of the epoch number? As an example, the Pearson correlation coefficient can help select relevant features for a model, and restrict the number of inputs to the relevant ones - can we draw inspiration from this and explain what the authors are proposing in terms of interpretability... Here the idea is to have a motivation presenting the merits of this work, which I think is missing - particularly with the experiments presented here.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
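As a concrete reading of the importance-summarization debate above (several reviewers note that it boils down to averaging attention values), here is a minimal NumPy sketch of how temporal and variable importance can be read off a mixture-attention model over variable-wise hidden states. The scoring vectors are hypothetical stand-ins for the learned attention networks, so this is an illustration of the summarization step only, not the authors' architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixture_attention_importance(h, v_time, v_var):
    """h: (B, N, T, d) variable-wise hidden states, one stream per input
    variable (as in IMV-Tensor); v_time, v_var: (d,) hypothetical scoring
    vectors standing in for the learned attention networks."""
    alpha = softmax(h @ v_time, axis=2)          # (B, N, T) temporal attention
    g = (alpha[..., None] * h).sum(axis=2)       # (B, N, d) per-variable summary
    p_z = softmax(g @ v_var, axis=1)             # (B, N) variable mixture weights
    # The reported importance summaries are instance averages of the weights:
    return alpha.mean(axis=0), p_z.mean(axis=0)  # (N, T) temporal, (N,) variable

# Toy usage on random hidden states.
B, N, T, d = 32, 5, 20, 8
h = np.random.randn(B, N, T, d)
temporal_imp, variable_imp = mixture_attention_importance(
    h, np.random.randn(d), np.random.randn(d))
```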
r1eVMnA9K7
Unsupervised Control Through Non-Parametric Discriminative Rewards
[ "David Warde-Farley", "Tom Van de Wiele", "Tejas Kulkarni", "Catalin Ionescu", "Steven Hansen", "Volodymyr Mnih" ]
Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions. Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.
[ "deep reinforcement learning", "goals", "UVFA", "mutual information" ]
https://openreview.net/pdf?id=r1eVMnA9K7
https://openreview.net/forum?id=r1eVMnA9K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxK_M5iMV", "rkxLYyujMN", "r1eSR828l4", "HkxChlKQe4", "SJxN1LwxeN", "HJlYLFbllN", "SyxhS8Dpy4", "SJgJEie3AQ", "r1loqmzq0X", "S1xjwi8dCm", "SJljVd1DAm", "BJeTg0KBAX", "BygCo9LNRQ", "Hylt_SmER7", "rylBLB7NAX", "rJxo_xmE0X", "rkxnaAM4AX", "SyeJ_RG40X", "HJgBJCzNCm", "H1xQ8iScnX", "SygwYkYK3m", "B1es20ASnQ", "SygRw182s7", "H1eD9ACDsm", "S1epB9ag57", "SkxPx4ty5X" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "comment" ], "note_created": [ 1547571825336, 1547562877720, 1545156301070, 1544945846333, 1544742363601, 1544718672584, 1544545859929, 1543404327065, 1543279507427, 1543166818718, 1543071794920, 1542983156541, 1542904485953, 1542890865203, 1542890828721, 1542889586594, 1542889156469, 1542889062961, 1542888924982, 1541196618520, 1541144446891, 1540906674529, 1540280166264, 1539989135501, 1538476613447, 1538393070734 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1256/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1256/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1256/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"title\": \"Re: Still some issues\", \"comment\": \"We thank the reviewer for pointing out the notational mistake, wherein e() was maintained in several in-text instances even after it was purged from the equations. We will correct this in the final version.\\n\\nRegarding the confusion about Q(lambda) and the IMPALA architecture, we draw distinctions between the type of policy (deterministic vs. stochastic), the reinforcement learning algorithm, the data-gathering strategy, and the particular architecture of function approximators used to represent the policy.\\n\\nWe train a greedy, deterministic agent which represents its Q function using a feedforward network that drives a recurrent memory. 
While the extension of Q-learning agents to the recurrent case is straightforward (especially in the case of relatively short episodes such as ours, where the LSTM can be fully unrolled in time), we note now that it was previously explored by Hausknecht & Stone (2015). We will cite this work in the final version for the sake of fair attribution and added clarity.\\n\\nThere are two orthogonal similarities between the agents used in our experiments and IMPALA. The first is that we gather experience for our replay buffer in a distributed manner with a centralized learner, resembling IMPALA\\u2019s approach at a high level. The second is with regard to the function approximators chosen: our network architectures are identical to those employed in the IMPALA work, except that rather than policy and state-value output layers, an output layer based on equation (6) computes action-values (the previous reward and action are also omitted as inputs). Q(lambda) is then used to compute the targets for this layer.\\n\\nWe stress that the choice of experience-gathering strategy, the network architecture (including the use of an LSTM), and even the use of Q(lambda) targets are implementation choices that are not central to DISCERN\\u2019s contribution.\"}", "{\"title\": \"Still some issues\", \"comment\": \"I'm currently reading the paper and found a few points that deserve clarification.\\n\\n* In Section 4, the relationship between $e()$ and $\\xi_\\phi$ is not explicitly written. The authors probably mean $e(s) = \\xi_\\phi(h(s))^T\\xi_\\phi(h(s))$.\\n\\nIf this is so, rather than defining $l_g$ as they do, they could rewrite (5) as\\n\\n$$\\ldots = \\log \\frac{\\exp(\\beta e(s_g))}{\\exp(\\beta e(s_g)) + \\sum_{k=1}^{K} \\exp(e(d_k))}$$.\\n\\n* In Section 4 the authors say that Q is trained with $Q(\\lambda)$, but in Appendix A2 they describe something more complicated related to IMPALA and using an LSTM. Where is the truth?\\n\\nI would be glad to see these points fixed in the final version of the paper (or the arXiv one).\"}", "{\"title\": \"Discussion should be about VIC more than DIAYN\", \"comment\": \"@Reviewer 3,\\n\\nThis paper is more about a reasonable simplification + CPC-like MI estimator of Variational Intrinsic Control (VIC). I don't particularly see the need to discuss the relationship with DIAYN, given that DIAYN is itself a trivial simplification of VIC, while the authors clearly derive their idea from VIC by (1) replacing options with a goal image and making the policy goal-conditioned; (2) the entropy term over available options reduces to the entropy of goal states, which is not a parametric function of the agent's policy or of the MI estimator between goal and final state; (3) MI between option and final state = MI between goal observation and final observation, which is approximated by the authors with a contrastive loss with associative embeddings.\", \"there_are_other_nice_details_in_this_approach\": \"The authors train a separate embedding on top of the conv features of the image observations for the contrastive loss, but don't backprop to the conv features used by the policy which optimizes the mutual information objective - which is a good, stable implementation.\"}", "{\"title\": \"More about the relationship to DIAYN\", \"comment\": \"When thinking more about it (a late thought, sorry), the \\\"discrimination\\\" idea in this paper is quite close in spirit to the idea in \\\"Diversity is All You Need\\\" (Eysenbach et al. 
2018, which is about to be accepted at ICLR 2019).\\nAt first glance, the main differences are in the mathematical way of putting it, and the fact that DISCERN deals with \\\"distractor\\\" goals which cannot be under the control of the agent.\\n\\nIn the final version of this paper, I would be glad to know more about this relationship: is there a simple connection that can be established between the mathematical frameworks? Could the authors compare how both frameworks (DIAYN and DISCERN) behave in two simple benchmarks, one without a distractor and one with some distractors? Are there other important differences which should be put forward?\"}", "{\"metareview\": \"This paper introduces an unsupervised algorithm to learn a goal-conditioned policy and the reward function by formulating a mutual information maximization problem. The idea is interesting, but the experimental studies do not seem rigorous enough. In the final version, I would like to see some more detailed analysis of the results obtained by the baselines (pixel approaches), as well as a careful discussion of the relationship with other related work, such as Variational Intrinsic Control.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Concerned about the rigor of experiments\"}", "{\"title\": \"Will revise for camera-ready\", \"comment\": \"Greetings R2,\\n\\nWe agree that this is potentially misleading, as VIC did indeed show preliminary results on domains with pixel observations. We will replace \\\"scale\\\" with \\\"be applied\\\" and \\\"significantly more complex tasks\\\" with \\\"simulated continuous control tasks\\\" in order to be more concise (please let us know if you'd suggest an alternative wording).\\n\\nRegards,\\n\\n-- Paper 1256 Authors\"}", "{\"title\": \"Suggestion for a correction\", \"comment\": \"Authors,\\n\\nWhen reading the paper again, I came across a line in related work which is potentially incorrect: \\n\\\"Recently, Eysenbach et al. (2018) showed that a special case of the VIC objective can scale to significantly more complex tasks and provide a useful basis for low-level control in a hierarchical reinforcement learning context.\\\"\\n\\nIt's hard to make a clear comparison between the nature of the tasks considered in the two papers. The tasks picked by Eysenbach et al. are actually quite easy if the goal is just to see diverse random behaviors for locomotion, as long as appropriate action limits are set for the controller and random rewards are optimized. Operating from pixels is a different challenge altogether. \\\"Scale\\\" and \\\"significantly more complex tasks\\\" are loose and vague statements. I am assuming this paper will be well read and received as far as goal-based / unsupervised RL is concerned, and it's important that related work is covered without any misleading interpretations.\"}", "{\"title\": \"Maze example\", \"comment\": \"Regarding the example of a U-shaped maze, if the goal images for two states are indistinguishable, then we would expect our approach to either pick one of the goals or go to the one that is closer to the agent\\u2019s current position. We agree that this is a limitation of using a single observation to specify a goal, but it applies equally to other approaches relying on this formulation. One way to work around this without changing the goal formulation is to provide the agent with a sequence of waypoint goals one after the other, leading it to the desired states. 
This is the approach used in \\u201cZero-Shot Visual Imitation\\u201d by Pathak et al.\"}", "{\"title\": \"Another revision posted\", \"comment\": [\"Dear reviewers,\", \"We have posted another revision addressing more textual concerns.\", \"We have added a description in-text regarding our use of HER for the reward learning baselines. We intended to relabel figures but were unable to; we will relabel them in the final camera-ready.\", \"We have moved the Algorithm box forward in the text as suggested by AnonReviewer3, as well as addressed all typos they pointed out.\", \"We have clarified in the text why cartpole is \\\"difficult\\\" in our setting.\", \"We have added references to suggested deep HRL works to the discussion, where we mention HRL.\", \"We have added additional notes to the appendix regarding why we picked the Control Suite environments we did.\", \"We have added a citation to Ha & Schmidhuber's \\\"World Models\\\" as suggested by AnonReviewer2.\", \"We have cited Lin's experience replay work when we introduce the time-evolving non-parametric buffer of previous states that we use as a source of goals.\", \"Finally, Appendix A5.1 now includes a description of HER ablation results on Atari, i.e. training without HER and training only on HER.\", \"We thank the reviewers for their help in improving the paper.\"]}", "{\"title\": \"Thanks for your reply.\", \"comment\": \"\\\"The backtracking procedure in [3] is completely orthogonal to our work. The prioritization scheme for what states to start backtracking from relies entirely on extrinsic reward, which we do not use and to which we do not presume access.\\\" \\\"Using an expanding set of goals is motivated by the fact that we do not assume to have access to the set of all possible goals, and rely on exploration through our behaviour policy. Thus evolving the goal buffer over time allows us to train on newly encountered states as goals.\\\"\\n\\nI understand the motivation behind your work, as well as the backtracking procedure. While reading your paper, the backtracking procedure work came to my mind, as their motivation for using an expanding set of goals was the same, and it worked well for them. So, I just wanted to point it out. \\n\\n\\\"We believe this is precisely the reason to learn a state embedding rather than rely on naive or predetermined notions of similarity. In the point mass task, for example, pixel renderings of any two non-overlapping positions of the point mass will have equal L2 distance from one another. A reward based on this will not reward positions nearby to the goal differently from far away positions. The situation becomes even more complex when the goal and the observation contain differences outside of the agent\\u2019s control.\\\"\\n\\nI'm not sure I understand why your method would treat the 2 states differently. \\n\\n\\\"Regarding curiosity-driven models: while these can be learned in an unsupervised way and there is a strong connection to the original formulation of empowerment (in the one-step case), after a significant review we have yet to come across a paper where state-conditioned goal achievement could be done without significant algorithmic modifications.\\\"\\n\\nAgain, I agree with the authors. I was more under the impression that one could just give the visual goal as a conditional input in the formulation of (Pathak et 
al.).\"}", "{\"title\": \"Related work revised, another revision on the way\", \"comment\": \"Dear reviewers,\\n\\nAs the Related Work section was a point of contention for more than one reviewer, and the most significant revision required, we felt it prudent to address it first and as soon as possible. We have posted a revised copy with an entirely rewritten Related Work section incorporating your feedback.\\n\\nWe will post another revision tomorrow which addresses the rest of the textual concerns.\"}", "{\"title\": \"Missing revised paper\", \"comment\": \"Well, most of the responses look satisfactory, but now we need to see how they will be implemented in the revised version of the paper... The deadline is close now!\"}", "{\"title\": \"Response to Authors' Feedback\", \"comment\": \"I have read the detailed feedback of the authors and appreciate their effort in addressing my comments by running additional experiments and also explaining why certain aspects of the proposed experiments could warrant a future research topic.\\n\\nThe authors are right in saying that GAN-based methods such as SPIRAL are not any less hacky than noise contrastive methods, given the level of trickery / skill needed to make those methods train in a stable way. So, I take back my comment on that. \\n\\nThis paper is important to encourage more work on RL without rewards, an emerging field. So, I strongly recommend accepting the paper and am updating my score to 8.\"}", "{\"title\": \"Response to AR3 (2/2)\", \"comment\": \"> - I don't have a strong background in variational methods, and it is unclear to me why using an expanding set of goals corresponding to already seen states recorded in a buffer makes maximizing the log likelihood given in (4) easier than something else.\\n\\nEquations 2 and 3 make reference to q(s_g|s_T) which, in its most general form, would be a conditional distribution responsible for assigning a scalar density to each point s_g giving the likelihood that a trajectory (from the goal-conditioned policy pi) which achieved terminal state s_T was in fact \\u201cintending\\u201d (i.e. given the conditioning goal) s_g. This is potentially a very difficult density modeling problem and it is unclear that such a distribution would be efficiently learnable, especially online in tandem with the policy. We replace this with a classification problem between K+1 candidates for the goal, as classification is generally regarded as easier than density modeling. Furthermore, our use of a non-parametric matching network-style objective allows the classifier to perform a different classification \\\"task\\\" every time (the terminal observation of the goal episode is always different, as well as the set of candidate goals) while nonetheless generalizing across classification instances.\\n\\nUsing an expanding set of goals is motivated by the fact that we do not assume to have access to the set of all possible goals, and rely on exploration through our behaviour policy. Thus evolving the goal buffer over time allows us to train on newly encountered states as goals.\\n\\n> The algorithm comes with a lot of mechanisms and small tricks (at the end of Section 3 and in Section 4) whose importance is never assessed by specific experimental studies. This matters all the more as some of the details do not seem very principled. It would be nice to have elements to figure out how important they are, with ablative studies putting them aside and comparing performance. 
\\n\\nOur initial experiments used q-hat(s_g|s_t) as the reward, but this has the potential to introduce noise through decoy sampling, reducing the realized reward when a decoy happens to be close to the goal. We found that simply using the rectified embedding cosine similarity worked better, but we do not believe this is necessarily an optimal choice.\\n\\n> Among other things, I would be glad to know how well the system performs without its HER component. Is it critical?\\n\\nWe performed additional experiments on Atari which we will include in the appendix. It appears that on the two Atari domains considered, performance drops to approximately 25-30% goal achievement in both instances without Hindsight Experience Replay. This could possibly be improved by choosing a different function of the embedding similarity as the reward.\\n\\n> The same about the goal sampling strategy, as mentioned in the discussion: how critical is it to the performance of the algorithms?\\n\\nWe report results with both sampling strategies on Control Suite and Atari tasks. Diverse sampling seems to be important on Montezuma\\u2019s Revenge (where exploration is more difficult) but otherwise both strategies seem to perform comparably well.\\n\\n> - difficult tasks like cartpole: other papers mention cartpole as a rather easy task.\\n\\nHere we refer to the cartpole task as we pose it, i.e. achieving a specific cart and pole position at a specific time (the end of the goal episode). Unlike in the standard balancing task, it is impossible to maintain an arbitrary pole position due to the effects of gravity. We will make this clearer in the text.\\n\\n> In the beginning of Section 4, the authors mention that the mechanisms of DISCERN naturally induce a form of curriculum (which may be debated), but this aspect is not highlighted clearly enough in the experimental study.\\n\\nWe are unsure how we would go about highlighting this and welcome your suggestions.\"}", "{\"title\": \"Response to AR3 (1/2)\", \"comment\": \"We thank AnonReviewer3 for their thoughtful review. We are currently preparing a revised manuscript which will address notational issues and typos, as well as include an expanded related work section. We address other specific comments below.\\n\\n> and the lack of ablative studies makes it difficult to determine which of the mechanisms are crucial to the system performance and which are not.\\n\\nWe have ablated the reward function learner in 3 ways: first, by keeping everything fixed but swapping the discriminative objective for an autoencoding objective. Second, by swapping the reward learner for a separate network trained using the criterion from Ganin et al. (again, keeping the agent architecture fixed; we also use the same proportion of hindsight-relabeled trajectories, a point not stressed but which will be made in a revision). Third, by using a reward based on a fixed notion of visual similarity in terms of L2 distance, where we tuned the bandwidth hyperparameter of this baseline to make it as strong as possible. 
If there are specific ablations AR3 would like to see, we can attempt to address them.\\n\\n> - in Section 4, I would refer to Algorithm 1 only at the end of the section after all the details have been explained: I went first to the algorithm and could not understand many details that are explained only afterwards.\\n\\nWe agree and will do this.\\n\\n> - in Algorithm 1, shouldn't the two procedures be called \\\"Imitator\\\" and \\\"Teacher\\\", rather than \\\"actor\\\" and \\\"learner\\\", to be consistent with the end of Section 3?\\n\\nThe algorithm box is explained in terms of an experience-gathering procedure (Actor) and a parameter update procedure (Learner), a split that is independent of the specifics introduced by DISCERN (see, e.g. [1] and [2] for other examples of such an exposition). Each of these procedures makes use of (and in the case of the learner, trains) both of the conceptual pieces (\\u201cimitator\\u201d and \\u201cteacher\\u201d, as you say). We chose this conceptual breakdown of the algorithm for the pseudocode block as it closely reflects our parallel distributed implementation (similar to Espeholt et al (2018)). Although a serial (or even more directly a single-machine, multi-process) implementation is straightforward to derive from this conceptual partitioning, the reverse is not true, and so we felt it more valuable to provide the Actor/Learner description.\\n\\n[1] https://surreal.stanford.edu/img/surreal-corl2018.pdf\\n[2] https://arxiv.org/abs/1803.00933\\n\\n> - there must be a mathematical relationship between $\\xi_\\phi$ and $\\hat{q}$, but I could not find this relationship anywhere in the text. $\\xi_\\phi$ itself is never introduced clearly\\u2026\\n\\nWe have introduced e() as the composition of h() and the learned embedding xi(), and use e() in equation 4. We introduced e() specifically to reduce clutter but we see now that this has caused more confusion. Several reviewers have commented on the lack of clarity here so we will address this in a revision later this week.\\n\\n> - p4: we treat h as fixed ... => explain why.\\n\\nWe re-use the same convolutional net for computational efficiency, i.e. in order to avoid the need to learn a separate convolutional network. We will add explanatory text to the revision. Note that this is a common procedure in deep actor-critic methods, where the convolutional network features of the policy are often reused for the critic without backpropagating the critic\\u2019s gradients into the shared features (see, e.g. the \\u201cLearning from pixels\\u201d results in Section 6 of Tassa et al, 2018); we will expand upon this in the text. We experimented with optimizing the convolutional network with respect to both the reward learning loss and the reinforcement learning loss and found it to perform worse in practice than only optimizing it with respect to the RL loss. Joint optimization would likely require careful tuning of a weighting hyperparameter trading off the two losses.\"}", "{\"title\": \"Response to AR1\", \"comment\": \"We thank the reviewer for their careful review of our work and their comments. As noted in our replies to AR2, we are pushing a revision shortly to address the concerns raised, chiefly a new Related Work section.\\n\\n> MISSING CITATIONS: The original UVFA [1] paper should be cited while citing goal-conditioned policies. 
\\n\\nThis was an oversight (the citation was there but got cut in editing), and will be addressed in a revision later this week.\\n\\n> In the paragraph \\\"Goal distribution\\\", the paper uses a non-parametric approach to approximate the goal distribution. Previous works ([2], [3]) have used such an approach and relevant work should be cited. \\n\\n> [1] http://proceedings.mlr.press/v37/schaul15.html\\n> [2] Many Goals Reinforcement Learning https://arxiv.org/abs/1806.09605\\n> [3] Recall Traces: Backtracking Models for efficient RL https://arxiv.org/abs/1804.00379\\n\\nWe cite HER, of which [2] is an extension. The idea of progress-based prioritization is unlikely to work in our context, as the notion of goal-completion is highly non-stationary. However, we agree that non-parametric buffers have a rich history in the context of deep reinforcement learning, so we\\u2019ve added a citation in our next revision to the paper by Lin that introduced them.\\n\\nThe backtracking procedure in [3] is completely orthogonal to our work. The prioritization scheme for what states to start backtracking from relies entirely on extrinsic reward, which we do not use and to which we do not presume access.\\n\\n> I wonder if learning the variational distribution would be tricky in scenarios where one needs to extract a representation of the end state that can distinguish states based on the actions required to reach them. For example, consider a U-shaped maze \\n> | | |\\n> | | |\\n> |_A__|__B__|\\n> In this maze, even though the states represented by points A and B are close to each other, functionally they are very far apart. I'm curious as to what the authors have to say in this regard. \\n\\nWe believe this is precisely the reason to learn a state embedding rather than rely on naive or predetermined notions of similarity. In the point mass task, for example, pixel renderings of any two non-overlapping positions of the point mass will have equal L2 distance from one another. A reward based on this will not reward positions nearby to the goal differently from far away positions. The situation becomes even more complex when the goal and the observation contain differences outside of the agent\\u2019s control.\\n\\n> Baseline Comparison: I find the experimental results not really convincing. First, a comparison to other \\\"unsupervised\\\" exploration methods like Variational Information Maximizing Exploration (VIME), Variational Intrinsic Control (VIC), and curiosity-driven learning (using inverse models) is missing. I understand that VIME and VIC are really not scalable as compared to the proposed method, and hence it should be easy to construct a toy task where it is possible to intuitively understand what's really going on, and where one can compare with the other baselines (VIME, VIC).\\n\\nWe are aware of VIME but as far as we can tell, every condition examined in that work uses an externally defined reward function, which we do not. Furthermore, VIME is a strategy for improving exploration while DISCERN is a method for learning to achieve visually specified goals. It is not clear how they could be compared.\\n\\nVariational Intrinsic Control has not been shown to work in the setting of goals specified as visual observations, and the results presented in that work on complex visual environments are very preliminary. 
We agree that applying VIC to this problem directly is an interesting direction but consider it out of scope for the present work.\", \"regarding_curiosity_driven_models\": \"while these can be learned in an unsupervised way and there is a strong connection to the original formulation of empowerment (in the one-step case), after a significant review we have yet to come across a paper where state-conditioned goal achievement could be done without significant algorithmic modifications.\\n\\n> I would recommend authors to study a toyish environment in a proper way as compared to running (incomplete) experiments on 3 different set of envs. It would make the paper really strong.\\n\\nWe believe that several of the tasks considered (reacher, point mass) are among the simplest instantiations of the problem we consider that are still interesting (i.e. high-dimensional pixel observations). The reason we chose to study both Atari and the Control Suite tasks in-depth is because they represent very different characteristics and are externally defined. We agree with Reviewer 2\\u2019s assessment that tasks in the deep RL literature are too often cherry-picked and we wanted to demonstrate breadth of applicability, which we believe we have.\"}", "{\"title\": \"Response to AR2 (3/3)\", \"comment\": \"> 5. Overall, I think this is a good paper, gives a good overview of an important problem; the matching networks idea is nice and simple; but the paper could be more broader in terms of writing than trying to portray the success of DISCERN specifically. I would be happy accepting it even if the SPIRAL baseline or VAE / AE baseline worked as well as the matching networks because I think those approaches are more principled and likely to require fewer hacks and could be applied to a lot of domains easily.\\n\\nWe disagree with the assertion that SPIRAL/AE/VAE baselines are more principled. Both our reward learner and agent approximately optimize the same objective, whereas density modeling or reconstruction objectives for reward learning are in fact introducing a secondary objective unrelated to the reinforcement learning problem at hand.\\n\\nGANs are somewhat notorious for being difficult to train. Discriminator-based rewards can also degrade when the generator\\u2019s performance becomes such that the discriminator has little or no basis for telling real from synthetic (which is perhaps not a problem for SPIRAL as the faces are not sufficiently realistic reproductions, but see e.g. Bahdanau et al, 2018\\u2019s AGILE method for a discussion). Our cooperative objective does not seem to suffer from these degeneracies, and the agent performing well does not have the potential to negatively impact the reward learner\\u2019s learning dynamics.\\n\\nFinally, we view DISCERN\\u2019s contribution as a method for robustly operationalizing mutual information in the case of goal-based RL. None of the baseline methods would have any reason to learn a similarity metric that ignores distracting elements outside the agent\\u2019s control, and indeed this is validated by their stronger performance on visually simpler Control Suite tasks and degraded performance on Atari domains, where important elements of the observed game state (namely enemies) cannot be reliably matched exactly.\\n\\n> I also hope the authors run the baselines I asked for just to make the paper more scientifically complete. \\n\\nAs we note above, we have run the hindsight-only baseline for Atari and will be adding to the Appendix. 
We believe this is the harder of the two quantitatively evaluated domain families for a hindsight-only setup to cope with due to the presence of uncontrollable distracting elements of the state, and we have verified that indeed it is unable to achieve a significant fraction of goals. We can run these baselines for the Control Suite task as well if requested but we don\\u2019t believe this will be as informative as the Atari result.\\n\\n> (2) Time Contrastive Networks (which also uses AlexNet and doesn't really work on single-view tasks but is a good citation to add),\\n\\nWe agree, and in fact already cite this work.\\n\\n> (3) Original UVFA (definitely has to be there given you even use the abbreviation for the keywords description of the paper)\\n\\nThis is indeed an oversight, UVFA was mentioned in a previous version and the citation was cut in an edit and never reintroduced. We will correct this.\\n\\n> 7. Some slightly incorrect facts/wording in the paper: The two papers cited in model-based methods (Oh and Chiappa) are not really unsupervised. They use a ton of demonstrations to learn those world models. \\n\\nWhile these papers do use pretrained agents to collect the training data, the model-learning algorithm is unsupervised and could be used on data from a random policy without modification. This is likely to reduce their performance, but our point is that even with this \\\"cheat\\\", performing model-based RL on them doesn't work.\\n\\n> Better citation might be David Ha's World Models or Chelsea Finn's Video Prediction. \\n\\nThe \\u201cWorld Models\\u201d setup is more inline with our own, compared to the papers we initially cited, so we will cite World Models in our upcoming revision. But since their results are only on two environments, one of which is a rather peculiarly constructed task (bullet avoidance in VizDoom), we should regard them as preliminary.\", \"video_prediction\": \"Similarly to the papers we do cite, it uses training data that isn't random (generated by a human I believe). It is farther from our domains of interest and isn't evaluated in terms of model-based RL.\"}", "{\"title\": \"Response to AR2 (2/3)\", \"comment\": \"> Note that in other papers cited in this, such as SPIRAL, UPN, etc, the reward metrics are used for every state transition.\\n\\nWe consulted the SPIRAL paper and found that it uses the same reward scheme as us, i.e. a single non-zero reward on the final step of the episode. As you mentioned earlier, the HER paper makes a good case for why sparse rewards could be better than per-step rewards.\\n\\n> (iii) In addition to naive image HER, I would really like to see a SPIRAL + HER baseline as is. \\n\\nThank you for pointing this out. We give both the autoencoder and the WGAN-trained policy the benefit of hindsight experience replay as well. Hence, the WGAN baseline in our paper corresponds to exactly what you suggest. We will relabel the WGAN and AE baselines as WGAN+HER and AE+HER to make this clearer. As you can see, WGAN+HER works well on many Control Suite tasks, but does not work on Atari, where moving distractor objects are present.\\n\\n> I would really like to know how the reward for each transition in the trajectory works (both for SPIRAL and your approach) and how the naive HER works. \\n\\nWe experimented early on with dense rewards and found it to work worse, possibly due to tradeoffs between partial achievement early and more complete achievement later. 
We consider dense rewards to be an important future direction but we haven\\u2019t included this comparison as the original SPIRAL work used a sparse reward, and we don\\u2019t believe we have the space to give this topic the treatment it deserves.\\n\\n> 3. Another place I really found confusing throughout the paper is the careless swapping of notations, especially in the xi(h) and e(h). Please use consistent notations especially in equation (3), the pseudocode and the rest of the paper. \\n\\nWe will attempt to address this in a revision posted later this week. We intended e() to be the composition of h(), xi(), and an L2 normalization and introduced it to reduce clutter. We apologize if it made things hard to follow.\\n\\n> 4. a. Would be nice to know if a VAE feature space metric is bad, but not a strict requirement if you don't have time to do it. But I think showing Euclidean metric baseline on VAE is better than an AE. \\n\\nWe did experiment with both VAE and AE baselines. The AE baseline performed better so we omitted the VAE baseline in order to avoid crowding the figures.\\n \\n> b. Another baseline that is related is to learn a metric with a triplet loss as in Sermanet's work. \\n\\nAs far as we can tell, neither of the Sermanet et al papers are directly comparable because the reward function relies on demonstrations of \\u201cgood\\u201d trajectories, rather than an intrinsically desired end state.\"}", "{\"title\": \"Response to AR2 (1/3)\", \"comment\": \"We thank AnonReviewer2 for their careful reading of our paper and their valuable feedback.\\n\\nWe have heard concerns regarding related work from several reviewers, and are currently finishing a complete rewrite of this section, and will publish a revision in the next few days which addresses this and several other concerns raised. We respond to specific concerns below.\\n\\n> 1. A slight negative: I find the whole pipeline extremely hacky and raises serious questions on whether this paper/technique is easy to apply on a wide variety of tasks. It gives me the suspicion that the environments were cherry-picked for showing the success of the proposed method, though, that's, in general, true of most deep RL papers. \\n\\nWe evaluated on examples of three families of visual domains that have very little in common and demonstrated success to various degrees on all of them. We know of no other work which evaluates on continuous control from pixels and Atari games in the same paper. Our main criterion for selection among Control Suite tasks was the dimensionality of the action space (and therefore the cardinality of the discretized action space; we will clarify this in the text), which concerns a limitation of Q learning rather than our method built on top of Q learning. We'd also note that DISCERN is not uniformly the winner on our \\\"whole state\\\" goal achievement metric on the Control Suite tasks; if we had wanted to cherry pick, including these would be an odd choice.\\n\\n> (it would be amazing if the benchmark is open sourced so that it will lead to more people working specifically on this setting and a lot more comparisons). \\n\\nThe domains we used are already open source. We plan on open-sourcing the detectors we used for the Atari task as well as the code we used to extract ground truth from the Control Suite environments, in order to enable comparison.\\n\\n> (i) Need for decoy observations to learn an approximate log-likelihood \\n\\nWe disagree that using decoys is hacky. 
Methods like noise contrastive estimation rely on a similar mechanism and are a standard way of doing approximate maximum likelihood training. What we propose is a non-parametric formulation of mutual information maximization which we further instantiate approximately by sampling. We note that contrastive predictive coding, concurrent work with our own which you mention in your review, also employs negative examples or decoys.\\n\\n> (ii) Using sparse reward for all transitions except the final terminal state: Yes, I am aware of the fact that HER has already shown sparse rewards are easier to learn value functions with, compared to dense rewards. But I am genuinely surprised that you have pretty much the same setting (ie re-label only terminal transition, r(s_T, s_g)) and motivate the need for learning a perceptual metric. If the information bits per transition is similar to HER in terms of the policy network's objective function, I am not sure why you need to learn a perceptual reward then? There's also no baseline comparison with just naive HER on image observations. That will be worth seeing actually. \\n\\nWe attempted training purely by HER on the Atari tasks in the way you suggest. This did not work well and the percentage of goals achieved was worse than for a random agent on both Seaquest and Montezuma\\u2019s Revenge. We will add these results to the appendix.\"}", "{\"title\": \"Important problem, reasonable initial attempt, room for improvement\", \"review\": \"Summary:\\n\\nThe authors take up an important problem in unsupervised deep reinforcement learning which is to learn perceptual reward functions for goal-conditioned policies without extrinsic rewards from the environment. The problem is important in order to push the field forward to learning representations of the environment without predicting value functions from scalar rewards and learn more generalizable aspects of the environment (the authors call this mastery) as opposed to just memorizing the best sequence of actions in typical value/policy networks. \\n\\nModel-based methods are currently hard to execute as far as mastery is concerned and goal-conditioned value functions are a good alternative. The authors, therefore, propose to learn UVFA (Schaul et al) with a learned perceptual reward function r(s, s_g) where 's' and 's_g' are current and goal observations respectively. They investigate a few choices for deriving this reward, such as pixel-space L2 distance, Auto-Encoder feature space, WGAN Discriminator (as done in SPIRAL - Ganin and Kulkarni et al), and their approach: cosine similarity based log-likelihood for similarity metric (as in Matching Networks). They show that their approach works better than other alternatives on a number of visual goal-based tasks.\", \"specific_aspects\": \"1. A slight negative: I find the whole pipeline extremely hacky and raises serious questions on whether this paper/technique is easy to apply on a wide variety of tasks. It gives me the suspicion that the environments were cherry-picked for showing the success of the proposed method, though, that's, in general, true of most deep RL papers. It would be nice if the authors instead wrote the paper from the perspective of proposing a new benchmark (it would be amazing if the benchmark is open sourced so that it will lead to more people working specifically on this setting and a lot more comparisons). \\n\\n-- Revision: The pipeline is hacky, but getting GAN based reward learning to work is also not very straightforward. 
The authors do plan to release the detectors used for the benchmarking.\\n\\n2. To elaborate on the above, these are the portions I find hacky: \\n(i) Need for decoy observations to learn an approximate log-likelihood \\n(ii) Using sparse reward for all transitions except the final terminal state: Yes, I am aware of the fact that HER has already shown sparse rewards are easier to learn value functions with, compared to dense rewards. But I am genuinely surprised that you have pretty much the same setting (ie re-label only terminal transition, r(s_T, s_g)) and motivate the need for learning a perceptual metric. If the information bits per transition is similar to HER in terms of the policy network's objective function, I am not sure why you need to learn a perceptual reward then? There's also no baseline comparison with just naive HER on image observations. That will be worth seeing actually. I feel this kind of comparisons are more interesting and important for the message of the paper. Note that in other papers cited in this, such as SPIRAL, UPN, etc, the reward metrics are used for every state transition. \\n(iii) In addition to naive image HER, I would really like to see a SPIRAL + HER baseline as is. ie use the GAN reward for all transitions and also use relabeling for successes. My prior belief is that this will work really well. I would really like to know how the reward for each transition in the trajectory works (both for SPIRAL and your approach) and how the naive HER works. \\n\\n--Revision: The authors have added HER baselines. Agreed with the authors that comparison of per-timestep perceptual reward vs terminal state perceptual reward is a good topic for future work.\\n\\n3. Another place I really found confusing throughout the paper is the careless swapping of notations, especially in the xi(h) and e(h). Please use consistent notations especially in equation (3), the pseudocode and the rest of the paper. \\n\\n4. a. Would be nice to know if a VAE feature space metric is bad, but not a strict requirement if you don't have time to do it. But I think showing Euclidean metric baseline on VAE is better than an AE. \\n b. Another baseline that is related is to learn a metric with a triplet loss as in Sermanet's work. Or any noise contrastive loss approach (such as CPC). The matching networks approach is similar in spirit. Just pointing out as reference and something worth trying, but not expecting it to be done for rebuttal. \\n \\n5. Overall, I think this is a good paper, gives a good overview of an important problem; the matching networks idea is nice and simple; but the paper could be more broader in terms of writing than trying to portray the success of DISCERN specifically. I would be happy accepting it even if the SPIRAL baseline or VAE / AE baseline worked as well as the matching networks because I think those approaches are more principled and likely to require fewer hacks and could be applied to a lot of domains easily. I also hope the authors run the baselines I asked for just to make the paper more scientifically complete. \\n\\n6. Not a big deal for me in terms of deciding acceptance, but for the sake of good principles in academics, related work could be stronger, though I can understand it must have been small purely due to page limits. 
\\n\\nSome papers which could be cited are (1) Unsupervised Perceptual Rewards (though it uses AlexNet pre-trained), (2) Time Contrastive Networks (which also uses AlexNet and doesn't really work on single-view tasks but is a good citation to add), (3) Original UVFA (definitely has to be there given you even use the abbreviation for the keywords description of the paper)\\n\\n7. Some slightly incorrect facts/wording in the paper: The two papers cited in model-based methods (Oh and Chiappa) are not really unsupervised. They use a ton of demonstrations to learn those world models. Better citation might be David Ha's World Models or Chelsea Finn's Video Prediction.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes an unsupervised learning algorithm to learn a goal conditioned policy and the corresponding reward function (for the goal conditioned policy) by maximizing the mutual information b/w the goal state and the state achieved by running the goal conditioned policy for K time steps. The paper proposes a tractable way to maximize this mutual information objective, which basically amounts to learning a reward function for the goal conditioned policy.\\n\\nThe paper is very well written and easy to understand.\", \"missing_citations\": \"Original UVFA [1] paper should be cited while citing goal conditioned policies.\\n\\nIn the paragraph, \\\"Goal distribution\\\" , the paper uses a non parametric approach to approximate the goal distribution. Previous works ([2], [3]) have used such an approach and relevant work should be cited. \\n\\n[1] http://proceedings.mlr.press/v37/schaul15.html\\n[2] Many Goals Reinforcement Learning https://arxiv.org/abs/1806.09605\\n[3] Recall Traces: Backtracking Models for efficient RL https://arxiv.org/abs/1804.00379\\n\\nI wonder if learning the variational distribution would be tricky in scenarios where one need to extract a representation of the end state that can distinguish states based on actions required to reach them. Like consider a U-shaped maze \\n| | |\\n| | |\\n|_A__|__B__|\\nIn this maze, even though the states represented by points A and B close to each other, but functionally they are very far apart. I'm curious as to what authors have to say in this regard.\", \"baseline_comparison\": \"I find the experiment results not really convincing. First, comparison to other \\\"unsupervised\\\" exploration methods like Variational information maximizing exploration (VIME), Variational Intrinsic Control (VIC), Curiosity driven learning (using inverse models) is missing. I understand that VIME and VIC are really not scalable as compared to the proposed method, and hence it should be easy to construct a toy task where it is possible to intuitively understand whats really going on, as well as one can compare with the other baselines (VIME, VIC).\\n\\nI would recommend authors to study a toyish environment in a proper way as compared to running (incomplete) experiments on 3 different set of envs. It would make the paper really strong.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you for your comment. 
We are not sure if including additional results is permitted outside of the rebuttal phase so we will add additional visualizations to the supplementary site once the rebuttal period begins. Please see below for detailed answers and clarifications.\\n\\n> It would also be really useful and good for clarity if you can explicitly point out the goal image observation as a separate image observation rather than superimposing it in the video.\\n\\nThank you for the suggestion. The videos are included in addition to Figures 1 and 2, which show the goal frames provided to the agent on Atari and Control Suite tasks. We found that superimposing the goal on the videos made it easier to judge how closely the agent matches the goal. However, we will include some videos where the goal is not superimposed as you suggest.\\n\\n> I would like to clearly know what part of the Atari screen is fed in as is\\n\\nThe entire RGB frame for both the goal and the current time-step (84x84, preprocessed according to the protocol in Mnih et al (2015)) is fed in.\\n\\n> Example, it seems like the scores are blurred in Seaquest.\\n\\nWe are not altering the observation or the goal in any way beyond the standard preprocessing above. Downsampling to 84x84 does impact legibility of the score, and we are displaying (in both the paper and the videos) the downsampled observations. The achievement frames in the manuscript (bottom row of Figures 1b and 2), are averages over the final frames of trajectories started with the same goal but different initial states, so some blurring will naturally occur; however, all frames seen by the agent are unaltered beyond the fixed preprocessing noted above.\\n\\n\\n> And the DM Control task is confusing to me, in terms of whether the small ball's goal position is part of the goal or not, or is it just the pose matching of the arm. \\n\\nWe feed in entire frames as goals, so the small yellow ball in the \\u201cmanipulator\\u201d task is indeed part of the goal (although the ball is not visible in some frames if it\\u2019s falling or bouncing). As the video for the \\u201cmanipulator task\\u201d shows, DISCERN learns to approximately match the position of the arm in the goal image, but largely ignores the ball. The results in the DeepMind Control Suite paper show [1] that the \\u201cmanipulator\\u201d task is difficult when using pixel observations as we do; the state of the art D4PG agent was not able to solve the \\u201cmanipulator\\u201d task from pixels when given an extrinsic reward for moving the ball to a target location. Our setting is even more difficult since the agent does not receive such an extrinsic reward.\\n\\nThe \\u201cmanipulator\\u201d domain also includes a pink marker which represents the target location for the ball for the extrinsic reward task. The location of the pink marker is chosen randomly by the environment at the beginning of each environment episode; hence agent has no control over the location of the marker. Because we are not using the extrinsic reward, the pink marker is a distracting object similar to the skull in Montezuma\\u2019s Revenge. DISCERN is however robust to these distracting elements and learns to ignore them while matching aspects of the environment which are under its control. 
Similarly, the \\u201creacher\\u201d and \\u201cpoint_mass\\u201d domains also include a pink marker that is not controllable by the agent.\\n\\nWe will clarify these aspects of the environment in the appendix and update the paper during the rebuttal phase.\\n\\n[1] \\u201cDeepMind Control Suite\\u201d Tassa et al. - https://arxiv.org/abs/1801.00690\"}", "{\"title\": \"A strong paper with innovative ideas, but somewhat unclear methods and results\", \"review\": \"In this paper, the authors address the problem of learning to achieve perceptually specified goals in a fully unsupervised way. For doing so, they simultaneously learn a goal-conditioned policy and a goal achievement reward function based on the mutual information between goals sampled from an a priori distribution and states achieved using the goal-conditioned policy. These two learning processes are coupled through the mutual information criterion, which seems to result in efficient state representation learning for the visually specified goal space. A key feature is that the resulting metric in the visual goal space helps the agent focus on what it can control and ignore distractors, which is critical for open-ended learning.\\n\\nOverall, the idea looks very original and promising, but the methods are quite difficult to understand under the current form, the messages from the results are not always clear, and the lack of ablative studies makes it difficult to determine which of the mechanisms are crucial in the system performance and which are not.\\n\\n* Clarification of the methods:\\n\\nGiven the key features outlined above, I believe the work described in this paper has a lot of potential, but the main issue is that the methods are not easy to get, and the authors could do a better job in that respect. Here is a list of remarks meant to help the authors write a clearer presentation of their method:\\n\\n- the \\\"problem formulation\\\" section contains various things. Part of it could be inserted as a subsection in Section 3, and the last paragraph may rather come into the related work section.\\n\\n- in Section 3, optimization paragraph, the details given after \\\"As will be discussed\\\"... might rather come in Section 4 where most of the other details are given.\\n\\n- in Section 4, I would refer to Algorithm 1 only in the end of the section after all the details have been explained: I went first to the algorithm and could not understand many details that are explained only afterwards.\\n\\n- in Algorithm 1, shouldn't the two procedures be called \\\"Imitator\\\" and \\\"Teacher\\\", rather than \\\"actor\\\" and \\\"learner\\\", to be consistent with the end of Section 3?\\n\\n- there must be a mathematical relationship between $\\\\xi_\\\\phi$ and $\\\\hat{q}$, but I could not find this relationship anywhere in the text. What is $\\\\xi_\\\\phi$ is never introduced clearly...\\n\\n- p4: we treat h as fixed ... => explain why.\\n\\n- I don't have a strong background about variational methods, and it is unclear to me why using an expanding set of goals corresponding to already seen states recorded in a buffer makes it that maximizing the log likelihood given in (4) is easier than something else.\\n\\nMore generally, the above are local remarks from a reader who did not succeed in getting a clear picture of what is done exactly and why. Anything you can do to give a more didactic account of the methods is welcome.\\n\\n* Related work:\\n\\nThe related work section is too poor for a strong paper like this one.
Learning to reach goals and learning goal representations are two extremely active domains at the moment and the authors should position themselves with respect to more of these works. Here is a short list in which the authors may find many more relevant papers:\\n\\n (Machado and Bowling, 2016), (Machado et al., 2017), GoalGANs (Florensa et al., 2018), RIG (Nair et al., 2018), Many-Goals RL (Veeriah et al., 2018), DIAYN (Eysenbach et al., 2018), FUN (Vezhnevets et al., 2017), HierQ, HAC (Levy et al., 2018), HIRO (Nachum et al., 2018), IMGEP (Pere et al., 2018), MUGL IMGEP (Laversanne-Finot et al., 2018).\\n\\nIt would also be useful to position yourself with respect to Sermanet et al. : \\\"Unsupervised Perceptual Rewards for Imitation Learning\\\".\\n\\nAbout state representation learning, if you consider the topic as relevant for your work, you might have a look at the recent survey from Lesort et al. (2018).\\n\\nExternal comments on ICLR web site also point to missing references. The authors should definitely consider doing a much more serious job in positioning their work with respect to the relevant literature.\\n\\n* Experimental study:\\n\\nThe algorithm comes with a lot of mechanisms and small tricks (at the end of Section 3 and in Section 4) whose importance is never assessed by specific experimental studies. This matters all the more since some of the details do not seem to be much principled. It would be nice to have elements to figure out how important they are with ablative studies putting them aside and comparing performance. Among other things, I would be glad to know how well the system performs without its HER component. Is it critical?\\n\\nThe same about the goal sampling strategy, as mentioned in the discussion: how critical is it in the performance of the algorithms?\\n\\n- Fig. 1b is not so easy to exploit: it is hard to figure out what the reader should actually extract from these figures\\n\\n- difficult tasks like cartpole: other papers mention cartpole as a rather easy task.\\n\\nIn the beginning of Section 4, the authors mention that the mechanisms of DISCERN naturally induce a form of curriculum (which may be debated), but this aspect is not highlighted clearly enough in the experimental study.\\n\\nIn my opinion, studying fewer environments but giving a more detailed analysis of the performance of DISCERN and its variations in these environments would make the paper stronger.\\n\\n\\n\\n* typos:\", \"p3\": \"the problem (of) learning a goal achievement reward function\\n\\nIn (3), p_g should most probably be p_{goal}\", \"p4\": \"we treated h(.) ... and did not adapt => treat, do not\", \"p9\": \"needn't => need not\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clarification\", \"comment\": \"Dear authors,\\n\\nCan you please add the visual results on the supplementary site for all the benchmarks in the paper? ie other DM Control tasks and the DM Lab tasks. It would also be really useful and good for clarity if you can explicitly point out the goal image observation as a separate image observation rather than superimposing it in the video. I would like to clearly know what part of the Atari screen is fed in as is, and what is not. Example, it seems like the scores are blurred in Seaquest.
And the DM Control task is confusing to me, in terms of whether the small ball's goal position is part of the goal or not, or is it just the pose matching of the arm.\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Thank you for the feedback! The quoted line is a bit ambiguous in this context. Unlike the linked paper, our work doesn\\u2019t have a low-dimensional \\u201cmeasurement\\u201d observation stream. Rather, when we refer to goals specified as observations, we mean the full observation given to the agent (i.e. the image). This lack of a measurement channel in the environments tested precludes having \\\"Learning to Act by Predicting the Future\\\" as a baseline.\\n\\nWe agree that DFP is related since it provides an alternative specification of goals by using additional low-dimensional \\u201cmeasurements\\u201d. We did not cite the paper because our related work section focuses on methods for achieving visually specified goals. This is already a large area to cover and including alternative goal specification methods such as DFP, language-based goals, and demonstrations is beyond the scope of our work.\"}", "{\"comment\": \"\\\"We have presented a system that can learn to achieve goals, specified in the form of observations\\nfrom the environment\\\" - paper\\n\\n\\\"Assuming that the goal can be expressed in terms of future measurements,\\\" from \\\"Learning to Act by Predicting the Future\\\", ICLR 2017 https://arxiv.org/abs/1611.01779\\n\\nWhile the approaches are quite different, but the main idea is close enough to mention or even tested against DFP, given the strong performance of the latter\", \"title\": \"missing reference to DFP?\"}" ] }
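As a rough illustration of the goal-achievement reward debated in the DISCERN thread above — an embedding e() built by L2-normalizing the learned head xi() applied to the fixed convolutional trunk h(), scored against sampled decoy goals — the following sketch shows one way the terminal-step reward could be computed. The names `h`, `xi`, `embed`, and `goal_reward` and the temperature `beta` are illustrative assumptions for this sketch, not the authors' released code.

```python
import numpy as np

def embed(h, xi, frame):
    """e(s) = xi(h(s)) / ||xi(h(s))||_2: the L2-normalized goal embedding
    built from the frozen convolutional trunk h and the learned head xi
    (both stand-in callables returning 1-D feature vectors here)."""
    z = xi(h(frame))
    return z / (np.linalg.norm(z) + 1e-8)

def goal_reward(h, xi, final_frame, goal_frame, decoys, beta=10.0):
    """Terminal-step reward r(s_T, s_g): a softmax over scaled cosine
    similarities of the achieved frame against the true goal and decoy
    goals, i.e. the approximate likelihood that s_T matches s_g."""
    e_T = embed(h, xi, final_frame)
    candidates = [goal_frame] + list(decoys)
    scores = np.array([beta * float(e_T @ embed(h, xi, g)) for g in candidates])
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[0]                             # mass assigned to the true goal
```

Per the rebuttals above, this reward would be applied only at the final step of an episode (a single non-zero reward), with hindsight relabeling of the goal handled separately by the replay mechanism.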
BJl4f2A5tQ
Surprising Negative Results for Generative Adversarial Tree Search
[ "Kamyar Azizzadenesheli", "Brandon Yang", "Weitang Liu", "Emma Brunskill", "Zachary Lipton", "Animashree Anandkumar" ]
While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics. One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method. MCTS requires generating rollouts, which is computationally expensive. In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment. Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN- based dynamics model and a reward predictor. GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states. Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions. However, GATS fails to outperform DQNs in 4 out of 5 games. Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds. We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling.
[ "Deep Reinforcement Learning", "Generative Adversarial Nets" ]
https://openreview.net/pdf?id=BJl4f2A5tQ
https://openreview.net/forum?id=BJl4f2A5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1xonh9BlN", "ryxT8-lUAQ", "SJlNNbeUCQ", "rJg4Vgg8CX", "Sylua1lIRX", "r1lZ5v7I67", "HylvhTth27", "ByludhFs2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545084083167, 1543008596890, 1543008556147, 1543008299764, 1543008191689, 1541973897493, 1541344687491, 1541278832509 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1255/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1255/Authors" ], [ "ICLR.cc/2019/Conference/Paper1255/Authors" ], [ "ICLR.cc/2019/Conference/Paper1255/Authors" ], [ "ICLR.cc/2019/Conference/Paper1255/Authors" ], [ "ICLR.cc/2019/Conference/Paper1255/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1255/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1255/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper addresses questions on the relationship between model-free and model-based reinforcement learning, in particular focusing on planning using learned generative models. The proposed approach, GATS, uses learned generative models for rollouts in MCTS, and provide theoretical insights that show a favorable bias-variance tradeoff. Despite this theoretical advantage, and high-quality models, the proposed approach fails to perform well empirically. This surprising negative results motivates the paper and providing insights on it is the main contribution.\\n\\nBased on the initial submitted version, the reviewers positively emphasized the need to understand and publish important negative results. All reviewers and the AC appreciate the import role that such a contribution can bring to the research community. Reviewers also note the careful discussion of modeling choices for the generative models. \\n\\nThe reviewers also noted several potential weaknesses. Central were the need to better motivate and investigate the hypothesis proposed to explain the negative results. Several avenues towards a better understanding were proposed, and many of these were picked up by the authors in the revision and rebuttal. A novel toy domain \\\"goldfish and gold bucket\\\" was introduced for empirical analysis, and experiments there show that GATS can outperform DQN when a longer planning horizon is used. \\n\\nThe introduced toy domain provides additional insights into the relationship between planning horizon and GATS / MCTS performance. However, it does not address key questions around why the negative result is maintained. The authors hypothesize that the Q-value is less accurate in the GATS setting - this is something that can be empirically evaluated, but specific evidence for this hypothesis is not clearly shown. Other forms of analysis that could shed further light on why the specific negative result occurs could be to inspect model errors. For example, if generated frames are sorted by the magnitude of prediction errors - what are the largest mistakes? Could these cause learning performance to deteriorate?\\n\\nThe reviewers also raised several issues around the theoretical analysis, clarity (especially of captions) and structure - these were largely addressed by the revision. The concern that most strongly affected the final evaluation is the limited insight (and evidence) of the factors that influence performance of the proposed approach. 
Due to this, the consensus is to not accept the paper for publication at ICLR at this stage.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A valuable direction, needs more systematic analysis into possible causes of negative results\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for the positive assessment and clear, thoughtful feedback. We have improved the draft substantially per your feedback. Please find specific points below:\\n\\n*** Synthetic Examples***\\nPer your suggestions, we implemented a controlled environment, \\u201cGoldfish and gold bucket\\u201d, highlighted in the discussion section, and evaluated the GATS algorithm with and without Dyna-Q in this synthetic environment. In this experiment, we give the agent access to the true environment, as suggested by the reviewer. Empirically, we find that GATS with short roll-outs (of lengths 1 and 2) consistently results in slower learning than vanilla DQN. Please see the graph of full results in Figure 2. \\n\\n*** The individual building blocks of GATS *** \\nPer your feedback, we reorganized the paper to include the individual building blocks of GATS in the introduction. We also expand upon the building blocks in the related works section to better illustrate the ways the GATS framework can be extended. We believe that these improvements will help to highlight the general contributions of the insights of this paper for studying model-based and model-free reinforcement learning.\\n\\n*** Figures ***\\nIn the updated draft, we made Figure 1 and Figures 12-13 (in the appendix) significantly larger. The current version should be much easier to view, especially in print.\\n\\n*** Structure *** \\nWe have restructured the appendix to include the derivation of the theoretical result first, followed by individual empirical results. As suggested by the reviewer, we have added significant additional analysis of the controlled experiments mentioned above in the discussion and expanded upon the analysis of the negative results.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for this constructive review. We have improved the draft substantially to address your concrete concerns and are optimistic that the updated draft addresses your concerns sufficiently to warrant revisiting your assessment. We comment below briefly on the theoretical derivation, originality, and exploration.\\n\\n***Theoretical Derivation***\\nRegarding the reviewer\\u2019s concerns on the theoretical derivation in Eq. 9: We have added a detailed section to the derivation showing that the bound still holds when the optimal action max_a Q(s,a) and max_a \\\\hat{Q}(s,a) are not the same action. Please refer to the updated draft directly and Lemma 1 on Page 14. We have also revised the rest of the derivation to make this more clear.\\n\\n*** Negative Results ***\\nWe were also surprised that GATS does not yield better results on the studied task, but we believe both 1) that sufficiently surprising negative results hold scientific value and 2) that in particular, these results shed light on fundamental issues that arise when deploying MCTS together with a learned Q-function. To better highlight this surprising learning process, we included new experiments on a toy environment called \\u201cGoldfish and gold bucket\\u201d. Our empirical results in this controlled environment (see updated Fig.
2) demonstrate that even with perfect modeling, GATS with short rollouts can hurt performance, as seen in our Atari experiments.\\n\\n\\n*** Inverse Wasserstein exploration***\\nWe agree with the reviewer that a proper regret analysis of our proposed inverse Wasserstein exploration method would be very insightful to the community. However, we point out that this analysis is not straightforward and might constitute a lengthy paper unto itself. While we are excited about this line of research, we left the regret bound analysis for future work.\\n\\n*** Domain transfer***\\nAs the reviewer notes, the goal of our domain transfer experiments was to show that the GDM model is powerful and, importantly, quicker to adapt to changing model dynamics than the learned Q-function. The quality of the GDM is precisely what makes the negative results so surprising, since even powerful and accurate dynamics models may not result in improved results when MCTS and Q-learning are deployed together with short roll-outs, as in GATS. Thus, our results shed light on approaches that use model-based learning to improve domain transfer, which we hope to expand on in the future.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the detailed review and thoughtful suggestions. First, inspired by your suggestion to develop a synthetic example to demonstrate our negative results, we devised the \\u201cGoldfish and gold bucket\\u201d environment (described below). Additionally, in the latest version, we have addressed your concerns about theory (see, e.g., Page 14, Lemma 1). We describe the improvements to the draft below in detail.\\n\\n***Synthetic Example***\\nWe devised a simple grid-world-based environment called \\u201cGoldfish and gold bucket\\u201d, where an agent must navigate an environment containing sharks (reward -1) and a gold bucket (reward 1). We evaluated generative adversarial tree search (GATS) in the environment both with and without Dyna-Q. Interestingly, we observed that MCTS with short roll-out lengths consistently results in slower learning than vanilla DQN. Find the full quantitative results in Figure 2. We also released the implementation code of our synthetic study publicly. \\n\\n***Theoretical issues***\\nThanks for your attention to detail. Regarding the e_Q term, in the original proof, we accidentally omitted the P(x_H | x, pi_r) term in the equation above Eq. 9. With this term, the sum is bounded by the max difference in the Q estimate for any given state, which is the e_Q term as pointed out by the reviewer. Additionally, per feedback from Reviewer 2, we clarified some steps in the derivation of the theorem, including a few (previously skipped) steps. Please find our detailed derivation in the updated paper following Lemma 1.\\n\\nRegarding the study of GATS performance with inaccurate models: We would like to bring the reviewer\\u2019s attention to the fact that the surprising result in this paper is that the GATS algorithm, even with an accurate model, results in a deterioration in performance. The experimental study on the synthetic environment illustrates this phenomenon, even with a perfect model of the environment. In this synthetic study, GATS with limited depth (e.g. 1 and 2) underperformed a model learned with vanilla DQN.\\n\\nRegarding the optimality of MCTS: As the reviewer mentioned, MCTS using the true Q function is indeed optimal, but deploying MCTS with Q learning can have complex interactions.
We believe that the new synthetic experiments on \\u201cGoldfish and gold bucket\\u201d help to better illuminate this complex learning process, which we first observed in our Atari experiments. In Proposition 1, we show that given a fixed estimated Q function, MCTS results in a better worst-case error in the Q estimation. But we would like to emphasize that these theoretical results do not guarantee better results. Our experiments indicate that deploying MCTS with Q learning can result in the learning of worse Q functions, which are later used in the leaf nodes. Moreover, Proposition 1 shows that the contribution in the error of Q estimation goes down exponentially in \\\\gamma^H, but when \\\\gamma is equal to 0.99 (a common choice for DRL in Atari games), we might not see much improvement with short rollouts.\\n\\nRegarding the figure captions: We appreciate the feedback. Already we have added a more detailed explanation in the captions in the updated draft and will continue to work to improve our exposition.\"}", "{\"title\": \"General reply to reviewers and area chair\", \"comment\": \"First, we thank the reviewers for three detailed and thoughtful responses to our paper. We were glad to see that the reviewers found the approach interesting and that they appreciated our decision to submit a negative result for publication. While the original scores place the paper on the borderline, the reviewers made exceptionally specific requests for clarifications to the theory and additional extensive experiments to illustrate our negative results in a more controlled environment. Over the past few weeks, we have worked hard to improve the paper, clarifying our theoretical analysis. Per the reviewers\\u2019 suggestions, we also devised a toy environment, \\u201cGoldfish and gold bucket\\u201d, demonstrating our negative results in a controlled setting. We are grateful to the reviewers for their suggestions and hope that they will consider these substantial improvements when updating their reviews and scores. Please find specific replies to each reviewer in the respective threads.\"}", "{\"title\": \"Interesting paper with some missing depths in the analysis of the negative results\", \"review\": \"The submitted paper proposes GATS, an RL model combining model-free and model-based reinforcement learning. Estimated models of rewards and dynamics are used to perform rollouts in MCTS without actually interacting with the true environment. The authors theoretically and empirically evaluate their approach for low-depth rollouts. Empirically they show improved sample complexity on the Atari game Pong.\\n\\nI think publishing negative research results is very important and should be done more often if we can learn from those results. But that is an aspect I feel this paper falls short on. I understand that the authors put a great deal of effort into trying to tune their model and evaluated many possible design choices, but they do not provide a thorough investigation of the causes which make GATS \\\"fail\\\". I suggest that the authors try to understand the problems of MCTS with inaccurate models better with synthetic examples first. This could give insights into what the main sources of the problem are and how they might be circumvented.
This would make the negative results much more insightful to the reader as each source of error is fully understood (e.g., beyond an error rate for predicting rewards which does not tell us about the distribution of errors which for example could have a big effect on the authors' observations).\\n\\nAnother issue that needs further investigation is the authors' \\\"hypothesis on negative results\\\". It would be great to experimentally underline the authors' arguments. It is not trivial (at least to me) to fully see the \\\"expected\\\" interaction of learning dynamics and depths of rollouts. While MCTS should be optimal with any depth of rollouts given the true Q-function, the learning process seems more difficult to understand.\\n\\nI would also like the authors to clarify one aspect of their theoretical analysis. e_Q is defined as the error in the Q-estimate for any single state and action in the main text. This appears to be inconsistent with the proof in the appendix, making the bound miss a factor which is exponential in H (all possible x_H that can be reached within H steps). This would change the properties of the bound quite significantly. Maybe I missed something, so please clarify.\\n\\nOriginality mainly comes from the use of GANs in MCTS and the theoretical analysis.\\n\\nStrengths:\\n- Interesting research at the intersection of model-free and model-based research\\n- Lots of effort went into properly evaluating a wide range of possible design choices\\n- Mainly well written\\n\\nWeaknesses:\\n- Missing depths in providing a deep understanding of why the authors' expectations and empirical findings are inconsistent\\n- The authors use many tweaks and ideas to tune each component of their model making it difficult to draw conclusions about the exact contribution of each of these\\n- Error in the theoretical analysis (?)\\n\\nMinor comment:\\n- The paper would benefit from improved and more self-contained figure captions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Novel idea, but requires more work.\", \"review\": \"This paper presents Generative Adversarial Tree Search (GATS) that simulates trajectories for the Monte-Carlo Tree Search up to a certain depth. The GATS functions as a model that can simulate the dynamics of the system and predict rewards, effectively removing the need for a real simulator in the MCTS search.\\n\\nThey prove some favourable theoretical properties regarding the bias variance trade-off. Specifically, they propose the model based model free trade-off which illustrates the trade-off between selecting longer rollouts, which increases the upper performance bound, and getting a more accurate Q estimate, which decreases the upper performance bound.\\n\\nThey also propose a pseudo count exploration bonus based on the inverse Wasserstein metric as the exploration strategy for GATS.\\n \\nThey observe that when tree-search rollouts are short, GATS fails to outperform DQN on 4 different games.\\n\\nQuality: It is unclear to me how you arrive at the result in Equation (9) of Appendix D. You have assumed in the second equation that the optimal action max_a Q(s,a) and max_a \\\\hat{Q}(s,a) are the same action a. How do you arrive at this conclusion? Since \\\\hat{Q} is an approximation of Q, why would the action a be the same?\\n\\nClarity: The paper is fairly well written.
There are many grammatical mistakes, but the overall message is more or less conveyed.\\n\\nOriginality: It is original in the sense that a generative adversarial network is used as the model for doing the tree search. It is disappointing that this model does not yield better performance than the baseline and the theoretical results are questionable. I would like the authors to specifically address the theory in the rebuttal.\\n\\nSignificance: While I appreciate negative results and there should be more papers like this, I do think that this paper falls short in a couple of areas that I think the authors need to address. (1) As mentioned in quality, it is unclear to me that the theoretical derivation is correct. (2) The exploration bonus based on the inverse Wasserstein metric would add much value to the paper if it had an accompanying regret analysis (similar to UCB, for example, but adapted to the sequential MDP setting). \\n\\nIt appears in your transfer experiments that you do indeed train the GDM faster to adapt to the model dynamics, but it doesn't appear to help your GATS algorithm actually converge to a good level of performance. Perhaps this paper should be re-written as a paper that focuses specifically on learning models that can easily transfer between domains with low sample complexity?\\n\\nFor the exploration bonus: If the authors added a regret analysis of the exploration count and could derive a bound on the number of times a sub-optimal action is chosen, then this could definitely strengthen the paper. This analysis could provide theoretical grounding and understanding for why their new exploration count makes sense, rather than basing it on empirical findings.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The idea of the paper is interesting and it is valuable to share negative results but it would be beneficial if the paper would focus more on hypothesis evaluation in a more constrained environment.\", \"review\": \"This paper proposes to learn a dynamics model (state to pixels using Generative Adversarial Networks), use this model in conjunction with Monte Carlo Tree Search, model-free reinforcement learning (Q-learning) and a reward prediction network, essentially combining model-free with model-based learning. The proposed approach is empirically evaluated on a small subset of the Atari games and a theoretical analysis of the bias-variance tradeoff is presented.\\n\\nIt is highly appreciated that this paper presents an idea and discusses why the proposed approach does not result in high performance. This is very valuable and useful for the community. On a high level it would be very useful if Figure 1 showed fewer examples but presented them much larger since it is almost impossible to see anything in a printout. Further, the caption does not lend itself to understanding the figure. Similarly Figure 2 would benefit from a better caption. \\n\\nThe first part of the discussion (7), the individual building blocks, should be mentioned much earlier in the paper. It would be further useful to add more related work on that abstraction level. This would help to communicate the main contribution of this paper very precisely.\\n\\nOn the discussion of negative results: It is very interesting that Dyna-Q does not improve the performance and the hypothesis for why this is the case seems reasonable.
Yet, it would be very useful to actually perform an experiment in a better-controlled environment in which, e.g., the dynamics model is based on an oracle, and to assess the empirical effect of different MCTS horizons and rollout estimates. Further, this scenario would allow the authors to further quantify the importance and the required \\u201cquality\\u201d of the different learning blocks.\\n\\nIn its current form the paper has theoretical contributions and experimental results which cannot be presented in the main paper due to space constraints. Although the appendix is already very extensive, it would be very useful to structure it into the theoretical derivation and then one section per experiment with even more detail on the different aspects of the experiment. The story of the main paper would benefit from referencing the negative results more briefly and better analyzing the different hypotheses on toy-like examples. Further, the introduction could be condensed in order to allow for more detailed explanations and discussions without repetition later on.\\n\\nAs argued in the paper, it is clear that image generation is a very expensive simulation mechanism, and one that is in itself difficult for games like Pong which depend on accurate modeling of small aspects of the image. Therefore, again, although really appreciated, the negative results should be summarized in the main paper and the resulting hypotheses better analyzed. The extensive discussion of hyperparameters and approaches for individual components could be in the appendix, with the main paper focusing on the hypothesis analysis.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
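To make the GATS loop analyzed in the record above concrete, here is a schematic of one planning step: rollouts of depth H are simulated with a learned dynamics model (the GDM) and a reward predictor, and the model-free Q estimate bootstraps the leaf values. A depth-limited exhaustive search over a small discrete action set stands in for MCTS, and `gdm`, `reward_net`, and `q_net` are assumed callables; this is a sketch of the idea under those assumptions, not the authors' implementation.

```python
def plan(state, depth, gdm, reward_net, q_net, actions, gamma=0.99):
    """Best estimated return over simulated rollouts of length `depth`.
    Leaves are evaluated with the model-free Q estimate, so the Q error
    enters only at depth 0 and is discounted by gamma^depth at the root,
    while each simulated step contributes model/reward error — the
    bias-variance trade-off discussed in the reviews."""
    if depth == 0:
        return max(q_net(state, a) for a in actions)
    best = -float("inf")
    for a in actions:
        next_state = gdm(state, a)        # learned GAN dynamics model
        r = reward_net(state, a)          # learned reward predictor
        best = max(best, r + gamma * plan(next_state, depth - 1,
                                          gdm, reward_net, q_net,
                                          actions, gamma))
    return best

def act(state, depth, gdm, reward_net, q_net, actions, gamma=0.99):
    """Greedy action at the root of the simulated tree (depth >= 1)."""
    scores = {a: reward_net(state, a)
                 + gamma * plan(gdm(state, a), depth - 1, gdm,
                                reward_net, q_net, actions, gamma)
              for a in actions}
    return max(scores, key=scores.get)
```

With depth 1 or 2 and gamma = 0.99, the gamma^depth discount on the leaf Q error is close to 1, which is consistent with the authors' remark that short rollouts may yield little improvement.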
r1eEG20qKQ
Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions
[ "Matthew Mackay", "Paul Vicol", "Jonathan Lorraine", "David Duvenaud", "Roger Grosse" ]
Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters. We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases. We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer. We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism. We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function. Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities. Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values. Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems. We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs).
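To make the gating construction in the abstract concrete, here is a minimal sketch of a best-response layer whose hidden units are gated on the hyperparameters; the module name, the sigmoid gate, and the linear conditioning are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class GatedBestResponseLinear(nn.Module):
    """Linear layer whose output units are gated by a hyperparameter vector.

    Minimal sketch of a structured best-response approximation: the same
    weights serve all hyperparameter settings, with units rescaled
    conditionally on the (transformed) hyperparameters lam.
    """

    def __init__(self, n_in, n_out, n_hparams):
        super().__init__()
        self.base = nn.Linear(n_in, n_out)       # weights shared across hyperparameters
        self.gate = nn.Linear(n_hparams, n_out)  # conditions each unit on lam

    def forward(self, x, lam):
        # x: (batch, n_in); lam: (1, n_hparams)
        return self.base(x) * torch.sigmoid(self.gate(lam))
```

Training would then alternate between fitting such gated weights on hyperparameters sampled around the current values and updating the hyperparameters by differentiating the validation loss through the gates.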
[ "hyperparameter optimization", "game theory", "optimization" ]
https://openreview.net/pdf?id=r1eEG20qKQ
https://openreview.net/forum?id=r1eEG20qKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1xVl8vPlE", "rkem6wHiCX", "rJlurZziCX", "r1x8vz6F0X", "Hyl-6laK07", "B1eDIJ6FCQ", "SklEgC2YCX", "HJx1qfxi3Q", "HkeVPgy5nQ", "SJesafJKh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545201132163, 1543358395035, 1543344448214, 1543258717745, 1543258296599, 1543257935294, 1543257580362, 1541239430814, 1541169244449, 1541104322800 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1254/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1254/Authors" ], [ "ICLR.cc/2019/Conference/Paper1254/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1254/Authors" ], [ "ICLR.cc/2019/Conference/Paper1254/Authors" ], [ "ICLR.cc/2019/Conference/Paper1254/Authors" ], [ "ICLR.cc/2019/Conference/Paper1254/Authors" ], [ "ICLR.cc/2019/Conference/Paper1254/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1254/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1254/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes an approach to hyperparameter tuning based on bilevel optimization, and demonstrates promising empirical results. Reviewer's concerns seem to be addressed well in rebuttals and extended version of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A useful approach to hyperparameter tuning, promising results\"}", "{\"title\": \"Summary of changes\", \"comment\": \"We thank all the reviewers for their helpful comments.\", \"we_have_made_the_following_changes_to_the_paper_to_address_reviewer_concerns\": \"--- Improved clarity: We simplified our notation and included a table of notation in Appendix A. We added an additional figure which clarifies why hyperparameters must be sampled carefully. We have also included a discussion of the direct/response gradient which clarifies our approach.\\n\\n--- Sensitivity to metaparameters: In response to concerns about the sensitivity of our algorithm to its \\u201cmetaparameters\\u201d, we have included sensitivity studies in Appendix H to examine how our method performs under various metaparameter settings. \\n\\n--- Ease of implementation: We emphasize that STNs are easy to implement and use in code simply by replacing existing deep learning modules with \\u201chyper\\u201d counterparts. To illustrate this, we added code listings used for our experiments in Appendix G. \\n\\n--- Comparison to additional hyperparameter optimization methods: We have included a comparison to Hyperband for our LSTM experiments.\"}", "{\"title\": \"Response to the comments of authors\", \"comment\": \"The notation table definitely helps. Ideally, I'd like to see that the Hyperband method is used in all experiments.\"}", "{\"title\": \"Response to Review 2 Continued\", \"comment\": \"Q: Section 5, paragraph Gradient-Based HO: \\\"differentiating gradient descent\\\" needs reformulation -- an algorithm cannot be differentiated.\", \"a\": \"While it is true there are no theoretical guarantees on the quality of the approximation, it is common to lack theoretical guarantees when developing new algorithms for neural networks. Indeed, it is an active area of research to prove the convergence of gradient descent even in shallow, nonlinear networks [1][2][3]. 
Incorporating the bilevel structure of the problem will likely introduce additional complications, although we hope to investigate this further in future work.\\n\\n[1] Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. \\u201cStochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks\\u201d. Preprint, 2018.\\n[2] Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. \\u201cGradient Descent Finds Global Minima of Deep Neural Networks\\u201d. Preprint, 2018.\\n[3] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. \\u201cA Convergence Theory for Deep Learning via Over-Parametrization\\u201d. Preprint, 2018.\", \"q\": [\"No theoretical guarantee on the quality of the used approximation for neural networks\"]}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your feedback.\", \"q\": \"Section 3.3: If the hyperparameter is discrete and falls in Case 2, then REINFORCE gradient estimator is used. What about the quality of this gradient?\", \"a\": \"We found it to work well empirically for tuning the number of hidden units. If variance grew too high, it would be possible to use various variance reduction techniques such as RELAX [1].\\n\\n[1] Grathwohl, Will, Choi, Dami, Wu, Yuhuai, Roeder, Geoff, Duvenaud, David. \\u201cBackpropagation through the Void: Optimizing control variates for black-box gradient estimation\\u201d. ICLR 2018\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your feedback.\", \"q\": \"In Table 2 and figure 4, should \\\"Loss\\\" be \\\"Error\\\"?\", \"a\": \"Figure 4 (now 5) is the loss because that is the objective being minimized via gradient descent.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your feedback.\", \"q\": \"Experiments are run on small scale problems, namely, CIFAR-10 and PTB. Results are encouraging but not stellar. More work would need to be done to validate the utility of the proposed approach on larger scale problems.\", \"a\": \"Smaller datasets such as CIFAR-10 and PTB provide an ideal testbed for hyperparameter optimization algorithms since performance depends heavily on regularization. The architectures used for RNNs are comparable in size to top-performing architectures on PTB [1,2]. In addition, we believe we are the first to tune RNN hyperparameters using gradient-based methods since these hyperparameters are often dropout probabilities that other gradient-based methods can\\u2019t handle.\\n\\nAlexNet is a standard architecture used when ResNets are too powerful and can overfit. This convolutional architecture is comparable to the largest tuned via gradient-based hyperparameter optimization methods in the literature. Papers such as [3,4] evaluate their algorithms on MNIST-size datasets using logistic regression or small feed-forward networks. In [5] a similar size convolutional network to AlexNet is tuned, but they weren\\u2019t able to tune data augmentation hyperparameters and had to use continuous dropout noise to obtain a gradient, unlike our method. \\n\\n\\n[1] Merity, Stephen, Keskar, Nitish S., and Socher, Richard. \\\"Regularizing and optimizing LSTM language models.\\\" ICLR 2018.\\n[2] Melis, Gabor, Dyer, Chris, and Blunsom, Phil. \\u201cOn the State of the Art of Evaluation in Neural Language Models\\u201d ICLR 2018\\n[3] Pedregosa, Fabian. \\u201cHyperparameter optimization with approximate gradient\\u201d ICML 2016\\n[4] Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan. 
\\u201cGradient-based Hyperparameter Optimization through Reversible Learning\\u201d ICML 2015\\n[5] Luketina, Jelena, Berglund, Mathias, Greff, Klaus, and Raiko, Tapani. \\u201cScalable Gradient-Based Tuning of Continuous Regularization Hyperparameters\\u201d ICML 2016\"}", "{\"title\": \"Principled approach to hyperparameter tuning but only evaluated on small scale problems to date.\", \"review\": \"The paper proposes a bilevel optimization approach for hyperparameter tuning. This idea is not new, having been proposed in works prior to the current resurgence of deep learning (e.g., Do et al., 2007, Domke 2012, and Kunisch & Pock, 2013). However, the combination of bilevel optimization for hyperparameter tuning with approximation is interesting. Moreover, the proposed approach readily handles discrete parameters.\\n\\nExperiments are run on small scale problems, namely, CIFAR-10 and PTB. Results are encouraging but not stellar. More work would need to be done to validate the utility of the proposed approach on larger scale problems.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good idea, not clear if it is easy to apply.\", \"review\": \"========\\\\\\\\\\nSummary\\\\\\\\\\n========\\\\\\\\\\n\\nThe paper deals with hyper-parameter optimization of neural networks. The authors formulate the problem as a bilevel optimization problem: minimizing the validation loss over the hyperparameters, subject to the parameters being at the minimum of the training loss. The authors propose an approximation of the so-called best-response function, which maps the hyperparameters to the corresponding optimal parameters (w.r.t. the minimization of the training loss), allowing a formulation as a single-level optimization problem and the use of a gradient descent algorithm. The proposed approximation is based on shifting and scaling the weights and biases of the network. There are no guarantees on its quality except in some very simple cases. The approach assumes a distribution on the hyperparameters, governed by a parameter, which is adapted during the course of the training to achieve a compromise between the flexibility of the best-response function and the quality of its local approximation around the current hyperparameters. The authors show that their approach beats grid search, random search and Bayesian optimization on the CIFAR-10 and PTB datasets. They point out that the dynamic update of the hyperparameters during the training allows them to reach better performance than any fixed hyperparameter. \\\\\\\\\\n\\n\\n======================\\\\\\\\\\nComments and questions\\\\\\\\\\n======================\\\\\\\\\\n\\nCan cross-validation be adapted to this approach? \\\\\\\\\\n\\nCan this be used to optimize the learning rate, which is of course a crucial hyperparameter and one that needs an update schedule during the training? \\\\\\\\\\n\\nSection 3.2:\\\\\\\\\\n\\n\\\"If the entries are too large, then \\u03b8\\u0302 \\u03c6 will not be flexible enough to capture the best-response over the sampled neighborhood. However, its entries must remain sufficiently large so that \\u03b8\\u0302 \\u03c6 captures the local shape around the current hyperparameter values.\\\" Not clear why -- more explanations would be helpful. \\\\\\\\\\n\\n\\\"minimizing the first term eventually moves all probability mass towards an optimum \\u03bb\\u2217, resulting in \\u03c3 = 0\\\". 
I can't see how minimizing the first term w.r.t. \\phi (as in section \\\"2.2 Local approximation\\\") would alter \\sigma. \\\\\\\\\\n\\n\\\"\\u03c4 must be set carefully to ensure...\\\". The authors still do not explain how to set \\\\tau. \\\\\\\\\\n\\nSection 3.3: \\\\\\\\\\n\\nIf the hyperparameter is discrete and falls in Case 2, then REINFORCE gradient estimator is used. What about the quality of this gradient? \\\\\\\\\\n\\nSection 5, paragraph Gradient-Based HO: \\\"differentiating gradient descent\\\" needs reformulation -- an algorithm cannot be differentiated. \\\\\\\\\\n\\nPros \\\\\\\\\\n- The paper is pretty clear \\\\\\\\\\n- Generalizes a previous idea and makes it handle discrete hyperparameters and scale better. \\\\\\\\\\n- I like the idea of hyperparameters changing dynamically during the training, which allows exploring a much larger space than one fixed value \\\\\\\\\\n- Although limited, the experimental results are convincing \\\\\\\\\\n\\nCons \\\\\\\\\\n- The method itself depends on some parameters and it is not clear how to choose them. Therefore it might be tricky to make it work in practice. I feel like there is a lot of literature around HO but very often people still use the very simple grid/random search, because the alternative methods are often quite complex to implement and to make really work. So the fact that the method depends on \\\"crucial\\\" parameters that are not transparently managed may be a big drawback to its applicability. \\\\\\\\\\n- No theoretical guarantee on the quality of the used approximation for neural networks \\\\\\\\\\n- Does not handle the learning rate, which is a crucial hyperparameter (but maybe it could) \\\\\\\\\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The idea is interesting, but the explanation and experiments could be better\", \"review\": \"First, the writing can be better. I had a hard time understanding the paper. It has many symbols, but some of them are not explained. For instance, in formula (9), what are Q or s? Also, formula (14). I probably can guess them. Is it possible to simplify the notations or use a table to list the symbols?\\n\\nFinding good models is a bi-level or tri-level optimization problem. The paper describes a gradient-based hyperparameter optimization method, which finds model parameters, hyperparameter schedules, and network structure (limited) at the same time. It is an interesting idea. Compared to random search, grid search and Spearmint, it seems to be better than them. The paper rules out that the performance gain is from the randomness of the hyperparameters, which is a good thought. \\n\\nMore evidence is needed to show this method is superior. The paper doesn't explain well why it works, and the experimental results are just ok. The network architecture search part is limited to the number of filters in the experiments. Certainly, the results are not as good as PNASNet or NASNet. \\n\\nEvolutionary algorithms or GAs show good performance in hyperparameter optimization and neural architecture search. Why not compare with them? Random and grid search are not good generally, and Bayesian optimization is expensive and its performance depends on the implementation. \\n\\nIn Table 2 and figure 4, should \\\"Loss\\\" be \\\"Error\\\"?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
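The Section 3.3 exchange above concerns the score-function estimator for discrete hyperparameters. A minimal sketch of that estimator follows; the function name, the mean baseline, and the sample count are illustrative choices, not details taken from the paper.

```python
import torch

def reinforce_hyper_grad(logits, val_loss, n_samples=16):
    """Score-function (REINFORCE) gradient for one discrete hyperparameter.

    `logits` parameterize a categorical distribution over hyperparameter
    values (e.g. numbers of hidden units) and must have requires_grad=True;
    `val_loss` is a black box mapping a sampled index to a scalar loss.
    The mean baseline is a simple, illustrative variance-reduction choice.
    """
    dist = torch.distributions.Categorical(logits=logits)
    samples = dist.sample((n_samples,))
    losses = torch.tensor([float(val_loss(int(s))) for s in samples])
    surrogate = ((losses - losses.mean()) * dist.log_prob(samples)).mean()
    return torch.autograd.grad(surrogate, logits)[0]
```

If the variance grew too large, the mean baseline could be replaced by a learned control variate such as RELAX, as the authors suggest in their response.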
rkl4M3R5K7
Optimal Attacks against Multiple Classifiers
[ "Juan C. Perdomo", "Yaron Singer" ]
We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers. Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models. In this paper, we design provably optimal attacks against a set of classifiers. We demonstrate how this problem can be framed as finding strategies at equilibrium in a two-player, zero-sum game between a learner and an adversary, and consequently illustrate the need for randomization in adversarial attacks. The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies in the zero-sum game. We develop a series of scalable noise generation algorithms for deep neural networks, and show that they outperform state-of-the-art attacks on various image classification tasks. Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers. The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes.
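The equilibrium computation described in the abstract can be illustrated on a finite game: Multiplicative Weights for the learner against an exact best response for the adversary. Restricting the adversary to a fixed finite set of noise vectors is a simplifying assumption of this sketch, since the paper's best-response oracles search over continuous noise.

```python
import numpy as np

def mwu_vs_best_response(payoff, n_rounds=200, eta=0.1):
    """Multiplicative Weights (learner) vs. exact best response (adversary).

    payoff[i, j] is the learner's loss when classifier i faces noise vector j.
    """
    n_classifiers = payoff.shape[0]
    w = np.ones(n_classifiers) / n_classifiers
    avg_w = np.zeros(n_classifiers)
    responses = []
    for _ in range(n_rounds):
        j = int(np.argmax(w @ payoff))       # adversary maximises expected loss
        responses.append(j)
        w = w * np.exp(-eta * payoff[:, j])  # learner downweights lossy classifiers
        w = w / w.sum()
        avg_w += w
    return avg_w / n_rounds, responses       # approximate equilibrium strategies
```

The averaged learner weights and the empirical distribution over the recorded best responses form approximate equilibrium strategies of the zero-sum game.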
[ "online learning", "nonconvex optimization", "robust optimization" ]
https://openreview.net/pdf?id=rkl4M3R5K7
https://openreview.net/forum?id=rkl4M3R5K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxCVFYBgN", "rJgF5r4ggN", "HJek2uD1gE", "rkgjJoXyx4", "Syx9kN8dR7", "ByxUiSHuRX", "HklMmBruCQ", "rkeZ49B5hQ", "SyglfqEchm", "H1x1hWe53m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545079093929, 1544730000581, 1544677543111, 1544661730685, 1543164898231, 1543161246097, 1543161114388, 1541196328669, 1541192200189, 1541173670844 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1253/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1253/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1253/Authors" ], [ "ICLR.cc/2019/Conference/Paper1253/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1253/Authors" ], [ "ICLR.cc/2019/Conference/Paper1253/Authors" ], [ "ICLR.cc/2019/Conference/Paper1253/Authors" ], [ "ICLR.cc/2019/Conference/Paper1253/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1253/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1253/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Four reviewers have evaluated this paper. The reviewers have raised concerns about the specific formulation used for adversarial example generation which requires further clarity in motivation and interpretation. The reviewers have also made the point that the experimental evaluation is against previous work which tried to solve a different problem (black box based attack) and hence the conclusions are unconvincing.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Unclear motivation and significance of empirical results\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the quick response.\\n\\n\\\"The weights are the probabilities that the learner assigns to the classifier, as stated in the first page of the introduction where we write\\\"\\n\\nLet me try to be clearer. Previous work asked : \\\"suppose I want an adversarial perturbation that fools each of k classifiers. How do I find one\\\". You are asking a different question from this, perhaps without realizing it. Your question translates to \\\"Suppose I want a distribution over adversarial perturbations that fools each of k classifiers with some probability\\\". When you change the question, you have to justify why this is an interesting question to ask. The old question, e.g., helps answer \\\"Suppose as a learner I were to apply all of the models on an image and only label it if they all agree. How resistant is this ensemble?\\\". Since in your model, each classifier is seeing a different sample from the adversarial perturbation, one question being answered is \\\"suppose as a learner, I were to pick one classifier and apply it on the image to label it, how resistant can I make that if I picked this classifier from a distribution (or in response the adversary's randomized strategy)?\\\". This is a very different question. Your example shows why the answer can be different. The paper does not make a case that the question is interesting. \\n\\n\\\"The weights do not need to be tied together across examples. 
\\\"\\nYour paper would be improved if your equations said so as well.\\n\\n\\\"we run the algorithm for 40 iterations with a step size of \\u03b1/40 \\u00d7 1.25 so as to mimic the setup of the authors\\u201d\", \"likely_minor_point\": \"If I am calculating correctly, the 1.25 should be a 1.33 (they had alpha = 0.3, step-size = 0.01).\\n\\nI find it a very counterintuitive that PGD is worse than FGM. This might suggest that something strange happens when you play against ensembles, or may suggest that the parameters weren't quite optimized correctly in your implementation (e.g. the optimal \\\"1.25/1.33\\\" may be different for l2 as compared to linfty).\\n\\n\\\"The hinge loss is smooth, this can be verified from the definition.\\\"\\nThe hinge loss is l_h(w; x, y) = [f(w; x,y)]_+. When f is -eps, the gradient is zero. When f is plus eps, the gradient is non-zero and bounded below. This violates smoothness, which would require that the gradient be Lipschitz.\"}", "{\"title\": \"Thanks for the review\", \"comment\": \"We address all the concerns below.\\n\\nThe review expresses questions about the zero sum game formulation (why there is use of weights and why the problem is formulated for multiple points), and does not find the parameters of the experiments. This can all be found in the paper and we refer the reviewer to places in the paper where we explain why weights are used, attacks on multiple points, and refer the reviewer to the sections where we precisely describe the parameters of the experiments. \\n\\nWe'd appreciate if you read our response and consider reevaluating your score accordingly. \\n\\n1. \\\"it is not clear to me how the specific two player game is motivated. The authors do not justify why it makes sense to allow weights on the ensemble, and also why these weights need to be tied together across examples.\\\"\\n\\n- with regards to: \\\" The authors do not justify why it makes sense to allow weights on the ensemble\\\"\\n\\nThe weights are the probabilities that the learner assigns to the classifier, as stated in the first page of the introduction where we write: \\n\\n\\\"Furthermore, a learner can randomize over classifiers and avoid deterministic attacks (see Figure 1)\\\". In the figure we illustrate and describe an example where randomization gives a learner power to avoid adversarial attacks. \\n\\n- with regards to \\\"why these weights need to be tied together across examples\\\"\\n\\nThe weights do not need to be tied together across examples. \\n\\nIn the paper we defined the attack for m data points, *for any value of m* and one can apply all the results for m=1. Note that all our results are independent of the number of data points m and are a function of the number of classes k, number of classifiers n, noise budget \\\\alpha, and approximation \\\\delta. The parameter m does not affect the complexity of the problem.\\n\\nGiven m data points, one can construct an adversarial attack for every single data point using the framework described here. The formulation of the zero sum games remains identical: the learner randomizes over n classifiers and the learner randomizes over noise vectors. \\n\\n\\n2. With regards to your comments on the experiments,\\n\\n\\u201cFor one, it seems fishy that their Madry et al. attack is worse than the FGM for many of the models and suggests strongly that the parameters for the Madry attack were not properly tuned. It is hard to know since the paper does not report on various parameters for these attacks. 
\\u201c\\n\\nWe provide full details on the experimental setup in the appendix. In the first paragraph, we clearly state that we use the same choice of parameters that the authors employed in their original paper: \\u201cWhen running the Projected Gradient Method by Madry et al., given a noise budget \\u03b1, we run the algorithm for 40 iterations with a step size of \\u03b1/40 \\u00d7 1.25 so as to mimic the setup of the authors\\u201d\\n\\n\\u201cSecond, these attacks are designed for l_infty and modifying them for l_2 would be necessary for a fair comparison\\u201d\\n\\nAs stated in the paper, we use the cleverhans implementation of these attacks. The library, maintained by Goodfellow and Papernot, provides an option to use the Madry attack as designed for the L2 norm, which only involves a projection to the L2 ball instead of L infinity.\\n\\n\\u201cFinally, I am not sure why the authors do not compare to the Carlini and Wagner attacks on Imagenet, which is actually an l2 attack\\u201d\\n\\nThe Carlini-Wagner attack is based on a Lagrangian formulation and is hence not guaranteed to return solutions that lie within a prespecified noise budget. While we could clip solutions to lie within a specific range, this is not the way the algorithm was designed to be used and would be an unfair comparison. Hence, we compare against other methods, such as the Momentum Iterative Method (the strongest attack as per the 2017 NIPS competition), that explicitly allow for noise constraints.\\n\\n\\n\\n3. Regarding \\u201cThe hinge loss is actually not smooth. However, I don't quite see why you need smoothness there.\\u201d\\n\\nThe hinge loss is smooth, this can be verified from the definition. The condition is necessary to ensure the faster convergence rate of our theorem.\"}", "{\"title\": \"Questions about minmax game definition\", \"review\": \"The paper studies the problem of adversarial example generation. The authors phrase the following problem: given a set of models C, we want to find an adversarial perturbation that maximizes the loss on an ensemble of models. However, the ensemble weights are chosen by the learner. In the case that we have one example, this is equivalent to asking that the same adversarial perturbation (or distribution over perturbations) fools all the individual models in my collection. This is a reasonable phrasing of the problem, though it seems different from versions studied in the literature. In particular, previous works used uniform ensembles.\\nMore generally, the authors consider a set of m examples, and the adversarial player now looks for a (distribution over) perturbations for each of the m examples. The learner player selects mixing weights to minimize the error rate. This is an interesting formulation of the problem: in particular, tying the mixing weights used for all examples is a non-intuitive change and does not have the clean interpretation above any more.\\nThis notion of allowing mixing weights on the learner is a change from previous work. The authors would do well to explain why this formulation is chosen and what the interpretation is. It corresponds to a specific attack model where the learner and the adversary make choices in a very specific order, and could use further explanation on when this is a reasonable attack model. 
Note that previous work looked at the setting of all weights being equal, and one natural variant is to allow a set of mixing weights per example, which would correspond to finding a perturbation (or distribution over perturbations) for this example that fools all models in the set C. The version studied here is left unexplained in the current work.\\n\\nThe authors then argue that we can solve this game by playing MW vs. best response. They propose using best response on the adversarial player. This player is then trying to find the perturbation that maximizes the p-weighted sum of the 0-1 (or rather surrogate) losses, where p represents the mixing weights on C. The authors show that in the convex case, if there is a pure NE, then the best response can be found: in this case we get a convex problem. They study the convex case a bit more, showing that there is at most an exponential number of values for the 0-1 loss, since a {0,1} vector recording which side of each classifier in C the point x falls on fully defines the loss at x. \\n\\nFinally, the authors move to the non-convex case where the experiments are done. The authors report interesting results on imagenet and for mnist for the convex case. I had some trouble understanding the imagenet results. For one, it seems fishy that their Madry et al. attack is worse than the FGM for many of the models and suggests strongly that the parameters for the Madry attack were not properly tuned. It is hard to know since the paper does not report on various parameters for these attacks. Second, these attacks are designed for l_infty and modifying them for l_2 would be necessary for a fair comparison. Finally, I am not sure why the authors do not compare to the Carlini and Wagner attacks on Imagenet, which is actually an l2 attack and makes the accuracy 0 at a slightly larger perturbation radius. Also, the authors would do well to emphasize that for larger perturbation radii, there are attacks which make the accuracy zero, and the contribution here seems to be to look at smaller radii.\\n\\nMy primary concern with the work is that it is not clear to me how the specific two player game is motivated. The authors do not justify why it makes sense to allow weights on the ensemble, and also why these weights need to be tied together across examples. For a paper that makes strong claims about its approach being principled, this is a serious shortcoming in my view. Secondarily, the experiments section leaves me worried that the comparison is with improperly tuned versions of previous work. I would therefore not be in favor of accepting this paper.\", \"comments\": \"\", \"pg_1\": \"\\\"One of the most pressing ...\\\": that is perhaps an unnecessary exaggeration.\", \"pg_2\": [\"The name \\\"NSFW\\\" is an unfortunate choice; it is completely non-informative about the contribution, and I strongly recommend the authors reconsider it.\", \"As far as I can tell, Tramer et al. do not build an ensemble model at all; the ensemble word there refers to an ensemble of adversarial perturbations.\", \"The hinge loss is actually not smooth. However, I don't quite see why you need smoothness there.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The theorem gives an unconditional characterization for linear classifiers\", \"comment\": \"Thanks for your review. 
It seems like there is a fundamental misunderstanding about the paper, and we would appreciate it if you could give the paper a second read in light of our response.\\n\\nYour review suggests that we can only \\\"get provable guarantees for linear classifiers provided that there exists a pure nash equilibrium.\\\" This is not true. \\n\\nAs we state in the introduction, on page 2 in the summary of our main results: we give a complete characterization for linear classifiers for any instance (specifically, the existence of a pure or mixed Nash equilibrium). In the case of a pure Nash equilibrium we prove that projected gradient descent efficiently converges to the optimal solution, and otherwise finding a solution is NP-hard. Note that the NP-hardness of finding an optimal attack, even for linear classifiers, is a fact, not a weakness of our proof technique. Note also that for a modest number of classifiers, as used in previous work on ensemble attacks, one can still use the characterization to find an optimal attack. \\n\\nRegarding the assumption that the classifiers are linear, we are not aware of principled approaches for obtaining guarantees for non-linear classifiers. The use of \\\\alpha-approximate best response oracles in \\\"Robust Optimization for Non-Convex Objectives\\\", for example, avoids the problem entirely, as the whole purpose of this paper is to study the design of such \\\\alpha-approximate oracles. Designing an approach for attacks that is provably optimal for the special case of linear classifiers is well-principled and achieves strong empirical results (as you write in your review).\", \"specific_responses_to_comments_in_the_review\": \"Regarding the relationship to \\\"Robust Optimization for Non-Convex Objectives\\\": this paper is orthogonal to \\\"Robust Optimization for Non-Convex Objectives\\\". In this paper we design attacks vs. defenses, but most importantly, we look at a different level of abstraction. In \\\"Robust Optimization for Non-Convex Objectives\\\" the premise of the paper was to assume that one is given a blackbox oracle for solving the best-response problem. In this paper our purpose is to design the best-response oracle; in \\\"Robust Optimization for Non-Convex Objectives\\\" the best response oracle for linear classifiers was a straightforward convex optimization problem, whereas here the best response for linear classifiers is a non-convex optimization problem which requires the geometric characterization. \\n\\nRegarding the relationship to the work \\u201cTwo-Player Games for Efficient Non-Convex Constrained Optimization\\u201d (which BTW appeared on the ArXiv *after* this paper was submitted), that work seems to follow the model of \\\"Robust Optimization for Non-Convex Objectives\\\", and the relationship between them is the same as summarized in the paragraph above.\\n\\n- Regarding \\\"In Section 2.1, the authors seek to show that they can get provable guarantees for *linear* classifiers, provided that there exists a \\u201cpure strategy Nash equilibrium\\u201d, which is a set of noise vectors for which *every* classifier misclassifies *every* example. These conditions seem to me to be so strong that I\\u2019m not sure that this section is really pulling its weight.\\\":\", \"please_see_4th_bullet_in_introduction_on_page_2\": \"\\\"If the game does not have a pure Nash equilibrium, there is an algorithm for finding an optimal adversarial attack for linear classifiers whose runtime is exponential in the number of classifiers. 
We show that finding an optimal strategy in this case is NP-hard.\\\"\\n\\nThus, this is not a limitation of our analysis, but a fact: optimal attacks, even on linear classifiers, are in general an NP-hard optimization problem. In our paper we identify the most general sufficient condition under which optimal attacks can be carried out against a large number of classifiers (a pure Nash equilibrium). With a moderate number of classifiers, as has been used in the literature, we are able to use the characterization to design an algorithm whose runtime is exponential in the number of classifiers and which returns an optimal solution.\\n\\nBut more importantly, as you write, this leads to a \\\"well-motivated approach that seems to work well in practice\\\" and \\\"extremely good performance, in fact\\\".\", \"regarding\": \"\\\"While Algorithm 1 makes unrealistic oracle assumptions\\\"\\n\\nThis is a mistake. We do not make any assumption about the oracle. The assumptions you may be referring to are not assumptions, but conditions on the instance and the problem. We prove our characterization using conditions on the *problem* (linear classifiers), and provide conditions on the *instance* under which efficient algorithms exist.\"}", "{\"title\": \"This review breaks an all-time record of the theater of the absurd\", \"comment\": \"From the introduction: \\\"It is well known that strategies at equilibrium in a zero sum game can be obtained by applying the celebrated Multiplicative Weights Update framework, given an oracle that computes a best response to a randomized strategy. The main technical challenge we address pertains to the characterization and implementation of such oracles.\\\"\", \"you_might_as_well_have_written_that_the_result_is_already_known_and_proven_in\": \"\\\"John Von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, 1953\\\" \\n\\nwhere the authors show that every zero-sum game has a mixed equilibrium.\"}", "{\"title\": \"We are not aware of previous similar work; if you wish to reject the paper due to \\\"previous similar work\\\" please give at least one example\", \"comment\": \"Thank you for writing a review. We're not able to properly respond to your review since you write that this work is similar to previous work, but without a single reference. We are not aware of any similar work and are unable to provide proper feedback without references to the alleged similar work. We did our best to respond below.\", \"specific_responses\": \"- Regarding: \\\"Originality: The paper's technical contribution seems limited. They suggest performing PGD to estimate the best response, which is similar to previous work.\\\"\\n\\nWe are not aware of any previous work that proposes a principled solution for attacking multiple classifiers. In your review you write that this is \\\"similar to previous work\\\"; we would appreciate any reference to previous work on this so we can respond accordingly. \\n\\n- Regarding concern (1) \\\"The setting where there are multiple classifiers and all of these weights are accessible to the attacker seems unrealistic.\\\" \\n\\nThere is a vast literature on whitebox attacks against a single classifier, all of which is referenced in the paper. This paper tackles the problem of optimal attacks against multiple classifiers for the first time. \\n\\nAs in the vast literature on whitebox attacks, the setting where all the weights are known is interesting for two important reasons. First, in many cases we actually do know the weights used by the classifiers. 
This is simply due to the fact that many applications use existing pre-trained neural networks. Secondly, and perhaps even more importantly, even if we wish to understand how to attack classifiers that are unknown to us, we must first solve the easier problem where the classifiers are known. That is, if we don't know how to solve the whitebox attack problem, we don't know how to solve the black-box attack. \\n\\nThe problem of how to optimally design whitebox attacks on classifiers was previously unsolved. As we show in this paper, it is highly non-trivial, and we therefore think our work makes progress on an important problem. \\n\\n\\n\\n- Regarding concern (2) \\\"it is not surprising that it's possible to find an attack that works for multiple classifiers at the same time, and I believe this has been done in prior work.\\\" \\n\\nBeliefs aside, can you provide concrete references to this \\\"prior work\\\"? We are not aware of any and cannot respond without a proper reference to the alleged work. \\n\\n\\n- Regarding: \\\"The theoretical contribution is limited and the technique proposed is just a small modification of existing gradient based algorithms.\\\"\\n\\nAgain, can you please explain which existing gradient-based algorithms you are referring to? We are not aware of any.\\n\\n- Regarding (3) \\\"The experimental evaluation is against previous work which tried to solve a different problem (black box based attacks). Hence, they are not convincing.\\\"\\n\\nThis is simply not true. The experimental evaluation is against state-of-the-art methods for attacking multiple classifiers.\"}", "{\"title\": \"Interesting and well-written paper but insufficient motivation\", \"review\": \"Summary: The authors provide a method to attack multiple classifiers, with the key insight that it is insufficient to attack a simple average of the multiple classifier outputs; creating adversarial examples which can fool each classifier independently leads to more success in attacking any defense that has access to multiple classifiers. Note that white-box access to all the classifiers is assumed.\", \"clarity\": \"Paper is well written and claims are clear and substantiated.\", \"originality\": \"The paper's technical contribution seems limited. They suggest performing PGD to estimate the best response, which is similar to previous work. However, the authors do multiple rounds of this, with different weights on the multiple classifiers at each step.\", \"concerns\": \"(1) Ensembles have mostly been proposed for black-box attacks. The setting where there are multiple classifiers and all of these weights are accessible to the attacker seems unrealistic. What's the advantage for a defense to commit to a set of trained classifiers beforehand? \\n\\n(2) Security concerns aside, it is not surprising that it's possible to find an attack that works for multiple classifiers at the same time, and I believe this has been done in prior work. The theoretical contribution is limited and the technique proposed is just a small modification of existing gradient-based algorithms.\\n\\n(3) The experimental evaluation is against previous work which tried to solve a different problem (black box based attacks). 
Hence, they are not convincing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Well-motivated approach to an interesting problem\", \"review\": \"This paper is concerned with the problem of finding adversarial examples for an ensemble of classifiers. This is formulated as the task of finding noise vectors that can be added to a set of examples in such a way that, for each example, the best ensemble element performs as badly as possible (i.e. it\\u2019s a maximin problem).\\n\\nThis is formulated as a two-player game (Equation 1), in which the above description has been relaxed slightly: Equation 1 seeks a *distribution* over noise vectors, instead of only one. This linearizes the game, so that we can seek a mixed Nash equilibrium. Given access to a best response oracle, Algorithm 1 results in such a mixed Nash equilibrium. This is pretty standard stuff (see e.g. \\u201cRobust Optimization for Non-Convex Objectives\\u201d in NIPS\\u201917, or \\u201cA Reductions Approach to Fair Classification\\u201d in ICML\\u201918), but the application of this approach to this problem is novel and interesting.\\n\\nIn Section 2.1, the authors seek to show that they can get provable guarantees for *linear* classifiers, provided that there exists a \\u201cpure strategy Nash equilibrium\\u201d, which is a set of noise vectors for which *every* classifier misclassifies *every* example. These conditions seem to me to be so strong that I\\u2019m not sure that this section is really pulling its weight.\\n\\nOn the subject of Section 2.1, the authors might consider whether an analysis based on \\u201cTwo-Player Games for Efficient Non-Convex Constrained Optimization\\u201d (on arXiv) could be used here: convert Equation 1 into a constrained optimization problem by adding a slack variable, then reformulate it as a non-zero-sum game, in which one player uses the zero-one loss, and the other uses e.g. the hinge loss.\\n\\nWhile Algorithm 1 makes unrealistic oracle assumptions, and I didn\\u2019t find Section 2.1 fully satisfying, I think that overall the theoretical portion of the paper is sufficiently convincing that one should be surprised if their experiments don\\u2019t show good performance (which they do -- extremely good performance, in fact). Overall, this is an interesting and important problem, and a well-motivated approach that seems to work well in practice. I think Section 2.1 is a bit weak, but this is a relatively minor issue.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Application to multiplicative weight updates for constructing adversarial noise\", \"review\": \"Summary:\\nThe paper considers finding the most adversarial random noise given multiple classifiers. They formulate the problem as a standard min-max game and apply multiplicative weight updates. The technical contribution is to clarify the computational complexity of implementing/approximating the response oracle. The authors show experimental results.\", \"comments\": \"\", \"i_am_afraid_that_the_main_technical_result_is_already_known\": \"Yoav Freund, Robert E. 
Schapire: Adaptive game playing using multiplicative weights, Games and Economic Behavior, 29:79-103, 1999.\\n\\nThe paper shows that a multiplicative update algorithm can approximately solve the min-max game. If you use that result, you can readily obtain the main results of the present paper.\", \"after_rebuttal\": \"I read the authors' comments, understood the technical contribution better, and raised my score. Implementing/approximating the response oracle is non-trivial. For MWU, I still think that the above paper should be cited (citing the AdaBoost paper is not enough), since that paper shows MWU solves the min-max game.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
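The smoothness dispute in the thread above can be made explicit; the following formalizes Reviewer 4's argument (an editor's illustration, not text from either party).

```latex
% Hinge loss h(t) = max(0, t): its derivative jumps at t = 0,
\[
  h(t) = \max(0, t), \qquad
  h'(t) = \begin{cases} 0 & t < 0, \\ 1 & t > 0, \end{cases}
  \qquad
  \sup_{a \neq b} \frac{|h'(a) - h'(b)|}{|a - b|} = \infty,
\]
% so h' is not Lipschitz and h is not smooth in the standard sense,
% although h itself is 1-Lipschitz and convex.
```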
B1lXGnRctX
Classification in the dark using tactile exploration
[ "Mayur Mudigonda", "Blake Tickell", "Pulkit Agrawal" ]
Combining information from different sensory modalities to execute goal-directed actions is a key aspect of human intelligence. Specifically, human agents are very easily able to translate a task communicated in one sensory domain (say, vision) into a representation that enables them to complete this task when they can only sense their environment using a separate sensory modality (say, touch). In order to build agents with similar capabilities, in this work we consider the problem of retrieving a target object from a drawer. The agent is provided with an image of a previously unseen object, and it explores objects in the drawer using only tactile sensing to retrieve the object that was shown in the image, without receiving any visual feedback. Success at this task requires close integration of visual and tactile sensing. We present a method for performing this task in a simulated environment using an anthropomorphic hand. We hope that future research in the direction of combining sensory signals for acting will find object retrieval from a drawer to be a useful benchmark problem.
[ "tactile sensing", "multimodal representations", "vision", "object identification" ]
https://openreview.net/pdf?id=B1lXGnRctX
https://openreview.net/forum?id=B1lXGnRctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1gqbqAC1V", "HkgGeGbTn7", "SJerAD2j3X", "SJlMo5m9nm" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544641025544, 1541374441771, 1541289933286, 1541188249785 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1252/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1252/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1252/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1252/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper describes the use of tactile sensors for exploration. An important topic which has been addressed in various previous publications, but is unsolved to date.\\n\\nThe research and the paper are unfortunately in a raw state. Rejected unanimously by the reviewers, without rebuttal chances used by the authors.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"unfinished\"}", "{\"title\": \"exciting research problem, but there are problems in the level of detail in the paper, and the approach taken is not convincing\", \"review\": \"This is an exciting research problem, and could be of broad interest in robotics. The problem posed, and associated data sets and simulation code, if shared, could be an interesting and novel source of challenge to machine learning researchers.\\n\\nHowever, the paper appears to be a fairly early work in progress, with missing detail in many areas, and making some odd approximations. One concern is the use of averaged haptic readings over a series of explorations, rather than the haptic reading for a specific pose. The approach of averaging seems certain to blur and confound things unnecessarily, making it harder for the system to learn the relationship between pose, object form and sensation.\\n\\nThe paper itself has weaknesses, e.g. on p5 you say \\\"When employing the predicted means, this accuracy was a bit lower.\\\" when you actually have a drop from 99% to 54%! You do not specify which objects are used for this experiment. and in Section 4.2, you do not specify the exploration strategy used. \\n\\nCan you clarify the explanation of the images in Figure 3 - you say that the image is as in Figure 3, but are you really giving the image of the object AND hand, or just the object itself (if so, you need to change the explanation).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Not clear what's the contribution.\", \"review\": \"The authors propose a task of classifying objects from tactile signals. To do this, the image and haptic data are collected for each object. Then, image-to-haptic and haptic-to-label predictors are trained by supervised learning. In the experiment, prediction accuracy on unseen object class is studied.\\n\\nThe paper is clearly written although it contains several typos. The proposed task of cross-modal inference is an interesting task. I however hardly find any significance of the proposed method. The proposed method is simple non-end-to-end predictors trained by supervised learning. So, the proposed model seems more like a naive baseline. It is not clear what scientific challenge the paper is solving and what is the contribution. Also, the performance seems not impressive. I\\u2019m not sure why the authors average the haptic features. Lots of information will be lost during the averaging, why not RNNs. 
Overall, the paper seems to require a significant improvement.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Poorly written paper, with lots of confusing statements and incomplete descriptions of the concepts and references introduced. Very weak contribution.\", \"review\": \"This paper is poorly written, and looks like it was not proof-read.\\nThe problem at hand is presented so many times that it becomes confusing.\\nThe authors ought to better define the image description space of the objects and the haptic space. \\nMore interesting would have been a good explanation of the different sensors used in the anthropomorphic hand and the vector built to represent the different sensed objects.\", \"the_most_expected_contribution_of_this_work_is_barely_explained\": \"how the haptic sensor values/object label vectors were built and fed to the predictor network, what their values looked like for the various objects, how these vectors clustered for the various objects, etc.\", \"among_the_many_evident_weaknesses\": \"- Domain-specific concepts and procedures of most importance to this work are not explained: \\\"... measure various physical properties of objects using the bio-tac sensor using five different exploration procedures (EP)\\\". Page 3, Paragraph 1. The bio-tac sensor and, most importantly, the exploration procedures (EPs) should be presented more clearly.\\n- Incomplete and out-of-nowhere sentences are common: \\\"The SHAP procedure was established for evaluating prosthetic hands and arms. With this idea in mind, prior work (?) built a prosthetic arm which could ...\\\" Page 4, Paragraph 1.\\n- Many references are not well introduced and justified: \\\"We then trained the network using ADAM (Kingma & Ba (2014)) with an initial learning rate set to 1e-4.\\\" Page 4, Paragraph 6. In the same paragraph, the authors explain using \\\"The groundtruth predictions were per-channel averaged haptic forces\\\" without having defined those channels (which one can guess, but shouldn't have to). Concepts have to be clearly defined prior to their use.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
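Two of the reviews above question the temporal averaging of haptic readings. The following sketch contrasts mean-pooling with the recurrent encoder Reviewer 3 suggests; all shapes (batch, exploration steps, sensor channels) are illustrative assumptions.

```python
import torch
import torch.nn as nn

haptics = torch.randn(8, 50, 19)      # (batch, exploration steps, sensor channels)

# Mean-pooling across time, as criticized: pose/order information is lost.
mean_feat = haptics.mean(dim=1)       # (8, 19)

# A recurrent encoder of the kind Reviewer 3 suggests instead.
rnn = nn.GRU(input_size=19, hidden_size=64, batch_first=True)
_, h_n = rnn(haptics)                 # h_n: (1, 8, 64), final hidden state
rnn_feat = h_n.squeeze(0)             # (8, 64), keeps temporal structure
```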
BJzmzn0ctX
Scalable Neural Theorem Proving on Knowledge Bases and Natural Language
[ "Pasquale Minervini", "Matko Bosnjak", "Tim Rocktäschel", "Edward Grefenstette", "Sebastian Riedel" ]
Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. Transducing text to logical forms which can be operated on is a brittle and error-prone process. Operating directly on text by jointly learning representations and transformations thereof by means of neural architectures that lack the ability to learn and exploit general rules can be very data-inefficient and may not generalise correctly. These issues are addressed by Neural Theorem Provers (NTPs) (Rocktäschel & Riedel, 2017), neuro-symbolic systems based on a continuous relaxation of Prolog’s backward chaining algorithm, where symbolic unification between atoms is replaced by a differentiable operator computing the similarity between their embedding representations. In this paper, we first propose Neighbourhood-approximated Neural Theorem Provers (NaNTPs), consisting of two extensions to NTPs, namely a) a method for drastically reducing the previously prohibitive time and space complexity during inference and learning, and b) an attention mechanism for improving the rule learning process, making them usable on real-world datasets. Then, we propose a novel approach for jointly reasoning over KB facts and textual mentions, by jointly embedding them in a shared embedding space. The proposed method is able to extract rules and provide explanations—involving both textual patterns and KB relations—from large KBs and text corpora. We show that NaNTPs perform on par with NTPs at a fraction of the cost, and can achieve competitive link prediction results on challenging large-scale datasets, including WN18, WN18RR, and FB15k-237 (with and without textual mentions), while being able to provide explanations for each prediction and extract interpretable rules.
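The complexity reduction described in the abstract amounts to evaluating the differentiable unification operator only on a query's nearest neighbours. A minimal sketch of that idea follows; the Gaussian similarity and the exhaustive sort (a fast approximate nearest-neighbour index in practice) are simplifying assumptions.

```python
import numpy as np

def soft_unify_topk(query, fact_embs, k=5):
    """Soft unification scores restricted to the k nearest fact embeddings.

    Instead of comparing the query against every fact (the prohibitive cost
    the abstract refers to), only the k closest facts are scored.
    """
    dists = np.linalg.norm(fact_embs - query, axis=-1)   # (num_facts,)
    nn_idx = np.argsort(dists)[:k]                       # nearest-neighbour search
    return nn_idx, np.exp(-dists[nn_idx] ** 2 / 2.0)     # similarity scores in (0, 1]
```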
[ "Machine Reading", "Natural Language Processing", "Neural Theorem Proving", "Representation Learning", "First Order Logic" ]
https://openreview.net/pdf?id=BJzmzn0ctX
https://openreview.net/forum?id=BJzmzn0ctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJe5ZXsEeV", "BygGfRTik4", "BkeJxH_myE", "HJxcj8ryJE", "HkxqI2-oAm", "B1eBEhWs0m", "BJly-o6tRQ", "rJg7NahKAm", "HyggagnY0Q", "ByeCDPImCm", "r1gk0W7XA7", "Hkglsbm7RQ", "SJl9uZQXRX", "BklwIJXm0m", "SJlb454867", "HkllwPvs2X", "S1x5WfY_27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545020161998, 1544441354225, 1543894246949, 1543620258145, 1543343186214, 1543343149006, 1543260919068, 1543257386567, 1543254200264, 1542838117603, 1542824391370, 1542824344344, 1542824305607, 1542823758908, 1541978664659, 1541269335812, 1541079553560 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1251/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/Authors" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1251/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper focuses on scaling up neural theorem provers, a link prediction system that combines backward chaining with neural embedding of facts, but does not scale to most real-world knowledge bases. The authors introduce a nearest-neighbor search-based method to reduce the time/space complexity, along with an attention mechanism that improves the training. With these extensions, they scale NTP to modern benchmarks for the task, including ones that combine text and knowledge bases, thus providing explanations for such models.\", \"the_reviewers_and_the_ac_note_the_following_as_the_primary_concerns_of_the_paper\": \"(1) the novelty of the contributions is somewhat limited, as nearest neighbor search and attention are both well-known strategies, as is embedding text+facts jointly, (2) there are several issues in the evaluation, in particular around analysis of benefits of the proposed work on new datasets. There were a number of other potential weaknesses, such the performance on some benchmarks (Fb15k) and clarity and writing quality of a few sections.\\n\\nThe authors provided significant revisions to the paper that addressed many of the clarity and evaluation concerns, along with providing sufficient comments to better contextualize some of the concerns. However, the concerns with novelty and analysis of the results still hold. Reviewer 3 mentions that it is still unclear in the discussion why the accuracy of the proposed approach matches/outperforms that of NTP, i.e. why is there not a tradeoff. Reviewer 4 also finds the analysis lacking, and feels that the differences between the proposed work and the single-link approaches, in terms of where each excels, are described in insufficient detail. 
Reviewer 4 focused more on the simplicity of the text encoding, which restricts the novelty as more sophisticated text embedding approaches are commonplace.\n\nOverall, the reviewers raised different concerns, and although all of them appreciated the need for this work and the revisions provided by the authors, they ultimately feel that the paper did not quite meet the bar.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Worthy goal, but limited novelty and analysis\"}", "{\"title\": \"Surprised by the follow-up\", \"comment\": \"Dear reviewer,\n\nGiven that we've been very active in addressing all the concerns coming from all the reviewers, and given that we believe this process has made the paper significantly stronger in response to everyone's advice, your downgrade from 6 to 5 comes as a surprise. Would you care to elaborate on why that is, so we can at least use your constructive criticism to further improve the paper in the next iteration?\"}", "{\"title\": \"Some final comments\", \"comment\": \"I appreciate the authors' effort in revising the paper and including additional experimental results. However, I feel that some of my concerns remain. There are obvious questions that beg for insights yet are simply ignored by the authors. For example, the computational speedup due to restricting the search space is obvious, so the numerical validation that it indeed results in a significant speedup is not surprising. What seems surprising is that there is no price to pay in terms of the various performance metrics -- it even seems to mostly improve the performance. Why?\n\nI encourage the authors to perhaps prepare a journal version that fully explains NTP as well as the various improvements proposed here. This removes some of the space restrictions in a conference submission and allows a more detailed and hopefully insightful account of the approach.\"}", "{\"title\": \"Some final comments\", \"comment\": \"Dear Reviewer 4,\n\nThank you for being so active in participating in the discussion and rebuttal period with us. Through our exchange, we've been able to make significant improvements to the paper, provide additional results, and expand our analysis of our method in comments which will be added to the final revision of the paper:\n* https://openreview.net/forum?id=BJzmzn0ctX&noteId=HkxqI2-oAm\n* https://openreview.net/forum?id=BJzmzn0ctX&noteId=B1eBEhWs0m\n\nWe hope to have addressed your concerns and believe we have made the paper significantly stronger in response to your advice. We hope now that you will consider adjusting your score to reflect your reevaluation of our paper in light of the improvements made. At the very least, we hope you will provide us with constructive criticism as to where further improvements can be made if you feel the paper still falls short despite the additional experiments and analysis, but naturally we hope you will agree the paper is now strong enough to be accepted.\"}", "{\"title\": \"Analysis and discussion, part 1\", \"comment\": \"We see your point and wholly agree with it. 
Sadly, since we cannot update the paper anymore, we provide our analysis and comparisons here.\\n\\nWe started the analysis with the per-predicate comparison of NaNTP and ComplEx (our best performing models for both), in terms of Mean Reciprocal Rank (MRR), on both WN18 and WN18RR.\\n\\nIn the following, we provide the per-predicate MRR results on WN18 (* denotes cases where NaNTP performs better or on par as ComplEx):\\n\\nWN18\\t\\t\\t\\t\\t\\t\\tNaNTP\\tComplEx\\n_hyponym\\t\\t\\t\\t\\t\\t*0.937\\t0.890\\n_member_holonym\\t\\t\\t\\t*0.912\\t0.809\\n_hypernym\\t\\t\\t\\t\\t\\t*0.934\\t0.891\\n_part_of\\t\\t\\t\\t\\t\\t\\t*0.921\\t0.826\\n_derivationally_related_form\\t\\t0.035\\t0.917\\n_member_of_domain_topic\\t\\t0.722\\t0.745\\n_instance_hyponym\\t\\t\\t\\t0.490\\t0.776\\n_synset_domain_topic_of\\t\\t\\t*0.771\\t0.746\\n_synset_domain_region_of\\t\\t\\t0.362\\t0.689\\n_member_of_domain_region\\t\\t0.417\\t0.667\\n_has_part\\t\\t\\t\\t\\t\\t0.680\\t0.839\\n_also_see\\t\\t\\t\\t\\t\\t*0.554\\t0.511\\n_instance_hypernym\\t\\t\\t\\t0.645\\t0.774\\n_member_meronym\\t\\t\\t\\t0.614\\t0.815\\n_verb_group\\t\\t\\t\\t\\t\\t*0.951\\t0.677\\n_synset_domain_usage_of\\t\\t\\t0.775\\t0.776\\n_member_of_domain_usage\\t\\t*0.769\\t0.722\\n_similar_to\\t\\t\\t\\t\\t\\t*1.000\\t*1.000\\n\\nNext, we provide the per-predicate MRR results on WN18RR:\\n\\nWN18RR\\t\\t\\t\\t\\tNaNTP\\tComplEx\\n_hypernym\\t\\t\\t\\t\\t0.022\\t0.092\\n_derivationally_related_form\\t0.934\\t0.941\\n_member_meronym\\t\\t\\t0.055\\t0.133\\n_has_part\\t\\t\\t\\t\\t0.046\\t0.123\\n_also_see\\t\\t\\t\\t\\t*0.593\\t0.522\\n_member_of_domain_region\\t0.011\\t0.040\\n_verb_group\\t\\t\\t\\t\\t*0.893\\t0.825\\n_synset_domain_topic_of\\t\\t0.042\\t0.184\\n_instance_hypernym\\t\\t\\t0.093\\t0.241\\n_member_of_domain_usage\\t0.030\\t0.201\\n_similar_to\\t\\t\\t\\t\\t0.764\\t1.000\\n\\nFrom the results, we can see that NaNTP and ComplEx have complementary strengths and weaknesses. For instance, by inspecting the rules learned by NaNTP on WN18RR, we can see NaNTP learns symmetry rules such as:\\n\\n_derivationally_related_form(X0, Y0) :- _derivationally_related_form(Y0, X0)\\n_similar_to(X0, Y0) :- _similar_to(Y0, X0)\\n\\nThe _derivationally_related_form rule is often used by our model and, as a result of that, it makes NaNTP as accurate as ComplEx on the _derivationally_related_form predicate. The _similar_to rule is interesting as, though it does hold in general, the NaNTP model never uses it when predicting with _similar_to relations. This is a direct consequence of the way WN18RR was created, as these particular examples are filtered out of the dev and test sets. The same rule is induced in WN18 where it is fully utilised. Instead of this rule, NaNTP on WN18RR is using another learned rule:\\n\\n_verb_group(X0, Y0) :- _also_see(Y0, X0).\\n\\nThis is quite interesting, as the same rule, is often used to express multiple symmetrical relationship with specific predicates such as _also_see, _verb_group and _similar_to.\\n\\nThis shows that the result of the originally proposed decoding of the rule with a one-nearest-neighbor (1-NN), though informative, should not be taken as a literal discrete rule. However, although we do not have a concrete representation of a rule as we might wish, we can still decide whether that rule is meaningful or not, and use such insights for refining the model, improving our understanding of the domain, or providing explanations for any given prediction. 
Moreover, when decoding rules, looking at the final proof paths is highly informative. For example, the following (correct) proof paths:\n\n_also_see(coherent.a.01, logical.a.01) is explained by _verb_group(X0, Y0) :- _also_see(Y0, X0) and _also_see(logical.a.01, coherent.a.01)\n\n_verb_group(allow.v.03, permit.v.01) is explained by _verb_group(X0, Y0) :- _also_see(Y0, X0) and _verb_group(permit.v.01, allow.v.03)\n\n_similar_to(dynamic.a.01, hold-down.n.01) is explained by _verb_group(X0, Y0) :- _also_see(Y0, X0) and _similar_to(hold-down.n.01, dynamic.a.01)\n\nessentially tell us that the rule in question is used for representing symmetry.\n\nAll in all, NaNTP can learn symmetry rules while also softly unifying related predicates and, by leveraging such rules, can perform better than or on par with ComplEx on relations exhibiting a clear logical structure (symmetric relations), while still benefiting from the continuous unification.\"}", "{\"title\": \"Analysis and discussion, part 2\", \"comment\": \"The benefit of a clear logical structure is even more evident in the case of WN18, which is characterised by a more logical relational structure. For instance, by learning clear rules such as part_of(X, Y) :- has_part(Y, X), hyponym(X, Y) :- hypernym(Y, X), and hypernym(X, Y) :- hyponym(Y, X), NaNTP can accurately predict the underlying structure in WN18, and use this knowledge to yield more accurate link prediction results than ComplEx in several cases, as presented in the table above.\n\nHowever, the opposite is also true: we can see that, in some cases, logic rules and continuous unification may not be sufficient for some of the link prediction tasks. For instance, on WN18, NaNTP was not able to learn a set of rules for accurately predicting the _derivationally_related_form predicate and, for this reason, ComplEx yields a higher accuracy on this type of relation.\n\nOn the other hand, the predictions ComplEx yields are not as easy to explain, since the score is a function of the embeddings of the entities involved in the prediction. On WN18RR, ComplEx shines on relations that reflect the cluster structure of the network, such as _also_see and _derivationally_related_form: as it does not need to rely on an underlying logical structure (as NaNTP does), it can more accurately handle the cases where such a structure is missing.\n\nHowever, ComplEx yields less accurate results on relations which can be accurately predicted by leveraging an underlying logical structure, which NaNTP can learn and then leverage at test time. For instance, on WN18, ComplEx is less accurate than NaNTP on predicates such as _hypernym (logically related to _hyponym), _part_of (related to _has_part), _hyponym (related to _hypernym) and _member_holonym (related to _member_meronym). \n\nGiven that ComplEx and NaNTP have complementary strengths (and weaknesses), we believe the gap between them can be narrowed by using ComplEx or any other link prediction algorithm as a regulariser (akin to the NTP-lambda in the original NTP paper), by proposing a mixture of experts, and possibly by adding a mixture of correctly induced rules from multiple runs of NaNTP.\"}", "{\"title\": \"Clarification on previous concern: discussion is more than just numbers\", \"comment\": \"Thanks for the update. I wanted to clarify that the numbers themselves are not the reason for my previously listed concern and score, but rather the lack of analysis beyond just the numbers themselves on the large-scale datasets.\n\nI gave Das et. 
al.'s section on FB15k-237 as an example of giving discussion beyond just the numbers. I would like to see a stronger *analysis* of the results on the larger datasets, and an explanation for the numbers. The numbers themselves not reaching SOTA is fine, and does not affect the score I give. To be more concrete, I would like to see\\n\\n1) some representative examples where single-link prediction does well and NaNTP fails (with an analysis of why NaNTP does not do as well and evidence of the example's representativeness), and ideally a conjecture about possible future work to bridge the gap.\\n\\n2) Also include some examples where NaNTP does well but single-link does not (in addition to an analysis of why).\\n\\nI'm looking for a statement like \\\"NaNTPs overall perform worse, but do much better on this class of examples but much worse on this class,\\\" with supporting examples and justification for claims. So no re-training is necessary; I would mainly like to see a deeper comparison at evaluation time. The reason for this is that, in the worst case, NaNTP simply does worse on all classes of examples regardless of how the classes are chosen in the large-data setting. This would definitely be worth reporting (and my review will not penalize the paper's score for reporting negative results). Are we seeing something similar to Naive Bayes vs Logistic regression, where NB does better in the small-data regime but not as well in the large data regime? Hopefully that is not the case, and your analysis will lead to future work on how to bridge the performance gap. The original NTP paper already argued that its interpretable nature was interesting and was able to learn transitivity, hopefully there is more analysis to be done than just restating their claims.\\n\\nI think the term polish in the ICLR guideline is referring to the numbers, hopefully the expectations for analysis of results are still equally high for all papers regardless of topic. Again, I am fine with the numbers, but expect more analysis *beyond* the numbers. I am excited to see the analysis, good luck!\"}", "{\"title\": \"Baselines, ablations, and experiments\", \"comment\": \"Thank you for your answer,\\n\\n> \\u201cif I were to tackle multi-hop link prediction at scale, should I use the NaNTP over other uninterpretable methods?\\u201d --- I am convinced that NaNTP can be scaled; however, I would like a clearer picture of how it compares to related models.\\n\\nYou are completely right, thanks for pointing this out. In Table 1 and Table 3 we added several baselines from the literature (DistMult, ComplEx, ConvE, NeuralLP, and MINERVA, from Das et al.\\u2019s \\u201cGo for a Walk and Arrive at the Answer\\u201d paper), and ablations on attentions and text. We added an official comment enumerating our changes to the revised version of the paper.\\n\\n> demonstrated competitive-to-SOTA performance as reported in other papers\\n\\nIn Table 3 we show that NaNTP is competitive, and often better, than the original NTP [1] on Countries (S1-S3), Kinship, Nations, and UMLS, while being several orders of magnitude faster on the reference datasets. NTP was yielding better results than SOTA methods such as ComplEx, while being able to provide explanations for its predictions. Results for NTP on WN18, WN18RR, and Freebase are not available, since NTP does not scale to such Knowledge Bases.\\n\\nIn the revised paper, we report comparisons with DistMult, ComplEx, ConvE, NeuralLP, and MINERVA, showing that NaNTP yields comparable results. 
Please note that Neural Link Predictors such as DistMult and ComplEx belong to a family of Representation Learning models that was studied for a decade now [2, 3], while Neural Theorem Provers started gaining momentum in recent months, one main limitation being their scalability. Since Neural Theorem Provers were a less explored area of research, we think they are more likely to have less polished results than better explored areas, such as Neural Link Predictors.\\n\\n[1] Rockt\\u00e4schel and Riedel. End-to-End Differentiable Proving. NIPS 2017\\n[2] Paccanaro and Hinton. Learning Distributed Representations of Concepts Using Linear Relational Embedding. TKDE 2001\\n[3] Bordes et al. Translating Embeddings for Modeling Multi-relational Data. NIPS 2013\"}", "{\"title\": \"Baselines, ablations, improvements in clarity, and experiments.\", \"comment\": [\"We thank all reviewers for insightful and very detailed feedback. We followed their suggestions, and updated our submission as follows:\", \"We added a series of comparisons with state-of-the-art link prediction methods (namely DistMult, ComplEx, ConvE, NeuralLP, and MINERVA) in Table 1 and Table 3.\", \"In Table 1 we also added a series of ablations, for analysing the impact of using attention (on WN18, WN18RR, and FB15k-237.E) and natural language surface forms (on FB15k-237.E), and analysed them in Section 6.\", \"We greatly improved clarity in the section introducing Neural Theorem Proving.\", \"We added more results in the Appendix - Table 6, Table 7, and Table 8 - including additional ablations on using attention and text, showing that both helps improving the model\\u2019s predictive accuracy.\", \"We also added a comparison between Exact and Approximate Nearest Neighbour Search (ANNS), and Random Neighbourhood (Table 8), showing that ANNS yields results on par with Exact NNS, while being orders of magnitude more computationally efficient, in terms of both time and space complexity.\", \"We discussed the issues we found in the evaluation function provided with the code of the NIPS 2017 paper introducing NTPs, and re-computed their experiments.\"]}", "{\"title\": \"Thanks for the edits! Remaining concern below\", \"comment\": \"Thanks for taking the time to make edits. Although the ablation studies are indeed an improvement and address half of my concerns, they are not enough for me to change my score.\\n\\nI\\u2019d like to reiterate that I believe this direction is important and interesting, but my remaining concern is the following: The question the paper answers is \\u201ccan the NaNTP be run on large datasets?\\u201d However, the question I would like answered is not only that, but also \\u201cif I were to tackle multi-hop link prediction at scale, should I use the NaNTP over other uninterpretable methods?\\u201d The other methods should include not only neural link prediction, but also other multi-hop methods.\", \"i_believe_this_would_be_a_more_solid_contribution_if_the_paper\": \"1) demonstrated competitive-to-SOTA performance as reported in other papers (not just the re-implementation of the baselines), as one worry is that the baselines scores are not strong enough. Kadlec et. al [1] note that models trained on FB15K (although different than FB15K-237, the same statement likely applies) are very sensitive to hyperparameters. 
This would entail either tuning the baselines to match previous numbers or simply using previously reported numbers if computational resources are not available, then improving the performance of the NaNTP if possible. If not possible, provide a convincing explanation of the results. An example of this is in Das et. al. [3], where they justified the superior performance of embedding methods vs path-based methods on FB15K-237 in section 3.1.2.\\n\\n2) includes comparisons to more recent work on multi-hop link prediction such as Minerva [3], Diva [2], etc., where the comparison includes ideally both speed and KB metrics.\\n\\nI am convinced that NaNTP can be scaled; however, I would like a clearer picture of how it compares to related models.\\n\\n[1] Kadlec, Bajgar, Kleindienst. Knowledge Base Completion:Baselines Strike Back. https://aclanthology.info/papers/W17-2609/w17-2609.\\n[2] Chen, Xiong, Fan, Wang. Variational Knowledge Graph Reasoning. https://aclanthology.coli.uni-saarland.de/papers/N18-1165/n18-1165\\n[3] Das et. al. Go for a Walk and Arrive at the Answer: Reasoning Over Paths in a Knowledge Bases using Reinforcement Learning. https://openreview.net/forum?id=Syg-YfWCW\"}", "{\"title\": \"Reading module and textual mentions\", \"comment\": \"Thank you very much for taking time to help bring our paper to a higher standard with your constructive feedback.\\n\\n> I'm not convinced by the model used to integrate textual mentions. These mentions are very short sentences. This could explained why such a simplistic model that simply average word embeddings is sufficient.\\n\\nThank you for pointing this out . We used a very simple reading model for showing that, even with an extremely simple approach, it is possible to integrate textual mentions while effectively improving results. This was a proof-of-concept demonstration on how a scalable end-to-end differentiable reasoning model enables reasoning over text while providing interpretable explanations for any given prediction (Sect. 6.3 and 6.4). It is true that, for Countries, textual mentions tend to be short, but it\\u2019s not the case for FB15k-237.\\n\\nAnother motivation for using a simpler model is that it can perform on par or better than more complex model, thanks to a lower tendency to overfit to training data [1, 2] - we will emphasize this in the paper. We leave exploring more elaborate reading models to future work.\\n\\n[1] Arora et al, 2016, A simple but tough-to-beat baseline for sentence embeddings\\n[2] White et al, 2015, How Well Sentence Embeddings Capture Meaning\"}", "{\"title\": \"Ablations, experiments, and background\", \"comment\": \"> The attention mechanism (essentially reducing the model capacity) is also well-known but its effect in this particular framework is not properly elaborated. The same can be said for the use of mentions.\\n\\nThank you for suggesting a more in-depth ablation study. We followed your advice and in order to assess the effect of attention to this framework, we added an ablation study on benchmark datasets, in Tables 6 and 7 of the appendix.\\n\\nTable 6 shows NaNTP with attention yielding higher average ranking accuracy and lower variances on Countries S1-3 and Kinship, with comparable performance on Nations and UMLS.\\n\\nIn Table 7, we report the ablation results on two larger datasets - WN18 (141k facts) and WN18RR (87k facts). 
In the case of WordNet, using attention for learning rules greatly increases the ranking accuracy - for instance, it increases from 83% hits@10 to 94% in the case of WN18, and from 25% to 43% in the case of WN18RR.\n\nIn addition, Figure 2 combines the ablation study for both the effects of attention and added textual mentions. NaNTP with attention yields higher ranking accuracies with lower variances for Countries S1-3. As for the mentions, encoding them consistently improves the ranking accuracy in comparison to simply adding them as additional relations.\n\n> The authors did mention some last-minute discovery that may affect some of the presented results.\n\nThe evaluation issue with the original NTP we discovered has now been fixed, and we re-evaluated the original NTP models presented in the paper with the fixed evaluation.\n\nConsequently, we updated Table 3 with these scores and outlined why the scores differ from those in the original paper. The results now confirm that NaNTP is consistently better than or, in the case of UMLS, on par with NTP.\n\nWe revised the experimental section to improve clarity of exposition and to focus on the insights found.\n\n> Section 2 on the NTP framework is not very helpful for a reader that has not read the previous paper on NTP. For a reader that has done so, the section feels redundant.\n\nThank you for pointing this out - we tried to hit a sweet spot between being self-contained and avoiding replicating the NIPS 2017 paper about NTPs, but it was not an easy task. We rephrased Section 2 and shortened the redundant subsections, as you suggested.\"}", "{\"title\": \"Improvements and related work\", \"comment\": \"Many thanks for your constructive criticism - we greatly appreciate your efforts.\n\n> The first is a speed-up through nearest neighbor search instead of a brute-force search. This is the most elaborated section out of the three, yet seems like the most trivial\n\nThe main focus of this work is making inference and learning in NTPs tractable. Prior to this work, training NTPs on large datasets was simply infeasible. Although it seems conceptually simple, we respectfully disagree that it is trivial to make NTPs\u2019 end-to-end differentiable proving mechanism efficient by exploring only the most promising part of the proof space, dynamically pruning the computation graph at construction time, while still retaining computational efficiency superior to the original model. Furthermore, we extensively tested our improvements, and supported them with a large experimental study.\n\nImportantly, this change enabled us to drastically increase the speed (by more than two orders of magnitude in the case of Kinship and UMLS, and many more for larger datasets) while significantly decreasing the memory footprint of the model. Consequently, this enabled the application of explainable NTPs to large-scale text-enriched data - something that was simply not possible beforehand.\n\nThis work is fundamentally different from Khot et al.\u2019s paper. Although they use contextual similarities as a pre-processing step for defining the structure of a Markov Logic Network, they do not make use of embeddings. In contrast, we use ANNS on the embedding representations of facts and rules for identifying the most promising proof paths during the dynamic computation graph construction (forward pass). 
As for the word embedding search restriction to neighbourhoods, that is never utilised for building the computation graph, but only for the post-hoc analysis.\n\nThe most related paper is [1], where Rae et al. use ANNS for computing a sparse attention distribution over memory entries. Their approach retains the representational power of the original memory networks, whilst training efficiently with very large memories. Similarly, our approach retains the expressiveness and the end-to-end differentiability of NTPs, while scaling to Knowledge Bases with millions of facts. \n\n[1] Rae et al., Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes, 2016\n\n> [..] unless the authors can provide an analytical bound on the loss in ntp score w.r.t the neighborhood size.\", \"it_is_not_trivial_to_provide_an_analytical_bound_on_nantp\": \"its derivation would depend on the characterisation of the approximation introduced by ANNS (still an open problem), and on the greedy proof path selection, which may not yield a globally optimal solution.\", \"ntps_follow_a_two_step_process\": \"i) given a query, they enumerate all possible proof paths, and ii) they compute a proof score for each proof path, returning the maximum proof score.\n\nAfter the proof path associated with the highest score is identified, the final score --- and its gradient w.r.t. the model parameters --- can be computed exactly, since \\nabla_\\theta max(\\rho_1, \\ldots, \\rho_n) = \\nabla_\\theta \\rho_i, where \\rho_i = max(\\rho_1, \\ldots, \\rho_n). We clarify this in the paper.\n\nExploring all possible proof paths is infeasible for large datasets, hence we propose using ANNS for greedily expanding only the most promising proof paths. This is motivated by observing that the proof score is given by the similarity between a goal and a fact or rule head: the higher the similarity, the higher the proof score.\", \"we_can_also_see_this_problem_in_relation_to_the_exploration_vs_exploitation_trade_off\": \"in Reinforcement Learning and optimisation it is fairly common to limit exploration to the most promising areas in the search space - instead of uniformly searching the whole search space - at the risk of missing out on high-reward regions. We analyse the cost of such a trade-off in our experiments, finding that our results are on par with - or sometimes better than - those of the original model.\n\nFurthermore, we added an analysis of the impact of using ANNS in comparison with exact NNS and random neighbour selection, finding that ANNS is directly comparable with exact NNS but significantly faster. We added this characterisation to Table 8 in the Appendix.\"}", "{\"title\": \"Ablations and aims\", \"comment\": \"Thank you for your constructive feedback.\nIt is great to hear that you find this line of work interesting.\n\n> Empirical performance on larger datasets needs further investigation.\n\nFirst and foremost, we would like to highlight that the main focus of this paper is not climbing the link prediction leaderboards, but rather pushing NTP (a promising but until now computationally infeasible model) into practice by scaling it to large datasets, yielding results comparable with standard Neural Link Predictors, a class of models that has been studied for nearly a decade [1, 2]. Unlike Neural Link Predictors, NaNTPs can learn interpretable rules, as well as provide explanations for a given prediction, as we demonstrate in the experimental section. 
Moreover, they allow incorporating domain knowledge in the form of logic rules.\\n\\n> No ablation study is performed so the effect of incorporating mentions and attention are unclear.\\n\\nFollowing your advice on in-depth evaluation, we ran additional ablation studies for both the benchmark datasets and the large datasets.\\n\\nTable 6 in the Appendix shows that using attention in NaNTP for learning rule representations yields higher average ranking accuracy and lower variance on Countries S1-3 and Kinship, while yielding comparable results on Nations and UMLS.\\n\\nIn Table 7, we report the ablation results on two larger datasets - WN18 (141k facts) and WN18RR (87k facts). In the case of WordNet, using attention for learning rules greatly increases the ranking accuracy. For instance, hits@10 increases from 83% to 94% in the case of WN18, and from 25% to 43% in the case of WN18RR.\\n\\nWe hypothesise that this is because the attention has a constraining effect, regularising representations of the rules inside the convex hull of the representations of predicates.\\n\\nFurthermore, Figure 2 shows the ablation study of both the effect of attention and the added textual mentions. Consistently with Table 6, NaNTP with attention yields higher ranking accuracy and lower variance for Countries S1-3. As for the effect of reasoning over text, using distinct encoders for predicates and mentions consistently improves the ranking accuracy in comparison of simply using mentions as additional relation types.\\n\\n> Baseline performance on FB15k-237 seems weak compared to the original papers\\n\\nThe lower baseline performance difference in neural link prediction baselines is mainly due to limiting the embedding size to 100 (d=100), and the number of training epochs to 100. These hyperparameters were used in the original NTP paper [3] and, for the sake of comparison to the original model, we decided to keep them fixed to the same values.\\n\\nFurthermore, exploring different embedding sizes for NaNTP was prohibitive due to a lack of computation resources. In NaNTPs after the ANNS index construction, the complexity of inference (and thus learning) grows logarithmically in the size of the Knowledge Base, and evaluating the ranking of each single test triple requires scoring all its possible corruptions (i.e. 82k triples on WN18): this is a very computationally expensive procedure even for neural link predictors.\\n\\nIn the case of FB15k-237, the experiment also involved the textual mentions proposed in [3]. We corrected this in the revised version of the paper.\\n\\n[1] Paccanaro et al., Learning Distributed Representations of Concepts using Linear Relational Embedding, IEEE Transactions on Knowledge and Data Engineering 2000\\n[2] Bordes et al., Translating Embeddings for Modeling Multi-relational Data, NIPS 2013\\n[3] Rocktaschel et al., End-to-end Differentiable Proving, NIPS 2017\\n[4] Toutanova et al., Representing text for joint embedding of text and knowledge bases, EMNLP 2015\"}", "{\"title\": \"Interesting direction but needs more discussion\", \"review\": \"[Summary]\\nThis paper scales NTPs by using approximate nearest neighbour search over facts and rules during unification. Additionally, the paper incorporates mentions as additional facts where the predicate is the text that the entities of the mention are contained in. The paper also suggests parameterizing predicates using attention over known predicates. 
The increments presented are reasonable and justified, but the experimental results, specifically on the larger datasets, warrant further investigation.\\n\\n[Pros]\\n- Reasonable and interesting increments on top of NTP.\\n- Scaling the approach to larger datasets is well motivated.\\n- Utilizing text is an interesting direction for NTP in terms of integrating it with past work on KG completion.\\n\\n[Cons]\\n- Empirical performance on larger datasets needs further investigation.\\n- No ablation study is performed so the effect of incorporating mentions and attention are unclear.\\n- Baseline performance on FB15k-237 seems weak compared to the original papers as well as more recent papers re-examining baselines for KG completion (http://aclweb.org/anthology/W17-2609). Is this due to the d=100 restriction, or were pretrained embeddings not used? Without further explanation, the claim that scores are competitive with SOTA seems unjustified, at least for FB15k-237 since the model performs significantly worse than the baselines which seem to be worse than previously reported.\\n\\n[Comments]\\n- For reproducibility: it is unclear whether evaluation in FB15k-237 is carried out on the KB+Text, KB, or Text portions of the dataset.\\n\\n[Overall]\\nIt\\u2019s great that NTP was scaled up to handle larger datasets, however further analysis is needed. The argument that performance is given up for interpretability needs more discussion, and the effect of each addition to the system should be discussed as well.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting results, slightly unbalanced presentation\", \"review\": \"The authors propose several techniques to speed up the previously proposed Neural Theorem Prover approach. The techniques are evaluated via empirical results on several benchmark datasets.\\n\\nLearning interpretable models is an important topic and the results here are interesting and valuable to the community. However, I feel that the paper in its current form is not yet ready for publication in ICLR, for the following reasons:\\n\\n1) The authors propose three improvements. The first is a speed-up through nearest neighbor search instead of a brute-force search. This is the most elaborated section out of the three, yet seems like the most trivial -- unless the authors can provide an analytical bound on the loss in ntp score w.r.t the neighborhood size. It is a standard and well-known technique to restrict the search to a neighborhood, widely used in any applications of word embedding (e.g. in Khot et el's Markov Logic Networks for Natural Language Question Answering). The attention mechanism (essentially reducing the model capacity) is also well-known but its effect in this particular framework is not properly elaborated. The same can be said for the use of mentions.\\n\\n2) The section on experiment results seems a bit rushed -- the authors did mention some last-minute discovery that may affect some of the presented results. The section can be a little hard to parse. In particular, it would be useful for the authors to focus on providing more insights on how the proposed techniques improve the results, and in what ways.\\n\\n3) Section 2 on the NTP framework is not very helpful for a reader that has not read the previous paper on NTP (in particular, the part on training and rule learning). 
For a reader that has done so, the section feels redundant.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting paper and contributions.\", \"review\": \"This paper proposes an extension of the Neural Theorem Provers (NTP) system that addresses the main issues of this method. The contributions of this paper make it possible to use this model on real-world datasets by reducing the time and space complexity of the NTP model.\", \"pro\": \"The paper is clear and well written and the contribution is relevant to ICLR. By combining the advantages of neural models and symbolic reasoning, NTP systems are a promising research direction. Even though the results presented are lower than those of previous studies, they have the advantage of being interpretable.\", \"cons\": \"I'm not convinced by the model used to integrate textual mentions. The evaluation proposed in section 6.3 proposes to replace training triples by textual mentions in order to evaluate the encoding module. However, it seems to me that, in this particular case, these mentions are very short sentences. This could explain why such a simplistic model that simply averages word embeddings is sufficient. I wonder if this would still work for more realistic (and thus longer) sentences.\", \"minor_issues\": \"-Page 1: In particular [...] (NLU) and [...] (MR) in particular, ...\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
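A note on the mechanism the NTP rebuttals above keep returning to: soft unification is restricted to the k nearest fact/rule-head embeddings (found via approximate nearest-neighbour search), and the gradient of the max proof score flows only through the winning proof path. The following is a minimal PyTorch-style sketch of that idea; every name in it (`goal_emb`, `fact_embs`, `k`, the RBF-style kernel) is an illustrative assumption, not the paper's actual API.

```python
import torch

def nn_unification_score(goal_emb, fact_embs, k=8):
    """Toy version of the pruning step discussed above: unify a goal
    only against its k nearest facts/rule heads in embedding space,
    then take the max proof score. Since d/dtheta max(rho_1..rho_n)
    equals d/dtheta of the winning rho, gradients flow through the
    single best-scoring branch.

    At scale, an approximate index (e.g. HNSW) would replace the exact
    search below; the real NTP recursion also handles rule bodies and
    substitutions, which this sketch omits.
    """
    # Exact nearest-neighbour search over all fact/rule-head embeddings.
    dists = torch.cdist(goal_emb.unsqueeze(0), fact_embs).squeeze(0)
    topk = dists.topk(k, largest=False).indices

    # RBF-style soft unification: closer embeddings give scores near 1.
    scores = torch.exp(-dists[topk] ** 2)
    return scores.max()
```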
B1MXz20cYQ
Explaining Image Classifiers by Counterfactual Generation
[ "Chun-Hao Chang", "Elliot Creager", "Anna Goldenberg", "David Duvenaud" ]
When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren't. We can sample plausible image in-fills by conditioning a generative model on the rest of the image. We then optimize to find the image regions that most change the classifier's decision after in-fill. Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image. Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods.
[ "Explainability", "Interpretability", "Generative Models", "Saliency Map", "Machine Learning", "Deep Learning" ]
https://openreview.net/pdf?id=B1MXz20cYQ
https://openreview.net/forum?id=B1MXz20cYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJxUUoj8g4", "S1edEowWgN", "r1lM-wD-xV", "rJxRJWdayV", "H1x2bX-p1V", "H1g195-CT7", "HJlz9u-RaQ", "HkguvdW0a7", "B1eKk_WRaX", "BklNCIGhh7", "SklSwVYo37", "SkephArKjX" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545153358151, 1544809263924, 1544808186292, 1544548581606, 1544520452275, 1542490758630, 1542490249796, 1542490207816, 1542490081247, 1541314252282, 1541276765279, 1540083381264 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1250/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/Authors" ], [ "ICLR.cc/2019/Conference/Paper1250/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1250/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1250/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Fan et al. is orthogonal to our work\", \"comment\": \"As we mentioned above, Fan et al. is orthogonal to our work. We highly recommend you to reread our manuscript to understand the scope of our work.\"}", "{\"title\": \"Provide comparison\", \"comment\": \"Fan et al. is used in saliency prediction and seems to achieve good accuracy as reported in other papers:\", \"https\": \"//openreview.net/forum?id=BJxbYoC9FQ\"}", "{\"metareview\": \"Important problem (explainable AI); sensible approach, one of the first to propose a method for the counter-factual question (if this part of the input were different, what would the network have predicted). Initially there were some concerns by the reviewers but after the author response and reviewer discussion, all three recommend acceptance (not all of them updated their final scores in the system).\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}", "{\"title\": \"Thanks for reading the rebuttal\", \"comment\": \"Good question. I think it's not just because it's not adversarially generated, since other heuristics infilling are also not trained to do so.\\n\\nI think in high dimensional datasets like images, the infilling has so much freedom to generate out-of-distribution inputs. Image inpainting algorithm explicitly train to restict the infilling to natural images, so it makes the infilling harder to find adversarial perturbations. [1] also has similar insights of using generative models that protects it from the adversarial attack. I also think it's also the reason for fewer artifacts of SSR than SDR since finding the evidence for 1 class have way less freedom than finding evidence for other 999 classes. \\n\\nThank you for reading the rebuttal. 
We will include this discussion in the version later.\\n\\n[1] Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models\", \"https\": \"//openreview.net/forum?id=BkJ3ibb0-\"}", "{\"title\": \"Thanks for the rebuttal...\", \"comment\": \"The rebuttal addresses some of the issues...\\n* Figure 11 now clearly shows that the proposed algorithm is not merely a combination of two existing approaches.\\n* Authors mentioned and discussed the limitation of this approach in Section 5.\\n\\nAfter reading the revised paper, I have additional comments.\\nIt is known that a classification network can be fooled by a small amount of (adversarial) noise. It is also true that an image inpainting algorithm inevitably synthesizes some artifacts. Then, why do artifacts rendered by the inpainting algorithm not severely corrupt the task that tries to find good regions for classification? Is it just because artifacts are not adversarially generated? It is not necessary but it would be great if the paper discusses this aspect as well.\"}", "{\"title\": \"Thank you for your thorough assessment of our work\", \"comment\": \"We thank you very much for your thorough review and for acknowledging that our proposed use of generative models is a sensible approach and its significance over the field. We believe that this conceptual contribution will help to progress interpretability beyond current limitations when most of the methods are still based on out-of-distribution inputs.\\n\\nWe respectfully disagree with your assessment of the paper as lacking technical originality, as our method is not combining two off-the-shelf methods. Integrating a powerful generative model into current saliency algorithms efficiently is non-trivial (and to our knowledge has not been done, whereas your review might suggest such an algorithm already exists). The difficulty of combining existing generative models with existing saliency algorithms BBMP is evidenced by a new ablation study in the revision (section 4.6). It shows that naive combination of BBMP and CA perform much worse than our method FIDO. By parameterizing a distribution over dropped-out input features, FIDO provides a principled and efficient way to integrate over sensible counterfactual inputs to determine the relevant input features to a rendered prediction.\\n\\nYou have raised an important point about the performance of this framework being upper bounded by the capacity of the in-filling generative model. We believe the reader will benefit from a discussion of this limitation, which we include in the revision (section 5). It is true that the ability to explain the classifier c(y|x) that learns p(y|x) is now somehow tied to the ability of the generative model g(x) to fit p(x). However, we strongly believe optimization-based saliency strategies that ignore or over-simplify p(x) (as existing methods do) are fundamentally misspecifying the counterfactual \\u201cwhat if\\u201d question that will yield an explanation. Moreover this upper bound on performance will increase in the future as generative models improve.\"}", "{\"title\": \"Thank you for your thorough assessment of our work\", \"comment\": \"We thank you for your thorough assessment of our work. You have concisely summarized the key contribution, and we agree with your explanation of how including a generative model of the input space allows FIDO to ask a more meaningful counterfactual question than existing approaches.\", \"in_response_to_your_specific_comments\": \"1. 
We totally agree that the ideal evaluation here would sample from the true conditional infilling distribution, so the fact that we instead sample from CA-GAN is a limitation and might lead to a preferable performance on FIDO-CA. However we still observe a win that using other generative models (Local and VAE) over heuristics (Mean, Blur and Random). This suggests that generative infilling can still identify more relevant pixels corresponding to the classifier. We include this limitation in our revision.\\n2. Extending BBMP for use with a generative in-filler is not natural since it optimizes over continuous masks in [0, 1] rather than parameters of discrete masks in {0, 1} so the mask does not partition features into observed/unobserved. But we implemented an attempt at this in the revision and describe the result below.\\n3. We believe the reader will also benefit from this ablation study. In section 4.6 of the revision we investigate whether BBMP could be improved by using the CA-GAN to do infill. We threshold the BBMP masks then in-fill with CA-GAN. We find that this approach---called BBMP-CA---remains susceptible to artifacts in the resulting saliency maps and is brittle w.r.t its threshold value. BBMP-CA performs worse on the quantitative metrics than FIDO-CA, and about on par with FIDO-Blur and FIDO-Random, which do not use expressive generative models. Therefore we believe that one must model a discrete distribution over masks (not a point estimate like BBMP) in order to leverage the expressivity of an in-filling generative model.\", \"in_response_to_your_minor_comments\": \"1. It is true that \\\\phi has an indirect dependence on \\\\hat x. But since \\\\hat x is dependent also on x and z as a random variable drawn from the generative models, we think \\\\phi(x, z) is still valid in this case. We make note of the stochasticity of \\\\phi in the revision.\\n2. We include all the true labels in the revision.\"}", "{\"title\": \"Thank you for your helpful review, clarification provided in the comment\", \"comment\": \"We thank you very much for your effort in assessing our work, and for pointing us to the workshop paper on weakly localized supervision by Fan et al. 2017. We suspect that any lack of clarity about our method---its motivation, novelty, and improvement relative to baselines---is due to a misunderstanding about the scope of our paper and its key contribution. We hope to clarify this here and explain why Fan et al. 2017 is not a suitable baseline.\\n\\nOur goal is to explain the prediction produced by a differentiable classifier (that has been previously trained and whose weights are frozen) on a new test input x\\u2019. We formulate this as a search for features of x\\u2019 that change the classifier prediction significantly when they are marginalized out in a probabilistic framework. By contrast BBMP also searches over masks in continuous [0,1] to a point estimate and infills with heuristics (rather than marginalizing). This makes this method susceptible to artifacts in the computed saliency since it produces an explanation that relies on out-of-distribution (o.o.d.) inputs, where the classifier behavior isn\\u2019t well specified. Our key technical differences with BBMP (see blue text in figure 5) are firstly using Bernoulli distribution over masks, and secondly use an expressive generative model for efficient marginalization. 
These are novel to our knowledge, and neither of these differences is workable alone using existing algorithms; we add a new ablation study in the revision to emphasize this. \n\nMeanwhile, Fan et al. seek to solve weakly supervised localization (WSL) of objects in images using adversarial training. The goal of WSL is to locate the object, not to explain a pre-trained classifier; Fan et al. 2017 include a classifier in their model, but this classifier\u2019s weights are trained by their algorithm. We do not train the classifier, since we are trying to explain its predictions. Also, Fan et al. use background infilling rather than a strong generative model. It is possible that a generative infilling model could be trained jointly with a classifier for improved WSL relative to Fan et al., but that is orthogonal to the goal and scope of our work.\n\nDespite common usage within saliency map papers, WSL is not a fully satisfactory evaluation for saliency map algorithms. Firstly (related to the above point), saliency map algorithms attempt to explain a known classifier rather than predict object localizations. For example, if the classifier ignores the object and classifies based on contextual information, then the correct saliency map should score poorly on WSL because it will also ignore the object. Nevertheless, for completeness, we evaluated FIDO on this task. There are some other shortcomings of WSL as a saliency metric that we can discuss if you are curious. All of this is motivation for the \u201csaliency metric\u201d proposed by Dabkowski and Gal 2017, which we also evaluate. In the revision we also compare against a larger class of baseline models (Grad, DeconvNet, GradCAM) along both metrics, which we hope addresses your concern about how our model compares with other methods from the literature.\"}", "{\"title\": \"Revision of the paper\", \"comment\": [\"We thank each of the reviewers for their thoughtful comments, which have helped us to improve the paper in the latest revision. We made the following changes:\", \"We include a new ablation study to help the reader better understand the importance of each technical contribution. The specific goal is to understand whether BBMP, the method most closely related to ours, could be improved with CA-GAN infilling; we find that BBMP+CA-GAN substantially underperforms relative to FIDO+CA-GAN. This suggests that the FIDO framework is necessary to leverage expressive generative models in the interpretation of classifiers. We discuss this experiment in further detail below in the response to AnonReviewer2.\", \"For our quantitative evaluations (weakly supervised localization and Dabkowski & Gal 2017\u2019s \u201csaliency metric\u201d) we evaluate three additional baseline models. These are Gradient-based class saliency (Simonyan et al. 2013), DeconvNet (Springenberg et al. 2014), and GradCAM (Selvaraju et al. 2016).\", \"We expand our discussion to better describe how the FIDO framework depends on the capacity of the generative model.\", \"We confirm our original findings with increased statistical confidence by evaluating over the entire validation set (50k images). We note there is a discrepancy between our WSL performance and what Dabkowski and Gal 2017 reported. We tried to resolve it by communicating with the authors, but unfortunately they were unable to provide either the evaluation code or the original model they used. 
However, for completeness, we still compare with this model.\", \"Ground truth labels are now displayed beside the images for the qualitative comparison of saliency maps.\", \"In the supplement we show how the batch size of mask samples M affects the saliency computed by FIDO-CA. Performance degrades with small batch sizes (< 4).\", \"We include additional qualitative examples in the supplementary material.\"], \"we_summarize_our_key_contributions\": [\"We propose a novel framework, called FIDO, for explaining classifier decisions that efficiently searches for explanations that respect the distribution of the input data, using a generative model.\", \"We show that incorporating strong generative models reduces artifacts substantially and provides more relevant pixels of explanation. This addresses the common shortcoming of existing methods that use out-of-distribution (o.o.d.) data, which leads to increased artifacts, as shown in our experiments.\", \"We quantitatively show that generative models perform better than heuristic infilling on two widely-used evaluation methods. We also extensively compare with the recent literature.\", \"We also show that SDR (used by Fong & Vedaldi, 2017) is prone to a much higher degree of artifacts than SSR, qualitatively.\", \"The individual concerns of each reviewer will be addressed in the comments below. Please let us know if you have additional comments, and if there are particular revisions that would increase your assessment of our paper.\"]}", "{\"title\": \"Unclear improvement over state-of-the-art saliency map extractors\", \"review\": \"This paper introduces a new saliency map extractor to visualize which input features are relevant for a deep neural network to recognize objects in an image. The proposed saliency map extractor searches over a big space of potentially relevant image features and in-fills the irrelevant image regions using generative models.\n\nThe algorithmic machinery in the paper is poorly justified, as it is presented as a series of steps without providing much intuition as to why these steps are useful (especially compared to previous works). Also, I would like to know how this paper compares to Fan et al. \\\"Adversarial localization network\\\" (NIPS workshop, 2017), which has not been cited and proposes similar ideas.\n\nAlso, the results are not convincing. Only one previous work (among many) has been compared with the proposed algorithm, and the qualitative examples are not enlightening in showing the advantages of the introduced saliency map extractor. 
What are the new insights into the functioning of deep networks that were gained from the proposed saliency map extractor?\n\nIn summary, it is unclear to me if there is any novelty in the approach (missing references, lack of motivation of the algorithm) and if the results show any improvement over previous works (only one previous work has been compared and the qualitative examples do not show anything particularly interesting).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Very well-written paper that introduces two important innovations to the problem of interpreting black-box NNs\", \"review\": \"The paper is aimed at answering the following question: \\\"for model M, given an instance input and a predicted label, what parts of the input are most relevant for making M choose the predicted label?\\\".\nThis is far from the first paper aimed at answering this question, but it makes important innovations to the best of my knowledge. The most important one is proposing a stronger approach to the counterfactual question \\\"had this part of the input been different, what would have been the output?\\\". Because the input can be different in many ways, an important question is in what specific way it would have been different. \n\nSpecifically in the domain of images, most models assume a blurring or simple local in-painting approach: \\\"if this patch were just a blurry average, what would have been the output?\\\". However, as the current paper correctly points out, blurring or other simple in-painting methods lead to an image which is outside the manifold of natural images and outside the domain of the training set. This can lead to biased or inaccurate results. \n\nThe paper therefore proposes two innovations on top of existing methods, most closely building on work by Fong & Vedaldi (2017): \n(1) Optimizing an inference network for discovering image regions which are most informative\n(2) Using a GAN to in-paint the proposed regions, leading to a much more natural image and a more meaningful counterfactual question.\n\nThe presentation is crisp, especially the pseudo-code in Figure 5. In addition, the paper includes several well-executed experiments assessing the contributions of different design choices on different metrics and making careful comparisons with several recent methods addressing the same problem.\", \"specific_comments\": \"1. In sec. 4.5, the comparison is not entirely fair because FIDO was already trained with CA-GAN, and therefore might be better adapted for it.\n2. Related to the point above: could one train BBMP with a CA-GAN in-painting model?\n3. I would have liked to see an ablation experiment where either one of the two innovations presented in this paper is missing.\", \"minor\": \"1. In eq. (2), wouldn't it be more accurate to denote it as \\phi(x,z,\\hat{x})?\n2. I would like to know the true labels for all the examples presented in the paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A paper with good motivation and results although the novelty and justifications are somewhat lacking.\", \"review\": \"Summary: This paper aims to find the regions that are important for classifying an image. The main algorithm, FIDO, is trained to find a saliency map based on SSR or SDR objective functions. 
The main novelty of this work is that it uses generative models to in-fill the regions masked out by SSR or SDR. As such, compared to existing algorithms, FIDO can synthesize more realistic samples to evaluate.\n\nI like the motivation of this paper since existing algorithms have clear limitations, i.e., using out-of-distribution samples. This issue can be addressed by using a generative network as described in this paper.\n\nHowever, I think this approach yields another limitation: the performance of the algorithm is bounded by the generative network. For example, let\u2019s assume that a head region is important for classifying birds. Also assume that the proposed algorithm somehow predicts a mask for the head region during training. If the generative network synthesizes a realistic bird from the mask, then the proposed algorithm will learn that the head region is a supporting region for SSR. In the other case, however, the rendered bird is often not realistic and is classified incorrectly. Then, the algorithm will seek other regions. As a result, the proposed method interprets a classifier network conditioned on the generative network parameters. The authors did not give these issues much attention in the paper.\n\nAlthough the approach has its own limitation, I still believe that the overall direction of the paper is reasonable. This is because I agree that using a generative network to in-fill images to address the motivation of this paper is the best option we have at the current moment. In addition, the authors report a satisfactory amount of experimental results to support their claim.\", \"quality\": \"The paper is well written and easy to follow.\", \"clarity\": \"The explanations of the approach and experiments are clear. Since the method is simple, it also seems that it is easy to reproduce their results.\", \"originality\": \"The authors apply off-the-shelf algorithms to improve performance on a known problem. Therefore, I think there is no technical originality except that the authors found a reasonable combination of existing algorithms and a problem.\", \"significance\": \"The paper has a good motivation and deals with an important problem. Experimental results show improvements. Overall, the paper has some amount of impact in this field.\n\nPros and Cons are discussed above. As a summary,\", \"pros\": [\"Good motivation.\", \"Experiments show qualitative and quantitative improvements.\"], \"cons\": [\"Lack of technical novelty and justification of the approach.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
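The FIDO thread above repeatedly contrasts Bernoulli masks plus generative in-filling with BBMP's continuous masks and heuristic fill. The sketch below illustrates one SSR-style update under stated assumptions: `infill` stands in for a pretrained inpainter (e.g. a CA-GAN), `classifier` is assumed to return log-probabilities over classes, `mask_logits` is a leaf tensor with requires_grad=True, and the temperature and sparsity constants are arbitrary. None of this is the authors' released code.

```python
import torch

def fido_ssr_step(x, target, mask_logits, infill, classifier, lam=1e-3):
    """One SSR-style step: look for a small region which, when kept
    (with everything else replaced by a generative in-fill), preserves
    the classifier's evidence for `target`.
    """
    # A relaxed Bernoulli sample keeps mask_logits differentiable; FIDO
    # proper averages the objective over a batch of sampled masks.
    mask = torch.distributions.RelaxedBernoulli(
        temperature=torch.tensor(0.1), logits=mask_logits).rsample()

    # Counterfactual input: keep the masked-in pixels, in-fill the rest.
    x_hat = mask * x + (1 - mask) * infill(x, mask)

    keep_evidence = classifier(x_hat)[:, target].mean()
    loss = -keep_evidence + lam * mask.sum()  # small region, high evidence
    loss.backward()
    return loss.item()
```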
HJlQfnCqKX
Predicting the Generalization Gap in Deep Networks with Margin Distributions
[ "Yiding Jiang", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ]
As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held-out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how the generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of the margin distribution, which is the set of distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of the margin distribution rather than just the margin (the closest distance to the decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.
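A rough sketch of the pipeline this abstract describes (normalized margins at several layers, summarized by distributional statistics, then a linear fit in log space against the measured generalization gap) could look as follows. Everything here is synthetic: the quartile feature set is only one of the characterizations the paper considers, the margin samples are drawn from an arbitrary distribution, and the planted log-linear relation exists only so the regression has something to recover. This illustrates the regression setup, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def margin_signature(margins):
    """Quartile summary of one model's (normalized, positive) margin samples."""
    return np.percentile(margins, [25, 50, 75])

# Hypothetical stand-ins: margin samples at 4 probed layers for each of n
# trained models, plus each model's measured generalization gap.
n = 50
signatures = np.stack([
    np.concatenate([margin_signature(rng.gamma(2.0 + 0.02 * i, 1.0, size=1000))
                    for _ in range(4)])
    for i in range(n)
])
# Planted synthetic relation: gap shrinks as the (log) median margin grows.
gaps = 0.5 - 0.1 * np.log(signatures[:, 1]) + rng.normal(0.0, 0.01, size=n)

# Log transform: a linear fit on log-signatures is effectively a fit to a
# product (rather than a sum) of margin statistics, as the abstract notes.
X = np.concatenate([np.log(signatures), np.ones((n, 1))], axis=1)
coef, *_ = np.linalg.lstsq(X, gaps, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((gaps - pred) ** 2) / np.sum((gaps - gaps.mean()) ** 2)
print(f"coefficient of determination of the fit: {r2:.3f}")
```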
[ "Deep learning", "large margin", "generalization bounds", "generalization gap." ]
https://openreview.net/pdf?id=HJlQfnCqKX
https://openreview.net/forum?id=HJlQfnCqKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1xq2DCXe4", "BylGPEVq07", "Ske1HVNc07", "HygHuQVq07", "SylJSZ_STX", "B1gSCYhmTX", "S1e5qVsQaQ", "Hkl7VhOqhX", "H1xTRJmc3m", "HkgvJXxt2Q", "BJgDuN3On7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544968114215, 1543287898415, 1543287862891, 1543287661374, 1541927222845, 1541814733160, 1541809297905, 1541209131350, 1541185493301, 1541108446853, 1541092462774 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1249/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "ICLR.cc/2019/Conference/Paper1249/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "ICLR.cc/2019/Conference/Paper1249/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1249/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1249/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper suggests a new measurement of layer-wise margin distributions for generalization ability. Extensive experiments are conducted. Though there lacks a solid theory to explain the phenomenon. The majority of reviewers suggest acceptance (9,6,5). Therefore, it is proposed as probable accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A layer-wise geometric margin distribution is used to calibrate the generalization ability, with extensive experimental support yet lacking a theory.\"}", "{\"title\": \"Addressing your comments (contd.)\", \"comment\": \"#If you do a regression analysis on a five layers cnn, can you have a good prediction on a nine layers cnn (or even residue cnn)#\\nIn the Appendix (Section 9.1 and 9.2), we already show both cross-architecture and cross-dataset comparisons, which achieve good predictive accuracy but worse than the result on a single architecture. However, when we tried using the result from cnn alone to predict the generalization gap of residual network or vice versa (not included in the paper), the result does not signify any interesting correlation. Nevertheless, we would like to emphasize that the regression is shared (and gives an accurate prediction) across other significant changes such as channel sizes, batchnorm/group norm, regularization, learning rate, dropout change (presented in appendix section 6)\\n\\n# Novelty #\\nAs you correctly pointed out, our work and Barlett et. al. build on the broad notion of \\u201cmargin distribution\\u201d and \\u201cnormalization\\u201d. However, there are significant differences:\\n1. Bartlet\\u2019s definition of margin relies only on f_i-f_j, which only reflects margin in the output space, as opposed to (f_i-f_j)/||d/dx f_i - d/dx f_j|| which approximates margin in input (or any hidden) space.\\n2. The normalization used in Bartlett et al. is a complexity measure which is drastically different from our normalization that captures more direct geometric properties of the activations. Specifically, Bartlett\\u2019s normalization relies on the spectral complexity of a network which involves spectral norm of weight matrices and reference matrices. 
In our work, the normalization is defined based on the total variance of the activations of the hidden layers directly (Eqs 4 and 5). \\n3. Bartlett et al. do *not* show any linear relationship between margin and test performance or gap. \\nThe above distinctions lead to very different predictions of the generalization gap, as shown in our results (Figure 2 and Table 1). In fact, the choice of distributional features and normalization scheme are crucial for accurate prediction of the generalization gap.\\n\\nFurthermore, we note again that the normalization scheme of Bartlett et al. cannot be used as-is for residual networks and is not applicable to hidden layers, a drawback not present in our normalization. Finally, we have conducted a far larger scale of experiments as compared to Bartlett et al. to verify the effect of each prediction scheme of the generalization gap. As we mentioned in our response to reviewer 1, we will be releasing the 700+ realistic models we used in the paper as a dataset where researchers can easily test theories on generalization, which is one of the first of its kind. \\n\\nRegarding Liao et al. (2018), as stated in the paper, their proposed normalized loss leads to a significant *decrease* in output margin confidence, which is the opposite of what is desirable. Furthermore, a normalized cross-entropy loss is different from a margin-based loss, so we do not think their observation takes away the novelty of our paper just because both works illustrate linearity.\"}", "{\"title\": \"Addressing your comments.\", \"comment\": \"Thank you for the review. We address your concerns below.\\n\\n#What benefit can be acquired when using the geometric margin defined in the paper?#\\nThe geometric distance is the actual distance between a point \\u201cx\\u201d and the decision boundary f(x)=0, i.e. d1 = min_delta ||delta|| s.t. f(x+delta)=0. This term is usually used in contrast to the functional distance, defined as d2=f(x). If x is on the decision boundary, d1=d2=0, but otherwise d1 and d2 can differ. Note that d2 can change by simple reparametrization. For instance, consider a linear decision boundary f(x)=w.x. In this case, the geometric distance is d1=f(x)/||w|| and d2=f(x). Let F(x)=(c*w).x, i.e. just scaling the weights by a factor c. This does not change the decision boundary. For such F, d1 remains the same, but d2 scales with c. One can force a condition to make the margins equal in both scenarios, by making the closest point to the decision boundary have distance 1. However, this requires introducing an inequality per point, similar to SVMs. With the geometric margin, we can work with an unconstrained optimization and directly apply gradient descent or SGD.\\n\\n#Why does normalization make sense?#\\nOur normalization allows direct analysis of the margins across different models with the same topology (or across different datasets with the same network), which is otherwise difficult due to the positive homogeneity of ReLU networks. For example, suppose we have two networks with exactly the same weights, and then in one of the networks we scale weight_i by a constant positive factor c and weight_{i+1} by 1/c (i is a layer index). The predictions of the two networks remain the same; however, their unnormalized margin distributions will be vastly different, while the normalized versions will be exactly the same.\\n\\n#Why can the middle layer margin help?#\\nThere is no reason we can assume a priori that maximizing only the input or output margin (for example) is enough for good generalization. 
As shown in our ablation results in Tables 1 and 4, the combination of multiple layers performs significantly better. If we cut a deep network at any stage, we can treat the first half of the network as a feature extractor and the second half as the classifier. From this perspective, the margins at a middle layer can be just as important as the margins in the output layer or input layer. Lastly, we note that Elsayed et al. show that optimizing the margin at multiple layers provides significant benefits for generalization and adversarial robustness. \\n\\n#Why a linear (or log-linear) relation between the statistic and the generalization gap?#\\nWe are not claiming this is the true relationship between the statistics and the generalization gap. The true relationship may very well be nonlinear, and one could perform a nonlinear regression to predict the gap, but it would need regularization and more data to avoid overfitting, while a linear combination of simple distributional features already attains high-quality prediction (according to CoD, k-fold cross-validation, and MSE) across 700+ pretrained models. This suggests that a linear relationship is indeed a very close *approximation*.\\n\\n#I don't think your comparison with Bartlett's work is fair. Their bounds suggest the gap is approximately Prob(0<X<\\\\gamma) + Const/\\\\gamma for a chosen \\\\gamma, where X is the normalized margin distribution. I think using the extracted signature from the margin distribution and a linear predictor doesn't make sense here.#\\nWe assume the reviewer is referring to Theorem 1.1 of Bartlett et al. If one computes the gap as the quantity inside the soft big-O, the result will be much larger than the error of our prediction, and it will require picking appropriate gamma and delta values. We further note the following: the case study of Bartlett et al. (Section 2) explicitly shows in its diagrams (Figures 2 and 3) the normalized distribution as evidence of generalization prediction power (instead of the bound itself), and this normalized distribution is closely related to, but is not exactly, their bound (they drop the log terms); extracting the statistics in a sense quantifies their case study. Before submitting the paper, we also had personal communication with one of the authors of Bartlett et al., and the author agreed that our comparison was fair.\"}", "{\"title\": \"Summary of revisions and responses to all reviewers.\", \"comment\": \"We thank all the reviewers for their comments, suggestions, and questions. We have responded to each reviewer\\u2019s individual comments below. We have modified the paper as follows to address common questions posed by the reviewers:\\n\\n1. Using negative examples: we have added linear fits to both test accuracy and generalization gap and shown comparisons with and without negative examples. Table 3 in Appendix 7 (page 13) shows these results. We see that using negative margins predicts accuracy better than the generalization gap. However, as noted above, we chose to predict the generalization gap, and in that case a log relationship provides a much stronger prediction, but the log transform cannot use negative margin values. \\n2. To answer R2's question about the importance of hidden layers, we show in Table 4, Appendix 7, the results of fitting every single layer and compare to fitting all layers together. No single layer (input, hidden, or output) performs as well as the combination. 
We also provide intuition for why it is important from a theoretical perspective to use margins at hidden layers (Section 3).\", \"we_have_added_to_the_main_body_or_appendix_of_the_paper_a_few_smaller_edits\": \"1. typos identified by R1 (Eq. 4)\\n2. more compact notations for Table 1\", \"clarifying_explanations\": \"1. Why we choose to discard negative margins (Sec. 3.1)\\n2. Why we use both a linear and log regression model (Sec. 3.3)\\n3. Mean square error computations (Tables 1, 3, and 4)\\n4. Why we chose evenly spaced layers for our margin computations (end of Section 3.2)\\n5. Added references suggested by reviewers and the commenter.\\n\\nLastly, we will release all the trained CIFAR-10 and CIFAR-100 models. We hope this work, along with the model dataset, will open up interesting avenues for future research.\\n\\nWe hope the rebuttal and revision have addressed the reviewers\\u2019 questions and comments. \\n\\nThank you!\"}", "{\"title\": \"An empirical study towards the prediction power based on the margin distribution at each layer.\", \"review\": \"The author(s) suggest using the geometric margin and layer-wise margin distribution from [Elsayed et al. 2018] for predicting the generalization gap.\\n\\npros,\\na). The authors show large-scale experiments to support their argument.\\n\\ncons,\\na). No theoretical verification (nor convincing intuition) is provided, especially for the following questions:\\n i) what benefit can be acquired when using the geometric margin defined in the paper?\\n ii) why does normalization make sense beyond the simple scaling-free reason? For example, spectral complexity as a normalization factor in [Bartlett et al. 2017] is proposed from the fact that the Lipschitz constant determines the complexity of the network space.\\n iii) why can the middle layer margin help? \\n iv) why a linear (or log-linear) relation between the statistic and the generalization gap?\\n\\nFurther questions on the experiments:\\ni) I don't think your comparison with Bartlett's work is fair. Their bounds suggest the gap is approximately Prob(0<X<\\\\gamma) + Const/\\\\gamma for a chosen \\\\gamma, where X is the normalized margin distribution. I think using the extracted signature from the margin distribution and a linear predictor doesn't make sense here.\\nii) If you do a regression analysis on a five-layer CNN, can you have a good prediction on a nine-layer CNN (or even a residual CNN)?\\n\\nFinally, I'm not sure the novelty is strong enough, since the margin definition comes from [Elsayed et al. 2018] and the strong linear relationship has been shown in [Bartlett et al. 2017, Liao et al. 2018], though in different settings.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novelty, Experiments, Technical Details\", \"comment\": \"We thank you for your insightful review.\\n\\n## NOVELTY ##\", \"r2\": \"\\u201cThe fact that normalized margins are correlated with generalization was shown in Bartlett Fig 1\\u201d.\\n\\nAs you pointed out, both works build on the broad notion of \\u201cmargin distribution\\u201d and \\u201cnormalization\\u201d. However, there are significant differences:\\n1. Margin in Bartlett uses f_i-f_j, which can only reflect output margins, as opposed to (f_i-f_j)/||d/dx f_i - d/dx f_j||, which works for any layer.\\n2. 
We do not use the margin distribution itself to predict the generalization gap, but rather distributional features that involve a \\u201cnonlinear transform\\u201d of the distances (quartiles or moments).\\n3. Bartlett\\u2019s normalization uses norms of weight matrices, which is drastically different from the geometric spread of activations (variance) that we use (Eqs 4 and 5). Also, theirs cannot be used as-is for residual networks, a drawback not present in our normalization. \\n\\nThese distinctions result in very different predictions of the generalization gap, as clearly shown in our Fig 2 and Table 1. In fact, the choice of distributional features and normalization are crucial for accurate prediction of the generalization gap.\\n\\nFinally, we have conducted a far larger scale of experiments, and will be releasing the 700+ realistic models used in the paper so that researchers can easily test generalization theories. This is the first of its kind. \\n\\n\\n## TECHNICAL ##\\n\\n# Missing Absolute Value in Eq (3) #\\n\\nThere is no incorrectness; we deliberately adopt a \\u201csigned distance\\u201d. The polarity reflects which side of the decision boundary the point is on. Even Eq (7) of Elsayed et al., which you mentioned, quickly evolves into a signed distance in their Eq (8).\\n\\n# Why Negative Distance Implies Misclassification #\\n\\nIt was our oversight not to mention that \\u201ci\\u201d in our Eq (3) corresponds to the ground truth label. We will clarify this in the final version. In this case, f_i-f_j>0 (i.e. the distance is positive) implies correct classification and f_i-f_j<0 implies misclassification. \\n\\n# Why Negative Points are Ignored #\\n\\nWe indeed investigated using negative distances. We observed that:\\n\\n1. Modern deep architectures often achieve near-perfect classification on training data. Hence, the contribution of negative distances to the full distribution is negligible in most trained models.\\n\\n2. A small fraction of models do have notable misclassification (due to data augmentation or heavy regularization). For these models, we found that the margin distribution computed with only positive samples predicted the generalization gap better than (or on par with) the full distribution. However, we observed that the latter is indeed a better predictor of test accuracy (just not the gap). Since we focus our narrative on the generalization gap, we decided to omit these results from the main paper; however, we will include them in the appendix.\\nWe also note that there is no technical problem in using a margin distribution with only positive samples; e.g. Bartlett\\u2019s work \\u201cThe Sample Complexity of Pattern Classification with Neural Networks\\u201d develops a generalization bound from such samples (paragraph above their Theorem 2).\\n\\n\\n## EXPERIMENTS ##\\n\\n# Why 4 Layers and Why Even Spacing #\\n1. This leads to a fixed-length signature vector, hence agnostic to the architecture and depth.\\n2. Computing the signature across all layers is expensive for large deep models.\\n3. A larger signature would require more pre-trained networks to avoid overfitting in the regression phase. Given that each pre-trained network is only one sample in the regression task, creating a large pool of models is prohibitively expensive. Our study with 700 realistically sized pre-trained networks is perhaps already beyond the common practice for such empirical analysis. \\n4. 
The even spacing is merely a natural choice of minimal commitment and already achieves near-perfect prediction (CoD close to 1) in some scenarios. However, it is possible to examine other configurations.\\n\\n# Log/Linear #\\nWe are not sure if we understand the question. We provide an answer below, but if this is not what you meant, please let us know. We investigate the use of signature components in two ways: 1. directly as the input to the linear regression; 2. applying an element-wise log to them before using them as input to the linear regression. In either case, the regression remains linear in the optimization variables, but with the log transform we effectively regress the product of signature components to the gap value.\\n\\n# Other Criteria (MSE, AIC, etc.) #\\nWe have pointed out that the coefficient of determination already captures the MSE along with the scale of the error; however, for completeness, we will include this result in the appendix. We report k-fold cross-validation results as well, which are known to be asymptotically equivalent to AIC (Stone, M. (1977), An asymptotic equivalence of choice of model by cross-validation and Akaike\\u2019s criterion).\"}", "{\"title\": \"Addressing your comments\", \"comment\": \"We would like to thank you for your review and suggestions. We are very glad that you liked the empirical analysis of the generalization gap and margin distribution statistics. On that note, while not mentioned in the paper, we are preparing to release the 700+ models we used in the paper as a dataset where researchers can easily test theories on generalization. We believe this will be one of the first datasets for studying generalization on realistic and modern network architectures, and we hope it will be instrumental in ongoing generalization research.\\n\\n\\n## Construction of Signature from Pairwise Distances (i,j) in Eq (5) ##\\n\\nFor computational efficiency, we pick the ground-truth label as \\\"i\\\" (as you correctly pointed out) and the highest non-ground-truth logit as \\\"j\\\", and compute the distance between the two classes. While aggregating all pairwise distances might be more comprehensive, the complexity scales roughly quadratically with the number of classes. As such, we made the design choice to use the top two classes. In cases where the class with the highest logit is not the ground truth (hence a misclassification with negative distance), we discard the data point. We mention this detail in the text, but we will make sure it is clearer.\\n\\n\\n## Notation (i,j) instead of {i,j} to Emphasize Orderedness ##\\n\\nThank you for the suggestion. We agree and will incorporate this in the revision to avoid confusion.\\n\\n\\n## Why Only Positive Distances in Margin Distribution ##\\n\\nYou are right that when \\u201ci\\u201d is the ground truth label, the sign of the distance indicates whether the point is correctly classified or is misclassified. \\n\\nWe indeed investigated using negative distances when computing the margin distribution. We observed that:\\n\\n1. Modern deep architectures often achieve near-perfect classification on training data. Hence, the contribution of negative distances to the full distribution is negligible in most trained models.\\n\\n2. A small fraction of models do have notable misclassification (due to data augmentation or heavy regularization). 
For these models, we found that the margin distribution computed with only positive samples predicted the generalization gap better than (or on par with) the full distribution. However, we observed that the latter is indeed a better predictor of test accuracy (just not the gap). Since we focus our narrative on the generalization gap, we decided to omit these results from the main paper; however, we will include them in the appendix.\\nWe also note that there is no technical problem in using a margin distribution with only positive samples; e.g. Bartlett\\u2019s work \\u201cThe Sample Complexity of Pattern Classification with Neural Networks\\u201d develops a generalization bound from such samples (paragraph above their Theorem 2).\\n\\n\\n## Typo ##\\n\\nThank you for pointing out the typo. It will be fixed in the revision.\"}", "{\"title\": \"A nice empirical paper with good intuitions and encouraging results\", \"review\": \"This paper does not even try to propose yet another \\\"vacuous\\\" generalization bound, but instead convincingly shows, empirically, an interesting connection between the proposed margin statistics and the generalization gap, which could well be used to provide some \\\"prescriptive\\\" insights (per Sanjeev Arora) towards understanding generalization in deep neural nets.\\n\\nI have no major complaints but for a few questions regarding clarifications:\\n1. From Eq.(5), such distances are defined for only one out of the many possible pairs of labels. So when forming the so-called \\\"margin signature\\\", how exactly do you compose it from all such pair-wise distances? Do you pool all the distances together before computing the statistics, or do you aggregate individual statistics from pair-wise distances? And how do you select which pairs to include or exclude? Are you assuming \\\"i\\\" is always the ground-truth label class for $x_k$ here?\\n\\n2. In Eq.(3), the way you define the distance (that flipping i and j would change the sign of the distance) implies that {i, j} should not be viewed as an unordered pair, in which case a better notation might be (i, j) (i.e. replacing sets \\\"{}\\\" with tuples \\\"()\\\" to signal that order matters).\\n\\nAnd why do you \\\"only consider distances with positive sign\\\"? I can understand doing this for when neither i nor j corresponds to the ground-truth label of x, because you really can't tell which score should be higher. But when i happens to be the ground-truth label, wouldn't a positive distance and a negative distance be meaningfully different, and therefore wouldn't it only be beneficial to include both of them in the margin samples?\", \"and_a_minor_typo\": \"In Eq.(4), $\\\\bar{x}_k$ should have been $\\\\bar{x}^l$?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Well written; technically weak\", \"review\": \"After the author response, I have increased my score. I'm still not 100% sure about the interpretation the authors provided for the negative distances.\\n\\nThe paper is well written and is mostly clear. (The 1st line on page 4 has a typo; \\\\bar{x}_k in eq (4) should be \\\\bar{x}^l?)\", \"novelty\": \"I am not sure whether the paper adds anything significant on top of what we know from Bartlett et al., Elsayed et al., since:\\n\\n(i). The fact that \\\"normalized\\\" margins are strongly correlated with the test set accuracy was shown in Bartlett et al. (Figure 1). 
A major part of the definition comes from there or from the reference they cite; \\n(ii). The Taylor approximation to compute the margin distribution is in Elsayed et al.; \\n(iii). I think the four points listed on page 2 (which make the distinction from related work) are misleading: the way I see it is that the authors use the margin distribution from Elsayed et al., which simply overcomes some of the obstacles that norm-based margins may face. The only novelty here seems to be that the authors use the margin distribution at each layer.\", \"technical_pitfalls\": \"Computing the d_{f,x,i,j} using Equation (3) is missing an absolute value in the numerator, as in Equation (7) of Elsayed et al. The authors interpret the negative values as misclassification: why is this true? The margin distribution used in Bartlett et al. (below Figure 4 on page 5 in arxiv:1706.08498) uses labeled data, and in that case it is obvious that negative values should be interpreted as misclassification. I don't see how this is true for eq (3) here in this paper. Secondly, why are negative points ignored?? Misclassified points in my opinion are equally important; ignoring the information that a point is misclassified doesn't sound like a great idea. How do the experiments look if we don't ignore them?\", \"experiments\": \"Good set of experiments. However, I find that the results mildly undercut the claims the authors made in the four points listed on page 2: Section 4.1, \\\"Empirically, we found constructing this only on four evenly-spaced layers, input, and 3 hidden layers, leads to good predictors.\\\". How can the authors explain this?\\n\\nBy using linear models, the authors implicitly assume that the relationship between generalization gaps and signatures is linear (in Euclidean or log space). However, from the experiments (Table 1), we see that log models always have better results than linear models. Even assuming a linear relationship, I think it is informative to also provide other metrics such as MSE, AIC, BIC, etc.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks for comments\", \"comment\": \"Thank you for your helpful comments.\\n\\n### References ###\\n\\n1. We agree that the interaction of margin and generalization has been the subject of a great amount of research in the classical ML literature. This makes it impossible to provide a comprehensive survey in a conference paper, so we had to narrow the scope of related works to recent papers that address generalization/margin in the case of *deep* models. Nonetheless, we will be happy to include the references on SVMs and clustering that you suggested.\\n\\n2. Regarding the other ICLR 2019 submission you mentioned, we were naturally not aware of it prior to the ICLR submission deadline (and it is not available on arxiv either). We are now aware of that submission, but it seems to have some issues (per the comments on that paper).\\n\\n### Linear Assumption ###\\n\\n1. Regarding your suspicion of a linear relationship between margin and generalization gap: we are not directly relating the two using a linear map. Note that we are converting the margin distribution to a feature vector via a nonlinear map (quartiles/moments), and it is these features that are regressed onto the generalization gap by a linear map. This is a widely used idea for nonlinear regression, e.g. as in kernel SVM for regression (a nonlinear feature space followed by linear fitting). 
One could also train a nonlinear (deep) neural net to predict the gap, but it would need regularization and more data to avoid overfitting, while a linear combination of simple distributional features already attains high-quality prediction (see next point) across ~700 pretrained models. The latter suggests that a linear relationship is indeed a very close approximation.\\n\\n2. The point of the paper is not to claim an optimal feature set, but to show that *simple* and *easy-to-compute* features extracted from the distribution (like quartiles or moments) can already give a reasonable prediction of the generalization gap that is much better than recent theoretical upper bounds in the literature. We hope this could be a step toward constructing *practical* algorithms for improving generalization in deep networks. Regarding a mathematical proof of why these features should explain the generalization gap: while such a result would be very interesting, it is quite ambitious, if not impossible. Nevertheless, we assess the quality of the linear fit using one of the standard statistical tools created for this purpose: the coefficient of determination (CoD). As mentioned in the paper, in some scenarios we observe CoD=0.97 (the max is 1.0), which indicates a reasonably good fit.\"}", "{\"comment\": \"Introducing the theory of margin distribution into the framework of deep learning is an interesting idea. It seems that there is a related work [Optimal margin Distribution Network, Submission to ICLR 2019], which has tried to design a new loss function based on the margin distribution and theoretically proved its generalization effect. As far as I know, the influence of the margin distribution has long been a concern in generalization theory [Schapire, 1998; Wang, 2011; Gao, 2013], and there are several new algorithms based on the theory of margin distribution in both the SVM [Zhang, 2017] and clustering [Zhang, 2018] frameworks. I think that the authors should read these papers and add references to them.\\nRegarding the content of the paper, I am confused about the linear (or log()) estimation of the generalization gap: \\\"$\\\\hat{g} = a^T \\\\phi(\\\\theta) + b$\\\". Does this formula have a theoretical analysis or some statistical model to explain it? It seems unreasonable to directly explain the relationship between the margin distribution and generalization with a simple linear relationship. I expect that the authors can theoretically give a formula to explain the relationship between the generalization gap and the margin distribution.\\n\\n\\n[Optimal margin Distribution Network, Submission to ICLR 2019] Anonymous. \\u201cOptimal margin Distribution Network.\\u201d Submitted to the International Conference on Learning Representations, 2019.\\n[Schapire, 1998] Schapire, R., Freund, Y., Bartlett, P. L., Lee, W. \\u201cBoosting the margin: A new explanation for the effectiveness of voting methods.\\u201d Annals of Statistics 26 (5), 1651\\u20131686. 1998.\\n[Wang, 2011] Wang, L. W., Sugiyama, M., Yang, C., Zhou, Z.-H., Feng, J. \\u201cA refined margin analysis for boosting algorithms via equilibrium margin.\\u201d Journal of Machine Learning Research 12, 1835\\u20131863. 2011.\\n[Gao, 2013] Gao, W., and Zhou, Z.-H. \\\"On the doubt about margin explanation of boosting.\\\" Artificial Intelligence 203, 1-18. 2013.\\n[Zhang, 2017] Zhang, T., Zhou, Z.-H. \\\"Multi-Class Optimal Margin Distribution Machine.\\\" International Conference on Machine Learning. 2017.\\n[Zhang, 2018] Zhang, T., Zhou, Z.-H. 
\\\"Optimal Margin Distribution Clustering.\\\" Proceedings of the National Conference on Artificial Intelligence, 2018.\", \"title\": \"Some comments\"}" ] }
HkgmzhC5F7
A Modern Take on the Bias-Variance Tradeoff in Neural Networks
[ "Brady Neal", "Sarthak Mittal", "Aristide Baratin", "Vinayak Tantia", "Matthew Scicluna", "Simon Lacoste-Julien", "Ioannis Mitliagkas" ]
We revisit the bias-variance tradeoff for neural networks in light of modern empirical findings. The traditional bias-variance tradeoff in machine learning suggests that as model complexity grows, variance increases. Classical bounds in statistical learning theory point to the number of parameters in a model as a measure of model complexity, which means the tradeoff would indicate that variance increases with the size of neural networks. However, we empirically find that variance due to training set sampling is roughly constant (with both width and depth) in practice. Variance caused by the non-convexity of the loss landscape is different. We find that it decreases with width and increases with depth, in our setting. We provide theoretical analysis, in a simplified setting inspired by linear models, that is consistent with our empirical findings for width. We view bias-variance as a useful lens through which to study generalization, and we encourage further theoretical explanation from this perspective.
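The split this abstract draws, between variance due to training-set sampling and variance due to the non-convexity of optimization (chiefly initialization), is obtained, per the author responses below, from the law of total variance. A sketch of that decomposition, consistent with the authors' description of their Eq. 5 but not necessarily in their exact notation, is:

```latex
% S = training set, I = initialization; h_{S,I}(x) is the learned predictor.
% Law of total variance over the two randomness sources:
\mathrm{Var}_{S,I}\big[h_{S,I}(x)\big]
  = \underbrace{\mathrm{Var}_{S}\big[\mathbb{E}_{I}[h_{S,I}(x)\mid S]\big]}_{\text{variance from sampling}}
  + \underbrace{\mathbb{E}_{S}\big[\mathrm{Var}_{I}[h_{S,I}(x)\mid S]\big]}_{\text{variance from initialization}}
```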
[ "bias-variance tradeoff", "deep learning theory", "generalization", "concentration" ]
https://openreview.net/pdf?id=HkgmzhC5F7
https://openreview.net/forum?id=HkgmzhC5F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1gZR6YgxN", "Skg6eaCRJN", "r1lJbSzikV", "Hylv7-RqyV", "r1lx0v-c1V", "Skl9QvYE1N", "BkgEWaIy14", "H1lnV2UyJV", "BkxIGRh7T7", "SkxYWjhXTm", "r1es552m67", "rygWfqh767", "SJlSHgE03Q", "rkecRSD23m", "S1gMIHts37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544752584630, 1544641780814, 1544393975112, 1544376607197, 1544325064305, 1543964449659, 1543625979560, 1543625779737, 1541815821760, 1541815040555, 1541814930974, 1541814792818, 1541451836940, 1541334481706, 1541277002410 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1248/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/Authors" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1248/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper revisits the traditional bias-variance trade-off for the case\\nof large capacity neural networks. Reviewers requested several clarifications\\non the experimental setting and underlying results. Authors provided some,\\nbut it was deemed not enough for the paper to be strong enough to be accepted.\\nReviewers discussed among themselved but think that given the paper is mostly\\nexperimental, it needs more experimental evidence to be acceptable.\\nOverall, I found the paper borderline but concur with the reviewers to reject\\nit in its current form.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"update wrt author response\", \"comment\": \"I don't think author response actually addressed my concern, and I agree with another reviewer that this paper needs more work --- particularly needs to show more understanding in statistical machine learning as DL is just its special case.\"}", "{\"title\": \"Thank you for your time\", \"comment\": \"Thank you for your time.\"}", "{\"title\": \"reply\", \"comment\": \"Thank you for your reply. Unfortunately, without more convincing experiments (along the lines suggested in my previous comment), this work appears incomplete.\\n\\nI appreciate the fact that you may not have the necessary compute resources to carry out these experiments but that does not justify publication of this work.\\n\\nI also disagree that if the focus is on NNs then experiments with other models are not needed. Trying other models is effectively a baseline in this case.\"}", "{\"title\": \"Author Response to Additional Feedback\", \"comment\": \"Thank you for the additional feedback.\\n\\n1. We can only run experiments with as large networks as our hardware allows. 
The point of the small data experiment is to provide further evidence that the total variance continues to decrease monotonically up to very large widths (relative to the size of the dataset). We take the fact that the variance plateaus for so long (with network width on the x-axis) as evidence that it will not eventually increase (forming a U shape). An experiment with any type of data would have to stop at some network size and take the long plateau as evidence that the variance will not go up.\\n\\n2. While we agree that more datasets are always better, these experiments are quite compute-intensive. We developed the theory in Section 5 (Eqs 9-11 and Theorem 1) to better illustrate why this phenomenon is happening.\\n\\n3. The focus of this paper is neural networks. Traditional bias-variance tradeoff curves on other models such as KNNs and kernel regression [1] and trees and boosting [2] have been demonstrated previously. In textbooks, there are more examples of traditional bias-variance tradeoffs on other models (see e.g. Section 3.2 of [3], Section 6.4.4 of [4], etc.). In over-parameterized linear models, we actually do not expect traditional bias-variance tradeoff curves, based on what we present in Equations 9-11.\\n\\n4. First, note that we observe this decreasing total variance trend in the full data setting when all networks use the same step size (Figure 1b). In the small data setting, we use a validation set to choose the step size for each different architecture, which, we acknowledge, is a form of regularization (footnote 3). Most importantly, we demonstrate that BOTH bias and variance are decreasing with capacity, indicating that it is not necessary to trade bias for variance. Regarding the appropriateness of choosing this single hyperparameter (step size) this way, it is known that hyperparameter tuning is important for comparing different models fairly, as the same hyperparameters (e.g. step size) do not often transfer across models. The figure in Appendix B.2 is a consequence of the fact that hyperparameter settings often do not transfer across architectures.\\n\\n[1] Neural Networks and the Bias/Variance Dilemma (Geman et al., 1992)\\n[2] A Unified Bias-Variance Decomposition and its Applications (Domingos, 2000)\\n[3] Pattern Recognition and Machine Learning (Bishop, 2006)\\n[4] Machine Learning: a Probabilistic Perspective (Murphy, 2013)\"}", "{\"title\": \"thanks for clarifications\", \"comment\": \"Thank you for the clarifications. I think the following would make this more convincing:\\n\\n1. As it is pointed out in the paper, it may be simply the case that you have not reached the bottom of the U curve in your experiments. Can you design a synthetic data experiment to clearly demonstrate that you should have reached that bottom?\\n\\n2. This is an experimental paper (theory is rather limited), so ultimately I would expect to see experiments with a lot more datasets than just 2.5. If we observe the same phenomena with 10+ datasets, that would be a lot more convincing.\\n\\n3. Could you also add experiments with other models, e.g. trees and linear models, that demonstrate that those are subject to the usual bias-variance tradeoff? Again, this is an experimental paper after all.\\n\\n4. Ultimately, I am not convinced that it is kosher to vary a hyperparameter (step size) between experiments. Your diagram in B.2 seems to exactly demonstrate my concerns, if I understand it correctly. 
In relation to that, it would be interesting to see what bias-variance curves look like for trees and linear models when we allow their hyperparameters to vary.\\n\\nI think that, as an experimental paper, this would require more convincing experiments at the end of the day.\"}", "{\"title\": \"Request for discussion/reconsideration\", \"comment\": \"Thanks to your feedback, we have made the preliminaries clearer. We would appreciate it if you'd consider these changes and the additional clarifications provided in our rebuttal. We hope you'll find that we've addressed your concerns and consider changing your score.\"}", "{\"title\": \"Request for discussion/consideration\", \"comment\": \"Thanks to your feedback, we have made improvements to our paper. We would appreciate it if you'd consider these changes and the additional clarifications provided in our rebuttal. We hope you'll find that we've addressed your concerns and consider changing your score.\"}", "{\"title\": \"Author Response to Reviewer 3\", \"comment\": \"Thank you for your feedback! It appears that there may have been an important miscommunication regarding our methodology; we hope our answers below will clarify this. Since clarity in the definitions in Section 2 is of utmost importance for understanding the paper, we\\u2019ve also added clarifications in our uploaded revision.\", \"on_the_two_high_level_points\": \"1. On width and capacity: If capacity refers to representation power, then increasingly wide networks have increasingly large capacity -- in fact, a wide enough network can fit any dataset [1]. The traditional view of the bias-variance tradeoff is that increasingly large-capacity models have lower bias and higher variance. This led Geman et al. [2] to claim that wide networks will suffer from high variance. We provide a quote of this claim and a sampling of related quotes from other impactful works in Appendix E. In our work, we find that both bias and variance decrease with width, challenging the traditional view that bias and variance are related through a tradeoff merely governed by capacity.\", \"on_effective_capacity\": \"We understand your comment as saying that this probably means that the very notion of capacity should be amended beyond simply representation power (e.g. by explicitly taking into account optimization and the data). We completely agree with this. This is also the point made in Zhang et al. [4] and related work in the context of generalization gap analysis. We reach the same conclusion through a proper analysis of the variance of these models. To the best of our knowledge, this is new; this is the first study of the variance since [2] and it reaches opposite conclusions.\\n\\n2. On our variance decomposition: In contrast to the traditional bias-variance decomposition, which only considers one source of randomness (the training set), we are considering two sources of randomness: randomness from the optimization algorithm (mainly initialization) and randomness from sampling the training set. Going by the definition of optimization error in [3], we completely agree that the classical bias-variance tradeoff does not consider the optimization error. We reason that this is partially because variance due to initialization is 0 in the strongly convex case for a batch optimizer; given a decaying step size schedule, this is true for SGD as well [2, Section 4.2]. However, we are not trying to study the optimization error defined in [3]. 
We extended the classical bias-variance decomposition (via the law of total variance) to have another term that captures variance due to initialization, because in the non-convex setting the learned function is dependent on the initialization. We found this extension yielded insightful results, as the two variance terms have importantly different trends (Figure 2).\\n\\nAs we mention in Section 3.2, our results in the full data setting were the same with or without early stopping. Also, as mentioned at the end of Section 3.3, we find the same trends with other optimization algorithms such as batch gradient descent and PyTorch's implementation of LBFGS (included in Appendix B.3).\", \"on_the_technical_parts\": \"Thank you for making it clear to us that the definitions in Section 2 were not given with sufficient precision; this feedback is very valuable to us. We have uploaded a revision that we hope will make this perfectly clear.\\n\\n1. On $p( . |S)$: Given a training set $S$, the learned weight $\\\\theta$ depends on the random initialization because of non-convexity. Hence it is not deterministic; $p(\\\\theta|S)$ is the distribution over the learned weights, conditioned on $S$. We have updated the corresponding paragraph in Section 2, making explicit the random variable I that denotes initialization and explaining the relationship of the learning algorithm with S and I.\\n\\n2. On frequentist risk: it looks like there may be a misunderstanding with our notation. We use the standard notion of frequentist risk, just in a more general context. Unfortunately, the notation \\\\theta usually refers to the population parameter, which may have caused some confusion. Our \\\\theta denotes the learned weights of the neural network. Averaging over them w.r.t. $p( . |S)$ amounts to averaging over initializations. Hopefully, our revisions in this section make this all clearer.\\n\\n3. On Eq 5: Hopefully our answers above clarify this equation. The variance is with respect to both initialization and data sampling. Eq. 5 then follows from the law of total variance. Please let us know if anything is still unclear.\", \"in_closing\": \"Thank you for your time. We hope you find that our responses and our revision address your concerns.\\n\\n[1] Neural Networks for Exact Matching of Functions on a Discrete Domain (Shrivastava and Dasgupta, 1990)\\n[2] Understanding Deep Learning Requires Rethinking Generalization (Zhang et al., 2017)\\n[3] The Tradeoffs of Large Scale Learning (Bottou and Bousquet, 2008)\\n[4] Optimization Methods for Large-Scale Machine Learning (Bottou et al., 2016)\"}", "{\"title\": \"Author Response to Reviewer 1\", \"comment\": \"Thank you for the positive feedback!\\n\\n\\u201cstability of the training algorithm\\u201d:\\nThank you for bringing up this other factor to consider. There are 3 sources of randomness when using SGD (initialization, training set, and mini-batch sampling). We do not focus on variance due to mini-batch sampling because the decreasing variance phenomenon persisted when using batch gradient descent (no randomness due to mini-batching); this result is included in Appendix B.3. 
In response to your feedback, we have added a footnote on the 3rd page that addresses this: \\u201cWe do not study randomness from stochastic mini-batching because we found the phenomenon of decreasing variance with width persists when using batch gradient descent (Section 3.3, Appendix B.3).\\u201d\", \"behavior_of_variance_when_using_different_optimizers\": \"When we use other optimizers such as batch gradient descent and LBFGS, we still find that total variance decreases with width. These experiments are mentioned at the end of Section 3.3 and are included in Appendix B.3. We hope you find these results with other optimizers interesting.\\n\\nRandom seeds for outer and inner expectations in Eq. 5:\\nWe greatly appreciate this comment. We believe there may be a misunderstanding, due to our writing in this section. The outer expectation is for estimation over randomness from training set sampling and the inner expectation is for estimation over randomness from initialization. This is necessary because the two terms from the law of total variance both depend on both sources of randomness; it\\u2019s just that they take variances with respect to different random variables. If only one seed were used for the inner expectation (randomness from initialization), we would be estimating a conditional variance (over training set sampling), which is conditioned on one specific initialization. To help make this more clear, we have added the explicit introduction of the random variable I, which denotes the randomness from optimization, in Section 2.1 and have added I to Eq. 5.\\n\\n\\u201cy_bar(x) = E(y|x)\\u201d and bias estimation:\\nYou are absolutely right that y_bar(x) = E(y|x) is unknown. We have added this clarification in footnote 4 of the new revision: \\u201cBecause we don't have access to \\\\bar{y}, we use the labels y to estimate bias. This is equivalent to assuming noiseless labels and is standard procedure for estimating bias (Kohavi and Wolpert, 1996; Domingos, 2000).\\u201d\", \"in_closing\": \"Thank you for your time. We hope that we have adequately addressed your questions and hope our revision makes the significance of main contribution 2 more apparent.\"}", "{\"title\": \"Author Response to Reviewer 2\", \"comment\": \"Thank you for taking the time to review our paper!\\n\\n\\n\\u201cMain comment on experiments [...] It may be the case that as width grows the step size decreases faster and hence hypothesis set shrinks and we observe decreasing variance\\u201d:\\nThank you for bringing up this point that leads to this natural hypothesis. Note that the bias is also going down with width. Traditional bias-variance tradeoff thinking associates decreasing bias with a growing hypothesis set. The fact that we see both bias and variance decrease is what\\u2019s surprising, as it shows we don\\u2019t need to trade bias for variance.\\n\\nIn addition, we do not see that the step sizes are decreasing with width. The step sizes that were used for the decreasing variance in the small data setting (Figure 3a) are provided in Appendix B.1.\\n\\nFurthermore, note that the same experimental procedure did not lead to decreasing variance with depth. By the same line of reasoning as in the above quote, we would expect to get decreasing variance with depth by having smaller and smaller step sizes with deeper networks. 
However, our experimental procedure did not yield that.\\n\\nWe hope that these points make it clear that we considered this potential explanation of our results, and we determined that this explanation does not capture the whole story. For more discussion on the justification of our experimental design, see Section 3.3 and Appendix B.2.\\n\\n\\n\\u201cresults for depth are what we would expect from theory in general\\u201d:\\nWhile we agree that variance due to initialization is consistent with orthodoxy and discuss this in Section 4.2, we find the observation that variance due to sampling is roughly constant with depth (Figure 2b) quite surprising. This is because the traditional bias-variance tradeoff is exactly about training set sampling randomness (not optimization randomness). The distinction between these two sources of randomness is key to our deeper level of study (main contribution 2).\", \"on_assumptions\": \"We note that the assumptions are strong (though they have their basis in the referenced literature). Our primary goal is to give a rigorous argument beyond the linear case.\\n\\nRegarding your request for more details on the assumptions, we\\u2019ve updated the paragraph just before Section 5.2.1 with another reference and a more clear explanation: \\u201cSagun et al. (2017) showed that the spectrum of the Hessian for over-parametrized networks splits into (i) a bulk centered near zero and (ii) a small number of large eigenvalues, which suggests that learning occurs mainly in a small number of directions.\\u201d This hypothesis was also formulated by Advani & Saxe (2017), and this is an active area of research. For example, there was a paper submitted to this ICLR titled \\u201cGradient Descent Happens in a Tiny Subspace\\u201d that is entirely dedicated to this line of thinking: https://openreview.net/forum?id=ByeTHsAqtX We view our analysis (and identification of these assumptions) as a useful contribution because it is consistent with the experimental results for width, and the assumptions have some basis in the literature.\\n\\nAdditionally, we have added more to Section 5.3 to make the intuition more clear for why varying the depth is very different from varying the width, with respect to these assumptions.\", \"on_seeming_contradiction\": \"Our reference to Advani & Saxe\\u2019s result was indeed very imprecise and seems to result in a contradiction. Thank you for pointing that out. We have updated this in the recent revision to report more clearly their result (just before Section 5.2.1). Their analysis shows that our assumption (a) holds in deep linear networks under a simplifying assumption on the form of the weights that leads to a full decoupling of the dynamics of the weights at different layers. They claim this simplifying assumption is approximately true for small enough initial weights; and our empirical results suggest it might be increasingly inaccurate with depth.\", \"in_closing\": \"Thank you for your time. We hope you find that our revision addresses your concerns.\"}", "{\"title\": \"Global Comment from Authors\", \"comment\": \"Thank you to all of the reviewers for their time. Most importantly, we have revised Section 2.1 to make it more clear and have added an explicit description of all randomness in Eq. 5 in order to aid in the understanding of our decomposition of variance (main contribution 2). We hope this improved clarity in the preliminaries makes the significance of main contribution 2 more apparent.\", \"main_contributions\": \"1. 
Variance decreases with width (along with bias), indicating that it isn\\u2019t necessary to trade bias for variance.\\n2. We perform a deeper study of variance by decomposing the coarse variance into two terms: variance due to training set sampling (like the classical decomposition) and variance due to initialization. We find variance due to sampling is roughly constant with both width and depth (Figure 2).\\n3. In a simplified setting, inspired by linear models, we provide theoretical analysis in support of our empirical findings for network width.\"}", "{\"title\": \"Interesting paper with some experiments and preliminary results but requires more work\", \"review\": \"This paper studies the variance-bias tradeoff as a function of the depth and width of a neural network. Experiments suggest that variance may decrease as a function of width and increase as a function of depth. Some analytical results are presented on why this may be the case for width and why the necessary assumptions are violated for depth.\", \"main_comment_on_experiments\": \"if I am correct, the step size for optimization is chosen in a data-dependent way for each size of the network. This is a subtle point since it leads to a data-dependent hypothesis set. In other words, in these experiments, for each width we study the variance of neural nets that can be found in a fixed number of iterations with a step size that is chosen in a data-dependent way. It may be the case that as width grows the step size decreases faster, and hence the hypothesis set shrinks and we observe decreasing variance. This makes the results of experiments with width not so surprising or interesting.\", \"further_comments_on_experiments\": \"it is probably worth pointing out that results for depth are what we would expect from theory in general.\", \"more_on_experiments\": \"it would also be interesting to see how variance behaves as a function of width for depths other than 1.\", \"on_assumptions\": \"it is not really clear why the assumptions in 5.2 hold for wide shallow networks, at least in some cases. The paper provides some references to prior work, but it would be great to give more details. Furthermore, some statements seem to be contradictory: the sentence before 5.2.1 seems to say that assumption (a) should hold for deep nets, while the sentence at the end of page 8 seems to say the opposite.\", \"overall\": \"I think this paper presents an interesting avenue of research but, due to the aforementioned points, is not ready for publication.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Report on paper 1248\", \"review\": \"The paper offers a different and surprising view on the bias-variance decomposition. The paper shows, by means of experimental studies and a simplified theoretical analysis, that variance decreases with the model complexity (in terms of the width of neural nets), which is opposite to the traditional bias-variance trade-off.\\n\\nWhile the conclusion is surprising, it is somewhat consistent with my own observation. However, there are potential confounding factors in such an experimental study that need to be controlled for. One of these factors is the stability of the training algorithm being used. The variance term (and the bias) depends on the distribution p(theta|S) of the model parameters given data S. 
This would be the posterior distribution in Bayesian settings, but the paper considers the frequentist framework, so this distribution encodes all the uncertainty due to initialisation, sampling and the nature of the SGD optimizer being used. The paper accounts for the first two, but how about the stability of the optimiser? If the authors used a different optimizer for training, how would the variance behave then? A comment/discussion along this line would be interesting.\\n\\nIt is said in Section 3.1 that different random seeds are used for estimating both the outer and inner expectation in Eq. 5. Should the bootstrap be used instead for the outer expectation, as this is w.r.t. the data? Another point that isn't clear to me is how the true conditional mean y_bar(x) = E(y|x) is computed in real-data experiments, as this quantity is typically unknown.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper needs to show some serious understanding on statistical machine learning\", \"review\": \"This paper suggests rethinking the bias-variance tradeoff from statistical machine learning in the context of neural networks. Based on some empirical observations, the main claims in this work are that (1) it is not always the case that the variance will increase when we use bigger neural network models (particularly, by increasing the network width); (2) the variance should be decomposed into two parts: one part accounts for the variance caused by random initialization of network parameters/optimization and the other part is caused by \"sampling of the training set\".\\n\\nThe first claim is based on the empirical observation that increasing the number of hidden units did not cause an increase of variance (as in figure 1). However, to my understanding, it only means increasing the number of hidden units is probably not a good way to increase the network capacity. In other words, this cannot be used as evidence that the bias-variance tradeoff is not valid in neural network learning.\\n\\nFor the second claim, I don't like the way that they decompose the variance into two parts. To be clear, the classical bias-variance tradeoff doesn't consider the optimization error as an issue. For a more generic view of machine learning errors, please refer to \"The Tradeoffs of Large Scale Learning\" (Bottou and Bousquet, 2008). In addition, if the proposed framework wants to include the optimization error, it should also cover some other errors caused by optimization, for example, early stopping and the choice of an optimization algorithm.\\n\\nBesides these high-level issues, I also found the technical parts of this paper really hard to understand. For example,\\n\\n- what exactly is the definition of $p(\\theta|S)$? The closely related case I can think about is the Bayesian setting, where we want to give a prior distribution over the model (parameter). But, clearly, this is not the case here. \\n- similar question for the \"frequentist risk\": in the definition of the frequentist risk, the model parameter $\\theta$ should be fixed and the only expectation we need to compute is over the data $S$\\n- in Eq. (5), I think I need more technical detail to understand this decomposition.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
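The two variance terms debated in the record above are an instance of the law of total variance over the two sources of randomness: training-set sampling S and initialization theta. A minimal LaTeX rendering in our own notation, where h_{\theta,S} is a hypothetical shorthand for the trained predictor (the record's Eq. 5 is not reproduced here, so treat the exact correspondence as an assumption):

```latex
\operatorname{Var}\bigl(h_{\theta,S}(x)\bigr)
  = \underbrace{\mathbb{E}_{S}\left[\operatorname{Var}_{\theta \mid S}\bigl(h_{\theta,S}(x)\bigr)\right]}_{\text{variance due to initialization/optimization}}
  + \underbrace{\operatorname{Var}_{S}\left(\mathbb{E}_{\theta \mid S}\bigl[h_{\theta,S}(x)\bigr]\right)}_{\text{variance due to training-set sampling}}
```

The classical bias-variance tradeoff concerns only the second term, which is why the authors call the finding that it stays roughly constant with depth the surprising part.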
HkGGfhC5Y7
Towards a better understanding of Vector Quantized Autoencoders
[ "Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Arvind Neelakantan" ]
Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning and of learning abstractions that are more useful to new tasks. There has been a surge of interest in discrete latent variable models; however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence-level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.
[ "machine translation", "vector quantized autoencoders", "non-autoregressive", "NMT" ]
https://openreview.net/pdf?id=HkGGfhC5Y7
https://openreview.net/forum?id=HkGGfhC5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HygO6n8GgV", "rkxyFEXDyE", "ByxiE4XPJ4", "ryl0iUEU14", "H1xKgVEHkN", "r1gBDs-Xk4", "BJxg8jZ71V", "SyeQNj-714", "BygRDlrs6Q", "rkeQ2ZXspm", "HkeSbl7j6X", "Byearq9zpQ", "ryxRaw5zpm", "SylKiwqf6m", "Byg4DS-Zp7", "H1xD-S0dh7", "BJeUvaXU2Q", "r1lCtgTx27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544871104391, 1544135799014, 1544135731152, 1544074918418, 1544008688824, 1543867228697, 1543867207918, 1543867179363, 1542307942298, 1542300074738, 1542299645435, 1541741124812, 1541740486438, 1541740448643, 1541637468273, 1541100798929, 1540926814301, 1540571269756 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1247/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/Authors" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1247/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": [\"Strengths:\", \"well-written\", \"strong results for non-autoregressive NMT\", \"a novel soft EM version of VQ-VAE\"], \"weaknesses\": [\"as pointed out by reviewers, the improvements are mostly not due to the VQ-VAE modification rather due to orthogonal (and not interesting) changes e.g., knowledge distillation. If there is a genuine contribution of VQ-VAE, it is small and required extensive parameter selection\", \"the explanations provided in the paper do not match the empirical results\", \"Two reviewers criticize the experiments / experimental section: rigour / their discussion. Overall, there is nothing wrong with the method but the experiments are not showing that the modification is particularly beneficial. Given these results and also given that the method is not particularly novel (switching from EM to Soft EM in VQ-VAE), it is hard for me to argue for accepting the paper.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a reasonable method but empirical evidence is questionable\"}", "{\"title\": \"Thanks for your attention\", \"comment\": \"We thank the reviewer for reading our updated manuscript and for their feedback. We acknowledge that we missed this sentence in the introduction, since we focused on updating the experimental section and the writing therein as per the comments of R3. We will definitely go over the manuscript carefully and make the presentation more clear. 
Do you have any more specific aspects of the presentation that you would like to see improved?\"}", "{\"title\": \"Thanks for your attention\", \"comment\": \"We thank the reviewer for carefully reading our updated manuscript and for raising their score. We acknowledge that it may be difficult for the reader to grasp the difference in implementation of VQ-VAE from Kaiser et al. We have now separated out the two VQ-VAE results in Table 1 in the Experiments section. It will be reflected in the final draft if the paper is accepted.\"}", "{\"title\": \"Thank you for your detailed feedback.\", \"comment\": \"I'm happy to see the revised manuscript tightening its focus on NMT. I think this makes it easier for readers to catch the core message of the paper.\\nI also observed that some of the confusing remarks in the experiments have been improved; this is another plus. \\n\\nYet I still have some difficulties in reading the experimental section. \\nIn P.7, \\n\\n\"\"\"\\nThe results are summarized in Table 1. Our implementation of VQ-VAE achieves a significantly\\nbetter BLEU score and faster decoding speed compared to Kaiser et al. (2018).\\n\"\"\"\\n\\nThe proposed model (\"VQ-VAE\" in Table 1, I suppose) is compared with the results of (Kaiser+, 2018), but no numbers of (Kaiser+, 2018) are reported in the manuscript body, if I correctly read the paper. I propose providing these numbers so that readers can verify these claims instantly. \\n\\nI also think it is better to emphasize the difference between your \"VQ-VAE\" and the implementation of (Kaiser+, 2018) in a more visually accessible manner, for example, using a table? \\nI'm afraid that it is still difficult for readers to understand why \"VQ-VAE\" is included in the \"our results\" without your detailed feedback comments.\"}", "{\"title\": \"minimal update\", \"comment\": \"thank you for drawing attention to your updated draft. While your comment (and the added sentence in 5.1.1) clarifies the difference between Kaiser et al. (2018) and this work, the presentation in the paper is still misleading (the introduction does not refer to this architecture difference, and still attributes performance differences to \"tuning for the code-book size\").\\n\\nI don't believe the new draft made any other changes that would update my impression.\"}", "{\"title\": \"Updated paper\", \"comment\": \"R3, we believe we have addressed your concerns and clarified some of your points. Do you have an updated impression of our paper? Thanks for your consideration.\"}", "{\"title\": \"Updated paper\", \"comment\": \"R1, we believe we have addressed your concerns and clarified some of your points. Do you have an updated impression of our paper? Thanks for your consideration.\"}", "{\"title\": \"Updated paper\", \"comment\": \"R4, we believe we have addressed your concerns and clarified some of your points. Do you have an updated impression of our paper? Thanks for your consideration.\"}", "{\"title\": \"Reply to Reviewer 4\", \"comment\": \"We thank the reviewer for a careful reading of our paper and for their thoughtful review. Below we address the specific points raised by the reviewer:\\n\\n>>>\\nThe first contribution of the paper is that it shows a simple VQ-VAE to work well on the EN-DE NMT task, in contrast to the results by Kaiser et al. (2018)...\\n<<<\\n\\nThe main difference between the setup of Kaiser et al (2018) and the current work is the point \"Attention to Source Sentence Encoder\" in the Analysis section. 
The discrete latents in Kaiser et al (2018) are a function ae(x, y) where x is the input sequence and y is the target sequence. The dependence on x is in the form of attention layers. This makes it a much more complex function to learn, and the authors of that work report that VQ indeed did not work for them, so they had to resort to Product Quantization (referred to as DVQ in their work) with multiple codebooks to get a good result. We found the attention to source sequence x to be an unnecessary complication, and so our latents are just a function ae(y), where y is the target sequence.\\n\\nWe do not attribute this to tuning the code-book size; we apologize for the misunderstanding. The robustness of EM is in the case when the latents ae(x, y) are also a function of x, see Figure 5 in the Appendix (\" Comparison of VQ-VAE (green curve) vs EM with different number of samples (yellow and blue curves) on the WMT\\u201914 English-German translation dataset with a codebook size of 2^14, with the encoder of the discrete autoencoder attending to the output of the encoder of the source sentence as in Kaiser et al. (2018).\") The optimization problem is much harder in this case, and we see that the VQ runs collapse while various versions of EM (with different numbers of samples) still give a good result. The EM version does depend on the number of samples, but is much more stable compared to VQ even when the latents are a function of x. \\n\\nWe apologize if the \"Attention to source sentence encoder\" was not adequately clear: we had a statement to the effect of \"Also, removing this attention step results in more stable training particularly for large code-book sizes, see e.g., Figure 3.\", but it unfortunately seems to have got lost in a revision.\\n\\n>>>\\nThe last claimed contribution (using denoising techniques) is hidden in the appendix...\\n<<<\\n\\nDenoising autoencoders, as used by Lample et al., were used in the context of learning better initial representations for unsupervised MT. We found that applying this to discrete autoencoders like VQ-VAE can give some improvements on larger datasets like En-Fr. For En-De, denoising VQ-VAE did not give us any improvement over VQ-VAE. We do not claim we invented denoising autoencoders; we write: \\n\\n\"On the larger English-French dataset, we show that denoising discrete autoencoders gives us a significant improvement (1.0 BLEU) on top of our non-autoregressive baseline (see Section D)\"\\n\\n>>>\\nI'd like to see some of the results in the paper published eventually...\\n<<<\\n\\nWe hope our first paragraph addresses the question of why VQ-VAE did not work in Kaiser et al. without product quantization, but worked in our case. We have made this more explicit in the latest version. Also note that all our VQ-VAE runs for MT do not have the attention to source sequence x, except Figure 5 where we explicitly mention this.\\n\\n>>>\\n- the strong performance of the VQ-VAE baseline remains unexplained, and the claimed explanation contradicts empirical results.\\n<<<\\n\\nWe hope that the previous paragraphs and the new draft address this concern. \\n\\n>>>\\n- the new EM algorithm gives relatively small improvements, with hyperparameters that were likely selected based on test set scores .\\n<<<\\n\\nThe hyperparameters were selected on WMT'13 and the results are reported on WMT'14. EM gives small improvements with knowledge distillation, because the optimization problem is much easier in this case. 
When the optimization problem is harder we see more gains from EM: \\n\\n1) In the setting when the latents are informed by the source sequence x, EM is much more stable than VQ-VAE (Figure 5) \\n2) In the case when knowledge distillation is not used it gives a gain of +1.0 BLEU \\n3) When the hidden dimension is smaller (256 or 384) instead of 512, we see gains of +1.3 BLEU and +0.6 BLEU respectively.\\n\\n>>>\\n- most of the empirical gain is attributable to knowledge distillation, which is not a novel contribution\\n<<<\\n\\nThat is a valid point, and we did indeed find knowledge distillation to be very important for good performance for NMT in addition to removing the attention to source sequence x.\"}", "{\"title\": \"reading other reviews/comments\", \"comment\": \"This was an extra review requested after the end of the official review period; now looking at the other reviews and replies, I can see that the question as to whether hyperparameters were optimized on the test set was already addressed. I stand by the comment that obtaining this small improvement required extensive hyperparameter tuning, which devalues it slightly.\"}", "{\"title\": \"interesting parts, but needs more rigour\", \"review\": [\"This paper discusses VQ-VAE for learning discrete latent variables, and its application to NMT with a non-autoregressive decoder to reduce latency (obtained by producing a number of latent variables that is much smaller than the number of target words, and then producing all target words in parallel conditioned on the latent variables and the source text). The authors show the connection between the existing EMA technique for learning the discrete latent states and hard EM, and introduce a Monte-Carlo EM algorithm as a new learning technique. They show strong empirical results on EN-DE NMT with a latent Transformer (Kaiser et al. (2018)).\", \"The paper is clearly written (excepting the overloaded appendix), and the individual parts of the paper are interesting, including the link between VQ-VAE training and hard EM, the Monte-Carlo EM, and strong empirical results. I'm less convinced that the paper as a whole delivers on what it promises/claims.\", \"The first contribution of the paper is that it shows a simple VQ-VAE to work well on the EN-DE NMT task, in contrast to the results by Kaiser et al. (2018). The paper attributes this to tuning of the code-book, but the results (table 3) seem to contradict this, with a code-book size of 2^16 even slightly better than the 2^12 that is used subsequently. The reason for the performance difference to Kaiser et al. (2018) remains opaque. While interesting, the empirical effectiveness of Monte-Carlo EM is a bit disappointing, achieving +0.3 BLEU over the best configuration for EN-DE (after extensive hyperparameter tuning, seen in table 4), and -0.1 BLEU on EN-FR. Monte-Carlo EM also seems very sensitive to hyperparameters, namely the sample size (tables 4,5), contradicting the later claim that EM is robust to hyperparameters. The last claimed contribution (using denoising techniques) is hidden in the appendix, an application of an existing technique, and not compared to knowledge distillation (another existing technique).\", \"I'd like to see some of the results in the paper published eventually. However, the claims need to better match the empirical evidence, and for a paper that has \\\"better understanding\\\" in the title, I'd like to gain a better understanding of the differences to Kaiser et al. 
(2018) that make VQ-VAE fail for them, but not in the present case.\", \"clearly written paper\", \"interesting, novel EM algorithm for VQ-VAE\", \"strong empirical results on non-autoregressive NMT\", \"the strong performance of the VQ-VAE baseline remains unexplained, and the claimed explanation contradicts empirical results.\", \"the new EM algorithm gives relatively small improvements, with hyperparameters that were likely selected based on test set scores .\", \"most of the empirical gain is attributable to knowledge distillation, which is not a novel contribution\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank the reviewer for reading our paper. Below we address specific points raised by the reviewer:\\n\\n>>>\\nI'm feeling difficulties in understanding the experimental part.\\nTo be honest, I think the experimental section is highly unorganized, not a quality for ICLR submission. \\nI'm just wondering why this happens, given clean and organized technical sections...\\n>>>\\n\\nWe have made an effort to clean up the experimental section part in the updated draft. We would appreciate specific comments to help us make the experimental section more readable and organized.\\n\\n>>>\\nFirst, I'm confusing what is the main competent in the Table 1. \\nIn the last paragraph of the page 6, it reads; \\n\\\"Our implementation of VQ-VAE achieves a significantly better BLEU score and faster decoding speed compared to (10).\\\"\\nHowever, Ref. (10) is not mentioned in the Table 1. Which BLEU is the score of Ref. (10)? \\n>>>\\n\\nThis should be fixed in the updated version.\\n\\n>>>\\nSecond, terms \\\"VQ-VAE\\\", (soft?)\\\"EM\\\" and \\\"our {model, approach}\\\" are used in a confusing manner. \\nFor example, in Table 1, below the row \\\"Our Results\\\", there are:\\n- VQ-VAE\\n- VQ-VAE with EM\\n- VQ-VAE + distillation\\n- VQ-VAE with EM + distillation\\n\\nThe \\\"VQ-VAE\\\" is not the proposed model, correct? \\nMy understanding is that the proposal is a VQ-VAE solved via soft EM, which corresponds to \\\"VQ-VAE with EM\\\". \\n<<<\\n\\nYes VQ-VAE is not the proposed model, although we report it in \\\"Our Results\\\" because the implementation is different from Kaiser et al in two crucial aspects 1) No attention to source sequences for the discrete latents 2) Product Quantization (PQ) which the authors of Kaiser et al call DVQ is not being used. Hence we also report it in \\\"Our Results\\\".\\n\\n>>>\\nThird, a paragraph \\\"Robustness of EM to Hyperparameters\\\" is mis-leading. \\nThe figure 3 does not show the robustness against a hyperparameter. \\nIt shows the BLEU against the number of \\\"samples\\\" (in fact, there is no explanation about what the \\\"samples\\\" means). \\nI think hyperparameters are model constants such as the learning rate of the SGD, alpha-beta params for Adam, dimension of hidden units, number of layers, etc. The number of samples are not considered as a model hyperparameter; it's a dataset property. \\n>>>\\n\\nThe number of samples used for EM training of VQ-VAE is a hyperparameter, how is it a property of the dataset? You are free to choose any number of samples regardless of the dataset.\\n\\n>>>\\nThe figure 5 shows the reconstructed images of the original VQ-VAE and the proposed VQ-VAE with EM. 
\\nHowever, there is no explanation which hyperparameter is tested to assess \\\"the robustness to hyperparameters\\\". \\n<<<\\n\\nOur apologies, this should be robustness to initialization of the codebook. VQ-VAE/K-means is much more sensitive to a good initialization as compared to EM.\\n\\n>>>\\nFourth, there is no experimental report on the image reconstructions (with CIFAR and SVHN) in the main manuscript. \\nIn fact, there is a short paragraph that mentions about the SVHN results, \\nbut it only refers to the appendix. \\nI think appendix is basically used for additional results or proofs, that are not essential for the main message of the paper. \\n\\nHowever, performance in the image reconstruction is one of the main claims written in the abstract, the intro, etc. \\nSo, the authors should include the image reconstruction results in the main body of the paper. \\nOtherwise, claims about the image reconstructions should be removed from the abstract, etc. \\n>>>\\n\\nWe have removed all image references from the main section and now only report it in the Appendix. We hope this helps improving the quality and clarity of the main paper.\"}", "{\"title\": \"Reply to Reviewer 1 continued\", \"comment\": \"Continued from above:\\n\\n>>>\\n- There is no justification of using *causal* self-attention...\\n<<<\\n\\nAttention to the source embeddings is a natural and justified way to inform the discrete latents (see e.g., [2]). Also, the attention to source sequences for generating the discrete latents from the targets is not causal. The only causal attention layers are for encoding the inputs and in the autoregressive decoder from the latents. \\n\\n>>>\\n- As for the experimental evaluation results: it seems that distillation...\\n<<<\\n\\nIn page 7 of the current draft (and page 6 of the original submission), we say \\\"Additionally, we see a large improvement in the performance of the model by using sequence-level distillation (12), as has been observed previously in non-autoregressive models (6; 16).\\\" We have also added a sentence to this effect in the conclusion in the updated draft.\\n\\n>>>\\n- What is the significance of the observed differences in BLEU scores? ...\\n<<<\\n\\nWe point the reviewer to [1, 2, 3, 4] which are the current state-of-the-art literature on non-autoregressive machine translation. None of these works report average or std devs on several runs, instead they select the best hyperparameter from a validation set and report the result of this model on a held out test set (which is a perfectly valid thing to do).\\n\\n>>>\\n- It seems that the tuning of the number of discrete latent codes...\\n<<<\\n\\nThe optimal hyperparameters are selected on the validation set (WMT'13) while the reported results are on the held out WMT'14 test set. This is standard practice in the NMT literature. We have made this more explicit in the latest draft.\\n\\n>>>\\n- It seems that all curves in figure 3 collapse from about 45 BLEU...\\n<<<\\n\\nWe have made this figure larger so that it is easier to read. The figure is intended to show the robustness of the EM runs vs the VQ-VAE runs: the collapsed curve is a VQ-VAE run with bad initialization, while the other superimposed curves are different EM runs of the same configuration with various values of the number of samples. 
\\n\\n[1] https://openreview.net/forum?id=B1l8BtlCb\\n[2] http://proceedings.mlr.press/v80/kaiser18a/kaiser18a.pdf\\n[3] https://openreview.net/forum?id=r1gGpjActQ\\n[4] https://arxiv.org/abs/1802.06901\"}", "{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We thank the reviewer for taking the time to read our paper. Below we address the specific points raised by the reviewer:\\n\\n>>>\\nOverall the technical writing in the paper is sloppy....\\n<<<\\n\\nIn this work, we improve upon VQ-VAE to learn shorter latent representations of a target sentence in order to speed up MT, rather than to train a generative model. We achieve considerable speedup in decoding state of the art NMT models without much loss in BLEU (a universally accepted metric for translation quality), which has powerful implications for real world, production level MT systems. While evaluating the improvements of our training for generative modeling is interesting, our focus is on using VQ-VAE for a practical task. \\n\\nMoreover, we have now added a paragraph on the generative process (Page 3). We hope that this will clarify some of the content. We welcome the reviewer to share what they think is \\\"sloppy\\\" and \\\"imprecise\\\", and what would help us further improve the content of the paper.\\n\\n>>>\\nThe technical presentation of the work by the authors starts only at page 5...\\n<<<\\n\\nOur goal is to use the autoencoder from VQ-VAE as a tool to compress the target sentence for fast decoding. We therefore chose to focus on the part of the algorithm, describing it's connection to hard-EM and our improvements on it using EM. We would appreciate concrete suggestions to improve the content.\\n\\n>>>\\nQuantitative experimental evaluation is limited to a machine translation task...\\n<<<\\n\\nThe main focus of our work is to design a better non-autoregressive machine translation model and which is an area of active research (see for e.g., [1, 2, 3, 4]). None of those works evaluate their proposed method on datasets other than machine translation because the goal of their work is non-autoregressive MT. We do not care about generative modeling of images with VQ-VAE because plenty of other models do it much better (for e.g., a GAN/VAE/PixelCNN++).\", \"the_keywords_of_our_paper_states\": \"\\\"machine translation, vector quantized autoencoders, non-autoregressive, NMT\\\", while the TL;DR of our submission is \\\"Understand the VQ-VAE discrete autoencoder systematically using EM and use it to design non-autogressive translation model matching a strong autoregressive baseline.\\\"\\n\\n>>>\\n- The related work section (4) provides a rather limited overview of relevant related work...\\n<<<\\n\\nAgain, the main aim of our work is to speed up the decoding for real world Neural Machine Translation (NMT) systems, which is an active area of research (see e.g., [1, 2, 3, 4]). We have focussed on generative models that are practically relevant to non-autoregressive NMT and because of page limitations we have not been able to include every paper on generative modeling. 
If we have missed relevant references we would appreciate if the reviewer would let us know what they are.\\n \\n[1] https://openreview.net/forum?id=B1l8BtlCb\\n[2] http://proceedings.mlr.press/v80/kaiser18a/kaiser18a.pdf\\n[3] https://openreview.net/forum?id=r1gGpjActQ\\n[4] https://arxiv.org/abs/1802.06901\"}", "{\"title\": \"Reply to reviewer 2\", \"comment\": \"We thank the reviewer for taking the time to read our paper and for the useful comments to help improve our presentation! We have increased the resolution of the images by moving some of them to the appendix, and hope that fixes the visibility issue for the figures. We have also fixed the typo - thanks for pointing it out! We have added the two references pointed out and have also fixed the bibliography style to be the ICLR style. Please let us know if we can improve anything else.\"}", "{\"title\": \"Training procedure for VQ-VAE is equivalent to the EM algorithm\", \"review\": \"General:\\nThe paper presents an alternative view on the training procedure for the VQ-VAE. The authors have noticed that there is a close connection between the original training algorithm and the well-known EM algorithm. Then, they proposed to use the soft EM algorithm. In the experiments the authors showed that the soft EM allows to obtain significantly better results than the standard learning procedure on both image and text datasets.\\n\\nIn general, the paper shows a neat link between the well-known EM algorithm and the learning method for the VQ-VAE. I like the manner the idea is presented. Additionally, the results are convincing. I believe that the paper will be interesting for the ICLR audience.\", \"pros\": [\"The connection between the EM algorithms and the training procedure for the VQ-VAE is neat.\", \"The paper is very well written, all concepts are clear and properly outlined.\", \"The experiments are properly performed and all results are convincing.\"], \"cons\": [\"The paper is rather incremental, however, still interesting.\", \"The quality of Figure 1, 2 and 3 (especially Figure 3) is unacceptable.\", \"There is a typo in Table 6 (row 5: V-VAE \\u2192 VQ-VAE).\", \"I miss two references in the related work on training with discrete variables: REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2018).\", \"The paper style is not compliant with the ICLR style.\", \"--REVISION--\", \"I would like to thank authors for their effort to improve quality of images. In my opinion the paper is nice and I sustain my initial score.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A soft-EM training algorithm for vector-quantized autoencoders\", \"review\": \"Summary:\\n\\nThis paper presents a new training algorithm for vector-quantized autoencoders (VQVAE), a discrete latent variable model akin to continuous variational autoencoders.\\nThe authors propose a soft-EM training algorithm for this model, that replaces hard assignment of latent codes to datapoints with a weighted soft-assignment.\\n\\nOverall the technical writing in the paper is sloppy, and the presentation of the generative model takes the form of an algorithmic description of the training algorithm, rather than being a clear definition of the generative model itself.\\n\\nThe technical presentation of the work by the authors starts only at page 5 (taking less than a full page), after several pages of imprecise presentation of previous and related work. 
The paper could be significantly improved by making this preceding material more concise and rigorous. \\n\\nQuantitative experimental evaluation is limited to a machine translation task, which is rather uncommon in the literature on generative latent variable models. I would expect evaluation in terms of held-out data log-likelihood (ie bits-per-dimension) used in probabilistic generative models, and possibly also using measures from the GAN literature such as inception scores. Datasets that are common include CIFAR-10 and resized variants of the imagenet dataset.\", \"specific_comments\": [\"Please adhere to the ICLR template bibliography style, which is far more readable than the style that you used.\", \"Figure 1 does not seem to be referenced in the text.\", \"The last paragraph of section 2.1 is unclear. It mentions a sampling a sequence of latent codes. The notion of sequentiality has not been mentioned before, and it is not clear what it refers to in the context of the model defined so far up to that point.\", \"The technical notation is very sloppy.\", \"In numerous places the paper refers to the joint distribution P(x1,\\u2026,x_n, z1, \\u2026, zn) without defining that the distribution factorizes across the samples (xi,zi), and without specifying the forms of p(zi) and p(xi|zi).\", \"This makes that claims such as \\u201ccomputing the expectation in the M step (Equation 11) is computationally infeasible\\u201d are not verifiable.\", \"Please be clear about how much is gained by replacing the exact M-step with a the one based on the samples from the posterior computed in the E-step.\", \"What is the reason to decode the weighted average of the embedding vectors, rather than decoding all of them, and updating the decoder in a weighted manner?\", \"reference 14 for Variational autoencoders is incorrect, please use the following citation instead:\", \"@InProceedings{kingma14iclr,\", \"Title = {Auto-Encoding Variational {B}ayes},\", \"Author = {D. Kingma and M. Welling},\", \"Booktitle = {{ICLR}},\", \"Year = {2014}\", \"}\", \"The related work section (4) provides a rather limited overview of relevant related work.\", \"Half of it is dedicated to recent advances in machine translation, which does not bear a direct connection to the technical material presented in section 3.\", \"There is no justification of using *causal* self-attention on the source embedding, is this a typo?\", \"As for the experimental evaluation results: it seems that distillation is a much more critical factor to achieve good performance than the proposed EM training of the VQ-VAE model. Unfortunately, this fact goes unmentioned when discussing the experimental results.\", \"What is the significance of the observed differences in BLEU scores? Please report average performance and standard deviations over several runs with randomized parameter initialization and batch scheduling.\", \"It seems that the tuning of the number of discrete latent codes (table 2 in appendix) and other hyper-parameters (table 3 in appendix) was done on the test set, which is also used to compare to related work. A separate validation set should be used for hyper parameter tuning in machine learning experiments.\", \"It seems that all curves in figure 3 collapse from about 45 BLEU to values around 17 BLEU, why is this? 
The figure is hard to read due to its poor quality and superposed curves.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Experimental section\", \"review\": \"This paper introduces a new way of interpreting the VQ-VAE,\\nand proposes a new training algorithm based on soft EM clustering. \\n\\nI think the technical aspect of this paper is written concisely. \\nIntroducing the interpretation as hard EM seems natural to me, and the extension\\nto the soft EM training sounds reasonable. \\nMathematical complication is limited; this is also a plus for many non-expert readers. \\n\\nI'm feeling difficulties in understanding the experimental part.\\nTo be honest, I think the experimental section is highly unorganized, not a quality for ICLR submission. \\nI'm just wondering why this happens, given clean and organized technical sections...\\n\\nFirst, I'm confusing what is the main competent in the Table 1. \\nIn the last paragraph of the page 6, it reads; \\n\"Our implementation of VQ-VAE achieves a significantly better BLEU score and faster decoding speed compared to (10).\"\\nHowever, Ref. (10) is not mentioned in the Table 1. Which BLEU is the score of Ref. (10)? \\n\\nSecond, terms \"VQ-VAE\", (soft?)\"EM\" and \"our {model, approach}\" are used in a confusing manner. \\nFor example, in Table 1, below the row \"Our Results\", there are:\\n- VQ-VAE\\n- VQ-VAE with EM\\n- VQ-VAE + distillation\\n- VQ-VAE with EM + distillation\\n\\nThe \"VQ-VAE\" is not the proposed model, correct? \\nMy understanding is that the proposal is a VQ-VAE solved via soft EM, which corresponds to \"VQ-VAE with EM\". \\n\\nThird, a paragraph \"Robustness of EM to Hyperparameters\" is mis-leading. \\nThe figure 3 does not show the robustness against a hyperparameter. \\nIt shows the BLEU against the number of \"samples\" (in fact, there is no explanation about what the \"samples\" means). \\nI think hyperparameters are model constants such as the learning rate of the SGD, alpha-beta params for Adam, dimension of hidden units, number of layers, etc. The number of samples are not considered as a model hyperparameter; it's a dataset property. \\nThe figure 5 shows the reconstructed images of the original VQ-VAE and the proposed VQ-VAE with EM. \\nHowever, there is no explanation which hyperparameter is tested to assess \"the robustness to hyperparameters\". \\n\\nFourth, there is no experimental report on the image reconstructions (with CIFAR and SVHN) in the main manuscript. \\nIn fact, there is a short paragraph that mentions about the SVHN results, \\nbut it only refers to the appendix. \\nI think appendix is basically used for additional results or proofs, that are not essential for the main message of the paper. \\nHowever, performance in the image reconstruction is one of the main claims written in the abstract, the intro, etc. \\nSo, the authors should include the image reconstruction results in the main body of the paper. \\nOtherwise, claims about the image reconstructions should be removed from the abstract, etc. \\n\\n\\n+ Insightful understanding of the VQ-VAE as hard EM clustering\\n+ Natural and reasonable extension to soft-EM based training of the VQ-VAE\\n-- Unorganized experimental section. This simply ruins the quality of the technical part. \\n\\n\\n## after feedback\\n\\nSome of my concerns are addressed in the feedback. 
\\nConsidering the interesting technical parts, I raise the score upward, to the positive side.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
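The hard-EM versus soft-EM distinction running through the record above is easy to make concrete. Below is a minimal NumPy sketch of one soft-EM codebook update of the kind described, a plain responsibility-weighted variant rather than the authors' exact Monte-Carlo EM, with all names our own:

```python
import numpy as np

def soft_em_codebook_step(z_e, codebook, temperature=1.0):
    """One soft-EM update of a VQ codebook.
    z_e: encoder outputs, shape (n, d); codebook: shape (K, d)."""
    # E-step: responsibilities from negative squared distances (soft k-means).
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    resp = np.exp(logits)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: move each code to the responsibility-weighted mean of its inputs.
    mass = resp.sum(axis=0)  # (K,)
    new_codebook = (resp.T @ z_e) / np.maximum(mass, 1e-8)[:, None]
    return new_codebook, resp
```

As the temperature goes to zero, the responsibilities collapse to one-hot nearest-neighbour assignments, recovering the hard-EM view of the original VQ-VAE update that the record's reviews discuss.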
rJxMM2C5K7
Nested Dithered Quantization for Communication Reduction in Distributed Training
[ "Afshin Abdi", "Faramarz Fekri" ]
In distributed training, the communication cost due to the transmission of gradients or the parameters of the deep model is a major bottleneck in scaling up the number of processing nodes. To address this issue, we propose dithered quantization for the transmission of the stochastic gradients and show that training with Dithered Quantized Stochastic Gradients (DQSG) is similar to training with unquantized SGs perturbed by independent bounded uniform noise, in contrast to other quantization methods, where the perturbation depends on the gradients, complicating the convergence analysis. We study the convergence of training algorithms using DQSG and the trade-off between the number of quantization levels and the training time. Next, we observe that there is a correlation among the SGs computed by workers that can be utilized to further reduce the communication overhead without any performance loss. Hence, we develop a simple yet effective quantization scheme, nested dithered quantized SG (NDQSG), that can reduce the communication significantly without requiring the workers to communicate extra information to each other. We prove that although NDQSG requires significantly fewer bits, it can achieve the same quantization variance bound as DQSG. Our simulation results confirm the effectiveness of training using DQSG and NDQSG in reducing the communication bits or the convergence time compared to the existing methods without sacrificing the accuracy of the trained model.
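The subtractive dithering that the abstract above builds on reduces to a few lines of code. A sketch under the usual shared-seed assumption, where worker and server regenerate the same dither (`delta` is the quantization step; the names are ours, not the paper's):

```python
import numpy as np

def dq_encode(g, delta, seed):
    """Worker side: subtractive dithered quantization of a gradient vector g."""
    u = np.random.default_rng(seed).uniform(-delta / 2, delta / 2, size=g.shape)
    return np.round((g + u) / delta).astype(np.int64)  # only these integers are sent

def dq_decode(q, delta, seed, shape):
    """Server side: regenerate the same dither from the shared seed, rescale, subtract."""
    u = np.random.default_rng(seed).uniform(-delta / 2, delta / 2, size=shape)
    return q * delta - u
```

With subtractive dither, the reconstruction error is uniform on (-delta/2, delta/2) and independent of g, which is exactly the property the abstract contrasts with the gradient-dependent perturbations of other schemes.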
[ "machine learning", "distributed training", "dithered quantization", "nested quantization", "distributed compression" ]
https://openreview.net/pdf?id=rJxMM2C5K7
https://openreview.net/forum?id=rJxMM2C5K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HygN3VQJx4", "B1gs7jQ0AQ", "rJeqaFp30m", "SygVWg7BCX", "BJgVE2fr0X", "ryxWehzSC7", "rJlDh7B0h7", "r1gMEiLihm", "BylmK1YB3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544660140087, 1543547683204, 1543457217828, 1542955004075, 1542954028360, 1542953961400, 1541456815160, 1541266218421, 1540882299294 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1245/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1245/Authors" ], [ "ICLR.cc/2019/Conference/Paper1245/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1245/Authors" ], [ "ICLR.cc/2019/Conference/Paper1245/Authors" ], [ "ICLR.cc/2019/Conference/Paper1245/Authors" ], [ "ICLR.cc/2019/Conference/Paper1245/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1245/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1245/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers found that the paper needs more compelling empirical study.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}", "{\"title\": \"on the complexity of the algorithm\", \"comment\": \"Thanks for the feedback.\\n\\nI would like to mention that the complexity of the dithered quantization at the workers is similar to the other stochastic quantization methods such as TernGrad and QSGD. Hence, at the worker side, the complexity of the algorithm would be the same.\\nHowever, for dequantization, our method requires the server (or aggregation node) to regenerate the random numbers and then taking their average, which is not of much computational complexity. This can be done while the workers are computing their SGs and the server is waiting to receive their data. Hence, we believe that at each iteration of distributed training, using dithered quantization would not increase the complexity or the required time for computing the average of received SGs from workers.\"}", "{\"title\": \"Not enough for me to change my review.\", \"comment\": \"Thanks for the rebuttal. I still find that the practical impact of this method is not clear.\\n\\nFor one thing, this method needs low level change in the training framework. And if there is no clear quality or performance gain, it becomes hard to justify the extra complexity.\"}", "{\"title\": \"added simulations\", \"comment\": \"Thanks for the comments and suggestions to improve the paper.\\n\\nWe have added two more figures in the Experiments section to highlight how the dithered quantization scheme may improve the convergence rate of the distributed training. We would like to point out that for the presented simulation results, the convergence speed of our proposed method is better than the existing methods and including the baseline (communication without quantization). We argue that this is mainly due to the fact that our method basically adds a controlled independent noise to the stochastic gradients which may help with the convergence, consistent with the findings in [Neelakantan et al. 2015] and [Noh et al. 2017].\"}", "{\"title\": \"clarifications on the contribution of the work\", \"comment\": \"We appreciate the reviewer's comment. 
However, as there are no direct comments or questions on the paper by the reviewer to justify his/her ratings, we just briefly state some of our contributions in this paper:\\n\\n1- We have considered using dithered quantization for the communication of SG (or in general, parameters' updates) in a distributed training setting. The advantage of using our proposed quantization scheme is that, unlike all other existing methods where the added noise due to the quantization depends on the values of SG, here the quantization noise is independent of the SG values. This ensures that almost all existing convergence results on training with SG or its variants are readily applicable to the quantized distributed training algorithm, without much modification. (see e.g., Thm. 4)\\n\\n2- We have analyzed how the number of workers and the quantization precision (or, equivalently, the number of bits) affect the training times (see Thm. 5 and equation 5)\\n\\n3- We provided a nested scheme to further reduce communication without sacrificing the precision of quantization. For example, theoretically we could achieve the accuracy of two-bit quantization with only 1 bit in a distributed setting. (see Thm. 6 and the discussion after)\\n\\n4- Finally, we provided some simulation results to experimentally verify the algorithm. \\n\\n\\n\\nNote that the proofs of all the claims and theoretical results are provided in the appendix.\"}", "{\"title\": \"clarification on the simulations and comparisons w.r.t. other methods\", \"comment\": \"We appreciate the reviewer's comments and suggestions.\\n\\n- Regarding the comparison of dithered quantization with One-bit, TernGrad and QSGD:\\n\\nWe would like to point out that, due to the independence of the noise from the SGs in our scheme, the proposed distributed training algorithm is expected to behave consistently well irrespective of the database or the neural network.\\n\\nThe results shown in the paper only reflect the final accuracy of the model after enough iterations of the training algorithm that the models have almost converged. Hence, they merely show the effect of the distributed training on the final accuracy, not how fast the models converge. To address this issue, we have added two figures in the paper showing the accuracy vs iteration for CIFAR10 with 4 and 8 workers, for the first 500 iterations of training. We would like to point out that (a) the convergence speed of our proposed method is better than the existing methods, including the baseline method (communication without quantization). We argue that this is mainly due to the fact that our method basically adds a controlled independent noise to the stochastic gradients, which may help with the convergence, consistent with the findings in [Neelakantan et al. 2015] and [Noh et al. 2017]. (b) As the number of workers increases, since the average of the received quantized SGs is computed and used for training, the quantization noise decreases proportionately. Hence, the performance gap between almost all of the quantization methods for distributed training will vanish eventually.\\n\\n\\n\\n- Regarding the comment on the NDQSG:\\n\\nNote that the main contribution of our work is the dithered quantization and its theoretical analysis in distributed training. However, we mentioned NDQSG to further reduce the communication by exploiting the correlations among the SGs computed by the workers. The performance of NDQSG depends on the amount of correlation among the SGs computed by the workers. 
The probability of error in distributed communication using NDQSG is bounded by Thm. 6 and equation 8. We advise using NDQSG whenever the correlation is significant. As shown in Figures 6(a) and 6(b), when the correlations among the SGs computed by the workers are high, using NDQSG can reduce the communication cost. However, in Fig. 6(c), since the noise in the SG computation is high, using NDQSG sometimes fails to estimate the true SG, adding some error into the estimation. This can slow down the convergence speed of the distributed training algorithm in some situations.\"}", "{\"title\": \"Nested Dithered Quantization for Communication Reduction in Distributed Training\", \"review\": \"In this paper, the authors propose to apply dithered quantization (DQ) to the stochastic gradients computed through the training process. Though extra noise is added to the gradient, it improves the quantization error. Hence, after the noise is removed at the update server, it achieves superior results when compared against the unquantized baseline.\\n\\nThe authors also propose a nested scheme to further reduce communication cost.\\n\\nThis method strictly improves over previous approaches such as QSGD and TernGrad in terms of quantization error. However, the improved quantization performance does not show up in the experiments. In Table 3, it is clear that DQSG does not significantly improve over QSG and TernGrad once there are 8 workers. And they all use the same amount of bits in communication.\\n\\nThough the proposed NDQSG is capable of reducing the communication cost by 30%, its accuracy on CIFAR-10 shows a noticeable drop.\\n\\nOverall, I think this method is promising, but further tuning is required to make it practical.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper but the contribution is not good enough\", \"review\": \"Overall, this paper is well written and clearly presents its contribution.\\nAlthough the idea seems to be interesting and novel, there is not enough evidence to prove its efficiency, from both theoretical and numerical perspectives, even though many numerical experiments are presented.\\nIn general, this paper is high level in the articles assigned to me.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Establishes a useful connection between distributed optimization and dithered quantization\", \"review\": \"The authors establish a connection between communication reduction in distributed optimization and dithered quantization. This allows us to understand prior approaches from a new perspective, and also motivates the authors to develop two new distributed training algorithms whose communication overhead is significantly reduced. The first algorithm, DQSG, uses dithered quantization to reduce the communication bits. The second algorithm, NDQSG, uses nested dithered quantization to further reduce the amount of needed communication. The usefulness of these algorithms is empirically validated by computing the raw communication bits and their average entropy. Therefore, dithered communication seems to provide both theory and algorithms which are useful.\\n\\nThe paper is clearly written. 
It provides a succinct review of dithered quantization and previous works, and the figures provide good insight into why the algorithm works, especially Figure 3.\\n\\nThe theorems in this paper are mostly about plugging properties of dithered quantization into standard results in stochastic optimization, but they are still useful. The analysis of NDQSG does not seem to be as complete as that of DQSG, however. With NDQSG, workers are now divided into two groups, and there would be an interesting tradeoff between assignments to these two: how should we balance the two groups? This might be tricky to analyze, but it is still useful to clarify limitations and provide conjectures. At least, this could be analyzed empirically.\\n\\nPros:\\n- establishing a connection to another topic of research often facilitates productive collaboration between two fields\\n- provides a new perspective to understand prior work\\n- provides new useful algorithms\\n\\nCons:\\n- experiments were conducted on small models and small datasets\\n- unclear whether the models are large enough to demonstrate the need for communication reduction; in other words, it is unclear whether wall-time would actually be reduced with these algorithms.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
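To make the nested scheme (NDQSG) from the record above concrete: a worker sends only the fine-lattice index modulo a nesting ratio, and the server resolves the ambiguity using a correlated gradient as side information. A minimal scalar-lattice sketch, our own simplification with hypothetical names; correct recovery assumes the side information lies within roughly half a coarse cell (ratio * delta / 2) of the true gradient:

```python
import numpy as np

def ndq_encode(g, delta, ratio, seed):
    """Send round((g + u) / delta) modulo `ratio`: log2(ratio) bits per entry
    instead of the full fine-lattice index."""
    u = np.random.default_rng(seed).uniform(-delta / 2, delta / 2, size=g.shape)
    return np.mod(np.round((g + u) / delta), ratio)

def ndq_decode(r, side_info, delta, ratio, seed):
    """Among all fine-lattice points congruent to r (mod ratio), pick the one
    closest to where the correlated side-information gradient lands."""
    u = np.random.default_rng(seed).uniform(-delta / 2, delta / 2, size=r.shape)
    t = (side_info + u) / delta  # side info mapped onto the fine lattice
    q_hat = r + ratio * np.round((t - r) / ratio)
    return q_hat * delta - u
```

This is how, as the authors' third listed contribution puts it, the accuracy of two-bit quantization can in principle be reached while transmitting only one bit.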
HkxMG209K7
An Alarm System for Segmentation Algorithm Based on Shape Model
[ "Fengze Liu", "Yingda Xia", "Dong Yang", "Alan Yuille", "Daguang Xu" ]
It is usually hard for a learning system to predict correctly on rare events, and segmentation algorithms are no exception. Therefore, we hope to build an alarm system that sets off alarms when a segmentation result is possibly unsatisfactory. One plausible solution is to project the segmentation results into a low-dimensional feature space, and then learn classifiers/regressors in the feature space to predict the qualities of segmentation results. In this paper, we form the feature space using the shape feature, which is strong prior information shared across different data, so it is capable of predicting the qualities of segmentation results produced by different segmentation algorithms on different datasets. The shape feature of a segmentation result is captured using the value of the loss function when the segmentation result is tested using a Variational Auto-Encoder (VAE). The VAE is trained using only the ground truth masks; therefore, segmentation results with bad shapes become rare events for the VAE and result in large loss values. By utilizing this fact, the VAE is able to detect all kinds of shapes that are out of the distribution of normal shapes in the ground truth (GT). Finally, we learn a representation in the one-dimensional feature space to predict the qualities of segmentation results. We evaluate our alarm system on several recent segmentation algorithms for the medical segmentation task. The segmentation algorithms perform differently on different datasets, but our system consistently provides reliable predictions of the qualities of segmentation results.
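The pipeline in the abstract above comes down to two steps: score a predicted mask by the loss a VAE trained only on ground-truth masks assigns to it, then calibrate that scalar to a quality metric. A sketch in which the `mask_vae.loss` interface is hypothetical, while the linear calibration a*S + b mirrors the notation the authors use in their response to Reviewer1 later in this record:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def shape_score(mask, mask_vae):
    """Shape feature for one predicted mask: the loss assigned by a VAE trained
    only on ground-truth masks; bad shapes are rare events for this VAE and
    therefore receive large loss values."""
    return mask_vae.loss(mask)

def fit_alarm(scores, dice_scores):
    """Calibrate the 1-D feature to a quality metric (e.g. Dice) via a*S + b."""
    s = np.asarray(scores).reshape(-1, 1)
    return LinearRegression().fit(s, dice_scores)

# predicted_quality = fit_alarm(train_scores, train_dice).predict(
#     [[shape_score(new_mask, mask_vae)]])
```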
[ "segmentation evaluation", "shape feature", "variational auto-encoder" ]
https://openreview.net/pdf?id=HkxMG209K7
https://openreview.net/forum?id=HkxMG209K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkl5-a26JN", "rkxfla79Am", "B1g4YhX50X", "BygYm3X9A7", "SygsgjmqCm", "Bkxr7N33pm", "BJeTwti2p7", "BJeY9XzphQ", "SJeR97Wi27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544568066390, 1543285993654, 1543285883940, 1543285793409, 1543285491174, 1542403100703, 1542400357268, 1541378961085, 1541243798445 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1244/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1244/Authors" ], [ "ICLR.cc/2019/Conference/Paper1244/Authors" ], [ "ICLR.cc/2019/Conference/Paper1244/Authors" ], [ "ICLR.cc/2019/Conference/Paper1244/Authors" ], [ "ICLR.cc/2019/Conference/Paper1244/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1244/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1244/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1244/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The authors present a method using a VAE to model segmentation masks directly. Errors in reconstruction of masks by the VAE indicate that the mask may be outside the distribution of common mask shapes, and are used to predict poor quality segmentation scenarios that fall outside the distribution of common segmentations.\", \"pros\": [\"R2: Technical idea is interesting, and a number of baselines used to compare.\", \"R1 & R4: Method is novel.\"], \"cons\": [\"R3 & R4: The method ignores the original input in its prediction, making the method wholly reliant on shape priors. In situations where the shape prior is weak, the method may be expected to fail. Authors have confirmed this, but not added any experiments to quantify its effect.\", \"R4: The baseline regressor method is missing key details, which makes it impossible to judge if the comparison is fair (i.e. at minimum, number of learned parameters for each model, number of convolutional layers, structure of network, etc.). Authors have not provided these details. Authors have not investigated datasets with weak shape prior to see how methods compare in this setting.\", \"R2: GANs can be used as a baseline. Authors confirmed, but did not supply results.\", \"Reviewers generally agree that the idea is novel, but the value of the approach cannot be determined due to missing baseline experiments, and missing details of baselines. Recommend reject in current form, but encourage authors to complete experiments.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Promising direction of research for detecting poor quality segmentation, but further experiments and analysis must be completed.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer for the constructive feedback.\", \"q1\": \"Missing literatures.\\nThanks for sharing these related work with us.\\nFor the ACNN[1] work, they used an additional loss function to evaluate the difference between the encoded segmentation prediction and encoded label map to guide the training. It is obvious that this loss function can indicate the quality of a segmentation prediction, but to calculate this loss function requires a label map, while in our work we focus on evaluating the segmentation prediction WITHOUT ground truth. We don\\u2019t see the close connection between this work and ours.\\nFor the other literatures, we also add a paragraph in Section 2 to discuss the connection and difference. 
The main message is as follows: [2] uses a registration-based method for label propagation. It is a reliable method because it incorporates the image prior by setting up a reference dataset. The problem with this method is inefficient testing: every single case needs to be registered against all reference data to determine the quality, and registration of 3D images is usually very slow. In addition, registration-based methods can hardly be transferred between datasets or modalities.\\nFor [3][4], they use unsupervised methods to estimate segmentation quality using geometrical and other features; however, their applicability in medical settings is unclear, as is also mentioned in [2].\", \"q2\": \"Segmentation with good shape but low quality\\nThis situation can indeed occur in theory, but it is very rare. In our experiments, only 1 out of 373 cases has a relatively plausible shape but a very low Dice score (see Figure 2, sub-figure row 2, col 4: one blue point is very far from the line y=x).\", \"q3\": \"High size/shape variation\\nIn this work, we focus on learning the prior of shape, so we only run experiments on organs that have stable shapes. We show that this prior can be learned well and shared between different datasets, and can be used to predict segmentation quality without ground truth. Taking tumors as an example, a prior on texture may be more important, so we may use other methods to handle the texture prior in future work.\", \"q4\": \"Image alignment\\nThe training data are first aligned. However, during the training process, we alleviate the dependency on alignment by augmenting the data with rotation, translation, scaling, etc. During testing, no alignment is needed. As shown in our experiments, our method achieves good performance and does not show a clear dependency on data alignment.\", \"q5\": \"local hint\\nPredictions from 3D segmentation methods are usually smooth, so local information is not always useful here. But 2D segmentation methods like DeepLab are trained on slices along the axial axis, so the whole 3D prediction may have a non-smooth boundary along that axis. We think that is why the performance when evaluating DeepLab is not as good as when evaluating other segmentation algorithms (see Table 2). It would be interesting to explore this further in the future.\", \"q6\": \"Different modalities\\nOur method is actually appearance-independent. We show this by running experiments across different datasets. Although we only use CT datasets, the difference between datasets is already large: the performance of a state-of-the-art segmentation algorithm trained on NIH drops dramatically when tested on the other two datasets, yet our approach still gives reliable quality predictions for the segmentations on those two datasets. This shows that the shape prior can be shared between datasets. So, across modalities (e.g., between MRI and CT), as long as the shape domain does not change, the performance on MRI should remain the same.\", \"q7\": \"Adapted for DeepLab-3\\nAlthough DeepLab is a 2D-based method, it can be applied to a 3D CT scan slice by slice, finally generating a 3D segmentation mask. 
As our method only takes the segmentation mask as input, there is no problem adapting it for DeepLab.\\n \\n[1] Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation\\n[2] Reverse classification accuracy: Predicting segmentation performance in the absence of ground truth\\n[3] Unsupervised performance evaluation of image segmentation\\n[4] A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"We thank the reviewer for the constructive feedback.\", \"q1\": \"S(F(X); \\u03b8) itself as an estimator\\nWe have observed that S(F(X); \\u03b8) has a strong linear correlation with the real quality, but their values do not exactly match, which is why we add a linear regression on S(F(X); \\u03b8). Only using S(F(X); \\u03b8) itself would change the MAE of our method in Table 1 to 3.85, 7.62, and 5.40, respectively.\", \"q2\": \"Confusing sentence in the paper\\nSorry for the confusion. By saying that VAE, compared with AE, has stronger representation capability, we mean that VAE can better learn the statistical prior of the segmentation mask. For example, we have found that AE can better reconstruct the ground truth segmentation mask but does worse than VAE in quality prediction; that is, AE tries to reconstruct everything perfectly, while VAE only tries to reconstruct well the segmentation masks that are close to the ground truth. The difference arises because VAE adds a constraint on the latent space through the KL divergence, while AE has no such constraint.\", \"q3\": \"Confusing notation\\nHere L(S(F(X); \\u03b8); a, b) = a S(F(X); \\u03b8) + b, and E(S(F(X); \\u03b8); a, b) = ||a S(F(X); \\u03b8) + b - L(F(X), Y)||_2 is the loss function in the second step.\", \"q4\": \"Missing literatures.\\nIn our revised paper, we add the missing literature related to our work and discuss the connections and differences.\"}", "{\"title\": \"Response to Reviewer4\", \"comment\": \"We thank the reviewer for the constructive feedback.\", \"q1\": \"Missing literatures.\\nThanks for sharing this related work with us. In our revised paper, we add a paragraph in Section 2 to discuss the connections and differences. The main message is as follows: the existing methods you mentioned [1][2] make use of the softmax output in the last layer of a classifier to calculate the out-of-distribution level. In our case, however, for a segmentation method, we can only get a voxel-wise out-of-distribution level using the methods you mentioned; how to calculate the out-of-distribution level for the whole case then becomes another problem. Also, for most background voxels, the segmentation algorithm will predict background very confidently, making the out-of-distribution level on those voxels less representative. The idea of using the activation value is similar to the uncertainty-based methods mentioned in our paper, so we expect them to have similar performance. Finally, using the softmax output from the classifier makes the out-of-distribution detector dependent on the classifier used, while our method can deal with different segmentation methods, as shown in the experiments.\", \"q2\": \"Compare to more naive approaches\\nWe have already tried a na\\u00efve approach (see Table 1, direct regression) which takes only the segmentation mask as input and regresses the quality indicator. That method does not work very well. 
We also tried taking both the original image and the segmentation mask as input, which performs roughly the same. When running these two baseline methods, we first apply rotation, translation, and scaling to the CT scans and test the segmentation algorithm on the augmented CT dataset to generate the predicted segmentation masks. Then we feed these masks into a regressor to predict the quality. When training the regressor, we generated about 4000 3D segmentation masks as training data, but it still does not work as well as our method.\", \"q3\": \"For targets with larger shape variance\\nIn our work, we propose a VAE-based method that can learn a statistical prior of the segmentation mask. We show by experiment that this prior is important: it can guide the prediction of segmentation quality and can also be shared between different datasets, which is our main contribution. For targets with large shape variance (e.g., lesion segmentation), how to effectively combine texture (or image) information is a promising research direction. However, with more texture information, it becomes harder to work across different datasets.\\n \\n[1] A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS\\n[2] ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"We thank the reviewer for the constructive feedback.\", \"q1\": \"Is the shape a right feature to focus on\\nIt is a useful feature for organ segmentation. We show by experiment that it is more effective than other features.\", \"q2\": \"Confusing sentence in the paper\\nSorry for the confusion. By saying that VAE, compared with AE, has stronger representation capability, we mean that VAE can better learn the statistical prior of the segmentation mask. For example, we have found that AE can better reconstruct the ground truth segmentation mask but does worse than VAE in quality prediction; that is, AE tries to reconstruct everything perfectly, while VAE only tries to reconstruct well the segmentation masks that are close to the ground truth. The difference arises because VAE adds a constraint on the latent space through the KL divergence, while AE has no such constraint.\", \"q3\": \"Can GANs be used\\nIt would be interesting to try using GANs. We chose VAE because we thought its latent-space constraint might help, which proved to be true.\"}", "{\"title\": \"Interesting approach and seems to have adequate convincing experiments\", \"review\": \"Summary:\\n The paper tries to predict the quality of the output of a segmentation algorithm applied to medical images. The approach of this paper is to look at the \\\"true\\\" shape of the segmentation on the training samples and learn a VAE for the shape features of the training samples. For the test samples (that are new and are segmented only by the algorithm whose quality is to be predicted), a linear function of the loss function of the learnt VAE applied to the output of the segmentation is used to predict quality. The linear function is tuned to the VAE loss of the output of the specific segmentation algorithm on the training samples.\\n\\n The basic premise is that VAE minimizes the gap between the log likelihood of the true shape and the VAE loss function. Therefore, the gap should be small for \\\"good\\\" shapes while very bad for \\\"bad/wrong\\\" shapes. 
Therefore the VAE loss trained on the good shapes of the training examples can indicate the goodness of a segmentation algorithm's output.\", \"pros\": \"I think the authors have compared to a number of baselines on three medical imaging datasets and show that their method, via various metrics, clearly outperforms others on this specific medical imaging application.\\n\\nI like the primary technical idea behind the paper of detecting low quality outputs by projecting to the range space of a VAE and looking at its likelihood.\", \"cons\": \"1) I don't know about the a priori assumption that the shape of the segmentation will be the right feature to actually focus on. How general is this assumption for medical imaging tasks?\\n\\n2) Authors say - \\\"Variational autoencoder(VAE) (Kingma & Welling, 2013), compared with AE, has stronger representation capability\\\" - Why does the VAE have stronger representation capability? - I don't understand this part. Is it because it outputs the probabilities z given Y and Y given z that is somehow more useful?\\n\\n3) Can GANs be used instead of VAEs? Is there a natural loss function that could be used in this case during quality prediction?\", \"disclaimer\": \"I am not an expert in the area of medical image segmentation.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea, but evaluation and relevant background literature is not thorough.\", \"review\": \"The authors present a method to detect poor quality segmentation results by using a VAE to understand the statistical distribution of segmentation masks, and detect outliers from that distribution in predictions. The method is compared to a few baselines to show improved results.\", \"pros\": \"1) The idea seems slightly novel, simple, and elegant, with respectable results.\", \"cons\": \"1) This method is related to Out-of-Distribution (OOD) detection, which is an entire field unto itself. None of the relevant literature around OOD has been covered by this paper, including several recent ICLR papers:\\n\\nHendrycks, Gimpel, \\\"A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS\\\" ICLR 2017\\nLiang et al. \\\"ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS\\\" ICLR 2018\\n\\n2) The method is not compared to more naive approaches, such as building a network that takes as input both modalities of original image and segmentation mask, and predicts (classifies) poor quality. \\n\\n3) The method assumes segmentation masks have some strong statistical prior. This may be the case for organs, but can completely break down in other cases, such as skin lesion segmentation ( http://challenge2018.isic-archive.com ). In this circumstance, the reviewer questions whether the more naive approach in (2) above would work better.\\n\\n\\nReviewer believes the authors have a good line of research, but that it requires additional literature review and experiments before it is ready for publication.\", \"edit\": \"Reviewer has considered the response by the authors. Key details of the baseline regressor are missing, such as the exact network structure used. As a result: 1) Reviewer is unable to determine if the baseline is a proper, fair comparison. 2) Authors have confirmed the method's reliance on a strong shape prior, but this caveat is not clearly mentioned in the paper as a requirement for the method to work. 
Furthermore, authors did not quantify what effect this reliance has by adding experiments on the datasets with weak shape priors mentioned by the reviewer. As a result, reviewer is lowering the score. Reviewer encourages the authors to continue this line of research, but to carefully consider the feedback given to make the work stronger before publication.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting and novel way of quantifying the quality of segmentations\", \"review\": \"This paper explores the idea of having a VAE model the probability distribution of the real segmentations, in order to quantify the quality of the predicted segmentation (produced by another network). The paper refines this idea by applying regression over two parameters. The overall idea is interesting and novel to the best of my knowledge. Experimental results look convincing.\\n\\nThe paper does a good job at presenting the motivation, reads well in general, and is well written (except the paragraph Entropy Uncertainty in Sec. 4.2, which contains several typos).\", \"some_comments\": \"S(F(X); \\u03b8) looks good enough as an estimator. It would be good to see how it does by itself, reporting that as an ablation experiment, assessing how important it is to carry out the second step (fitting a, b).\\nIn the last paragraph of Sec. 2, I am not sure what is meant by \\\"Variational autoencoder(VAE) (Kingma & Welling, 2013), compared with AE, has stronger representation capability and can also serve as a generative model\\\". No doubt about the latter point, but not sure about the former.\\nSec. 3.3 is somewhat confusing; for example, should the E in eq. 9 be L?\", \"revision\": \"in light of the relevant papers brought up by AnonReviewer3 and AnonReviewer4, which have not been discussed in the paper, I modify my rating to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Evaluates the quality of segmentation using a shape prior learned from ground-truth masks with a variational autoencoder\", \"review\": \"This paper presents an approach to evaluate the quality of segmentations. To achieve this, a variational auto-encoder (VAE) is trained on the ground truth masks to extract shape-relevant information in the feature space, assuming that incorrect segmentations will be far from the normal distribution. Then, a regression model is trained to predict the quality of the segmentation based on the shape-learned features. The authors use several datasets focusing on pancreas segmentation to evaluate their quality-assessment approach, showing competitive performance with respect to other approaches.\\n\\nThe paper is well written, easy to follow in general, and the methodology is sound. Nevertheless, I have some concerns related to the applicability of this approach.\\n\\n- Closely related works in the literature are missing:\", \"there_is_a_closely_related_recent_work_that_used_auto_encoders_on_the_sets_of_ground_truth_masks_to_build_representations_of_shape_and_constrain_the_outputs_of_deep_networks\": \"Oktay et al., Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation, IEEE TMI 2017\\n\\nThis work does not focus directly on quality assessment. 
However, I believe the loss in this work, which evaluates the difference between the obtained segmentation (characterized by the outputs of a deep network) and an auto-encoder description of shape, can be used directly as a criterion for evaluating the quality of segmentation (on validation data) in term of consistency with the shape prior. I think this work is very closely related and should be discussed. \\n\\nAlso, a quick google search provided some missing references related to this work. I think including comparisons to the recent work in [1], for example, would be appropriate. As the focus is on quality assessment of medical image segmentation, I would suggest a deeper review of the literature.\\n\\n[1] Vanya V Valindria, Ioannis Lavdas, Wenjia Bai, Konstantinos Kamnitsas, Eric O Aboagye, Andrea G Rockall, Daniel Rueckert, and Ben Glocker. Reverse classification accuracy: Predict- ing segmentation performance in the absence of ground truth. IEEE Transactions on Medical Imaging, 2017. \\n[2] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, \\u201cUnsupervised performance evaluation of image segmentation,\\u201d EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 217\\u2013217, 2006. \\n[3] Gao H, Tang Y, Jing L, Li H, Ding H. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images. Sensors. 2017 Oct 24;17(10):2427.\\n\\n- The proposed quality assessment uses the learned shape features. Even though it is strong prior information, there might be situations where the predicted segmentation might be plausible in terms of shape, but is not a good segmentation. \\n\\n- I wonder how this approach works in problems with a high size/shape variation. For example, in the case of tumors, where their shape is unpredictable and each unknown case can be seen as a \\u2018rare\\u2019 example.\\n\\n- To better capture the shape in the proposed approach, images need to be aligned, which limits the applicability of this approach to aligned volumes only. \\n\\n- This approach gives a global hint about a given segmentation result, as a whole. I think it would be more interesting to provide local information on a segmentation, as it may happen that a predicted contour is generally correct, but there are some crispy borders in some points due to low contrast, for example. Even though the quality assessment would say that the prediction is correct, the contour may be unusable for certain applications, where a minimal surface distance is required (e.g., radiotherapy).\\n\\n- As the quality assessment is based on shape and not in image information, it would be interesting to see how accurately it predicts the performance on different image modalities (for example, the method is trained on ground truth masks corresponding to CT images and quality is assessed in segmentations performed in MRI).\\n\\nIf I understood correctly, comparison with other methods is done with the same dataset under the same conditions (i.e., all the images are pre-aligned). As the other methods might not have the limitation of requiring aligned images, it would be interesting to compare also the performances in this situation.\\n\\nHow the training (or the VAE) is adapted for DeepLab-3, as it is based on 2D convolutions?\", \"minor\": \"The paper needs a proof-read to fix some issues (e.g. 
\\u2018the properties of F is encoded\\u2019)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
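The two-step estimator that the notation in "Response to Reviewer1" above describes can be written out as follows; the decomposition of the VAE loss S into a reconstruction term plus a KL term is an assumed standard ELBO form (with assumed encoder/decoder names Enc, Dec), not a quotation from the paper:

\[
S\big(F(X);\theta\big) \;=\; \big\lVert F(X) - \mathrm{Dec}_\theta\big(\mathrm{Enc}_\theta(F(X))\big) \big\rVert_2^2 \;+\; D_{\mathrm{KL}}\!\big(q_\theta\big(z \mid F(X)\big) \,\big\Vert\, \mathcal{N}(0, I)\big),
\]
\[
\widehat{L}\big(F(X)\big) \;=\; a\, S\big(F(X);\theta\big) + b,
\qquad
(a, b) \;=\; \arg\min_{a,b} \sum_i \Big( a\, S\big(F(X_i);\theta\big) + b - L\big(F(X_i), Y_i\big) \Big)^2,
\]

where F(X) is the predicted segmentation mask, Y the ground-truth mask, L(F(X), Y) the real quality measure (e.g., the Dice score), and \widehat{L} the predicted quality available without ground truth.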
HkezfhA5Y7
A Rate-Distortion Theory of Adversarial Examples
[ "Angus Galloway", "Anna Golubeva", "Graham W. Taylor" ]
The generalization ability of deep neural networks (DNNs) is intertwined with model complexity, robustness, and capacity. Through establishing an equivalence between a DNN and a noisy communication channel, we characterize generalization and fault tolerance for unbounded adversarial attacks in terms of information-theoretic quantities. Invoking rate-distortion theory, we suggest that excess capacity is a significant cause of vulnerability to adversarial examples.
[ "adversarial examples", "information bottleneck", "robustness" ]
https://openreview.net/pdf?id=HkezfhA5Y7
https://openreview.net/forum?id=HkezfhA5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lTj44zg4", "r1gLCc78J4", "rJlBx_662m", "SyeNJUn52Q", "HkgXexY9nX" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544860837113, 1544071886122, 1541425132937, 1541223899946, 1541210090564 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1242/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1242/Authors" ], [ "ICLR.cc/2019/Conference/Paper1242/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1242/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1242/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Both authors and reviewers agree that the ideas in the paper were not presented clearly enough.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"needs better presentation\"}", "{\"title\": \"Response to reviewers\", \"comment\": \"We thank the reviewers for their honest feedback. We agree that the ideas were not presented clearly and will work on improving this.\"}", "{\"title\": \"see below\", \"review\": \"The paper discusses on a rate distortion interpretation of adversarial examples by building the equivalence of DNN and a noisy channel. The proposed topic is very interesting. However, it is quite disappointing after reading the paper, that it does not deliver. In a sense, the reader has an impression that the paper is a collection of fractions of small thoughts and empirical observation pieces that are yet to be stringed up coherently.\\n*To start with, the contributions are not clear. The major equations (1-4) are all pre-existing. The main Figure (fig.1) is also not new. Sec.3 is on implications, while it is more a discussion section centered on existing works about capacity and adversarial examples. Although it is claimed in the beginning of the paper 3 theoretical and empirical contributions, they are not clearly presented in the follow-up text.\\n*Empirical evaluation, in Fig.2, a legend should be in place to introduce the colored curves. Currently it is unclear what it is for each of the curves. \\n*Fig.3: it is unclear why the MI plots are of piecewise straight lines. Does it imply that the two MIs are linearly related?\\n*Table 1&2: Not clear how this observation has to do with the RD theory.\\n\\nSeems no response from the authors.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"While the experiment of fault tolerance is interesting, the obtained implications are somewhat trivial.\", \"review\": \"This paper considers the trade-off between the prediction accuracy of deep neural networks (DNNs) and sensitivity to adversarial examples.\\nReviewing the (Gaussian) channel capacity and rate-distortion theory, i.e., the information bottleneck, the authors discuss their implications on the generalization performance of DNNs. The experiments demonstrate the SNR of gradients, information plane, the generalization gap, and fault tolerance against adversarial examples.\\n\\nWhile the interpretations of DNN learning by the information theoretic concepts are interesting, most of them are already known results, and hence provide little novel theoretical knowledge.\\n\\nThe discussions in Section 3 are superficial. 
It is not clear how they are related to the main arguments of this paper.\\n\\nWhile the experiment on fault tolerance is interesting, the implications obtained from the experiments are somewhat trivial.\", \"minor_comments\": \"p.2, l.15: h, w, and c are undefined.\", \"section_2\": \"Rate-distortion theory is usually explained by the sphere covering argument instead of sphere packing.\\nSection 4.3.1: It is not explained what zero- and one-shot transfer learning are.\", \"pros\": \"The experiment on fault tolerance is interesting.\", \"cons\": \"The theoretical parts are basic results of information theory.\\nThe implications of the experiments are somewhat trivial.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"In my opinion the paper is so poorly written that it makes it difficult for me to judge it\", \"review\": \"This paper tries to draw connections between rate distortion theory and DNNs and use some intuitions from that domain to draw conclusions about robustness and generalization of the DNNs.\\n\\nThe paper is mostly written in a storytelling narrative with very little rigor. In my opinion, this lack of rigor is problematic for a conference paper that has to be concise and rigorous. Moreover, the story is not told in a cohesive way. In most parts of the paper, there is not much relationship between the consecutive paragraphs. And even within most of the paragraphs, I was lost in understanding what the authors meant. I wish the paper would have been self-contained and made concrete definitions and statements instead of very high-level ideas that are difficult to judge. In the current state, it is very difficult for me to say what exactly is the contribution of the paper in terms of the story, other than some loosely related high-level ideas. I feel like most parts of the story that the authors are telling are already told by many other papers in other forms (papers that the authors have cited and many other ones).\", \"rating\": \"2: Strong rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
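For reference, the reviews above lean on two textbook quantities: the capacity of an additive white Gaussian noise channel with signal power P and noise power N, and the rate-distortion function of a Gaussian source with variance \sigma^2 under squared-error distortion D (the latter is the result usually derived via the sphere-covering argument that AnonReviewer1 mentions):

\[
C \;=\; \tfrac{1}{2}\log_2\!\Big(1 + \tfrac{P}{N}\Big) \ \text{bits per channel use},
\qquad
R(D) \;=\; \tfrac{1}{2}\log_2\frac{\sigma^2}{D} \quad \text{for } 0 \le D \le \sigma^2,
\]

with R(D) = 0 for D \ge \sigma^2.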
rk4Wf30qKQ
Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks
[ "Sanghyun Hong", "Michael Davinroy", "Yigitcan Kaya", "Stuart Nevans Locke", "Ian Rackow", "Kevin Kulda", "Dana Dachman-Soled", "Tudor Dumitraș" ]
Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine where the victim’s deep learning (DL) system is running and passively monitors the accesses of the target functions in the shared framework. Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network by using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50), having observed only one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding the DNNs’ vulnerability to cache side-channel attacks.
[ "DNN Security Analysis", "Fingerprinting Attacks", "Cache Side-Channel" ]
https://openreview.net/pdf?id=rk4Wf30qKQ
https://openreview.net/forum?id=rk4Wf30qKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hyguw1mVl4", "HJeelb-5C7", "HylS7nAK0Q", "rJguAoCtRQ", "S1gLF5l5am", "BJx5zS5d6m", "rylt1I8ZpQ", "SygLtq7ZpQ", "B1l2z9wgTm", "rygfvh7h2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1544986464146, 1543274727649, 1543265309069, 1543265232161, 1542224510393, 1542133009594, 1541658080780, 1541646974457, 1541597715834, 1541319770176 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1241/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1241/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1241/Authors" ], [ "ICLR.cc/2019/Conference/Paper1241/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers generally had concerns that the goal of recovering only the model architecture was unmotivated (given that knowing the architecture is not a large threat on its own, and there are existing attacks that work without knowledge of the model architecture). Moreover, given the strength of the assumed attack model, recovering model architecture is a fairly unambitious goal (again, more serious attacks have already been demonstrated under weaker attack models). Finally, though less seriously, the analysis is fairly preliminary, e.g. it is unclear if the attack can generalize to nearby architectures that were outside the training set.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"results too weak given strength of attacker\"}", "{\"title\": \"Paper Revision\", \"comment\": \"We thank our reviewers for taking the time to read, evaluate our work, and provide constructive feedback. We have uploaded a revised version of our paper, with edits to address the concerns raised. We summarize our updates below:\\n\\n1. We update the content that made our contributions confusing in Sec. 1.\\n (e.g., black-box attacks -> attacks on black-box models, etc.)\\n\\n2. We further discuss the concurrent work (Yan et al.) in Sec. 2.\\n\\n3. We provide an example attack scenario in Sec. 3.1.\\n\\n4. We provide limitations to our approach and hypothesize the effect of these results on future work in Sec. 3.2.\\n\\n5. We provide an analysis of the reconstructions errors observed in Table 3 in Sec. 3.5.\\n\\n6. We provide some more discussion about what our attack can do in the transfer learning setting after fingerprinting the victim network in Sec. 4.\\n\\n7. We update Table 5. to show the individual attribute errors in Sec. 5.1\\n\\n8. We emphasize that our defender can adaptively choose the attributes to obfuscate by decoy processes in Sec. 5.1.\\n\\n9. We fix typos and improve writing throughout the entire paper.\\n\\nPlease see our replies to each reviewer for our detailed responses to individual points.\"}", "{\"title\": \"Clarification of our threat model and state our contributions out clearly (2/2)\", \"comment\": \"(3) About minor comments.\\n\\n(3) - 1) The comments about the introduction: Yes, we agree with the reviewer that all white-box attacks cannot be defended by gradient masking. 
Hence, we will fix this claim in the revised version. What we will explain instead is that complete white-box assumptions are not practical, and that attackers can save time and effort when the black-box assumptions are relaxed through side-channels.\\n\\n(3) - 2) Our attacker keeps monitoring the time to access the lines of code (or instructions) identified in the shared deep learning framework. If a victim uses specific functions while processing data with a model, the lines in those functions are loaded into the instruction cache, so the access time observed by our attacker will be short. On the other hand, if the code is not in the instruction cache, the access time to the same code will be much longer. From this difference, the attacker identifies which functions are used. We utilize the Flush+Reload attack implemented in the Mastik toolkit [https://cs.adelaide.edu.au/~yval/Mastik/]. Also, we extracted the lines of code from TensorFlow v1.9.0-rc0.\\n\\n(3) - 3) With the decoy processes, our defenses are difficult to defeat, even by an advanced attacker. This is because the attacker cannot attribute the observed information to separate processes, and therefore to separate models. Once a DL framework function is loaded into the instruction cache, the instruction is cached in the same manner for both the victim and decoy processes. This prevents the attacker from observing which accesses come from which process, and therefore mitigates, and possibly eliminates, any useful information an attacker may hope to observe. Also, even if an attacker finds a method to attribute the information to separate processes, a defender can dynamically and adaptively inject different obfuscating data into the attacker\\u2019s observations by running a different model. In the oblivious model computations, we can randomize the computation orders of unraveled paths while each query is being processed.\\n\\nThus, we emphasize that these defenses do not represent \\u201csecurity through obscurity\\u201d. Because cache side-channels only provide to an attacker coarse-grained timing information from all other processes colocated on the machine, adding a significant amount of noise by means of another process and model architecture is sufficient to make extracting meaningful data incredibly difficult\\u2014even when the adversary knows all the details of the defense (including our implementation).\\n\\n[1] Oh, Seong Joon, et al. \\\"Towards reverse-engineering black-box neural networks.\\\" ICLR. 2018.\\n[2] Tramer, Florian, and Dan Boneh. \\\"Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware.\\\" arXiv preprint arXiv:1806.03287 (2018).\\n[3] Ilyas, Andrew, et al. \\\"Black-box Adversarial Attacks with Limited Queries and Information.\\\" arXiv preprint arXiv:1804.08598 (2018).\\n[4] Hua, Weizhe, Zhiru Zhang, and G. Edward Suh. \\\"Reverse engineering convolutional neural networks through side-channel information leaks.\\\" Proceedings of the 55th Annual Design Automation Conference. ACM, 2018.\\n[5] Yan, Mengjia, Christopher Fletcher, and Josep Torrellas. \\\"Cache telepathy: Leveraging shared resource attacks to learn DNN architectures.\\\" arXiv preprint arXiv:1808.04761 (2018).\\n[6] Bolun Wang, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. With great training comes great vulnerability: Practical attacks against transfer learning. 
USENIX Security, 2018\"}", "{\"title\": \"Clear our contributions over other works (2/2)\", \"comment\": \"(continued: since we are only allowed 5000 chars in each comment.)\\n\\n(2) Comparison to the concurrent work (Yan et al., 2018).\\n\\nThe attack proposed in [6], which reconstructs a DNN architecture by monitoring matrix multiplications through cache side-channels, has several limitations. First, this attack is highly dependent on the implementation of General Matrix Multiplication (GeMM). There are various libraries, such as OpenBLAS, Intel MKL, Atlas, or NVBLAS, that implement GeMM differently; thus, the attacker in [6] needs to spend a lot of time reverse engineering the framework they are attacking (Sec 4, 5, 6 in [6]). Also, the GeMM-based reverse engineering can only reveal convolutional or fully connected layers, because others, such as activations and pooling layers, are difficult to characterize by matrix computations. They realize and state this limitation in their paper (Sec 4.4), and probe the addresses of functions that implement these layers, e.g., relu, softmax, or tanh. Our attack takes this simpler approach and attacks control flow functions in the core of machine learning frameworks, suggesting that our approach is more general and applies in more settings. Moreover, the attack in [6] assumes the victim is using the OpenBLAS library, a library built for CPUs. We stress that this approach of attacking code in the CPU implementation of GeMM does not generalize to attacks on a GPU. To monitor GeMM computations running on a GPU, the GPU's cache would need to be shared between the colocated attacker process and the victim's process. This means an attack against a victim's network model on a GPU using this approach would be ineffective. On the other hand, since we attack the general functions, which are implemented and run on the CPU regardless of where the matrix multiplication takes place, we make a stronger claim (Sec. 3.3) that DeepRecon is independent of hardware --- i.e., we can target a DNN model running on CPUs or GPUs.\\n\\nAdditionally, DeepRecon does not assume knowledge of the victim's network family, unlike the attack proposed in [6]. The attack in [6] is only able to reduce the search space within a network family, e.g., in their experiments (Sec. 7), they find several candidate network architectures within one specific network family (i.e., VGG or ResNet). In contrast, our attacker can reduce the search space dramatically by using leaked information to classify an arbitrary victim's network into a network family. We can then use this information to fully reconstruct the network architecture (Sec 3.4-3.5). We also show that DeepRecon can be applied in transfer learning cases to identify pre-trained networks inside a victim's DNN by identifying the parent model and the layer at which the backpropagation is frozen (Sec 3.3).\\n\\nAn even more important distinction from [6] is that we implement and evaluate two defenses against fingerprinting attacks that exploit cache side channels. As our defenses take advantage of standard DNN computations and do not assume a particular attack implementation, they should be effective against the attacks proposed in [6].\\n\\nFinally, we note that [6] represents concurrent and unpublished work. The paper does not appear to have been published yet in a conference or journal, and it was uploaded to ArXiv on Aug. 14, 2018 (44 days before the ICLR'19 deadline). 
As our research was conducted primarily in June and July 2018, it is not a derivative of [6] but rather an independent and concurrent project.\\n\\n\\n[1] Suciu, Octavian, et al. \\\"When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks.\\\" arXiv preprint arXiv:1803.06975 (2018).\\n[2] Tram\\u00e8r, Florian, et al. \\\"Stealing Machine Learning Models via Prediction APIs.\\\" USENIX Security Symposium. 2016.\\n[3] Papernot, Nicolas, et al. \\\"Practical black-box attacks against machine learning.\\\" Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 2017.\\n[4] Oh, Seong Joon, et al. \\\"Towards reverse-engineering black-box neural networks.\\\" ICLR. 2018.\\n[5] Hua, Weizhe, Zhiru Zhang, and G. Edward Suh. \\\"Reverse engineering convolutional neural networks through side-channel information leaks.\\\" Proceedings of the 55th Annual Design Automation Conference. ACM, 2018.\\n[6] Yan, Mengjia, Christopher Fletcher, and Josep Torrellas. \\\"Cache telepathy: Leveraging shared resource attacks to learn DNN architectures.\\\" arXiv preprint arXiv:1808.04761 (2018).\\n[7] Bolun Wang, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. With great training comes great vulnerability: Practical attacks against transfer learning. USENIX Security, 2018\"}", "{\"title\": \"Clarification of our threat model and state our contributions out clearly (1/2)\", \"comment\": \"We thank the reviewer for the constructive feedback. We will update the paper to convey our contributions clearly. Here, we clarify our threat model and state our key contributions over other works.\\n\\n(1) Clarification of our threat model.\\n\\nThe two initial concerns are \\u201cthe arch. information that we extracted does not seem useful\\u201d and \\u201cthe network architecture details are often not considered private or secret.\\u201d However, this is often not the case. First, our attack enables a continuous threat, since the extracted information helps our attacker easily launch various attacks, such as black-box evasion attacks [3], model extractions, or adversarial attacks in transfer learning scenarios [6], with minimal overhead. For instance, to cause multiple evasions, the black-box attack in [3] needs to repeat the attack for multiple targets, since it only approximates the gradients around a single target instance. On the other hand, our attack allows the attacker to estimate a surrogate model and lets her easily synthesize multiple evasion samples.\\n\\nWe also address the reviewer\\u2019s concern that \\u201cour attacker requires query accesses to the victim model anyway after reconstructing network architectures of victims.\\u201d However, this is not necessarily true. Suppose that our attacker has a similar dataset or a part of the victim\\u2019s training dataset --- this assumption is reasonable because many datasets for a common task such as face recognition are openly and freely available online. Then, the attacker separately trains the reconstructed network offline with her own data and uses it as a surrogate model, which does not require query accesses to the model after the reconstruction.\\n\\nFor the reasons above, prior work has considered various attackers who aim to extract the network architecture details [1, 4, 5] as well as defenses that keep this information secret, as shown in paper [2] mentioned by the reviewer (i.e., \\u201cModel privacy w.r.t. 
The server and client\\u201d defined in Sec 2.1). These contributions only make sense in a setting where the architecture is secret.\\n\\nThe last issue raised is that both the co-location and query access assumptions make our adversary stronger than the black-box attacker who only uses query accesses. However, our attacker does not actively query a victim model, which is a non-trivial difference from the black-box attacker. Our attacker passively monitors the model\\u2019s computations and utilizes the information leakages. Suppose an attacker aims to install malware on a machine where an anti-virus system, based on a DNN model, is running. In the black-box attacks, the attacker actively drops files to monitor the model\\u2019s decisions. On the other hand, our attacker has the victim install a Chrome add-on that monitors cache behaviors to extract the arch. information of the model. In this case, our attacker is less likely to be detected. Thus, we claim that, compared to the black-box attackers, there are cases in which our attack does not assume a stronger adversary. We therefore conclude that the settings in which these attacks might be useful to an attacker are often separate and distinct.\\n\\n(2) Comparison with trusted hardware cases.\\n\\nYes, (cache) side-channel attacks against DNNs running on trusted hardware can be an interesting direction, since concurrent work has proposed hiding the computation details of a DNN with this type of hardware. However, our work delivers non-trivial contributions separate from [2]. We note that the kind of information that can be extracted from DNN models through cache side-channel attacks is an open question. Thus, our work provides an answer: an attacker can extract, via side-channels in current deep learning setups, the same level of information that prior work [1, 4, 5, 6] considers important for performing further sophisticated attacks, without queries or extensive computations. Also, cache side-channel and timing side-channel attacks often only work when the victim\\u2019s computations are data-dependent and are performed on specific hardware (e.g., on a victim running AES128 encryptions on a CPU), whereas we show that cache attacks on DNNs can extract the targeted information regardless of what data is being processed and on what hardware the processing is performed (e.g., on any prediction input and on CPUs or GPUs).\"}", "{\"title\": \"More clarifications\", \"comment\": \"We thank the reviewer for the constructive feedback: the questions and comments can improve and make our contributions more concrete. We will update our paper accordingly to include these points. In the meantime, we would like to provide initial answers to the reviewer\\u2019s questions:\\n\\n(1) Could our classifier predict the family of a model correctly (ex. ResNet32) not in the training data?\\n\\nNo, our classifier could not predict this, because it cannot learn how to classify unobserved samples into the families that have similar features (or attributes). Suppose that the samples from ResNet18 or 34 are not in our training set. Since the architecture attributes of ResNet18 or 34 are similar to those of VGG16/19 or MobileNetV1/2, our classifier may predict the unseen samples as some other close family (VGGs or MobileNets). 
However, we are sure that if we include ResNet18 or 34 in our training set, our classifier will learn to identify them as ResNets.\\n\\nThe key contribution of our (fingerprinting) experiment is to examine which of the architecture attributes that our attacker can extract are essential to identify network families. We identified that four common attributes (#relus, #merges, #convs, and #poolings) are important for knowing the family of a victim\\u2019s network. This information can help our attacker launch large-scale attacks in the transfer learning scenario, because our attacker already knows multiple commonly used pre-trained models and architectures that she can train her classifier on. Then, by passively observing the information leakage from cache side-channels, the attacker can identify which actual pre-trained model the victim uses and synthesize adversarial samples with the pre-trained model that also work against the victim model (as prior work [1] warned).\\n\\n(2) Reconstruction errors observable in Table 3.\\n\\nIn our experiments, we could not find specific error patterns in the extracted attribute sequences. As we can see in Table 3, there are cases where convolutional layers are missing and/or added and activations are missing and/or added. Also, the locations of the missing attributes differ in each run. We attribute these errors to a few primary causes: there is background noise from other processes that our Flush+Reload cache-based side-channel attack may pick up (e.g., other background processes pull something into the cache and evict our target functions between when the victim calls the function and when we reload it), or we may experience common errors associated with Flush+Reload (e.g., a victim may call the function during the time when we reload, causing us to see a cache miss instead of correctly observing a cache hit) [2].\\n\\n(3) Comparison of avg. errors in Table 5 (running a decoy process as a defense).\\n\\nYes, in Table 5, our experiments indicate that the errors are larger than the sum of the original attribute values (that we can expect from ResNet50). In our experiments in Table 5, we increase the errors associated with the attributes that we aim to obfuscate. For instance, when we run the TinyNet with only one convolutional layer, we observe that the #conv attribute is significantly increased. This result is important because, with our defenses, a defender can choose the attributes to obfuscate. By introducing noise into the cache side channel by means of another process, we can make differentiating between functions called by our victim and by our decoy incredibly difficult, and therefore mitigate, and possibly eliminate, any useful information that an attacker can gain from these side channels. Since the defender has control over what noise gets introduced, they can also dynamically and adaptively change what noise is added into the attacker\\u2019s observations, thereby increasing our defenses\\u2019 effectiveness and generalizability.\\n\\n(4) Emphasizing our contributions over the concurrent work (Yan et al., 2018).\\n\\nOur key contributions over the concurrent work (Yan et al., 2018) are highlighted in the initial response to the reviewer\\u2019s comments below [comment 1: https://openreview.net/forum?id=rk4Wf30qKQ&noteId=B1l2z9wgTm / comment 2: https://openreview.net/forum?id=rk4Wf30qKQ&noteId=Sye8V5wx67]. 
We plan to include the comparison in our related work section.\\n\\n(5) Fixing typos in our paper.\\n\\nWe are working on revising our paper based on the reviewers\\u2019 feedback. We will include those fixes in the revised version.\\n\\n[1] Bolun Wang, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. With great training comes great vulnerability: Practical attacks against transfer learning. USENIX Security, 2018\\n[2] Yarom, Yuval, and Katrina Falkner. \\\"FLUSH+ RELOAD: A High Resolution, Low Noise, L3 Cache Side-Channel Attack.\\\" USENIX Security Symposium. Vol. 1. 2014.\"}", "{\"title\": \"Unclear threat model with a very strong adversary that obtains information of moderate significance.\", \"review\": \"This paper considers the problem of fingerprinting neural network architectures using cache side channels. In the considered threat model, the attacker runs a process co-located with the victim's, and uses standard FLUSH+RELOAD attacks to infer high-level architectural information such as the number and types of layers of the victim's ML model. The paper concludes with the discussion of some \\\"security-through-obscurity\\\" defenses.\\n\\nI don't quite understand the threat model considered in this paper. The main motivating factor given by the authors for uncovering model architecture details is for facilitating black-box attacks against ML models (e.g., for adversarial examples or membership inference). \\nYet, in the case of adversarial examples for instance, knowledge of the architecture is often considered a given as keeping it secret has very little influence on attacks. There are black-box attacks that require no knowledge of the architecture and only a few queries (e.g., Black-box Adversarial Attacks with Limited Queries and Information, Ilyas et al., ICML'18). \\nSo overall, learning such coarse-grained features about a model just doesn't seem particularly useful, especially since architecture-level details are often not considered private or secret to begin with.\\n\\nAfter architectural details have been extracted, the end-goal attacks on ML models considered by the authors (e.g., model stealing, adversarial examples, etc.) require query access anyways. Thus, additionally assuming co-location between the adversary and the victim's model seems to unnecessarily strengthen the attacker model.\\n\\nMaybe the most interesting scenario to consider for cache side-channels in ML is when ML models are run on trusted hardware (e.g., Oblivious Multi-Party Machine Learning on Trusted Processors, Ohrimenko et al.; or this work also submitted to ICLR: https://openreview.net/forum?id=rJVorjCcKQ).\\nCache side channels are much more relevant to that threat model (i.e., ML code running in a trusted hardware enclave hosted by a malicious party). And indeed, there have been many cache side-channel attack papers against trusted hardware such as Intel's SGX (e.g., Software Grand Exposure: SGX Cache Attacks Are Practical, Brasser et al.)\\n\\nBut given what we know about the strength of these cache side channel attacks, one would expect to be able to extract much more interesting information about a target model, such as its weights, inputs or outputs. 
In the above trusted hardware scenario, solely extracting architecture-level information would also not be considered a very strong attack, especially since coarse-grained information (e.g., a rough bound on the number of layers), can be trivially obtained via timing side channels.\", \"minor_comments\": [\"In the introduction, you say that white-box attacks for adversarial examples are rendered ineffective by gradient masking. This isn't true in general. Only \\\"weak\\\" white-box attacks can be rendered ineffective this way. So far, there are no examples of models that resist white-box attacks yet are vulnerable to black-box attacks.\", \"What exactly causes the cache-level differences you observe? Can you give some code examples in the paper that showcase what happens? Are the TensorFlow code lines listed in Table 1 from a specific commit or release?\", \"The defenses discussed in Section 5 are all forms of \\\"security through obscurity\\\" that seem easily defeated by a determined attacker that adapts its attack (and maybe uses a few additional observations).\", \"--REVISION--\", \"I thank the authors for their rebuttal and clarifications on the threat model and end goals of their attacks. I remain somewhat unconvinced by the usefulness of extracting architectural information. For most of the listed attacks (e.g., building substitute models for adversarial examples, or simply for model extraction) it is not clear from prior work that knowledge of the architecture is really necessary, although it is of course always helpful to have this knowledge. As I mentioned in my review, with current (undefended) ML libraries, it should be possible to extract much more information (e.g., layer weights) using cache side channels.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple yet effective attacks to infer model architectures; more clarification would help\", \"review\": \"This paper performs cache side-channel attacks to extract attributes of a victim model, and infer its architecture accordingly. In their threat model, the attacker could launch a co-located process on the same host machine, and use the same DL framework as the victim model. Their evaluation shows that: (1) their attacks can extract the model attributes pretty well, including the number of different types of layers; (2) using these attributes, they train a decision tree classifier among 13 CNN architectures, and show that they can achieve a nearly perfect classification accuracy. They also evaluate some defense strategies against their attacks.\\n\\nModel extraction attack under a black-box setting is an important topic, and I am convinced that their threat model is a good step towards real-world attacks. As for the novelty, although Yan et al. also evaluate cache side-channel attacks, that paper was released pretty shortly before ICLR deadline, thus I would consider this work as an independent contribution at its submission.\", \"i_have_several_questions_and_comments_about_this_paper\": [\"One difference of the evaluation setup between this paper and Yan et al. is that in Yan et al., they are trying to infer more detailed hyper-parameters of the architecture (e.g., the number of neurons, the dimensions of each layer, the connections), but within a family of architectures (i.e., VGG or ResNet). 
On the other hand, in this paper, the authors extract higher-level attributes such as the number of different layers and activation functions, and predict the model family (from 5 options) or the concrete model architecture (from 13 options). While I think inferring the model family type is also an interesting problem, this setup is still a little contrived. Would the classifier predict the family of a model correctly if it is not included in the training set, say, could it predict ResNet32 as R (ResNet)?\", \"In Table 3, it looks like the errors in the captured computation sequences show some patterns. Are these error types consistent across different runs? Could you provide some explanation of these errors?\", \"In Table 5, my understanding is that we need to compare the avg errors to the numbers in Table 2. In this case, the errors seem to be even larger than the sum of the attribute values. Is this observation correct? If so, could you discuss what attributes are most wrongly captured, and show some examples?\", \"It would be beneficial to provide a more detailed comparison between this work and Yan et al., e.g., whether the technique proposed in this work could be also extended to infer more fine-grained attributes of a model, and go beyond a classification among a pre-defined set of architectures.\", \"The paper needs some editing to fix some typos. For example, in Table 5, the captions of Time (Baseline) and Time (+TinyNet) should be changed, and it looks confusing at the first glance.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clear our contributions over other works (1/2)\", \"comment\": \"We thank the reviewer for the constructive feedback. We will update the paper accordingly. Additionally, we clarify here the significance of DNN fingerprinting attacks and the relation to the concurrent work of (Yan et al., 2018).\\n\\n(1) The threat of DNN fingerprinting attacks and the significance of our results.\\n\\nPrior work on black-box attacks [1, 2, 3] against neural networks assumes an adversary who has knowledge of the victim's network architecture. This is an impractical assumption, and thus, releasing this assumption is the last-mile problem: if an attacker can easily know the architecture of a victim network, this will enable most black-box attacks on DNNs. For instance, without this knowledge, the success of black-box adversarial sample crafting can decrease dramatically, as illustrated in [7]: in attacks against transfer learning services, where the attacker has partial knowledge about the victim network's architecture, having lesser knowledge can decrease the attack's success rate from 88.4% (only the last 3-4/16 layers are unknown) to 1.2% (the last 6/16 layers are unknown). Additionally, DNNs are often proprietary and represent the key intellectual property, and thus their architectures are hidden from attackers. The reconstruction of DNN attributes is also the topic of [4], published at ICLR'18, where the open reviews deemed the problem setting novel and interesting.\\n\\nWhat is more, this simple cache side-channel attack is more effective than other network reconstruction attacks proposed in prior work [4, 5]. These approaches are either time intensive (i.e., 40 GPU days for the technique proposed in [4]) or monitor computations while an attacker actively queries a victim model. 
With our DeepRecon attack, we demonstrate that high-level architectural information --- that prior work aims to extract --- can be easily leaked through our side-channel attacks with little computation and passive monitoring (Sec. 3.2-3.4). This allows an attacker to reconstruct the full network architecture of an arbitrary network (Sec. 3.5) without specifying or assuming knowledge of a network family.\\n\\nMoreover, our results go beyond proposing and analyzing a fingerprinting attack. We propose a statistical model for fingerprinting to quantify the importance of each piece of leaked information to the attacker's success (Sec. 4). We also propose simple and effective defenses that obfuscate the observations made through cache side-channels, which can be implemented without specific hardware or operating system support (Sec 5).\\n\\nTo the best of our knowledge, this represents the first comprehensive assessment of the vulnerability of DNNs to cache side-channel attacks. We hope that our results will stimulate follow-on work on defending ML systems against such attacks.\"}", "{\"title\": \"Unclear whether this paper surpasses prior research\", \"review\": \"The paper describes a cache side-channel attack on a deep learning model. In a cache side-channel attack, the attacker sets up a process on the same machine where the victim process (that is running the training or evaluation job for the DNN model) is running. It is assumed that the victim process uses the same shared library for DNN computations as the attacking process. The attacking process flushes the cache, then observes access times for key functions. The paper shows that, based on the speed of accessing previously flushed functions, the attacker can discover the high-level network architecture, namely the types of layers and their sequence. The paper shows that, by spying on such cache access patterns in the Tensorflow library, this method can reliably extract the above high-level information for 11 different network architectures. It also describes a few counterattack alternatives whereby the victim can obfuscate its cache access patterns for self-protection.\\n\\nThe significance of the results is not clear to me. The extracted information is very high level. What realistic attacks can be constructed from such a coarse-grained fingerprinting? The experimental results show that the fingerprint can be used to map the architecture to one of the 13 well-known architectures (VGG16, ResNet, DenseNet, Inception, etc.). But so what? What does the victim lose by revealing that it's using one of a few very well-known types of DNNs (the ones tested in this paper)? There may very well be a good reason why this is very dangerous, but that is not explained in the paper. Not being familiar with this line of research and its significance, I looked up several of the related papers (Suciu et al., 2018, Tramer et al., 2017, Papernot et al., 2017, Yan et al., 2018). None of them could explain why this particular type of fingerprinting is dangerous.\\n\\nOf the cited previous work, Yan et al., 2018 seems to present the most closely related approach. The method described in that paper is very similar: a cache side-channel attack on a shared library through a co-located attacker process. They monitor at a finer grain -- Generalized Matrix Multiplications -- and are thus able to infer more details such as the size of the layers. 
This also makes the inference problem harder -- they were able to narrow down the search space of networks from >4x10^35 to 16 (on VGG16). On the surface, the results presented in this paper seem stronger. But they are actually solving a much easier problem -- their search space is one of 13 well-known networks. To me, Yan et al.'s approach is a much more powerful and promising setup.\\n\\nOverall, while the paper is clearly written and presents the idea succinctly, it is derivative of previous research, and the results are not stronger. I'm not an expert in this area, so it's possible that I missed something. Based on my current understanding, however, I recommend reject.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
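The attack pipeline debated in the notes above has two stages: a cache probe (e.g., Flush+Reload on shared framework functions) yields a temporal trace of layer-type invocations, and a simple classifier maps trace-derived attribute counts to one of a set of known architectures. Below is a minimal, hypothetical sketch of the second (classification) stage only; the probed function names, the toy traces, and the label set are illustrative stand-ins, not the paper's actual probe targets or data.

```python
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

# Hypothetical set of framework functions an attacker might probe.
ATTRIBUTES = ["conv2d", "relu", "maxpool", "dense"]

def trace_to_features(trace):
    """Collapse an observed call trace into per-attribute counts."""
    counts = Counter(trace)
    return [counts[a] for a in ATTRIBUTES]

# Toy traces standing in for real side-channel observations.
training_traces = [
    (["conv2d", "relu"] * 13 + ["maxpool"] * 5 + ["dense"] * 3, "VGG16"),
    (["conv2d", "relu"] * 53 + ["maxpool"] * 1 + ["dense"] * 1, "ResNet"),
]
X = [trace_to_features(t) for t, _ in training_traces]
y = [label for _, label in training_traces]

clf = DecisionTreeClassifier().fit(X, y)

victim_trace = ["conv2d", "relu"] * 13 + ["maxpool"] * 5 + ["dense"] * 3
print(clf.predict([trace_to_features(victim_trace)]))  # -> ['VGG16']
```

The point of the sketch is only that coarse attribute counts, once extracted, reduce fingerprinting to a small, easy classification problem, which is the crux of the disagreement between the reviews above.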
HyN-M2Rctm
Mode Normalization
[ "Lucas Deecke", "Iain Murray", "Hakan Bilen" ]
Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets.
[ "Deep Learning", "Expert Models", "Normalization", "Computer Vision" ]
https://openreview.net/pdf?id=HyN-M2Rctm
https://openreview.net/forum?id=HyN-M2Rctm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyeLDcN8xN", "B1gol3FKAX", "BklkkqFYA7", "ByeV2TieRX", "ryghy-BgRX", "BJgbLwQjam", "B1x5phA1T7", "SkeHwZ-q2m", "Hygq9xlc27", "SyeC8uLEnm", "HJx9Mg3gh7", "H1eOYkPQ9m", "ryxCU3AxcQ", "SJeTPa9lqX", "B1gx9Ady5X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1545124446300, 1543244787044, 1543244246975, 1542663595904, 1542635748087, 1542301512774, 1541561537940, 1541177692671, 1541173394005, 1540806742325, 1540567058073, 1538645888226, 1538481238475, 1538465125450, 1538391688414 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1240/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "ICLR.cc/2019/Conference/Paper1240/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "ICLR.cc/2019/Conference/Paper1240/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1240/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1240/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "~Kun_Yuan1" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1240/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper develops an original extension/generalization of standard batchnorm (and group norm) by employing a mixture-of-experts to separate incoming data into several modes and separately normalizing each mode. The paper is well written and technically correct, and the method yields consistent accuracy improvements over basic batchnorm on standard image classification tasks and models.\", \"reviewers_and_ac_noted_the_following_potential_weaknesses\": \"a) while large on artificially mixed data, improvements are relatively small on single standard datasets (<1% on CIFAR10 and CIFAR100) b) the paper could better motivate why multi-modality is important e.g. by showing histograms of node activations c) the important interplay between number of modes and batch size should be more thoroughly discussed\\nd) the closely related approach of Kalayeh & Shah 2018 should be presented and contrasted with in more details in the paper. Also comparing to it in experiments would enrich the work.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Original generalization of batchnorm that yields small accuracy improvement.\"}", "{\"title\": \"Re: Response to your rebuttal\", \"comment\": \"(a.) Thanks for your reply and for the acknowledgement of significant differences between our paper and that of Kalayeh & Shah. Since there is no software available from the authors, and given the non-standard optimization technique and extensive hyperparameter tuning required to set it up, we leave a comparison as future work.\\n\\n(b.) Our focus is not to address all the weaknesses of batch normalization, but specifically to increase its robustness against multi-modality. Note however that we show that our model can be incorporated into group norm, which aims to address this issue. 
So in this sense we show that accounting for modality \u2013 as in mode group norm (MGN) \u2013 can increase robustness in a small batch size setting as well.\\n\\nRegarding (c.): Using an oracle to split batches via their original dataset is certainly possible, and results for this particular approach have previously been reported by Rebuffi et al. (2017). Since this approach does not make sense for the majority of our experiments (single task, where D=1), we excluded it from our evaluation. Using an oracle boosts the performance of LeNet by around 1-2%, but please note that this assumes both train and test time domain knowledge and cannot be used in single domain classification tasks.\\n\\n(d.) Our experiments involve tuning the learning rate schedule as well as the single additional hyperparameter of our method, K. For the former, we followed He et al. (2015) (p. 776 left of the CVPR version of their paper) in all experiments. For validating the latter, we randomly sampled 20% of the training set as validation and found K=2 to be a good compromise. After fixing K=2, we train our models on train+validation sets and report the result on test splits.\\n\\nAs requested, we ran additional experiments on deeper networks, both on CIFAR10 and CIFAR100. For this, we implemented ResNet56 (which is more widely used for CIFAR tasks than ResNet50, see e.g. https://dawn.cs.stanford.edu/benchmark/CIFAR10/inference.html). Note that we used the exact same optimization setup as with ResNet20 in these experiments.\\n\\nOn CIFAR10, ResNet56 with BN resulted in a test error of 6.87% (slightly better than the original result of 6.97% reported in He et al. (2015)). Replacing all normalization layers with MN achieves a test error of 6.47%, improving over BN by roughly 0.4%. Similarly, on CIFAR100, MN with ResNet56 obtains a test error of 28.69% versus 29.70% for BN, thus improving by roughly 1% over BN.\"}
its estimators and the parameters of its transformations) receive equivalently informed gradients in both trials, the gradients for the convolutional layers differ, and in all likelihood the larger batch size of N=256 overdamps the gradient information for these layers. This overdamping issue is persistent even when doubling the number of training epochs.\"}", "{\"title\": \"Response to your rebuttal\", \"comment\": \"(a.) First, I do apologize for letting you think I was accusing you of plagiarism. This is a serious offense, and by no means did I imply such a thing. While reviewing your paper and looking up the recent literature about Batch Normalization, I quickly came across the paper by Kalayeh & Shah, and I was surprised you didn\u2019t mention it in your paper. I simply thought you had been scooped. I also apologize for not having taken a closer look (which I did now) at this paper.\\n\\nThat said, I thank you for your detailed comment on the difference between both papers. As you mentioned, such a comparison should figure in your literature review, since both methods are designed to provide multi-modality to BN. The key difference is indeed how it is implemented: they use an outside-of-the-loop GMM, while you use an attention mechanism. Your method is certainly easier to implement and use in modern deep learning frameworks than the GMM approach. A comparison with the GMM approach would still have been nice, or some histogram plots showing the means and variances of different modes.\\n\\n(b.) My point was that MN suffers even more than BN from the small size regime (note that this could also be a positive effect, as it could introduce stronger regularization). In Table 2, we can see that BN drops 3% error rate when going from 16 to 4 examples per mini-batch, whereas MN drops 4%. Also, this experiment is heavily multimodal in the first place (and thus one can expect BN to perform poorly, and this is the reason why I proposed (c.) for a fairer comparison). The gap in performances between MN and BN on CIFAR and ImageNet gets smaller and smaller, as the effective mini-batch size gets smaller.\\n\\nAlso by my comment that your paper \\\"try to address BN\u2019s weakness, which is an important direction in deep learning\\\", I meant that your paper is going beyond uni-modal normalization, not that it is designed to solve the small size issue of BN.\\n\\n(c.) Sorry if I didn\u2019t express myself clearly enough here. I was suggesting to use the information from which dataset D (MNIST, CIFAR, ...) one example comes from, and normalize it using the examples in the mini-batch that also come from dataset D. You would then obtain different statistics for different datasets. This would help to see how well your method compares against explicit separated normalization.\\n\\n(d.) I'm still interested to know if 1. you ran experiments on deeper networks (like the ResNet50) and 2. what are the validation sets you used throughout your experiments.\\n\\nI hope I left you enough time to answer again if you want to, and I will certainly increase the score of my review now that the difference between the two papers has been clearly established.\"}", "{\"title\": \"Re: Normalization method that assumes multi-modal distributions\", \"comment\": \"Many thanks for the review. Regarding 1): we consider MN to be a generalization of BN, and \u2013 see paragraph 4 on p. 5 \u2013 wanted to make sure the normalization unit can assume the standard form of BN, whenever that is optimal and yields the best performance. 
The obvious benefit of not regularizing this behavior is that MN becomes seamlessly insertable into any deep network. Regarding sparseness: note that (even at test time) assignments are usually quite pronounced, at roughly 0.95-0.99 on average.\\n\\n2): Allowing individual affine parameters only improves test performance minimally (differences are in the regime of 0.05-0.2%). In all likelihood this is because normalizing features with multiple means and standard deviations already standardizes them sufficiently.\\n\\n3): As shown in paragraph 2, p. 5, when K=1, MN reduces to standard BN. We also went ahead and implemented your suggestion to activate with a sigmoid. Unfortunately, the resulting performance was worse than that of vanilla BN.\"}", "{\"title\": \"Re: Might have already been published and pushes BN towards small mini-batches\", \"comment\": \"Three main concerns were raised: (a.) a similar publication exists, giving grounds for a clear rejection of this paper. We thank the reviewer for bringing the interesting paper by Kalayeh & Shah to our attention, but show below that this claim is unjustified. (b.) MN suffers from weaknesses that BN also suffers from in the small batch size regime, and (c.) the paper should discuss some additional related methods.\\n\\nRegarding (a.): we are thankful for having this paper pointed out to us and will include it in our revision. That being said, we strongly rebut the claim that their paper is equivalent to ours, as their approach is very different. After reading their preprint in detail, we summarize below.\\n\\nThe crucial difference is that in MN we employ a Mixture of Experts (MoE) approach and parametrize each expert with a simple attention-like mechanism on the image\\u2019s features. MN can effortlessly be added to any modern deep convolutional network, can be optimized with standard SGD, has a very small computational overhead, and introduces only a single hyperparameter (number of modes K). On the other hand, Kalayeh & Shah propose using a GMM to fit the feature distribution within the normalization unit (from hereon, we thus abbreviate MN-GMM). As it happens, we experimented with a GMM-based approach before designing MN, so we are well familiar with the several technical difficulties and impracticalities that using GMMs imposes:\\n\\n* Due to the complexity of fitting GMMs, in their experiments Kalayeh & Shah never swap out all BN layers with MN-GMM layers, see p. 7 (right). So their resulting network is a mixture of BN and (very few, usually 1) MN-GMM normalizations. We designed MN to be lightweight and easy to deploy, and in our experiments show that MN can replace the entirety of BN layers, even in a deep network.\\n* As Kalayeh & Shah explain on p. 6 (right column) they fit the GMM via EM, in a completely separate optimization step, outside the training loop of the network. In designing our method, it was important to us to sidestep this restriction, and MN can be trained end-to-end alongside the other parameters of the network.\\n* Further complicating MN-GMM is that it requires careful, manual decisions in its tuning. From our own experiments, we are well aware of the considerations one needs to ponder over in MN-GMM. A few examples: (i.) how many EM iterations are needed? (ii.) Which BN units should be replaced, which should remain intact? (iii.) How should the GMM parameters be initialized? (iv.) How many components should be assumed? In MN, the practitioner needs to make a single choice (in that K needs to be set). 
Once that choice has been made, MN can be used off-the-shelf, making it straightforward to use in an applied setting.\\n\\nIn MN-GMM, Kalayeh & Shah (2018) propose an interesting modification to BN; however, it should be clear from the above points that the similarities to our method are extremely limited. R2 states that \u201cI didn\u2019t took the time to read this paper in details\u201d, only to continue \u201cgiven the similarity with another paper already in the literature, I reject the paper\u201d. We were very surprised by the rejection based on a \u201cquick read\u201d, and \u2013 for a top-tier conference like ICLR \u2013 would have found it appropriate to read the mentioned paper and to compare it to ours in a more careful manner. Once more, we firmly reject the implication that our proposed method has been covered in their publication, or that we, in any way, copied from their work.\\n\\n(b.): splitting up batches does introduce errors from finite estimation, which is an issue that we raise ourselves on p. 6, third paragraph. As we argue in our paper, many applications exist where the batch size restriction isn\u2019t a major issue, and a larger error results from the underlying modality of the task. MN is aimed at alleviating issues in these particular tasks; we never designed it to solve the small batch size issues of BN, and at no point claim that it does.\\n\\nThat being said, even though MN splits minibatches into multiple modes by construction (thereby collecting statistics from fewer samples than BN), in practice MN still performs better than BN, even for small batch sizes. This is shown in Table 2, where MN clearly is more robust to smaller batch sizes than BN.\\n\\n(c.): FiLM learns to adaptively influence the output of a neural network by applying transformations to intermediate features conditioned on some input. FiLM\u2019ed networks still use BN, and thus FiLM does not address any shortcomings of BN, so MN can simply be used alongside FiLM. There is a weak connection to our paper in that MN can also be seen as a conditional layer, however with the completely different focus of adapting feature normalizations. We thank the reviewer for pointing out this work, and have included it in our revision.\"}", "{\"title\": \"Normalization method that assumes multi-modal distributions\", \"review\": \"The authors proposed a normalization method that learns a multi-modal distribution in the feature space. The number of modes $K$ is set as a hyper-parameter. Each sample $x_{n}$ is distributed (softly assigned) to modes by using a gating network. Each mode keeps its own running statistics.\\n\\n1) In section 3.2, it is mentioned that MN didn't need or use any regularizer to encourage sparsity in the gating network. Is MN motivated to assign each sample to multiple modes evenly or to a distinct single mode? It would be better to show how the gating network outputs sparse assignments along with a qualitative analysis.\\n\\n2) Footnote 3 showed that individual affine parameters don't improve the overall performance. How can this be interpreted? If MN is assuming a multi-modal distribution, it seems more reasonable to have individual affine parameters.\\n\\n3) The overall results show that increasing the number of modes $K$ doesn't help that much. The multi-task experiments used 4 different datasets to encourage diversity, but K=2 showed the best results. 
Did you try to use K=1 where the gating network has a sigmoid activation?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Solid paper proposing a generalisation of Batch Normalisation\", \"review\": \"The paper proposes a generalisation of Batch Normalisation (BN) under the assumption that the statistics of the unit activations over the batches and over the spatial dimensions (in case of convolutional networks) is not unimodal. The main idea is to represent the unit activation statistics as a mixture of modes and to re-parametrise by using mode specific means and variances. The \\\"posterior\\\" mixture weights for a specific unit are estimated by gating functions with additional affine parameters (followed by softmax). A second, similar variant applies to Group Normalisation, where the statistics is taken over channel groups and spatial dimensions (but not over batches).\\n\\nTo demonstrate the approach experimentally, the authors first consider an \\\"artificial\\\" task by joining data from MNIST, Fashion MNIST, CIFAR10 and SVHN and training a classifier (LeNet) for the resulting 40 classes. The achieved error rate improvement is 26.9% -> 23.1%, when comparing with standard BN. In a second experiment the authors apply their method to \\\"single\\\" classification tasks like CIFAR10, CIFAR100 and ILSVRC12 and use large networks such as VGG13 and ResNet20. The achieved improvements when comparing with standard BN are on average 1% or smaller.\\n\\nThe paper is well written and technically correct.\", \"further_comments_and_questions_to_the_authors\": [\"The relevance of the assumption and the resulting normalisation approach would need further justification. The proposed experiments seem to indicate that the node statistics in the single task case are \\\"less multi-modal\\\" as compared to the multi-task case. Otherwise we would expect comparable improvements by mode normalisation in both cases? On the other hand, it should be easy to verify the assumption of multi-modality experimentally, by collecting node statistics in the learned network (or at some specific epoch during learning). It should also be possible to give some quantitative measure for it.\", \"Please explain the parametrisation of the gating units more precisely (paragraph after formula (3)). Is the affine mapping X -> R^k a general one? Assuming that X has dimension CxHxW, this would require a considerable amount of additional parameters and thus increase the VC dimension of the network (even if its primary architecture is not changed). Would this require more training data then? I miss a discussion of this aspect.\", \"When comparing different numbers of modes (sec. 4.1, table 1), the batch size was kept constant(?). The authors explain the reduction of effectiveness of higher mode numbers as a consequence of finite estimation (decreasing number of samples per mode). 
Would it not be reasonable to increase the batch size proportionally, such that the amount of samples per mode is kept constant?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Might have already been published and pushes BN towards small mini-batches\", \"review\": \"Summary:\\nBatch Normalization (BN) suffers from 2 flaws: 1) It performs poorly when the batch size is small and 2) computing only one mean and one variance per feature might be a poor approximation for multi-modal features. To alleviate 2), this paper introduces Mode Normalization (MN), a new normalization technique based on BN. It uses a gating mechanism, similar to an attention mechanism, to project the examples in the mini-batch onto K different modes and then perform normalization on each of these modes.\", \"clarity\": \"The paper is clearly written, and the proposed normalization is well explained.\", \"novelty\": \"The proposed normalization is somewhat novel. I also found a similar paper on arXiv (submitted for review to IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018): M. M. Kalayeh, M. Shah, Training Faster by Separating Modes of Variation in Batch-normalized Models, arXiv 2018. I didn\u2019t took the time to read this paper in details, but the mixture normalization they propose seems quite close to MN. Could the authors comment on this?\", \"pros_and_cons\": [\"Clearly written and motivated\", \"Try to address BN\u2019s weakness, which is an important direction in deep learning\", \"I found a similar paper in the literature\", \"The proposed method aims to make BN perform better, but pushes it toward small batch settings, which is where BN performs poorly.\", \"Misses comparisons with other techniques (see detailed comments).\"], \"detailed_comments\": \"1. Multi-modality:\\nIt is not clear if the features are multimodal when performing classification tasks. Some histograms of a few features in the network would have helped motivate the proposed normalization. However, it seems indeed to be an issue when training GANs: to make BN work when placed in the discriminator, the real and fake examples must be normalized separately, otherwise the network doesn't train properly. Moreover, when dealing with multimodal datasets (such as the one you created by aggregating different datasets), one can use the FiLM framework (V. Dumoulin et al., Feature-wise transformations, Distill 2018), and compute different means and variances for each dataset. How would the proposed method perform against such a method?\\n2. Larger scale:\\nIt would be nice to see how MN performs on bigger networks (such as the ResNet50, or a DenseNet), and maybe a more interesting fully-connected benchmark, such as the deep autoencoder.\\n3. Small batch regime:\\nIt seems that the proposed method essentially pushes BN towards a regime of smaller mini-batch size, where it is known to perform poorly. For instance, the gain in performances on the ImageNet experiments drops quite a lot already, since the training is divided over several GPUs (and thus the effective mini-batch is already reduced quite a lot). This effect gets worse as the size of the network increases, since the effective mini-batch size gets smaller. This problem also appears when working on big segmentation tasks or videos: the mini-batch size is typically very small for those problems. 
So I fear that MN will scale poorly on bigger setups. I also think that this is the reason why you need to use extremely small K.\\n4. Validation set:\\nWhat validation sets are you using in your experiments? In section 4.1, the different datasets and their train / test splits are presented, but what about validation?\", \"conclusion\": \"Given the similarity with another paper already in the literature, I reject the paper. Also, it seems to me that the technique actually pushes BN towards a small batch regime, where it is known to perform poorly. Finally, it misses comparison with other techniques.\", \"revision\": \"After the rebuttal, I increased my rating to a 6. I feel this paper could still be improved by better motivating why multi-modality is important for single tasks (for example, by plotting histograms of activations from the network). I also think that the paper by Kalayeh & Shah should be presented in more detail in the related work, and also be compared to in the experimental setup (for example on a small network), especially because the authors say they have experience with GMMs.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Re: About the Gating Network and Algorithm 1\", \"comment\": \"Hi, thanks for your interest and your questions. We parametrize the gating functions with an affine transformation followed by a softmax, see the second paragraph on p. 5. Using an alternative in any subset of layers is certainly possible; this would need to be decided on a case-by-case basis though, as it depends on, e.g., the choice of architecture or the task at hand.\\n\\nRegarding your second question, we apply the normalization to the full image, while estimators are computed after pooling over height and width, so we follow the exact same protocol as in batch norm.\"}", "{\"comment\": \"1. Since features in different layers represent different things, is it necessary to add a gating network alongside each normalization module? And what is the structure of your gating network?\\n2. Can you provide more details about Algorithm 1? Especially $y_{nk}$ and $x_n-\\\\mu_k$, since the different shapes (n,c,h,w) and (k,c) cannot be subtracted directly.\", \"title\": \"About the Gating Network and Algorithm 1\"}", "{\"title\": \"Re: Re: Details of experiments\", \"comment\": \"Thank you for your continued interest. MN does not use any explicit label information, and (given the complexity of the datasets that we study here) is unable to uncover the underlying cluster structure, see the penultimate paragraph on p. 5. Nonetheless, in our experiments we observe that MN does allocate samples into joint modes that have similar qualities, such as color or object size, cf. Fig 2.\"}", "{\"comment\": \"Thanks for the reply. I still have a question. Are the examples normalized by the same mode in MN from the same category?\", \"title\": \"Re: Details of experiments\"}", "{\"title\": \"Re: Details of experiments\", \"comment\": \"Many thanks for your interest in our paper and your comment. Indeed, increasing the number of modes does not always increase performance, see also our third paragraph on p. 6.\\n\\nIntuitively, one would expect larger choices of K to always improve performance (at the expense of some computational cost). 
The fact that this isn\u2019t the case connects to the same issue that also makes BN vulnerable to small batch sizes: for fixed N, increasing K results in fewer and fewer samples being assigned to a joint mode. Estimators are then computed from smaller partitions, in turn making them less accurate. Besides this, a second dynamic arguably comes into play in the hierarchical nature of deep architectures. If the original network has L normalizations, then \u2013 compared to BN \u2013 we introduce L(K-1) additional normalizations in MN. So even in its simplest configuration, MN comes with L additional normalizations, which could be more than the network needs to account for the relevant modes in the distribution.\\n\\nIn practice choosing K=2 gave us a significant performance boost in all our experiments (and therefore we recommend this value); going beyond that only resulted in benefits if the batch size was chosen to be sufficiently large, see the Appendix.\"}", "{\"comment\": \"From Table 1, it looks like increasing K in MN also increases the error rate. What value of K shall we use in practice?\", \"title\": \"Details of experiments\"}" ] }
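The Algorithm 1 questions in the thread above (the shapes of $y_{nk}$ and $x_n - \mu_k$, and the structure of the gating network) come down to broadcasting per-sample soft assignments against per-mode, per-channel statistics. Below is a minimal PyTorch sketch consistent with the replies (an affine gate on spatially pooled features followed by a softmax over K modes, and per-mode weighted means/variances); it is an illustration under these assumptions, not the authors' implementation, and it omits the per-mode running statistics used at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModeNorm2d(nn.Module):
    """Sketch of mode normalization: a gate softly assigns each sample to
    one of K modes, and each mode normalizes with its own statistics."""
    def __init__(self, channels, K=2, eps=1e-5):
        super().__init__()
        self.K, self.eps = K, eps
        self.gate = nn.Linear(channels, K)          # affine map on pooled features
        self.weight = nn.Parameter(torch.ones(channels))
        self.bias = nn.Parameter(torch.zeros(channels))

    def forward(self, x):                            # x: (N, C, H, W)
        N, C, H, W = x.shape
        pooled = x.mean(dim=(2, 3))                  # (N, C), pooled over H, W
        g = F.softmax(self.gate(pooled), dim=1)      # (N, K) soft assignments
        out = torch.zeros_like(x)
        for k in range(self.K):
            w = g[:, k].view(N, 1, 1, 1)             # broadcast over C, H, W
            denom = w.sum() * H * W + self.eps
            mu = (w * x).sum(dim=(0, 2, 3)) / denom  # (C,) mode-k mean
            var = (w * (x - mu.view(1, C, 1, 1)) ** 2).sum(dim=(0, 2, 3)) / denom
            x_hat = (x - mu.view(1, C, 1, 1)) / torch.sqrt(var.view(1, C, 1, 1) + self.eps)
            out = out + w * x_hat                    # weight each mode's output
        return self.weight.view(1, C, 1, 1) * out + self.bias.view(1, C, 1, 1)

mn = ModeNorm2d(channels=16, K=2)
y = mn(torch.randn(8, 16, 32, 32))                   # same shape as the input
```

In particular, the subtraction the commenter asks about is resolved by reshaping: mode k's (C,) mean is viewed as (1, C, 1, 1) so it broadcasts against the (N, C, H, W) input, and the (N, K) assignments are viewed as (N, 1, 1, 1) per mode.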
r1GbfhRqF7
Kernel Change-point Detection with Auxiliary Deep Generative Models
[ "Wei-Cheng Chang", "Chun-Liang Li", "Yiming Yang", "Barnabás Póczos" ]
Detecting the emergence of abrupt property changes in time series is a challenging problem. The kernel two-sample test has been studied for this task, as it makes fewer assumptions on the distributions than traditional parametric approaches. However, selecting kernels is non-trivial in practice. Although kernel selection for the two-sample test has been studied, the insufficient samples in the change point detection problem hinder the success of those developed kernel selection algorithms. In this paper, we propose KL-CPD, a novel kernel learning framework for time series CPD that optimizes a lower bound of test power via an auxiliary generative model. With deep kernel parameterization, KL-CPD endows the kernel two-sample test with a data-driven kernel to detect different types of change-points in real-world applications. The proposed approach significantly outperformed other state-of-the-art methods in our comparative evaluation of benchmark datasets and simulation studies.
[ "deep kernel learning", "generative models", "kernel two-sample test", "time series change-point detection" ]
https://openreview.net/pdf?id=r1GbfhRqF7
https://openreview.net/forum?id=r1GbfhRqF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rylbEV8VeE", "rJgGC4xhAm", "rkeOQQopa7", "S1lkdzo7T7", "rJlwQ-j76m", "HJejUysX6m", "BJxw5aOa2m", "SJg2i1Pp3X", "r1lg5FuhoQ" ], "note_type": [ "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544999976872, 1543402697531, 1542464287742, 1541808743214, 1541808415095, 1541807955254, 1541406094733, 1541398435655, 1540290951712 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1239/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1239/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1239/Authors" ], [ "ICLR.cc/2019/Conference/Paper1239/Authors" ], [ "ICLR.cc/2019/Conference/Paper1239/Authors" ], [ "ICLR.cc/2019/Conference/Paper1239/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1239/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1239/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a new kernel learning framework for change point detection by using a generative model. The reviewers agree that the paper is interesting and useful for the community. One of the reviewer had some issues with the paper but those were resolved after the rebuttal. The other two reviewers have short reviews and somewhat low confidence, so it is difficult to tell how this paper stands among other that exist in the literature. Overall, given the consistent ratings from all the reviewers, I believe this paper can be accepted.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A good paper but short reviews\"}", "{\"comment\": \"This is a good work, with a novel idea and strong experiment results. Besides kernel selection, using RNN is also very interesting to me; in fact, I hardly see a motivation when samples are iid.\", \"i_have_a_few_questions\": [\"problem setting: do you really test if a point is a change-point, or if the sequence may contain a change-point? A point which is not a change-point does not mean all the points are from P. It appears to me that the latter case is considered. If this is true, how about using the maximum partition strategy on $X^(r)$?\", \"the illustrative example in Sec 3: 1) in the insufficient sample case, do you also use the same number of samples from $P$ to compute the unbiased MMD estimate? With some experience on toy examples, the selected kernel in Sutherland, et al can still help when using the biased MMD. 2). In Sec. 3.2, the selected $G$ seems not consistent with the proposed method. In the proposed method, $G$ should be close but not too close to $P$, and is independent of $Q$. However, the selected $G$ depends on the parameter of the real $Q$.\", \"problem formulation: compared with the MMD GAN paper, the formulation in this paper has an additional form term $\\\\hat M_k(X, X')$. 
So how does this term help in finding the kernel?\", \"test threshold approximation: can you give more details on this part?\", \"Finally, looking forward to a new version and the code.\"], \"title\": \"Interesting work, with a novel idea and strong experimental results\"}", "{\"title\": \"addressed some concerns\", \"comment\": \"The authors have made a detailed reply to my comments and addressed a number of my concerns.\"}", "{\"title\": \"RE: A very neat idea with strong results\", \"comment\": \"Thank you for your review and appreciation of our work.\\n\\n- It is true that for some real-world applications, real-time CPD should be applied, where only past samples are observed and an anomaly alarm should be raised immediately after observing a new sample. This paper, on the other hand, focuses on retrospective CPD, where samples from both directions (past and future) are available for anomaly detection. While this setting is not real-time, it typically offers more robust anomaly detection.\"}", "{\"title\": \"RE: An interesting study of how to optimize a kernel change-point detection algorithm.\", \"comment\": \"Thank you for your review and appreciation of our work. We will provide more details about how surrogate distributions are approximated by the generative models in our revision.\"}", "{\"title\": \"RE: Not convinced that improvements are from better power\", \"comment\": \"Thank you very much for your valuable comments. We try to address your concerns below.\\n\\nWe agree with you that there are multiple settings for change point detection (CPD) where samples could be piecewise iid, non-iid autoregressive, and more. It is truly difficult to come up with a generic framework to tackle all these different settings. In this paper, following the previous CPD works [1,2,3,4], we stay with the piecewise iid assumption on the time series samples. Extending the current model to other settings, such as the scene detection task, is interesting and we leave it for future work.\\n\\nFor the piecewise iid case, as shown in our toy experiment in Sec. 3, optimizing the kernel using the surrogate distribution G indeed leads to better test power when samples from Q are insufficient. This demonstrates the effectiveness of our kernel selection objective without any autoregressive/RNN modeling to control the Type-I error. In Table 3, it is even interesting to see that for the synthetic Jumping-Mean and Scaling-Variance datasets that are generated from an autoregressive model with non-iid temporal samples, the non-parametric methods (RDR-KCPD and Mstats-KCPD) without RNN modeling are comparable to, and sometimes better than, the AR-based methods.\\n\\nFor the non-iid temporal structure in real-world applications, the concern is whether the improvement comes from adopting RNNs and controlling the Type-I error for model selection (kernel selection). Indeed, using RNN-parameterized kernels (trained by minimizing a reconstruction loss) buys us some gain compared to directly conducting the kernel two-sample test on the original time series samples (Fig 3, cyan bar rises to blue bar), but we still have to do model selection to decide the parameters of the RNN. In Table 2, we studied a kernel learning baseline, OPT-MMD, which optimizes an RNN-parameterized kernel by controlling the Type-I error but without the surrogate distribution. OPT-MMD is inferior to KL-CPD, which introduces the surrogate distribution with an auxiliary generator. 
On the other hand, from Table 2, we can also observe that KL-CPD is better than other RNN alternatives, such as LSTNet. The performance gaps between KL-CPD, OPT-MMD (regularizing Type-I error only) and other RNN works indicate that the proposed framework of maximizing test power via an auxiliary distribution serves as a good surrogate for kernel (model) selection.\\n\\nIn summary, we agree with you that part of the improvement comes from introducing RNNs, and that our framework does have some limitations for different CPD settings. For the choice of real-world applications, we mainly follow the CPD literature [2,4], which also made piecewise iid assumptions on the time series samples while applying their frameworks to likely non-iid real-world datasets, but we still observe the improvement brought by KL-CPD. It would be interesting to develop a theoretical framework for kernel learning to deal with non-iid data, and we leave it as future work.\\n\\n[1] Kernel change-point analysis, NIPS 2009\\n[2] Change-point detection in time-series data by relative density-ratio estimation, Neural Networks 2013\\n[3] A nonparametric approach for multiple change point analysis of multivariate data, JASA 2014\\n[4] M-statistic for kernel change-point detection, NIPS 2015\"}", "{\"title\": \"A very neat idea with strong results\", \"review\": [\"Using a generative model as the surrogate distribution for the kernel two-sample test is novel\", \"An important and new application of deep generative models\", \"Strong experiments on synthetic and real-world time series data sets\", \"Very clear writing and explanation of the idea\", \"Relies on sample segments from both directions (past and future), while in the practical setting CPD is usually sequential and one-directional\", \"Lacks a theoretical understanding of the limits of the neural generator in the kernel two-sample test\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting study of how to optimize a kernel change-point detection algorithm.\", \"review\": \"A new approach to choosing a kernel that maximizes test power for kernel change-point detection. This provides an extension to the two-sample version of the problem (Gretton et al. 2012b, Sutherland et al. 2017). The difficulty is that there are very limited samples from the abnormal distribution. The idea is based on choosing a surrogate distribution using a generative model. The idea makes sense, although there is not much detail on how to choose the surrogate distribution. There is a mechanism to study the threshold. Real-data and simulation studies demonstrate good performance. I think the idea is really interesting and I am impressed by the completeness of the work.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Not convinced that improvements are from better power\", \"review\": \"The manuscript entitled \\\"Kernel Change-Point Detection with Auxiliary Deep Generative Models\\\" describes a novel approach to optimising the choice of kernel towards increased testing power in this challenging machine learning problem. 
The proposed method is shown to offer improvements over alternatives on a set of real data problems and the minimax objective identified is well motivated; however, I am not entirely convinced (a) that the performance improvements arise for the hypothesised reasons, and (b) that the test setting is of wide applicability.\\n\\nA fundamental distinction between parametric and non-parametric tests for CPD in timeseries data is that the adoption of parametric assumptions allows for an easier introduction of strict but meaningful relationships in the temporal structure---e.g. a first order autoregressive model introduces a simple Markov structure---whereas non-parametric kernel tests typically imagine samples to be iid (before and after the change-point). For this reason, the non-parametric tests may lack robustness to certain realistic types of temporal distributional changes: e.g. in the parameter of an autoregressive timeseries. On the other hand, it may be prohibitively difficult to design parametric models to well characterise high dimensional data, whereas non-parametric models can typically do well in high dimension when the available data volumes are large. In the present application it seems that the setting imagined is for low dimensional data of limited size in which there is likely to be non-iid temporal structure (i.e., outside the easy relative advantage of non-parametric methods). For this reason it seems to me the key advantage offered by the proposed approach with its use of a distributional autoregressive process for the surrogate model may well be to introduce robustness against Type 1 errors due to otherwise unrepresented temporal structure in the base distribution (P). In summarising the performance results by AUC it is unclear whether it is indeed the desired improvement in test power that offers the advantages or whether it is in fact a decrease in Type 1 errors.\", \"another_side_of_my_concern_here_is_that_i_disagree_with_the_statement\": \"\\\"As no prior knowledge of Q ... intuitively, we have to make G as close to P as possible\\\" interpreted as a way to maximise test power; as a way to minimise Type 1 errors, yes.\\n\\nAcross change-point detection methods it is also important to distinguish key aspects of the problem formulation. One particular specification here is that we already have some labelled instances of data known to come from the P distribution, and perhaps also a smaller number of instances of data labelled from Q. This is distinct from fully automated change point detection methods for time series such as automatic scene selection in video data. Another dissimilarity to that archetypal scenario is that here we suppose the P and Q distributions may have subtle differences that we're interested in; and it would also seem that we assume there is only one change-point to detect. Or at least the algorithm does not seem to be designed to be applied in a recursive sense as it would be for scene selection.\\n\\nFinally, there is no discussion here of computational complexity and cost?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
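The baseline this exchange keeps returning to is a sliding-window kernel two-sample statistic with a fixed kernel, on top of which KL-CPD learns the kernel. A self-contained sketch of that fixed-RBF baseline (in the spirit of the Mstats-KCPD comparisons above) follows; the window size, the bandwidth, and the use of the biased MMD estimator are illustrative choices, not KL-CPD itself.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between samples X (m, d) and Y (n, d), RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def cpd_scores(series, w=25, sigma=1.0):
    """Score each time t by MMD^2 between the w samples before and after t."""
    T = len(series)
    scores = np.full(T, np.nan)
    for t in range(w, T - w):
        scores[t] = rbf_mmd2(series[t - w:t], series[t:t + w], sigma)
    return scores

# Toy piecewise-iid series with a mean shift at t = 200.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, (200, 1)), rng.normal(2, 1, (200, 1))])
print(int(np.nanargmax(cpd_scores(x))))  # peaks near the true change point
```

The kernel-selection question debated above is then how to pick sigma (or a learned deep kernel) when only a handful of post-change samples like these are available.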
BJzbG20cFQ
Towards Metamerism via Foveated Style Transfer
[ "Arturo Deza", "Aditya Jonnalagadda", "Miguel P. Eckstein" ]
The problem of visual metamerism is defined as finding a family of perceptually indistinguishable, yet physically different images. In this paper, we propose our NeuroFovea metamer model, a foveated generative model that is based on a mixture of peripheral representations and style transfer forward-pass algorithms. Our gradient-descent free model is parametrized by a foveated VGG19 encoder-decoder which allows us to encode images in high dimensional space and interpolate between the content and texture information with adaptive instance normalization anywhere in the visual field. Our contributions include: 1) A framework for computing metamers that resembles a noisy communication system via a foveated feed-forward encoder-decoder network – We observe that metamerism arises as a byproduct of noisy perturbations that partially lie in the perceptual null space; 2) A perceptual optimization scheme as a solution to the hyperparametric nature of our metamer model that requires tuning of the image-texture tradeoff coefficients everywhere in the visual field which are a consequence of internal noise; 3) An ABX psychophysical evaluation of our metamers where we also find that the rate of growth of the receptive fields in our model match V1 for reference metamers and V2 between synthesized samples. Our model also renders metamers at roughly a second, presenting a ×1000 speed-up compared to the previous work, which now allows for tractable data-driven metamer experiments.
[ "Metamerism", "foveation", "perception", "style transfer", "psychophysics" ]
https://openreview.net/pdf?id=BJzbG20cFQ
https://openreview.net/forum?id=BJzbG20cFQ
ICLR.cc/2019/Conference
2019
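The abstract above describes interpolating between content and texture information with adaptive instance normalization (AdaIN) inside each pooling region. Below is a minimal PyTorch sketch of that building block, assuming encoded VGG19 features; the tensor sizes and the source of the texture code are hypothetical, and the full model additionally varies alpha per receptive field across the visual field.

```python
import torch

def adain(content, style, eps=1e-5):
    """AdaIN (Huang & Belongie, 2017): match the channel-wise mean/std of the
    content features to those of the style features."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_sd = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_sd = style.std(dim=(2, 3), keepdim=True) + eps
    return s_sd * (content - c_mu) / c_sd + s_mu

def interpolate_region(z_content, z_texture, alpha):
    """Image-texture tradeoff for one pooling region: alpha=0 keeps the content
    code, alpha=1 keeps only the texture-matched code."""
    return alpha * adain(z_content, z_texture) + (1 - alpha) * z_content

z_c = torch.randn(1, 512, 8, 8)   # encoded content crop (hypothetical size)
z_t = torch.randn(1, 512, 8, 8)   # encoded texture/noise crop (hypothetical)
z_mix = interpolate_region(z_c, z_t, alpha=0.5)  # decode z_mix to render
```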
{ "note_id": [ "HylyXR-llE", "S1gispPmRX", "r1e9ZnwmA7", "ryx7JhwmRX", "BygPHovmC7", "HyeGxiD7C7", "rJeH99PXA7", "B1li_tDmAm", "HJedMX8Na7", "rJxupYl0hm", "Byxt733Fhm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544719894859, 1542843811342, 1542843394125, 1542843355273, 1542843198858, 1542843113687, 1542843020967, 1542842738986, 1541853968341, 1541437888499, 1541159968768 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1238/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/Authors" ], [ "ICLR.cc/2019/Conference/Paper1238/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1238/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1238/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- The problem is well-motivated and related work is thoroughly discussed\\n- The evaluation is compelling and extensive.\\n\\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- Very dense. Clarity could be improved in some sections.\\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nNo major points of contention.\\n\\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nThe reviewers reached a consensus that the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel high-performing model; thorough experimental analysis and discussion; clarity could be improved\"}", "{\"title\": \"General Comments to All Reviewers\", \"comment\": \"We\\u2019d like thank all reviewers for the feedback and assessment of our paper. We hope to have individually addressed all your concerns. We have uploaded a modified version of our paper where we have addresses such concerns, re-arranged figures, and fixed minor typos and corrections. These include:\\n\\nMoving Figure 13 to the Supplementary Material for a detailed discussion on the potential interpretability of V1 and V2 metamers in the human visual system.\\n\\nEnhancing Figure 4 with the subfigures of Figure 3 where we show how the interpolation is done in the encoded space.\\n\\nAdding the clarification in Section 3 that the FS model includes structural constraints in the metamer generation pipeline as shown in Wallis et al. 
2016.\\n\\nAn extended version of Figure 9 where we tune a gamma function via the perceptual optimization framework of Experiment 1, but using other IQA metrics such as MS-SSIM and IW-SSIM. This figure is supplementary and has been added to the Supplementary Material.\\n\\nThe histograms of the permutation tests that verify the scale invariance of the gamma function.\\n\\nConfidence intervals for the estimates of critical scaling and absorbing factors, as well as the lapse rate for each observer in Experiment 2.\"}", "{\"title\": \"Comments to AnonReviewer 1 [Part 3]\", \"comment\": \"- How is the model trained? Do the authors use the pre-trained model of Huang & Belongie or is the training different in the context of the proposed method? I could only find the statement that the decoder is trained to invert the encoder, but that doesn't seem to be what Huang & Belongie's model does and the paper does not say anything about how it's trained to invert. Please clarify.\\n\\nWe use the pre-trained decoder by Huang and Belongie as stated in our paper. The decoder does invert the encoder with high fidelity if the input images for style and content are the same, both in theory (all the statistics are matched in the VGG19) and in practice (visual inspection and alpha=0 SSIM scores as reported in the paper). In the training pipeline, the encoder is fixed and the decoder is trained to learn how to invert the structure of the content image and the texture of the style image; thus, when the content and style images are the same, the decoder approximates the inverse of the encoder. In the supplementary material we provide details of such training as uploaded in our submission; the content images were natural scenes from ImageNet and the style images were a collection of paintings and texture-like images.\\n\\nWe would like to emphasize, as stated in our paper, that within our pipeline there is no explicit training to render metamers, but rather to invert image structure and texture-driven distortions in the encoded space.\"}", "{\"title\": \"Comments to AnonReviewer 1 [Part 2]\", \"comment\": \"- I would like to know some details about the inference of the critical scaling. It seems surprisingly spot on 0.5 as in F&S for synth vs. synth, but looking at the data in Fig. 12 (rightmost panel), I find the value 0.5 highly surprising given that all the blue points lie more or less on a straight line and the point at a scaling factor of 0.5 is clearly above chance level. Similarly, the fit for original vs. synth does not seem to fit the data all that well and a substantially shallower slope seems equally plausible given the data. How reliable are these estimates, what are the confidence intervals, and was a lapse rate included in the fits (see Wichmann & Hill 2001)?\\n\\nWe followed your suggestion and reported the lapse rates in the updated manuscript (all under 3%). There was little variability in the fits, as the absorbing factor generally takes care of modulating the asymptotic performance of the psychometric function for each subtask in our roving experiment. We elaborate more on the fitting procedure as well as the characteristics of the psychometric function and lapse rates in the Supplementary Material. We also included confidence intervals for all the estimates of critical scaling factor, absorbing factor and lapse rates in our updated version (see Figure 12).\\n\\nYou are correct. 
In order to compute the average fitted values for the pooled observer, we averaged the fitted values for the 3 observers and labelled them as the average fit, rather than performing least squares regression on the average values (which was done individually).\\n\\nWe added these clarifications in the supplementary material of the paper, as well as a derivation of how to compute the lapse rate for our ABX task.\\n\\n- I don't get the point of Figs. 4, 13 and 14. I think they could as well be removed without the paper losing anything. Similarly, I don't think sections 2.1 and the lengthy discussion (section 5) are useful at all. Moreover, section 3 seems bogus. I don't understand the arguments made here, especially because the obvious options (alpha=1 or overlapping pooling regions; see above) are not even mentioned.\\n\\nWe moved Figure 13 to the Supplementary Material.\\n\\nWe believe Figures 4 and 14 provide a clear interpretation of the model in terms of distortions in the encoded space and how distortions, when viewed by the human observer, change as a function of alpha and the geometry of the surface in the encoded space. We think there is a lot of room and work to do in terms of integrating the literature of visual metamerism within the context of differential geometry -- which is outside of the scope of this paper, but we provide hints on why it could be appropriate, and it is currently being developed as follow-up work. The recent work of Henaff (2018) on the perceptual straightening hypothesis, as well as the Eidolon distortions of Koenderink et al. (2017), are both examples of integrations between vision science and differential geometry.\\n\\nSection 3 provides mathematical insight into the psychophysical tractability of the metamer rendering problem, given that a structural constraint should be included as we clarified previously. For example, a per-pooling-region tuning of maximal distortion is psychophysically intractable given that the experimenter would have to explore all pooling regions, over many values of alpha and over a wide collection of images, over many scales and for multiple trials. If we assume (similar to our settings): 100 pooling regions, 5 scales, 10 steps for alpha, 10 images, 30 trials, and 2 seconds per trial for observer response, this amounts to roughly 1 month of raw psychophysics time (100 x 5 x 10 x 10 x 30 x 2 s = 3,000,000 s, or about 35 days). If we take into account that observers usually do a maximum of 2 hours per day -- this would extrapolate to a year in actual data collection time. It is possible, but unreasonable. This is the benefit of the perceptual optimization simulated experiment we propose.\\n\\nSection 3 also provides a formal treatment of the psychophysical optimization to be performed to find the critical scaling value for the FS metamers and motivates the need for Experiment 1, where we reduce the hyper-parametric nature of our model to a single parameter.\"}", "{\"title\": \"Comments to AnonReviewer 1 [Part 1]\", \"comment\": \"Thank you for providing critical insights on our paper and giving such positive feedback! We would like to address your concerns:\\n\\n- The motivation for introducing alpha is not clear to me. Wasn't the idea of F&S that you can reduce the image to its summary statistics within a pooling region whose size scales with eccentricity? Why do you need to retain some content information in the first place? How do images with alpha=1 (i.e. 
keep only texture) look?\n\nYou are correct that F&S introduce the texture matching hypothesis for peripheral processing (as preceded by Balas et al., 2009); however, while the original work of Balas et al. 2009 uses pure texture matching with no structural constraints to explain losses in performance for a visual search task, the model and implementation of FS include a prior on image structure at each step when performing gradient descent for texture matching. You could think of this as globally trying to minimize the mean square error (MSE) between the initial noise seed and the final image in pixel space, while locally, at the same time, matching the MSE in texture space (via the Portilla-Simoncelli statistics) between the initial noise seed and the image content that lies within each receptive field. Images with pure alpha=1 for each receptive field show highly aberrant distortions, as would FS metamers if the content/structure matching restriction were not added in their implementation. Consequently, a pure texture matching approach does not work for visual metamerism. This has been clarified in great detail in Wallis et al. (Journal of Vision) 2016 -- see Figure 7, General Discussion (Texture Statistics & Metamerism), and Acknowledgements of their paper -- and has also been suggested recently in Wallis, Funke et al., 2018. In addition, the pioneering work of Rosenholtz et al. 2012 (Journal of Vision) on Mongrels, as well as the Texforms of Long, Yu & Konkle (PNAS, 2018), provide the same intuitions and clarifications with regards to preserving structure. It is a subtle detail present in the original FS code, and may not have been emphasized in the original paper.\n\nHere is a link to the line in the code where they project the image to its low pass residual (a way of enforcing structural constraints) for every step of the texture matching procedure, which is done via a set of coarse-to-fine sub-iterations: https://github.com/freeman-lab/metamers/blob/master/main/metamerSynthesis.m#L191\n\nWe would really like to thank you for pointing this out, as it is a detail that, if not properly addressed, defeats the whole purpose of trying to preserve image structure -- and of introducing an alpha parameter in the first place. We hope we have addressed your main concern, and we appreciate the rigorous feedback that has propelled this work forward from previous versions.\n\n- Related to above, why does alpha need to change with eccentricity? Experiment 1 seems to suggest that changing alpha leads to similar SSIM differences between synths and originals as F&S does, but what's the evidence that SSIM is a useful/important metric here?\n\nAlpha should change as a function of eccentricity given the stronger effects of crowding. We empirically verified this is the case by fitting a gamma function that tunes each alpha coefficient as a function of receptive field size, which increases with eccentricity. With regards to the choice of SSIM over other IQA metrics, please see our detailed response to AnonReviewer 3, who has suggested trying Experiment 1 with other IQA metrics. 
We have done so, finding that the tuning properties of the gamma function still hold, and have added these results to the updated Supplementary Material (Section 6.7).\n\n- Again related to above, why do you not use the same approach of blending pooling regions like F&S did instead of introducing alpha?\n\nWe do indeed use blended pooling regions as in F&S, and would like to clarify that the interpolation in Figures 3 and 4 is done for each pooling region, rather than the whole image. You could think of Figure 3 as a \u2018zoomed in\u2019 pooling region, as we wanted to magnify the effects of the distortions within a receptive field. These smoothly blended pooling regions are used for local style transfer for each receptive field. Figure 9 (top) shows how we assign an alpha coefficient to each pooling region (receptive field), and Section 6.2 in the supplementary material provides details on the construction of blended pooling regions.\"}", "{\"title\": \"Comments to AnonReviewer 3 [Part 2]\", \"comment\": \"--- (Also not necessarily a negative) Exercising SSIM is a valid decision given its widespread use. I am curious if MS-SSIM, IW-SSIM or other metrics make any significant difference.\n\nThis is also a great observation. In principle, we chose SSIM because it has been empirically shown to be monotonic with human judgments of visual perception in terms of distortions. Another important factor in our choice of SSIM, which we did not include in the paper, is that SSIM is based on changes of luminance, contrast, and structure (via normalized contrast), all of which are critical aspects when analyzing distortions. In addition, SSIM is upper bounded, symmetric, and has a unique maximum, which are all ideal traits to have for the perceptual optimization pipeline proposed in Experiment 1 (Section 4.1). MS-SSIM (multiscale SSIM) and IW-SSIM (image content weighted SSIM, computed via mutual information between the encoded reference and distorted image) also share these properties, and following your suggestion we decided to re-run Experiment 1 with these IQA metrics to analyze the robustness of our choice of SSIM vs. other metrics, as well as the potential change of shape of the gamma function. This experiment served as a great control, as it showed that our optimization scheme is extensible to other IQA metrics. (See Algorithm 1 in the Supplementary Material in our original and updated submission.)\n\nWe have added a page in the Supplementary Material (Section 6.7) with the updated results, figures, and permutation tests, where we discuss what we found. We have copied the findings here:\n\nThere are 3 key observations that stem from these additional results:\n\n1) The sigmoidal nature of the gamma function is found again and is also scale independent, showing the broad applicability of our perceptual optimization scheme and how it is extensible to other IQA metrics that satisfy SSIM-like properties (upper bounded, symmetric and unique maximum).\n\n2) The tuning curves of MS-SSIM and IW-SSIM look almost identical, given that IW-SSIM is no more than a weighted version of MS-SSIM where the weighting function is the mutual information between the encoded representations of the reference and distorted image across multiple resolutions. Differences are stronger in IW-SSIM when the region over which it is evaluated is quite large (i.e. 
an entire image); however, given that our pooling regions are quite small in size, the IW-SSIM score asymptotes to the MS-SSIM score. In addition, both scores converge to very similar values given that we are averaging these scores over the images and over all the pooling regions that lie within the same eccentricity ring. We found that ~90% of the maximum alphas had the same values given the 20-point sampling grid that we use in our optimization. Perhaps a different selection of IW hyperparameters (we used the default set), finer sampling schemes for the optimal value search, as well as averaging over more images, may produce visible differences between both metrics.\n\n3) The sigmoidal slope is smaller for both IW-SSIM and MS-SSIM vs SSIM, which yields more conservative distortions (as alpha is smaller for each receptive field). This implies that the model can still create metamers, but potentially with different critical scaling factors for the reference vs synth experiment and for the synth vs synth experiment. Future work should focus on psychophysically finding these critical scaling factors, and on whether they still lie within the range of receptive field growth rates of V1 and V2.\"}", "{\"title\": \"Comments to AnonReviewer 3 [Part 1]\", \"comment\": \"Thank you for having a very positive outlook on our paper; we will address some of your comments and questions.\n\n--- At the extreme tradeoff between intrinsic structure and texture, the notion of a metamer seems somewhat obscured. At what point is a metamer no longer a metamer?\n\nThis is a great question. In general, two stimuli are metameric to each other when they are perceptually indistinguishable under certain viewing conditions. In our experiments the viewing condition is restricted to a forced fixation task at the center of each image. To answer your question, this happens when the scaling value that is used to construct the size of the pooling regions exceeds its critical limit. All images below such critical scaling values remain metameric to each other, contingent on the testing paradigm: the Reference vs Synthesis (s=0.25) and Synthesis vs Synthesis (s=0.5) experiments. Indeed, you could imagine a small alteration in an image, such as modifying a specific pixel by 1 bit, that could also produce a metamer. Yet that distortion is somewhat uninteresting, and most importantly it does not provide theoretical insights on the computations done by the human visual system (texture matching in the periphery, as proposed in Balas et al., 2009 and Freeman and Simoncelli 2011). Moreover, we find a function (the gamma function) that modulates how much distortion (quantified by alpha) to insert contingent on the size of each receptive field, for any scaling factor. Figure 4 illustrates this idea with the blue contour around the blue dot, which we call the metameric boundary: if a distortion exceeds this value, the synthesized image will fail to be metameric locally for a receptive field, and thus for the entire image.\"}", "{\"title\": \"Comments to AnonReviewer 4\", \"comment\": \"Thanks for taking the time to review our paper. We also share your enthusiasm with regards to metamerism. Below we address some of the comments:\n\n--- The quantitative evaluation is somewhat lacking in that there are no quantitative psychophysical experiments to compare this model to competing ones across different observers. 
For example, it would have been interesting to compare the ability of observers to distinguish between original images and metamers generated by different models.\n\nThis is an excellent point and we are currently working in that direction. The current submission represents a good first step: to fully describe our model and psychophysically evaluate it under 2 conditions (synth vs synth, and synth vs reference). A next step is to evaluate our model against other models, including FS, for the same set of images. One current limitation when considering such a rigorous evaluation is that both the SideEye model and the CNN Synthesis model are not publicly available -- thus the differences in performance might be driven by hyperparameter/implementation settings for each model, rather than by the model itself. Along these lines, we are looking forward to releasing our code and making it public, similar to the FS model, to promote the development of improved metamer generation models, as well as to see potential applications of metamerism in computer vision as suggested in the discussion section.\n\n--- Additional comments: On page 10, you show Fig. 13; however, you mention at the end of the first paragraph that you further elaborate on Fig. 13 in the Supplementary Materials. I think it would be better to either provide more discussion in the text and refer to the figure, or just move it fully to Supplementary materials. \n\nThanks for pointing this out. We moved Figure 13 to the Supplementary Material, where we elaborate more on the geometrical interpretation of these distortions in the encoded space and on how a human observer might not be able to discriminate between such distortions.\n\n--- Additional comments: Also, in the qualitative comparison of various models you mention that SideEye runs in milliseconds whereas NF runs in seconds. It would be interesting to discuss the potential trade-off between speed and the quality of generated metamers between the models.\n\nWe agree, and this goes back to the point we mentioned earlier with regards to publicly available code from the authors. One of the main differences that we can comment on is that the models differ in their distortions, given the difference in texture statistics. We have verified this via visual inspection. The SideEye model uses a Fully Convolutional Network to approximate a Texture Tiling Model (Mongrel) in O(1) time that locally matches texture distortions everywhere in the visual field, analogous to the metamers of FS. These Mongrels use Portilla-Simoncelli texture statistics, as compared to the output of the VGG-Net that we use in our parametrization.\n\nComparing all models is a next step in metamer research, and we will begin conversations with some of the other authors to see if we can share/distribute our code for such comparisons. In addition, the work of Wallis, Funke et al., 2018 has shown that the choice of evaluation images (texture-like, scene-like, and man-made) also affects the difficulty of metameric rendering. Thus, the field is not only limited by access to models and code, but also by the lack of a standardized set of images and a standardized psychophysical paradigm for evaluation.\"}", "{\"title\": \"Towards Metamerism via Foveated Style Transfer\", \"review\": \"Summary\nThis paper proposes a NeuroFovea (NF) model for generation of point-of-fixation metamers. 
As opposed to previous algorithms, which use gradient descent to match the local texture and image statistics, NF proposes to use a style transfer approach via an Encoder-Decoder style architecture, which allows it to produce metamers in a single forward pass, achieving a significant speed-up compared to earlier approaches.\n\nPros\n-The paper tackles a very intriguing topic.\n-The paper is very well written, using concise and clear language that allows it to present a large amount of information in the 10 pages + appendix.\n-The paper provides a thorough discussion of the problem, related work, and the model itself.\n-The single-forward-pass nature of the model allows it to achieve a 1000x speed-up in generating metamers as opposed to previous GD-based approaches.\n-The authors provide enough details to allow for reproducibility.\n\nCons\n-(Not necessarily a negative) Requires a very careful reading as the paper provides a lot of information (though as mentioned it is very well written)\n-The quantitative evaluation is somewhat lacking in that there are no quantitative psychophysical experiments to compare this model to competing ones across different observers. For example, it would have been interesting to compare the ability of observers to distinguish between original images and metamers generated by different models. \n\nAdditional comments\nOn page 10, you show Fig. 13; however, you mention at the end of the first paragraph that you further elaborate on Fig. 13 in the Supplementary Materials. I think it would be better to either provide more discussion in the text and refer to the figure, or just move it fully to Supplementary materials.\n\nAlso, in the qualitative comparison of various models you mention that SideEye runs in milliseconds whereas NF runs in seconds. It would be interesting to discuss the potential trade-off between speed and the quality of generated metamers between the models.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of Towards Metamerism via Foveated Style Transfer\", \"review\": \"This paper presents an interesting analysis of metamerism and a model capable of rapidly producing metamers of value for experimental psychophysics and other domains.\n\nOverall I found this work to be well written and executed and the experiments thorough. Specific points on positives and negatives of the work follow:\", \"positives\": [\"The paper shows a solid understanding of the literature in this domain and presents a strong motivation\", \"The problem itself is addressed at a deep level with many nuanced (but important) considerations discussed\", \"Ultimately the results of the model seem convincing, in particular with the accompanying psychophysical experiments\"], \"negatives\": [\"(Maybe not a negative, but a question) At the extreme tradeoff between intrinsic structure and texture, the notion of a metamer seems somewhat obscured. At what point is a metamer no longer a metamer?\", \"(Also not necessarily a negative) Exercising SSIM is a valid decision given its widespread use. 
I am curious if MS-SSIM, IW-SSIM or other metrics make any significant difference.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Somewhat obscure writing, but reasonable contribution\", \"review\": \"Summary:\nThe paper proposes a fast method for generating visual metamers \u2013 physically different images that cannot be told apart from an original \u2013 via foveated, fast, arbitrary style transfer. The method achieves the same goal as an earlier approach (Freeman & Simoncelli 2011): locally texturizing images in pooling regions that increase with eccentricity, but is orders of magnitude faster. The authors perform a psychophysical evaluation to test how (in)discriminable their synthesized images are amongst each other and compared with originals. Their experiment replicates the result of Freeman & Simoncelli of a V2-like critical scaling in the synth vs. synth condition, but shows that V1-like or smaller scaling is necessary for the original vs. synth condition.\n\nI reviewed an earlier version of this paper for a different venue, where I recommended rejection. The authors have since addressed some of my concerns, which is why I am more positive about the paper now.\", \"strengths\": [\"The motivation for the work is clear and the implementation straightforward, combining existing tools from style transfer in a novel way.\", \"It's fast. Rendering speed is indeed a bottleneck in existing methods, so a fast method is useful.\", \"The perceptual quality of the rendered images is quantified by psychophysical testing.\", \"The role of the scaling factor for the pooling regions is investigated and the key result of Freeman & Simoncelli (pooling regions scale with 0.5*eccentricity) is replicated with the new method. In addition, the result of Wallis et al. (2018) that lower scale factors are required for original vs. synth is replicated as well.\"], \"weaknesses\": [\"Compared with earlier work, an additional fudge parameter (alpha) is introduced. It is not clear why it is necessary and it complicates interpretation.\", \"The paper contains a number of sections with obscure mathiness and figures that I can't follow and whose significance is unclear.\"], \"conclusion\": \"The work is well motivated, the method holds up to its promise of being fast and is empirically validated. However, it feels quite ad-hoc and the writing of the paper is very obscure at various places, which leaves room for improvement.\", \"details\": [\"The motivation for introducing alpha is not clear to me. Wasn't the idea of F&S that you can reduce the image to its summary statistics within a pooling region whose size scales with eccentricity? Why do you need to retain some content information in the first place? How do images with alpha=1 (i.e. keep only texture) look?\", \"Related to above, why does alpha need to change with eccentricity? Experiment 1 seems to suggest that changing alpha leads to similar SSIM differences between synths and originals as F&S does, but what's the evidence that SSIM is a useful/important metric here?\", \"Again related to above, why do you not use the same approach of blending pooling regions like F&S did instead of introducing alpha?\", \"I would like to know some details about the inference of the critical scaling. It seems surprisingly spot on 0.5 as in F&S for synth vs. synth, but looking at the data in Fig. 
12 (rightmost panel), I find the value 0.5 highly surprising given that all the blue points lie more or less on a straight line and the point at a scaling factor of 0.5 is clearly above chance level. Similarly, the fit for original vs. synth does not seem to fit the data all that well and a substantially shallower slope seems equally plausible given the data. How reliable are these estimates, what are the confidence intervals, and was a lapse rate included in the fits (see Wichmann & Hill 2001)?\", \"I don't get the point of Figs. 4, 13 and 14. I think they could as well be removed without the paper losing anything. Similarly, I don't think sections 2.1 and the lengthy discussion (section 5) are useful at all. Moreover, section 3 seems bogus. I don't understand the arguments made here, especially because the obvious options (alpha=1 or overlapping pooling regions; see above) are not even mentioned.\", \"How is the model trained? Do the authors use the pre-trained model of Huang & Belongie or is the training different in the context of the proposed method? I could only find the statement that the decoder is trained to invert the encoder, but that doesn't seem to be what Huang & Belongie's model does and the paper does not say anything about how it's trained to invert. Please clarify.\", \"At various places the writing is somewhat sloppy (missing words, commas, broken sentences), which could have been avoided by carefully proof-reading the paper.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
S1gWz2CcKX
Neural MMO: A massively multiplayer game environment for intelligent agents
[ "Joseph Suarez", "Yilun Du", "Phillip Isola", "Igor Mordatch" ]
We present an artificial intelligence research platform inspired by the human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games, a.k.a. MMOs). We demonstrate how this platform can be used to study behavior and learning in large populations of neural agents. Unlike currently popular game environments, our platform supports persistent environments, with a variable number of agents, and open-ended task descriptions. The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. Our platform aims to simulate this setting in microcosm: we conduct a series of experiments to test how large-scale multiagent competition can incentivize the development of skillful behavior. We find that population size magnifies the complexity of the behaviors that emerge and results in agents that out-compete agents trained in smaller populations.
[ "MMO", "Multiagent", "Game", "Reinforcement Learning", "Platform", "Framework", "Niche Formation", "Exploration" ]
https://openreview.net/pdf?id=S1gWz2CcKX
https://openreview.net/forum?id=S1gWz2CcKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1lMFvztlE", "SygiCyqi1V", "r1lZUCMvyN", "SJgIcAKL07", "rkl9_0FL07", "SyxEQRY8CX", "BylsB35O6m", "rkggOp9Y37", "rJeSmPOthm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545312122351, 1544425426558, 1544134217111, 1543048846435, 1543048817612, 1543048732319, 1542134850552, 1541152104388, 1541142301257 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1237/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1237/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1237/Authors" ], [ "ICLR.cc/2019/Conference/Paper1237/Authors" ], [ "ICLR.cc/2019/Conference/Paper1237/Authors" ], [ "ICLR.cc/2019/Conference/Paper1237/Authors" ], [ "ICLR.cc/2019/Conference/Paper1237/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1237/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1237/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers raise a number of concerns including limited methodological novelty, limited experimental evaluation (comparisons), and poor readability. Although the authors did address some of the concerns, the paper as is needs a lot of polishing and rewriting. Hence, I cannot recommend this work for presentation at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Review of revised paper.\", \"comment\": \"First, sorry for the delay on this. I missed the original open review email and didn\\u2019t realize there was an update. Thank you for the ping. I have done a complete reread and have a second review bellow.\", \"pro\": \"(from last version)\\nYou are correct that the paper greatly improved. Many confusing points have been resolved. The underlying story of building large scale environments and the experiment results remain interesting and are easier to follow.\", \"cons\": \"The combat and targeting system are still confusing. In fact, I don\\u2019t think they are particularly core to the story and could even be expanded and moved to the appendix. Targeting is not really define. Consider adding a diagram showing what this is. I have a guess from playing RTS style games, but not 100% sure. You state learned targeting does not work yet provide figures about it. I also wouldn\\u2019t recommend including these results as they detract from the main story. \\n\\nFigure 8 it is unclear to me what the difference between the two maps on the right side are.\\n\\nAlso, because I think it would be interesting and informative, it would be great to include a video of what \\u201clife\\u201d looks like at multiple different resolutions. \\n\\nTry not to present new results in the discussion. Right now the discussion section reads more like an experimental section. I would merge all results, and write a discussion that is more big picture about this type environment, limitations, and future work. Right now, 1-5 are great, clear, tell a nice story and are self contained. The message becomes a foggier with the discussion section \\u2014 doesn\\u2019t fit the story and seems like a random other grab bag of experiments.\\n\\nIn light of improvements, and given that this work was interesting to read, and exploring an undeveloped area of research, I am going to change my review by 2 pt: 4=>6. 
I do hope the authors continue to do work on this and improve the messaging around the paper, as this will ensure larger impact!\", \"typos\": \"\u201cTechnical details We run each experiment using 100\u201d\nDelete technical details? Is it meant to be a header?\n\n\u201cWe train for a fixed number of trajectories per population 9.\u201d\nWhat is the 9 for? Is this a typo?\"}", "{\"title\": \"Review Updates\", \"comment\": \"Please let us know if we can provide any additional information to aid in reevaluating our work -- we have revised the paper to address all of your major concerns. We believe the paper is much improved, and both other reviewers have updated their evaluation.\"}", "{\"title\": \"Major revisions addressing all reviewer concerns\", \"comment\": \"Thank you for your detailed and thorough commentary on our work. We appreciate the amount of effort you put into your review. You are absolutely correct: the initial writing needed a lot of work, and author proximity to the game genre behind the environment certainly played a part in the obfuscation. We have undergone a major rewrite and restructuring of the text and figures.\n\nMuch of the initial confusion among all reviewers surrounds minor implementation details that we scattered throughout the initial submission, such as melee/ranged/mage, attack ranges/damage values, and minor spawning mechanic details. These have been pruned, aggregated, and placed appropriately in a Framework appendix. This has also cut the paper to eight pages.\n\nThe diagrams also needed improvement. We have redone every figure in the main paper to at least include better captions, labeling, and color bars for the quantities being measured, and will continue to iterate on the visualizations in future drafts of the paper. It is amusing that you mention Gym usage snippets because our environment is callable through an almost identical API. We have included additional details in the framework appendix, but would provide full documentation with the open source release pending publication.\n\nThank you for clarifying a potential source of confusion surrounding learning. Our experiments are consistent with your characterization of policy gradient RL, though our framework supports both definitions. We have clarified this in the writing.\n\nYour final suggestion is a phenomenal idea! Open-ended task learning is precisely the long-term objective of our work. We have tried to abstain from discussion of philosophy in the text of our paper because the base game environment does not yet provide a sufficiently general setting for varied and compelling problem solving. However, as our environment grows to better match the scale of human MMOs, we predict that agent behavior will diverge to fill a large space of reasonable but perhaps not entirely optimal strategies, as occurs among human players of MMOs.\"}", "{\"title\": \"Major revisions addressing all reviewer concerns\", \"comment\": [\"Thank you for your commentary\u2014your recommendations surrounding comparison to related work have allowed us to better define the space that our work fills. The original writing was admittedly confusing; we have undergone a major restructuring and rewording of the paper in order to address your criticisms and those of the other reviewers:\", \"The framework assumes only that agents receive information about their local environment and output decisions. Hardcoded and algorithmic agents are supported in the framework. 
While our work is closer to a platform than a fixed task, random baseline agents live for 10-20 timesteps on average.\", \"The main difference from the four works you mention is that our environment is modeled after the real game genre of MMOs. This combines much of the interpretability of earlier game environments, such as Atari, with forms of large scale multiagent interaction unavailable in prior game environments. We have added additional citations.\", \"We do not argue that massively multiplayer interaction is needed in all settings; the purpose of this work is precisely to investigate the types of behaviors that do emerge in such a setting.\", \"All figures have been recaptioned and augmented with scales for the quantities measured. These were clearly confusing given that it was not obvious that the weird dot patterns you mention are actually the agents.\", \"Input and action spaces of the environment are now described in full detail.\", \"\u201cTuning\u201d in our setting is simply game development. Our goal was to produce an initial set of design choices that support interesting multiagent interactions and avoid trivial behavior.\", \"All classic game terms such as \u201cspawn cap\u201d and \u201cserver merge\u201d are toned down and clearly defined when used.\", \"Multiagent competition as a curriculum magnifier is a driving paradigm behind our platform; the argument for this is now detailed in Discussion.\", \"A better description of MMOs in the context of this work has been added.\"]}", "{\"title\": \"Major revisions addressing all reviewer concerns\", \"comment\": [\"Thank you for your detailed list of revision suggestions. Agreed\u2014the original writing was a mess! Many of the details you were looking for were present, but scattered throughout the paper in a less than comprehensible manner. Here is a non-exhaustive summary of improvements as per your comments:\", \"Writing tightened to 8 pages. Many details important to documentation but not core to the message have been moved to a Framework appendix.\", \"Clarified exact training details\", \"Simplified and added relevant scales to all figures\", \"Moved all details of ongoing work to discussion and additional insights in order to isolate them from matured experiments.\"]}", "{\"title\": \"Interesting ideas and results but lacking clarity and focus.\", \"review\": [\"This paper proposes a multi-agent life simulator as an environment for RL. The environment is procedurally generated, with possibly many different game dynamics including foraging and combat. They train deep RL agents in this environment and show various emergent behaviors such as exploration and niche development. Additionally, they propose a tournament competition scheme to evaluate different populations of agents against each other.\", \"This paper has a number of interesting findings, but overall lacks polish and coherence. The writing is verbose and informal in many places. There are many details not included -- for example specifics on combat targeting, how RL agents are trained, and information on how to parse figures (what do colors mean?).\", \"Pro\", \"Interesting idea, and demonstration of a system. From the intro, I believe an environment such as this will be fruitful to study.\", \"Results seem preliminary but are interesting. 
In particular, the finding that agents generalize and thus perform better on tournament selection when trained in larger populations is intriguing, as are the exploration results with population count!\", \"Reproducibility: authors claim they will release environment simulator code.\", \"Con\", \"The paper can be considerably tightened. It is currently quite long (9.5 pages vs. the suggested 8 pages). There are also a lot of details included that don't seem core to the message. For example -- the multiple types of API / IPC communication, much of the RPGs section.\", \"Some areas of writing could be improved, either too casual, or sloppy. For example -- various names are not capitalized in the bibliography.\", \"Examples of imprecise / casual writing: \\\"good performance without discounting, but training was less stable.\\\" What does \\\"less stable\\\" actually mean? \\\"postprocess trajectories using a discount factor\\\" this is part of the REINFORCE algorithm -- postprocessing, to me, implies modifying the observations. The term \\\"numerical collapse\\\" is not a term I am aware of.\", \"It is unclear what is shown in many of the figures. What are the colors in figures 8,9,10 for example?\", \"Lots of details and ongoing work put in, which distracts from a clear message. For example, why was \\\"entity targeting\\\" included? It doesn't appear to be described and the results shown in figure 10 are confusing. I would consider stepping back, and figuring out what one thing you want to show the reader, then drop all detail not around that point.\", \"Lacking a conclusion of some sort. Ideally there would be something to pull the whole paper together.\", \"use of terminology -- unclear why neural mmo is the name of this environment. This is not an MMO, nor does the environment have anything \\\"neural\\\" related -- one can train reinforcement learning agents without neural network function approximators on it for example. I would consider renaming.\", \"In its current form, I do not recommend accepting this paper but I do encourage the authors to continue working on it to both tighten the writing and presentation as well as continue to show interesting results via RL experiments.\"], \"edit\": \"See below, raised score from 4 --> 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Official Review: a multiplayer environment. Lacks comparison with related settings, many arbitrary choices, needs rewriting.\", \"review\": \"The paper presents a new evaluation platform based on massive multiplayer games, allowing for a huge number of neural agents in persistent environments.\nThe justification evolves from MMOs as a source of complex behaviours, arguing that these settings have some characteristics of life on earth, being a \u201ccompetitive game of life\u201d. However, there are many combinations with completely different insights and implications. 
The key characteristics for the setting in this paper seem to be:\n1.\tCognitive evolution with learning, rather than physical or just genetic evolution (all bodies and architectures are equal)\n2.\tChanging environments (tasks) between parameter updates\n3.\tSurvival-oriented rewards\nAnd for some experiments some agents share policy parameters to simulate \u201cspecies\u201d.\nFrom the introduction and the rest of the paper, it\u2019s not clear whether the same platform can be used with agents that are not neural, or even agents that are hardcoded (for the sake of diversity or to analyse specific behaviours). This is an important issue, as other platforms allow for the definition of some baseline agents, including random agents, agents with simple policies, etc.\nThe background and related work section covers MMOs and artificial life, but has some important omissions, especially those ideas in the recent literature that are closest to this proposal.\nFirst, why can\u2019t Yang et al., 2018 be extended with further tasks? \nSecond, conceptually, the whole setting is very similar to the Darwin-Wallace setting proposed in Hernandez-Orallo et al. 2011:\n@inproceedings{hernandez2011more,\n title={On more realistic environment distributions for defining, evaluating and developing intelligence},\n author={Hern{\'a}ndez-Orallo, Jos{\'e} and Dowe, David L and Espa{\~n}a-Cubillo, Sergio and Hern{\'a}ndez-Lloreda, M Victoria and Insa-Cabrera, Javier},\n booktitle={International Conference on Artificial General Intelligence},\n pages={82--91},\n year={2011},\n organization={Springer}\n}\n\nThe three characteristics mentioned before are the key elements of this evaluation setting, which changes environments between generations. Also, the setting is presented in the context of evaluation and experimentation, as is this manuscript.\n\nThird, regarding multi-agent evaluation settings, Marlo over Minecraft (Malmo) covers this niche as well: https://www.degruyter.com/downloadpdf/j/jagi.2018.9.issue-1/jagi-2018-0002/jagi-2018-0002.pdf\nFigures are not very helpful. Especially the captions do not really explain what we see in the figures. For instance, Figure 2 doesn\u2019t show much. Figure 3 left and middle show some weird dots and patterns, but they are not explained. Also, the one on the right tries to show \u201cghosting\u201d, but colours and their meaning are not explained. Similarly, it is not clear what the agents see and process. I assume it is a local grid such as the one seen in Figure 4. But this is quite an aerial view, and other grid options might do the job as well.\nSimilarly, some actions are mentioned (it seems that N, S, E, W and \u201cPass\u201d? plus some attack options, but they are not described). In the end, I understand many choices have to be made for any evaluation setting, but many choices are very arbitrary (end of section 3 and especially experiments) and there is a lot of tuning, so it\u2019s unclear whether some of the observations happen just in a particular combination of choices, or are more general. The authors end up with many inconclusive observations and doubts (\u201cperhaps\u201d) about small changes, at the end of section 5.\nOther things such as the \u201cspawn cap\u201d and the \u201cserver merge\u201d are poorly explained, lacking clear definitions and proper justification of their role. 
Similarly, I\u2019m not sure whether reproduction takes place, and if so, whether weights are inherited or reinitialised. Something related is said about species.\nI found the statement about multiagent competition being a curriculum magnifier, not a curriculum itself, very interesting, but is this really shown in the paper or elsewhere?\nIn general, I miss many details and justifications for the whole architecture and mechanism of this neural MMO.\", \"pros\": [\"Designed to be scalable\", \"Goes in the right direction of benchmarks that can capture generally variable (social) behaviour.\"], \"cons\": [\"Poor comparison with existing platforms and similar ideas.\", \"Too many arbitrary decisions for the setting and the experiments to make it work or show complex behaviours\", \"The paper needs extensive rewriting, clarifying many details, with the figures really helping the understanding.\"], \"typos_and_minor_things\": [\"\u201cSusan Zhang 2018\u201d is named a couple of times, but the reference is missing. Also, it is quite unusual to use the given name for this researcher while this is not done for any other of the references.\", \"\u201cas show in Figure 2\u201d -> shown\", \"\u201cimpassible\u201d -> \u201cimpassable\u201d\", \"****************************\", \"I've read the new comments from the authors and the new version of the paper. I think that the paper has improved significantly in terms of presentation and coverage of related work. I still see that the contribution is somewhat limited, but I'm updating the score to better account for this new version of the paper.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Revised Review (Score revised up to 7, from 5)\", \"review\": \"Revised Review:\n\nThe authors of this work have taken my concerns, and the concerns of other reviewers, and revised their paper during the rebuttal period. They have increased the quality of the writing / clarity, restructured the presentation (i.e. put many details in the Appendix section), and committed to open-sourcing the platform post publication. For these reasons I believe this work is now at a state that should be published at ICLR, and I revised my score from 5 to 7. I hope other reviewers can reread the work and post their updated comments.\n\nI'm excited about the work, because it incorporates good ideas from A-Life / evolution / open-endedness communities, to introduce new paradigms and new ways of thinking to the RL community. I look forward to using this environment in my own research going forward, regardless of whether this work gets accepted or not. Good luck!\", \"minor_comment\": \"On page 4, in the section 5 Experiments, I think \"Technical details\" should be in bold font before the sentence \"We run each experiment using 100 worlds.\" so it is distinguished from being part of that sentence.\", \"original_review\": \"The authors present a new game environment inspired by MMORPGs. The environment supports a \"massive\" number of agents, each having a different neural net brain (*) and some foraging and combat skills. They use distributed RL to train the policies (using REINFORCE) and over time observe the population dynamics of these artificial life agents as they interact with each other, where the only reward is survival. 
There are many interesting insights, such as looking at how multi-agent cooperative (and deceptive) strategies emerge, and how some agents with different niche skills co-evolve with agents with other niche skills. They also plan to open source the platform and I have high hopes that this will be a fantastic research environment. While I'm very optimistic about this work and direction, there are issues with this particular paper, and I feel it is not ready for publication in its current form. While I have no doubt that the software project will be great, as a reviewer I'm evaluating this particular paper, and I want to highlight flaws about the paper and what can be done to fix it during the rebuttal/review period.\", \"my_recommendations_to_improve_the_article\": \"(1) Writing - I really enjoyed this work, but frankly, the writing is horrible. It took me days of effort to decipher every paragraph and understand all the terms and what is going on. The article reads like it is written by the person who programmed the game, and played MMORPGs almost every day of his childhood and adult life, so someone who is not reading the article thru the lens of the author might have an incredibly tough time digesting the content. For instance, there are sentences like \"It adds melee, ranged, and magic-based combat\"... \"Melee, ranged, and magic combat have maximum Manhattan distance of effect of 1, 2, and 3 cells respectively. They do 10, 2, and 1 damage respectively\"... \"This prevents uninteresting 'spawn killing' and is a common technique in human games\". These are only a small selection of samples. There is also terminology like \"#ent and #pop\" which I feel should be replaced by $N_{ent}$ and $N_{pop}$ for a paper. In contrast, older works related to population-based RL training like [2], or RL in games like [3], are examples of clear and understandable writing. I highly recommend you give the draft to someone outside of your team, who is sufficiently isolated from this project (or perhaps to a professional writer if your lab has one), to go over each paragraph, and make the writing more clear. This would benefit the work in the long term as people refer back to the paper when they run your code.\n\n(2) Diagrams - While the diagrams look interesting, IMO they are poorly made. When I look at Figures 1, 4, and 9, it is really difficult to understand what is going on. I recommend redoing the diagrams, perhaps getting some inspiration from distill.pub or OpenAI blog posts. There are things that are not clear, like what the inputs are into each agent, and how the training works. I recommend having some pseudocode snippets (like the Gym framework) to explain parts of the overall picture in more detail as figures.\n\nGiven a work of this magnitude, I'm personally okay that they went over 8 pages, as long as it is properly used for clarity.\", \"discussion\": \"Concepts from Artificial Life and Evolution have been introduced in this work. There is some confusion between what is \"learning\" and what has been \"evolved\" in your setup. Some readers coming from the evolution, or biology fields (who I bet will find your paper interesting to read and experiment with) might interpret \"learning\" to be weight changes during a lifetime, while \"evolution\" would be changes to the weight parameters from one generation to the next, but I think in policy-gradient RL, \"learning\" means weight changes after an agent dies and is reborn. 
You should consider clarifying in the introduction the definition of learning, and whether it is inter-life or intra-life.\n\nYou cited some of Stanley's talks on open-endedness, but I wonder if you considered their work [1] where they proposed that having a minimal criterion condition might encourage diversity of solutions. For instance, perhaps in your environment, an agent doesn't have to be the very best, but only manage to survive, to move on to the next generation, which might cause very interesting multi-agent population dynamics. A parallel to modern life is that people (at least those in wealthy nations) live with such a good social safety net that people don't really have to be the best \"agent\" to reproduce and survive, and this might explain the large diverse cultures and ideas we end up with as a human species, compared to other animals (which the current game is probably a suitable model of). An experiment where only the very weakest agents die, leaving agents with mediocre foraging and combat skills to live on (and pursue their own interests, whatever they may be), would be super interesting, and I encourage you to explore these ideas of open-endedness.\", \"bugs\": \"In the appendix, the citation for OpenAI Five needs fixing.\n\nCurrently it pains me that I can only assign a score of 5 to this work (NOTE: this has since been revised upwards to 7 upon reading the revision after the rebuttal period), since I don't think the current writing is up to standards. In my opinion, it deserves a score of 7-8. If you work on points (1) and (2) and submit a revised draft with much better writing, visualization, and figures to explain the work, I'll happily revise my score and improve it by 1-3 points depending on how much improvement is made.\n\n[1] Brant and Stanley. \"Minimal Criterion Coevolution: A New Approach to Open-Ended Search\" (GECCO 2017) http://eplex.cs.ucf.edu/papers/brant_gecco17.pdf\n[2] https://arxiv.org/abs/1703.03864\n[3] https://arxiv.org/abs/1804.03720\n\n(*) well, sort of, due to compute limits they are clustered to some extent into species, so agents within a species have identical brains, unlike the real world.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
BJgbzhC5Ym
NECST: Neural Joint Source-Channel Coding
[ "Kristy Choi", "Kedar Tatwawadi", "Tsachy Weissman", "Stefano Ermon" ]
For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes. However, this decomposition can fall short in the finite bit-length regime, as it requires non-trivial tuning of hand-crafted codes and assumes infinite computational power for decoding. In this work, we propose Neural Error Correcting and Source Trimming (NECST) codes to jointly learn the encoding and decoding processes in an end-to-end fashion. By adding noise into the latent codes to simulate the channel during training, we learn to both compress and error-correct given a fixed bit-length and computational budget. We obtain codes that are not only competitive against several capacity-approaching channel codes, but also learn useful robust representations of the data for downstream tasks such as classification. Finally, we learn an extremely fast neural decoder, yielding almost an order of magnitude in speedup compared to standard decoding methods based on iterative belief propagation.
[ "joint source-channel coding", "deep generative models", "unsupervised learning" ]
https://openreview.net/pdf?id=BJgbzhC5Ym
https://openreview.net/forum?id=BJgbzhC5Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJxBhpabe4", "ryedu97hyN", "S1eRzd3K1E", "ByxKs1laAX", "BJg-hw220Q", "BkgxULTlR7", "BJlknyukRX", "rkxojKYRTQ", "H1xHRxl86Q", "S1xMN4UWa7", "HkgDn_1T3X", "SklUDGzq3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1544834477006, 1544465008271, 1544304662418, 1543466913247, 1543452584887, 1542669896321, 1542582183061, 1542523298566, 1541959885253, 1541657642015, 1541367983198, 1541182045562 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1236/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1236/Authors" ], [ "ICLR.cc/2019/Conference/Paper1236/Authors" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1236/Authors" ], [ "ICLR.cc/2019/Conference/Paper1236/Authors" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1236/Authors" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1236/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a principled solution to the problem of joint source-channel coding. The reviewers find the perspectives put forward in the paper refreshing and that the paper is well written. The background and motivation is explained really well.\\n\\nHowever, reviewers found the paper limited in terms of modeling choices and evaluation methodology. One major flaw is that the experiments are limited to unrealistic datasets, and does not evaluate the method on a realistic benchmarks. It is also questioned whether the error-correcting aspect is practically relevant.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Agreed\", \"comment\": \"Sorry for the delay \\u2013\\n\\nThank you, qualifying the statement with the datasets like you suggested is acceptable.\"}", "{\"title\": \"follow-up?\", \"comment\": \"Dear AnonReviewer2,\\n\\nWe hope that our response clarified some points of concern. We are happy to go into further detail -- please let us know if there are additional points that we can address to further improve our paper.\"}", "{\"title\": \"proposed modification of statement\", \"comment\": \"Thank you for the feedback. We agree that we have demonstrated our results on MNIST, CelebA, Omniglot, and SVHN, which are datasets where generative models such as variational autoencoders have been shown to work well. We are happy to revise the sentence:\\n\\n\\u201cWe showed that the model: (1) is competitive against a combination of industry standard compression and error-correcting codes...\\u201d\\nto\\n\\u201cWe showed that the model: (1) is competitive against a combination of JPEG and LDPC codes on Omniglot and CelebA...\\u201d\\n\\nin the final version of the paper. Alternatively, we are open to suggestions for a more appropriate wording.\"}", "{\"title\": \"Issues remain\", \"comment\": \"Thank you for providing these justifications, which I find mostly acceptable.\\n\\nUnfortunately, I find myself still dissatisfied with the experimental validation.\\n\\nThe fact that Kodak can't be used due to larger image resolution is irrelevant here - I brought up Kodak as an example. 
The criticism was that Omniglot and CelebA are not generic images. If resolution is a problem, you could use CIFAR, for example, downsample Kodak, or use any other dataset that is not restricted to a particular class of images (such as faces or characters).\n\nI do not think the paper should be published with the statement that it is \"competitive with industry standard compression\". The provided data does not support this conclusion, because the comparison is not fair.\n\nTo be clear, my current rating is contingent on either removing this statement in the camera-ready version, or providing better evaluation data.\"}", "{\"title\": \"thank you for the reply\", \"comment\": \"Dear authors ~ thank you for the reply ~ and for agreeing to make the source code available.\n\nWe are pretty much in synch here and now, so I stand by my ACCEPT rating.\n\nRegards!\n\n.<AnonReviewer3>\"}", "{\"title\": \"justifications and modifications\", \"comment\": \"We thank the reviewer for their insightful comments and feedback! We address the main points below:\n\n1. Novelty of MI maximization: We agree that there has been a lot of work on (variational) MI maximization and R-D optimization, and we have added several relevant citations (Hinton & van Camp 1993, Honkela & Valpola 2004, Alemi et al. 2017, Chen et al. 2016, Ball\u00e9 et al. 2016) into our paper (Section 4.1, paragraph 2). However, to the best of our knowledge none of these works address the problem of JSCC, and we view this paper as building on the existing body of work on image transmission.\n\n2. Niche setup: We agree that allowing the model to handle variable-rate image compression is important, and plan for this extension in future work. For this paper, we decided to first evaluate the feasibility of using fixed-rate codes and reported our findings, as we think (1) our observations and (2) this framework\u2019s connection with generative models (which are \u201cfixed-rate\u201d) could be valuable to the community. \nRegarding the setup\u2019s relevance, we would like to note that even in wireless image/video transmission, JSCC is an important and active area of research in the systems/IEEE media communities. While there was a lot of work in the early 2000s [Cai et al. 2000, Mohr et al. 2000, Wu et al. 2005, Bi et al. 2014], JSCC was considered to be a very difficult problem. It was not until recently, with the rise of deep learning, that JSCC has garnered interest again in text/image modeling [Rao et al. 2018, Bourtsoulatze et al. 2018]. NECST serves as a way to model this entire communication process; specifically, our model\u2019s implicit addition of redundancy into the latent codes is analogous to the forward error-correcting (FEC) techniques typically applied across/within packets in packet-switched networks [Zhai and Katsaggelos 2007]. Although more work and research is needed to turn these ideas into a deployable system, our work demonstrates the feasibility of an end-to-end neural network approach. \n\n3. Experimental Details: \n3a) Why JPEG: We found that JPEG performs quite well for small images, as most of the improvements in JPEG2000 and WebP come from larger block sizes (larger images). Additionally, as WebP only supports RGB images (https://developers.google.com/speed/webp/faq), we were unable to use it for grayscale images, as it led to inferior compression. Thus we decided to use JPEG across all our image datasets. 
As a final note, we exclude the bytes that are shared among an image dataset when comparing to the number of bits used by NECST to make the comparison as fair as possible.\\n3b) Unfair JPEG vs. NECST compression: We would like to highlight that NECST\\u2019s ability to leverage statistical structure in images is an advantage when training data is available. Additionally, we discard a significant chunk of bytes of the compressed JPEG image that are shared across a dataset when comparing the image sizes, to try to make the comparison as fair as possible.\\nWith regards to the Kodak dataset, we are unable to validate our model on it because it would require a new dataset comprised of images with the same dimension, drawn from the same distribution as that of the Kodak images. This would also require a new architectural adjustment to NECST, as our framework is unable to handle the images of very high resolution. We plan to explore such improvements upon our model for future work. We have also modified our sentence in the Section 8 to make it more clear that our model is \\u201ccompetitive against a combination of industry standard compression and error-correcting codes.\\u201d\\n3c) Figure 1: We computed an average rate per individual image. As per Appendix D.1, we obtain the target distortion level by using a fixed bit-length budget with NECST, then use the JPEG compressor to encode the image at the desired level of distortion. We then use the resulting size of the image (ignoring headers, which we call f(d)) to obtain an estimate m = f(d)/C for the number of bits. After running this procedure over all images, we obtain an average at the very end.\\n\\n4. Sections 5.3 / 5.4: \\n4a) VAE + LDPC comparison: We chose to double the length of the VAE representation such that we could work with a simple rate-1/2 LDPC code. In our initial experiments, where we fixed the number of LDPC bits (e.g. 200) and varied the number of bits used for the VAE, we found that there was not a significant difference in the results obtained.\\n4b) Runtime experiment: The BP implementation we use from Radford Neal is such that once the tentative decoding (based on bit-by-bit probabilities) is a valid codeword, the algorithm halts. Therefore, we implicitly allow the LDPC decoder to terminate as early as it wants. In the original LDPC package documentation, there were several examples provided where the LDPC decoder was run for a maximum of 200-250 iterations of BP. In initial experiments, we found that for our setup roughly 50 iterations as the max were sufficient.\"}", "{\"title\": \"justifications and clarifications\", \"comment\": \"We thank the reviewer for their insightful comments and feedback regarding the paper! We address the main points of the reviewer\\u2019s concern below:\\n\\n1. No source code: We agree with the reviewer that providing code will facilitate future research, and will make the source code publicly available.\\n\\n2. More complex channels: We agree with the reviewer that simulating different channels (e.g. fading/erasure/correlated error sequences) would certainly be very interesting, but we note that the BSC is a more difficult channel to work with than the erasure channel. It\\u2019d be especially interesting to see the effect on the \\u201cfeatures\\u201d learned by the model under different noise models. \\n\\n3. Choice of VIMCO: We agree with the reviewer in that experiments comparing VIMCO to other methods of training discrete latents would be useful. 
Our decision to use VIMCO was motivated by earlier experiments using Gumbel-Softmax. As mentioned in our reply to Reviewer 3, we found that using a continuous relaxation of the discrete latent variables allowed the latent codes to capture more information than should be necessary during training. Then at validation/test time, the latent codes would be forced to be discrete. This led to worse reconstructions and samples overall.\\n\\n4. Stability of hyperparameters: We did not have too much difficulty across different hyperparameters. The biggest issue we ran into when training on more complex datasets (e.g. celebA) was that we had to decrease the learning rate by an order of magnitude as compared to a simpler dataset. The more complex datasets such as celebA required more complex architectures as well -- the quality of reconstructions improved drastically when we used a convolutional architecture as opposed to an MLP.\\n\\n5. Fixed code length limitation: The NECST architecture requires the user to pre-specify the allotted bit-length budget N for learning the latent codes. Therefore, the model is only able to encode images in a particular training set to that fixed N, and cannot encode to codes that are shorter or longer than N. One could imagine that in the case of entropy coding, it may be more efficient to encode frequently occurring images using a shorter-length code, while encoding less common images using longer codes. In its current form, NECST is not able to adaptively learn different code lengths over a given dataset. We plan to address this limitation in follow up work.\"}", "{\"title\": \"Interesting, yet limited paper\", \"review\": [\"The authors set out to tackle an old problem (joint source-channel coding) with a principled approach and a fresh perspective. However, I find the paper quite limited both in terms of modeling choices as well as evaluation methodology. Specifically:\", \"The mutual information maximization approach is appropriate, but hardly novel. Besides being highly related to ELBO maximization, there have been several recent papers on rate-distortion optimization, as well as on deriving variational bounds for MI (see, for instance, Alemi et al.).\", \"The experimental setup is somewhat niche: in the context of image compression, both the fixed-rate constraint as well as the use of a binary symmetric channel are unusual. The vast majority of image compression methods are variable-rate, and for good reason: generic images tend to carry vastly different amounts of self-information, such that a fixed-rate code is almost guaranteed to achieve suboptimal *average* performance in terms of rate-distortion. Additionally, the vast majority of images today are sent over channels that already perform error correction, such as packet-switched networks (e.g., the Internet) or digital storage media, so that it's unclear why this particular case of joint source-channel coding would be practically relevant.\", \"I find the claim that the model is \\\"competitive against industry standard compression\\\" hardly justified based on the presented data. First, JPEG is now almost 40 years old. Since its inception, newer industry standards have exceeded it multiple times over in terms of rate-distortion performance. Second, JPEG was designed as a compression method for generic images. Comparing its performance on Omniglot and CelebA datasets is unfair, because the presented model can be trained to exploit special probabilistic structure in these datasets, while JPEG cannot. 
A widely used and accessible dataset better suited to compare against exisiting image compression methods would be the Kodak set, for example. And third, as explained above, JPEG is a variable-rate compression algorithm. How exactly were the number of bits required for JPEG to achieve the same distortion as NECST computed? To produce the plot in Figure 1, did the authors first compute an average rate for each average distortion, or was the computation done for each individual image, and then averaged to produce Figure 1 in a second step? This distinction could make a big difference.\", \"Regarding Sections 5.3 and 5.4: Could the authors please justify why they just double the length of the VAE representation? Wouldn't it be fairer towards LDPC to compare NECST to a VAE+LDPC code with various amounts of redundancy? Similarly, could the authors please justify comparing runtime only against a fixed 50 iterations of LDPC, rather than comparing against a range of possible values to make sure they are giving LDPC the benefit of the doubt?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"clarification of the paper's main contribution and runtime experiment\", \"comment\": \"We thank the reviewer for their insightful comments and feedback regarding the paper! We address the main points of the reviewer\\u2019s concern below:\\n\\n1. Questionable novelty of discrete latent variable modeling: The novelty in this paper is our approach in addressing the joint-source channel coding (JSC) problem. The fact that NECST can be seen as a generative model with discrete latent variables is a novel observation from this paper. Precisely because we are **not in the traditional generative modeling setting**, our model : (1) includes latent additive noise (a discrete bit-flipping procedure), which to the best of our knowledge has not been done before, and (2) does not use a prior distribution as in a VAE-like setup [refs 1-3] while also not relying on additional discretization techniques [ref 4]. NECST is not comparable with [ref 5] because we want hard assignments of our data to the latent codes, as opposed to a mixture of codes. We also could not find any mention of adding noise to discrete latents in [ref 5] -- could the reviewer clarify this point?\\n \\nAlthough we build on prior work on discrete latent variable models, our (a) motivating application and (b) resulting training objectives are fundamentally different. Refs 1-5 do not address the problem of JSC. For example, it is unclear as to how these methods would address the problem of adding redundancy (when |z| is very large) to achieve robustness. For example, when we train a model on MNIST designed specifically for representation learning (discrete VAE) and evaluate for JSC (channel noise = 0.2), we find that the distortion increases by a factor of 3x.\\n\\n2. Weak experimental section:\\n2a) Minor improvements against VAE + LDPC baseline: We would like to highlight that (1) we do see small performance improvements over VAE + LDPC in almost all cases, and (2) our method is orders of magnitude faster as we do not need to run multiple iterations of belief propagation for decoding. 
Also, LDPC is not a trivial baseline - it is an industry standard optimized over decades.\\n2b) Classification on noisy + noiseless MNIST: The purpose of the downstream classification experiment was to demonstrate that the latent codes, when trained with simulated channel noise, become more robust (\\u201cmore useful\\u201d) for downstream tasks. This is in itself a novel observation: one would not normally think of JSC as a feature extractor. Specifically, when the latent codes are corrupted by the channel, the codes will be \\u201cbetter separated\\u201d in latent space so that the model will still be able to reconstruct accurately despite the added noise. Thus NECST can also be thought of as a \\u201cdenoising autoencoder\\u201d-style method for learning more robust latent features with the added twist that the noise is injected into the latent space as opposed to the data space.\\n\\n3. Runtime computation: The reviewer pointed out that batching the forward pass for the NECST decoder would lead to easy gains in speed. This is indeed the setup that we used, as we believe that NECST\\u2019s ability to allow for batching in the encoding/decoding process serves as an advantage of our model. But for a more fair comparison, we have re-run the timing experiment without batching the inputs (decoding one codeword at a time). We find that NECST still outperforms traditional LDPC decoding. Specifically, CPU is *slightly* faster than GPU (same order of magnitude), while NECST decoding still remains an order of magnitude faster than LDPC without batching.\\n\\nMNIST\", \"channel_noise\": \"0.0\\t\\t0.1\\t\\t0.2\\t\\t0.3\\t 0.4\\t\\t0.5\", \"ldpc\": \"['6.80E-05s', '1.09E-03s', '1.24E-03s', '1.21E-03s', '1.16E-03s', '7.28E-04s']\\nNECST (CPU):\\t ['3.57E-04s', '3.87E-04s', '3.72E-04s', '3.63E-04s', '3.69E-04s', '3.67E-04s']\\n\\n4. Choice of optimizing a lower bound on I(X,Y): We note that there seems to be a misunderstanding: as we are not in a generative modeling setup, we need a different objective from the standard ELBO as in a VAE. In JSC, we want to maximize the amount of error-free information that can be transmitted over our noisy channel. This is by definition (see MacKay: http://www.inference.org.uk/itprnn/book.html) the channel capacity, or the maximum mutual information I(X,Y) between our data X and noisy codes Y. Optimizing this lower bound on I(X,Y) (as computing the true objective is intractable) also has the nice and novel interpretation for NECST that allows us to view the framework from a generative modeling perspective.\"}", "{\"title\": \"Review\", \"review\": \"Summary of paper: For the finite-bit case of the noisy communication channel model, it is suboptimal to optimize source coding (compression of input) and error correction (fault tolerance for inherent noise in the channel) separately. The authors propose a neural network model (NECST) that is very similar to the standard VAE, except using binary latents with corruption (e.g., random bit flipping in the style of a binary symmetric channel). They use VIMCO to optimize through the discrete units. In their experiments, they show that they can outperform a JPEG+ideal channel code model, but perform similarly to a VAE+LDPC (LDPC is a classic error correcting code) setup.\\n\\nFirst of all, the paper is quite well written and easily readable. 
Great work on explaining the motivation and the model -- the writing is clear and explains background knowledge extremely well.\\n\\nThe main contribution in the model is the use of discrete binary latents, instead of the standard continuous latents in a VAE. However, I am uncertain about the novelty of this contribution. There have been numerous works examining discrete latent variables in autoencoders (a random sampling: [1, 2, 3, 4]) and beyond. Furthermore, the method of training through discrete latents is also standard (VIMCO, though one can also imagine using more recent advances like REBAR or RELAX). The only difference would be the addition of noise to the discrete. I would be curious to see how that compares to recent works that have also added noise to discrete latents [5].\\n\\nThus, it strikes me that the main contribution of this work would be in comparing against the current best techniques for coding. However, the experiments section is weak, and does not provide significant evidence that the NECST model is better than the alternatives. NECST outperforms JPEG+ideal channel coding, but doesn't do much better than a VAE+LDPC baseline. This suggests that most of the gains comes from the encoder (source coding) model q(\\\\hat{y} | x), instead of the joint training of source coding and error correcting code. It is not surprising that using a neural network to generate codes would provide significant gains. It's not clear that error correcting code aspect (noise in the latents) is particularly important.\\n\\nFurthermore, in the classification results, the MLP model trained on the discrete codes gets 93% accuracy on noiseless MNIST inputs. You can easily get this accuracy by training logistic regression directly on the pixels. Despite what the authors write, this result suggests that the codes are not very useful for downstream learning. Furthermore, it is unclear why adding random noise to the inputs would significantly improve some of the weaker classifiers. The only reason I can think of is data augmentation, but this has nothing to do with the NECST model.\\n\\nIn conclusion, this is a well written paper, but the novelty is not apparent and the experimental results are weak, and so I am not convinced this is suitable for ICLR.\", \"additional_questions\": \"* How is the runtime computed? Specifically, for NECST, do you batch the data and then divide the forward pass time by the batch size? If this is how runtime is computed, it's not surprising that NECST does better, given that batching is cheap with modern hardware. If the actual forward pass time for a single example is cheaper than that of LDPC's belief propagation, then that would be quite promising.\\n* The authors state that VAEs optimize a lower bound on the marginal log-likelihood p(X), whereas NECST optimizes a lower bound on the mutual information I(X, Y), where Y is the noised code. The authors however do not discuss why one should optimize for mutual information compared to marginal log-likelihood. 
What are the advantages and disadvantages between the two?\\n\\n[1] Semi-Supervised Learning with Deep Generative Models (https://arxiv.org/abs/1406.5298)\\n[2] Discrete Variational Autoencoders (https://arxiv.org/abs/1609.02200) \\n[3] Neural Discrete Representation Learning (https://arxiv.org/abs/1711.00937)\\n[4] Discrete Autoencoders for Sequence Models (https://arxiv.org/abs/1801.09797)\\n[5] Theory and Experiments on Vector Quantized Autoencoders (https://arxiv.org/pdf/1805.11063.pdf)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper, well written and well motivated, good results, no-source code!\", \"review\": \"This interesting paper tackles the problem of joint source-channel coding, by means of learning.\\n\\nFrom 100kft heights, especially given the choice of VIMCO gradient estimates, this is effectively a \\\"let's embed a source-channel-decoder simulator and differentiate through it\\\", and find a solution that is better than source|channel factorized classic methods, or hand-tuned approaches.\\n\\nThe method and results are good. The authors also show some interesting results about the representations learned, about how decoded samples (images) change smoothly when the (discrete) embedding (the-codes) changes over deltas of hamming_d()=1bit. This is very good results IHMO. One limitation of this method is the fixed-code-length.\", \"jumping_straight_to_my_main_main_issue_with_this_paper\": \"no code was made available, at least not at this time.\\n\\nWhile the authors do provide an extensive appendix with hyper-parameter specs, usually in my experience when dealing with discrete / monte-carlo methods, it's usually rather hard to reproduce results. I really strongly advise the authors to provide fully reproducible code for this paper, to help further research on this topic.\\n\\nBesides that I have three technical comments / request regarding this paper:\\n\\n1// the choice of BSC channel - while this is the easiest most natural choice, and we should certainly have results on BSC, I am left wondering why the authors didn't try other more complex / more realistic channels? The authors only mention this as potential area of future research in the last sentence of the conclusions.\", \"there_are_several_reasons_for_this_comment\": \"first of all, it is well known that even classic joint source-channel coding methods do shine on complex channels, such fading/erasure channels and/or in general channels with correlated error sequences. Such channels are indeed key in modern wireless communications, and are easy to simulate. Given that more-complex channels could be introduced in the channel model p(y_hat|y) - it would not change the rest of the method - it would be particularly interesting to see what results this method achieve in these more complex environments.\\n\\n2// I would like to hear more about the choice of VIMCO. Understood the authors statement to \\\"preserve the hard discreteness\\\" ~ that said methods like Gumbel-SM and several others also referenced in the paper ~ have been used successfully to solve for propagating gradients through discrete units. This is where, in my opinion, experiments comparing VIMCO approximation results to at least one other method could allow to decide / validate the best architecture. \\n\\nThis is also because, in my previous experience, this type of networks with discrete units may be hard to train. 
I would like to hear from the authors about how stable the training was under different hyper-parameters, and perhaps see some convergence curves for the loss function(s).\\n\\n3// it's not 100% clear to me where the limitation of fixed code-length comes into play in the architecture. Could the authors please point this out clearly?\\n\\nThank you!\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
S1MeM2RcFm
BlackMarks: Black-box Multi-bit Watermarking for Deep Neural Networks
[ "Huili Chen", "Bita Darvish Rouhani", "Farinaz Koushanfar" ]
Deep Neural Networks (DNNs) are increasingly deployed in cloud servers and autonomous agents due to their superior performance. The deployed DNN is either leveraged in a white-box setting (model internals are publicly known) or a black-box setting (only model outputs are known) depending on the application. A practical concern in the rush to adopt DNNs is protecting the models against Intellectual Property (IP) infringement. We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario. BlackMarks takes the pre-trained unmarked model and the owner’s binary signature as inputs. The output is the corresponding marked model with specific keys that can be later used to trigger the embedded watermark. To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit ‘0’ and bit ‘1’. Given the owner’s watermark signature (a binary string), a set of key image and label pairs is designed using targeted adversarial attacks. The watermark (WM) is then encoded in the distribution of output activations of the DNN by fine-tuning the model with a WM-specific regularized loss. To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme. We perform a comprehensive evaluation of BlackMarks’ performance on the MNIST, CIFAR-10, and ImageNet datasets and corroborate its effectiveness and robustness. BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding overhead, as low as 2.054%.
[ "Digital Watermarking", "IP Protection", "Deep Neural Networks" ]
https://openreview.net/pdf?id=S1MeM2RcFm
https://openreview.net/forum?id=S1MeM2RcFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJxGDucSlV", "BJg3dt1yTX", "ryxrFSh62m", "r1gtuXeYh7" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545082970232, 1541499251594, 1541420413428, 1541108592855 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1234/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1234/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1234/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1234/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviews agree the paper is not ready for publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}", "{\"title\": \"Review\", \"review\": \"A method for multi-bit watermarking of neural networks in a black-box setting is proposed. In particular, the authors demonstrate that the predictions of existing models can carry a multi-bit string that can later be used to verify ownership.\\nExperiments on MNIST, CIFAR-10 and ImageNet are presented in addition to a robustness assessment w.r.t. different WM removal attacks.\\n\\nQuestions/Comments:\\n\\nRegarding the encoding scheme, a question that came up is whether one needs to perform clustering on the last layer before the softmax? In principle, this could be done at any point, right?\\n\\nAnother question is how the method scales with the key length. Did you experiment with large/small values of K (e.g., 100,200,...)? It would be interesting, e.g., to see a plot that shows key length vs. accuracy of the marked model, or, key\\nlength vs. detection success (or BER).\\n\\nApart from these comments, how does the proposed model compare to zero-bit WM schemes? I am missing a clear comparison to other, related work, as part of the experiments. While there might not exist other \\\"black-box multi-bit\\\"\\nschemes in the literature, one could still compare against non-multi-bit schemes. \\n\\nIn light of a missing comparison, my assessment is \\\"Marginally below acceptance threshold\\\", but I am willing to vote\\nthis up, given an appropriate response.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Some interesting ideas, but a better evaluation is needed to show the effectiveness of the method\", \"review\": \"summary:\\n\\nThe paper proposes an approach for model watermarking (i.e., watermarking a trained neural neteowrk). The watermark is a bit string, which is embedded in the model as part of a fine-tuning procedure, and can be decoded from the network from the model's specific predictions for a specific set of inputs (called keys) chosen during the fine-tuning step. The process generates a watermark when we can be confident that a model that didn't go through the exact same fine-tuning procedure gives significantly different predictions on the set of keys. The application scenario is when a company A wants to deploy a model for which A has IP ownership, and A wants to assess whether a competitor is (illegaly) re-using A's model. The approach presented in the paper works in the black-box setting, meaning that whether a model posesses the watermark can be assessed only by querying the model (i.e., without access to the internals of the model).\\n\\nThe overall approach closely follows Merrer et al. (2017), but extends this previous work to multi-bit watermarking. The similarity with Merrer et al. 
is that keys are generated with a procesdure to generate adversarial examples, watermarking is performed by specifically training the network to give the source label (i.e., the label of the image from which the adversarial example has been generated). The differences with Merrer et al. lie in the fact that each key encoded a specific bit (0 or 1), and the multi-bit watermark is encoded in the predictions for all keys (in case of a multi-class classifier, the labels are first partitionned into two clusters to map each class to either 0 or 1). In contrast, Merrer et al. focused on \\\"zero-bit\\\" watermarking, meaning that all keys together are only used to perform a test of whether the model has been watermarked (not encode the watermark). Another noticeable difference with Merrer et al. is in step 4 of the algorithm, in which several unmarked models are generated to select better key images.\", \"comments\": \"While overall the approach makes sense and most of the design decisions seem appropriate, many questions are only partly addressed. My main concerns are:\\n1- the watermarks are encoded in adversarial examples for which the trained model gives the \\\"true\\\" label (i.e., the watermark is embedded in adversarial examples on which the model is robust). The evaluation does not address the concerns of false alarms on models trained to be robust to adversarial examples. Previous work (e.g., Merrer et al.) study at least the effect of fine-tuning with adversarial examples..\\n\\n2- A watermark of length K is encoded in K images, and the test for watermarking is \\\"The owner can prove the authorship of the model if the BER is zero.\\\". This leaves little room to model manipulation. For instance, the competitor could randomize its predictions once in a while (typically output a random label for one out of K inputs), with very small decrease in accuracy and yet would have a non-negligible probability of having a non-zero BER.\", \"other_comments\": \"\", \"1__overhead_section\": \"in step 4 of the algorithm, there is a mention of \\\"construct T unmarked models\\\": why aren't they considered in the overhead? This seems to be an extremely significant part of the cost (the overall cost seems to be more T times the cost of building a single unmarked model rather than a few percent)\", \"2__step_2_page_4\": \"\\\"The intuition here is that we want to filter out the highly transferable WM keys\\\": I must have misunderstood something here. Why are highly transferable adversarial examples a problem? That would be the opposite: if we want the key to generate few false alarms (i.e., we do not want to claim ownership of a non-watermarked model), then we need the adversarial examples to \\\"transfer\\\" (i.e., be adversarial for non-watermarked models), since the watermarked model predicts the source class for the key. Merrer et al. (2017) on the contrary claim \\\" As such adversaries seem to generalize across models [...] , this frontier tweaking should resist model manipulation and yield only few false positives (wrong identification of non marked model).\\\", which means that transferability of adversarial examples is a fundamental assumption underlying the approach.\\n\\n3- under Eq. 1: \\\"Note that without the additional regularization loss (LWM), this retraining procedure resembles \\u2018adversarial training\\u2019 (Kurakin et al., 2016).\\\": I do not understand that sentence. 
Without L_{WM}, the loss is the usual classification loss (L_0), and has nothing to do with adversarial training.\\n\\n4- more generally, the contribution of the paper is on multi-bit watermarking, but there is no clear application scenario/experiment where the multi-bit is more useful than the zero-bit watermarking.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"Strengths:\\n\\nWell written paper, covers most of the relevant related work\\nTechnique is conceptually easy to understand (~ adversarial training)\", \"weaknesses\": \"Unclear set of desiderata properties for a watermarking technique\\nNo formal guarantees are verified, the mechanism is only tested\\nAttacks tested are not tailored to the technique proposed\", \"feedback_and_rebuttal_questions\": \"This submission is easy to read and follow, and motivates the problem of watermarking well in light of intellectual property concerns. The technique proposed exploits unused capacity in the model to train it to associate specific inputs (computed adversarially) to specific outputs (the keys). Watermarking succeeds when the bit error rate between the predicted signature and the expected one is zero. This approach is conceptually easy to understand. \\n\\nThe experimental setup used to evaluate the approach is however limited. First, it is unclear why desiderata stated in Section 3.1 and summarized in Table 1 are necessary and sufficient. Would you be able to justify their choice in your rebuttal? For instance, the \\u201csecurity\\u201d requirement in Table 1 overlaps with \\u201cfidelity\\u201d. Similarly, the property named \\u201cintegrity\\u201d really refers to only a subset of what one would typically describe as integrity. It basically calls for a low false positive or high precision. \\n\\nThe attack model described in Section 3.2 only considers three existing attacks: model fine-tuning, parameter pruning and watermark overwriting. These attacks do not consider how the adversary could adapt and they are not optimal strategies for attacking the specific defensive mechanism put in place here. For instance, could you explain in your rebuttal why pruning the smallest weights in the architecture in the final architecture would help with removing adversarial examples injected to watermark the model? Similarly, given that adversarial subspaces have large volumes, it makes sense that multiple watermarks could be inserted simultaneously and thus watermark overwriting attacks would fail.\\n\\nIf the approach is based on exploring unused capacity in the model, the adversary could in fact attempt to use a compression technique to preserve the model\\u2019s behavior on the task and remove the watermarking logic. For instance, the adversary could use an unlabeled set of inputs and have them labeled by the watermarked model. Because these inputs will not be \\u201cadversarial\\u201d, the watermarked model\\u2019s decision surface used to encode the signatures will remain unexplored during knowledge transfer and the resulted compressed or distilled model would solve the original task without being watermarked. 
Is this an attack you have considered in your experiments, and if not, could you elaborate in your rebuttal why one may exclude it?\", \"minor_comments\": \"\", \"p3\": \"Typo \\u201cVerifiabiity\\u201d\", \"p5\": \"Could you add a reference or additional experimental results that justify why transferable keys would be located near the decision boundaries?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkMlGnC9KQ
On Regularization and Robustness of Deep Neural Networks
[ "Alberto Bietti*", "Grégoire Mialon*", "Julien Mairal" ]
In this work, we study the connection between regularization and robustness of deep neural networks by viewing them as elements of a reproducing kernel Hilbert space (RKHS) of functions and by regularizing them using the RKHS norm. Even though this norm cannot be computed, we consider various approximations based on upper and lower bounds. These approximations lead to new strategies for regularization, but also to existing ones such as spectral norm penalties or constraints, gradient penalties, or adversarial training. Besides, the kernel framework allows us to obtain margin-based bounds on adversarial generalization. We show that our new algorithms lead to empirical benefits for learning on small datasets and learning adversarially robust models. We also discuss implications of our regularization framework for learning implicit generative models.
[ "regularization", "robustness", "deep learning", "convolutional networks", "kernel methods" ]
https://openreview.net/pdf?id=HkMlGnC9KQ
https://openreview.net/forum?id=HkMlGnC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxw5m4GgV", "HkgaEqpayV", "S1lj0an61N", "rkloNE79C7", "SyevlN2F0X", "SJeWhgkfpQ", "HJxIJl1za7", "Skx8HkyGT7", "S1eid00WaQ", "S1x3WD8ChQ", "SylRu6annQ", "rkxTYDf2i7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544860558918, 1544571444556, 1544568275071, 1543283762949, 1543255023309, 1541693608629, 1541693405802, 1541693245927, 1541693042788, 1541461763838, 1541361013580, 1540265860988 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1233/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/Authors" ], [ "ICLR.cc/2019/Conference/Paper1233/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1233/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1233/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Reviewers generally found the RKHS perspective interesting, but did not feel that the results in the work (many of which were already known or follow easily from known theory) are sufficient to form a complete paper. Authors are encouraged to read the detailed reviewer comments which contain a number of critiques and suggestions for improvement.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting perspective but insufficient contribution\"}", "{\"title\": \"reply\", \"comment\": \"Right, but note that this expression does not apply for x > -lambda epsilon: phi(x) is simply equal to 1 in this case, by definition. We will consider including a plot of the function phi if it helps clarify this definition.\"}", "{\"title\": \"response\", \"comment\": \"Thank you for you interest in our paper and for going through our proof.\", \"we_believe_our_current_choice_of_function_phi_is_correct\": \"it corresponds to a ramp-like function that is equal to 0 to the left of (-gamma - lambda epsilon), 1 to the right of (- lambda epsilon), and is linear from 0 to 1 in between. Indeed, evaluating $1 + (x + lambda epsilon) / gamma$ at $x = -gamma - lambda epsilon$ and $x = -lambda epsilon$ yields 0 and 1, respectively.\"}", "{\"title\": \"Thanks for the response!\", \"comment\": \"Dear Paper1233 Authors,\\n\\nAfter reading the response carefully, I still feel like this paper is not ready to publish. Part of the reason is the organization of this paper does not highlight its main contributions, and also the paper lacks in-depth original contribution.\\n\\nI encourage the authors to continue their works in this direction and reorganize the work to emphasize the main contributions. Especially, the lower bound+upper bound methods for RKHS regularization can be further developed and extended to a good work on its own.\\n\\nThanks,\\nPaper1233 AnonReviewer1\"}", "{\"title\": \"Update after revision\", \"comment\": [\"We have updated the paper with clarifications on the novelty of our RKHS perspective to regularization, and emphasized the benefits of controlling the RKHS norm, an aspect which was not clear in the original submission. 
We also included additional empirical results.\", \"In particular, we hope to have clarified the following points:\", \"empirically, we find that an appropriate control of the RKHS norm seems to be missing for existing methods based on robust optimization (which can give up global stability in favor of local robustness) or spectral norms (which reduce model complexity but can remain unstable locally)\", \"the penalties |f|_M^2 and |\\\\nabla f|^2 that we obtain from RKHS arguments seem to provide a better control of the RKHS norm in practice\", \"combining lower and upper bound approaches can further help control this norm\", \"empirically, these methods often yield the best generalization performance on small datasets, and additionally can provide the most useful guarantees on adversarially robust generalization\", \"We hope this update clarifies the concerns of the reviewers, and we would like to sincerely thank all reviewers again for their useful comments and remarks, which helped us improve our paper.\"]}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for his comments. We discuss the novelty aspects in our general response ( https://openreview.net/forum?id=HkMlGnC9KQ&noteId=S1eid00WaQ ) and will be happy to clarify this in the paper. Further comments are addressed below.\\n\\n** controlling the amount of deformations\\n\\nThe stability bounds of B+M provide upper bounds on ||Phi(x') - Phi(x)|| (where x' is a deformation of x) based on quantities related to the corresponding diffeomorphism, i.e. the maximum norm and the maximum jacobian norm. For simple classes of deformations these can be computed precisely in terms of the parameters of the deformation, e.g. for translations, rotations, scaling or simple parametric warps. When bounding these away from zero by a certain constant, ||Phi(x') - Phi(x)|| is then included in a centered ball of the RKHS with a radius growing with this constant. This constant then acts as a regularization parameter, just like the size of additive perturbations in the case of adversarial perturbations, and can be tuned by cross-validation.\\n\\n** tightness of the lower bounds\\n\\nThis is something that we verify empirically in our experiments at the end of training by checking the values of spectral norms as a proxy of the upper bound, and looking at the gap with the lower bound. In particular, when using the ||f||_M penalty, lower and upper bounds seem to be controlled together in our experiments (Figure 2), making the bound useful, in contrast to PGD, for which spectral norms grow uncontrolled when the lower bound decreases. We will further clarify this in the paper.\\n\\neqn (8), (12): thanks for pointing these out, we will fix this in the paper.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for his comments. Our general response ( https://openreview.net/forum?id=HkMlGnC9KQ&noteId=S1eid00WaQ ) details the aspects related to novelty. Further comments are addressed below.\\n\\n** comparison with Parseval networks + works of Miyato et al.\\n\\nWe agree that a better comparison with the Parseval network paper would be useful. Regarding generalization, the Parseval networks paper seems to only discuss standard generalization performance based on robustness, not *adversarial* generalization (that is, test error in the presence of an adversary), as considered in our work. Also, our bound seems significantly better: whereas the bound in Ciss\\u00e9 et al. 
(2017) has an exponential dependence on the dimension due to covering number of the input space (this is a weakness of the generalization bounds from Xu and Mannor (2012), which do not leverage statistical properties of the function class being used), our margin bound has no dependence on the dimension, or only a logarithmic dependence if we use the Rademacher analysis of Bartlett et al. (2017) instead of our kernel framework.\\nRegarding the improper acknowledgement of Miyato's work, we are a bit surprised by the reviewer's comment: we cite the work of Miyato almost each time we mention spectral regularization and the acknowledgment seems clear to us throughout the paper. However, if the reviewer finds any ambiguous claim in our paper that we would have missed, we would be happy to clarify it.\\n\\n ** role of the specific regularizer\\n\\nThe reviewer points out that some of the regularization functions we consider such as the spectral norm penalties, are not based on the precise upper bound we derive. Whereas optimizing a product of spectral norms is impractical, which naturally leads to other variants (sums of spectral norms or constraints), we would like to emphasize that such variants are empirically effective in the sense that the quantities obtained at the end of training---such as the spectral norms, (local or global) Lipschitz constants, and the margins of each datapoint---are controlled. These quantities are also what governs our generalization guarantees. Besides, we also note that many deep architectures with ReLUs (particularly VGG-like, if we ignore bias terms) are homogeneous in the weight matrices, making the relative norms at each layer not crucial (multiplying one layer by a scalar and dividing another by the same scalar leads to an equivalent model). In particular, this justifies using the same value for the spectral norm constraint of each layer.\\n\\n ** Usefulness of the RKHS framework\", \"the_rkhs_framework_was_quite_beneficial_in_our_work_because_it_displays_several_properties_at_once\": \"(1) clear understanding of regularization and generalization through margin bounds\\n (2) makes a clear link between stability/robustness and regularization/generalization by using the RKHS norm and properties of the kernel mapping\\n (3) yields practical regularization algorithms through upper and lower bounds\\n\\nLooking at alternatives, if we consider the product of spectral norms instead of the RKHS norm, then we may have (1) using results of Bartlett et al.(2017) and partly (2) since we can upper bound the Lipschitz constant, however algorithms based on lower bounds are crucially missing, and our experiments suggest that these algorithms are often important for good performance, both for regularization on small datasets and for robustness.\\nIf instead we consider the robust optimization approach, we obtain variants of good algorithms (3) such as PGD or gradient penalties, and perhaps some connections to regularization following Xu et al. (2009), however it is difficult to obtain useful generalization guarantees without defining a precise quantity of model complexity. Additionally, such approaches may favor local over global robustness, particularly with powerful function approximators such as neural networks, which may be undesirable when one wants global guarantees.\\n\\n** bounding l2 robustness with product of spectral norms\\n\\nIt is indeed easy to upper bound l2 robustness using the product of spectral norms. 
However, such a robustness guarantee is only useful if this quantity is appropriately controlled during training. In particular, for methods like PGD, we find that such a quantity is poorly controlled on Cifar10, and would thus only provide very weak guarantees.\\n\\n** on global vs local Lipschitz constants\\n\\nWe agree that in some cases local robustness is enough in practice, however this may come at the cost of having weak guarantees on adversarial generalization, and may require expensive verification procedures locally around each test example for guaranteed robustness, as mentioned in our general response.\\n\\nWe will happily clarify some of these points in an updated version of the paper.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for his comments. We address the comments about novelty in our general response ( https://openreview.net/forum?id=HkMlGnC9KQ&noteId=S1eid00WaQ ), for instance concerning the relationship to previous work, and the regularization penalty ||f||_M we propose. More detailed comments are addressed below.\\n\\n** weakness of adversarial training\\n\\nAs noted in our general response, our ||f||_M regularization approach empirically yields models with a more useful certified generalization guarantee in the presence of adversaries on Cifar10, while PGD adversarial training would likely require local verification of robustness around each test example, and we are not aware of useful guarantees on adversarial generalization for such models. We agree that this aspect is not clear in the current submission, and we will improve it in the next version.\\n\\n** relationship with traditional RKHS regularization\\n\\nThere is indeed no question that kernel methods/RKHSs have been widely used for regularization of non-linear functions, for over 20 years now, however these methods typically rely on solving convex optimization problems using the kernel trick, or various kernel approximations (such as random Fourier features). Separately, defining RKHSs that contain neural networks has indeed been the study of previous work, such as Bietti and Mairal (2018) or Zhang et al. (2016; 2017), however these only study theoretical properties of the kernel mapping and the RKHS norm, or derive convex learning procedures to replace training neural networks. Our approach is quite different, in that we leverage these insights to obtain practical regularization strategies for generic neural networks.\\n\\n** new regularization methods\\n\\nIn addition to the ||f||_M lower bound penalty discussed in our general response, we note that combined approaches based on lower bound + upper bound methods are also novel to the best of our knowledge, and in particular we found combining robust optimization techniques with spectral norm constraints to be quite successful in many of the small data scenarios considered (see Table 1).\\n\\nWe will happily clarify some of these points in an updated version of the paper.\"}", "{\"title\": \"General response to reviewers\", \"comment\": \"First, we would like to thank the reviewers for their useful remarks and suggestions. 
We provide general comments here that are relevant for all reviewers, with specific comments for each reviewer in individual replies.\\n\\nBased on the reviewers' comments, we realize that our original submission focuses too much on establishing links between regularization with the RKHS norm and existing strategies, which we found particularly interesting, rather than highlighting the benefits of the newly obtained strategies. Additionally, some of our observations and insights were a bit preliminary in the submission, particularly regarding robustness and security applications, given that our original main motivation for this work was different, focusing on regularization in small data settings. We hope to better clarify the concerns about novelty here, and welcome further remarks by the reviewers. We will do our best to update the paper accordingly before the end of the discussion period.\\n\\n** Novelty compared to Bietti and Mairal (2018)\\n\\nBietti and Mairal (2018) study theoretical properties of the RKHS only. Here, we provide *practical algorithms* for regularizing usual neural networks using the RKHS norm, which is a major step forward compared to this existing work. We are also not aware of previous work that considers the RKHS norm for regularization of deep networks in practice. An interesting insight of our work is that this norm is quite large for standard networks trained with SGD, and that explicitly controlling it brings clear benefits.\\n\\n** Novelty of the regularization strategies\\n\\nThe adversarial perturbation penalty ||f||_M that we introduce in this work is quite different from previous work: (i) it encourages stability of the entire prediction function by considering a separate penalty term in the optimization objective; (ii) it optimizes worst-case stability across the domain, in contrast to other approaches which only optimize this on average over training points. On Cifar10, our method seems most effective in controlling the RKHS norm compared to other methods, where we observe that both the lower bound and spectral norms are controlled together. In contrast, other lower bound methods seem less effective at controlling the upper bound, and this is particularly pronounced in the case of PGD in to our experiments. This is mentioned in the last paragraph of Section 4, and displayed in Figure 2(left).\\nThe approach ||f||_M yields the best accuracy for regularization on some of the small dataset problems we consider, and can achieve the best performance in some regimes of the robustness/generalization trade-off. Additionally, it provides the most useful certified guarantee on adversarial generalization in our experiments (see next bullet point).\\n\\nAnother key difference between our approach and previous ones is that our penalties involve a global optimization problem across the space of inputs X rather than only an average over training samples. This is true both for ||f||_M vs. PGD and for our gradient penalty vs. existing strategies based on gradients. We admittedly did not yet investigate the importance of this local vs. global regularization effect (and we currently only optimize across examples in a mini-batch), which we plan to do in a longer version of the paper. 
This indeed paves the way to transductive and semi-supervised settings, which we plan to investigate as well.\\n\\n** Certified guarantees for adversarial generalization, and novelty of our theoretical analysis\\n\\nWe also would like to point out that our original main motivation was to study regularization benefits, while some of our observations regarding robustness and security were somewhat preliminary at the time of submission. Yet, one important aspect of our work in the context of robustness is that controlling the RKHS norm can provide a model with *certified* guarantees on *adversarial* generalization (i.e. test accuracy in the presence of an adversary), as given by our margin bound analysis, although it depends on the RKHS norm which can only be approximated. We note that while margin bounds have been useful to establish (standard) generalization guarantees for neural networks, to our knowledge our work is the first to use similar arguments for bounding adversarial generalization.\\n\\nOur experiments suggest that the most useful guarantees are obtained for models trained with our penalty ||f||_M, for which the upper and lower bounds are more tightly related than for other methods (see Figure 2). In contrast, while methods like PGD may give improved robustness empirically in some regimes, our experiments on CIFAR10 suggest that the obtained models have large spectral norms, yielding quite weak guarantees on adversarial generalization.\\nThis suggests that the robustness of such models may be only local, so that one may need (possibly costly) verification procedures on each test example in order to guarantee robustness against all adversaries.\"}", "{\"title\": \"Well written with interesting findings, but limited novelty\", \"review\": \"Regularizing RKHS norm is a classic way to prevent overfitting. The authors\\nnote the connections between RKHS norm and several common regularization and\\nrobustness enhancement techniques, including gradient penalty, robust\\noptimization via PGD and spectral norm normalization. They can be seen as upper\\nor lower bounds of the RKHS norm.\\n\\nThere are some interesting findings in the experiments. For example, for\\nimproving generalization, using the gradient penalty based method seems to work\\nbest. For improving robustness, adversarial training with PGD has the best\\nresults (which matches the conclusions by Madry et al.); but as shown in Figure\\n2, because adversarial training only decreases a lower bound of RKHS norm, it\\ndoes not necessarily decrease the upper bound (the product of spectral norms).\\nThis can be shown as a weakness of adversarial training if the authors explore\\nfurther and deeper in this direction.\\n\\nOverall, this paper has many interesting results, but its contribution is\", \"limited_because\": \"1. The regularization techniques in reproducing kernel Hilbert space (RKHS) has\\nbeen well studied by previous literature. This paper simply applies these\\nresults to deep neural networks, by treating the neural network as a big\\nblack-box function f(x). Many of the results have been already presented in\\nprevious works like Bietti & Mairal (2018).\\n\\n2. In experiments, the authors explored many existing methods on improving\\ngeneralization and robustness. 
However all these methods are known and not new.\\nIdeally, the authors can go further and propose a new regularization method\\nbased on the connection between neural networks and RKHS, and conduct\\nexperiments to show its effectiveness.\\n\\nThe paper is overall well written, and the introductions to RKHS and each\\nregularization techniques are very clear. The provided experiments also include\\nsome interesting findings. My major concern is the lack of novel contributions\\nin this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting ideas, but not enough of an independent contribution\", \"review\": \"This paper looks at adversarial examples from the context of RKHS norms for neural networks. The work builds conceptually on the work of Bietti and Mairal (2018), who investigate approximate RKHS norms for neural networks (including computation via a specialized convolutional kernel), and Xu et al., (2009) which looks at robustness properties of kernel classifiers. The authors discuss how the RKHS norm of neural network functions provide robustness guarantees for the resulting classifier, both in terms of a straightforward robustness property for a given example, as well as in terms of generalization guarantees about robustness.\\n\\nOverall, I think there are some interesting ideas in this work, but ultimately not enough to make a compelling independent paper. The core issue here is that the RKHS properties are used only in a very minimal manner to actually provide much analysis or insight into the robustness properties of the network. For example, the upper bound in (8) seems to be central here to illustrating how a bound on the RKHS norm can be upper bounded as a function of the operator l2 norm of the inner weight matrices (though the actual form of the bound isn't mentioned), and the latter term could thus provide a certified bound on the robustness loss of a classifier. However, there are two big issues here: 1) it's trivial to directly bound the l2 robustness of a classifier by the product of the weight spectral norms and 2) the actual regularization term the authors proposed to use (the sum of spectral norms) is notably _not_ an upper bound on either the robust loss or the RKHS norm; naturally, this penalty along with the constrained version will still provide some degree of control over the actual robustness, but the authors don't connect this to any real bound. I also think the authors aren't properly acknowledging just how similar this is to past work: the Parseval networks paper (Cisse et al., 2017), for instance, presents a lot of similar discussion of how to bound generalization error based based upon terms involving operator norms of the matrices, and the actual spectral normalization penalty that the authors advocate for has been studied by Miyato et al. (2018). To be clear, both of these past works (and several similar ones) are of course cited by the current paper, but from a practical standpoint it's just not clear to me what the takeaways should be here above and beyond this past work, other than the fact that these quantities _also_ bound the relevant RKHS norms. 
Likewise the generalization bound in the paper is a fairly straightforward application of existing bounds given the mechanics of the RKHS norm defined by previous work.\\n\\nTo be clear, I think the RKHS perspective that the authors advocate for here is actually quite interesting. I wasn't particularly familiar with the Bietti and Mairal (2018) work, and going through it in some detail for reviewing this paper, I think it's an important direction for analysis of deep networks, including from a perspective of robustness. But the results here seem more like a brief follow-on note to the past work, not a complete set of results in and of themselves. Indeed, because the robustness perspective here can largely be derived completely independently of the RKHS framework, and because the resulting training procedures seem to be essentially identical to previously-proposed approaches, the mere contribution of connecting these works to the RKHS norm doesn't seem independently to be enough of a contribution in my mind.\\n\\nOne final, though more minor, point: It's worth pointing out that (globally) bounding the Lipschitz constant seems too stringent a condition for most networks, and most papers on certifiable robustness seem to instead focus on some kind of local Lipschitz bound around the training or test examples. Thus, it's debatable whether even the lower bound on the RKHS norm is really reasonable to consider for the purposes of adversarial robustness.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"In this paper, the authors consider CNN models from the lens of kernel methods. They build upon past work that showed that such models can be seen to lie in an appropriate RKHS, and derive upper and lower bounds for the kernel norm. These bounds can be used as regularizers that help train more robust neural networks, especially in the context of Euclidean perturbations of the inputs, and training GANs. They show that the bounds can also be used to recover existing special cases such as spectral norm penalizations and gradient regularization. They derive generalization bounds from the point of view of adversarial learning, and report experiments to buttress their claims.\\n\\nOverall, the paper is a little confusing. A lot of the time, the results seem to be a derivative of the work by Bietti and Mairal, and it looks like the main results in this paper are intertwined with stuff B+M already showed in their paper. It's hard to ascertain what exactly the contributions are, and how they might not be a straightforward consequence of prior work (for example, combining results from Bietti and Mairal; and generalization bounds for linear models). It might be nice to carefully delineate the authors' work from the former, and present their contributions.\", \"page_4\": \"Other Connections with Lower bounds: The first line \\\"we may also consider ... \\\". This line is vague. How will you ensure the amount of deformation is such that the set \\\\bar{U} is contained in U ?\", \"page_4_last_paragraph\": \"\\\"One advantage ... complex architectures in practice\\\" : True, but the tightness of the bounds *does* depend on \\\"f\\\" (specifically the RKHS norm). It needs to be ascertained when equality holds in the bounds you propose, so that we know how tight they are. 
What if the bounds are too loose to be practical?\\n\\neqn (8): Use something else to denote the function 'U'. You used 'U' before to denote the set. \\n\\neqn (12): does \\\\tilde{O} hide polylog factors? Please clarify.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
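An aside for readers new to this exchange: the certified upper bound that both reviews above keep returning to — the product of per-layer spectral norms — is easy to make concrete. The sketch below is our own illustration, not code from the paper under review; it estimates each layer's spectral norm by power iteration and multiplies them into a global Lipschitz bound for a network with 1-Lipschitz activations.

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    # Power-iteration estimate of the largest singular value of W.
    v = np.random.randn(W.shape[1])
    u = W @ v
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def lipschitz_upper_bound(layer_weights):
    # For f = W_L . sigma . ... . sigma . W_1 with 1-Lipschitz sigma,
    # Lip(f) <= prod_i ||W_i||_2. This certifies
    # |f(x) - f(x')| <= bound * ||x - x'||_2 for all inputs; as the
    # reviews note, it is typically loose for trained networks, whereas
    # adversarial attacks such as PGD only probe a lower bound.
    bound = 1.0
    for W in layer_weights:
        bound *= spectral_norm(W)
    return bound

weights = [np.random.randn(64, 32), np.random.randn(10, 64)]
print(lipschitz_upper_bound(weights))
```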
BJxgz2R9t7
Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach
[ "Saeed Amizadeh", "Sergiy Matusevych", "Markus Weimer" ]
Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.
[ "Neuro-Symbolic Methods", "Circuit Satisfiability", "Neural SAT Solver", "Graph Neural Networks" ]
https://openreview.net/pdf?id=BJxgz2R9t7
https://openreview.net/forum?id=BJxgz2R9t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1ef54VegN", "rke-zhvdk4", "ryeNhLYzJN", "HklvKK14aX", "Skx8EY1Nam", "rJgGgF1V6Q", "SJg8puyN6m", "HyeLzuJVTX", "BkekmsMohm", "rJxGBgb527", "HyevmThdhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544729738077, 1544219656557, 1543833260314, 1541826943090, 1541826861996, 1541826794398, 1541826750330, 1541826574390, 1541249815051, 1541177401909, 1541094687374 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1232/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/Authors" ], [ "ICLR.cc/2019/Conference/Paper1232/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1232/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1232/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces a new graph neural network architecture designed to learn to solve Circuit SAT problems, a fundamental problem in computer science. The key innovation is the ability to to use the DAG structure as an input, as opposed to typical undirected (factor graph style) representations of SAT problems. The reviewers appreciated the novelty of the approach as well as the empirical results provided that demonstrate the effectiveness of the approach. Writing is clear. While the comparison with NeuroSAT is interesting and useful, there is no comparison with existing SAT solvers which are not based on learning methods. So it is not clear how big the gap with state-of-the-art is. Overall, I recommend acceptance, as the results are promising and this could inspire other researchers working on neural-symbolic approaches to search and optimization problems.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, comparison with traditional SAT solvers would be helpful\"}", "{\"title\": \"Discussion\", \"comment\": \"Thank you for taking time to give us your feedback. We appreciate your comment and completely agree with you that to further enhance the neural frameworks for SAT solving, it's essential to perform a comprehensive comparison with the state-of-the-art, non-learning solvers, especially under different, non-uniform distributional assumptions. In fact, this is something we are currently working toward. Nevertheless before doing so, we needed to first establish an architecture capable of harnessing DAG structure (i.e. the DG-DAGRNN model) and a differentiable, unsupervised methodology to train such architecture (i.e. the Evaluator network); this is what we have tried to accomplish in the current work.\"}", "{\"title\": \"Modified score\", \"comment\": \"Thank you for your response and taking the time to do the comparison to MiniSAT. I have increased the score to 6, but I still feel that any reader who is even somewhat familiar with the extensive literature on SAT will find the comparisons to prior work unsatisfying. 
I completely understand and support the argument made in your response and by the NeuroSAT paper that the goal of learning-based approaches should not be to beat the state-of-the-art solvers in the short term. But this is not a good enough argument to avoid comparisons to them. The comparisons are needed to quantify how far away learning-based solvers are from state-of-the-art and what kind of improvements are needed to (eventually) match or beat them and under what assumptions about the problem instance distributions.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"- In figure 1 (a), what are x11, x12, etc?\\nx represents the node feature vectors that specify the nodes of the input DAG; in particular, x_{ij} is the j-th feature of the i-th node. As explained in the paper, in the Circuit-SAT problem, node feature vectors represent the type of node operation (i.e. AND, OR, NOT or VARIABLE) in the input circuit, represented as one-hot vectors.\\n\\n- Correctness guarantee:\\nThis is indeed a great point that we have briefly mentioned in the paper. In our framework, an input circuit is deemed SAT if and only if the Solver network can produce an assignment that satisfies the Evaluator network. The test-time Evaluator network (or the train-time Evaluator network at low temperatures) mimics the exact behavior of the input circuit for continuous soft assignments; that is, if the soft assignment produced by the Solver network satisfies the test-time Evaluator network, its 0/1 hard counterpart will also satisfy the original circuit. Using this mechanism has two implications: (a) our framework does NOT produce false positives; if the input circuit is deemed SAT, it means we have already found a satisfying solution for it. (b) if the satisfiability value for an input circuit is less than 0.5, all we can say about its SAT status is \\\"unknown\\\"; in other words, our method does not provide any proof of unsatisfiability.\\n\\n- Clarifying \\\"SAT Solving\\\":\\nWe have clarified this in the revised draft.\\n\\n- Clarifying \\\"solutions\\\":\\nWe have clarified this in the revised draft.\\n\\n- min() < S_min() < S_max() < max():\\nAgain the reviewer is correct and the ordering relation holds for all the inputs (a_1, ... a_n); we have clarified this in the revised draft.\\n\\n- The effect of temperature on the Evaluator network:\\nAt the beginning of training, when the temperature is high, all the AND and OR gates in the Evaluator network (represented as S_min and S_max functions, respectively) act almost as an arithmetic mean, so the training can be seen as maximizing the average values over the soft assignments (or their negations) while the gradient signal propagates back through all paths in the circuit; this is the exploration phase. As the training gradually progresses and the temperature anneals toward zero, the S_min and S_max functions converge toward min and max functions, respectively, which in turn mimic the behavior of AND and OR gates for soft assignments. 
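To make the annealing behaviour described in this response concrete, here is a minimal sketch of temperature-controlled smooth gates (our own illustration — the paper's exact parameterisation of S_min/S_max may differ): at high temperature both gates reduce to an arithmetic mean, and as the temperature anneals toward zero they converge to hard min and max, i.e. AND and OR on soft assignments.

```python
import numpy as np

def s_max(a, tau):
    # Smooth OR gate: exponentially weighted mean of the inputs.
    # tau -> infinity gives the plain mean (exploration phase);
    # tau -> 0 recovers max(a) (exploitation phase).
    w = np.exp(a / tau)
    return float(np.sum(a * w) / np.sum(w))

def s_min(a, tau):
    # Smooth AND gate, the mirror image of s_max.
    w = np.exp(-a / tau)
    return float(np.sum(a * w) / np.sum(w))

def s_not(a):
    return 1.0 - a  # NOT gate on a soft assignment in [0, 1]

a = np.array([0.9, 0.2, 0.7])
for tau in (10.0, 1.0, 0.05):
    print(tau, s_min(a, tau), s_max(a, tau))
# As tau anneals toward zero, s_min -> 0.2 and s_max -> 0.9, so the
# relaxed gates approach their Boolean counterparts; for any tau the
# ordering min(a) < s_min(a) <= s_max(a) < max(a) holds on
# non-constant inputs, matching the response above.
```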
At this stage, the gradient signal will, for the most part, travel back only through the active paths in the circuit; this is the exploitation phase of learning.\\n\\n- Proving (circuit is UNSAT iff S_\\\\theta <= 0.5 for all soft assignments):\\nHere we give a sketch of the proof, but we are hoping to add an entire Appendix in the camera-ready version detailing the proof.\\n\\n(1) circuit is UNSAT if S_\\\\theta <= 0.5 for all soft assignments:\\nThe proof can simply be achieved by contradiction.\\n\\n(2) S_\\\\theta <= 0.5 for all soft assignments if circuit is UNSAT:\\nThe proof can be achieved by induction on the size (i.e. the number of gates) in the circuit: each time, isolate and remove the sink node of the circuit DAG (i.e. the last logical operator evaluated in the expression) and show that the output of the circuit is always less than or equal to 0.5 for all types of sink gates by assuming the statement of the theorem holds for the resulting sub-circuits, which have strictly smaller sizes than the original circuit. \\n\\nAs for false positives, yes, our model never produces false positives. Please refer to the above proof as well as the correctness explanation above.\\n\\n- Figure 2 explanation:\\nYes, all the test cases are SAT instances and, as the reviewer mentioned, there are some SAT examples where neither of the two models can decode within the allowed T_max iterations. We suspect these examples belong to the region of the K-SAT instances that is very close to the SAT-UNSAT phase transition point. This region mostly contains the hardest K-SAT instances.\\n\\n- Real-world datasets:\\nWe completely agree with the reviewer that one of the main benefits of using learning methods for SAT solving is the ability of these methods to adapt to the target distribution of specific domains. This is indeed one of our current, ongoing efforts to adapt our framework to specific real-world domains. Nevertheless, we should emphasize that in our experiments, despite being random, both the training and the test examples are drawn from the hardest region of the SAT problems (the area close to the SAT-UNSAT phase transition). This is achieved by using the data generation process proposed in the NeuroSAT paper.\\n\\n- The training time and testing time:\\nThank you for bringing up this important point. We have included a new paragraph in Section 5.1 detailing the time complexity.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We have been working to improve the clarity of the paper including the figures. So we are hoping the final version will address all the clarity concerns.\\n\\nAs for pre-processing the input data, we only perform a CNF-to-Circuit conversion step as fully explained in Appendices B and C. As mentioned in the paper, although there are some problem domains where the input instances naturally come in the circuit format (e.g. circuit verification), in order to make a fair comparison with NeuroSAT, we decided to use the same CNF datasets that we used for NeuroSAT and as such we needed a pre-processing step to convert those CNFs to circuits. Nevertheless, we made sure this pre-processing step is O(N) in the worst case. As an added advantage, as mentioned in Appendix C, since our framework is capable of harnessing the circuit structure, domain-specific heuristics can be injected into the circuit structure during the pre-processing step - e.g. 
in graph k-coloring.\"}", "{\"title\": \"Response to AnonReviewer2 (Part 2)\", \"comment\": \"-SAT as an Integer Linear Program (ILP):\\nModeling the SAT problem as a (relaxed) ILP is a very interesting idea and there is some prior work on this in the literature. Nevertheless, such a methodology would require solving an optimization problem for every problem instance at test time. However, our proposed methodology is quite different (even though we also work with relaxed assignments): after training, our framework produces a recursive neural network (the Solver network) that can be run on test problem instances on GPU *without* needing to solve any optimization problem at test time. That said, one interesting idea would be to replace our Evaluator network (i.e. the relaxed circuit) with a network that encodes the relaxed ILP and study the effects of that on training the Solver network. Exploring different options for the Evaluator network is indeed a future direction on our agenda.\\n\\n-A modified version of NeuroSAT to take in circuit structure:\\nOur understanding is that the main ingredients that make NeuroSAT NeuroSAT are (a) a graph neural network for bi-partite graphs to embed the input CNF and (b) training this network toward SAT classification. In order for NeuroSAT to consume circuit structure, one would need to replace the first part with another sophisticated graph neural network that can process and understand variable-sized and topologically-diverse DAGs (circuits). But that's exactly what we have developed in this paper: the DG-DAGRNN architecture. So while we can in theory replace a fundamental ingredient of NeuroSAT with our proposed model, we are not sure we can still call the resulting framework NeuroSAT and close the gap. In other words, upgrading NeuroSAT to understand circuit structure is a non-trivial task and in fact one of the main contributions of the present work.\"}", "{\"title\": \"Response to AnonReviewer2 (Part 1)\", \"comment\": \"-Tailoring to DAG structure / directed vs undirected propagation:\\nWe would like to emphasize that our experimental setup does NOT aim at comparing directed vs undirected message passing on graphs. In particular, any form of message passing on graphs by definition imposes (momentary) directions on the edges of the graph even if the underlying graph is undirected; that is, message passing is always directed. On the other hand, what we are contrasting in this paper is *sequential* propagation based on some specific node order vs *synchronous* propagation based on no order. Furthermore, we argue the \\\"specific order\\\" for sequential propagation cannot be just any random order, but it has to arise from the semantics of the problem. In particular, in the Circuit-SAT problem, the node order (and its reverse version) is induced by the order in which the logical operators are evaluated in the circuit (i.e. the topological order of the input DAG). In theory, given unbounded training data and training time, one should still be able to learn the target Circuit-SAT function while ignoring this order and using synchronous propagation, but in practice with finite data and time, the learning is intractable for general circuits. In fact, before fully developing our DG-DAGRNN framework, we experimented with synchronous propagation for general circuits, but we were not able to learn the SAT function. The reason is somewhat intuitive: if we want to consume general (non-flat) circuits, ignoring the evaluation order of operators (i.e. 
using synchronous propagation) adds an extra task of figuring out the correct expression structure on top of learning to solve the SAT problem itself, which makes the learning task way more difficult. And that's why providing this structure explicitly via the DG-DAGRNN framework makes a huge improvement. In contrast, synchronous propagation is NOT problematic for the CNF-SAT problem because the clauses in a flat CNF do not adhere to any specific order and can be evaluated in any order, and therefore, synchronous propagation works well in NeuroSAT which only consumes CNFs. \\n\\n-Comparison against modern SAT solvers:\\nThis is indeed a very reasonable concern; nevertheless, we should emphasize that neither our framework nor NeuroSAT lays any claim to being on par with modern SAT-solvers at the moment. But that's not the goal here. This specific area in representation learning is relatively new and we are still in the feasibility study phase to see how much signal we can extract for SAT solving via deep learning. For practical purposes however, our intuition is that a successful approach that can potentially beat the classical solvers would be a hybrid of both learned models and traditional heuristic search components. But before getting there, we would need to gain a good understanding of what kind of useful signals we can or cannot extract from the problem's structure via pure learning.\\nThat said, we have made a time comparison with MiniSAT (a popular, highly-optimized solver for moderate size problems). Even though MiniSAT runs faster per example, our model, being a neural network, is far more parallelizable and can solve many problems concurrently in a single batch. This would in turn make our method much faster than MiniSAT when applied on large sets of problems. We have included a new paragraph in the revised version describing this phenomenon.\"}", "{\"title\": \"Revision #1\", \"comment\": \"We would like to thank all the reviewers for bringing up some important questions and their detailed, constructive feedback. We have uploaded the first revised version of the paper addressing some of these concerns. In particular, the new draft includes:\\n\\n1) A revised version of Figure 1 to fix an error in the figure.\\n2) A few clarifying statements to address some reviewers' concerns regarding clarity.\\n3) A new paragraph detailing the time comparisons between the competing methods as well as the off-the-shelf MiniSAT solver.\\n\\nIn what follows, we will address the reviewers' questions and concerns in more detail.\"}", "{\"title\": \"Possibly interesting ideas, but needs more experiments\", \"review\": \"The paper proposes a graph neural network architecture that is designed to use the DAG structure in the input to learn to solve Circuit SAT problems. Unlike graph neural nets for undirected graphs, the proposed network propagates information according to the edge directions, using a deep sets representation to aggregate over predecessors of each vertex and GRUs to implement recurrent steps. The network is trained by using a \\\"satisfiability function\\\" which takes soft variable assignments computed by the network and applies a relaxed version of the circuit to be solved (replacing AND with a softmin, OR with a softmax, and NOT with 1 - variable value) to compute a continuous score that measures how satisfying the assignment is. Training is done by maximizing this score on a dataset of problem instances that are satisfiable. 
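The propagation scheme this review summarises — visiting vertices in topological order, aggregating predecessor states with a Deep-Sets style sum, and fusing with a GRU — can be sketched as follows. This is our schematic only; `agg_mlp` and `gru_cell` stand in for the learned modules, and the reverse pass would run the same loop on the reversed DAG.

```python
import numpy as np

def dag_propagate(node_feats, preds, topo_order, agg_mlp, gru_cell, d_hidden):
    # Sequential (not synchronous) propagation: because nodes are visited
    # in topological order, every node sees the *finished* states of all
    # its predecessors, mirroring how a circuit evaluates its gates.
    h = {}
    for v in topo_order:
        if preds[v]:
            # Order-invariant Deep-Sets aggregation over predecessor states.
            msg = np.sum([agg_mlp(h[u]) for u in preds[v]], axis=0)
        else:
            msg = np.zeros(d_hidden)  # source nodes: circuit inputs
        h[v] = gru_cell(node_feats[v], msg)
    return h  # the sink node's state summarises the whole circuit
```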
Results are shown on random k-SAT and graph coloring problems.\\n\\nThe paper is reasonably well-written and easy to follow. The idea of using the relaxed version of the circuit for training is nice. Combining ideas from DAG-RNNs and Deep Sets is interesting, although incremental.\\n\\nCriticisms:\\n- How much does tailoring the network architecture to the DAG structure of the circuit actually help? A comparison to a regular undirected graph neural network on the circuit input without edge directions would be useful. In particular, since both edge directions are used in the current architecture but represented as two different DAGs, it naturally raises the question of whether a regular undirected graph neural net would also work well.\\n- How does the proposed approach compare to the current state-of-the-art non-learning approaches to SAT (CDCL, local search, etc.)? There is a huge literature on SAT, and ignoring all that work and comparing to only NeuroSAT seems unjustified. Without such comparisons, it is hard to say what benefit learning approaches in general, and the specific approach in this paper, provide in this domain. Even basic sanity-check baselines, e.g., random search, can be valuable given that the domain is somewhat new to learning approaches.\\n- One way to interpret the proposed approach is that it is learning to propose soft assignments that can be easily rounded. It would be good to compare to a Linear Programming relaxation-based approach that represents the SAT instance as an integer program with binary variables, relaxes the variables to be in [0,1], solves the resulting linear program, and rounds the solution. Do these approaches share the same failure modes, how does their performance differ, etc.\\n- The proposed approach has an obvious advantage over NeuroSAT in that it has access to the circuit structure, in addition to the flat representation of the SAT instance. According to the paper, not providing the circuit structure to the proposed approach hurts its performance. It would be useful to devise an experiment where a modified version of NeuroSAT is given the circuit structure as an additional input to see whether that closes the gap between the approaches.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"strong paper\", \"review\": \"The authors of this paper investigate Neuro-Symbolic methods in the context of learning a SAT solver generalized to the Circuit-SAT problem. They use a reinforcement-learning-inspired approach to demonstrate a framework that is capable of (unsupervised) learning, by means of an end-to-end differentiable training procedure. Their formulation incorporates the solving of a given SAT problem into the architecture, meaning the algorithm is trained to produce a solution if a given problem is satisfiable. This is in contrast to previous similar work by (Selsam et al. 2018), where the framework was trained as a SAT classifier. Their results outline the performance increase over the previous work (Selsam et al. 
2018) on finding a given solution for a SAT problem, on in-sample and out-of-sample results.\", \"neg\": \"Figure descriptions are not very clear\\nWhen it comes to comparing the results, they do use a preprocessing step for their algorithm which they do not incorporate into the results\", \"pros\": \"Clear outline of the data sets used for benchmarks.\\nGood literature review, expressing in-depth knowledge of the current state of the art formulation for same/similar tasks \\nExtensive background section that explains the theoretical concepts and the architecture used well.\\nClear outline of the Solver, where the individual parts/networks are explained and justified in detail\\nVery well outlined argumentation for approaching this particular problem by the proposed method\\nThe experimental results as well are easy to follow and show promising results for the proposed framework\\nThe proposed method as well is novel and outperforms similar algorithms in the experimental evaluation.\\n\\n\\nThe paper is very well written, proposes a novel Neuro-Symbolic approach to the classical SAT problem, and demonstrates promising results.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review for Circuit-SAT\", \"review\": [\"The paper makes a nice contribution to solving the Circuit-SAT problem from a Neuro-Symbolic approach, particularly, 1) a novel DAG embedding with a forward layer and a reverse layer that captures the structural information of a circuit-sat input. 2) Compared with Selsam et al.'s work of Neuro-SAT, the proposed model in this paper, DG-DAGRNN, directly produces an assignment of variables, and the method is unsupervised and end-to-end differentiable. 3) Empirical experiments on random k-SAT and random graph k-coloring instances that support the authors' claim on better generalization ability.\", \"The paper is lucid and well written, I would support its acceptance at ICLR. Though I have a few comments and questions for the authors to consider.\", \"In figure 1 (a), what are x11, x12, etc?\", \"When comparing the two approaches of Neuro-Symbolic methods, besides the angles of optimality and training cost, it is worth mentioning that the first one, based on classical algorithms, always has a correctness guarantee, while the second one (learning the entire solution from scratch) usually does not.\", \"Section 4.1, as a pure decision problem, solving SAT means giving a yes/no answer (i.e., a classification); while for practical purposes, solving SAT means producing a model (i.e., a witness) of the formula if it is SAT. This can be misleading for some readers when the authors mention \\\"solving SAT\\\", and it would be clearer if the authors could make a distinction when using such terms.\", \"Section 4.1, \\\"without requiring to see the actual SAT solutions during training\\\", again, what is the meaning of \\\"solutions\\\" is not very clear at this point. Readers may realize the experiments in the paper only train with satisfiable formulae from the description that follows, so \\\"solutions\\\" indicates the assignments of variables. But it would be better to make it clear.\", \"Section 4.1/The Evaluator Network, \\\"one can show also show that min() < S_min() <= S_max() < max()\\\", what is the ordering relation (i.e., < and <=) here? It is a bit confusing if a forall quantifier for inputs (a_1, ... 
a_n) is required here.\", \"Section 4.1/The Evaluator Network, how does the temperature affect the results of R_G? It would be helpful to show their dynamics.\", \"Section 4.1/Optimization, \\\"if the input circuit is UNSAT, one can show that the maximum achievable values for S_\\\\theta is 0.5\\\", it would be better to provide a brief description of how it is guaranteed. Also, this seems to be suggesting the DG-DAGRNN solver has no false positives, i.e., it will never produce a satisfiable result for unsatisfiable formulae? This would be interesting toward some semi-correctness if the answer is yes.\", \"Section 5.1, are the testing data all satisfiable formulae? If yes, then figure 2 shows there are a number of satisfiable formulae for which both models fail to produce correct results -- is that a correct understanding of figure 2? If not, then what is the ground truth?\", \"I would love to see more experiments on SAT instances with a moderate number of variables but from real-world applications. It would be interesting to see how the model utilizes the rich structural information of instances from real applications (instead of randomly generated formulae).\", \"The training time and testing time (per instance) are not reported in the experiments.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
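The unsupervised objective discussed throughout the Circuit-SAT thread above — push the relaxed circuit's output toward 1 on satisfiable training instances, with no target assignments — is small enough to sketch (our schematic; `solver` and `relaxed_circuit` stand in for the paper's Solver and Evaluator networks, and the exact loss used in the paper may differ):

```python
import numpy as np

def unsupervised_sat_step(solver, relaxed_circuit, circuit, tau):
    # The solver proposes a soft assignment in [0, 1]^n for the circuit's
    # variables; the evaluator (the temperature-tau relaxation of that
    # same circuit) scores it. Minimising -log(score) on instances known
    # to be satisfiable trains the solver toward satisfying assignments;
    # the only supervision is the fact that the instance is SAT.
    soft_assignment = solver(circuit)
    score = relaxed_circuit(circuit, soft_assignment, tau)  # in (0, 1)
    return -np.log(score + 1e-8)  # loss backpropagated through both nets
```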
HkekMnR5Ym
Meta-Learning Neural Bloom Filters
[ "Jack W Rae", "Sergey Bartunov", "Timothy P Lillicrap" ]
There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets.
[ "meta-learning", "memory", "one-shot learning", "bloom filter", "set membership", "familiarity", "compression" ]
https://openreview.net/pdf?id=HkekMnR5Ym
https://openreview.net/forum?id=HkekMnR5Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygYT82Hl4", "Hkl9AlNPA7", "ByxXh6ZPAQ", "ryeKu7-wCX", "S1lXv_J707", "B1llq3T-0Q", "rJeha6qa6m", "BylFBY5aa7", "BklHAWcapX", "HyeHtaK6pm", "S1gCw_-T6X", "BkxdWIvphQ", "H1lg3Mjdn7", "HJlfcMvI3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545090753372, 1543090385768, 1543081387340, 1543078768969, 1542809690623, 1542737032191, 1542462916261, 1542461760940, 1542459853367, 1542458748872, 1542424677653, 1541400064013, 1541087911978, 1540940426042 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1230/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/Authors" ], [ "ICLR.cc/2019/Conference/Paper1230/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1230/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1230/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This work proposes and interesting approach to learn approximate set membership. While the proposed architecture is rather closely related to existing work, it is still interesting, as recognized by reviewers. Authors's substantial rewrites has also helped make the paper clearer. However, the empirical merits of the approach are still a bit limited; when combined with the narrow novelty compared to existing work, this makes the overall contribution a bit too thin for ICLR. Authors are encouraged to strengthen their work by showing more convincing practical benefit of their approach.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting approach, empirical validation needs strengthening\"}", "{\"title\": \"Updated speed benchmark with LSTM\", \"comment\": \"To supplement the discussion on \\\"why create this model versus use an LSTM or variant\\\". Aside from the fact that we could not get any RNN to solve the database task with 5,000 elements; we ran an LSTM on the speed benchmark for this task to determine how much slower/faster it would be ... If we could somehow train it to solve this task.\", \"insertion_throughput_for_bloom_filter_on\": \"~60K (CPU)\", \"insertion_throughput_for_neural_bloom_filter\": \"~4K (CPU), ~58K (GPU)\", \"insertion_throughput_for_lstm\": \"~2K (CPU), ~5K (GPU).\\n\\nSo as discussed earlier, a Neural Bloom Filter can match the throughput of an Bloom Filter when run on a GPU. A couple of GPUs reserved along with the thousand of CPU cores for a large Bigtable database seems feasible. However if we look at the LSTM numbers, the insertion throughput is about 10x less (5K insertions per sec vs 60K). Thus the sequential write scheme of RNNs (in this case, an LSTM) is not only a problem from an optimization-perspective - as the LSTM fails to learn the task - but also reduced the throughput of insertions by an order of magnitude during evaluation. 
We have added these numbers to Appendix G and discussed this point in the main paper, Section 5.4. This is just extra empirical evidence that there is room for a memory model with a feed-forward & compressive write scheme, akin to a Bloom Filter.\"}", "{\"title\": \"Re. Re.\", \"comment\": \"Thanks R3 for reading the revision and rebuttal.\\n\\nSmall point re. analytical bounds --- we have these two sentences in S 5.1, which were perhaps not pushed into the revision (although I see it now); however, we could also put (analytical) in the plot legends if you think this is worthwhile.\\n\\n\\\"The false positive rate is measured empirically over a sample of queries for the learned models; for the Bloom Filter we employ the analytical false positive rate. Beating a Bloom Filter\\u2019s space usage with the analytical false positive rate implies better performance for any given Bloom Filter library version (as actual Bloom Filter hash functions are not uniform), thus the comparison is fair.\\\"\\n\\nAside from the text motivating the model, what do you think could be added or amended in this study to make it more clearly worthy of publication going forward? We ask because you appear to be interested in the subject area. \\n\\nE.g.\\n- One-shot learning for real applications is not sufficiently motivated?\\n- Some experiments are missing (from your perspective)?\\n- You think there are flaws in the model or comparison approach?\\n\\nAny further feedback would be highly appreciated.\"}", "{\"title\": \"Response to revision\", \"comment\": [\"I thank the authors for their detailed responses and revision.\", \"The revision to section 5 explaining the training procedure is helpful.\", \"The revision to section 3 is also helpful. It may have helped to go even further in explaining the objectives of the memory access learning task (as in the response to Rev2) with analogs to BF, and distinguishing them from some aspects that seem like engineering details, as the former is (to me) the conceptually significant portion of the paper.\", \"I still could not find where it is stated that the BF plots are analytic; my apologies if I missed it. This is not a major issue, and I understand the choice to use the theoretical bound, but there is some discord in including an analytic curve next to empirical curves on the same plot without clearly marking it as such, as it may give a wrong impression as to what the reader is seeing (not an actual experiment, but an estimate of what an experiment would have yielded based on probabilistic concentration).\", \"I have revised my score to 6.\"]}", "{\"title\": \"Feedback on revision\", \"comment\": \"Dear reviewers,\\n\\nGiven the consensus was that the model motivation and the one-shot learning setup were not clear enough, versus fundamental disagreements with the subject area or potential impact, it would be very valuable to get feedback on the paper revision. The principal changes are found in Section 3, Section 5 intro (+ Algorithm 1) & Section 5.1, box. We have surveyed several of our peers unfamiliar with meta-learning and they said they understood the training regime much better and felt 80%+ sure of how the model was trained and evaluated. 
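For readers following the analytical-bound exchange above: the "analytical false positive rate" is the textbook Bloom Filter estimate, reproduced here for reference (our illustration; m, n and k denote the bit count, number of stored items and number of hash functions).

```python
import math

def bloom_fpr(m, n, k):
    # Standard analytical false-positive rate for a Bloom filter with
    # m bits, n inserted items and k hash functions (assumes ideal,
    # uniform hashing -- real hash functions do slightly worse, which is
    # why beating this bound also beats any concrete implementation).
    return (1.0 - math.exp(-k * n / m)) ** k

def optimal_bits_per_item(eps):
    # At the optimal k = (m/n) ln 2, the space cost per stored item is
    # m/n = -log2(eps) / ln 2, roughly 1.44 * log2(1/eps) bits.
    return -math.log2(eps) / math.log(2)

print(f"{bloom_fpr(m=10_000, n=1_000, k=7):.4f}")  # ~0.008 (0.8% FPR)
print(f"{optimal_bits_per_item(0.01):.2f}")         # ~9.59 bits per item
```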
So we would be very grateful if you could consider our response and paper revision!\"}", "{\"title\": \"Backup Bloom Filter\", \"comment\": \"Sorry, we realize we did not address this comment:\\n\\n\\\"Some of the positive results attained using neural bloom filters are a bit tempered by the fact that the experiments were using a backup bloom filter.\\\"\\n\\nActually, when comparing the space of our one-shot model versus a Bloom Filter, we compute the size of the state (in bits) *plus* the size of the backup Bloom Filter, which stores the false negatives. The backup Bloom Filter (which *only* stores false negatives) thus must be very small in comparison with the original Bloom Filter for the total space of the neural bloom filter to be smaller. In the case of the database task where we see >30x space reduction, it is clearly negligible. \\n\\nWe only use a backup filter to ensure an apples-to-apples comparison between the neural bloom filter and bloom filter (i.e. a guaranteed 0% false negative rate). For applications where a small false negative rate is acceptable, one could avoid using the backup bloom filter completely. We have clarified this in the text in the experiments section. In terms of speed, the backup bloom filter does not add latency to queries because it can be queried in parallel to the neural bloom filter.\"}", "{\"title\": \"Explanation of training setup and why it's one-shot classification.\", \"comment\": \"Thank you for reading the paper, and we apologize for its opacity upon first pass. We completely agree the paper has mis-judged its audience and was not easy to read straight-through; this feedback is very useful in correcting this. We wrote the paper for someone highly familiar with meta-learning memory-augmented neural networks but not familiar with bloom filters; this left out an important audience.\\n\\n--- Re. \\u201cI had a hard time understanding how the model is trained...\\u201d\\n\\nThe model learns in one-shot because it observes a set S = {k_1, k_2, \\u2026, k_n} and writes it to a memory (or state) M with only one observation of this dataset. It then answers queries \\u201cis my query x in S\\u201d using the read operation, conditioning on the memory, M. It is the same one-shot classification approach as \\\"Matching Networks\\\" (Vinyals et al. 2016); however, we focus on classifying familiarity versus image or text class. We have added several paragraphs and an algorithm box with further explanation of the meta-learning training setup. We will just briefly summarize it here. \\n\\nWe have a collection of sets S_1, S_2, \\u2026, S_m reserved for training; and a collection of queries Q = {q_1, q_2, \\u2026, q_L} and targets y_i = 1 if q_i in S and 0 otherwise. In the example of a database, we can think of a given set S_i = {k_1, \\u2026, k_N} as a set of rowkeys for a given file on disk (e.g. SSTable). We have many sets because we have many files; for training we have reserved some for an offline training routine.\\n\\nDuring training we calculate M = f_write(S), and then we calculate o_i = f_read(M, q_i). We calculate the cross-entropy loss L = -\\\\sum_i [y_i log(o_i) + (1 - y_i) log(1 - o_i)] and backpropagate through the network (through the parameters controlling both the read, write, and encoder networks). One can consider the creation of M = f_write(S) as a fast one-shot learning procedure; the network learns a state which can help it solve the classification problem, \\u201cis q in S?\\u201d. 
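The meta-learning loop just summarised is compact enough to sketch end-to-end. The code below is our schematic, not the authors' implementation: `encoder`, `write_fn` and `read_fn` stand in for the model's learned encoder, write and read networks.

```python
import torch
import torch.nn.functional as F

def meta_train_step(encoder, write_fn, read_fn, optimizer, S, queries, targets):
    # Fast, one-shot 'learning': the whole set S is written into the
    # memory state M in a single pass. Slow meta-learning: the cross-
    # entropy on membership queries is backpropagated through the read,
    # write and encoder parameters, across many different sets S.
    M = write_fn(encoder(S))               # one-shot write of the set
    logits = read_fn(M, encoder(queries))  # "is q in S?" for each query
    loss = F.binary_cross_entropy_with_logits(logits, targets.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```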
The slow-moving \\u2018meta-learning\\u2019 process is in the network parameters, which are slowly being optimized over several set membership tasks, i.e. several different sets S1:m, to be effective at one-shot classification. At test time, when we observe a new subset (or stream of elements) we can insert them with fwrite in one-shot and the resulting data-structure is the external memory, M.\\n\\n-- Re. \\u201cA lot of details are relegated to the Appendix. For instance B.2 talks about the encoder architecture for one of the experiments.\\u201d \\n\\nThis is a good point. We have removed B. 2 from the appendix and promoted the details to the model section. Furthermore we have given an example instantiation of the full architecture in the model section, so one does not need to consult the appendix. We have not completely removed the appendix as some details are tangential discussion points (e.g. how to implement the model in sub-linear time) but other details, such as space comparison, are now described in more detail in the experiments section.\\n\\nWe have significantly re-written the paper\\u2019s model and experiments section to remedy this --- please take a look and let us know if this addresses concerns.\"}", "{\"title\": \"Model motivation and applicability.\", \"comment\": \"Thank you for your review, and for your keen eye to detail.\\n\\nRe. \\u201cwhy not develop further an LSTM\\u201d. We are principally interested in whether it is possible to learn a compressive set membership data-structure in one-shot. Because many applications of Bloom Filters are in highly dynamic settings (e.g. databases), the requirement that a network may be able to beat a Bloom Filter with only a single computational pass over the data is quite important. It wasn\\u2019t clear from the beginning of this research project (to us and our peers) whether it would be possible, and if so - in what setting. Thus we feel that it would be a worthwhile scientific contribution to show this is the case with any model --- even an LSTM. \\n\\nFirstly, it is worth noting the LSTM is non-trivially less efficient for the database task even when the sequence length is quite short. But the real issue with an LSTM and other RNNs (partially covered in the Reviewer 3 response) is that it they are difficult to scale to larger set sizes, because one has to BPTT over the entire input sequence linearly (elements of our storage size S) during training. Say S contains 5,000 elements\\u2026 One would have to train with sequences of length 5,000, insert all elements sequentially and BPTT over the 5,000 long sequence. Training is way too slow, and the optimization problem becomes intractable (network fails to learn). Furthermore the LSTM has quadratic computation cost with respect to the hidden state size. Since set membership is order-invariant, it seemed preferable to try out a memory architecture which does not rely on sequential computation and BPTT (like a memory network) that is still very compressive (unlike a memory network).\\n\\nOur mistake in the original exposition, which you rightly point out, is that we have presented our solution (the architecture) without much of the motivation that lead to its incarnation. We have re-written the model section to remedy this. But we will also briefly state the model motivation here: \\n\\n- We want a simple write scheme and no BPTT -> additive write. 
(it\\u2019s order-invariant, and alike to the Bloom Filter\\u2019s logical-or write).\\n\\n- We want the network to choose where to write, as well as what to write -> address is a softmax over memory based on content. (alike to the Kanerva Machine Wu et al. (2018))\\n\\n- We want the network\\u2019s network trainable parameters to be small and independent of memory size -> make the addressing matrix A non-trainable.\\n\\n- We want the addressing to be efficient -> make it sparse (alike to Rae et al. (2016)).\\n\\n- We found a sparse address led to the network fixating on a subset of memory -> whiten the query vector.\\n\\nWhitening (or sphering) may appear complex but was only necessary if one adopts the sparse attention for efficiency. We implemented it in four lines of TensorFlow code, so at least it is not too complex from an engineering standpoint. Whitening has been used within deep learning literature before, e.g. \\u201cnatural neural networks\\u201d [1] . An alternative to whitening would be to use a \\u201cflow\\u201d such as real NVP [2] which actually transforms the query to something which appears to be truly gaussian. Crucially, this was a trick to get sparse attention working, if one wishes to avoid sparse attention and just use the full softmax over memory then this side-detail of whitening can be ignored. \\n\\n-- Re. \\u201cAlso, the neural bloom filters do well only when there is some sort of querying pattern. All of these details would seem to reduce the applicability of the proposed approach.\\u201d \\n\\nFortunately the proposed approach does well if there is structure to the query pattern *or* storage set. In the case of the database task, our queries are picked uniformly from the universe --- there is not much structure. However there is structure to the storage sets (which represent row keys in an disk file within a database) and this is why our approach outperforms the classical data-structures so significantly. \\n\\nMore generally we think the research area of using neural networks to replace data-structures, in this case a bloom filter, is so exciting because (we would argue) they are very rarely applied to data that contains no structure. Using a neural network to exploit redundancy and save space feels like a very impactful thing to do, and thought leaders within Computer Science (e.g. Jeff Dean, a co-author of the kraska et al. 2018 paper) appear to believe so. There are patterns to the rowkey schema that is used within our databases, there are patterns to blacklist URLs and IPs within our firewalls, there are patterns to our search queries. \\n\\nWe have re-written the model and experiment section to address your concerns!\\n\\n[1] https://deepmind.com/research/publications/natural-neural-networks/\\n[2] https://arxiv.org/abs/1605.08803\"}", "{\"title\": \"R3 response\", \"comment\": \"Thank you for this thoughtful and comprehensive review.\\n\\n-- We agree the recent Mitzenmacher arxiv posts should have been included in the related works, and they have now been added. \\n\\n-- Re. \\u2018strong empirical case for NBF\\u2026\\u2019 The fact that an LSTM does well on the MNIST class-based familiarity task is a useful data-point. However we do see a substantial gain for the database task. However the main problem with RNNs such as the LSTM (and DNC) is that they are not scalable. need to be trained to store N items by ingesting the N elements sequentially, and then backpropagating over the entire sequence. For large N this does not end up being scalable; e.g. 
for the large database task (Table 1) where N = 5,000. Thus we develop a memory model that does not rely on BPTT (akin to memory networks) but is compressive (unlike memory networks). \\n\\nThe crucial design-point of the model is that it uses a commutative write operation (addition), which is much simpler than the DNC & LSTM write (e.g. no gating, no squashing of the state) and is like a continuous relaxation of the Bloom Filter\u2019s write (logical or). A simple additive write scheme also means the model will produce the same external memory M regardless of the ordering of the inputs (because addition is commutative), which makes sense given that familiarity does not depend upon input ordering; thus we also do not get strange effects where older inputs have much worse performance than newer inputs (which will occur with an RNN). We discuss the model\u2019s motivation more explicitly in the revised text. \\n\\n-- Re. \u201cOne interesting thing to look at would be the workload partition between the learning component and the backup filter\u201d. This is a very interesting question you ask here. Your intuition is essentially correct: for class-based familiarity, the backup filter is used where the encoder essentially misclassifies a character (so it is very lightly used). For uniform sampling, the model essentially captures a small random subset of inputs but mostly relies on the backup bloom filter. For the imbalanced data, the model appears to store and characterise well the \u2018heavy hitters\u2019, i.e. frequent elements, in the state memory and uses the backup bloom filter for infrequent elements. \\n\\n-- Re. \u2018problem setting is loosely sketched\u2026\u2019: the reviewer is correct; we originally wrote the paper for readers familiar with the recent one-shot memory-augmented meta-learning literature (e.g. matching networks [Vinyals et al. 2016], MANN [Santoro et al. 2016]) but unfamiliar with Bloom Filters. This was an unfortunate choice; we have thus expanded on what we mean by meta-learning and described how the training regime works. It is the exact same training regime as that in Vinyals et al. 2016 and many follow-on works, only the classification problem is set membership, versus image classification. We have added a subsection with further explanation and an algorithm box with a succinct summary of the meta-learning training setup.\\n\\nWe will just briefly summarize the training setup here. We have a collection of sets {S_1, S_2, \u2026, S_m} reserved for training (each set contains n points to insert); and a collection of queries Q = {q_1, q_2, \u2026, q_L} and targets y_i = 1 if q_i in S and 0 otherwise. In the example of a database we can think of a given set S_i = {k_1, \u2026, k_N} as a set of rowkeys for a given file on disk (e.g. SSTable). We have many sets because we have many files; for training we have reserved some for an offline training routine.\\n\\nDuring training we calculate M = f_write(S), and then we calculate o_i = f_read(M, q_i), our query responses having observed the set S only once. We calculate the cross-entropy loss L = -\\\\sum_i [y_i log(o_i) + (1 - y_i) log(1 - o_i)] and backpropagate through the network (through the parameters controlling both the read, write, and encoder networks). One can consider the creation of M = f_write(S) as a fast one-shot learning procedure; the network learns a state which can help it solve the classification problem, \u201cis q in S?\u201d in one-shot. 
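The commutative, additive write from the design-point paragraph above can also be sketched directly (our illustration, following the motivation list in the earlier response: `A` is a fixed, non-trainable addressing matrix, `word_fn` a learned write-word network, and the top-k softmax gives sparse addressing).

```python
import numpy as np

def one_shot_write(M, A, z, word_fn, k=3):
    # Content-based sparse addressing followed by a purely additive write.
    # Because addition is commutative, the final memory M is independent
    # of insertion order -- a continuous analogue of the Bloom Filter's
    # logical-or write, with no gating, no squashing and no BPTT.
    scores = A @ z                      # match input embedding to addresses
    top = np.argsort(scores)[-k:]       # sparse: only k slots are touched
    a = np.exp(scores[top] - scores[top].max())
    a /= a.sum()                        # softmax over the selected slots
    M[top] += np.outer(a, word_fn(z))   # additive, order-invariant update
    return M
```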
The slow-moving \\u2018meta-learning\\u2019 process is in the network parameters, which are slowly being optimized over several set membership tasks, i.e. several different sets S_1:m, to be effective at one-shot classification. At test time, when we observe a new subset (or stream of elements) we can insert them with f_write in one-shot and the resulting data-structure is the external memory, M.\\n\\n-- Re. Bloom Filter space usage, we indeed used the analytical bound. We have clarified this in the text. We feel this is fair as it makes the task of beating a Bloom Filter\\u2019s space performance slightly more difficult (as the analytic bound is slightly more compressive than in-practice), and it absolves any dispute over the choice of Bloom Filter library / choice of hash function etc.\\n\\n-- We have clarified the false positive rate is with respect to the distribution of queries in the text.\"}", "{\"title\": \"Re: \\\"How is this different to Kraska et al. 2018\\\"\", \"comment\": \"One concern that reviewer 2 and 3 raised that we would like to quickly address is, \\u2018what is the point of this model vs kraska et al. 2018?\\u2019 The simple answer is that kraska et al. 2018 learns a set membership classifier by training a feed-forward neural classifier from scratch over many (hundreds to thousands) epochs of the storage set S, and it is compressed into the weights of the network. We propose a method where a neural network learns to produce a classifier with a single pass over S, and the set is represented by an external memory M of compressed activations.\\n\\nIn the case of a banned URL list, where S may not change very much, it may be tenable to use the kraska et al. 2018 approach with multiple epochs of gradient descent. In the case of databases that uses Bloom Filters (e.g. Google Bigtable, Apache Cassandra, Redis) where one may have thousands of separate bloom filters (one per disk file, say) which are dynamically updating, it is impractical to train thousands of separate networks from scratch. Thus a one-shot approach (our paper) is absolutely necessary, and this paper serves as an existence proof that significant compression can be obtained in this challenging setting.\"}", "{\"title\": \"Substantial revision - thank you!\", \"comment\": \"Thank you for reading the paper and leaving your detailed feedback. The unified message from all three of you is that the paper could have done a better job in motivating the model, and describing the training regime. We have perhaps \\u2018regularized\\u2019 the paper\\u2019s contents too heavily in the endeavor to be succinct. We provide an updated manuscript with additions to \\u2018model\\u2019, \\u2018experiments\\u2019, and \\u2018related work\\u2019 --- an extra page of text. The proposed architecture is now better explained and the training regime is much more explicit. It\\u2019s a much better paper thanks to your comments.\\n\\nIf you think the problem setting has no potential for impact, or if you think there are fundamental flaws in our research approach then we would really appreciate feedback on this (and a rejection). Otherwise we would ask you to read the updated manuscript and update your response. We will also respond to each comment individually.\\n\\n----\", \"key_changes\": [\"Re-written \\u2018model\\u2019 section with a much clearer motivation. Added more specific details (e.g. 
encoder architecture) to the model section, less reliance on appendix.\", \"Re-written experiments: explained meta-learning training in detail with algorithm box, explained why this is meta-learning / one-shot learning, added space comparison info, less reliance on appendix.\", \"Added speed comparison benchmarks (some peers were interested in us adding these numbers). The summary of these numbers is that the latency of the neural bloom filter is much higher than that of a bloom filter, but the throughput can be comparable if the model is run on a GPU.\"]}", "{\"title\": \"Unclear paper, difficult to understand how the algorithm works or why\", \"review\": \"The paper proposes a method whereby a neural network is trained and used as a data structure to assess approximate set membership. Unlike the Bloom filter, which uses hand-constructed hash functions to store data and a pre-specified method for answering queries, the Neural Bloom Filter learns both the Write function and the Read function (both are \\\"soft\\\" values rather than the hard binary values used in the Bloom filter). Experiments show that, when there is structure in the data set, the Neural Bloom Filter can achieve the same false positive rate with less space.\\n\\nI had a hard time understanding how the model is trained. There is an encoding function, a write function, and a query function. The paper talks about one-shot meta-learning over a stream of data, but doesn't make it clear how those functions are learned. A lot of details are relegated to the Appendix. For instance B.2 talks about the encoder architecture for one of the experiments. But even that does not contain much detail, and it's not obvious how this is related to one-shot learning. Overall, the paper is written from the perspective of someone fully immersed in the details of the area, but who is unable to pop out of the details to explain how it works to people who are not already familiar with the approach. I would suggest rewriting to give an end-to-end picture of how it works, including details, without appendices. The approach sounds promising, but the exposition is not clear at all.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting topic, some concerns\", \"review\": \"SUMMARY\\nThe paper proposes a neural network based architecture to solve the approximate set membership problem, in the distributional setting where the in-set and out-of-set elements come from two unknown and possibly different distributions.\\n\\n\\nCOMMENTARY\\nThe topic of the paper is interesting, and falls into the popular trend of enhancing classical data structures with learning algorithms. For the approximate set membership problem, this approach was already suggested by (Kraska et al. 2018) and studied further in (Mitzenmacher 2018a,b). The difference in the current paper is that the proposed approach relies on \\\"meta-learning\\\", apparently to facilitate online training and/or learning across multiple sets arising from the same distribution; this is what I gather from the introduction, even though as I write below, I feel this point is not properly explained.\\n\\nMy main issue with the paper is that its conceptual contribution seems limited and unclear. It suggests a specific architecture whose details seem mostly arbitrary, or at least this is the impression the reader is left with, as the paper does rather little in terms of discussing and motivating them or putting them in context. 
Moreover, since the solution ultimately relies on a backup Bloom Filter as in (Kraska et al. 2018), it is hard to not view it as just an instantiation of the model in (Kraska et al. 2018, Mitzenmacher 2018a) with a different plugging of learning component. It would help to flesh out and highlight what the authors claim are the main insights of the paper.\\n\\nAnother issue I suggest revising pertains to the writing. The problem setting is only loosely sketched but not properly defined. How exactly do different subsets coming into play? Specifically, the term \\\"meta-learning\\\" appears in the title and throughout the paper, but is never defined or explained. The authors should write out what exactly they mean by this notion and what role it plays in the paper. This is important since to my understanding, this is the main point of departure from the aforementioned recent works on learning-enhanced Bloom Filters.\\n\\nThe experiments do not seem to make a strong case for the empirical advantage of the Neural Bloom Filter. They show little to no improvement on the MNIST tasks, and some improvement on a non-standard database related task. One interesting thing to look at would be the workload partition between the learning component and the backup filter, meaning what is the rate of false negatives emitted by the former and caught by the latter, and how the space usage breaks down between them (vis-a-vis the formula in Appendix B). For example, it seems plausible that on the class familiarity task, the learning component simply learns to be a binary classifier for the chosen two MNIST classes and mostly ignores the backup filter, whereas in the uniform distribution setting, the learning component only memorizes a small number of true and false positives and defers almost the entire task to the backup filter. I am not sure what to expect on the intermediate exponential distribution task.\\n\\nOther comments/questions:\\n1. For the classical Bloom Filter, do the results reported in the experimental plots reflect the empirical false-positive rate measured in the experiment, or just the analytic bound?\\n2. On that note, it is worth noting that the false positive rate of the classical Bloom Filter is different than the one you report for the neural-net based architectures. The Bloom Filter FP probability is over its internal randomness (i.e. its hash functions) and is independent of the distribution of queries, which need not be randomized at all. For the neural-net based architectures, the measured FP rate is w.r.t. a specific distribution of queries. See the discussion in (Mitzenmacher 2018a), sections B-C.\\n3. The works (Mitzenmacher 2018a,b) should probably at least be referenced in the related work section.\\n\\n\\nCONCLUSION\\nWhile I like the overall topic of the paper, I currently find the conceptual contribution to be too thin, raising doubts on novelty and significance. In addition, the presentation is somewhat lacking in clarity, and the practical merit is not well established. Notwithstanding the public nature of ICLR submissions, I would suggest more work on the paper prior to publication.\\n\\n\\nREFERENCES\\nM. Mitzenmacher, A Model for Learned Bloom Filters and Related Structures, 2018, see https://arxiv.org/pdf/1802.00884.pdf.\\nM. 
Mitzenmacher, Optimizing Learned Bloom Filters by Sandwiching, 2018, see https://arxiv.org/pdf/1803.01474.pdf.\\n\\n(Update: score revised, see below.)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Details of the architecture not well motivated\", \"review\": \"The paper proposes a learnable bloom filter architecture. While the details of the architecture seemed a bit too complicated for me to grasp (see more on this later), via experiments the authors show that the learned bloom filters are more compact that regular bloom filters and can outperform other neural architectures when it comes to retrieving seen items.\\n\\nA bloom filter is fairly simple, K hash functions hash seen items into K bit vectors. During retrieval, if all of the bits hashed to are 1 then we say we've seen the query. I think there's simpler ways to derive a continuous, differentiable version of this which begs the question why the authors chose a relatively more elaborate architecture involving ZCA transform and first/second moments. Perhaps the authors need to motivate their architecture a bit better.\\n\\nIn their experiments, a simple LSTM seems to perform remarkably well (it is close to the best in 2 (a), (b); and crashes in (c) but the proposed technique is also outperformed by vanilla bloom filters in (c)). This is not surprising to me since LSTMs are remarkably good at remembering patterns. Perhaps the authors would like to comment on why they did not develop the LSTM further to remedy it of its shortcomings. Some of the positive results attained using neural bloom filters is a bit tempered by the fact that the experiments were using a back up bloom filter. Also, the neural bloom filters do well only when there is some sort of querying pattern. All of these details would seem to reduce the applicability of the proposed approach.\\n\\nThe authors have addressed most (if not all) of my comments in their revised version. I applaud the authors for being particularly responsive. Their explanations and additional experiments go a long way towards lending the insights that were missing from the original draft of the paper. I have upped my rating to a 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
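The classical data structure at issue in the row above works as the first review describes: k hash functions map each stored item to k bit positions, and a query is reported present iff all k of its bits are set, so false positives are possible but false negatives are not. The space figure the author response calls the "analytical bound" is the standard m = -n ln(p) / (ln 2)^2 bits for n items at false-positive rate p. The minimal Python sketch below illustrates both; it is not the paper's code, and the salted SHA-256 hashing is an arbitrary stand-in for the k hash functions.

import hashlib
import math

def bloom_bits(n, p):
    # Analytic space bound for a classical Bloom filter holding n items at
    # false-positive rate p: m = -n * ln(p) / (ln 2)^2 bits.
    return math.ceil(-n * math.log(p) / math.log(2) ** 2)

class BloomFilter:
    # Minimal classical Bloom filter: k hash functions set k bits per stored
    # item; a query is reported present iff all k of its bits are set.
    def __init__(self, n_items, fp_rate):
        self.m = bloom_bits(n_items, fp_rate)
        self.k = max(1, round(self.m / n_items * math.log(2)))  # optimal k
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Simulate k independent hash functions by salting SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

f = BloomFilter(n_items=1000, fp_rate=0.01)
f.add("stored-key")
assert "stored-key" in f                # stored items are never missed
print(bloom_bits(1000, 0.01), "bits")   # 9586 bits, ~1.2 KB for 1000 items at 1% FP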
BylkG20qYm
On Meaning-Preserving Adversarial Perturbations for Sequence-to-Sequence Models
[ "Paul Michel", "Graham Neubig", "Xian Li", "Juan Miguel Pino" ]
Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model leading to large degradation in performance. However, these perturbations are only indicative of a weakness in the model if they do not change the semantics of the input in a way that would change the expected output. Using the example of machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models taking meaning preservation into account and demonstrate that existing methods may not preserve meaning in general. Based on these findings, we propose new constraints for attacks on word-based MT systems and show, via human and automatic evaluation, that they produce more semantically similar adversarial inputs. Furthermore, we show that performing adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness without hurting test performance.
[ "Sequence-to-sequence", "adversarial attacks", "evaluation", "meaning preservation", "machine translation" ]
https://openreview.net/pdf?id=BylkG20qYm
https://openreview.net/forum?id=BylkG20qYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxPDLt7lN", "BJlBx-PjJV", "BJxrCaJtCX", "HylVuxePR7", "Ske4mggw0X", "BJgGtygvR7", "SJxazWdlCQ", "rJlanx6La7", "HJePML_8pX", "BylH3PXm6Q", "rkxBIwX7T7", "HkxhXwXQTQ", "rJlfD8QQpX", "Skexz1j3hm", "r1xFfsu9nm", "Byxb0Ssv3Q" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544947294539, 1544413420678, 1543204300814, 1543073899540, 1543073820152, 1543073657547, 1542648085303, 1542013108952, 1541993999437, 1541777324582, 1541777228602, 1541777187588, 1541776985894, 1541349127628, 1541208849234, 1541023176863 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1229/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/Authors" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1229/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper present a framework for creating meaning-preserving adversarial examples. It then proposes two attacks within this framework: one based on k-NN in the word embedding space, and another one based on character swapping.\\n\\nOverall, the goal of constructing such meaning-preserving attacks is very interesting. However, it is unclear how successful the proposed approach really is in the context of this goal. \\n\\nAdditionally, it is not clear how much novelty there is compared to already existing methods that have a very similar aim.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Important goal but the evaluation and relationship to the previous work needs improvement\"}", "{\"title\": \"Interesting framework but lack of novelty and unclear evaluation of attack\", \"review\": \"The authors present a framework for creating meaning-preserving adversarial examples, and give two methods for such attacks. One is based on k-nn in the word embedding space, and another is based on character swapping. The authors further study a series of automatic metrics for determining whether semantic meaning in the input space has changed, and find that the chrF method produces scores most correlated with human judgement of semantic meaning. 
The authors finally give an evaluation of the two methods.\", \"positive\": [\"The authors give a framework with the explicit goal of preserving meaning in attacks.\"], \"negative\": [\"Unclear novelty: previous work also gives the goal of preserving input meaning in attacks, even if the attacks themselves do not preserve meaning effectively (i.e., Zhao et al.)\", \"Unclear attack effectiveness: The CharSwap and kNN methods yield higher chrF scores than the \\\"unconstrained\\\" method, but it is unclear what this means in context. Similarly, the RDchrF scores show that the average output changes in meaning by some amount, but the authors do not show in context what this really means in terms of meaning.\"], \"details_of_negatives\": \"\", \"unclear_attack_effectiveness\": [\"Using chrF score as a proxy for human judgement is unmotivated. There is little analysis of the distribution of chrF scores compared to human judgement - the only analysis given is that a) there is a .586 correlation on French and .497 correlation on English, and b) that: \\\"90% of French sentence pairs to which humans gave a score of 4 or 5 in semantic similarity have a chrF > 78\\\". It would be good to plot the distribution of chrF score vs human judgement, so that the reader is able to tell what the chrF scores really mean in context here - a correlation score of approximately .5 is difficult to interpret.\", \"The chrF/RDchrF scores in the source and target spaces (respectively) as they relate to \\\"meaning-preservingness\\\" suffer from uninterpretability as a reader, both because of the point above and also because there are few examples of adversarial examples with their chrF/RDchrF scores given (only two).\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Still disagree\", \"comment\": \"I continue to stand by my original review. I think this effort is ambitious and interesting, but the limitation to preserving meaning makes the problem both unnecessarily difficult and insufficiently broad to be of practical utility at present. I don\\u2019t think the experiments carried out within this framework are very informative.\\n\\nOn the specific question of source-side perturbations that always produce OOVs, I can\\u2019t see that redefining \\u2018model\\u2019 makes any difference. The point is just that if the translation process is unaffected by a particular class of source perturbations, as in your experiments, it makes no sense to compare these to the resulting target-side perturbations.\"}", "{\"title\": \"Response to reviewer follow-up\", \"comment\": \"We thank the reviewer for their follow-up. Regarding issues raised by other reviewers, we have provided additional follow-up comments and we invite the reviewer to consult them.\"}", "{\"title\": \"Response to reviewer follow-up\", \"comment\": \"We thank the reviewer for following up. We have addressed some of the points raised by other reviewers in separate replies; let us summarize and reiterate these responses here for the sake of clarity:\\n\\n> (a) Measuring semantic similarity is an extremely hard problem in itself\\n\\nWhile this is certainly true, our general framework does not rely on the existence of a perfect model of semantic similarity to be of use. Indeed, we show in the paper that, while not ideal, an automatic metric still provides a positive signal to differentiate between e.g. unconstrained attacks (which are not expected to be meaning-preserving) and charswap attacks (which are expected to preserve meaning to some extent). This is useful because in cases where meaning preservation is suspected, but less straightforward (our kNN constraint for example), we can only rely on evaluation a posteriori, and chrF provides a good proxy.\\n\\n> (b) The correlation of chrF with human judgement doesn't inspire much confidence, especially given that it might be already inflated because of the varying number of perturbations introduced.\\n\\nFirst, we would like to reiterate that the main takeaway from the human judgement experiments is not so much the absolute value of the correlation coefficient---although of course it is important that there be a positive correlation. Rather, the difference between the metrics (BLEU, METEOR, chrF) is of greater interest.\\nThat being said, we looked up correlations within each edit-distance bin and the results are as follows:\", \"edit_distance_1\": \"BLEU = 0.351\\nMETEOR = 0.351\\nchrF = 0.486\", \"edit_distance_2\": \"BLEU = 0.403\\nMETEOR = 0.424\\nchrF = 0.588\", \"edit_distance_3\": \"BLEU = 0.334\\nMETEOR = 0.392\\nchrF = 0.559\\n\\nIn summary, our conclusions hold within each edit distance category (chrF is better than BLEU and METEOR with p<0.01, with the caveat that the sample size is now smaller for each subset). Therefore the good correlation is not due only to the metrics being able to detect different edit distances (= number of perturbations).\\n\\nIf, with this point addressed, the reviewer still thinks that the correlation of chrF with human judgement does not inspire confidence, we would be grateful if they could elaborate on their concerns further, so that we can either dispel them or ameliorate the experimental setup.\"}", "{\"title\": \"Response to reviewer follow-up\", \"comment\": \"We extend our thanks to the reviewer for their detailed follow-up and the thoughtful discussion it is generating.\\nWe will again attempt to summarize what we think are the reviewer\\u2019s main points and address them.\\n\\n> Meaning preservation is hard to define (example: what about adding a nonsense token?)\\n\\nIn the specific example given, we would refer to human judgement. Note that the case of ill-formed inputs is taken into account in the rating scale we propose in section 2.2.1 (specifically option 0 and the distinction between options 4 and 5).\\n\\n> Evaluating meaning preservation is very hard (essentially paraphrase detection), at least as hard as MT.\\n\\nWhile this is certainly true, our general framework does not rely on the existence of a perfect model of semantic equivalence to be of use. Indeed, we show in the paper that, while not ideal, an automatic metric still provides a positive signal to differentiate between e.g. unconstrained attacks (which are not expected to be meaning-preserving) and charswap attacks (which are expected to preserve meaning to some extent). This is useful because in cases where meaning preservation is suspected, but less straightforward (our kNN constraint for example), we can only rely on evaluation a posteriori, and chrF provides a good proxy.\\n\\n> Why focus on meaning-preserving attacks? 
What about cases where we alter the meaning of the source, but also alter the reference accordingly?\\n\\nWhile we think that the setting(s) described by the reviewer are highly relevant to adversarial attacks and MT in general, they fall out of the intended scope of this paper, which is adversarial perturbation where the resulting output is compared to the reference. So while our framework doesn\\u2019t cover the entirety of the area of adversarial attacks on MT (let alone MT robustness), we think it is still relevant for a non-negligible part of the literature (cf references in the paper, notably the last paragraph of Section 6).\\n\\n> If what we\\u2019re really doing is perturbing some characters in the source and measuring how many characters change in the target as a result, it seems clearer just to describe it that way\\n\\nWe don\\u2019t think that this description of the setting is completely accurate, as it leaves out perturbations where we change entire words (Unconstrained and kNN) in the source.\\n\\n> Since the model sees CharSwaped words as OOVs no matter how they were perturbed, the relationship between source chrF and target RDchrF is arbitrary (as source chrF can vary depending on the perturbation method while target RDchrF doesn\\u2019t change).\\n\\nWe argue that in MT, preprocessing (including replacing OOVs with a special token) is part of the model. From this perspective, an attack that would introduce 3 OOVs, but obtain these OOVs by eg. replacing words with nonsensical sequences of characters will not be the same as our CharSwap attack from the model\\u2019s point-of-view.\\n\\n> Source chrF are unlikely to be meaningful (to humans). The role of chrF here is to distinguish between fine degrees of preserving meaning, a task that seems well out of reach for raw character ngrams.\\n\\nIn the context of CharSwap, the role of chrF (or any other metric) is not to distinguish between character swaps that are meaning preserving or not. Rather, it is to distinguish between types of constraints (eg. CharSwap vs Unconstrained).\"}", "{\"title\": \"Still not convinced\", \"comment\": \"Thank you for such a detailed feedback.\\n\\nI agree with the authors that in the light of their focus on meaning preserving adversarial perturbations for NMT, their work is indeed novel. However, there are certain issues with the approach proposed which have been raised by other reviewers as well:\\n(a) Measuring semantic similarity is an extremely hard problem in itself. \\n(b) The correlation of chrF with human judgement doesn't inspire much confidence, especially given that it might be already inflated because of the varying number of perturbations introduced.\\n\\nDue to these fundamental issues, the framework proposed is not convincing. Lack of subword-based model experimentation also make the experimental section weak. Hence I'll keep my original rating.\"}", "{\"title\": \"Modified rating in light of other reviews\", \"comment\": \"I still like the overall mission of this paper and found it highly readable. However, after a more careful reading I do agree with the issues raised by the other reviewers. It seems that there is a fundamental question in the field as to a) how important meaning preservation is for adversarial attacks and b) how this should be assessed. 
In its current form, I don't think this paper provides satisfactory answers to these questions, but it does point at an important topic to be resolved.\"}", "{\"title\": \"Still not convinced by the framework and the experiments\", \"comment\": \"I appreciate the detailed and careful responses by the authors, but I feel that they don\\u2019t directly address the main concerns I had with this paper. I have tried to restate these more clearly below.\\n\\nRegarding the proposed framework, I don\\u2019t think it\\u2019s a good idea to try to limit the scope of adversarial attacks to ones that are \\u201cmeaning preserving\\u201d, for several reasons. First the notion is hard to define, especially when perturbations produce ill-formed input. For instance, does introducing a nonsense token at the beginning of a sentence preserve its meaning? This is not just a theoretical question, since such perturbations occur in real data, and can trigger \\u201challucinatory\\u201d behavior in NMT that is very different from what a human translator would do. Second, even if we had a satisfactory definition for \\u201cmeaning preserving\\u201d in this context, it would be very difficult to measure reliably. This is essentially the problem of paraphrase, and it\\u2019s not any easier than MT - in fact, harder in practice, due to the lack of parallel data. Finally, even if the above two problems were resolved, I don\\u2019t see the point in specifically excluding attacks that change meaning. On the contrary, changing words or grammatical attributes in constrained ways seems like very fertile ground to explore. For instance, \\u201cJohn loves Mary\\u201d -> \\u201cBob loves Mary\\u201d, \\u201cJohn sees Mary\\u201d, \\u201cJohn loved Mary\\u201d, \\u201cJohn is loved by Mary\\u201d, etc. Of course, these would invalidate any existing reference translation, but permissible changes to the reference could be checked automatically if the experiment were set up carefully. There is work on this from the burgeoning field of challenge sets for MT; see, eg, the \\u201cextra test suites\\u201d from WMT 2018. Furthermore, cases where the attack triggers hallucinatory behavior are relatively easy to detect, even without a reference. Such behavior is perhaps the most significant problem for MT robustness at the moment, and it is absent from the current paper.\\n\\nTurning to what the paper actually does, the basic idea to measure the discrepancy between source-side and target-side semantic difference associated with an adversarial attack makes sense in principle. In practice, given the current state of the art, such measurements are always going to amount to just surface distances like chrF as espoused here. If what we\\u2019re really doing is perturbing some characters in the source and measuring how many characters change in the target as a result, it seems clearer just to describe it that way. Absent careful constraints like limiting perturbations to word-internal swaps, many such changes won\\u2019t preserve meaning, but as I argue above, that\\u2019s not necessarily a bad thing.\\n\\nA final note about the central character-swap experiments. The technique is to find the three tokens that result in the biggest probability drop when replaced with OOVs (resulting from character swapping), then measure the resulting target-side relative delta chrF. That\\u2019s fine, although it\\u2019s not clear what is to be gleaned from the results. 
What\\u2019s not fine is to also measure the source-side chrF and compare this to the target-side chrF. From the model\\u2019s perspective, all these perturbations are exactly the same (three OOVs, regardless of how they were produced), so the relation between source- and target-side chrF is completely arbitrary. Even from a human perspective, the source chrF scores are unlikely to be meaningful. As the authors correctly observe, the vast majority of word-internal character swaps are meaning preserving in the sense that we automatically correct them when we read them in context. So the role of chrF here is to distinguish between fine degrees of preserving meaning, a task that seems well out of reach for raw character ngrams.\"}", "{\"title\": \"Author response for reviewer 2\", \"comment\": \"We thank the reviewer for their encouraging comments. We appreciate the importance that they place on the problem of evaluating meaning preservation in adversarial attacks.\", \"regarding_some_specific_comments\": \"> What about CharSwap where OOV is not forced?\\n\\nThis would indeed be an interesting experiment, which we will try to carry out should time permit. Note that an attractive property of OOV is that it renders the optimization problem (2) relatively simple, which might favor gradient-based attacks. Finally, as the reviewer pointed out, the effectiveness of the various attacks is not the focal point of the paper.\\n\\n> It would be nice to add correlation for each type of constraint as well to Table 2\", \"for_each_constraint\": \"\", \"unconstrained\": \"BLEU: 0.582\\nMETEOR: 0.572\\nchrF: 0.599\", \"knn\": \"BLEU: 0.533\\nMETEOR: 0.584\\nchrF: 0.606\", \"charswap\": \"BLEU: 0.273\\nMETEOR: 0.318\\nchrF: 0.382\\n\\nAs the reviewer can see, we observe the same trend for each kind of constraint (BLEU < METEOR < chrF), except for Unconstrained, where all metrics correlate highly with human judgment. However, none of those differences are statistically significant (with p<0.01). Note that the sample size is also smaller.\\n\\nWe will include these results in a revised version of the paper.\"}", "{\"title\": \"Author response for reviewer 1 (Pt 2)\", \"comment\": \"> training with OOVs (resulting from character swaps) is of course not likely to hurt performance on test sets containing few OOVs\\n\\nThis is a reasonable remark, although a discussion could be had as to whether changing the training distribution while keeping the test distribution and model capacity constant should not decrease test performance in general.\\n\\n> word-based systems are not state of the art, and it isn\\u2019t clear how much we could expect any conclusions to carry over to sub-word models\\n\\nWe acknowledge that this is a valid criticism of our work. Although we expect that our central contribution (clarifying the importance of evaluating meaning-preservation in adversarial perturbations) will carry over to sub-word models, we will be running experiments during the rest of the rebuttal period to validate this claim.\\n\\n> For kNN, being semantically related doesn\\u2019t imply that the relationship is synonymy\\n\\nWe agree with the reviewer; however,\\n(1): this is a somewhat good approximation in languages where we may not have access to precise synonymy information (like WordNet)\\n(2): our point is precisely that even though one may have preconceptions of the capacity of a class of perturbations to preserve meaning, meaning-preservation should still be evaluated explicitly.\\n\\n> If you\\u2019re just going to replace a word with an OOV symbol in any case, why go to the trouble of swapping characters?\\n\\nFor human and automatic evaluation, we still need to provide \\u201cvalid\\u201d sentences that don\\u2019t just replace words with \\u201c<unk>\\u201d. This is a quirk of word-based models, and our experiments with sub-word models should help resolve this.\\n\\n> Ebrahimi et al only work with classification, and don\\u2019t use IWSLT\\n\\nWe suspect the reviewer is referring to the Ebrahimi et al 2018b Hotflip paper; however, the 2018a reference points to the COLING paper \\u201cOn adversarial examples for character-level neural machine translation\\u201d, which indeed works with MT on IWSLT. Arguably the lettering is a bit confusing here; we will address this in a revised version.\", \"references\": \"[1]: Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. \\\"Explaining and Harnessing Adversarial Examples.\\\" arXiv preprint arXiv:1412.6572 (2014).\\n[2]: Ebrahimi, Javid, et al. \\\"HotFlip: White-box adversarial examples for text classification.\\\" Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Vol. 2. 2018.\\n[3]: Cheng, Minhao, et al. \\\"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples.\\\" arXiv preprint arXiv:1803.01128 (2018).\\n[4]: Ebrahimi, Javid, Daniel Lowd, and Dejing Dou. \\\"On Adversarial Examples for Character-Level Neural Machine Translation.\\\" Proceedings of the 27th International Conference on Computational Linguistics. 2018.\"}", "{\"title\": \"Author response for reviewer 1 (Pt 1)\", \"comment\": \"We thank the reviewer for their extensive and in-depth review, and are glad that the overall direction was deemed interesting and valuable, even if there were disagreements with the experimental details. We believe that a number of these disagreements have already been resolved in the paper, or can be resolved with additional experimentation, which we will try our hardest to do. Please see the detailed responses below, and we are happy to address any additional comments.\\n\\n> The framework is too narrow, doesn\\u2019t consider adversarial inputs that are not perturbations of existing samples\\n\\nThe reviewer is correct that our contribution focuses on adversarial perturbations only. We do not think that this setting is too narrow, as those kinds of attacks constitute a significant chunk of the literature on the topic (in NLP and other areas [1,2,3,4] inter alia).\\n\\n> The framework excludes perturbations with varying amounts of noise\\n\\nOn the contrary, our framework implicitly quantifies the amount of noise through the value of the semantic similarity metric. 
For example adversarial perturbations within edit distance 3 will have lower eg. BLEU score than perturbations within edit distance 1. Ultimately this depends on the chosen similarity metric.\\n\\n> Perturbations are limited to nearest neighbors and character swap\\n\\nPlease keep in mind that for both the human judgment experiments we include unconstrained perturbations as well, as explained in 4.2.\\n\\n> kNN and charSwap constraints are unnecessary because knowing the class of perturbation already gives you a lot of information about semantic distance\\n\\nThis is certainly somewhat true, but there are exceptions. For example, a nearest neighbor may be syntactically similar but semantically distant, or swapping characters may change the word to another meaning: \\u201ccare\\u201d -> \\u201cacre\\u201d. However, this is irrespective of our main point here, which is to show that meaning-preservation *should* be evaluated independently of what a-priori knowledge one has of the level of meaning-preservation.\\n\\n> Automatic metrics are too coarse to reliably distinguish among different perturbations\\n\\nResults in Table 3 seem to contradict this statement. However, we do understand that our metrics are not perfect, and future metrics may make the results even more significant.\\n\\n> I think the good correlation is likely due to the metrics being able to detect that, eg, changing 3 tokens makes things worse than changing only one\\n\\nFirst, we would like to point out that this does not explain the (statistically significant) difference in correlations between eg. BLEU and RDchrF in the source. However the reviewer raises an interesting point. We computed correlations with each edit-distance bin and the results are as follow:\", \"edit_distance_1\": [\"BLEU = 0.351\", \"METEOR = 0.351\", \"chrF = 0.486\"], \"edit_distance_2\": [\"BLEU = 0.403\", \"METEOR = 0.424\", \"chrF = 0.588\"], \"edit_distance_3\": \"- BLEU: 0.334\\n - METEOR: 0.392\\n - chrF: 0.559\\n\\nIn summary, our conclusions hold within each edit distance category (chrF is better than BLEU and meteor with p<0.01, with the caveat that the sample size is now smaller for each subset). Therefore the good correlation is not due only to the metrics being able to detect different edit distances.\\n\\nWe will add the results to the revised version of the paper.\\n\\n> The conclusions are not clear\\n\\nWe will try to clarify this in the paper. The null hypothesis here is that no one type of adversarial attack is better than the other at preserving meaning, and therefore meaning-preservation should not be evaluated. Our experiments show that this is not the case, and the choice of adversarial attack highly affects the amount that meaning is preserved. Thus, when a new variety of adversarial attack is conceived, meaning preservation should definitely be taken into account when comparing it to previous attacks.\\n\\n> it seems obvious a priori that perturbations intended to be relatively meaning preserving would indeed preserve meaning better than unconstrained ones\\n\\nWe agree with the reviewer that this appears obvious, particularly in hindsight, but the previous literature has not taken this into account in their evaluations whatsoever. The attacks compared in this paper are relatively straightforward and our conclusions are logical, but we expect that for future works that propose more sophisticated attacks we may not be able to predict the conclusions a-priori nearly as easily. 
Our point is that future work in the literature should take this problem into account when performing evaluation.\"}", "{\"title\": \"Author response for Reviewer 3\", \"comment\": \"We thank the reviewer for their time and their comments.\\n\\nBefore addressing specific comments, we would like to emphasize that the intended contribution of this paper is not so much about proposing new adversarial attacks as to raise the issue of explicitly evaluating meaning preservation in the context of adversarial attacks on sequence to sequence models. We respectfully disagree with the reviewer that this is a minor contribution, as there is a flourishing literature on adversarial attacks on NLP (and seq2seq) models that often sidesteps this important issue [1,2,3].\", \"now_on_to_specific_remarks\": \"> It is debatable whether kNN or CharSwap are indeed preserving meaning\\n\\nWe agree and the point of this work is precisely to show that this meaning preservation should not just merely be left as an assumption but actually evaluated via human judgement or automatic proxies thereof.\\n\\n> Major overlap with Belinkov & Bisk (2017)\\n\\nWe disagree with this assessment, While there are similarities in the choice of perturbations (notably CharSwap), B&B only look at random character replacements whereas we use a systematic approach to generate perturbations using gradients. Moreover, their contribution focuses on the brittleness of character level MT systems to noise, while ours is about the necessity to evaluate the level of meaning preservation of any kind of perturbation. As such, we think that these two works present very distinct contributions.\\n\\n> Word level models whereas SOTA models use subwords (BPE)\\n\\nThis is a fair criticism that has been raised by several reviewers. To clarify, we expect our main contribution (evaluating meaning-preservation is important) to carry over to subwords (or character models). We will be running experiments on BPE models to confirm this hypothesis before the end of the author response period. We would like to emphasize here again that the specific constraints are not the main contribution of the paper.\\n\\n> minor issues\\n\\nWe acknowledge these and will address those in the revised version. Specifically for (b): d is the dimension of word embeddings and for (d): the metric is RDchrF\", \"references\": \"[1]: Zhao, Zhengli, Dheeru Dua, and Sameer Singh. \\\"Generating natural adversarial examples.\\\" arXiv preprint arXiv:1710.11342 (2017).\\n[2]: Cheng, Minhao, et al. \\\"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples.\\\" arXiv preprint arXiv:1803.01128 (2018).\\n[3]: Ebrahimi, Javid, Daniel Lowd, and Dejing Dou. \\\"On Adversarial Examples for Character-Level Neural Machine Translation.\\\" Proceedings of the 27th International Conference on Computational Linguistics. 2018.\"}", "{\"title\": \"Not enough novelty for acceptance\", \"review\": \"The paper is about meaning-preserving adversarial perturbations in the context of Seq2Seq models. The paper proposes two ways of achieving that: (a) kNN - substituting word with nearest neighbors from the word embedding space, and (b) character swapping. It's debatable if character swapping is really meaning preserving since a lot of typos can really change the word. Similarly a case can be made about kNNs as well. But even if these are the best approximations we have, I have some major issues about the novelty of the work. 
Firstly, while the authors are trying to pitch the work in a new mold, there's major overlap with Belinkov and Bisk, 2018. The use of character swapping as an adversarial perturbation/noise and the subsequent benefits of training with adversarial noise have already been shown in Belinkov and Bisk, 2018. Secondly, the models tested are operating at word-level whereas most of the state-of-the-art systems nowadays are all using subword-level vocabularies. The character swap method presented would need to be adapted and some of the takeaways from results are hence less relevant for the current SOTA models. Coming to positives, the two real contributions for me are: (a) the result that chrF correlates better with human judgement, and (b) the measurement of adversarial perturbation's success measured via a sum that includes relative decrease in target score and the similarity of source sentence with the perturbed version. However, these are minor contributions and not enough to cover up the major flaws that I discussed above.\", \"some_other_minor_issues\": \"(a) Table 1: The first example has the CharSwap row missing the word \\\"faire\\\".\\n(b) Section 3.1.1: \\\"d\\\" is not defined when discussing time complexity. \\n(c) No separate section 3.1.2 required as it can be merged with 3.1.1 and would be more easy to understand without confusing the readers that there's some context change.\\n(d) Table 6 entries are not clearly defined. How is robustness measured?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting, but significant methodological and experimental problems.\", \"review\": \"Summary: Proposes a framework for performing adversarial attacks on an NMT system in which perturbations to a source sentence aim to preserve its meaning, on the theory that an existing reference translation will remain valid if this is done. Given source and target metrics for measuring similarity, an attack is deemed successful if the source difference is smaller than the relative decrease in target similarity to the reference. A first experiment measures correlation with human judgements of similarity between original and perturbed sentences, and concludes that chrF is better than BLEU and METEOR for this purpose. Next, standard gradient-based adversarial attacks are carried out, replacing the three tokens that result in the biggest drop in (approximate) reference probability, either 1) with no constraints, 2) constrained to character swaps of the original token, or 3) constrained be among the 10 closest embeddings to the original token. In comparisons on three language pairs from IWSLT, the constrained attacks are found to preserve meaning and yield more successful attacks according to the current framework. The Transformer architecture was also found to deal less well with attacks under the 10-closest embedding constraint. Finally, adversarial training with the character-swap constraint confers some robustness to this attack, without degrading performance on normal text.\\n\\nI think it is a good idea to formalize a method for carrying out and assessing adversarial attacks, but the framework proposed here seems too narrow, as it excludes adversarial inputs that are sensible but not a close perturbation of an existing source/reference pair, or ones that contain varying amounts of noise. 
It is more difficult to measure output quality for such attacks, but that doesn\\u2019t seem like a good reason for excluding them from what is intended to be a general framework. Note also that \\u201cmore difficult\\u201d doesn\\u2019t mean impossible, since good attacks can produce severely degraded output that is relatively easy to detect.\\n\\nI found some of the methodology questionable. Limiting source perturbations to character swaps and neighbors in embedding space, then using automatic metrics to measure semantic distance seems both unnecessary and unlikely to succeed. Unnecessary because knowing the class of perturbation already gives you a lot of information about semantic distance. Unlikely to succeed because automatic metrics are too coarse to reliably distinguish among different perturbations. This is particularly obvious in the case of using character ngram distance (chrF) to determine which character swaps preserve meaning best. The experiments that support the viability of automatic metrics in 4.2 do so by measuring correlation with human judgment when the number of perturbed tokens varies from 1 to 3. I think the good correlation is likely due to the metrics being able to detect that, eg, changing 3 tokens makes things worse than changing only one. To be convincing, the experiments would have to be repeated with number of perturbations fixed at 3, to match the setting in the remaining experiments. \\n\\nApart from the interesting observation about the Transformer\\u2019s performance on embedding-neighbor attacks mentioned above, it is difficult to know what conclusions to draw from the experiments. In 4.3 it seems obvious a priori that perturbations intended to be relatively meaning preserving would indeed preserve meaning better than unconstrained ones. Similarly, it is not surprising that character swaps that by design produce an OOV token will cause more damage than choosing a near neighbor in embedding space. In 5.3, training with OOVs (resulting from character swaps) is of course not likely to hurt performance on test sets containing few OOVs, and, as is known from previous work, it will improve robustness to the same kind of noise. A final comment about the experiments is that word-based systems are not state of the art, and it isn\\u2019t clear how much we could expect any conclusions to carry over to sub-word models.\\n\\nTo conclude, although this is an interesting initiative, both the framework and the methodology need to be tightened up.\", \"details\": \"End of 2.1: this would be easier to interpret if you had previously specified the allowed range for s_src.\\n\\n3.2 For kNN, being semantically related doesn\\u2019t imply that the relationship is synonymy, as would be required for meaning preservation. It also doesn\\u2019t imply that the substitution will be grammatical, which could jeopardize meaning preservation even if the words are synonyms.\\n\\nCharSwap seems odd. If you\\u2019re just going to replace a work with an OOV symbol in any case, why go to the trouble of swapping characters? 
No matter what actual semantic shift is caused by the swap, the model will always see exactly the same representation.\\n\\n4.1 \\u201cFollowing previous work on adversarial examples for seq2seq models (Belinkov & Bisk, 2018; Ebrahimi et al., 2018a)\\u201d - this is misleading: Ebrahimi et al only work with classification, and don\\u2019t use IWLST.\\n\\n4.1 Should mention the size of the training sets in this section.\\n\\nTable 1, first sentence, CharSwap example omits \\u201cfaire\\u201d.\\n\\n4.3, \\u201cAdding Constraints Helps Preserve\\u2026\\u201d last sentence: but here you need to reason in the opposite direction.\\n\\n5.2 It would be good to also give absolute scores for table 6, so we can judge how much the systems actually benefited, and whether these gains were statistically significant.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An inspiring study on adversarial attacks for natural language\", \"review\": \"The authors provide a natural definition of adversarial examples for natural language transduction (meaning-preserving on source side while meaning-destroying on target side) and a human judgment task to measure it. They then investigate three different ways of generating adversarial examples and show that a metric based on character n-gram overlap (chrF) has a stronger correlation with human judgment. Finally, they show that adversarial training with the attack most consistent with the introduced meaning-preservation criteria results in improved robustness to this type of attack without degradation in the non-adversarial setting.\\n\\nOverall this is a strong paper. It is well structured, the problem studied is highly interesting and the proposed meaning-preserving criteria and human judgement will be useful to anyone interested in adversarial attacks for natural language. While the studied attack methods are fairly primitive, the empirical results are still interesting.\\n\\nComments\\n---------------\\nI wish the authors would include experiments with CharSwap where OOV is not forced as I'm not sure the assumption that OOV is more meaning-destroying in the target side is necessarily true (one could also argue that since the models are already trained with OOV words, they may be more robust to OOV words than in-vocabulary words in the wrong context).\\n\\nIt would be nice to add correlation for each type of constraint as well to Table 2. The result would be even stronger if the experiment was replicated in the opposite direction or for another language pair as well.\\n\\nI don't understand why the adversarial output in the second example in table 4 has a RDchrF of zero (the word July is completely dropped).\\n\\nFrom Table 6 it looks like random sampling is actually slightly better than adversarial training in terms of robustness to CharSwap attacks in the Transformer model. Moreover, the benefit of adversarial rather than random sampling is quite small in the LSTM model as well. This could be made more clear in the text.\\n\\nIt would be interesting to see how adversarial training with the CharSwap method fares against the unconstrained and kNN attacks in table 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
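To make the attack constraints debated in this row concrete, the sketch below illustrates a word-internal character swap of the kind the CharSwap constraint imposes (first and last characters preserved, so the word stays readable to humans while becoming OOV to a word-level model), together with the success test as the reviews summarize it: an attack succeeds when the source-side difference is smaller than the relative decrease in target-side similarity. Both functions are reconstructions from the discussion rather than the authors' implementation, and the similarity scores are assumed to come from a metric such as chrF, scaled to [0, 1].

import random

def charswap(word):
    # Swap two adjacent characters strictly inside the word, so the first
    # and last characters are preserved; very short words are left alone.
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def attack_succeeds(s_src, s_tgt_before, s_tgt_after):
    # Sketch of the success criterion as summarized in the reviews: the
    # source-side change (1 - s_src, with s_src the similarity between the
    # original and perturbed inputs) must be smaller than the relative
    # decrease in target similarity against the reference. The paper's
    # exact formulation may differ in detail.
    rd_tgt = (s_tgt_before - s_tgt_after) / s_tgt_before
    return (1.0 - s_src) < rd_tgt

print(charswap("adversarial"))            # e.g. "adevrsarial": readable, but OOV
print(attack_succeeds(0.95, 0.60, 0.30))  # True: 5% source change vs 50% target drop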
r1GkMhAqYm
CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication
[ "Nikita Kitaev", "Jin-Hwa Kim", "Xinlei Chen", "Marcus Rohrbach", "Yuandong Tian", "Dhruv Batra", "Devi Parikh" ]
In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. The game involves two players: a Teller and a Drawer. The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces. The two players communicate via two-way communication using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents. We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel "crosstalk" condition which pairs agents trained independently on disjoint subsets of the training data for evaluation. We present models for our task, including simple but effective baselines and neural network approaches trained using a combination of imitation learning and goal-driven training. All models are benchmarked using both fully automated evaluation and by playing the game with live human agents.
[ "CoDraw", "collaborative drawing", "grounded language" ]
https://openreview.net/pdf?id=r1GkMhAqYm
https://openreview.net/forum?id=r1GkMhAqYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxoXmXFxV", "Ske8XQAwAX", "rJe0qGAvAX", "Hklr3W0D0m", "H1lU2zbJ6Q", "BJlFdvccnm", "BJllz-x5hX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545315106767, 1543131934029, 1543131798483, 1543131565514, 1541505709835, 1541216112970, 1541173512381 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1228/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1228/Authors" ], [ "ICLR.cc/2019/Conference/Paper1228/Authors" ], [ "ICLR.cc/2019/Conference/Paper1228/Authors" ], [ "ICLR.cc/2019/Conference/Paper1228/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1228/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1228/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers raise a number of concerns including no methodological novelty, limited experimental evaluation, and relatively uninteresting application with very limited real-world application. This set of facts has been assessed differently by the three reviewers, and the scores range from probable rejection to probable acceptance. I believe that the work as is would not result in a wide interest by the ICLR attendees, mainly because of no methodological novelty and relatively simplistic application. The authors\\u2019 rebuttal failed to address these issues and I cannot recommend this work for presentation at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Re: Exciting task! Not sure about model results\", \"comment\": \"Thank you for your feedback!\\n\\nWe've updated the related works section to include some of the references you provided and contrast the CoDraw task with these works.\\n\\nWe tried several drawer variations that we did not include in the submission due to space concerns. Replacing the LSTM in the drawer with a bag-of-words representation results in an average score of 3.04 (compared to 3.34 when using an LSTM). If we additionally remove the dependence on the current state of the canvas \\u2014 such that the drawer has no memory of prior events in the conversation -- the score drops further to 2.71. Both language understanding and longer-term reasoning are important for the drawer to achieve the performance we report in the paper.\"}", "{\"title\": \"Re: An artificial task for modeling and evaluation of goal-oriented dialogs\", \"comment\": \"Thank you for your comments!\\n\\nWe have updated the draft to make it more clear that the contributions of this paper are the new dataset, an associated evaluation protocol, and models that highlight challenges in the dataset as well as will serve as strong baselines for future work on this dataset.\\n\\nWe've added a new Section 6.1 to the paper discussing errors made by our models. These errors reflect the challenging aspects of the CoDraw task. We've also updated the appendix to include a greater variety of qualitative results. 
The examples there should also help establish a qualitative feel for how the various models differ.\\n\\nOur updated draft has a new Figure 3 to give an example of codebook-like language use by agents trained on the same data.\\n\\nWe have also re-written several paragraphs (including those that describe data preparation) for clarity based on your recommendations.\"}", "{\"title\": \"Challenges posed by the CoDraw task\", \"comment\": \"Thank you for your feedback!\\n\\nAs you point out, the task, dataset, and evaluation protocol are among the main contributions of this work. We also present several models that highlight challenges in the dataset and can serve as strong baselines for future work on this task. We have updated the draft to make it clearer what the contributions are.\\n\\nThere are substantial differences between CoDraw and previous work involving abstract scenes. Here are several:\\n\\n1. The need to faithfully reconstruct the entire image results in longer and more detailed descriptions. At the bottom of this comment, we provide an example of language associated with the same scene in different datasets.\\n2. Past work has focused mostly on scene generation from sentences. CoDraw, on the other hand, also requires doing the reverse (generating sentences from scenes). Our task is a natural way to \\\"ground\\\" image caption generation into an objective task with measurable evaluation. This is a contribution as well.\\n3. Our dataset records the Drawer's canvas at each step of the dialog. If all we had was a monologue with a single image at the end, we wouldn't be able to build most of the models discussed in this paper.\", \"the_task_poses_a_number_of_challenges\": \"1. The Teller must describe the scene in a sensible order. Describing the clip art pieces in a random order would be incoherent and hard to understand. People do this using a combination of planning and incorporating world knowledge: for example, it makes more sense to say \\\"there is a sandbox / the boy sits in the sandbox\\\" than \\\"the boy is in a sitting position / there is a sandbox below him\\\".\\n2. The Teller must describe all aspects of the scene without omitting anything important. Maintaining such long-term coherency is actually a significant challenge: simply training our LSTM-based Teller to minimize perplexity on the training data results in a model that frequently describes the same objects multiple times while omitting others entirely. As we show in our paper, a rule-based nearest-neighbor baseline outperforms the imitation-learning approach for this reason!\\n3. In the example below, the language includes transitions like \\\"next to the swing\\\" and \\\"inside the sandbox\\\" that maintain the flow of the dialog by referring back to previous parts of the conversation. A Teller bot should learn to generate such transitions.\\n\\nWe respectfully disagree with the implication that the clip art domain results in simplistic language. There are many tasks in NLP that use simple domains but real language, for example the SCONE dataset (Long et al. 2016). We're still dealing with noisy and ambiguous text written by humans. Here are a few sentences from CoDraw to give a flavor of some linguistic challenges:\\n\\n1. \\\"far left girl chest at skyline reaching hands to right, happy , on her right smallest cat facing left\\\". 
The words left/right can be used in multiple ways: \\\"far left\\\" is a position in the absolute frame of reference, \\\"on her right\\\" is a relative position in the girl's frame of reference, and \\\"facing left\\\" indicates direction.\\n2. \\\"in the center is a pine tree with an owl in it and it is wearing a wizards hat.\\\": will a model know to rule out the interpretation where the tree is wearing the hat?\\n\\n==========\\nSentences for the same scene, from multiple datasets:\\n\\nA. Zitnick and Parikh 2013:\\n\\\"Mike is upset because his sand castle got destroyed by Jenny's soccer ball.\\\"\\n\\nB. Zitnick et al. 2013:\", \"0\": \"Jenny kicked the soccer ball into the sandbox.\", \"1\": \"Mike was playing in the sandbox.\", \"2\": \"The playground had lots of toys to play with.\\n\\nC. CoDraw (this work):\", \"t\": \"that is everything\", \"d\": \"ok\"}", "{\"title\": \"Mostly a dataset paper, writing is not coherent, results are not convincing\", \"review\": \"In this paper a new task namely CoDraw is introduced. In CoDraw, there is a teller who describes a scene and a drawer who tries to select clip art component and place them on a canvas to draw the description. The drawing environment contains simple objects and a fixed background scene all in cartoon style. The describing language thus does not have sophisticated components and phrases. A metric based on the presence of the components in the original image and the generated image is coined to compute similarity which is used in learning and evaluation. Authors mention that in order to gain better performance they needed to train the teller and drawer separately on disjoint subsets of the training data which they call it a cross talk.\", \"comments_about_the_task\": \"The introduced task seems to be very simplistic with very limited number of simple objects. From the explanations and examples the dialogs between the teller and drawer are not natural. As explained the teller will always tell \\u2018ok\\u2019 in some of the scenarios. How is this different with a system that generates clip art images based on a \\u201cfull description\\u201d? Generating clip arts based on descriptions is a task that was introduced in the original clip art paper by Zitnick and Parikh 2013. This paper does not clarify how they are different than monologs of generating scenes based on a description.\", \"comments_about_the_method\": \"I couldn\\u2019t find anything particularly novel about the method. The network is a combination of a feed forward model and an LSTM and the learning is done with a combination of imitation learning and REINFORCE.\", \"comments_about_the_experimental_results\": \"It is hard to evaluate whether the obtained results are satisfying or not. The task is somehow simplistic since there a limited number of clip art objects and the scenes are very abstract which does not have complications of natural images and accordingly the dialogs are also very simplistic. All the baselines are based on nearest neighbors.\", \"comments_about_presentation\": \"The writing of this paper needs to be improved. The current draft is not coherent and it is hard to navigate between different components of the method and different design choices. Some of the design choices are not experimentally proved to be effective: they are mentioned to be observed to be good design choices. 
It would be more effective to show the effect of these design choices by some ablation study.\\n\\nThere are many details about the method which are not fully explained: what are the details of your imitation learning method? Can you formalize your RL fine-tuning part with the use of some formulations? With the current format, the technical part of the paper is not fully understandable.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An artificial task for modeling and evaluation of goal-oriented dialogs\", \"review\": \"The paper proposes a game of collaborative drawing where a teller is\\nto communicate a picture to a drawer via natural language. The picture\\nallows only a small number of components and a fixed and limited set\\nof detailed variations of such components.\", \"pros\": \"The work contributed a dataset where the task has relatively objective\\ncriteria for success. The dataset itself is a valuable contribution\\nto the community interested in the subject. It may be useful for\\npurposes beyond those it was designed for.\\n\\nThe task is interesting and its visual nature allows for easy inspection\\nof the reasons for successes or failures. It provides reasonable grounding\\nfor the dialog. By restricting the scope of variations through the options\\nand parameters, some detailed aspects of the conversation could be explored\\nwith carefully controlled experiments.\\n\\nThe authors identified the need for and proposed a \\\"crosstalk\\\" protocol\\nthat they believe can prevent leakage via common training data and\\nthe development of non-language, shared codebooks that defeat the purpose\\nof focusing on the natural language dialog.\\n\\nThe setup allows for pairing of human and human, machine and machine,\\nand human and machine for the two roles, which enables comparison to\\nhuman performance baselines from several perspectives.\\n\\nThe figures give useful examples that are of great help to the readers.\\n\\nCons.:\\n\\nDespite the restriction of the task context to creating a picture with\\nseverely limited components, the scenario of the dialogs still has many\\ndetails to keep track of, and many important facets are missing in the\\ndescriptions, especially on the data.\\n\\nThere is no analysis of the errors. The presentation of\\nexperimental results stops at the summary metrics, leaving many\\ndoubts on why they are as such.\\n\\nThe work feels somewhat premature in its exploration of the models\\nand the conclusions to warrant publication. At times it feels like the\\nauthors do not understand enough why the algorithms behave as they do.\\nHowever, if this is considered as a dataset description paper and\\nthe right expectation is set in the openings, it may still be acceptable.\\n\\nThe completed work warrants a longer report when more solid conclusions\\ncan be drawn about the model behavior.\\n\\nThe writing is not organized enough and it takes many back-and-forth rounds\\nof checking during reading to find out about certain details that are given\\nlong after their first references in other contexts. Some examples are\\nincluded in what follows.\\n\\nMisc.\\n\\nSection 3.2, datasets of 9993 dialogs:\\nAre they done by humans? 
Later it is understood from further descriptions.\\nIt is useful to be more explicit at the first mention of this data collection effort.\\nThe way they relate to the 10020 scenes is mentioned as \\\"one per scene\\\", with a footnote on some being removed.\\nDoes it mean that no scene is described by two different people? Does this\\nlimit the usefulness of the data in understanding inter-personal differences?\\n\\nLater in the descriptions (e.g. 4.1 on baseline methods) the notion of a\\ntraining set is mentioned, but up to then there is no mention of how\\ntraining and testing (novel scenes) data are created.\\nIt is also not clear what the training data include: scenes only?\\nDialogs associated with specific scenes? Drawer actions?\\n\\nSection 4.1, what is a drawer action? How many possibilities are there?\\nFrom the description of \\\"rule-based nearest-neighbor drawer\\\" they seem to\\ncorrespond to \\\"teller utterance\\\".\\nHowever it is not clear where they come from. What is an example of a drawer action?\\nAre the draw actions represented using the feature vectors discussed in the later sections?\\n\\nSection 5.1, the need for the crosstalk protocol is an interesting observation,\\nhowever based on the description here, a reader may not be able to understand\\nthe problem. What do you mean by \\\"only limited generalization has taken place\\\"? Any examples?\\n\\nSection 5, near the end: the description of the dataset splits is too cryptic.\\nWhat is being split? How is val used in this context?\\n\\nAll in all, the data preparation and partitioning descriptions need substantial clarification.\", \"section_6\": \"Besides reporting averaged similarity scores, it will be useful to report some error analysis.\\nWhat are the very good or very bad cases? Why did that happen?\\nAre the bad scenes constructed by humans the same as those bad scenes\\nconstructed by machines? Do humans and machines tend to make different errors?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Exciting task! Not sure about model results\", \"review\": \"This paper presents CoDraw, a grounded and goal-driven dialogue environment for collaborative drawing. The authors argue convincingly that an interactive and grounded evaluation environment helps us better measure how well NLG/NLU agents actually understand and use their language \\u2014 rather than evaluating against arbitrary ground-truth examples of what humans say, we can evaluate the objective end-to-end performance of a system in a well-specified nonlinguistic task. They collect a novel dataset in this grounded and goal-driven communication paradigm, define a success metric for the collaborative drawing task, and present models for maximizing that metric.\\n\\nThis is a very interesting task and the dataset/models are a very useful contribution to the community. I have just a few comments below:\\n\\n1. Results:\\n1a. I\\u2019m not sure how impressed I should be by these results. The human\\u2013human similarity score is pretty far above those of the best models, even though MTurkers are not optimized (and likely not as motivated as an NN) to solve this task. You might be able to convince me more if you had a stronger baseline \\u2014 e.g. a bag-of-words Drawer model which works off of the average of the word embeddings in a scripted Teller input. Have you tried baselines like these?\\n1b. 
Please provide variance measures on your results (within model configuration, across scene examples). Are the machine\\u2013machine pairs consistently performing well together? Are the humans? Depending on those variance numbers you might also consider doing a statistical test to argue that the auxiliary loss function and RL fine-tuning offer a meaningful improvement over the Scene2seq base model.\\n\\n2. Framing: there is a lot of work in collaborative / multi-agent dialogue models which you have missed \\u2014 see refs below to start. You should link to this literature (mostly in NLP) and contrast your task/model with theirs.\\n\\nReferences\\nVogel & Jurafsky (2010). Learning to follow navigational directions.\\nHe et al. (2017). Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings.\\nFried et al. (2018). Unified pragmatic models for generating and following instructions.\\nFried et al. (2018). Speaker-follower models for vision-and-language navigation.\\nLazaridou et al. (2016). The red one!: On learning to refer to things based on their discriminative properties.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
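Several of the reviews in this record refer to CoDraw's scene-similarity metric ("a metric based on the presence of the components in the original image and the generated image"). As a rough, hypothetical sketch of how such a component-presence score could be computed — the attributes, partial-credit weighting, and function below are illustrative assumptions, not the paper's actual definition:

```python
# Hypothetical component-presence similarity for clip-art scenes, in the
# spirit of the metric the CoDraw reviews describe. Attribute set and the
# equal weighting are assumptions, not the paper's definition.

def scene_similarity(target, reconstruction, pos_tolerance=0.1):
    """Score in [0, 1]: 1.0 when every target clip art piece appears in the
    reconstruction with matching attributes and a nearby position.

    Each scene maps a clip-art id to {'x', 'y' (normalized), 'flip', 'size'}.
    """
    if not target:
        return 1.0
    score = 0.0
    for piece_id, t in target.items():
        r = reconstruction.get(piece_id)
        if r is None:
            continue  # a missing component earns no credit
        credit = 1.0  # credit for presence
        credit += float(r["flip"] == t["flip"])   # discrete attribute match
        credit += float(r["size"] == t["size"])
        dist = ((r["x"] - t["x"]) ** 2 + (r["y"] - t["y"]) ** 2) ** 0.5
        credit += float(dist <= pos_tolerance)    # position within tolerance
        score += credit / 4.0
    return score / len(target)


target_scene = {
    "boy": {"x": 0.3, "y": 0.7, "flip": False, "size": 1},
    "sandbox": {"x": 0.35, "y": 0.8, "flip": False, "size": 2},
}
drawn_scene = {
    "boy": {"x": 0.32, "y": 0.68, "flip": False, "size": 1},
}
print(scene_similarity(target_scene, drawn_scene))  # 0.5: one piece missing
```

A score of this shape rewards placing the right pieces approximately correctly, which is why the reviews' requests for per-component error analysis are natural follow-ups.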
Hyg1G2AqtQ
Variance Reduction for Reinforcement Learning in Input-Driven Environments
[ "Hongzi Mao", "Shaileshh Bojja Venkatakrishnan", "Malte Schwarzkopf", "Mohammad Alizadeh" ]
We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system. Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking. Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns. Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training. We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines. We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs. Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.
[ "reinforcement learning", "policy gradient", "input-driven environments", "variance reduction", "baseline" ]
https://openreview.net/pdf?id=Hyg1G2AqtQ
https://openreview.net/forum?id=Hyg1G2AqtQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1xnLMIgeV", "Bkx4lJjByV", "HklKcMENAm", "ByxufjGtaX", "SylcuqGYp7", "SylCvSzKaX", "H1xPbNztp7", "S1e9N1YX6m", "ryxDVNFc2X", "BJx1GFr92X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544737364376, 1544036076001, 1542894225197, 1542167311611, 1542167153572, 1542165861997, 1542165503047, 1541799730042, 1541211183213, 1541196038626 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1227/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1227/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1227/Authors" ], [ "ICLR.cc/2019/Conference/Paper1227/Authors" ], [ "ICLR.cc/2019/Conference/Paper1227/Authors" ], [ "ICLR.cc/2019/Conference/Paper1227/Authors" ], [ "ICLR.cc/2019/Conference/Paper1227/Authors" ], [ "ICLR.cc/2019/Conference/Paper1227/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1227/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1227/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes an input-dependent baseline function to reduce variance in policy gradient estimation without adding bias. The approach is novel and theoretically validated, and the experimental results are convincing. The authors addressed nearly all of the reviewer's concerns. I recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}", "{\"title\": \"Maintaining score\", \"comment\": \"The discussions have not change my opinion that this is an important and useful paper for practitioners applying RL to very noisy control settings.\"}", "{\"title\": \"Paper update\", \"comment\": [\"We again thank all reviewers for their comments, and have updated our paper accordingly.\", \"Specifically, the major changes are:\", \"In \\u00a74.1, we improved the clarity of our notations by explicitly defining the observation \\\\omega_t at each time t. We used \\\\omega_t instead o_t because the letter o is visually too similar to a. We updated our theorems and proofs using this notation.\", \"We extended the case 2 of input-driven MDP to include the POMDP case (Figure 3b), and have showed all our derivation and conclusions apply.\", \"We added a comparison to the meta-policy optimization approach (Clavera et al. 2018) in Appendix N.\", \"In addition to mentioning our findings with LSTM in \\u00a75, we also added the corresponding learning curves in appendix G.\", \"We updated our motivating example (\\u00a73) to give a better intuition.\", \"We shortened the policy gradients description in the introduction and background sections, and moved the proof of theorem 1 into the main text in \\u00a74.1.\", \"In \\u00a75, we added a discussion of when we expect the gain of MAML to further exceed that of the multi-value-network approach.\", \"Please let us know if you have further comments. Thanks!\"]}", "{\"title\": \"Responses to other questions\", \"comment\": \"-- Why do the rewards depend on the input process conditioned on the state? \\n\\nTo clarify, by \\u201cstate dynamics and rewards depend on the input process,\\u201d we mean that the input process can affect the rewards because it affects the state transitions. However, our model indeed covers the general case, in which the reward might depend on both the state and the input. 
For example, consider a robotics task in which the reward is the speed of the robot, the state is the current position of the robot\\u2019s joints, and the input is an external force applied to the robot at each step. The speed of the robot (reward) depends on the force (input) even with knowledge of its current position (state). \\n\\n-- What makes the input process we considered distinct from any stochastic dynamics?\\n\\nThe main distinction here is that the input process must be \\u201cexogenous,\\u201d i.e. it doesn\\u2019t depend on the state and actions; see the graphical models in Figure 3. This property is necessary for the input-dependent baseline to not introduce bias. \\n\\n-- A strong action could end up with a lower-than-average return if the input sequence following the action is unfavorable -> vague\\n\\nThis sentence was trying to give an intuition for why the variance in reward caused by the input process can confuse a policy gradient algorithm. We will rephrase the sentence and explain this better. We will also provide more intuition about this point in Section 3. \\n\\nConsider the load balancing example in Section 3. The return (total reward) for an action depends on the job arrival sequence that follows that action. For example, if the arrivals consist of a burst of large jobs, the reward (negative number of jobs in the system) will be poor, regardless of the action. We will add this intuition to Section 3. \\n\\n-- Is just the baseline input dependent or does the policy need to be input dependent as well? \\n\\nThe baseline depends on the sequence of input values z_{t:\\\\infty}, but the policy can only depend on the input observed at the current step t. Note that the policy cannot depend on the future input values, since at time t, the agent has no way of knowing z_{t+1,\\\\infty}. \\n\\n-- \\u201cIn input-driven MDPs, the standard input-agnostic baseline is ineffective at reducing variance\\u201d -> can you give some more intuition/proof as to why.\\n\\nAs mentioned above, we will add more intuition for this to Section 3. \\n\\n-- More discussions about theorem 1 and 2.\\n\\nThanks for the suggestion! We will trim the discussion of policy gradient and include the proof of theorem 1.\\n\\n-- Algorithm 1 should use eqn 4.\\n\\nYes, it is more appropriate to refer to Equation 4 in Algorithm 1. We will use this.\\n\\n-- Is it possible to know z at each step? What if z is not observable and hard to infer\\n\\nIn many applications, the input process is naturally observable to the agent. For example, in most computer systems applications, the inputs to the environments (e.g., network bandwidth, workload) are measured or readily observed. However, even if the agent does not observe the input at each step, our proposed approach (multi-value-network and meta-learning) can still work as long as we can repeat the same input sequence during training. As discussed in Section 5, this can be done with a simulator (e.g., control the wind in MuJoCo simulator) or by repeating input sequences (e.g., repeat the same workload for a load balancing agent) in an actual system. For future work, we think that investigating efficient architectures for input-dependent baselines for cases where the input process cannot be controlled in training is an interesting direction.\\n\\n-- Meta Learning Priors for Efficient Online Bayesian Regression\\n\\nThank you for the suggestion. This is a relevant piece of work on applying meta learning for faster adaptation of GP regression. 
We will add it in the related work section.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the constructive comments!\\n\\nWe first address the major comments and then respond to the detailed questions in a separate comment.\\n\\n1. [What is observed?] During policy inference at each MDP step t, the agent observes s_t and z_t (the current value of the input process). Therefore the policy can depend on the current observed value of the input z_t, but not on the future input sequence z_{t:\\\\infty} (which has not yet happened). At training time, however, the baseline computation for step t depends on the entire future sequence z_{t:\\\\infty}. As explained in the beginning of Section 4.1, this is possible because the entire input sequence is known at training time. \\n\\nWe realize that the notation was confusing. As mentioned in the 2nd paragraph of page 5, we use s_t to denote the tuple (s_t, z_t) for the derivations. We will improve the notation by explicitly defining the observed signal, o_t = (s_t, z_t), which the policy takes as input at each step t.\\n\\n2. [Additional comparisons to prior work] Policy adaptation approaches like Clavera et al. learn a \\u201cmeta-policy\\u201d that can be quickly adapted for different environments. By contrast, our goal is to learn a single policy that performs well in the presence of a stochastic input process. In other words, we are improving policy optimization itself in environments with stochastic inputs. We do not consider transfer of a policy trained for one environment to another. In terms of training a common policy, our work is more related to RARL (Pinto et al.), which we discuss and compare with in Appendix L.\\n\\nIt is worth noting that approaches like Clavera et al. are well-suited to handling model discrepancy between training and testing. However, in our setting, there isn\\u2019t any model discrepancy. In particular, the distribution of the input process is the same during training and testing. Nonetheless, our work shows that standard policy gradient methods have difficulty in input-driven environments, and input-dependent baselines can substantially improve performance.\\n\\nTherefore our work is orthogonal and complementary to policy adaptation approaches. Since some of these methods require a policy optimization step (e.g., Section 4.2 of Clavera et al. 2018), our input-dependent baseline can help these methods by reducing variance during training. Appendix L shows an example of such improvements for RARL. We will try to also add an example for a policy adaptation approach. \\n \\n3. [The LSTM method for learning input-dependent baselines] The LSTM suffers from unnecessarily high complexity in training. In our experiments, we considered an LSTM approach but ruled it out when initial experiments showed that it requires orders of magnitude more data to train than conventional baselines for our environments (cf. beginning of Section 5). We will add the learning curves with the LSTM baseline in the appendix.\\n\\n4. [The meta-learning baseline] The actual performance gain for a meta-learned baseline over a multi-value network is environment-specific. Conceptually, the multi-value network falls short when the task requires training with a large number of input instantiations to generalize to new input instances (a toy sketch of the multi-value-network baseline is given below). We have not analyzed how policy quality varies with the number of input instantiations considered during training. 
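As referenced in point 4 above, here is a toy sketch of the multi-value-network baseline: one value head per training input instantiation, so the advantage subtracts the return variation induced by the realized input sequence. The linear heads and synthetic returns are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
K, state_dim = 10, 4          # K fixed input instantiations reused in training

W = np.zeros((K, state_dim))  # one linear value head V_k(s) = W[k] @ s + b[k]
b = np.zeros(K)

def fit_head(k, states, returns, lr=0.05, steps=500):
    # plain gradient descent on the squared regression error of head k
    for _ in range(steps):
        err = states @ W[k] + b[k] - returns
        W[k] -= lr * (err[:, None] * states).mean(axis=0)
        b[k] -= lr * err.mean()

residuals, raw = [], []
for k in range(K):
    states = rng.normal(size=(256, state_dim))
    # returns shift strongly with the active input sequence k -- exactly the
    # variance a single input-agnostic baseline cannot subtract away
    returns = 5.0 * k + states @ np.ones(state_dim) + rng.normal(scale=0.1, size=256)
    fit_head(k, states, returns)
    residuals.append(returns - (states @ W[k] + b[k]))  # input-dependent advantage
    raw.append(returns)

pooled = np.concatenate(raw)
print("input-dependent advantage std:", np.concatenate(residuals).std())
print("input-agnostic advantage std :", (pooled - pooled.mean()).std())
```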
However, we expect that this depends on a variety of factors, such as the distribution of the input process (e.g., from a large deviations standpoint); the time horizon of the problem; the relative magnitude of the variance due to the input process compared to other sources of randomness (e.g., actions). The advantage of the meta-learning approach compared to the multi-value network approach is that we can train with an unbounded number of input instantiations. We will add this discussion to Section 5.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the insightful comments!\", \"regarding_these_comments\": \"1) UVFAs predict values based on specific goals. These methods require taking \\u201cgoal embedding\\u201d explicitly as input. In our formulation of input driven environment, however, there aren\\u2019t really different goals in each task. Nonetheless, one can still use similar idea to take the exogenous sequence as an explicit input in the value function, using recurrent neural network structures such as LSTM. We actually did this and reported our findings in the paper, in the beginning of Section 5: \\u201cA natural approach to train such baselines is to use models that operate on sequences (e.g., LSTMs). However, learning a sequential mapping in a high-dimensional space can be expensive. We considered an LSTM approach but ruled it out when initial experiments showed that it requires orders of magnitude more data to train than conventional baselines for our environments.\\u201d We intend to add an experiment showing the learning curve with an LSTM approach to the appendix.\\n\\n2) The point of this example is to show that the variance from the input process can negatively affect the policy, even for an extremely simple task. In this 2-server load balance task, the agent should just learn the simple optimal policy of joining the shortest queue (visualized in Figure 2(c) left). However, the variance in the input sequence makes the PG unable to converge to the optimal. Here, we compared the vanilla A2C with the standard state-only baseline to that with the input-dependent baseline. It is clear that vanilla A2C performs suboptimally (Figure 2(b) right); and this is due to the significant difference in the PG variance in different baselines (Figure 2(b) left, notice the log scale). \\n\\nThe reason that vanilla A2C is ineffective in this example is that the return (total reward) for an action depends on the job arrival sequence that follows that action. For example, if the arrivals consist of a burst of large jobs, the reward (negative number of jobs in the system) will be poor, regardless of the action. We will expand the discussion in Section 3 to provide more details and intuition.\", \"about_the_input_to_the_baseline_and_policy\": \"the input-dependent baseline takes state s_t and the entire future input process z_{t:\\\\infty} as input; the state-only baseline only takes s_t as input; in both cases, the policy network takes s_t and z_t (only at time t) as input.\\n\\n3 and 4) Thank you for this interesting comment. We focused on the two cases in Figure 3 mainly because they result in fully observable MDPs, and in many applications of interest, the input is readily observable. However, the scenario in which the input z_t is not observed is indeed also interesting. This case results in a POMDP. \\n\\nInput-dependent baselines reduce variance in the POMDP case as well. Our results (e.g., Theorems 1 and 2) also apply to this setting. 
In fact, in the POMDP case, the input process does not even need to be Markov; it can be any general stochastic process that does not depend on the states and actions. \\n\\nIntuitively, the reason is that much of the variance in PG in input-driven environments is caused by the variance in the input sequence that follows an action. For example, in the windy walker environment (Figure 1c), it is the entire sequence of wind after step t that affects the total reward, not just the wind observation at time t. As a result, regardless of whether or not the input is observed at each step t, using the entire input sequence in the baseline reduces variance. \\n\\nInterestingly, the HalfCheetah with floating tiles environment (Figure 1d) is actually a POMDP---the agent only observes the torques of the cheetah\\u2019s body but not the buoyancy of the tiles. As shown in Figure 4 (middle), our technique helped reduce variance and improve PG performance. Also, we re-ran our experiments on the Walker2d with wind environment without providing z (the wind) to the policy. The results show that our input-dependent baseline improves the policy performance similar to the case where z is observed. We will shortly add this result to the paper. \\n\\nIn summary, we are making the following changes to the paper. We will add a case for POMDP to Figure 3, and discuss the derivation for the POMDP (which is almost identical to the MDP case). We will also include the POMDP version of Walker2d with wind result. \\n\\nWe also realize that the notation was confusing. As mentioned in the 2nd paragraph of page 5, we were using s_t to denote the tuple (s_t, z_t) in the derivations. We will improve the notation by explicitly defining the observed signal, o_t, used by the policy in each case. For the MDP case, o_t = (s_t, z_t). For the POMDP case, o_t = s_t.\"}", "{\"title\": \"Author response\", \"comment\": \"We appreciate your encouraging comments!\\n\\nWe agree that the traffic control environment is a perfect fit for the techniques we proposed. Thanks for the suggestions and the pointers to the existing simulators---we will mention these potential applications in the introduction/conclusions. \\n\\nIn our submission, we moved the proofs to appendix due to space constraints. We will trim down the text of the facts about PG methods to clear up rooms for the proof of Theorem 1.\"}", "{\"title\": \"Interesting premise, needs more clarity/comparisons\", \"review\": \"\", \"introduction\": \"\\u201cSince the state dynamics and rewards depend on the input process\\u201d -> why do the rewards depend on the input process conditioned on the state? \\n\\nDoes the scenario being considered basically involve any scenario with stochastic dynamics? Or is the fact that the disturbances may come from a stateful process what makes this distinct?\\n\\nif the input sequence following the action -> vague, would help if this would just be written a bit more clearly. \\n\\nIs just the baseline input dependent or does the policy need to be input dependent as well? From later reading, this point is still quite confusing. One line says \\u201cAt time t, the policy only depends only on (st, zt).\\u201d. Another line says that the policy is pi_theta(a|s), with no mention of z. I\\u2019m pretty confused by the consistency here. This is also important in the proof of Lemma 1, because P(a|s,z) = pi_theta(a|s). Please clarify this.\", \"section_4\": \"Is the IID version of Figure 3 basically the same as stochastic dynamics? 
(Case 2)\\n\\nSection 4.1\\n\\u201cIn input-driven MDPs, the standard input-agnostic baseline is ineffective at reducing variance\\u201d -> can you give some more intuition/proof as to why. \\n\\nIn Lemma 2, how come the Q function is dependent on z, but the policy is only dependent on s (not even the current and past z\\u2019s). \\n\\nI think the proof of theorem 1 should be included in the main paper rather than unnecessary details about policy gradient. \\n\\nTheorem 1 and theorem 2 are really some of the most important parts of the paper, and they deserve a more thorough discussion besides the 2 lines that are in there right now. \\n\\n\\nAlgorithm 1 -> should it be eqn 4?\\n\\nThe meta-algorithm provided in Section 5 is well motivated and well described. An experimental result including what happens with LSTM baselines would be very helpful. \\n\\nOne question is whether it is actually possible to know what the z\\u2019s are at different steps? In some cases these might be latent and hard to infer?\\n\\nCan you compare to Clavera et al 2018? It seems like it might be a relevant comparison. \\n\\nThe difference between MAML and the 10 value network seems quite marginal. Can the authors discuss why this is? And when we would expect to see a bigger difference.\", \"related_work\": \"Another relevant piece of work\\nMeta-Learning Priors for Efficient Online Bayesian Regression\", \"major_todos\": \"1. Improve clarity of what z's are observed, which are not and whether the policy is dependent on these or not. \\n2. Compare with other prior work such as Clavera et al, Harrison et al. \\n3. Add more naive baselines such as training an LSTM, etc. \\n4. Provide more analysis of the meta-learning component, how much does it actually help.\", \"overall_impression\": \"I think this paper covers an interesting problem, and proposes a simple, straightforward approach conditioning the baseline and the critic on the input process. What bothers me in the current version of the paper is the lack of clarity about the observability of z, where it comes from and also some lack of comparisons with other prior methods. 
I think these would make the paper stronger.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Strong paper for environments in which outcomes are strongly influenced by exogenous factors\", \"review\": \"The paper introduces and develops the notion of input-dependent baselines for Policy Gradient Methods in RL.\\nThe insight developed in the paper is clear: in environments such as data centers or outside settings, external factors (traffic load or wind) constitute high magnitude perturbations that ultimately strongly change rewards.\\nLearning an input-dependent baseline function helps clear out the variance created by such perturbations in a way that does not bias the policy gradient estimate (the authors provide a theoretical proof of that fact).\\nThe authors propose different methods to train the input-dependent baseline function:\\n o) a multi-value network based approach\\n o) a meta-learning approach\\nThe performance of these two methods is compared on simulated robotic locomotion tasks as well as a load balancing and video bitrate adaptation task.\\nThe input-dependent baseline strongly outperforms the state-dependent baseline in both cases.\", \"strengths\": \"o) The paper is well written\\n o) The method is novel and simple while strongly reducing variance in Monte Carlo policy gradient estimates without inducing bias.\\n o) The experimental evidence is strong.\", \"weaknesses\": \"o) Vehicular traffic has been the subject of recent development through deep reinforcement learning (e.g. https://arxiv.org/pdf/1701.08832.pdf and https://arxiv.org/pdf/1710.05465.pdf). In this particular setting, exogenous noise (demand for throughput and accidents) could strongly benefit from input-dependent baselines. I believe the authors should mention such potential applications of the method, which may have major societal impact.\\n o) There is a lot of space dedicated to well-known facts about policy gradient methods. I believe it could be more impactful to put the proof of Theorem 1 in the main body of the paper as it is clearly a key theoretical property.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting research problem (input-driven MDPs), but I think they are missing the most interesting and practically relevant scenario.\", \"review\": \"\", \"summary\": \"This work considers the problem of learning in input-driven environments -- which are characterized by an additional stochastic variable z that can affect the dynamics of the environment and the associated reward the agent might see. The authors show how the PG theorem still applies for an input-aware critic, and then they show that the best baseline one can use in conjunction with this critic is an input-dependent one. My main concerns are highlighted in points (3) and (4) in the detailed comments below.\", \"clarity\": \"Generally it reads well, although I had to go back-and-forth between the main text and appendix several times to understand the experimental side. Even with the supplementary material, the examples in Section 3 and Section 6.2 could be improved in explanation and discussion.\", \"originality_and_significance\": \"Limited in this version, but could be improved significantly by something like points (3) & (4) in the detailed comments. 
Fairly incremental extension of the PG (and TRPO) with conditioning on the (potentially unobserved) input variables. The fact that an input-aware critic could benefit from an input-aware baseline is not that surprising. The fact that it reduces variance in the PG update is an interesting result; nevertheless I strongly feel the link or comparison needed is with the standard PG update.\", \"disclaimer\": \"I have not checked the proofs in the appendix.\", \"detailed_comments\": \"1) On learning the input-dependent baselines: Generalising over context via a parametric functional approximation, like UVFAs [1], seems like a more natural first choice. Also, these provide zero-shot generalisation, bypassing the need for a burn-in period of the task. Can you comment on why something like that was not used, at least as a baseline?\\n\\n2) Motivating example. The exposition of this example lacks a bit of clarity and could use some more details, as it is not a standard MDP example, so it\\u2019s harder to grasp the complexity of this task or how standard methods would do on it and where they would struggle. I think it\\u2019s meant to be an example of high variance, but the performance in Figure 2 seems to suggest this is actually something manageable for something like A2C. It is also not clear in this example how the comparison was done. For instance, are the value functions used input-dependent? Is the policy input-aware? \\n\\n3) Input-driven MDP. Case 1/Case 2: As noted by the authors, in case 1, if both s_t and z_t are observed, this is somewhat uninteresting as it recovers a particular structured state variable of a normal MDP. I would argue that the more interesting case here is where only s_t is observed and z_t is hidden, at least in acting. This might still be information available in hindsight and used in training, but won\\u2019t be available \\u2018online\\u2019 -- similar to a slack variable, or privileged information at training time. And in this case it\\u2019s not clear to me if this would still result in a variance reduction in the policy update. Case 2 has some of that flavour, but restricts z_t to an iid process. Again, I think the more interesting case is not treated or discussed at all and, in my opinion, this might add the best value to this work.\\n \\n4) Now, as mentioned above, the interesting case, at least in my opinion, is when z is hidden. From the formulae (eq. (4), (5)), it seems that the policy is unaware of the input variables. Thus we are training a policy that should be able to deal with a distribution of inputs z. How does this compare with the normal PG update, which would consider a critic averaged over z's and a z-independent baseline? Is the variance of the proposed update always smaller than that of the standard PG update when learning a policy that is unaware of z?\", \"references\": \"[1] Schaul, T., Horgan, D., Gregor, K. and Silver, D., 2015, June. Universal value function approximators. In International Conference on Machine Learning (pp. 1312-1320).\\n\\n[POST-rebuttal] I've read the authors' response and it clarified some of the concerns. I'm increasing the score accordingly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
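This record contrasts the multi-value-network baseline with a meta-learned one that adapts quickly to new input instantiations. Below is a first-order sketch in that spirit — it uses a Reptile-style outer loop rather than the paper's exact MAML procedure, and the linear baseline and toy tasks are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Meta-learn an initialization for a linear baseline V(s) = w @ s + b, so that
# a few gradient steps on rollouts from a NEW input instantiation suffice.
# Tasks differ by an instantiation-specific return offset, mimicking how
# different exogenous input sequences shift returns.

def sample_task():
    offset = rng.uniform(-5, 5)
    def rollouts(n=64):
        s = rng.normal(size=(n, 3))
        g = s @ np.array([1.0, -2.0, 0.5]) + offset + rng.normal(scale=0.1, size=n)
        return s, g
    return rollouts

def sgd_steps(theta, rollouts, lr=0.3, steps=10):
    w, b = theta
    for _ in range(steps):
        s, g = rollouts()
        err = s @ w + b - g
        w = w - lr * (err[:, None] * s).mean(axis=0)
        b = b - lr * err.mean()
    return w, b

theta = (np.zeros(3), 0.0)
for _ in range(300):                      # meta-training (Reptile outer loop)
    adapted = sgd_steps(theta, sample_task())
    theta = tuple(t + 0.1 * (a - t) for t, a in zip(theta, adapted))

test_task = sample_task()                 # unseen input instantiation
w, b = sgd_steps(theta, test_task, steps=10)
s, g = test_task(256)
print("baseline residual std after 10 adaptation steps:", (s @ w + b - g).std())
```

Unlike the multi-value-network approach, the number of input instantiations seen during meta-training is unbounded, which matches the advantage the authors claim for their meta-learned baseline.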
HklyMhCqYQ
Super-Resolution via Conditional Implicit Maximum Likelihood Estimation
[ "Ke Li*", "Shichong Peng*", "Jitendra Malik" ]
Single-image super-resolution (SISR) is a canonical problem with diverse applications. Leading methods like SRGAN produce images that contain various artifacts, such as high-frequency noise, hallucinated colours and shape distortions, which adversely affect the realism of the result. In this paper, we propose an alternative approach based on an extension of the method of Implicit Maximum Likelihood Estimation (IMLE). We demonstrate greater effectiveness at noise reduction and preservation of the original colours and shapes, yielding more realistic super-resolved images.
[ "super-resolution" ]
https://openreview.net/pdf?id=HklyMhCqYQ
https://openreview.net/forum?id=HklyMhCqYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syg2hNyWlV", "Skg5JP42k4", "Bkl9w6GnkE", "rkgIZHPj0X", "H1x-h4wiA7", "rJgGtNwsRQ", "HkeCTOqY2X", "SJxh_DXN2Q", "SJeKkSMYs7", "HyxcMnORcX", "Bylsy6q997", "rJlQGqnY97", "ByxSpthFcX", "rJgtXKEv9Q", "SkxSWW0mqX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment" ], "note_created": [ 1544774835786, 1544468193936, 1544461666475, 1543365885914, 1543365801094, 1543365754171, 1541150918332, 1540794227541, 1540068576529, 1539374098433, 1539120355029, 1539062283256, 1539062205366, 1538898209098, 1538674941038 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1226/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "ICLR.cc/2019/Conference/Paper1226/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "ICLR.cc/2019/Conference/Paper1226/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1226/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1226/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "ICLR.cc/2019/Conference/Paper1226/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The main novelty of the paper lies in using multiple noise vectors to reconstruct the high resolution image in multiple ways. Then, the reconstruction with minimal loss is selected and updated to improve the fit against the target image. The most important control experiment in my opinion should compare this approach against the same architecture with only with m=1 noise vector (i.e., using a constant noise vector all the time). Unfortunately, the paper does not include such a comparison, which means the main hypothesis of the paper is not tested. Please include this experiment in the revised version of the paper.\", \"ps\": \"There is another high level concern regarding the use of PSNR or SSIM for evaluation of super-resolution methods. As shown by \\\"Pixel recursive super resolution (Dahl et al.)\\\" and others, PSNR and SSIM metrics are only relevant in the low magnification regime, in which techniques based on MSE (mean squared error) are very competitive. Maybe you need to consider large magnification regime in which GAN and normalized flow-based models are more relevant.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs to be improved\"}", "{\"title\": \"Yes, using multiple noise vectors is important\", \"comment\": \"Yes, we have tried using m=1, but found that this resulted in blurrier images because not allowing the net to output multiple possibilities essentially forces it to predict the mean of the different possibilities. We'll include this result in the camera-ready.\"}", "{\"title\": \"Baseline comparison\", \"comment\": \"The main novelty of the paper lies in using multiple noise vectors to reconstruct the high resolution image in multiple ways. Then you select the reconstruction with minimal loss and update the parameters to improve the fit for the best reconstruction. 
I think this is a neat idea, but for completeness, an important control experiment is using the same architecture with only m=1 noise vector (i.e., using a constant noise vector all the time). Have you demonstrated the benefit of your approach over this baseline?\\n\\n--AC\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for your review. We have updated our paper to include a discussion of other methods that use a multi-stage architecture in section 2.3. To generate the upper noise vector, we generate samples with the noise input to the lower sub-network fixed and then perform kNN search among this pool of samples. Using hierarchical sampling during training improves the results at test time compared to vanilla sampling using the same number of samples, because it functions as if more samples were generated using vanilla sampling at training time, which in the context of IMLE results in improved performance. We tried performing all computation on the GPU, but ran into memory issues (which may be specific to our implementation). It\\u2019s possible that projection using a random Gaussian matrix could introduce distortions, but we also tried using the original features without projection and observed no significant difference in the results (which can also be explained theoretically by the Johnson-Lindenstrauss lemma). The low-resolution input is generated by applying Gaussian blur and subsampling. In our experience, LPIPS focuses more on the high-level semantics and less on the low-level details, and so does not correlate well with human judgement in the context of super-resolution. We conducted human evaluation to eliminate possible biases that can be introduced by the choice of evaluation metric. To generate different samples from the same input image for SRGAN, we trained an SRGAN model where we added a second input to the generator containing random noise. We found that even with random noise as input, SRGAN cannot generate multi-modal results due to mode collapse. As per your suggestion, we have updated the paper to include comparisons to [f][g][h][i] in the appendix and found that our method outperforms these methods in terms of image quality.\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for your review. To train the sub-networks, we first train the sub-network for the lower input resolution; then we add the second sub-network on top and train both sub-networks jointly. The feature space is pre-trained and fixed in our setting.\\n\\nAt test time, given a low-resolution input image, we randomly generate one noise input for each sub-network and feed all inputs into the network to get the super-resolved output. The variation exhibited by different samples corresponds to different plausible ways to super-resolve ambiguous regions of the input image. To productionize such a network in real-time systems, we can use established techniques for model compression and binarization. Our approach is as easy to productionize as GAN-based methods, because the sampling procedures for our method and for GANs are the same at test time (only the training algorithm differs). 
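A small sketch of the test-time behaviour just described: independent noise draws produce multiple plausible super-resolutions, while a fixed noise vector pins down one mode consistently (for example, across video frames). The model and sizes below are stand-ins, not the paper's network:

```python
import torch

torch.manual_seed(0)
noise_dim = 16
net = torch.nn.Sequential(torch.nn.Linear(32 + noise_dim, 64),
                          torch.nn.ReLU(),
                          torch.nn.Linear(64, 128))   # stand-in generator

low_res = torch.randn(32)                             # stand-in low-res input

# Several noise draws -> several plausible super-resolved candidates:
with torch.no_grad():
    candidates = [net(torch.cat([low_res, torch.randn(noise_dim)]))
                  for _ in range(5)]

# Fixing one noise vector selects a single mode, e.g. so every frame of a
# video is super-resolved "the same way":
fixed_noise = torch.randn(noise_dim)
frames = [torch.randn(32) for _ in range(3)]          # stand-in video frames
with torch.no_grad():
    outputs = [net(torch.cat([frame, fixed_noise])) for frame in frames]
```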
Multi-modality could be a problem in this case because it is important in various applications to have control over which mode we want to output; for example, if we\\u2019d like to super-resolve all frames in a video, we need to make sure the same mode is selected consistently across all frames, so that the blurry regions are super-resolved in the same way in all frames. To choose one specific mode in a conscious way, we can simply choose a noise input that results in a super-resolved image that we prefer and use this (fixed) noise input for all low-resolution input images. \\n\\nIn our paper, we focused on a more challenging setting than that is typically considered in the literature, where the input image is of a relatively low resolution (64x64). We chose this setting because most existing methods already perform very well when the input is of a higher resolution, and so the differences between different methods are easier to discern under a more challenging lower-resolution setting. Because the limitations of SRGAN are more perceptible under this setting, SRGAN does not outperform bicubic interpolation by a large margin; however, SRIM does outperform it by a fairly large margin.\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for your review. We note that the existing datasets commonly used in the super-resolution literature for evaluation are quite small (they typically contain <=100 images), whereas ImageNet is a lot bigger and can provide more reliable results. Common super-resolution testing datasets are also typically used in a high-resolution setting (i.e. fairly high-resolution inputs are fed into the super-resolution algorithm), whereas our experiments are conducted with 64x64 inputs, which can contain 3-6x fewer pixels than the inputs that are used with common datasets. Because the typical inputs are at higher resolutions, the typical setting is easier and so differences between different methods are harder to discern, which is why we performed evaluation under a more challenging setting. However, as per your request, we have updated our paper to include a comparison of our method to SRGAN on the commonly used dataset of Set14 in the appendix and found that both methods perform comparably, but note that this comparison is less informative because all methods perform well on Set14. We have also updated our paper to include comparisons to more recent methods in the appendix, including EnhanceNet, SFT network, EDSR network and RDN. We found that our method outperforms these methods in terms of image quality.\"}", "{\"title\": \"Interesting approach, but experiments are not appropriate\", \"review\": [\"Summary\", \"This paper proposes a method based on implicit maximum likelihood estimation for single-image super-resolution. The proposed method aims at avoiding common artifacts such as high-frequency noise and shape distortion. The proposed method shows better performance than SRGAN in terms of PSNR, SSIM, and human evaluation of realism on the ImageNet dataset.\", \"Pros\", \"The proposed method shows better performance than SRGAN in terms of PSNR, SSIM, and human evaluation.\", \"The selection of the evaluation methods is appropriate. In the field of image super-resolution tasks, both signal accuracy (e.g., PSNR) and perceptual quality (e.g., human evaluation) are important.\", \"Cons\", \"The experiments are conducted thoroughly in the ImageNet, but the selection of the dataset is not appropriate. 
It would be better to apply the proposed method to other datasets which are used recent papers.\", \"Also, the selection of the methods to be compared is not appropriate. It would be better to provide recent state-of-the-art methods and compare the proposed method with them.\", \"The proposed approach is interesting and promising, but the selection of the methods and datasets to be compared is not appropriate.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Paper is written well and except for some sections, it provides enough details. The work is original enough but might need some improvement or more explanation in experiments/result section.\", \"review\": \"This paper proposes a technique to find a maximum-likelihood estimate of the super-resolved images under latent variables without computing it. Paper is mostly clearly written and except for some sections, it provides enough details. The work is original enough but might need some improvement or more explanation in experiments/result section.\", \"pros\": \"-The idea seems to be original enough, simple and easy to implement.\\n-A nice follow-up of the recent work in NN search and Implicit maximum likelihood estimation. \\n-Many details that could be helpful for further research in the area are given.\", \"cons\": \"-Regarding methodology, an unclear point in the paper is how different networks trained according to algorithm 1. Is each sub-network trained separately? Is the visual perception based feature space pre-trained and fixed, or is it jointly retrained with the super-resolution network? \\n\\n-Another critical point is post-training, particularly the way learned parameters are used could be explained better: Given a super-resolution model f, how the super-resolution of a single image is performed? What is the sampling variation? How likely such a network can be productionized in real-time systems (e.g., digital displays or embedded systems)? How does the proposed approach compared to GAN based methods with regards to that? Is multi-modality a problem in this case? Any way to choose one specific mode in a conscious way?\\n\\n-My main concern about the paper is the results section: Authors perform both large-scale offline comparison (imagenet) and a small subset human evaluation. The results in human evaluation need some explanation. This comparison is identical to several previous 1-1 comparisons performed in literature and almost every single such comparison it has been found that state of the art techniques (e.g., 10+ years of super-resolution algorithms) significantly outperform bicubic interpolation. However, Table 2 in the paper suggests that both SRGAN and SRIM barely beats bicubic interpolation. For example, authors in https://arxiv.org/pdf/1209.5019.pdf showed that a relatively older supervised technique beat bicubic 90% of the time. There seems to be some explanation needed here: Is it the sample size? Are the samples from both SRIM and SRGAN very variable?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A super-resolution method with with fewer visual artifacts than the SRGAN method.\", \"review\": \"Quality: The overall quality of this paper is good. 
It adopts a simple but novel idea for SISR and shows clear improvement over existing methods (e.g., SRGAN).\", \"clarity\": \"This paper is well written and easy to follow. It shows a clear motivation for adopting the implicit probabilistic model.\", \"originality\": \"To the best of my knowledge, this paper is the first work to learn a multi-modal probabilistic model for SISR.\", \"significance\": \"While the results can be further improved (they still look a bit blurred), this paper shows an interesting and important direction for learning better mappings for SISR.\", \"pros\": [\"The writing is clear.\", \"The proposed method is well motivated and easy to understand.\", \"The experimental results include both objective and subjective evaluations.\"], \"cons\": \"- The two-stage architecture is similar to the following generative models and SR methods. It\\u2019s suggested to discuss them as well.\\n[a] Denton, E. L., Chintala, S., & Fergus, R. \\u201cDeep generative image models using a laplacian pyramid of adversarial networks\\u201d. NIPS, 2015.\\n[b] Karras, T., Aila, T., Laine, S., & Lehtinen, J. \\u201cProgressive growing of gans for improved quality, stability, and variation\\u201d. ICLR 2018.\\n[c] Lai, W. S., Huang, J. B., Ahuja, N., & Yang, M. H. \\u201cDeep laplacian pyramid networks for fast and accurate super-resolution.\\u201d CVPR 2017.\\n[d] Wang, Y., Perazzi, F., McWilliams, B., Sorkine-Hornung, A., Sorkine-Hornung, O., & Schroers, C. \\u201cA Fully Progressive Approach to Single-Image Super-Resolution.\\u201d CVPR Workshops 2018.\\n\\n- In the hierarchical sampling (section 2.4), it\\u2019s not clear how to generate the upper noise vector \\u201cconditioned on the lower noise vector\\u201d. \\n\\n- The hierarchical sampling seems to improve the efficiency of training. I wonder whether it also affects the results at test time.\\n\\n- In the implementation details (section 2.5), I don\\u2019t understand why you need to transfer the feature activations from GPU to CPU. I think all the computation can be done on the GPU in most common toolboxes. Projecting the activations to a lower dimension with a \\u201crandom Gaussian matrix\\u201d sounds harmful to the results.\\n\\n- How do you generate the low-resolution images? Are you using bicubic downsampling or other approaches? This detail should be clarified.\\n\\n- While the evaluation with PSNR and SSIM is a reference to show the quality, much of the literature already shows that PSNR and SSIM do not correlate well with human perception. It is suggested to also evaluate with some perceptual metrics, e.g., LPIPS [e].\\n[e] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. \\u201cThe unreasonable effectiveness of deep features as a perceptual metric.\\u201d CVPR 2018.\\n\\n- In Figure 7, how do you generate different results from the same input image for SRGAN? From my understanding, SRGAN doesn\\u2019t take any noise vector as input and cannot generate multi-modal results.\\n\\n- I feel that the comparison with only SRGAN is not enough. There are some GAN-based SR methods [f][g]. It\\u2019s also suggested to compare with MSE-based state-of-the-art SR algorithms [h][i].\\n\\n[f] Sajjadi, M. S., Sch\\u00f6lkopf, B., & Hirsch, M. \\u201cEnhancenet: Single image super-resolution through automated texture synthesis.\\u201d ICCV 2017.\\n[g] Wang, X., Yu, K., Dong, C., & Loy, C. C. 
\\u201cRecovering realistic texture in image super-resolution by deep spatial feature transform.\\u201d CVPR 2018.\\n[h] Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. \\u201cEnhanced deep residual networks for single image super-resolution.\\u201d CVPR Workshops 2017.\\n[i] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., & Fu, Y. \\u201cResidual dense network for image super-resolution.\\u201d CVPR 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to your clarification\", \"comment\": \"Thanks for your clarification. EnhanceNet is similar to SRGAN in terms of perceptual quality but has higher reconstruction error, as shown by Figure 2 in the referenced paper. SFTGAN uses a semantic segmentation model to predict the categories of objects and therefore uses auxiliary supervision in the form of segmentation masks. Our method does not use such supervision, and so cannot be directly compared. ProGAN is somewhat lesser known; thanks for bringing it to our attention. We ran their model and found that the results exhibit similar types of artifacts as those of SRGAN.\"}", "{\"comment\": \"Thank you for your response. I was referring to the overview of the field in the first pages of the referenced paper. Some prior works mentioned include EnhanceNet - Sajjadi et al. ICCV 2017, SFTGAN - Wang et al. CVPR 2018, ProGAN - Wang et al. CVPR Workshops 2018. These three have released their models publicly.\", \"title\": \"Clarifying the comment\"}", "{\"title\": \"Response to your comment\", \"comment\": \"Thanks for your comment. As is standard in machine learning, the training and test sets are from the same collection of images (ImageNet) to eliminate any possibility of biases in the evaluation results due to domain shift. We chose to use ImageNet for training because that was used by SRGAN, and this choice mandated the use of ImageNet for testing.\\n\\nNevertheless, as per your suggestion, we evaluated our method on BSD100 and found the results were comparable to those on ImageNet: SSIM was 0.7254 (compared to 0.7153 on ImageNet) and PSNR was 26.39 (compared to 25.36 on ImageNet).\"}", "{\"title\": \"Response to your comment\", \"comment\": \"Thanks for your comment. We note that the referenced paper reports on the performance of methods submitted to a recently concluded ECCV workshop challenge. Because the methods were only released a week before the submission deadline, whose code and implementation details remain unavailable in many cases, we weren\\u2019t able to compare to these methods. We do note that Figure 2 in the referenced paper shows that SRGAN is one of the best available methods in terms of visual quality at the time the challenge was conducted.\"}", "{\"comment\": \"I enjoyed reading your paper, your approach is quite novel in my opinion. One question: why is the dataset used for evaluation in your paper not those commonly used for testing super resolution algorithms? The commonly used datasets for image super-resolution are BSD100, Urban100 and DIV2K, as in SRGAN (Ledig et al. CVPR17), EDSR (Lim et al. CVPRW17), EnhanceNet (Sajjadi et al. ICCV17), DBPN (Haris et al. CVPR18), etc. These datasets allow a straightforward comparison with others.\\n\\nAlso, there is no official implementation of SRGAN, but there are officially published results of SRGAN on the BSD100 dataset (there is a link in the SRGAN paper). 
Evaluating on the BSD100 dataset would allow a comparison with the original SRGAN algorithm, and not with a reproduction attempt from Github.\", \"title\": \"Non-standard evaluation dataset\"}", "{\"comment\": \"Interesting paper. Please note that there has been significant progress in this field since SRGAN, see for example https://arxiv.org/pdf/1809.07517.pdf\", \"title\": \"Relation to prior work\"}" ] }
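A recurring implementation question in the record above is why the paper projects feature activations to a lower dimension with a random Gaussian matrix. As background, here is a minimal, self-contained sketch of such a projection; the shapes and the Johnson-Lindenstrauss motivation are illustrative assumptions on our part, not details taken from the paper under review.

```python
# Random Gaussian projection: approximately preserves pairwise distances
# (Johnson-Lindenstrauss), which is why it is a common cheap dimensionality
# reduction for feature activations. All shapes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 4096, 256                  # n activations of dim d, reduced to k
feats = rng.standard_normal((n, d))        # stand-in for network activations

proj = rng.standard_normal((d, k)) / np.sqrt(k)  # the "random Gaussian matrix"
low = feats @ proj                               # (n, k) projected features

# Pairwise distances are roughly preserved:
i, j = 3, 7
print(np.linalg.norm(feats[i] - feats[j]))
print(np.linalg.norm(low[i] - low[j]))
```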
BJlyznAcFm
Advocacy Learning
[ "Ian Fox", "Jenna Wiens" ]
We introduce advocacy learning, a novel supervised training scheme for classification problems. This training scheme applies to a framework consisting of two connected networks: 1) the Advocates, composed of one subnetwork per class, which take the input and provide a convincing class-conditional argument in the form of an attention map, and 2) a Judge, which predicts the inputs class label based on these arguments. Each Advocate aims to convince the Judge that the input example belongs to their corresponding class. In contrast to a standard network, in which all subnetworks are trained to jointly cooperate, we train the Advocates to competitively argue for their class, even when the input belongs to a different class. We also explore a variant, honest advocacy learning, where the Advocates are only trained on data corresponding to their class. Applied to several different classification tasks, we show that advocacy learning can lead to small improvements in classification accuracy over an identical supervised baseline. Through a series of follow-up experiments, we analyze when and how Advocates improve discriminative performance. Though it may seem counter-intuitive, a framework in which subnetworks are trained to competitively provide evidence in support of their class shows promise, performing as well as or better than standard approaches. This provides a foundation for further exploration into the effect of competition and class-conditional representations.
[ "competition", "supervision", "deep learning", "adversarial", "debate" ]
https://openreview.net/pdf?id=BJlyznAcFm
https://openreview.net/forum?id=BJlyznAcFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJloypYxgE", "Byezma9tC7", "rkgZR25t0m", "rJxw5sqFAQ", "HylFBoqFC7", "BkxmyeY92X", "BJe82CNjim", "r1e-mhjfim" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544752354875, 1543249178176, 1543249097119, 1543248782736, 1543248705490, 1541210074562, 1540210349622, 1539648536724 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1225/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1225/Authors" ], [ "ICLR.cc/2019/Conference/Paper1225/Authors" ], [ "ICLR.cc/2019/Conference/Paper1225/Authors" ], [ "ICLR.cc/2019/Conference/Paper1225/Authors" ], [ "ICLR.cc/2019/Conference/Paper1225/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1225/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1225/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a novel architecture, reminescent of mixtures-of-experts,\\ncomposed of a set of advocates networks providing an attention map to a\\nseparate \\\"judge\\\" network. Reviewers have several concerns, including lack\\nof theoretical justification, potential scaling limitations, and weak\\nexperimental results. Authors answered to several of the concerns, which did\\nnot convinced reviewers. The reviewer with the highest score was also the least\\nconfident, so overall I will recommend to reject the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"Response\", \"comment\": \"We thank you for your feedback, and are glad you liked the paper. To answer some of your comments:\\n1. Why the Honest Advocate outperformed the standard Advocate on MIMIC\\n\\nAnswering this question was our main motivation for including the Imbalanced and Binary MNIST problems. MIMIC differs from MNIST and FMNIST in several major ways discussed in the paper. We chose to measure the impact of the binary labels and class imbalance by modifying MNIST, to hold constant the change in network architecture and data type. The results in Table 2 suggest that the main factor is the binary classes, as this change brought the advocacy net performance to below the honest advocate and multi-attention net. As for why fewer classes are detrimental to advocacy learning, we hypothesize that advocates work to balance each other out, and more advocates provide the judge with a more complete picture of the input. We have modified the end of section 4.4 to highlight this point (see response to AnonReviewer1, point 2).\\n\\n2. Are the benefits of advocacy learning largely from very similar class-pairs?\\n\\nThere is some evidence to support this. In Figure 2 we see the largest changes in prediction errors between the advocacy net and the multi-attention net were fairly isolated. On FMNIST, the main improvements were seen on combinations of classes 2, 4, and 6, which from Figure 3 are all fairly similar (they all look like long-sleeve shirts). On MNIST the biggest gains are on 4 vs. 9 and 7 vs. 9, both of which can indeed look similar on MNIST. There is a bit of a bias in this analysis however, as the most similar class combinations will probably generate the most errors, leaving the biggest room for improvement (this is a particular factor in MNIST, where there are fewer than 100 misclassifications on the test set). 
We added text in section 4.2 to emphasize this point:\\n\\n\\u201cThis evidence suggests that advocacy learning improves performance by distinguishing among classes with similar morphology. Though this analysis is confounded by the fact that it tends to be those class pairs that have the greatest potential room for improvement.\\u201d\\n\\n3. A dataset or situations where the method does not work\\n\\nWe believe MIMIC presents a dataset on which standard advocacy learning does not work well, as it performs roughly on par with the Random net, suggesting that the advocates are not adding anything to the discriminative power of the judge. Another case where advocacy learning fails is with non-adaptive optimizers: we found the use of ADAM to be critical to maintaining good performance. Using standard SGD with or without momentum, we found performance fluctuated wildly, falling to near random levels. We have added text in section 3.2 to note this:\\n\\n\\u201cWe examined using stochastic gradient descent with momentum in place of ADAM, but found that it led the advocacy networks to diverge.\\u201d\"}", "{\"title\": \"Response\", \"comment\": \"We thank you for your feedback. We also find the interpretability aspect of advocacy learning interesting, particularly insofar as any individual advocate should provide evidence only for the class it represents. One of our original motivations was the ability to extract class-conditional representations of input data. In response to the drawbacks you indicate:\\n\\n1. The input to the Judge scales linearly with the number of classes\\n\\nThis is indeed a limitation of our current approach; it would likely be infeasible to train a network on a 3k channel input without specialized hardware. One potential way around this would be to train advocates on groups of classes, instead of individual classes. For example, in ImageNet, you could train a \\u2018bird\\u2019 advocate that advocates equally for all bird-classes. This approach could work particularly well in ImageNet, as the WordNet label hierarchy could allow for automatically identifying good class groups for advocates. However, we view this extension as outside of the scope of this paper. To acknowledge this limitation, we have included the following sentences in the conclusion:\\n\\n\\u201cA limitation of this architecture is the one-to-one relationship between the number of classes and number of advocates, which makes training on datasets like ImageNet implausible. Future work could address this limitation through hierarchical structures.\\u201d\\n\\n2. The attention/saliency maps could be difficult to compute for complex data.\\n\\nWe are unsure if we fully understand this question. Is the issue that the generation of an attention map for a very large input (like a video) will be computationally intensive? Or is the issue that the processing of complicated input using an autoencoder-like model could lead to suboptimal attention generation? The answer to both of these questions depends on the architecture used for the advocates. Certainly the generation of a large attention map using a fully connected network would be infeasible for large inputs, but our advocate modules are fully-convolutional, which limits the number of parameters. Similarly, the autoencoding structure, which includes fairly aggressive downsampling, could present an issue in generating good attention maps. 
This could be rectified by using a model for the advocate, such as an Hourglass network, where information is not forced to pass through a bottleneck layer. We\u2019ve added a note to this effect in section 2.2:\\n\\n\\u201cNote that for complex input, such as medical images, other fully convolutional architectures such as U-Nets may be more appropriate (Ronneberger et al.).\\u201d\\n\\n\\n3. There is no guarantee that the advocates provide interpretable arguments\\n\\nThis point is certainly true, and is a limitation of our current approach from an interpretability standpoint. In practice, the honest advocacy network generates arguments that are somewhat interpretable from a human standpoint (see the Honest Advocacy Net attention map for class 1 in Figure 3), but the advocacy net attention maps tended to be much sparser and less interpretable. The limiting factor in the advocate interpretability is the interpretability of the judge. If the judge is an interpretable model, then the advocate output should naturally follow, as the judge will not be swayed by errant pixels or input that seems like noise. However, as recent research has shown, neural networks are vulnerable to minor input perturbations, and thus the advocates may learn to capitalize on this. We have added text to address this in section 2.2:\\n\\n\\u201cThis has important implications for the interpretability of the derived attention maps. If the judge is a high-capacity nonlinear network, then the evidence that may convince it will by default be non-interpretable to humans. However, the flexibility of architecture requirements means that work that has examined training interpretable networks or interpreting trained networks applies (Ribeiro et al., Zhang et al.).\\u201d\\n\\n4. The experiments are conducted on simple datasets and show only marginal benefit.\\n\\nWhile the vision datasets are simple, MIMIC is one of the largest and most complex publicly available datasets. We\u2019ve edited the fourth paragraph of section 4.4 to highlight this. Though advocacy learning fails on it (for reasons explored in the paper), honest advocacy learning offers some benefit. However, the major finding of this work is the fact that advocacy learning, a somewhat counter-intuitive training scheme, works as well as it does across a variety of datasets. We also evaluated advocacy learning on CIFAR, finding that it led to a small improvement over the baseline approaches (~83% accuracy for the multi-attention net, ~86% accuracy for advocacy learning). We did not report these results as we had difficulty in getting the baseline performance up to state-of-the-art, despite modifying the judge architecture to use a network known to achieve >90% accuracy. This may be due to the structure of the attention modules, relating to your second question.\"}", "{\"title\": \"Response part 2/2\", \"comment\": \"We thank the reviewer for the detailed comments. We will address the concerns raised below:\\n\\n2. The experimental evidence is inconsistent across Table 1 and 2, and the datasets used for evaluation are too small to be meaningful.\\n\\nWe believe the inconsistencies between Table 1 and 2 (the relative performance of advocacy and honest advocacy learning) to be interesting, and likely demonstrative of the conditions under which advocacy learning works well. Two of the main differences between MIMIC and the image datasets include class imbalance and the smaller number of classes. 
This prompted us to consider imbalanced and binary versions of MNIST for comparison, resulting in the finding that the number of classes was a particularly important aspect. We have modified the end of section 4.4 to highlight this point:\\n\\n\\u201cThis reversal of the results from Table 1 is interesting, and helps illuminate cases where advocacy learning may or may not work. There are many differences between MIMIC and MNIST/FMNIST that could explain why advocacy learning fails. Two of the major differences, besides the data type, are class imbalance and the smaller number of classes. To see the isolated effect of these changes, we created two modified versions of MNIST.\\n\\nFor the first modified MNIST, Imbalanced MNIST, we subsampled the training set, introducing class imbalance. After subsampling, the least represented class, 0, had 600 training samples, and each successive class had 600 additional samples. The test set remained unchanged, which is why we report results in accuracy. We found that class imbalance lowered the performance of all models by 0.1-0.3\\\\%; the Advocate net was more strongly affected than the honest Advocate net. However, both models wind up with very similar accuracy (advocacy learning $99.17 \\\\pm 0.14$ \\\\textit{vs.} honest advocacy learning $99.17 \\\\pm 0.06$).\\n\\nFor the second modification, we created Binary MNIST, a variant with only two classes: 4 and 9. The per-class number of examples in the training and test set were unchanged. We found that the switch to a binary formulation \\\\textit{reduced} absolute performance for the advocacy network by $0.7\\\\%$, a sizable decrease for MNIST for what should be an easier problem. We did not observe similar decreases with either the honest advocate net or the multi-attention net. This decrease suggests that, in practice, the competition between many advocates helps the Judge achieve good performance in the presence of deception. In all datasets considered, the class-conditional attention provided by honest advocacy learning did not hurt, and in the presence of imbalanced data helped relative to the supervised baseline. This suggests the value of competition in training, with or without deception.\\u201d\\n\\nIt is worth noting that, while it may be small by computer vision standards, MIMIC is seen in the machine learning for health community as a large EHR dataset; indeed, it is the largest well-curated EHR dataset publicly available. To better express this point, we have significantly edited the fourth paragraph of section 4.4, which now reads:\\n\\n\\u201cThe results presented so far all involve multi-class image datasets with balanced classes. To\\nexplore how these assumptions change the impact of deception in competition, we applied\\nadvocacy learning to a large electronic health record (EHR) dataset, MIMIC III (Johnson\\net al., 2016). This dataset, one of the largest publicly available repositories of EHR data, has become\\nan important benchmark in the machine learning for health community (Harutyunyan et al.,\\n2017), and is helping to drive advances in precision health (Desautels et al., 2016; Maslove\\net al., 2017; Oh et al., 2018). \\u201d\\n\\n3. Do advocacy nets include additional class-specific supervision? Do honest advocates suffer from a lack of data?\\n\\nWe are not completely sure we understand the first question, and have tried to rephrase it here. 
The advocate modules do not receive any additional class-conditional supervision (that is, they do not \\u2018know\\u2019 if a particular example actually belongs to their class). The class-conditional representation emerges because each advocate is rewarded only for giving evidence that convinces the judge that the input belongs to the advocate\\u2019s class. \\n\\nFor the second question, it is likely that an honest advocate trained with more data would do better; however, the limited amount of data per advocate is a necessary limitation of this method, and is the main advantage of vanilla advocacy learning. Notably, honest advocacy learning still performs about as well as or better than the multi-attention net, where each attention module is trained on all data, over every dataset. In all approaches the judges still have access to the full pool of data.\"}", "{\"title\": \"Response part 1/2\", \"comment\": \"Thank you for your comments. We are glad that you found the idea interesting and clearly presented. In response to your concerns:\\n\\n1. There is no formal justification, theoretical background, or clear intuition for this idea.\\n\\nWe have added a subsection to the discussion (Section 4.5 Intuition for Advocates) to address this; the text is quoted below:\\n\\n\\u201cThe fundamental idea of this work: advocate modules that compete with one another instead\\nof cooperating, is counter-intuitive from a performance perspective. The fact that this training\\nscheme works at all, let alone better than the baselines across several datasets, is at first glance surprising. However, there are several intuitive reasons for why such a training approach could work. Honest Advocates are similar to a mixture-of-experts model, and such models have a long and rich history (Yuksel et al., 2012). Advocacy learning introduces competition during training. In economic theory, competition plays a vital role in efficiently allocating resources, leading to better functioning systems (Godfrey, 2008). In machine learning, the notion of competition has been found useful as an adaptive loss function for image generation (Goodfellow et al., 2014) and self-competition was used to surpass professional Go players (Silver et al., 2017). While these systems used competition between networks during training, competition within a network has been used as well. A winner-take-all competitive framework was found to lead to superior semi-supervised image classification performance (Makhzani and Frey 2015), and the dynamic routing used in Capsule Networks can be seen as a type of competition (Sabour et al. 2017).\\n\\nOne may also draw a parallel with the field of multi-objective optimization. There, it is well known that multiple gradient descent, a form of gradient descent applied against multiple (possibly contradicting) objective functions, achieves a Pareto equilibrium (D\\u00e9sid\\u00e9ri, 2014). Viewed through this lens, advocacy learning may capitalize on the asymmetry in the objective functions between the advocates and the judge. Each advocate is neutral to the ordering of class assignments within a batch of data; the objective of each advocate depends solely on the number of class labels to which it is assigned. However, the Judge is highly sensitive to the ordering, since its objective function requires classes to be properly labeled. Given the neutrality of the advocates, during the optimization, one might expect predictions for misclassified examples to change. 
However, one would expect the predictions for correctly classified examples to remain constant since the judge is sensitive to this. As a result, over the optimization procedure we expect the performance to increase, converging at perfect training performance. This is an oversimplification, since our use of ADAM and the non-convexity of the loss function would complicate the analysis. However, it provides some theoretical backing to the empirical success of advocacy learning.\\u201d\"}", "{\"title\": \"Interesting idea but not convincing\", \"review\": \"This paper presents a novel concept of supervised learning, advocacy learning. In this framework, the supervised learning procedure is given by two subnetworks, the advocates and the judge. Advocates generate evidence in the form of attention for individual classes, and the judge decides the final class labels.\\n\\nThe main idea looks interesting, and the paper is clear enough to deliver the idea. However, this paper has the following major issues.\\n\\n1. There is no formal justification of the idea. Although the idea looks interesting, there is no theoretical background and no clear intuition.\\n\\n2. The experiments are weak and even inconsistent. Evaluation is performed on very small datasets only, where all baseline methods already show very high accuracy and the accuracy gain given by the proposed method is very marginal. In particular, Tables 1 and 2 have inconsistent results; the advocacy network is better in Table 1 but worse in Table 2 compared to the honest advocacy network. To make the idea more convincing, it is required to test it on much larger datasets, at least at ImageNet scale, and it would be desirable to show results in other tasks such as object detection and image segmentation.\\n\\n3. I am not sure if the advocacy network has any separate supervision to enforce it to be learned in a class-conditional manner. Also, in the honest advocacy network, each subnetwork can look at only a part of the dataset (the data corresponding to its class), and I wonder if there is any problem caused by this data deficiency.\\n\\nOverall, the paper does not look ready for publication because the idea is clearly justified neither theoretically nor empirically.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes a novel network architecture for classification problems that is based on decomposing the network into two parts called the advocates and the judge. The advocates learn by competing with each other to provide judge-convincing \\\"evidence\\\" -- an attention map over the input that supposedly highlights the most class-relevant parts of the input.\\nI find the very general idea interesting because it could potentially help to improve the interpretability of neural networks by explicitly putting a corresponding bottleneck in the network.\\nHowever, in its current form the approach has a number of drawbacks:\\n\\n1) The input to the judge network scales linearly with the number of classes, which potentially prevents learning on large-scale datasets such as ImageNet.\\n2) The attention / saliency map might be very difficult to compute for complex data if it relies on an autoencoding-like computation. \\n3) There is no guarantee or intuition on why the advocates would learn to provide evidence that is interpretable to humans. 
\\n\\nThe provided experiments are conducted on rather simple datasets, and to argue for wide applicability of the method I suggest using more visually diverse datasets like CIFAR. \\nI also find the gains in classification accuracy quite marginal and perhaps less important than the interpretability of the evidence, which has not been convincingly demonstrated.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Adversarial Lawyers an interesting idea for a deep learning architecture\", \"review\": \"This seems like a very interesting concept, creating adversarial agents for each class that essentially compete with each other. It seems like this might be a very promising method for arguing for even more abstract classes like \\\"circus\\\" vs. \\\"zoo\\\".\\n\\nI wish more had been said about why the Honest Advocate outperformed the standard Advocate on the MIMIC dataset.\", \"the_authors_state\": \"\\\"Advocates can effectively compete to generate higher quality evidence, though this effect was\\nlargely localized to a few class-pairs (e.g. shirts v.s. pullovers). \\\"\\n\\nDoes it do this on things that are essentially very similar? \\n\\nOverall, I think this is a great idea. I have been looking for some similar work and consider this work to be similar in the multi-generative aspect: \\\"MEGAN: Mixture of Experts of Generative Adversarial Networks for Multi-modal Image Generation\\\" - Park, Yoo, Bahng, Choo and Park, IJCAI 2018, but I cannot find similar work using the generative experts as collective adversaries for discrimination.\\n\\nThe paper is clear and well written. Improvements for the paper would be going into more detail about why the method works. It would have been great to have seen a data set on which the method performs poorly - that would give additional insight into its strengths and weaknesses.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
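To make the Advocate/Judge framework discussed throughout this record concrete, here is a minimal sketch of how such a model could be wired up. The module sizes, the sigmoid attention, and the multiplicative gating are illustrative guesses rather than the authors' exact design, and the competitive training signal (each Advocate rewarded only for convincing the Judge of its own class) is not shown.

```python
# Sketch of a per-class Advocate + Judge classifier (PyTorch).
import torch
import torch.nn as nn

class AdvocacyNet(nn.Module):
    def __init__(self, n_classes, in_ch=1):
        super().__init__()
        # One small convolutional advocate per class, each emitting an
        # attention map over the input.
        self.advocates = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, in_ch, 3, padding=1), nn.Sigmoid())
            for _ in range(n_classes)])
        # The judge sees all attended copies of the input stacked channel-wise,
        # so its input grows linearly with the number of classes (the scaling
        # limitation raised by the reviewers).
        self.judge = nn.Sequential(
            nn.Conv2d(in_ch * n_classes, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        evidence = torch.cat([adv(x) * x for adv in self.advocates], dim=1)
        return self.judge(evidence)

net = AdvocacyNet(n_classes=10)
logits = net(torch.randn(4, 1, 28, 28))  # e.g. a batch of MNIST-sized inputs
print(logits.shape)                      # torch.Size([4, 10])
```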
SyxAb30cY7
Robustness May Be at Odds with Accuracy
[ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ]
We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the features learned by robust models tend to align better with salient data characteristics and human perception.
[ "adversarial examples", "robust machine learning", "robust optimization", "deep feature representations" ]
https://openreview.net/pdf?id=SyxAb30cY7
https://openreview.net/forum?id=SyxAb30cY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SygWXdvelV", "S1gqQEuNCX", "HklCQsZ4CX", "Bkl044W9a7", "Bye3G4Zc6X", "r1l8pmZc6m", "BJlcXUnnhQ", "B1gCuge92Q", "rkgtSKNtn7" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544742936803, 1542911009794, 1542884133886, 1542226998017, 1542226963986, 1542226878031, 1541355042031, 1541173366042, 1541126464620 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1223/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1223/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1223/Authors" ], [ "ICLR.cc/2019/Conference/Paper1223/Authors" ], [ "ICLR.cc/2019/Conference/Paper1223/Authors" ], [ "ICLR.cc/2019/Conference/Paper1223/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1223/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1223/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper provides interesting discussions on the trade-off between model accuracy and robustness to adversarial examples. All reviewers found that both empirical studies and theoretical results are solid. The paper is very well written. The visualization results are very intuitive. I recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper. Accept.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for your interest in our paper. In Theorem 2.1 we are proving upper bounds on the *robust accuracy* for a given *standard accuracy* (e.g. standard accuracy >95% implies robust accuracy <45%). One can consider the contrapositive to obtain bounds on the *standard accuracy* for a given *robust accuracy*. That is \\\"If the robust accuracy is at least p * \\u03b4 / (1-p) then the standard accuracy has to be <1 - \\u03b4\\\" (i.e. any classifier with at least 45% robust accuracy cannot have standard accuracy more than 95%).\"}", "{\"comment\": \"Nice paper! You have provided the empirical results on how the adversarial training hurts the standard accuracy in the high data regime. While in Theorem 2.1, you proved how the robust accuracy can be upper bounded for a given standard accuracy, there is no proof of how the standard accuracy is upper bounded for a given robust accuracy.\\n\\nIs that right? or I am missing something?\", \"title\": \"The effect of adversarial training on standard accuracy\"}", "{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for their kind comments. The reviewer\\u2019s suggestion about the nature of errors made by standard vs. robust models is really interesting, and we will pursue it in future work.\"}", "{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the kind comments and suggestion. We address concerns raised below:\\n\\n- We agree with the reviewer that \\\"inherent trade-off\\\" might be perceived incorrectly. We only intended to refer to an inherent tradeoff in *our setting*. While we do argue that this is a reasonable hypothesis for the difficulties we face in practice, we cannot definitively conclude that this is the case. We have edited the manuscript to reflect this.\\n\\n- We agree that alternative methods can be used to obtain robustness in Thm 2.2. We only stated that \\\"adversarial training is necessary\\\" because we wanted to emphasize that simply minimizing the standard loss (ignoring the adversary) will not lead to robustness. 
We have edited the manuscript to elaborate on this.\\n\\nWe thank the reviewer for the other comments. We have edited the manuscript to address them.\"}", "{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the detailed comments. We will address the concerns raised below:\\n\\n- The aim of our paper is to demonstrate an inherent trade-off between robustness and standard accuracy in a concrete setting. We believe that exhibiting the tradeoff in a simple and natural setting is a strength rather than a weakness of our paper, since such simple settings can manifest as special cases of more complex settings. We want to emphasize that our proof does not depend on the specific setting in any crucial way. In particular, the proof can be straightforwardly extended to a more general setting where each feature is an independent Gaussian with a different mean (and thus different correlation with the label).\\n\\nThe main idea is that, for a given adversary in this setting, we can always separate the features into \\\"robust\\\" (utilizing these features can only help robust classification) and \\\"non-robust\\\" (the adversary can manipulate these features to a degree where they become harmful for the model's accuracy). Any feature with correlation less than a threshold determined by epsilon is considered as non-robust in this context. Hence, a robust classifier cannot rely on these non-robust features.\\n\\nAs a result, if there is any standard accuracy that can be gained by utilizing these non-robust features, the model trained in the standard way will benefit from it (at the expense of reducing its robust accuracy) and the robust model will not be able to get such a benefit, leading to its standard accuracy being lower.\\n\\nThus the trade-off discussed in the paper would manifest as long as there are some non-robust features which contribute to the accuracy of the standard model. Since extending our results to such settings would be fairly routine, we decided to keep our setting simple and highlight the key principle at play.\\n \\n- We thank the reviewer for bringing this paper to our attention. We added a discussion of the paper in the related work discussion. We want to emphasize that our goal is to understand and theoretically demonstrate the standard vs. robust accuracy tradeoffs observed in practice (reported multiple times in prior work as we discuss in our paper, as well as in the suggested paper). We are not claiming to be the first ones to observe tradeoffs of this nature _empirically_, but we are the first to provide some insight into its roots.\"}
It is shown that adversarial accuracy depends on the features which exhibit strong correlation, while standard accuracy depends on weakly correlated features.\\n\\nThough the paper presents some interesting insights, some concerns remain:\\n - The paper falls short in answering the tradeoff question under a more general setup. The toy example is very specific, with a clear separation between weakly and strongly correlated features. It would be interesting to see how similar results can be derived under a more complicated setup with many features with varying extents of correlation.\\n - The tradeoff between standard accuracy and robustness under linear classification has also been demonstrated in a recent work [1]. In [1], it is also argued that for datasets consisting of a large number of labels, when some of the labels are under data-scarce regimes, an adversarial robustness view-point (via l1-regularization) helps in accuracy improvement for those labels. However, for other sets of labels for which there is sufficient data available, l2-regularization is more suited, and the adversarial robustness perspective decreases standard accuracy. From this view-point, one could argue that some of the main contributions in the current paper could be seen as empirical extensions for the deep learning setup. It would be instructive to contrast and explore connections between this paper and the observations in [1].\\n[1] Adversarial Extreme Multi-label Classification, https://arxiv.org/abs/1803.01570\\n==============post-rebuttal======\\nThanks for the feedback; I have updated my rating of the paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"good paper, interesting findings, should be cautious on over-claiming\", \"review\": \"This paper discusses the hypothesis of the existence of intrinsic tradeoffs between clean accuracy and robust accuracy and the corresponding implications. Specifically, it is motivated by the tradeoffs between clean accuracy and robust accuracy of adversarially trained networks. The authors constructed a toy example and proved that any classifier cannot be both accurate and robust at the same time. They also showed that regular training cannot make a soft-margin SVM robust but adversarial training can. At the end of the paper, they show that input gradients of adversarially trained models are more semantically meaningful than those of regularly trained models.\\n\\nThe paper is well written and easy to follow. The toy example is novel and provides a concrete example demonstrating the robustness-accuracy tradeoff, which was previously speculated. Demonstrating that adversarially trained models have more semantically meaningful gradients is interesting and provides insights to the field. It connects robustness and interpretability nicely.\\n\\nMy main concern is the overclaiming of the applicability of the \\\"inherent tradeoff\\\". The paper demonstrated that the \\\"inherent tradeoff\\\" could be a reasonable hypothesis for explaining the difficulty of achieving robust models. I think the authors should emphasize this in the paper so that it does not mislead the reader to think that it is the reason.\\n\\nOn a related note, Theorem 2.2 shows adversarial training can give a robust classifier while standard training cannot. Then the paper says \\\"adversarial training is necessary to achieve non-trivial adversarial accuracy in this setting\\\". 
The word \\\"necessary\\\" is misleading, here Thm 2.2 showed that adversarial training works, but it doesn't exclude the possibility that robust classifiers can be achieved by other training methods. \\n\\nminor comments\\n- techinques --> techniques\\n- more discussion on the visual difference between the gradients from L2 and L_\\\\infty adversarially trained networks\\n- Figure 5 (c): what does \\\"w Robust Features\\\" mean? are these values accuracy after perburtation?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper, clear accept\", \"review\": \"The paper demonstrates the trade-off between accuracy and robustness of a model. The phenomenon is shown in previous works, but this work interestingly proposes a theoretical model that supports the idea. The proving technique can be particularly beneficial to developing theoretical understanding for the phenomenon. Besides, the authors also visualize the gradients and adversarial examples generated from standard and adversarially trained models, which show that these adversarially trained models are more aligned to human perception.\", \"quality\": \"good, clarity: good, originality: good, significance: good\", \"pros\": [\"The paper is fairly well written and the idea is clearly presented\", \"To the best of my knowledge (maye I am wrong), this work is the first one that\", \"provides theoretical explanation for the tradeoff between accuracy and robustness\", \"The visualization results supports their hypothesis that adversarially trained models\", \"percepts more like human.\"], \"suggestions\": \"It would be interesting to see what kind of real images can fool the models and see whether the robust model made mistakes more like human.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
H1g0Z3A9Fm
Supervised Community Detection with Line Graph Neural Networks
[ "Zhengdao Chen", "Lisha Li", "Joan Bruna" ]
Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio. By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective. We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting. We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases. In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. The GNNs are achieved good performance on real-world datasets. In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima.
[ "community detection", "graph neural networks", "belief propagation", "energy landscape", "non-backtracking matrix" ]
https://openreview.net/pdf?id=H1g0Z3A9Fm
https://openreview.net/forum?id=H1g0Z3A9Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rklhGf2ggN", "S1lor2mcAQ", "H1g2iCOrC7", "SkxI53uPpX", "rylL0sdwT7", "HkgV9quwam", "SyxP9MFRhm", "rklkDauTn7", "r1x6e8mXhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544761876501, 1543285827515, 1542979236081, 1542061198057, 1542061005559, 1542060684435, 1541472910753, 1541406039124, 1540728308563 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1222/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1222/Authors" ], [ "ICLR.cc/2019/Conference/Paper1222/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1222/Authors" ], [ "ICLR.cc/2019/Conference/Paper1222/Authors" ], [ "ICLR.cc/2019/Conference/Paper1222/Authors" ], [ "ICLR.cc/2019/Conference/Paper1222/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1222/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1222/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces a new graph convolutional neural network, called LGNN, and applied it to solve the community detection problem. The reviewers think LGNN yields a nice and useful extension of graph CNN, especially in using the line graph of edge adjacencies and a non-backtracking operator. The empirical evaluation shows that the new method provides a useful tool for real datasets. The reviewers raised some issues in writing and reference, for which the authors have provided clarification and modified the papers accordingly.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, accept\"}", "{\"title\": \"updated version\", \"comment\": \"We would like to thank again our three reviewers for their time and high-quality feedback. We have integrated their comments into an updated manuscript. The main changes include:\\n\\n-- ablation experiments of our GNN/LGNN architectures, in Sections 6.1 and 6.2\\n-- fixed several typos.\\n-- clarified assumptions of our landscape analysis (and mention that an open question is to study their validity in SBM models). (Section 5). \\n-- clarified finite-sample effects in our computational-to-statistical gap results (Section 6.2).\"}", "{\"title\": \"Ok for the answer\", \"comment\": \"Possibly it would help the reader, in order to connect the different parts of the paper, if the authors say in Section 5 explicitly that specifying the region of parameters for which these assumptions are satisfied for the SBM (and other models) is an open question.\\n\\nOtherwise I find the suggested adjustments satisfactory, and maintain my original rating.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you very much for the constructive and high-quality comments.\\n \\n\\u201c\\u2026why this paper restricts itself to community detection, rather than general node-classification problems for broader audience\\u201d\\n \\nThe reason why we restrict ourselves to community detection problems is that it is a relatively well-studied setup, for which several algorithms have been proposed, and where computational and statistical thresholds are known in several cases. In addition, synthetic datasets can be easily generated for community detection. Therefore, we think it is a good testbed for comparing different algorithms. However, it is a very good point that GNN and LGNN can be applied to other node-wise classification problems as well. 
We will modify the text to highlight this point. \\n \\n\\u201cTo make sure of the actual gain of LGNN, this needs to be done with some ablation studies.\\u201d\\n \\nThis is a valid suggestion. You correctly pointed out that GAT does not utilize the degree matrix directly, and so we are planning to perform ablation experiments by removing the degree matrix from GNN and LGNN. We did add spatial batch normalization steps to the GAT and MPNN models we used, and in the experiments we found that spatial batch normalization is crucial for the performance of the models, including GNN, LGNN, GAT and MPNN. The reason for this is outlined at the end of Section 4.1, in which we assimilate the spatial normalization with removing the DC component of node features, which is aligned with the eigenvector of the adjacency matrix with the leading eigenvalue. \\n\\n \\n \\u201cThe performance gain is not so significant compared to other simpler baselines, so the net contribution of the line-graph extension is unclear considering the above.\\u201d\\n \\nAlthough not all differences in the results are statistically significant (where we consider 2 sigma to be significant), we still think it is worth noting that in all of the experiments (binary SBM, 5-class dissociative SBM, GBM and SNAP data), LGNN achieved better averaged performance than all other algorithms, including the GNN without the line graph. We also note that the complexity in operations/memory of using LGNN is the same as that of the alternative edge-learning methods we compared against, so these gains come essentially for free.\\n \\n\\\"The experimental section considers only a small number of classes (2-5), so it does not show how it scales with a large number of classes\\\"\\n \\nThis is indeed an interesting direction for future research. We will highlight this current limitation and discuss possible routes.\"}", "{\"title\": \"Response\", \"comment\": \"We sincerely thank the reviewer for his time and constructive comments.\\n\\nRegarding the reference of Krzakala et al., 2013, \\u201cSpectral redemption in clustering sparse networks\\u201d, you are correct that we should mention the fact that it introduced the non-backtracking operator for community detection. Thanks for this important remark; this is in fact a landmark paper central to our construction.\\n \\n\\u201cOn the Computational-Statistical Gap Experiment\\u201d\\nIt is correct that the computational and statistical thresholds for detection are defined asymptotically, and therefore our experimental results with finite-size graphs do not contradict those thresholds. We only hoped to demonstrate the good performance of the GNN and LGNN models in these scenarios. We hypothesize two possible scenarios: either the network is picking up finite-size effects that standard BP is unable to exploit, or the network actually improves asymptotic detection. We are currently exploring this question and hoping to provide some answers to it. 
In any case, we appreciate your comment, and will modify the statement of the implication of our experimental results in the paper.\"}", "{\"title\": \"Response to the Review\", \"comment\": \"We very much appreciate the compliments as well as the comments on the several claims in the paper.\\n \\nBy \\u201cimproving upon current computational thresholds in hard regimes,\\u201d we indeed meant to say that the results of our algorithms on finite-size graphs are better than those of belief propagation, which is known to reach the computational threshold of such problems. We will change the phrasing of the claim in the paper.\\n \\n\\u201cOn the simplifications of the energy landscape analysis\\u201d:\\nThe simplifications that we made in the theoretical analysis are actually discussed in detail in section 5, including using squared cosine distance in place of cross-entropy loss, using a single feature map, removing nonlinearities, replacing spatial batch normalization by projection onto the unit l_2 ball, as well as reparametrizing the network\\u2019s parameters according to the Krylov subspace generated by the set of operators. The assumptions are that the four quantities defined in Theorem 5.1 are finite. It is indeed a highly interesting question for which classes of graphs (for example, for what regimes of the stochastic block model) these assumptions are satisfied. We don\\u2019t have theoretical results for this question yet, although it will certainly be of great interest to future work.\\n \\nOn \\\"multilinear fully connected neural networks whose landscape is well understood (Kawaguchi, 2016).\\\" this is in my opinion grossly overstated.\\u201d \\n\\nThe reviewer is correct in that the optimization landscape of deep, nonlinear neural networks is still far from understood. We were referring to the case with no activation functions (multilinear), in which the situation is much simpler. We will modify the text to make sure there is no ambiguity.\"}", "{\"title\": \"an interesting and novel GNN, but somehow unclear in experiments.\", \"review\": \"This paper introduces a novel graph conv neural network, dubbed LGNN, that extends the conventional GNN using the line graph of edge adjacencies and a non-backtracking operator. It has a form of learning directed edge features for message-passing. An energy landscape analysis of the LGNN is also provided under linear assumptions. The performance of LGNN is evaluated on the problem of community detection, comparing with some baseline methods.\\n\\nI appreciate the LGNN formulation as a reasonable and nice extension of GNN. The formulation is clearly written and properly discussed in relation to message-passing algorithms and other GNNs. Its potential hierarchical construction is also interesting, and may be useful for large-scale graphs. In the course of reading this paper, however, I don\\u2019t find any clear reason why this paper restricts itself to community detection, rather than general node-classification problems for a broader audience. It would have been more interesting if it covered other classification datasets in the experiments. \\n\\nMost of the weak points of this paper lie in the experimental section. \\n1. The experimental sections do not have proper ablation studies, e.g., as follows. \\nAs commented in Sec 6.3, GAT may underperform due to the absence of the degree matrix, and this needs to be confirmed by running GAT with the degree term. And, as commented in footnote 4, the authors used spatial batch normalization to improve the performance of LGNN. 
But it\u2019s not clear how much gain it obtains in each experiment and, more importantly, whether they use the same spatial batch norm in the other baselines. To make sure of the actual gain of LGNN, this needs to be done with some ablation studies. \\n2. The performance gain is not so significant compared to other simpler baselines, so the net contribution of the line-graph extension is unclear considering the above. \\n3. The experimental section considers only a small number of classes (2-5), so it does not show how it scales with a large number of classes. In this sense, other benchmark datasets with more classes (e.g., the PPI datasets used in the GAT paper) would be better. \\n\\nI hope to get answers to these.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting new take on GNN with the non-backtracking operator\", \"review\": \"Graph Neural Networks (GNN) are gaining traction and generating a lot of interest. In this work, the authors apply them to the community detection problem, and in particular to graphs generated from the stochastic block model. The main new contribution here is called the \\\"line graph neural network\\\", which operates directly over the edges of the graph, efficiently using the power of the \\\"non-backtracking operator\\\" as a spectral method for such problems.\\n\\nTraining such GNNs on data generated from the stochastic block model and other graph-generating models, the authors show that the resulting method can be competitive on both artificial and real datasets.\\n\\nThis is definitely an interesting idea, and a nice contribution to GNNs, that should be of interest to ICML folks.\\n\\nReferences and citations are fine for the most part, except for one very odd exception concerning one of the main objects of the paper: the non-backtracking operator itself! While discussed in many places, no references whatsoever are given for its origin in detection problems. I believe this is due to (Krzakala et al, 2013) ---a paper cited for other reasons--- and given the importance of the non-backtracking operator for this paper, this should be acknowledged explicitly.\", \"pro\": \"Interesting new idea for GNN, leading to a more powerful method and opening an exciting direction of research. A nice theoretical analysis of the landscape of the graph.\", \"con\": \"The evidence provided in Table 1 is rather weak. The hard phase is defined in terms of computational complexity (polynomial vs exponential) and therefore requires tests on many different sizes.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An impressive piece of work opening the exciting possibility of discovering optimal algorithms with machine learning. A couple of misleading statements to be adjusted.\", \"review\": \"This paper presents a study of the community detection problem via graph neural networks. The presented results open the possibility that neural networks are able to discover the optimal algorithm for a given task. This is rather convincingly demonstrated on the example of the stochastic block model, where the optimal performance is known (for 2 symmetric groups) or strongly conjectured (for more groups). 
The method is rather computationally demanding, and also somewhat unrealistic in the respect that the training examples might not be available, but for a pioneering study of this kind this is well acceptable.\\n\\nDespite my overall very positive opinion, I found a couple of claims that are misleading and overall hurt the quality of the paper, and I would strongly suggest that the authors adjust these claims:\\n\\n** The method is claimed to \\\"even improve upon current computational thresholds in hard regimes.\\\" This is misleading, because (as correctly stated in the body of the paper) the computational threshold to which the paper refers applies in the limit of large graph sizes, whereas the observed improvements are for finite sizes. It is shown here that for finite sizes the present method is better than belief propagation. But this clearly does not imply that it improves the conjectured computational thresholds, which are asymptotic. At best this is an interesting hypothesis for future work, not more. \\n\\n** The energy landscape is analyzed \\\"under certain simplifications and assumptions\\\". The conclusions state \\\"an interesting transition from rugged to simple as the size of the graphs increase under appropriate concentration conditions.\\\" This is very vague. It would be great if the paper could offer an intuitive explanation of these simplifications and assumptions, something between these unclear remarks and the full statement of the theorem and its proof, which I did not find simple to understand. For instance, state the intuition about the region of parameters in which those results are true and in which they are not. \\n\\n** \\\"multilinear fully connected neural networks whose landscape is well understood (Kawaguchi, 2016).\\\" this is in my opinion grossly overstated. While surely that paper presents interesting results, they are set in a regime that leaves a lot still to be understood about the landscape of fully connected neural networks. It is restricted to specific activation functions, and the results for non-linear networks rely on unjustified simplifications; the sample complexity trade-off is not considered, etc.\\n\\nMisprint: Page 2: cetain -> certain.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
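Since several notes in this record hinge on the non-backtracking operator, a minimal sketch of its standard construction may help. It acts on directed edges, with B[(u->v), (v->w)] = 1 exactly when the walk continues without immediately returning (w != u), following the definition popularized for community detection by Krzakala et al. (2013); the example graph below is arbitrary.

```python
# Build the non-backtracking matrix B of a small undirected graph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # undirected edge list
directed = edges + [(v, u) for u, v in edges]     # both orientations

m = len(directed)
B = np.zeros((m, m), dtype=int)
for i, (u, v) in enumerate(directed):
    for j, (a, b) in enumerate(directed):
        if a == v and b != u:                     # extend u->v by v->w with w != u
            B[i, j] = 1

# Spectral methods cluster nodes using eigenvectors of B; the LGNN of this
# paper instead learns message passing over the same edge-adjacency structure.
print(B.shape)                                    # (8, 8)
print(np.linalg.eigvals(B).round(2))
```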
r1xRW3A9YX
Riemannian TransE: Multi-relational Graph Embedding in Non-Euclidean Space
[ "Atsushi Suzuki", "Yosuke Enokida", "Kenji Yamanishi" ]
Multi-relational graph embedding which aims at achieving effective representations with reduced low-dimensional parameters, has been widely used in knowledge base completion. Although knowledge base data usually contains tree-like or cyclic structure, none of existing approaches can embed these data into a compatible space that in line with the structure. To overcome this problem, a novel framework, called Riemannian TransE, is proposed in this paper to embed the entities in a Riemannian manifold. Riemannian TransE models each relation as a move to a point and defines specific novel distance dissimilarity for each relation, so that all the relations are naturally embedded in correspondence to the structure of data. Experiments on several knowledge base completion tasks have shown that, based on an appropriate choice of manifold, Riemannian TransE achieves good performance even with a significantly reduced parameters.
[ "Riemannian TransE", "graph embedding", "multi-relational graph", "Riemannian manifold", "TransE", "hyperbolic space", "sphere", "knowledge base" ]
https://openreview.net/pdf?id=r1xRW3A9YX
https://openreview.net/forum?id=r1xRW3A9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJgHIF64eN", "ByxG803IRm", "S1g6X03LRm", "Sklj263IRQ", "HklXoxNThm", "SJgeuTL5hX", "rJgVurunjX", "BJgB_El19m", "HJlnVGo6Fm", "BylRsQETKm", "HyxxjkO2KQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1545029965186, 1543061066045, 1543061028618, 1543060914799, 1541386395411, 1541201256007, 1540289899847, 1538356332837, 1538269748036, 1538241445821, 1538191256028 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1221/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1221/Authors" ], [ "ICLR.cc/2019/Conference/Paper1221/Authors" ], [ "ICLR.cc/2019/Conference/Paper1221/Authors" ], [ "ICLR.cc/2019/Conference/Paper1221/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1221/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1221/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1221/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1221/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a generalization of the translation-style embedding approaches for link prediction to Riemannian manifolds. The reviewers feel this is an important contribution to the recent work on embedding graphs into non-Euclidean spaces, especially since this work focuses on multi-relational links, thus supporting knowledge graph completion. The results on WN11 and FB13 are also promising.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) the primary concern is the low performance on the benchmarks, especially WN18 and FB15k, and not using the appropriate versions (WN18-RR and FB15k-237), (2) use of hyperbolic embedding for an entity shared across all relations, and (3) lack of discussion/visualization of the learned geometry.\\n\\nDuring the discussion phase, the authors clarified reviewer 1's concern regarding the difference in performance between HolE and ComplEx, along with providing a revision that addressed some of the clarity issues raised by reviewer 3. The authors also justified the lower performance due to (1) they are focusing on low-dimensionality setting, and (2) not all datasets will fit the space of the proposed model (like FB15k). However, reviewers 2 and 3 still maintain that the results provide insufficient evidence for the need for Riemannian spaces over Euclidean ones, especially for larger, and more realistic, knowledge graphs.\\n\\nThe reviewers and the AC agree that the paper should not be accepted in the current state.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Insufficient evidence\"}", "{\"title\": \"Author feedback\", \"comment\": \"We sincerely thank you for the valuable comments and suggestions.\\nQuestion 1\\uff1aLack of clarity of the paper\", \"response\": \"The comparison of time complexity relies on two aspects: the sample size and dimensionality. In terms of the dimensionality, the Riemannian TransE has linear complexity w.r.t to D-dimensional by using any of hyperbolic spaces, spheres or Euclidean spaces, as well as direct products among them. Hence, it has the same complexity as that of TransE. In terms of the sample size, the both methods also have the same complexity, as they employ the same loss function (equation (8)) and the same stochastic gradient descent algorithm for optimization. 
It is also worth mentioning that both methods share the same algorithm as long as they use the same optimization components, e.g., (Riemannian) SVRG and (Riemannian) Adagrad. To sum up, we can conclude that Riemannian TransE has the same complexity as TransE.\", \"question_2\": \"Underperforming results compared to baselines on FB15k and WN18\", \"question_3\": \"Speed comparison between Riemannian TransE and TransE\"}", "{\"title\": \"Author feedback\", \"comment\": \"Thanks very much for your careful reading and constructive feedback. We are also greatly encouraged by your insightful comments, e.g., that the paper is interesting and does a great job at summarizing existing scoring functions.\", \"question_1\": \"Results on WN18 and FB15k are less accurate\", \"response\": \"Thanks for raising a good question. According to the first column in Table 1, the number of parameters for Riemannian TransE is significantly smaller than that of TransH, TransR, and TransD, but slightly larger than that of other approaches. However, since usually |R|<<|V| (for example, in our experiments, |R| is around 10 but |V| is often larger than 100000), the numbers of parameters for Riemannian TransE and the other methods are nearly the same, i.e., approximately D|V|. In addition, for clearer demonstration and comparison, we have also added the detailed score function of our Riemannian TransE to the first row of Table 1.\", \"question_2\": \"Introduction - what does \\\"evaluating dense matrices or tensors\\\" mean?\", \"question_3\": \"Related Work - confusing description of mappings using terms like \\\"planet\\\", \\\"launcher\\\", \\\"satellite\\\", etc.\", \"question_4\": \"The inner product used by DistMult is not really a \\\"dissimilarity\\\" between two representations (but rather the opposite).\", \"question_5\": \"Whether the number of parameters is the same for Riemannian TransE as for the other methods\"}", "{\"title\": \"Author feedback\", \"comment\": \"Thank you very much for your time and expertise in reviewing our paper. We address your concerns and questions as follows.\", \"concern_1\": \"Motivation for the approach in this paper.\", \"response\": \"1) Thank you for pointing out an important point. Yes, ComplEx and HolE are \\\"equivalent\\\" in [1]; however, the equivalence cannot always hold. As we know, the vectors, e.g., [z1,\u2026, zD]^T, in ComplEx are complex D-dimensional; HolE in [1] converts a real D-dimensional vector [x1,\u2026, xD] into a complex D-dimensional vector [z1,\u2026, zD] with the Fourier transform. Hence, if [x1,\u2026, xD] is real, then the corresponding [z1,\u2026, zD] must be (conjugate) symmetric. From this viewpoint, HolE is not \\\"equivalent\\\" to ComplEx but rather ComplEx with a symmetry constraint. In other words, HolE is a subset of ComplEx. According to the original paper of HolE [2], HolE also adopts real vectors in our experiments, so the results for HolE and ComplEx are different.\n[1] Hayashi, K., and Shimbo, M. \\\"On the equivalence of holographic and complex embeddings for link prediction\\\", 2017.\n[2] Trouillon, T, and Nickel, M. \\\"Complex and Holographic Embeddings of Knowledge Graphs: A Comparison\\\", 2017.\n\n2) Though we conducted the experiments for each compared method according to its original settings, the discrepancies largely stem from the different reduced dimensionalities used in previous work and in our experiments. Previous results are based on a specific dimensionality, e.g., 50 in [1]. 
However, to validate the effectiveness of the proposed approach with further reduced low-dimensional embeddings, the results in our paper are based on smaller dimensionalities, e.g., 8 or 16. \n\n[1] Bordes, Antoine, et al. \\\"Translating embeddings for modeling multi-relational data.\\\" Advances in neural information processing systems. 2013.\", \"concern_2\": \"Benefits of using a single (hyperbolic) embedding of entities across all relation types.\", \"concern_3\": \"1) Different results for HolE and ComplEx\n 2) Discrepancies of our experimental results from previous work\"}", "{\"title\": \"Review - Riemannian TransE\", \"review\": \"The paper proposes a new approach to computing embeddings of multi-relational data such as knowledge graphs. For this purpose, the paper introduces a variant of TransE that operates on Riemannian manifolds, in particular Euclidean, spherical, and hyperbolic space. This approach is motivated by the results of Nickel & Kiela (2017), who showed that hyperbolic space can provide important advantages for embedding graphs with hierarchical structure.\n\nHyperbolic and Riemannian embeddings are a promising research area that fits well into ICLR. Extending hyperbolic, and more generally Riemannian, embeddings to multi-relational data is an important aspect in this context, as it allows such methods to be extended to new applications such as knowledge graph completion. Overall, the paper is written well and mostly easy to understand. However, I am concerned about multiple aspects of the current version:\n\n- What is the motivation for using this particular form of translation? In Riemannian manifolds, the analogue of vector addition and subtraction is typically taken to be the exponential or logarithmic map (as expm and logm in Euclidean space are exactly vector addition and subtraction). For the spherical and hyperbolic manifolds, both maps have closed-form expressions and are differentiable. It is therefore not clear to me what the advantage of the proposed approach is compared to these standard methods. In any case, it would be important to include them in the experimental results.\n\n- It is also not clear to me whether the benefits of hyperbolic embeddings translate into the setting that is proposed here. The advantage of hyperbolic embeddings is that they impose a hierarchical structure in the latent space. The method proposed in this paper then uses a single (hyperbolic) embedding of entities across all relation types. This implies that there should be a single consistent hierarchy that explains all the links in all relations. This seems unlikely and might explain some of the hyperbolic results. A more detailed discussion and motivation would be important here.\n\n- Regarding the experimental results: Why are the results for HolE and ComplEx so different? As [1,2] showed, both models are identical, and for that reason should get close to identical results. The large differences seem inconsistent with these results. Furthermore, it seems that the results reported in this paper do not match previously reported results. What is the reason for these discrepancies?\n\n[1] Hayashi, K., and Shimbo, M. \\\"On the equivalence of holographic and complex embeddings for link prediction\\\", 2017.\n[2] Trouillon, T, and Nickel, M. 
\\\"Complex and Holographic Embeddings of Knowledge Graphs: A Comparison\\\", 2017.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Very interesting approach, but underwhelming results (despite the complexity)\", \"review\": \"In this paper, authors focus on the problem of efficiently embedding Knowledge Graphs in low-dimensional embedding spaces, a task where models are commonly evaluated via downstream link prediction and triple classification tasks. The proposed model - Riemannian TransE, based on TransE [Bordes et al. 2013] - maps entities to points in a non-Euclidean space, by minimising a loss based on the geodesic distance in such space. This paper is especially interesting, since extends previous approaches - such as Poincare embeddings - to the multi-relational setting. Results look promising on WN11 and FB13, but authors mention results on the more commonly used WN18 and FB15k are less accurate than those obtained by the baselines (without reporting them). It is worth mentioning that WN18 and FB15k were found to be solvable by very simple baselines (e.g. see [1]). Furthermore, authors do not report any finding on the geometry of the learned spaces.\\n\\nIntroduction - Wording is a bit weird sometimes, e.g. what does \\\"evaluating dense matrices or tensors\\\" mean?\\nRelated Work - Likewise, this section was a bit hard to follow. I do not fully get why authors had to use terms like \\\"planet\\\", \\\"launcher\\\", \\\"satellite\\\" etc. for describing mappings between entities and points in a manifold, relations and points in another manifold, and the manifold where the geodesic distances between representations are calculated.\\nWhat is the intuition behind this?\\nTab. 1 does a great job at summarising existing scoring functions and their space complexity. However, it may be worth noticing that e.g. the inner product used by DistMult is not really a \\\"dissimilarity\\\" between two representations (but rather the opposite). Is the number of parameters the same for Riemannian TransE as for the other methods (including the extra \\\"l\\\" parameters)? If it isn't the comparison may be slightly unfair.\\n\\n[1] https://arxiv.org/abs/1707.01476\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Lack of clarity and limited experimental success\", \"review\": \"This paper presents a generalization of TransE to Riemannian manifolds. While this work falls into the class of interesting recent approaches for using non-Euclidean spaces for knowledge graph embeddings, I found it very hard to digest (e.g. the first paragraph in Section 3.3). Figure 3 and 4 confused me more than helping me to understand the method. Furthermore, current neural link prediction methods are usually evaluated on FB15k and WN18. In fact, often on the harder variants FB15k-237 and WN18RR. For FB15k and WN18, Riemannian TransE seems to underperform compared to baselines -- even for low embedding dimensions, so I have doubts how useful this method will be to the community and believe further experiments on FB15k-237 and WN18RR need to be carried out and the clarity of the paper, particularly the figures, needs to be improved. 
Lastly, I would be curious how the different Riemannian TransE variants compare to TransE in terms of speed.\", \"update\": \"I thank the authors for their response and revision of the paper. To me, results on WN18RR and FB15k-237 are inconclusive w.r.t. the choice of using Riemannian as opposed to Euclidean space. I therefore still believe this paper needs more work before acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"LOW refers to around 10 dimensions in our (or other non-Euclidean embedding) context\", \"comment\": \"Please kindly note that in non-Euclidean embedding contexts such as [3] [4], LOW dimensionality refers to around 10 dimensions. In our context, 200 dimensions is TOO HIGH (note that [1] reports the case where m=n=100, i.e., the 200-dimensional case). Our results in 64 and 128 dimensions are no more than additional information (to observe the relation between dimension and accuracy).\nMoreover, we found that in their experiments, they used ADADELTA and RMSProp in [1] and [2], respectively, whereas we used SGD for all Trans-X methods for a fair comparison (we should distinguish the problem of the model from that of the optimizer), which we think is the reason for the difference in results (and we suspect that TransD is very sensitive to hyperparameters, datasets, and tasks, as TransD gives good results even in our experiments with some settings, such as hit@10 on WN18). \nHowever, we admit that we overlooked the results of TransE in 20 dimensions in [2].\nAlthough one might think that 20 dimensions is still high and our method works well below 20 dimensions, it is worthwhile to note the result in [2]. Thank you for your information.\n\n[1] Knowledge Graph Embedding via Dynamic Mapping Matrix. ACL 2015.\n[2] Neighborhood Mixture Model for Knowledge Base Completion. CoNLL 2016.\n[3] Nickel, Maximillian, and Douwe Kiela. \\\"Poincar\\u00e9 embeddings for learning hierarchical representations.\\\" Advances in neural information processing systems. 2017.\n[4] Ganea, Octavian-Eugen, Gary B\\u00e9cigneul, and Thomas Hofmann. \\\"Hyperbolic Entailment Cones for Learning Hierarchical Embeddings.\\\" ICML 2018.\"}", "{\"comment\": \"In the past (and even now), early models were often evaluated using LOW dimensionality (possibly because of limited computational resources). For example, in TransD [1], the embedding size is in {20, 50, 80, 100}, and TransD obtains accuracies of 86.4% and 89.1% on WN11 and FB13, respectively. Another example is TransE from [2], which gets an accuracy of 85.2% using an embedding size of only 20 on WN11, and an accuracy of 87.6% using an embedding size of 100 on FB13.\n\n[1] Knowledge Graph Embedding via Dynamic Mapping Matrix. ACL 2015.\n[2] Neighborhood Mixture Model for Knowledge Base Completion. CoNLL 2016.\", \"title\": \"Early models are often evaluated using LOW dimensionality\"}", "{\"title\": \"Performance in low dimensionality is our focus\", \"comment\": \"Thank you for your comment.\nPlease kindly note that we focus on the relation between accuracy and dimensionality, and specifically on performance in LOW dimensionality. 
\nThis is because good performance in LOW dimensionality is an advantage of using non-Euclidean spaces, as shown in [2].\nOn the other hand, most papers (including the paper you referred to) only report the result at the best dimensionality after a grid search. This is why we had to run the experiments ourselves.\n\nHowever, as you suggested, noting that some methods can attain better results at other dimensionalities (though these might be much higher than in our experiments) might be kinder to readers. We'll add a note about that in the revised version.\n\nThank you. \n\n[2] Nickel, Maximillian, and Douwe Kiela. \\\"Poincar\\u00e9 embeddings for learning hierarchical representations.\\\" Advances in neural information processing systems. 2017.\"}", "{\"comment\": \"You should report the experimental results from the original papers, which are much better than all the results (including yours) you reported in Table 2. You can see a part of them in [1].\n\n[1] An overview of embedding models of entities and relationships for knowledge base completion.\", \"title\": \"should report results from original papers\"}" ] }
H1e0-30qKm
Unlabeled Disentangling of GANs with Guided Siamese Networks
[ "Gökhan Yildirim", "Nikolay Jetchev", "Urs Bergmann" ]
Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations. In this paper, we introduce two novel disentangling methods. Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss. This pairwise approach provides consistent representations for similar data points. Our second method (UD-GAN-G, weakly supervised) modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks. This constraint helps UD-GAN-G to focus on the desired semantic variations in the data. We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations. In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data.
[ "GAN", "disentange", "siamese networks", "semantic" ]
https://openreview.net/pdf?id=H1e0-30qKm
https://openreview.net/forum?id=H1e0-30qKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJlv8jwWlE", "ryed_g33JE", "Syg5BZ_3yV", "HkgCEG12kV", "ryxmbDb7J4", "HJlG9UWmk4", "H1xlkUt20X", "B1xWrHZPRQ", "H1emnxgIRX", "HJl6Oxl8A7", "HklxQglURX", "rkemAkgIRQ", "BJxPm1eLRX", "HyeZp_ivTQ", "S1lR3oO8TX", "HyxHQfLq3m", "H1xX2I782m" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544809295069, 1544499311637, 1544483137943, 1544446517610, 1543866106611, 1543865993876, 1543439831519, 1543079225413, 1543008427344, 1543008372758, 1543008279627, 1543008203452, 1543008030708, 1542072505438, 1541995446276, 1541198364830, 1540925099499 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/Authors" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1220/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Author Response\", \"comment\": \"Thank you again for your review.\\n\\nUD-GAN and other unsupervised techniques already capture and disentangle various attributes in a given dataset. In this paper, our guided approach (UD-GAN-G) complements the unsupervised literature by offering a simple way to further disentangle some of the spuriously correlated variations without labeled data. For some datasets and desired behaviours, the design of a guidance function is straightforward - while for others, as correctly pointed out, it can be a very difficult task. However, our main goal is not to design guidance functions for each and every attribute. Instead, we supplement the unsupervised disentanglement process with guidance.\"}", "{\"metareview\": \"The paper received mixed reviews. It proposes a variant of Siamese network objective function, which is interesting. However, it\\u2019s unclear if the performance of the unguided method is much better than other baselines (e.g., InfoGAN). The guided version of the method seems to require much domain-specific knowledge and design of the feature function, which makes the paper difficult to apply to broader cases.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Rebuttal response\", \"comment\": \"I appreciate the authors' efforts made during the rebuttal period. The new results, especially the comparisons against other methods and the experiments on unguided approach, made the paper much stronger. 
However, I still have some concerns about the practical usefulness of the guidance employed in the paper, as it is designed heuristically to suit a specific dataset. Considering that the main contribution of the paper is introducing a generative framework that can incorporate additional guidance to learn disentangled representations, demonstrating the results with only ad-hoc guidance on a few specific datasets looks like a considerable drawback to me when it comes to recommending the acceptance of the paper.\"}", "{\"title\": \"feedback on author response\", \"comment\": \"Dear Authors,\n\nThank you for your response. The experimental setup makes more sense to me after your clarifications. As a consequence, I have increased my score from 5 to 6. However, all the explanations should be properly discussed in the paper.\n \nI cannot give a higher score given that I still think that the proposed approach to add weak supervision is too ad-hoc and difficult to apply in real scenarios. On the positive side, the unsupervised version of the model has its own merit.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your response. Please do let us know if you have any further inquiries about the updated version of our paper.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\n\nThank you again for your review and inquiries. You can find our response below.\n\n[Embedding Dimensionality of Our Methods]\n\nIn order to compute the disentanglement score, we chose our embedding dimensionality to be consistent with Beta-VAE and DIP-VAE. For the CelebA dataset, our embedding vector has a total of 32 dimensions. For the 2D shapes dataset, it has 10 dimensions. We take these values from Beta-VAE, and they are also repeated by DIP-VAE. You can find more details for our unsupervised and guided approaches below. We will explicitly emphasize this in the next iteration of our paper.\n\n(UD-GAN) For the CelebA dataset, we have 32 knobs, each corresponding to a one-dimensional latent slice. Our generator maps this 32-dimensional latent vector into images. After that, we could follow two ways to embed a generated image:\n\n1) By using 32 different siamese networks, each producing a 1-dimensional embedding\n2) By using 1 siamese network, which produces a 32-dimensional embedding\n\nWe use the second approach, which shares one siamese network and embeds an image into a 32-dimensional embedding vector. The main reason for this parameter sharing is to save GPU memory. Note that, for each knob, we calculate the contrastive loss by only using the corresponding single embedding dimension, not the whole 32-dim embedding vector. This means that each latent dimension corresponds to a single and unique embedding dimension.\n\n(UD-GAN-G) For the CelebA dataset, we use 32 knobs. The first 28 knobs are unguided and processed by the same siamese network, which embeds a generated image into a 28-dimensional embedding vector. As we explained for the unguided case above, this is only to save GPU memory, and each embedding dimension is still treated separately. For the remaining 4 knobs, a generated image is guided by four image crops and fed into four different siamese networks, each of which embeds an image into a 1-dimensional vector. The concatenation of all siamese network outputs thus has 32 dimensions. For the 2D shapes dataset, we use 10 knobs. 
The first 7 knobs are unguided and the remaining three knobs are guided with three separate siamese networks.\n\n[Visual Results for UD-GAN]\n\nWe did not include visual results for the UD-GAN case due to space constraints. It indeed learns semantically meaningful factors. We will add these visual results to our Appendix for a better comparison.\n\n[UD-GAN High-Level Generative Factors and Performance]\n\nIn Section \u201c3.4 Probabilistic Interpretation\u201d, we show that minimizing the contrastive loss results in a disentangled embedding representation. This is also illustrated in the correlation matrix of the inferred embedding vectors in Table 9 in Appendix G. Some of the embedding dimensions are strongly related to certain high-level generative factors, because it was possible to use our embedding vectors to classify the existence of an attribute, as shown in Table 2. With our unsupervised method (UD-GAN), there is no guarantee of capturing all of the desired high-level generative factors. However, it is still possible to model some of them, as long as the GAN does not have convergence issues, such as mode collapse.\n\n[Siamese Networks - Hair Color Attribute - Disentangling]\n\nIn Figure 2, for the hair color attribute, we only change the knob \\q_{top} that corresponds to the guided siamese network \\phi_{top}. The other latent dimension values are kept the same. The output of \\phi_{top} is a one-dimensional embedding, but it is not shown in Figure 2.\n\nIn Table 9, Appendix G, we show the correlation matrix of the concatenated embedding vectors (in total 32 dimensions for the CelebA dataset) both for UD-GAN and UD-GAN-G. We infer embedding vectors by passing real images through our siamese embedding networks and (similar to DIP-VAE) computing the correlation matrix on these vectors. As illustrated, individual embedding dimensions are uncorrelated with each other and correlated with real high-level attributes in the CelebA dataset.\n\n[Ad-Hoc Guidance in UD-GAN-G]\n\nComing up with a guidance function is easier for certain variations than for others. Our main goal is not to design guidance functions for each and every attribute. Instead, our guidance approach complements the literature in the sense that it offers a way to disentangle some of the spuriously correlated variations without labeled data. The rest of the variations can be modeled in an unsupervised way, similar to how UD-GAN and other unsupervised techniques operate.\"}", "{\"title\": \"feedback on author response\", \"comment\": \"Dear authors,\n\n\nI appreciate the efforts made during the rebuttal period. I think that the quality of the paper has been significantly improved. Especially, the inclusion of the soft-margin term in the contrastive loss sounds very interesting. \n\n\nAlthough some of my initial concerns have been addressed, some issues still remain unclear. Moreover, I have new questions given the large amount of new experimental results added.\n\n\n[Unguided Method]\n\nBy comparing UD-GAN with state-of-the-art methods, you have shown that you are able to outperform other unsupervised approaches on standard benchmarks. However, I think that the experimental setup that was followed is not clearly explained in the paper:\n\n\n-How many \u201cknobs\u201d/Siamese networks are used in UD-GAN? If it\u2019s only one (as I have understood), how is the method supposed to model different variation factors? 
\\n\\n\\n-What is the dimensionality of the latent representation used to compute the disentanglement metric? Is it the same than the one used in the compared methods? Otherwise, the reported results are not directly comparable.\\n\\n\\n-The authors do not show any qualitative result on the CelebA dataset for the unguided version. Does UD-GAN learn semantically meaningful factors in these dataset? \\n\\n\\nWithout all this information it is difficult to assess that UD-GAN is really learning disentangled representations. Moreover, as the authors state in the paper, there is not any guarantee that their unsupervised method will learn to model high-level generative factors. Therefore, it is counter-intuitive that UD-GAN outperformed all the previous state-of-the-art unsupervised methods. The paper does not provide any convincing explanation or discussion about this issue. \\n\\n\\n\\n[Guided Method]\\n\\nI still think that the approach used to provide weak-supervision is too ad hoc. I am not convinced about how the proposed strategy can be applied to real scenarios where removing information related with the generative factors can be extremely difficult. For example, in the CelebA dataset, how the followed strategy could be used to disentangle the \\u201cMake-up\\u201d attribute?\\n\\n\\nApart from this, I have also questions about the experimental setup followed to evaluate the guided method. In particular, it is unclear which is the representation used to compute the disentangle metric. As far as I have understood, it is the concatenation of the last layer of the different Siamese networks. This results in a much larger dimensionality compared to the one used in UD-GAN and in the compared baselines. As a consequence, the reported numbers are not directly comparable. \\n\\n\\nAlso related with this issue, if one Siamese network is supposed to capture information about one high-level factor (e.g, hair color), why the representation of all the Siamese networks is used?. In my opinion, a convincing evaluation would consists on using only the representation of the Siamese network that is supposed to model the specific attribute (e.g \\\\psi_{top}). This is the only way to actually show that the proposed method is disentangling the different high.level attributes.\\n\\n\\n[Revised score]\\n\\nIn conclusion, I slightly updated my score given the additional material provided in the updated version. However, I still think that the paper is not ready for publication given all the discussed issues.\"}", "{\"title\": \"response\", \"comment\": \"Thank you for the updated paper. The revised version is significantly better than the initial submission and addresses many of the points raised (most importantly, it provides quantitative comparison against existing methods). I have updated my score based on the latest iteration of the paper.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank you for reviewing our paper.\\n\\n[Unguided Case] Please refer to our general comment above on why our unguided case performs better now. The main usefulness of our guided approach is to directly capture some of the desired variations in the data. This is now clearer on our quantitative and visual results in the \\u201cExperiments\\u201d section.\\n\\n[Heuristic Guidance] The main premise behind guiding our siamese networks is to find very simple, yet effective ways to capture some of the variation in the data, through weak supervision. 
For more complex semantics, we discuss the possibility of using a pre-trained network as guidance. Please refer to our \\u201cDiscussion\\u201d section for more details.\\n\\n[Differentiable Guidance] The transformations need to be differentiable in order to backpropagate the gradients into our generator. This is now pointed out and discussed in our \\\"Discussion\\\" section. Although this limits the function families, we can still use differentiable relaxations of more complicated functions.\\n\\n[Gaussian Prior on Latents] In our new experiments, we used uniform distributions to model the generative factors. We had experiments with categorical variables, however, we faced training stability issues with them. We now point this out in our \\\"Discussion\\\" section. \\n\\n[Similar Latent Factors] We now use an adaptive margin that depends on the distance between two latent samples. So, if samples are close to each other, the margin is smaller, and vice versa. \\n\\n[Experiments Section] We now compare our method against Beta-VAE, DIP-VAE, and InfoGAN, both qualitatively and quantitatively. Please refer to our updated \\\"Experiments\\\" section.\\n\\n[Information of Guidance] In Figure 3, we visualize which part of an image was visible to a siamese network. In addition, we show how changing the corresponding guided knob affects the generated images.\\n\\n[More Than Two Attributes] We now use 32 dimensions for the CelebA dataset and 10 dimensions for the 2D shapes dataset.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank you for reviewing our paper.\\n\\n[Experiments Section] We have significantly updated qualitative and quantitative results in our \\\"Experiments\\\" section and now compare our methods against Beta-VAE, DIP-VAE, and InfoGAN.\\n\\n[InfoGAN] Compared to InfoGAN, our method is novel in two ways: First, we use separate networks to obtain the image embeddings, which enables us to guide some of these networks with simple functions. The guidance allows more control over the latent space, even in lack of data. Second, we use pairwise similarity/dissimilarity in order to perform disentangling, which is different from InfoGAN's approach of maximizing the label likelihood. This point is now addressed in our \\\"Related Work\\\" section.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank you for reviewing our paper.\\n\\n[Unguided Case and Disentanglement] Please refer to our general comment above on why our unguided case performs better now. We also updated our \\u201cProbabilistic Interpretation\\u201d section with analysis on how the contrastive loss helps us to learn a disentangled representation. Evidence and comparison to other methods on disentanglement is provided in Table 9 in Appendix G, where we visualize the correlations between our embedding dimensions.\\n\\n[Experiments Section] We have significantly updated qualitative and quantitative results in our \\\"Experiments\\\" section and now compare our methods against Beta-VAE, DIP-VAE, and InfoGAN.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank you for reviewing our paper.\\n\\n[Principled Guidance] The design of guidances is heuristic, but as illustrated in Figure 2 and in Table 2, they are easy to design and are effective. Further, we added our unsupervised analyses to show that the method works even without explicit guidance on all tested datasets. 
In this paper, we propose the idea of guidance itself and show that it imposes the desired semantics on the latent space without labeled data. In our future work, we plan to investigate more principled ways of choosing guidances. We now address this point in our \"Discussion\" section.\n\n[Experiments Section] We have significantly updated the qualitative and quantitative results in our \"Experiments\" section and now compare our methods against Beta-VAE, DIP-VAE, and InfoGAN.\"}", "{\"title\": \"Changes in the Paper\", \"comment\": [\"We would like to thank all of our reviewers for their insightful comments. Inspired by their suggestions, we have made the following changes to our paper:\", \"We updated our abstract to be more consistent with the changes in our paper.\", \"We changed the loss function we use from the WGAN-GP loss [1] to the original GAN loss in [2]. This significantly helped our approach to disentangle without guidance.\", \"We empirically found that the gradient penalty term in the WGAN-GP loss was preventing our unsupervised method from learning a disentangled representation. However, a theoretical insight into why this happens requires further analysis.\", \"We added a section to explain guidance functions and their purpose.\", \"We updated our \\u201cProbabilistic Interpretation\\u201d section to be more concise.\", \"We significantly updated our \\u201cExperiments\\u201d section with quantitative and qualitative comparisons with state-of-the-art techniques, such as Beta-VAE [3], DIP-VAE [4], and InfoGAN [5].\", \"We improved our \\u201cDiscussion\\u201d section to address the limitations of our method.\", \"[1] Improved Training of Wasserstein GANs (Gulrajani et al., NIPS 2017)\", \"[2] Generative Adversarial Networks (Goodfellow et al., NIPS 2014)\", \"[3] Beta-VAE: Learning basic visual concepts with a constrained variational framework. (Higgins et al., ICLR 2017)\", \"[4] Variational inference of disentangled latent concepts from unlabeled observations. (Kumar et al., ICLR 2018)\", \"[5] InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. (Chen et al., NIPS 2016)\"]}", "{\"title\": \"Interesting idea but incomplete justifications\", \"review\": \"[Edit] I changed my rating from 4 to 5 based on the author responses.\n=======\nThis paper proposes a GAN that learns disentangled factors of variation in an unsupervised (or weakly-supervised) manner. To this end, the proposed method incorporates a contrastive loss together with Siamese networks, which encourages the generator to output smaller variations in samples if they share the same latent factors. The proposed idea is evaluated on simple datasets such as MNIST and centered faces, and the results show that it is able to learn disentangled latent codes by incorporating some heuristics. \n\nAlthough the paper presents an interesting and reasonable idea, I think the paper is incomplete and in the proof-of-concept stage. In terms of method, the guidance for learning Siamese networks is designed heuristically (e.g. edges, colors, etc.), which limits its applicability over various datasets; I think that designing a more principled approach to build such guidances from data should be one of the key contributions of the paper. In terms of evaluation, the authors only presented a few qualitative results on simple datasets, which is not comprehensive and convincing. 
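For concreteness, the objective summarized in this review is a standard pairwise contrastive loss in the style of Hadsell et al. (2006); the sketch below is illustrative and not the paper's exact formulation (which, per the rebuttal above, uses an adaptive margin):

import torch

def contrastive_loss(d, same, m=1.0):
    # d: embedding distances for a batch of pairs; same: 1.0 if a pair
    # shares the latent chunk (pulled together), 0.0 otherwise (pushed
    # apart up to the margin m).
    pull = same * d.pow(2)
    push = (1.0 - same) * torch.clamp(m - d, min=0.0).pow(2)
    return (pull + push).mean()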
\\n\\nIn conclusion, I suggest a reject of this paper due to the lacks of comprehensive study and evaluation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"[EDIT]: I have updated my score after the author response and paper revision.\\n=============================\\n\\n[I was asked to step in as a reviewer last minute. I did not look at the other reviews].\\n\\n-------------------------------\\nSummary\\n-------------------------------\\nThis paper proposes to learn disentangled latent states under the GAN framework. The core idea is to partition the latent states into N partitions, and correspondly have N Siamese networks that pull the generated images with the same latent partition towards each other, along with a contrastive loss which ensures generated images with different latent partitions to be different. The authors experiment with two setups: in the \\\"unguided setup\\\" training is completely unsupervised, while in the \\\"guided\\\" setup, there is some weak supervision to encourage different partitions to learn different factors.\\n\\n-------------------------------\\nEvaluation\\n-------------------------------\\nWhile the motivation is nice, I find the results (especially in the unguided setup) underwhelming. This does not seem surprising to me, as in the unguided case, the constrative loss seems not strong enough to encourage the latent partitions to be different. Results with weak supervision (their method for injecting weak supervision was very nice) are more impressive. However, there is no comparison against existing work. Learning disentangled representations with deep generative models is very much an active area. Here are some recent papers:\", \"https\": \"//arxiv.org/abs/1802.04942\\n\\nImportantly, there are no quantitative metrics. I do not think this work is ready for publication.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Very interesting idea with insufficient experimental validation\", \"review\": \"The paper proposes a framework for learning interpretable latent representations for GANs. The key idea is to use siamese networks with contrastive loss. Specifically, it decomposes the latent code to a set of knobs (sub part of the latent code). Each time it renders different images with different configurations of the knobs. For example, 1) as changing one knob while keeping the others, it expects it would only result in change of one attribute in the image, and 2) as keeping one knob while changing all the others, it expects it would result in large change of image appearances. The relative magnitude of change for 1) and 2) justifies the use of a Siamese network in addition to the image discriminator in the standard GAN framework. The paper further talks about how to use inductive bias to design the Siamese network so that it can control the semantic meaning of a particular knob.\\n\\nWhile I do like the idea, I think the paper is still in the early stage. First of all, the paper does not include any numerical evaluation. It only shows a couple of examples. It is unclear how well the proposed method works in general. In addition, the InfoGAN work is designed for the same functionality. 
The paper should compare the proposed work to the InfoGAN work both quantitatively and qualitatively to justify its novelty.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting problem but limited approach and evaluation\", \"review\": \"Summary\n\nThe paper presents a novel approach for learning a generative model where different factors of variation can be independently manipulated. The method is built upon the GAN framework, where the latent variables are divided into different subsets (chunks) which are expected to encode information about high-level factors of variation. To this end, a Siamese Network for each chunk is trained with a contrastive loss minimizing the distance between generated images sharing the same factor (the latent variables in the chunk are equal), and maximizing the distance between pairs where the latent variables differ. Given that the proposed model fails in this fully-unsupervised setting, the authors propose to add weak supervision to the model by forcing the Siamese networks to focus only on particular aspects of the generated images (e.g., color, edges, etc.). This is achieved by applying a basic transformation over the input images in order to remove specific information. The evaluation of the proposed model is carried out using the MS-Celeb dataset, where the authors provide qualitative results.\n\n\nMethodology\n\n*Disentangling generative factors without explicit labels is a challenging and interesting problem. The idea of dividing the latent representation into different subsets and using a proxy task involving triplets of images has already been explored in [3]. However, the use of Siamese networks in this context is novel and sound.\n\n*As shown in the reported results, the proposed method fails to learn meaningful factors in the unsupervised setting. However, the authors do not provide an in-depth discussion of this phenomenon. Given that previous works [1,2,3] have successfully addressed this problem using a completely unsupervised approach, it would be necessary to give more insights about: (i) why the proposed method is failing, (ii) why this negative result is interesting, and (iii) whether the method could be useful in other potential scenarios. \n\n*The strategy proposed to introduce weak supervision is too ad-hoc. I agree that using cues such as the average color of an image can be useful if we want to model basic factors of variation. However, it is unclear how a similar strategy could be applied if we are interested in learning variables with higher-level semantics such as the expression of a face or its pose.\n\n*As far as I understand, the transformations applied to the input images (e.g., edge detection) must be differentiable (given that it is necessary to backpropagate the gradient of the contrastive loss through the generator network). If this is the case, this should be properly discussed in the paper. Moreover, given that the number of differentiable transformations is limited, this also restricts the application of the proposed method to more interesting scenarios. \n\n*It is not clear why the latent variables modelling the generative factors are defined using a Gaussian prior. How is the case where two images have very similar latent factors avoided while generating pairs of images for the Siamese network? Have the authors considered using categorical or binary variables? 
The use of the contrastive loss sounds more appropriate in this case.\n\nExperimental results\n\n*The experimental section is too limited. First of all, only a small number of qualitative results are reported and, therefore, it is very difficult to assess the proposed method and draw any conclusion. For example, when the edge extractor is used, what kind of information is modeled by the latent variables? Is it consistent across different samples?\n\nMoreover, it is not clear why the authors have limited the evaluation to the case where only two \u201cchunks\u201d are used. In principle, the method could be applied with many more subsets of latent variables, which could then be manually inspected to check if they are semantically meaningful (see [2]). \n\n*As previously mentioned, there are many recent works addressing the same problem from a fully-unsupervised perspective [1,2,3]. All these works provide quantitative results evaluating the learned representations by using them to predict real labels (e.g., attributes in the CelebA dataset). The authors could provide a similar evaluation for their method by using the feature representations learned by the siamese networks in order to evaluate how much information they convey about real factors of variation. This could clarify the advantages of the weakly-supervised strategy compared to unsupervised approaches.\n\nReview summary\n\n+The addressed problem (learning disentangled representations without explicit labeling) is challenging and interesting.\n\n+The idea of using a proxy task (contrastive loss with triplets of generated images) is somewhat novel and promising.\n\n-The authors report only negative results for the fully-unsupervised version of UD-GAN. The paper lacks an in-depth discussion about why this negative result is interesting.\n\n-The strategy proposed to provide weak supervision to the model is too ad-hoc, and it is not clear how to apply it in general applications.\n\n-The experimental section does not clarify the benefits of the proposed approach. In particular, the qualitative results are too limited and no quantitative evaluation is provided.\n\n\n[1] Variational Inference of Disentangled Latent Concepts from Unlabelled Observations (Kumar et al., ICLR 2018)\n\n[2] Beta-VAE: Learning basic visual concepts with a constrained variational framework. (Higgins et al., ICLR 2017)\n\n[3] Disentangling Factors of Variation by Mixing Them. (Hu et al., CVPR 2018)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ryeaZhRqFm
Link Prediction in Hypergraphs using Graph Convolutional Networks
[ "Naganand Yadati", "Vikram Nitin", "Madhav Nimishakavi", "Prateek Yadav", "Anand Louis", "Partha Talukdar" ]
Link prediction in simple graphs is a fundamental problem in which new links between nodes are predicted based on the observed structure of the graph. However, in many real-world applications, there is a need to model relationships among nodes which go beyond pairwise associations. For example, in a chemical reaction, the relationship among the reactants and products is inherently higher-order. Additionally, there is a need to represent the direction from reactants to products. Hypergraphs provide a natural way to represent such complex higher-order relationships. Even though Graph Convolutional Networks (GCN) have recently emerged as a powerful deep learning-based approach for link prediction over simple graphs, their suitability for link prediction in hypergraphs is unexplored -- we fill this gap in this paper and propose Neural Hyperlink Predictor (NHP). NHP adapts GCNs for link prediction in hypergraphs. We propose two variants of NHP -- NHP-U and NHP-D -- for link prediction over undirected and directed hypergraphs, respectively. To the best of our knowledge, NHP-D is the first method for link prediction over directed hypergraphs. Through extensive experiments on multiple real-world datasets, we show NHP's effectiveness.
[ "Graph convolution", "hypergraph", "hyperlink prediction" ]
https://openreview.net/pdf?id=ryeaZhRqFm
https://openreview.net/forum?id=ryeaZhRqFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlPeBuEeV", "r1xLwJqHJ4", "rJxYm2KHk4", "B1xcPUDSyV", "HJlSr0o80Q", "rklrbi_4Rm", "r1xcxV_4R7", "SJxfTVfNpQ", "BJenCQzNaX", "B1xL2zM4pX", "S1xMYoWc2Q", "r1x10QuUhm", "HJxLFwebh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545008367447, 1544032094177, 1544031264823, 1544021602241, 1543056956658, 1542912764867, 1542910962067, 1541838010015, 1541837780502, 1541837486188, 1541180281883, 1540944839086, 1540585342499 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1218/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1218/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/Authors" ], [ "ICLR.cc/2019/Conference/Paper1218/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1218/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1218/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper describes a method for the link prediction problem in both directed and undirected hypergraphs. While the problem discussed in the paper is clearly importnant and interesting, all reviewers agree that the novelty of the proposed approach is somewhat limited given the prior art.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting and important problem, somewhat limited novelty of the approach\"}", "{\"title\": \"Our clarifications\", \"comment\": \"Thanks for the response.\", \"on_the_novelty_of_our_work\": \"We reiterate that the main novelty / contribution of our work is to explore\\n1) an unexplored problem (link prediction in directed hypergraphs) \\n2) an underexplored problem (link prediction in undirected hypergraphs) and to propose the first neural-network-based method for the problem \\n\\nWe have proposed a unified framework for the two important and interesting problems and our proposed solution is conceptually simple, yet effective.\\n\\n\\n\\nOn including the extra term L_d for CMM + MLP:\\nThe baseline we have compared against is plain CMM + MLP (sequential). CMM uses the expectation-maximisation (EM) algorithm to optimise its objective function to predict hyperlinks. Since CMM is not solved by the conventional gradient descent-based methods, using the term L_d jointly with EM is a non-trivial problem in itself (it is not as straightforward as adding an extra term to the loss function).\", \"on_sampling_candidate_papers\": \"As motivated in section 3, in the case of multi-author collaborations of academic/technical papers, hyperlinks have cardinalities less than a small number, as papers seldom have more than 6 authors. We looked at the distribution of the number of authors of actual (positive) papers and sampled an equal number of negative (fake) papers from the distribution. This means that although there are a large number of potential fake papers, we can make do with a vastly reduced number because of our sampling strategy. 
\\n\\nA related work [1] also has sampled an equal number of negative links for all datasets in its experiments. \\n[1] Link Prediction Based on Graph Neural Networks, Muhan Zhang and Yixin Chen, NeurIPS 2018\"}", "{\"title\": \"On the novelty of our work\", \"comment\": \"Thanks for the response. We reiterate that the main novelty / contribution of our work is to explore\\n1) an unexplored problem (link prediction in directed hypergraphs)\\n2) an underexplored problem (link prediction in undirected hypergraphs) and to propose the first neural-network-based method for the problem\\n\\nWe have proposed a unified framework for the two important and interesting problems and our proposed solution is conceptually simple, yet effective.\"}", "{\"title\": \"Addressed some of my concerns, baselines need clarification, marginal technical merit\", \"comment\": \"Thanks for your response. It addressed some of my concerns. However, I still have concerns on the baselines in the directed setting and the technical merit (novelty). For the baseline in the directed setting, does the CMM+MLP include the extra term L_d in Eq. (4) or it is just the plain CMM plus MLP?\\n\\nFor the sampling of candidate papers, it seems to be quite a unrealistic (and overly simplified) setting, since the number of possible combinations of authors is huge. It would be interesting (and more realistic) to sample much more fake papers and see how the methods perform (ideally we can see similar margin between NHP and the baselines).\"}", "{\"title\": \"Improved experimental results, but novelty still somewhat questionable\", \"comment\": \"Hi,\\n\\nI have read the authors' responses to my comments, as well as the other reviews and responses. The additional experiments do clarify some of my questions about the empirical behavior of the proposed approach.\\n\\nStill, all reviewers had questions about the novelty of the proposed approach. I do not believe any of the rebuttals address these concerns, so my overall rating is not changed.\"}", "{\"title\": \"Summary of revisions\", \"comment\": [\"We thank the reviewers for their reviews. Below, we have summarised the revisions made to our paper in the rebuttal period. The majority of the revisions have been in the experiments (section numbers 5, 6 and 7).\", \"In section 5, we have added descriptions and results of a couple of baselines (node2vec, and GCN on star expansion) as suggested by reviewer 3.\", \"In section 6, we have added descriptions and results of three baselines (node2vec + MLP, CMM + MLP, and GCN on star expansion + MLP) as suggested by reviewers 2 and 3.\", \"We have added a new section (section 7) to compare our strategy of positive unlabeled learning and negative sampling uniformly at random as suggested by reviewer 1.\", \"We have corrected all typos, and cited missing references as suggested by the reviewers. The revisions can be compared using the compare revisions option on the revisions page.\"]}", "{\"title\": \"Our response to minor comments of AnonReviewer1\", \"comment\": \"On comparisons with random negative sampling:\\nBelow, we have compared our strategy of positive-unlabeled learning against uniform random negative sampling. 
\\n-------------------------------------------------------------------------------------------------------------------------\\n dataset iAF692 iHN637 iAF1260b iJO1366\\n-------------------------------------------------------------------------------------------------------------------------\\nrandom negative sampling 236 +/- 32 415 +/- 47 967 +/- 125 1074 +/- 168\\n-------------------------------------------------------------------------------------------------------------------------\\npositive-unlabeled learning 313 +/- 6 360 +/- 5 1258 +/- 9 1381 +/- 9\\n-------------------------------------------------------------------------------------------------------------------------\\n\\nAs can be seen from the table, the standard deviations of random negative sampling are on the higher side. This is expected as the particular choice made for negative samples decides the decision boundary for the binary classifier.\\n\\nWe request the reviewer to see the updated paper for AUC numbers and a discussion around these results in the updated section 7 of our paper. \\n\\n\\n\\nOn adding Recall@$\\\\Delta E$ in tables:\\nWe have added recall@$\\\\Delta E$ of NHP for all datasets in both undirected and directed hypergraph experiments in our updated paper. We have retained the raw hyperlinks recovered as they contain standard deviations in addition to mean values.\\n\\n\\n\\nOn the arXiv submission 1809.09401:\\nThe submission uses the clique expansion to approximate the hypergraph which is similar to our work. We have cited the submission in our updated paper. However, it does not use the dual hypergraph idea nor any negative sampling technique. \\n\\n\\n\\nOn connecting experimental results and dataset sizes/densities:\\nWe cannot draw general conclusions connecting results and dataset sizes/densities. In general we observe that NHP outperforms the baselines because the graph convolutional network is tailor-made for semi-supervised learning with small amounts of labeled data (10% in our experiments).\"}", "{\"title\": \"Our response to AnonReviewer3\", \"comment\": \"Thanks for the review.\", \"on_the_novelty_of_our_work\": \"Link prediction in undirected hypergraphs is an underexplored problem and that in directed hypergraphs is an unexplored problem. Our main contribution is a unified framework for both the settings and our proposed solution is conceptually simple, yet effective. We believe the problem settings are important and interesting (as noted by the other reviewers too), and that this paper will inspire further research in this direction.\\n\\n\\n\\nOn comparison with PinSage [Ying et al. KDD 2018]:\\nPinSage has been designed to work on the bipartite graph of Pinterest. The Pinterest graph can be seen as the star expansion of a hypergraph with pins (hypernodes) on one side of the partition and boards (hyperlinks) on the other side. Following the reviewer\\u2019s suggestion, we have compared NHP against star expansion below.\", \"on_comparison_with_node2vec\": \"Following the reviewer\\u2019s suggestion, we have compared NHP against node2vec. Node2vec has been shown to be superior to DeepWalk and LINE in [Grover et al. KDD 2016] and hence we have compared only against it. We have also compared NHP against CMM+MLP as suggested by reviewer #2. 
We report only the number of reactions recovered in the undirected hypergraph experiments.\\n\\n-----------------------------------------------------------------------------------------------------------------------------------\\ndataset\\t\\t\\t\\t \\t iAF692\\t iHN637\\t iAF1260b\\t iJO1366\\n-----------------------------------------------------------------------------------------------------------------------------------\\nnode2vec \\t\\t\\t\\t 299 +/- 10\\t 303 +/- 4\\t 1100 +/- 13\\t 1221 +/- 21\\n-----------------------------------------------------------------------------------------------------------------------------------\\nGCN on star expansion\\t 174 +/- 5\\t 219 +/- 12\\t 649 +/- 10\\t 568 +/- 18\\n-----------------------------------------------------------------------------------------------------------------------------------\\nNHP-U (ours)\\t\\t\\t 313 +/- 6\\t 360 +/- 5\\t 1258 +/- 9\\t 1381 +/- 9\\n-----------------------------------------------------------------------------------------------------------------------------------\\n\\nWe request the reviewer to see the updated paper for AUC numbers. We have updated the results for all the other datasets and experiments and we request the reviewer to see the paper.\\n\\nFrom the table above, we can see that the star expansion of a hypergraph is less effective because there are no direct connections between chemical reactions (because the graph is bipartite). Clique expansion, on the other hand, connects two chemical reactions if they share a chemical substance and hence can exploit the relationships much better.\\n\\n\\n\\nOn the size of the datasets used:\\nOur work was motivated by the task of predicting reactions, for which we used datasets already available in the literature (given by Zhang et al., AAAI 2018). Regarding the co-authorship datasets used, we had to filter the large datasets already available to ensure that meaningful hyperlinks were obtained, which led to some reduction in size. We request the reviewer to take a look at the appendix for the exact details.\\n\\n\\n\\nOn simultaneous learning of node and edge embeddings:\\nNHP, our proposed method, learns node embeddings in the dual hypergraph, which is the same as learning hyperlink embeddings in the primal. While PinSage works on the Pinterest bipartite graph (star expansion) and hence involves simultaneous learning of node/edge embeddings, NHP works on the clique expansion and learns node embeddings of the dual.\"}", "{\"title\": \"Our response to AnonReviewer2\", \"comment\": \"Thanks for the review.\\n\\nOn the novelty of our work: Link prediction in undirected hypergraphs is an underexplored problem and that in directed hypergraphs is an unexplored problem. Our main contribution is a unified framework for both settings and our proposed solution is conceptually simple, yet effective. We believe the problem settings are important and interesting (as noted by the other reviewers too), and that this paper will inspire further research in this direction.\\n\\n\\n\\nOn adding an extra term to CMM as a baseline for directed hyperlink experiments:\\nFollowing the reviewer's suggestion, we have added CMM + MLP as a baseline. We have also compared NHP against node2vec + MLP and star expansion + MLP as suggested by reviewer #3. We report below the number of reactions recovered in the directed hypergraph experiments. 
\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\n dataset\\t\\t\\t\\tiAF692\\t iHN637\\t iAF1260b iJO1366\\n---------------------------------------------------------------------------------------------------------------------------------\\nnode2vec + MLP\\t\\t\\t 255 +/- 5\\t 237 +/- 5\\t 838 +/- 13\\t 902 +/- 11\\n---------------------------------------------------------------------------------------------------------------------------------\\nCMM + MLP \\t\\t\\t 253 +/- 9\\t 241 +/- 11\\t 757 +/- 26\\t 848 +/- 21\\n---------------------------------------------------------------------------------------------------------------------------------\\nGCN on star expansion + MLP\\t 242 +/- 5\\t 241 +/- 10\\t 786 +/- 13\\t 852 +/- 11\\n---------------------------------------------------------------------------------------------------------------------------------\\n\\nNHP-D (sequential)\\t\\t\\t 263 +/- 7\\t 221 +/- 10\\t 867 +/- 31\\t 954 +/- 29\\n\\nNHP-D (joint)\\t\\t\\t\\t 262 +/- 8\\t 236 +/- 8\\t 869 +/- 13\\t 944 +/- 20\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\nWe request the reviewer to see the updated paper for AUC numbers. We have also observed that NHP-U outperforms all its baselines in the undirected experiments and the results have been updated in our paper.\", \"on_candidate_papers_in_coauthorship_networks_such_as_cora\": \"The standard cora dataset has 2708 papers. We sampled an equal number of fake papers at random to get the 5416 candidate papers for cora. We request the reviewer to see the appendix for more details.\\n\\n\\n\\nOn comparison with Lugo-Martinez and Radivojac, 2017:\\nWe had difficulty reproducing the results, given that the code was not available and the authors did not respond to our request emails, and the method as described in the paper is rather vague about the details.\"}", "{\"title\": \"Our response to major comments of AnonReviewer1\", \"comment\": \"Thanks for the review\", \"on_the_novelty_of_our_work\": \"Link prediction in undirected hypergraphs is an underexplored problem and that in directed hypergraphs is an unexplored problem. Our main contribution is a unified framework for both the settings and our proposed solution is conceptually simple, yet effective. We believe the problem settings are important and interesting (as noted by the other reviewers too), and that this paper will inspire further research in this direction.\", \"on_the_discussion_of_results_for_directed_hyperlink_prediction\": \"Both NHP-D (joint) and NHP-D (sequential) perform similarly. To appreciate the results, we have added three baselines as suggested by reviewers 2 and 3. 
We request the reviewer to see below for a sample of the updated results and the paper for all updated results.\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\n dataset\\t\\t\\t\\t \\tiAF692\\t iHN637\\t iAF1260b iJO1366\\n---------------------------------------------------------------------------------------------------------------------------------\\nnode2vec + MLP\\t\\t\\t 255 +/- 5\\t 237 +/- 5\\t 838 +/- 13\\t 902 +/- 11\\n---------------------------------------------------------------------------------------------------------------------------------\\nCMM + MLP \\t\\t\\t 253 +/- 9\\t 241 +/- 11\\t 757 +/- 26\\t 848 +/- 21\\n---------------------------------------------------------------------------------------------------------------------------------\\nGCN on star expansion + MLP\\t 242 +/- 5\\t 241 +/- 10\\t 786 +/- 13\\t 852 +/- 11\\n---------------------------------------------------------------------------------------------------------------------------------\\n\\nNHP-D (sequential)\\t\\t\\t 263 +/- 7\\t 221 +/- 10\\t 867 +/- 31\\t 954 +/- 29\\n\\nNHP-D (joint)\\t\\t\\t\\t 262 +/- 8\\t 236 +/- 8\\t 869 +/- 13\\t 944 +/- 20\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\n\\nOn variance in the results:\\nWe observed variances of AUC values to be in the third decimal place (i.e., very close to zero). We have reported variances in the number of hyperlinks recovered in all experiments. These are much more interpretable/statistically significant.\\n\\nOn 10 trials:\\nWe report the mean values over 10 different splits of train and test.\\n\\nOn random features:\\nThe feature initialisations are random for the metabolic network experiments as we do not have any available features to exploit. We believe the neighbourhood feature aggregation of GCN causes useful node embeddings to be learnt during training.\\nWe also observe that NHP is competitive with a node2vec baseline (suggested by reviewer 3), which is a featureless approach.\\n\\nOn creation of fake papers:\\nIn these experiments, authors correspond to nodes in the (primal) graph, while papers correspond to hyperlinks, i.e., sets of authors. So in this context, fake papers are the same as fake author lists and hence cannot be attached to existing (true) papers. The set of candidate edges is the set of true papers union the set of fake papers.\"}", "{\"title\": \"Interesting problem, but incremental contribution\", \"review\": \"[Relevance] Is this paper relevant to the ICLR audience? yes\\n\\n[Significance] Are the results significant? somewhat\\n\\n[Novelty] Are the problems or approaches novel? rather incremental\\n\\n[Soundness] Is the paper technically sound? yes\\n\\n[Evaluation] Are claims well-supported by theoretical analysis or experimental results? marginal\\n\\n[Clarity] Is the paper well-organized and clearly written? okay\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\", \"seen_submission_posted_elsewhere\": \"No\", \"detailed_comments\": \"In this work, the authors propose an approach to the (hyper-) link prediction problem in both directed and undirected hypergraphs. 
The approach first applies an existing dual transformation to the hypergraph such that the link prediction problem (in the primal) becomes a node classification problem in the dual. They then use GCNs to classify the (dual) nodes. Experimentally, the proposed approach marginally outperforms existing approaches.\\n\\n=== Major comments\\n\\nI found the novelty of the proposed approach rather limited. The proposed approach essentially just concatenates three existing strategies (dual reformulation from Scheinerman and Ullman, GCNs from Kipf and Welling, and negative sampling, which is common in many communities, e.g., Han and Chen, but many others, as well). I believe the contribution for link prediction in directed hypergraphs is a more novel contribution; however, I had difficulty following that discussion.\\n\\nIt is difficult to interpret the experimental results. Tables 3 and 6 do not include a measure of variance. Thus, it is not clear if any of the results are statistically significant. It is also not clear whether the \u201c10 trials\u201d mentioned in the figure captions correspond to a 10-fold cross-validation scheme or something else. It is unclear to me what the random feature matrix for the metabolic network is supposed to be or do. It is also unclear to me why \u201cfake papers\u201d are needed for the citation networks; it is clear that \u201cfake author lists\u201d are needed for negative sampling, but it seems they could be attached to existing papers. Similarly, it is unclear how the set of candidate edges (\\\\mathcal{E}) was chosen.\\n\\nI appreciate that the authors made the code available. I did not run it, but I did have a look, and I believe it could be adapted by others without an unreasonable amount of work.\\n\\n=== Minor comments\\n\\nThis work is very similar to the arXiv submission 1809.09401. To the best of my knowledge, though, that work has not yet been published in a peer-reviewed venue, so I do not consider it a problem that it is not cited here.\\n\\nAccording to Tables 1 and 2, the iAF692 and iHN637 datasets are smaller than the other datasets except DBLP; those two are also less dense than DBLP. According to Table 3, NHP-U seems noticeably better than SHC and CMM on these, while it does not appear very significant in the other cases. Is there some relationship between NHP\u2019s performance and the size/density of the graph? or is there some other explanation for this behavior?\\n\\nRelated to the above point, Table 3 shows that the performance on the undirected versions for those two datasets is better than on the other two metabolic networks, while Table 6 shows the opposite for the directed versions. Is there some explanation for this? For example, are there qualitative differences in the size of the hypernodes?\\n\\nThe described strategy for negative sampling seems as though it selects \u201ceasy\u201d negative samples, in the sense that they are far away from observed positives; thus, they are also likely far away from any sort of decision boundary. How does the performance change if more \u201cdifficult\u201d (or just uniformly random) negative samples are chosen?\\n\\nI believe Recall@100 (or Precision@100, or @$\\\\Delta E$, etc.) is a more meaningful value to report in Tables 4 and 7, rather than the raw number of edges. 
That is, it would be more helpful to report something so that numbers across datasets are at least somewhat comparable.\\n\\n=== Typos, etc.\\n\\nIn Equation (4), the \\u201ck\\u201d index in d_{ijk} is in {1,2}, but in the text, it is in {0,1}.\\n\\n\\u201ctable 2\\u201d -> \\u201cTable 2\\u201d, and many other similar examples throughout the paper.\\n\\n\\u201chigher-order etc.\\u201d -> \\u201chigher-order, etc.\\u201d\\n\\u201cGCN based\\u201d -> \\u201cGCN-based\\u201d, and similar in several places in the paper\\n\\u201ca incomplete\\u201d -> \\u201can incomplete\\u201d\", \"rating\": \"6: Marginally above acceptance threshold\"}", "{\"title\": \"Interesting and important problem; technical contribution is limited given existing work.\", \"review\": \"This paper proposed Neural Hyperlink Predictor (NHP) to perform link prediction based on graph convolutional network (GCN). Following prior work, the hyperlink prediction is perform in the dual hypergraph, where each node represents a hyperlink in the primal hypergraph. The original problem is then equivalent to a simple node classification problem. To deal with directed hyperlink, a separate term is added to distinguish heads from tails.\\n\\nThe problem of link prediction in hypergraph is important and interesting, especially in the chemistry domain. However from the technical point of view, this work is somewhat incremental since prior work has done link prediction using GCN (Zhang and Chen, 2018). The idea of performing hyperlink prediction in the dual hypergraph is not new, either (Lugo-Martinez and Radivojac, 2017). As for the directed hypergraph setting, it seems to be a straightforward extension once one knows how to do in the undirected setting (adding an extra term to classify head/tail).\\n\\nIn terms of experiments, given the similarity between Lugo-Martinez and Radivojac, 2017 and NHP (both operates in the dual hypergraph), it would be better if the former could also be used as a baseline, as least in the undirected setting.\\n\\nIt is reasonable to have a subset of links as candidate reactions in the metoboli network datasets. For CORA and DBLP, it is not clear where the \\u2018actual papers\\u2019 and \\u2018candidate papers\\u2019 come from. For example in CORA there are 1072 authors; yet there are only 5416 candidate papers.\\n\\nIt seems the joint learning of NHP-D does not improve the accuracy in the directed setting as claimed in Sec. 5.2. Besides, there is no baseline in the directed setting. It is difficult to appreciate the performance in Sec. 6. One thing one can do is to use previous methods in the undirected setting, e.g., CMM, with the extra term L_d in Eq. (4).\", \"minor_comments\": \"\", \"typo\": \"\", \"p5\": \"What is GCN 2?\\nSec. 5: \\u2018p = 32 in 1\\u2019 and \\u2018shown in 2\\u2019\\n\\nMissing references on link prediction and/or deep learning:\\nDiscriminative relational topic models. PAMI 2014.\", \"relational_deep_learning\": \"A deep latent variable model for link prediction. AAAI 2017\\nNeural relational topic models for scientific article analysis. CIKM 2018.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novelty of the proposed method is very marginal\", \"review\": \"This paper proposed to use graph convolutional neural networks for link prediction. The authors proposed to use the dual graph to simultaneously learn node and edge embeddings. 
The labels of the edges (positive or negative) are used as the supervised signal for training the GCNs. Experiments on a few small data sets prove the effectiveness of the proposed approaches.\", \"strength\": [\"important problem\"], \"weakness\": [\"the novelty of the proposed method is very marginal\", \"the experiments are quite weak\"], \"details\": [\"the novelty of the proposed method seems to be very marginal, which simply applies the GCN for link prediction. The existing GCN-based method for recommendation shares similar ideas (e.g., Ying et al. 2018, PinSage), though the dual hypergraph is not used. But the essential idea is very similar.\", \"the data sets used in the experiments are too small\", \"the node embedding based methods should be compared for link prediction, e.g., DeepWalk, LINE, and node2vec.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
B1gabhRcYX
BA-Net: Dense Bundle Adjustment Networks
[ "Chengzhou Tang", "Ping Tan" ]
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method.
[ "Structure-from-Motion", "Bundle Adjustment", "Dense Depth Estimation" ]
https://openreview.net/pdf?id=B1gabhRcYX
https://openreview.net/forum?id=B1gabhRcYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1xA3ZcpyV", "BkgvvbtzkN", "r1xqEgFcCX", "H1gAMXd90X", "rkxhFe_qAm", "H1ljP2vqAQ", "HkeWLII90Q", "SJx-VMJcnm", "SylPHRPDnQ", "r1x8O_Sw3X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544556982385, 1543831903150, 1543307313742, 1543303958082, 1543303300063, 1543302242843, 1543296585260, 1541169705468, 1541008959303, 1540999277798 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1217/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1217/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1217/Authors" ], [ "ICLR.cc/2019/Conference/Paper1217/Authors" ], [ "ICLR.cc/2019/Conference/Paper1217/Authors" ], [ "ICLR.cc/2019/Conference/Paper1217/Authors" ], [ "ICLR.cc/2019/Conference/Paper1217/Authors" ], [ "ICLR.cc/2019/Conference/Paper1217/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1217/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1217/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The first reviewer summarizes the contribution well: This paper combines [a CNN that computes both a multi-scale feature pyramid and a depth prediction, which is expressed as a linear combination of \\\"depth bases\\\"]. This is used to [define a dense re-projection error over the images, akin to that of dense or semi-dense methods]. [Then, this error is optimized with respect to the camera parameters and depth linear combination coefficients using Levenberg-Marquardt (LM). By unrolling 5 iterations of LM and expressing the dampening parameter lambda as the output of a MLP, the optimization process is made differentiable, allowing back-propagation and thus learning of the networks' parameters.]\", \"strengths\": \"While combining deep learning methods with bundle adjustment is not new, reviewers generally agree that the particular way in which that is achieved in this paper is novel and interesting. The authors accounted for reviewer feedback during the review cycle and improved the manuscript leading to an increased rating.\", \"weaknesses\": \"Weaknesses were addressed during the rebuttal including better evaluation of their predicted lambda and comparison with CodeSLAM.\", \"contention\": \"This paper was not particularly contentious, there was a score upgrade due to the efforts of the authors during the rebuttal period.\", \"consensus\": \"This paper addresses an interesting area of research at the intersection of geometric computer vision and deep learning and should be of considerable interest to many within the ICLR community. The discussion of the paper highlighted some important nuances of terminology regarding the characterization of different methods. This paper was also rated the highest in my batch. 
As such, I recommend this paper for an oral presentation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Nice work combining Bundle Adjustment and Deep Learning Methods\"}", "{\"title\": \"The response has addressed enough of my concerns\", \"comment\": \"The response has addressed enough of my concerns and I have decided to increase my rating from 6 to 7.\"}", "{\"title\": \"We thank the reviewer for raising the score.\", \"comment\": \"We thank the reviewer for raising the score.\\n\\nWe submitted the response and the revision at the last minute because a lot of extra work has been done for the revision, and we wanted to ensure its correctness and completeness.\\n\\nBut we will have a better-planned schedule for the next ICLR to fit the purpose of OpenReview.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the comments and appreciate that the reviewer likes our idea of including optimization in the network. But our contribution goes beyond adopting Levenberg-Marquardt instead of Gauss-Newton. We would like to clarify several things to address the reviewer's concerns:\\n\\nQ1. The advantages of Levenberg-Marquardt over Gauss-Newton are unclear (the main reason for rejection):\\n\\nFirstly, we want to clarify that our contribution is beyond improving the Gauss-Newton optimization to Levenberg-Marquardt. More importantly, our contribution is the combination of conventional multi-view geometry (i.e. joint optimization of depth and camera poses) and end-to-end deep learning (i.e. depth basis generator learning and feature learning). This contribution is achieved by our differentiable LM optimization that allows end-to-end training. \\n\\nSecondly, we agree with the reviewer that comparing with the Gauss-Newton algorithm is interesting and have added such a comparison in Appendix B of the revised version, following the reviewer's suggestions: \\n\\n 1. We retrained the whole pipeline with Gauss-Newton, to make sure the features are learned specifically for Gauss-Newton.\\n\\n 2. We compared with various constant lambda values to see how the performance varies along with lambda. Note that we also fine-tune the network to make sure the features fit different lambda values. \\n\\nIn Table 4 of the revised version (Appendix B), our method outperforms the Gauss-Newton algorithm in the last column. This is because the objective function to be optimized is non-convex, and the vanilla Gauss-Newton method might get stuck at a saddle point or local minimum. This is why the Levenberg-Marquardt algorithm is the standard choice for conventional bundle adjustment.\\n\\nIn Figure 6 of the revised version (Appendix B), our method also consistently performs better than different constant lambda values. This is because the value of lambda should be adapted to different data and optimization iterations. There is no \u2018optimal\u2019 constant lambda for all data and iterations.\\n\\n\\nQ2. Comparison with CodeSLAM:\\nWe have included that in Figure 7 of the revised version (Appendix E). Since there is no public code for CodeSLAM, we cite its results directly from the CodeSLAM paper.\\n\\nQ3. The state vector Chi is not defined for the proposed method.\\nThe Chi is defined in Section 3 as the vector containing all camera poses and point depths. Since our method also solves for these unknowns as in classic methods, we did not redefine the Chi. 
But in the revised version we have recapped the definition of Chi when introducing our method at the beginning of Section 4.\\n\\nQ4. Should the paper be called Bundle Adjustment?:\\nThe term \u2018Bundle Adjustment\u2019 is originally used to refer to the joint optimization of 3D scene points and camera poses by minimizing the reprojection error. The keyword Bundle comes from the fact that a bundle of camera view rays passes through each of the 3D scene points. Multiple recent works, e.g. [Engel et al., 2017, Delaunoy and Pollefeys, 2014], have generalized it to \u201cphotometric BA\u201d, where scene points and camera poses are optimized together by minimizing the photometric error. Our method is along this line. But we further improve the photometric error to the feature-metric error. Each 3D scene point is still constrained by a bundle of camera view rays, though the error function has been changed. So we believe it is justified to call this method feature-metric BA. \\n\\nBut we agree with the reviewer that the word \u2018reprojection\u2019 is misleading when we introduce our feature-metric BA and the photometric BA. So we use the word \u2018align\u2019 as the reviewer suggested and use \u2018reprojection\u2019 only for the geometric BA.\\n\\nQ5. Is B the same for all scenes?:\\nIn the revised version, we added Figure 8 to visualize the term B in Equation 7 (Page 6) for different scenes. We can clearly see that it is scene dependent. \\n\\nQ6. Typos:\\nWe have fixed all the typos as suggested in the revised version.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the comments. We have revised the paper according to the suggestions and would like to clarify several things:\\n\\nQ1. Evaluation Time: \\nWe have added the detailed running time for each component in Table 3 in Appendix A of the revised version.\\n\\nQ2. Implementation Details: \\nWe will share all the source code to make sure it is reproducible. Meanwhile, we have included more details as suggested in Appendix A, including a visualization of all layers of the different parts of the network. If 1-2 extra pages are allowed, we can include those details in the paper.\\n\\nQ3. Figure 1 is too abstract:\\nWe have updated the figure to make it more intuitive and include more details.\\n\\nQ4. The top row of Figure 2b is confusing:\\nWe apologize for the confusion caused. Shown at the top row of Figure 2b are not three consecutive frames. They are the R, G, B channels of a single frame. To avoid confusion, we use different colors for them and explained that in the figure.\\n\\nQ5. How is the first camera pose initialized?:\\nAll the camera poses, including the first camera, are initialized with identity rotation and zero translation, which are aligned with the coordinate system of the first camera. We clarified this at the end of Section 4.3 in the revised version.\\n\\nQ6. Evaluation metrics are not clear:\\nTo facilitate comparisons with other methods, we use the evaluation metrics of previous works in Tables 1 and 2, so that we can cite the results of previous methods. As we described in the paper, the depth metrics are the same as Eigen and Fergus (2015). The translation metrics (ATE) are the same as [Wang et al. 2018, Zhou et al. 2017]. In the revised version, we briefly introduce the definition of these metrics at the beginning of each paragraph in Section 5.2.\\n\\nQ7. 
Attention should be given to the notation in formulas (3) and (4):\\nWe changed the parameters from \u2018d\u2019 to \u2018d \\\\cdot p\u2019, which is a 3D point. We also removed the redundant subindex \u20181\u2019, because all points \u2018q\u2019 are on the first frame.\\n \\nQ8. Terminology consistency throughout the paper:\\nThanks for the suggestion. We consistently use the terms \u201cfeature-metric BA\u201d and \u201cbasis depth maps\u201d throughout the paper now.\\n\\nQ9. Typos, Grammar, Format, and Bibliography:\\nThanks for pointing them out. We have revised the paper to fix these problems.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the comments and appreciation, and would like to answer the reviewer's questions as follows:\\n\\nQ1. The use of the word \u201cguarantees\u201d is imprecise:\\nThanks for pointing this out. We have adjusted the wording. A theoretical analysis would be interesting future work.\\n\\nQ2. Whole sequence reconstruction results:\\nOur current implementation only allows up to 5 images on a single 2015 TITAN X GPU with 12GB of memory. This is because we implemented the whole pipeline using tensorflow in python, which is memory inefficient, especially during training. Each image takes about 2.3GB of memory on average, and most of the memory is consumed by the CNN features and matrix operations. But it is straightforward to concatenate multiple 5-frame segments to reconstruct a complete sequence, which is demonstrated in the comparison with CodeSLAM in Figure 7 of the revised version. It is also straightforward to implement our BA-Layer in CUDA directly to reduce the memory consumption of the matrix operations and increase the number of frames.\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all the reviewers for their insightful comments. We have revised the paper as suggested by the reviewers, and summarize the major changes as follows:\\n\\n* Network architecture details and evaluation time required by Reviewer2 are added in Appendix A.\\n\\n* Figure 1 is updated to include more details as required by Reviewer2.\\n\\n* Ablation study comparisons with Gauss-Newton and different constant lambda values required by Reviewer3 are updated in Appendix B.\\n\\n* The comparison with CodeSLAM on EuRoC required by Reviewer3 is updated in Appendix E.\\n\\nWe also would like to ask for the reviewers' suggestions on whether it is allowed to have one extra page to include more details and comparisons, and make the paper more informative to ensure reproducibility. 
We targeted 8 pages in the initial submission, but according to the reviewers' comments, it will be helpful to have more details in the main text. \\n\\nThe other concerns raised by the reviewers have also been addressed individually.\"}", "{\"title\": \"Very well written paper on an important subject, with clear technical contribution and convincing results\", \"review\": \"This paper presents a novel approach to bundle adjustment, where traditional geometric optimization is paired with deep learning.\\nSpecifically, a CNN computes both a multi-scale feature pyramid and a depth prediction, expressed as a linear combination of \\\"depth bases\\\".\\nThese values are used to define a dense re-projection error over the images, akin to that of dense or semi-dense methods.\\nThen, this error is optimized with respect to the camera parameters and depth linear combination coefficients using Levenberg-Marquardt (LM).\\nBy unrolling 5 iterations of LM and expressing the dampening parameter lambda as the output of an MLP, the optimization process is made differentiable, allowing back-propagation and thus learning of the networks' parameters.\\n\\nThe paper is clear, well organized, well written and easy to follow.\\nEven if the idea of joining BA / SfM and deep learning is not new, the authors propose an interesting novel formulation.\\nIn particular, being able to train the CNN with a supervision signal coming directly from the same geometric optimization process that will be used at test time allows it to produce features that will make the optimization smoother and the convergence easier.\\nThe experiments are quite convincing and seem to clearly support the efficacy of the proposed method.\\n\\nI don't really have any major criticism, but I would like to hear the authors' opinions on the following two points:\\n\\n1) In page 5, the authors write \\\"learns to predict a better damping factor lambda, which gaurantees that the optimziation will converged to a better solution within limited iterations\\\".\\nI don't really understand how learning lambda would _guarantee_ that the optimization will converge to a better solution.\\nThe word \\\"guarantee\\\" usually implies that the effect can be somehow mathematically proved, which is not done in the paper.\\n\\n2) As far as I can understand, once the networks are learned, possibly on pairs of images due to GPU memory limitations, the proposed approach can be easily applied to sets of images of any size, as the features and depth predictions can be pre-computed and stored in main system memory.\\nGiven this, I wonder why all experiments are conducted on sets of two to five images, even for Kitti where standard evaluation protocols would demand predicting entire sequences.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting work but lacking some details concerning the implementation and experimentations\", \"review\": \"I believe that the authors have a solid contribution that can be interesting for the ICLR community.\\nTherefore, I recommend accepting the paper, but after revision, because the presentation and explanation of the ideas contain multiple typos and lack some details (see below).\", \"summary\": \"The authors propose a new method called BA-Net to solve the SfM problem by explicitly incorporating geometry priors into a machine learning task. 
The authors focus on the Bundle Adjustment process. \\n\\nGiven several successive frames of a video sequence (2 frames, but this can be extended up to 5), BA-Net jointly estimates the depth of the first frame and the relative camera motion (between the first frame and the next one).\\nThe method is based on a convolutional neural network which extracts the features of the different pyramid levels of the two images and in parallel computes the depth map of the first frame. The proposed network is based on the DRN-54 (Yu et al., 2017) as a feature extractor. \\n\\nThis is complemented by the linear combination of depth bases obtained from the first image.\\nThe features and the initial depth are then passed to the optimization layer called the BA-layer, where the feature re-projection error is minimized by the modified LM algorithm. \\n\\nThe authors adapt the standard multi-view geometry constraints via a new concept of feature re-projection error in the BA framework (BA-layer), which they made differentiable. \\nDifferentiable optimization of camera motion and image depth via the LM algorithm is now possible and can be used in various other DL architectures (ex. MVS-Net can probably benefit from the BA-layer).\\n\\nThe authors also propose a novel depth parametrization in the form of a linear combination of depth bases, which reduces the number of parameters for the learning task, \\nenables integration into the same backbone net as used for feature pyramids, and makes it possible to jointly train the depth generator and the BA-layer. \\n\\nOriginally the proposed approach depicts the network operating in the two-view setting. The extensibility to more views is also possible and, as shown by the authors, proved to improve performance. It is, however, limited by the GPU capacity. \\n\\nOverall, the authors came up with an interesting approach to the standard BA problem. They have managed to inject the multi-view geometry priors and BA into the DL architecture.\", \"major_comments_regarding_the_paper\": \"It would be interesting to know the evaluation times for the BA-Net and, more importantly, to have some implementation details to ensure reproducibility.\", \"minor_comments_regarding_the_paper\": [\"The spacing between sections is not consistent.\", \"Figure 1 is way too abstract given the complicated set-up of the proposed architecture. It would be nice to see more details on the subnet for the depth estimator and the output of the net.\", \"Overall it would be helpful for reproducibility if the authors can visualize all the layers of all the different parts of the network as is commonly done in DL papers.\", \"Talking about the proposed formulation of BA, use either of the following and be consistent across the paper:\", \"Featuremetric BA / Feature-metric BA / Featuremetric BA / \u2018Feature-metric BA\u2019\", \"Talking about depth parametrization, use \u2018basis\u2019 or \u2018bases\u2019, not both, and clearly define the meaning of this important notion.\", \"Attention should be given to the notation in formulas (3) and (4). The projection function there no longer accepts a 3D point parametrized by 3 variables. Instead only depth is provided.\", \"In addition, the subindex \u20181\u2019 of the point \u2018q\u2019 is not explained.\", \"More attention should be given to the evaluation section. 
Specifically to the tables (1 and 2) with quantitative results showing the comparison to other methods.\", \"It is not clear how the depth error is measured and it would be nicer to have the other errors explained exactly as they are referred to in the tables (e.g. ATE?).\", \"How is the first camera pose initialized?\", \"In Figure 2.b I\u2019m surprised by the difference obtained in the feature maps for images which seem very similar (only the lighting seems to be different). Is it three consecutive frames?\", \"Attention should be given to the grammar and formatting, in particular the bibliography.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"dense SfM with Deep Learning\", \"review\": \"edit: the authors added several experiments (better evaluation of the predicted lambda, comparison with CodeSLAM), which address my concerns. I think the paper is much more convincing now. I am happy to increase my rating to clear accept.\\n\\nI also agree with the introduction of the Chi vector, and with the use of the term \\\"photometric BA\\\", since it was used before, even if it is unfortunate in my opinion. I thank the authors for replacing reprojection with alignment, which is much clearer.\\n\\n---------------\", \"this_paper_presents_a_method_for_dense_structure_from_motion_using_deep_learning\": \"The input is a set of images; the output is the camera poses and the depth maps for all the images.\\nThe approach is inspired by Levenberg-Marquardt optimization (LM): A pipeline extracting image features computes the Jacobian of an error function. This Jacobian is used to update an estimate of the camera poses. As in LM optimization, this update is done based on a factor lambda, weighting a gradient descent step and a Gauss-Newton step. Here lambda is also predicted using a network based on the feature difference.\\n\\nIf I understand correctly, what is learned is how to compute image features that provide good updates, how to predict the depth maps from the features, and how to predict lambda.\\n\\nThe method is compared against DeMoN and other baselines with good results.\\n\\nI like the fact that the method is based on LM optimization, which is the standard method in 'geometric bundle adjustment', while related works consider Gauss-Newton-like optimization steps. The key was to include a network to predict lambda as well.\\n\\nHowever, I have several concerns:\\n\\n* the ablation study designed to compare with a Gauss-Newton-like approach does not seem correct. The image features learned with the proposed method are re-used in an approach using a fixed lambda. If I understand correctly, there are 2 things wrong with that:\\n- for GN optimization, lambda should be set to 0 - not a constant value. 
Several constant values should also have been tried.\\n- the image features should be re-trained for the GN framework: Since the features are learned for the LM iteration, they are adapted to the use of the predicted lambda, but they are not necessarily suitable for GN optimization.\\nThus, the advantage of using an LM optimization scheme is not very convincing.\\n\\nSince the LM-like approach is the main contribution, and the reported experiments do not show an advantage over GN-like approaches (already taken by previous work), this is my main reason for proposing rejection.\\n\\n* CodeSLAM (best paper at CVPR'18) is referenced but there is no comparison with it, while a comparison on the EuRoC dataset should be possible.\", \"less_critical_concerns_that_still_should_be_taken_into_account_if_the_paper_is_accepted\": [\"the state vector Chi is not defined for the proposed method, only for the standard bundle adjustment approach. If I understand correctly, it is made of the camera poses.\", \"the name 'Bundle Adjustment' is actually not well suited to the proposed method. 'Bundle Adjustment' in 'geometric computer vision' comes from the optimization of several rays to intersect at the same 3D point, which is done by minimizing the reprojection errors. Here the objective function is based on image feature differences. I thus find the name misleading. The end of Section 3 also encourages the reader to think that the proposed method is based on the reprojection error. The proposed method is more about dense alignment for multiple images.\"], \"more_minor_points\": \"\", \"1st_paragraph\": \"Marquet -> Marquardt\", \"title_of_section_3\": \"revisitED\", \"1st_paragraph_of_section_3\": \"audience -> reader\", \"caption_of_fig_1\": \"extractS\\nEq (2) cannot have Delta Chi on the two sides. Typically, the left side should be \\\\\\\\hat{\\\\\\\\Delta \\\\\\\\Chi}\\nbefore Eq (3): the 'photometric ..' -> a 'photometric ..'\\n1st paragraph of Section 4.3: difficulties -> reason\\ntypo in absolute in caption of Fig 4\\nEq (6): Is B the same for all scenes? It would be interesting to visualize it.\\nSection 4.5: applies -> apply\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
S1M6Z2Cctm
Harmonic Unpaired Image-to-image Translation
[ "Rui Zhang", "Tomas Pfister", "Jia Li" ]
The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations. In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation. We develop HarmonicGAN to learn bi-directional translations between the source and the target domains. With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained. Distance metrics defined on two types of features including histogram and CNN are exploited. Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability. We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling. We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.
[ "unpaired image-to-image translation", "cyclegan", "smoothness constraint" ]
https://openreview.net/pdf?id=S1M6Z2Cctm
https://openreview.net/forum?id=S1M6Z2Cctm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJgOT2SYkV", "S1xNXGahT7", "r1xN-MahTm", "HJxUobThTQ", "H1xHSb636X", "Hkl52g6nam", "rJgUEgahpm", "SJeW-lphpQ", "Sye1L1Bn37", "H1lGEHeonm", "SylYVmZKnm", "B1xpkXvb2Q", "ryeFogT13Q", "H1l5el1hjX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "comment" ], "note_created": [ 1544277184039, 1542406684409, 1542406652080, 1542406557693, 1542406461010, 1542406322146, 1542406190427, 1542406136754, 1541324615214, 1541240106064, 1541112624959, 1540612837094, 1540505760526, 1540251633685 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1216/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "ICLR.cc/2019/Conference/Paper1216/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1216/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1216/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1216/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": [\"The proposed method introduces a method for unsupervised image-to-image mapping, using a new term into the objective function that enforces consistency in similarity between image patches across domains. Reviewers left constructive and detailed comments, which, the authors have made substantial efforts to address.\", \"Reviewers have ranked paper as borderline, and in Area Chair's opinion, most major issued have been addressed:\", \"R3&R2: Novelty compared to DistanceGAN/CRF limited: authors have clarified contributions in reference to DistanceGAN/CRF and demonstrated improved performance relative to several datasets.\", \"R3&R1: Evaluation on additional datasets required: authors added evaluation on 4 more tasks\", \"R3&R1: Details missing: authors added details.\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"New objective term enforcing consistent similarity between image patches across domains. Improvements made based on reviews.\"}", "{\"title\": \"Thanks for your constructive comments -- please see our answers below. (part 2/2)\", \"comment\": \"(continued from above)\", \"q4\": \"The proposed approach is highly constrained to the settings where structure in input-output does not change. I am not sure how would this approach work if the settings from Gokaslan et al. ECCV'18 were considered (like cats to dogs where the structure changes while going from input to output)?\", \"a4\": \"It is a interesting idea to change the shapes and structures of objects during translation. The proposed method is implemented based on CycleGAN, which doesn\\u2019t have the capacity to change structure. In this work, we focus on improving the translation by introducing the smoothness constraint to provide similarity-consistency between image patches during the translation. The application of changing structure could be considered in future work.\", \"q5\": \"Does the proposed approach also provide temporal smoothness in the output? E.g. Fig. 7 shows an example of man on horse being zebra filed. 
My guess is that the input is a small video sequence, and I am wondering if it provides temporal smoothness in the output? The failure on the human body makes me wonder whether the smoothness constraints are helping learn the edge discontinuities. What if the edges of the input (using an edge detection algorithm such as HED from Xie and Tu, ICCV'15) were concatenated to the input and used in the formulation? This would be similar in spirit to the formulation of deep cascaded bi-networks from Zhu et al. ECCV'16.\", \"a5\": \"We focus on image-to-image translation, so we have not considered temporal smoothness in the output, but we agree that would be an interesting topic to explore in future work.\\nHarmonicGAN aims at preserving similarity from the overall view of the image manifold, rather than producing \\\"smoother\\\" images/labels in the translated domain. Thus, the smoothness constraint is not suited to learning the edge discontinuities. For more analysis, please refer to the answer and experimental results comparing to CRF in our response to Question #1 of Reviewer #2.\"}", "{\"title\": \"Thanks for your constructive comments -- please see our answers below. (part 1/2)\", \"comment\": \"Q1: The paper is lacking in technical details: a. what is the patch-size used for the RGB-histogram? b. what features or conv-layers are used to get the features from the VGG (19?) net?\", \"a1\": \"For the RGB histogram, we set the patch size to 8 \\\\times 8. For the CNN features, we select the layer 4_3 after ReLU from the VGG-16 network. Considering the limited space of the ICLR submission, we put the demonstration of implementation details in the appendix; given that multiple reviewers pointed this out, we've moved the implementation details to the main paper and expanded the paper to 9 pages.\", \"q2\": \"Other than medical imaging where there isn't a variation in colors of the two domains, it is not clear why the RGB-histogram would work?\", \"a2\": \"The RGB-histogram for non-medical image cases is still useful as it captures the \\\"textureness\\\" of an image patch, although it might not be a very rich representation.\\nBased on our experiments, our framework learns translations of changing colors and textures. E.g. for the task of Horse2Zebra (or Zebra2Horse), regions of horse are brown and are expected to be translated to zebra-like texture with black and white stripes. At the same time, the background often shows a different appearance from the horse or zebra. Therefore, two patches which are both from the horse or both from the background will have a small distance in the RGB histogram, while two patches from the horse and the background respectively will have a larger distance in the RGB histogram. This makes the RGB histogram useful for building a smoothness constraint in the proposed method to improve the translation results. In the task of Label2City, labels are shown with different colormaps, so here again it is reasonable to employ RGB histograms to represent the label patches. However, for Photos2City, there are some categories which have variable colors and patterns and are not suitable to be represented by an RGB histogram, such as cars and humans. Therefore, using the RGB histogram may be damaging for the diversity of these categories, and this is why the RGB histogram shows slightly lower performance than standard CycleGAN in Table 2.\", \"q3\": \"the current formulation can be thought of as a variant of perceptual loss from Johnson et al. 
ECCV'16 (applied for the patches, or including pairs of patches). In my opinion, implementing via the perceptual loss formulation would have made the formulation cleaner and simpler? The authors might want to clarify how it is different from adding a perceptual loss over the pair of patches along with the adversarial loss. One would hope that a perceptual loss would help improve the performance. Also see Chen and Koltun, ICCV'17.\", \"a3\": \"The proposed smoothness term differs greatly from the perceptual loss. A key and one-sentence summary would be: the perceptual loss preserves the ABSOLUTE high-level feature values for A pattern before and after the translation (therefore effective in style transfer to preserve the content part) whereas HarmonicGAN preserves the DIFFERENCE/DISTANCE of a PAIR of patterns before and after the translation.\\n\\nPerceptual loss is proposed for the style transfer task. It forces the result to maintain the content of the content target and preserve the style of the style target. Perceptual loss includes two parts, for content and style respectively, formulated as:\", \"content_perceptual_loss\": \"L_{content}(x, y) = ||\\\\phi_j (y) - \\\\phi_j (x)||^2_2 / (C_j H_j W_j),\", \"style_perceptual_loss\": \"L_{style}(x, y) = || G_j(y) - G_j(x) ||^2_F,\\nwhere \\\\phi_j represents the activations of the jth layer in a pre-trained network (e.g. VGG-Net), C_j, H_j, W_j are the channel, height, and width of the jth layer, and G_j represents the Gram matrix computed on the jth layer. Therefore, the perceptual loss enforces the output y to reconstruct the features or the Gram matrix of the input x. \\n\\nIn contrast, the proposed smoothness term in HarmonicGAN aims to provide similarity-consistency between image patches during the translation, formulated in Eq. 6, 7, 8. The smoothness term is designed to build a graph Laplacian on all pairs of image patches, and the smoothness constraint preserves the overall integrity of the translation from the manifold learning perspective, rather than reconstructing the input sample directly. In addition, although the smoothness constraint in HarmonicGAN is measured on the features of each patch, including an RGB histogram or CNN features, it is not suitable to treat the smoothness constraint as a variant of perceptual loss: the CNN feature is only one kind of representation of image patches, not a major design part of the smoothness constraint. Other methods of representing image patches could also be employed in the smoothness constraint, such as an RGB histogram.\\n\\n(continued below)\"}", "{\"title\": \"Thank you for your constructive comments. Please see our answers below. (part 3/3)\", \"comment\": \"(continued from above)\\n\\nA2: In Eq. 6, 7, 8, the smoothness term defines a graph Laplacian whose minimum is achieved by a harmonic function. We define the set consisting of individual image patches as the nodes of the graph, and define the affinity measure (similarity) computed on image patches as the edges of the graph. Then the smoothness term acts as a graph Laplacian on all pairs of image patches. Our definition of the harmonic function is consistent with what was defined in (Zhu et al. ICML 2003), where the smoothness term defines a graph Laplacian with the minimal value achieved at \\\\Delta f = 0 as a harmonic function. In our paper, the smoothness term (Eq. 6, 7, 8) defines a Laplacian \\\\Delta = D - W, where W is our weight matrix in Eq. 6 and D is a diagonal matrix with D_{i} = \\\\sum_j w_{ij}. 
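To make this concrete, here is a minimal NumPy sketch of the resulting smoothness loss (illustrative only, assuming a Gaussian affinity on patch features; the actual weights follow Eq. 6):

```python
import numpy as np

def smoothness_loss(feat_src, feat_trans, sigma=1.0):
    # feat_src:   (N, D) features of N patches in the source domain (defines the edge weights w_ij)
    # feat_trans: (N, D) features of the corresponding translated patches
    dist_src = ((feat_src[:, None, :] - feat_src[None, :, :]) ** 2).sum(-1)
    W = np.exp(-dist_src / sigma)        # affinity: similar source patches receive large weights
    dist_trans = ((feat_trans[:, None, :] - feat_trans[None, :, :]) ** 2).sum(-1)
    # Laplacian quadratic form: sum_ij w_ij ||f_i - f_j||^2 = 2 tr(F^T (D - W) F)
    return (W * dist_trans).sum()
```

Minimizing this keeps patches that are similar before the translation similar after it, which is exactly the harmonic property discussed above.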
In the implementation, the losses and gradients of the smoothness term are computed in parallel, which is efficient on GPUs. We also randomly sample the image pairs to further reduce the computational complexity.\", \"q3\": \"Missing citations & term vs constraint\", \"a3\": \"We have added citations to CRFs and other papers. About the term \\\"constraint\\\", you are right that we don't have an explicit equality or inequality to satisfy here. However, the recent constrained optimization literature makes less of a distinction between the two. We have replaced \\\"constraint\\\" in most locations by \\\"term\\\", but in a few places calling it \\\"constraint\\\" is easier to understand.\", \"q4\": \"When using features from pre-training (VGG) in the CRF loss, the comparison with unsupervised CycleGAN is not fair.\", \"a4\": \"Firstly, the VGG model used to obtain semantic features of image patches is pre-trained on a large-scale classification dataset, e.g. the ImageNet dataset. The VGG model has not seen the image-to-image translation data during its training, and the VGG model is fixed while extracting features during the training of the image-to-image translation. Therefore, the VGG model will not bring extra supervised information about the image-to-image translation datasets. Secondly, we only use the VGG model as a feature extractor during the training process. In the inference stage, the VGG model is removed along with all the constraints and the discriminator. That means the model structures of CycleGAN and the proposed HarmonicGAN are exactly the same, since they use the same structure for the generator. We also provide alternative results using RGB histogram features. In conclusion, we think it is fair to employ VGG as a feature extractor in the training process of the proposed method.\", \"q5\": \"In Table 2 (Label translation on Cityscapes), CycleGAN outperforms the proposed method in all metrics when only unsupervised histogram features are used, which makes me doubt the practical value of the proposed regularization in the context of image-translation tasks. Having said that, the histogram-based regularization is helping in the medical-imaging application (Table 1). By the way, the use of histograms (of patches or super-pixels) as unsupervised features in pairwise regularization is not new either. Also, it might be better to use super-pixels instead of patches.\", \"a5\": \"The main contribution of the proposed HarmonicGAN comes from the smoothness constraint, which enforces consistent mappings during the translation. When computing the distance for the graph Laplacian, we adopt two types of feature measures, the RGB histogram and CNN features. These two feature measures can be selected according to the specifics of the domain. For example, for medical imaging, the major change between images of the two medical domains is color. Thus, it is reasonable to use histogram features to represent the image patches, and histogram features improve the translation performance. However, for the task of label to city, regions of the same color should be translated to objects of the same category. Since objects of the same category may have different colors and appearances (e.g. cars of different colors and pedestrians wearing different clothes), the histogram feature is not suitable to represent the category information.
This is why the results of the histogram feature for the label-to-city task are unsatisfactory, and the CNN features are more suitable to represent the objects for this task. Results in Table 2 provide evidence for this explanation: the proposed method using the histogram performs slightly worse than CycleGAN, while the method using CNN features outperforms CycleGAN. In conclusion, selecting suitable feature measures for the smoothness constraint according to the image domains is important, and different domains benefit from different features.\"}", "{\"title\": \"Thank you for your constructive comments. Please see our answers below. (part 2/3)\", \"comment\": \"(continued from above)\\n\\n4. Effectiveness\\nThe effect of the binary term in CRF is to encourage the joint probability to be faithful to the training labels:\\np(y_i, y_j|X_i, X_j; w)\\nThis term itself is not necessarily about smoothness. It only happens to be the case that most of the time the ground-truth labels are the same for neighboring pixels. Importantly, the overall effect of the binary term in CRF has been widely observed to be secondary for image labeling tasks, meaning it can help smooth the output boundaries, but the learning procedure is mostly dictated by the unary term. In fact, it is very difficult for a CRF model to fundamentally improve wrong predictions over large areas. As shown in Fig. 9, HarmonicGAN instead is able to almost completely correct the mistakes made by the unary term (the CycleGAN loss) for the BRATS experiment.\\n\\n5. Role in the algorithm\\nAs stated by the reviewer, \\\"CRFs have made a significant impact when used as post-processing\\\", but the smoothness term in HarmonicGAN is not about post-processing, at which stage it may anyway be too late to correct large mistakes. The smoothness term in HarmonicGAN works closely with the CycleGAN loss to create meaningful translations while maintaining the overall integrity of the image contents. The improvement of HarmonicGAN over CycleGAN goes way beyond the 5-20% improvement of adopting CRF in the standard image labeling tasks. HarmonicGAN provides a significant boost over CycleGAN in all cases and turns a failure case in BRATS into a success.\\nAs a matter of fact, the smoothness term in HarmonicGAN is not about obtaining \\\"smoother\\\" images/labels in the translated domain, as seen in the experiments; instead, HarmonicGAN is about preserving the overall integrity of the translation itself for the image manifold. This is the main reason for the large improvement of HarmonicGAN over CycleGAN.\\n\\nTo further demonstrate the difference between HarmonicGAN and CRF, we perform an experiment applying the pairwise regularization of CRFs to the CycleGAN framework. For each pixel of the generated image, we compute the unary term and the binary term with its 8 neighbors, and then minimize the objective function of the CRF. The results are:\\n\\n | Flair -> T1 | T1 -> Flair\\n | MAE\\downarrow MSE\\downarrow | MAE\\downarrow MSE\\downarrow\\nCycleGAN | 10.47 674.40 | 11.81 1026.19\\nCycleGAN+CRF | 11.24 839.47 | 12.25 1138.42\\nHarmonicGAN-Histogram | 6.38 216.83 | 5.04 163.29\\nHarmonicGAN-VGG | 6.86 237.94 | 4.69 127.84\\n\\nAs shown in the above quantitative results, the pairwise regularization of CRF is unable to handle the problem of CycleGAN illustrated in Fig. 1. What's worse, using the pairwise regularization may over-smooth the boundary of generated images, which results in extra artifacts.
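For clarity, a rough NumPy sketch of such an 8-neighbor pairwise penalty follows; since the exact binary energy we minimized is not spelled out above, the truncated-Gaussian weighting and the border handling here are illustrative assumptions:

import numpy as np

def crf_pairwise_penalty(img, sigma=0.1):
    # img: (H, W, C) generated image in [0, 1]; couples each pixel with its
    # 8 neighbors only (borders wrap via np.roll; kept simple for brevity).
    total = 0.0
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        nb = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        diff = np.abs(img - nb).mean(axis=-1)  # per-pixel color difference
        total += (np.exp(-diff / sigma ** 2) * diff).mean()
    return total / 8.0

Because only immediate neighbors are coupled, such a term smooths local boundaries but cannot repair mistakes over large regions, consistent with the numbers in the table above.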
In contrast, HarmonicGAN aims at preserving similarity from the overall view of the image manifold, and thus exploits similarity-consistency of the generated images rather than over-smoothing the boundary. We have added these results, along with a comparison and discussion, to Section 6.2 in the paper to clarify this.\", \"q2\": \"I did not get how the loss in (9) gives a harmonic function. Could you please clarify and give more details? In my understanding, the harmonic solution in [Zhu and Ghahramani, ICML 2003] comes directly as a solution of the graph Laplacian (and it assumes some labeled points, i.e., a semi-supervised setting). Even if the solution is correct (which I do not see how), I do not think it is an efficient way to handle pairwise-regularization problems in image processing, particularly when the matrix W = [w_{ij}] is dense (which might be the case here, unless you are truncating the Gaussian kernel with some heuristics). In this case, back-propagating the proposed loss would be of quadratic complexity w.r.t. the number of image patches.\\n\\n(continued below)\"}", "{\"title\": \"Thank you for your constructive comments. Please see our answers below. (part 1/3)\", \"comment\": \"Q1: This paper adds a spatial regularization loss to the well-known CycleGAN loss for unpaired image-to-image translation (Zhu et al., ICCV17). Essentially, the regularization loss (Eq. 6) is similar to imposing a CRF (Conditional Random Field) term on the network outputs, encouraging spatial consistency between patches within each generated image. Imposing pairwise regularization on the outputs of modern deep networks has been investigated in a very large number of works recently, particularly in the context of weakly-supervised and supervised CNN segmentation, e.g., Tang et al., ECCV 18, Lin et al., CVPR 2016, Chen et al., ICLR 2015 and Zheng et al., ICCV 2015. Very similar in spirit to this ICLR submission, these works impose within-image pairwise regularization (e.g., CRF) on the latent outputs of deep networks, with the main difference that these works use CNN semantic segmentation classifiers whereas here we have a CycleGAN for image generation. The manifold regularization terminology is misleading. The regularization is not over the feature space of image samples. It is within the spatial domain of each generated image (patch or pixel level); so, in my opinion, CRF (or spatial) regularization (instead of manifold regularization) is a much more appropriate terminology.\", \"a1\": \"There are some fundamental differences between the CRF literature and our work. They differ in output space, mathematical formulation, application domain, effectiveness, and the role in the overall algorithm. The similarity between CRF and HarmonicGAN lies in the adoption of a regularization term: a binary term in the CRF case and a Laplacian term in HarmonicGAN. The differences are detailed below:\\n\\n1. Label space vs. feature space\\nThe key difference is the explicit graph Laplacian adopted in HarmonicGAN on a vectorized representation over all pairs vs. a binary term on the scalar representation for neighboring labels.\\n\\nHarmonicGAN is indeed formulated in the feature space, not just limited to patches within a single image. The CycleGAN implementation by Zhu et al. happens to include only one image in a batch for computational reasons. We follow the standard pipeline of CycleGAN in HarmonicGAN and may have created some confusion here.
The description has been clarified in the revised text and we have added citations to the mentioned papers.\\n\\n2. Mathematical formulation\\n\\nWhen learning a CRF model, the objective function often combines a unary term and a binary term to minimize\\n\\arg \\min_{w,a} - \\sum_{i} \\log p(y_i|X_i; w) - \\sum_{(i,j) \\in Neighborhood} a \\log p(y_i, y_j|X_i, X_j; w)\\nwhere w and a are the parameters of the CRF to be learned, and y_i and y_j are SCALARS \\in {1,...,k} for k-class labels.\\nFor HarmonicGAN, the objective function includes the bidirectional translation with the unary term (CycleGAN loss) and the binary term. For simplicity we can look at one direction only:\\n\\arg \\min_{G,F} \\sum_{i} |F(G(X))_i - x_i| + \\sum_{i,j \\in ImageLattice} w_{ij} Dist[F(y)(i), F(y)(j)]\\nwhere w_{ij} defines the similarity measure and F(y)(i) computes a feature VECTOR centered at i.\\nThe key difference lies in the explicit graph Laplacian defined with w_{ij} for Dist[F(y)(i), F(y)(j)] over all pairs, whereas p(y_i, y_j|X_i, X_j; w) is a joint probability for the neighboring pixels i and j.\\nIn both supervised and weakly-supervised CRFs, y_i and y_j are scalars, which are not applicable to the general image translation setting of non-labeling tasks, since the feature vector space is too high-dimensional for a CRF to model. In addition, the graph Laplacian term in HarmonicGAN is explicitly modeled, which is very different from a joint probability model on the (scalar) labels of neighboring pixels. It is true that HarmonicGAN adopts a smoothness term, but so do semi-supervised learning, manifold learning, Markov Random Fields, spectral clustering and normalized cuts, and Laplacian eigenmaps.\\n\\n3. Application domain\\nCRF models are used in supervised and weakly-supervised image labeling tasks, but HarmonicGAN, like CycleGAN, is applied to generic image translation tasks where the output goes beyond image labels. The reason we show the result on Cityscapes here is twofold: (1) it is shown in the original CycleGAN paper and we want to have a direct comparison with it, and (2) the labeling result allows quantitative measures since the ground-truth labels are available. The family of unpaired image translation tasks can be quite broad, as seen in a number of applications following CycleGAN.\\n\\n(continued below)\"}", "{\"title\": \"We appreciate your constructive comments. Please see below for our answers. (part 2/2)\", \"comment\": \"(continued from above)\\n\\n(3) They show different results. We add Fig. 6 to show the qualitative results of CycleGAN, DistanceGAN and the proposed HarmonicGAN on the BRATS dataset. As shown in Fig. 6, the problem of randomly adding/removing tumors in the translation of CycleGAN is still present in the results of DistanceGAN, while the proposed method solves the problem and corrects the location of the tumors. Table 1 shows the quantitative results on the whole test set, which also reach the same conclusion. The results of DistanceGAN on all four metrics are even worse than CycleGAN, while HarmonicGAN yields a large improvement over CycleGAN.\\n\\nIn conclusion, the proposed method differs significantly from DistanceGAN in motivation, formulation, implementation and results. We have added a comparison and discussion about the differences between DistanceGAN and HarmonicGAN in Section 6.1 in the revision to make this clear.\", \"q2\": \"Lots of method details are missing. Implementation details are missing.
In Section 3.3.2, what layers are chosen for computing the semantic features? What exactly is the metric for computing the distance between semantic features?\", \"a2\": \"In the implementation we select the layer 4_3 after ReLU from the VGG-16 network for computing the semantic features. In Eq. 6, 7, 8, we first normalize the features to [0,1] and then use the L1 distance of normalized features as the Dist function (for both Histogram and CNN features). Considering the limited space in an ICLR submission, we had moved the implementation details to the appendix; we've now moved them back to the main paper and expanded the paper to 9 pages. Are there any other details in particular that you would like to know?\", \"q3\": \"The qualitative results on the task, Horse2Zebra and Zebra2Horse, are not impressive. Obvious artifacts can be observed in the results. Although the paper claims that the proposed method does not change the background and performs more complete transformations, the background is changed in the result for the Horse2Zebra case in Fig. 5. More qualitative results are needed to demonstrate the effectiveness of the proposed method.\", \"a3\": \"The task of unpaired image-to-image translation is highly difficult due to the lack of paired training data. Although the proposed method could not generate \\u201cperfect\\u201d results on some samples, it shows significantly better performance compared to the standard state-of-the-art CycleGAN framework. The result of the human perceptual study in Table 4 demonstrates that the proposed method achieves a higher Likert score and a larger percentage of user preference over CycleGAN and DistanceGAN. As shown in Table 4, the users give the highest score (72%) to the proposed method, significantly higher than CycleGAN (28%). Meanwhile, the average Likert score of our method was 3.60, outperforming CycleGAN's 3.16 and DistanceGAN's 1.08. Both CycleGAN and our method may change the color or tone of the background, which also looks realistic overall (such as translating the color of grass from green to yellow). However, sometimes CycleGAN may translate some parts of the background to zebra-like texture, which is an artifact. The proposed method performs better at preventing these zebra-like parts and makes the generated results more realistic, as shown e.g. in the comparisons in Fig. 7 and Fig. 10. Considering the limited space in the paper, please see more qualitative results in Fig. 10 in the appendix.\", \"q4\": \"To demonstrate the effectiveness of a general unpaired image-to-image translation method, the proposed method needs to be tested on more tasks.\", \"a4\": \"As suggested, we apply the proposed method to 4 more tasks in Fig. 11, including translation between apples and oranges, facades and labels, aerials and maps, summer and winter, and compare these to CycleGAN. These results demonstrate that the proposed method generalizes well to these tasks and outperforms CycleGAN.\"}", "{\"title\": \"We appreciate your constructive comments. Please see below for our answers. (part 1/2)\", \"comment\": \"Q1: The key idea of this paper is very similar to that of DistanceGAN. The proposed method can be regarded as a combination of the advantages of DistanceGAN and CycleGAN.\", \"a1\": \"There is a large difference between DistanceGAN and the proposed HarmonicGAN. First, DistanceGAN already included the CycleGAN loss.
Second, DistanceGAN is about preserving the AVERAGED distance between the sample pairs from the source to the target domain, which is not sufficient to retain the underlying integrity and manifold structure.\\n\\nNext, we elaborate on the key difference between DistanceGAN and HarmonicGAN. DistanceGAN encourages the distance of samples to be close to an ABSOLUTE MEAN during translation. In contrast, HarmonicGAN enforces a smoothness term naturally under the graph Laplacian, making the motivations of DistanceGAN and HarmonicGAN quite different.\\n\\nIn more detail, the distance constraint in DistanceGAN uses the expectation of the absolute differences between the distances in each domain, formulated as:\\n\\nL_{distance}(G, X) = E_{x_i, x_j \\in X} \\left| ( || x_i - x_j || - \\mu_X) / \\sigma_X - ( || G(x_i) - G(x_j) || - \\mu_Y ) / \\sigma_Y \\right|,\\n\\nwhere \\mu_X, \\mu_Y (\\sigma_X, \\sigma_Y) are the precomputed means (standard deviations) of pairwise distances in the training sets from domains X and Y.\\nThis distance preservation is interesting but not strong enough to preserve the manifold structure. We suspect that it is probably the reason for DistanceGAN not performing well, as seen in the qualitative and quantitative measures.\\n\\nDifferently, HarmonicGAN introduces a smoothness constraint to provide similarity-consistency between image patches during the translation. The smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function. We define the set consisting of individual image patches as the nodes of the graph, and define the affinity measure (similarity) computed on image patches as the edges of the graph. The smoothness term acts as a graph Laplacian imposed on all pairs of image patches. For the translation from X to Y, the smoothness constraint is formulated as:\\n\\nL_{smooth} (G, X, Y) = E_{{\\bf x} \\in X} \\big[ \\sum_{i,j} w_{ij}(X) \\times Dist[G(\\vec{x})(i), G(\\vec{x})(j)] + \\sum_{i,j} w_{ij}(G(X)) \\times Dist[F(G(\\vec{x}))(i), F(G(\\vec{x}))(j)] \\big]\\n\\nwhere w_{ij}(X) = \\exp(- Dist[\\vec{x}(i), \\vec{x}(j)] / \\sigma^2) defines the affinity between the two patches \\vec{x}(i) and \\vec{x}(j). Additionally, the similarity of a pair of patches is measured on the features of each patch, e.g. Histogram or CNN features.\\n\\nComparing the distance constraint in DistanceGAN and the smoothness constraint in HarmonicGAN, we can conclude the following three main differences between them:\\n\\n(1) They show different motivations and formulations. Most importantly, the loss term in DistanceGAN essentially matches the distance of sample pairs in the source domain to the AVERAGED distance in the target domain; it is not about preserving the distance of the individual sample pairs. From a manifold learning point of view, preserving the averaged distance is not sufficient for preserving the underlying manifold structure. In contrast, the smoothness constraint in our method is designed from a graph Laplacian to build the similarity-consistency between image patches. Thus, the smoothness constraint uses the affinity between two patches as a weight to measure the similarity-consistency between two domains. Our approach is in the vein of manifold learning.
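To make the contrast concrete, a small NumPy sketch of the DistanceGAN-style term follows (the sample shapes and the precomputed statistics mu/sd are illustrative assumptions, not taken from either paper's code):

import numpy as np

def distancegan_term(x, gx, mu_x, sd_x, mu_y, sd_y):
    # x: (n, d) source samples; gx: (n, d) their translations; mu/sd are the
    # precomputed pairwise-distance statistics of the two training domains.
    dx = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    dy = np.linalg.norm(gx[:, None] - gx[None, :], axis=-1)
    return np.abs((dx - mu_x) / sd_x - (dy - mu_y) / sd_y).mean()

Note that every pair is pushed through the same global statistics; there is no per-pair affinity w_ij as in the smoothness term above, which is exactly the manifold-structure point being made here.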
The smoothness term defines a Laplacian \\Delta = D - W, where W is our weight matrix and D is a diagonal matrix with D_{i} = \\sum_j w_{ij}; thus, the smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function.\\n\\n(2) They are different in implementation. The smoothness term in HarmonicGAN is computed on image patches, while the distance term in DistanceGAN is computed against the average. Therefore, the smoothness constraint is more fine-grained compared to the distance preserving term in DistanceGAN. Moreover, the distances in DistanceGAN are directly computed from the samples in each domain. They scale the distances with the precomputed means and stds of the two domains to reduce the effect of the gap between the two domains. Differently, the smoothness constraint in HarmonicGAN is measured on the features (Histogram or CNN features) of each patch, which maps samples in the two domains into the same feature space and removes the gap between the two domains.\\n\\n(continued below)\"}", "{\"title\": \"A very similar idea to DistanceGAN\", \"review\": \"This paper proposes a method called HarmonicGAN for unpaired image-to-image translation. The key idea is to introduce a regularization term on the basis of CycleGAN, which encourages similar image patches to acquire similar transformations. Two feature domains are explored for evaluating the patch-level similarity, including a soft RGB histogram and semantic features based on VGGNet. In fact, the key idea is very similar to that of DistanceGAN. The proposed method can be regarded as a combination of the advantages of DistanceGAN and CycleGAN. Thus, the technical novelty is very limited in my opinion. Some experimental results are provided to demonstrate the superiority of the proposed method over CycleGAN, DistanceGAN and UNIT.\\n\\nGiven the limited novelty and the inadequate number of experiments, I am leaning towards rejecting this submission.\\n\\nMajor questions:\\n1. Lots of method details are missing. In Section 3.3.2, what layers are chosen for computing the semantic features? What exactly is the metric for computing the distance between semantic features?\\n2. The qualitative results on the task, Horse2Zebra and Zebra2Horse, are not impressive. Obvious artifacts can be observed in the results. Although the paper claims that the proposed method does not change the background and performs more complete transformations, the background is changed in the result for the Horse2Zebra case in Fig. 5. More qualitative results are needed to demonstrate the effectiveness of the proposed method.\\n3. To demonstrate the effectiveness of a general unpaired image-to-image translation method, the proposed method needs to be tested on more tasks.\\n4. Implementation details are missing. I am not able to judge whether the comparisons are fair enough.\\n\\n[New comment:] I have read the authors' explanations and clarifications, which made me increase my rating. Regarding the technical novelty, I still don't think this paper bears sufficient substance.
If there is extra quota, I would recommend Accept.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Add spatial pairwise regularization to CycleGAN loss for image-to-image translation\", \"review\": \"This paper adds a spatial regularization loss to the well-known CycleGAN loss for unpaired image-to-image translation (Zhu et al., ICCV17). Essentially, the regularization loss (Eq. 6) is similar to imposing a CRF (Conditional Random Field) term on the network outputs, encouraging spatial consistency between patches within each generated image.\\n\\nThe paper is clear and well written.\\n\\nUnpaired Image-to-Image translation is an important problem. \\n\\nThe way the smoothness loss (Eq. 6) is presented gives readers the impression that spatial pairwise regularization is new, ignoring its long history (e.g., CRFs) in computer vision (not a single classical paper on CRFs is cited). Putting aside classical spatial regularization works, imposing pairwise regularization on the outputs of modern deep networks has been investigated in a very large number of works recently, particularly in the context of weakly-supervised semantic CNN segmentation, e.g., [Tang et al., On Regularized Losses for Weakly-supervised CNN Segmentation, ECCV 18], [Lin et al.: ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation, CVPR 2016], among many other works. Very similar in spirit to this ICLR submission, these works impose within-image pairwise regularization (e.g., CRF) on the latent outputs of deep networks, with the main difference that these works use CNN semantic segmentation classifiers whereas here we have a CycleGAN for image generation.\\n\\nAlso, in the context of supervised CNN segmentation, CRFs have made a significant impact when used as a post-processing step, e.g., very well-known works such as [DeepLab by Chen et al. ICLR15] and [CRFs as Recurrent Neural Networks by Zheng et al., ICCV 2015]. \\n\\nIt might be a valid contribution to evaluate spatial regularization (e.g., CRF losses) on image generation tasks (such as CycleGAN), but the paper really needs to acknowledge very related prior works on regularization (at least in the context of deep networks).\\n\\nThere are also related pioneering semi-supervised deep learning works based on graph Laplacian regularization, e.g., [Weston et al., Deep Learning via Semi-supervised Embedding, ICML 2008], which the paper does not acknowledge/discuss. \\n\\nThe manifold regularization terminology is misleading. The regularization is not over the feature space of image samples. It is within the spatial domain of each generated image (patch or pixel level); so, in my opinion, CRF (or spatial) regularization (instead of manifold regularization) is a much more appropriate terminology. \\n\\nAlso, I would not call this approach HarmonicGAN. I would call it CRF-GAN or Spatially-Regularized GAN. The computation of harmonic functions is just one way, among many other (potentially better) ways to optimize pairwise smoothness terms (including the case of the used smoothness loss). And, by the way, I did not get how the loss in (9) gives a harmonic function. Could you please clarify and give more details?
In my understanding, the harmonic solution in [Zhu and Ghahramani, ICML 2003] comes directly as a solution of the graph Laplacian (and it assumes some labeled points, i.e., a semi-supervised setting). Even if the solution is correct (which I do not see how), I do not think it is an efficient way to handle pairwise-regularization problems in image processing, particularly when the matrix W = [w_{ij}] is dense (which might be the case here, unless you are truncating the Gaussian kernel with some heuristics). In this case, back-propagating the proposed loss would be of quadratic complexity w.r.t. the number of image patches. Again, there is a long tradition of efficiently optimizing pairwise regularizers in vision/learning (even in the case of dense affinity matrices), and one very well-known work, which is currently being used a lot in the context of imposing CRF structure on the outputs of deep networks, is [Krahenbuhl and Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials], NIPS 2011. This highly related and widely used inference work for dense pairwise regularization is not cited/discussed either. The Gaussian filtering ideas of the work of Krahenbuhl and Koltun, which ease optimizing dense pairwise terms (from quadratic to linear), are applicable here (as a Gaussian kernel is used), and are widely used in computer vision, including closely related works imposing spatial regularization losses on the outputs of deep networks, e.g., [Tang et al., On Regularized Losses for Weakly-supervised CNN Segmentation, ECCV 18], among many others. \\n \\nWhen using features from pre-training (VGG) in the CRF loss, the comparison with unsupervised CycleGAN is not fair. In Table 2 (Label translation on Cityscapes), CycleGAN outperforms the proposed method in all metrics when only unsupervised histogram features are used, which makes me doubt the practical value of the proposed regularization in the context of image-translation tasks. Having said that, the histogram-based regularization is helping in the medical-imaging application (Table 1). By the way, the use of histograms (of patches or super-pixels) as unsupervised features in pairwise regularization is not new either; see for instance [Lin et al.: ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation, CVPR 2016]. Also, it might be better to use super-pixels instead of patches. \\n\\nSo, in summary, the technical contribution is minor, in my opinion (imposing pairwise regularization on the outputs of deep networks has been done in many works, but not for CycleGAN); optimization of the proposed loss as a harmonic function is not clear to me; using VGG in the comparisons with CycleGAN is not fair; and the long history of closely-related spatial regularization terms (e.g., CRFs) in computer vision is completely ignored.\\n\\nMinor: please use \\u2018term\\u2019 instead of \\u2018constraint\\u2019. These are unconstrained optimization problems and there are no equality or inequality constraints here.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"New smoothness constraint to Cycle-GAN formulation.\", \"review\": \"Summary: The paper proposes a new smoothness constraint in the original cycle-gan formulation.
The cycle-gan formulation minimizes reconstruction error on the input, and there is no criterion other than the adversarial loss function to ensure that it produces a good output (this is in sync with the observations from Gokaslan et al. ECCV'18 and Bansal et al. ECCV'18). A smoothness constraint is defined across random patches in the input image and corresponding patches in the transformed image. This enables the translation network to preserve edge discontinuities and variation in the output, and leads to better outputs for medical imaging, the image-to-labels task, and horse to zebra and vice versa.\\n\\nPros:\\n\\n1. Additional smoothness constraints help in improving the performance over multiple tasks. This constraint is intuitive.\\n\\n2. Impressive human studies for medical imaging.\\n\\n3. Improvement in the qualitative results for the shown examples in the paper and appendix.\\n\\nThings not clear from the submission:\\n\\n1. The paper is lacking in technical details: \\n\\na. what is the patch size used for the RGB histogram?\\n\\nb. what features or conv-layers are used to get the features from the VGG (19?) net? \\n\\nc. other than medical imaging where there isn't a variation in colors of the two domains, it is not clear why the RGB histogram would work?\\n\\nd. the current formulation can be thought of as a variant of perceptual loss from Johnson et al. ECCV'16 (applied to the patches, or including pairs of patches). In my opinion, implementing via a perceptual loss formulation would have made the formulation cleaner and simpler? The authors might want to clarify how it is different from adding a perceptual loss over the pair of patches along with the adversarial loss. One would hope that a perceptual loss would help improve the performance. Also see Chen and Koltun, ICCV'17.\\n\\n2. The proposed approach is highly constrained to the settings where structure in input-output does not change. I am not sure how this approach would work if the settings from Gokaslan et al. ECCV'18 were considered (like cats to dogs, where the structure changes while going from input to output)? \\n\\n3. Does the proposed approach also provide temporal smoothness in the output? E.g. Figure-6 shows an example of a man on a horse being zebrafied. My guess is that the input is a small video sequence, and I am wondering if it provides temporal smoothness in the output? The failure on the human body makes me wonder whether smoothness constraints are helping learn the edge discontinuities. What if the edges of the input (using an edge detection algorithm such as HED from Xie and Tu, ICCV'15) were concatenated to the input and used in the formulation? This would be similar in spirit to the formulation of deep cascaded bi-networks from Zhu et al. ECCV'16.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"Thanks for the detailed replies. Looking forward to the revised text.\", \"title\": \"Thanks for the replies\"}", "{\"title\": \"Answers to questions\", \"comment\": \"Thank you for the great suggestions on improving the writing! See our responses below. We will integrate these clarifications into the actual paper once it's editable.\", \"q1\": \"What is their distance (\\u2018Dist\\u2019) function? Is it lower/upper-bounded?\", \"a1\": \"We first normalize the features to [0,1] and then use the L1 distance of normalized features as the Dist function (for both Histogram and CNN features).
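In code, a minimal reading of this Dist choice is the following (taking the per-dimension mean, rather than the sum, of absolute differences is our assumption here, made so that the output stays bounded):

import numpy as np

def dist(f1, f2):
    # f1, f2: feature vectors already normalized to [0, 1]; averaging the
    # per-dimension absolute differences keeps the output in [0, 1] too.
    return np.abs(f1 - f2).mean()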
Therefore the range of the 'Dist' function outputs is lower & upper-bounded within [0,1]. We will mention this in the revision.\", \"q2\": \"How does Eq. 9 lead to a \\u2018harmonic function\\u2019?\", \"a2\": \"The definition of a harmonic function is a twice continuously differentiable function f : \\mathbb{R}^n \\rightarrow \\mathbb{R} that satisfies Laplace's equation: \\Delta f = 0. Our definition of a harmonic function is consistent with the one in (Zhu et al. ICML 2003), where the smoothness term defines a graph Laplacian with the minimal value achieved at \\Delta f = 0 as a harmonic function. In our paper, the smoothness term (Eq. 6, 7, 8) defines a Laplacian \\Delta = D - W, where W is our weight matrix in Eq. 6 and D is a diagonal matrix with D_{i} = \\sum_j w_{ij}.\", \"q3\": \"Have the authors performed any experiments with datasets in larger domains? The largest dataset used contains a few thousand images, while much larger datasets are available. Does this mean that their method is not applicable in larger domains?\", \"a3\": \"The datasets we evaluated on (BRATS, Cityscapes and horse/zebra) are all challenging benchmarks that have been commonly used for the task of unpaired image translation (Zhu et al. ICCV 2017). Note that image translation performs dense pixel labeling/prediction, which normally utilizes much smaller datasets than standard image classification tasks like ImageNet. This is primarily due to the difficulty of obtaining dense pixel-wise labeling for training and evaluation.\\n\\nOur method works very well on the standard benchmarks and there is no clear bottleneck preventing HarmonicGAN from working on larger datasets. It is a good idea to be more ambitious and try to experiment on situations that are more complicated and on larger datasets. For example, the MSCOCO dataset for semantic and instance segmentation is becoming increasingly larger. Thanks for the suggestion.\", \"q4\": \"The whole idea is based on manifold learning but there are hardly a few sentences about it in the whole manuscript. Even in related work, there is a one-sentence reference; elaborating more on it would make it easier to follow the intuition and the claims (even in the appendix).\", \"a4\": \"Thanks for the comment. We cited a number of references for manifold learning as well as the graph-based semi-supervised learning literature, but didn't go into details. We will provide more elaboration in the revision.\", \"q5\": \"What is the graph G suddenly mentioned in a single sentence on page 5?\", \"a5\": \"We introduce the graph on page 5, Section 3.1 and elaborate on it on the same page in Section 3.3. We introduce smoothness constraints to unpaired image-to-image translation inspired by graph-based semi-supervised learning (Zhu et al. ICML 2003, Zhu 2006). Briefly, the graph is used by the smoothness constraint; its nodes are individual image patches and its edges are similarities computed for pairs of image patches. The smoothness term acts as a graph Laplacian imposed on all pairs of samples. We will clarify this earlier on in the paper.\", \"q6\": \"Are the arrows in Fig. 4 correct? For instance in (a) there are two arrows pointing to generator F, but zero arrows pointing out of it.\", \"a6\": \"Thanks for pointing it out. The arrows in the figure are indeed a bit confusing. In (a) the arrow pointing from F(G(x)) to F should be horizontally flipped. Similarly, in (b) the arrow pointing from F(G(x)) to G should also be horizontally flipped.
We will revise the direction of these two arrows.\", \"q7\": \"The way the patches are considered is also not explained. Are they overlapping? How are they considered during training? Dense patch extraction?\", \"a7\": \"Yes, they are dense patches with overlaps. The Histogram/CNN features of patches are densely learned in parallel. In the implementation, the smoothness term is computed from patch pairs randomly selected from all pairs.\"}", "{\"comment\": [\"Even though the new loss term seems interesting idea, the authors could improve their text to make it easier for the readers. Few questions from reading it:\", \"What is their distance (\\u2018Dist\\u2019) function? Is it lower/upper-bounded?\", \"How does eq. 9 lead to a \\u2018harmonic function\\u2019?\", \"Have the authors performed any experiments with datasets in larger domains? The largest dataset used contains few thousand images, while much larger datasets are available. Does this mean that their method is not applicable in larger domains?\", \"Some text improvements that the authors might consider:\", \"The whole idea is based on manifold learning but there are hardly few sentences for it in the whole manuscript. Even in related work, there is a one sentence reference; elaborating more on it would make it easier to follow the intuition and the claims (even in the appendix).\", \"What is the graph G suddenly mentioned in a single sentence in page 5?\", \"Are the arrows in Fig. 4 correct? For instance in (a) there are two arrows pointing to generator F, but zero arrows pointing out of it.\", \"The way the patches are considered is also not explained. Are they overlapping? How are they considered during training? Dense patch extraction?\"], \"title\": \"Questions\"}" ] }
SkGpW3C5KX
Heated-Up Softmax Embedding
[ "Xu Zhang", "Felix Xinnan Yu", "Svebor Karaman", "Wei Zhang", "Shih-Fu Chang" ]
Metric learning aims at learning a distance which is consistent with the semantic meaning of the samples. The problem is generally solved by learning an embedding, such that the samples of the same category are close (compact) while samples from different categories are far away (spread-out) in the embedding space. One popular way of generating such embeddings is to use the second-to-last layer of a deep neural network trained as a classifier with the softmax cross-entropy loss. In this paper, we show that training classifiers with different temperatures of the softmax function lead to different distributions of the embedding space. And finding a balance between the compactness, 'spread-out' and the generalization ability of the feature is critical in metric learning. Leveraging these insights, we propose a 'heating-up' strategy to train a classifier with increasing temperatures. Extensive experiments show that the proposed method achieves state-of-the-art embeddings on a variety of metric learning benchmarks.
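As a rough illustration of the mechanism the abstract describes, the temperature-scaled softmax cross-entropy and a heating-up schedule can be sketched as follows (NumPy; the alpha values and the schedule are illustrative assumptions, not the paper's exact settings):

import numpy as np

def heated_up_softmax_ce(logits, label, alpha):
    # alpha acts as an inverse temperature: a large alpha gives a sharp
    # (low-temperature) softmax, a small alpha a soft (high-temperature) one.
    z = alpha * logits
    z = z - z.max()                    # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

# "heating up": start with an intermediate alpha, then fine-tune with a
# smaller alpha (i.e. a higher temperature) and a reduced learning rate.
schedule = [(16.0, 1e-3), (4.0, 1e-4), (1.0, 1e-4)]  # (alpha, learning rate)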
[ "softmax", "samples", "metric", "embeddings", "classifier", "aims", "distance", "consistent", "semantic meaning", "problem" ]
https://openreview.net/pdf?id=SkGpW3C5KX
https://openreview.net/forum?id=SkGpW3C5KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1xu-7GggV", "r1xD4p4FRm", "HkgwBhNKA7", "rJlU-sVtR7", "ByeFu5_ahQ", "S1loo0Von7", "BJlWjOyc2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544721151661, 1543224622929, 1543224382659, 1543224061660, 1541405297080, 1541258915169, 1541171353479 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1215/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1215/Authors" ], [ "ICLR.cc/2019/Conference/Paper1215/Authors" ], [ "ICLR.cc/2019/Conference/Paper1215/Authors" ], [ "ICLR.cc/2019/Conference/Paper1215/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1215/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1215/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n \\n- The method and justification are clear\\n- The quantitative results are promising.\\n\\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- The contribution is minor\\n- Analysis of the properties of the method is lacking.\\nThe first point was the major factor in the final decision.\\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nReviewer opinion was quite divergent but both AR1 and AR2 had concerns about the 2 weaknesses mentioned in the previous section (which remained after the author rebuttal). \\n\\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nNo consensus was reached. The source of disagreement was on how to weigh the pros vs the cons. The final decision was aligned with the lower ratings. The AC agrees that the contribution is minor.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"promising quantitative results but limited contribution over previous work\"}", "{\"title\": \"Reply to R3\", \"comment\": \"Q1: How to select the intermediate temperature alpha.\", \"a1\": \"A good intermediate temperature value can be selected by cross-validation. According to our experiment, choosing alpha value = 16 generally gives a good performance. According to the new experiment in Appendix E, we can learn an alpha value and apply the heating-up strategy based on that value.\", \"q2\": \"How \\\"heated-up\\\" strategy produces well generalized feature?\", \"a2\": \"The explanation is given in Appendix C. For BN models learned with different alpha values (Fig. 4(c)-4(j)), in training set (left-hand side), we clearly see the trend that a smaller alpha value will lead to more compact features. Unfortunately, this trend doesn\\u2019t hold for test set (right hand side). 
For the model trained with alpha = 4, 8, although in the training set the histograms show nice compactness and spread-out properties, the features are not compact at all on the test set. It\\u2019s a clear signal of overfitting.\\n \\nComparing the histograms of the features learned without heating-up (Fig. 4(g)-4(h)) and with heating-up (Fig. 4(k)-4(l)), applying the proposed \\u201cheating-up\\u201d strategy, i.e. fine-tuning the network with a smaller alpha and learning rate, makes the positive pairs more compact while keeping the negative pairs spread-out. We don\\u2019t observe a clear overfitting phenomenon for the \\u201cheated-up\\u201d feature.\", \"q3\": \"How do different alpha values change the ratio of #incorrect samples to #total samples and #boundary samples to #total samples?\", \"a3\": \"To verify this, we show the final training accuracy with different alpha values on the Car196 dataset in Table 7 in the revised manuscript. \\u201c1 - Train Acc\\u201d indicates the ratio of the number of \\u201cincorrect\\u201d samples to the number of total samples. The result shows that when training with a small alpha value, especially when alpha = 2, the training accuracy is significantly lower (~11%) than that of models trained with larger alpha values (alpha = 8, 16, 32). It verifies that training with large alpha values will focus more on the \\u201cincorrect\\u201d samples and provide higher training accuracy. Showing the ratio of the #boundary samples to the #total samples is not really possible, since there is no formal definition of what a \\u201cboundary\\u201d sample is.\", \"q4\": \"Do multiple heating-up strategies improve learning speed?\", \"a4\": \"We thank the reviewer for suggesting this good idea. This may help improve the learning speed. However, since the alpha value is only a hyperparameter, and the network needs sufficient time to respond to the change of the hyperparameter, combining different heating-up strategies may not help a lot.\"}", "{\"title\": \"Reply to R2\", \"comment\": \"Q1: Missing comparison to [Wang et al., 2017], which learns the temperature.\", \"a1\": \"We thank the reviewer for pointing out this recent related work. We added a comparison to [Wang et al., 2017] in Appendix E of the revised manuscript. We applied different alpha values [1,2,4,8,16,32] as initialization. According to the results, training with a fixed intermediate alpha value (BN) outperforms the method with a trainable alpha. A similar conclusion is also reported by Ranjan et al. (2017). It is interesting to observe that the final learned alpha values are always increased compared to the initial values, which is the opposite of what our heating-up strategy does. The reason for the lower performance is that learning the alpha value is equivalent to learning the norm of the feature. As mentioned in Appendix B, the classifier will tend to get a larger alpha (larger feature norm), which results in a feature that is not compact.\\n\\nWe also applied the heating-up strategy to both learnable and fixed alpha settings. The proposed heating-up strategy shows performance gains for all the settings. Learning with a fixed alpha plus heating-up is still the overall winner.\", \"q2\": \"Provide a comparison of HLN/HBN to LN/BN for \\u201calpha=4\\u201d\", \"a2\": \"The comparison is given in Table 3. In the table, only the second column is the result of softmax without normalization. All other results are with normalization applied to both the feature and the weight, which is the same setting as HBN.
We've changed the statement there to make it clearer.\", \"q3\": \"Explain why the temperature T should be increased in the training and better explain Fig. 1(d).\", \"a3\": \"Increasing the temperature T will assign larger gradients to boundary samples, which results in a compact feature that is beneficial for metric learning.\\n\\nThe left-hand side of Fig. 1(d) shows the first step of \\u201cHeating-Up\\u201d, which, at the beginning of training, uses an intermediate alpha value to train the network. At the beginning, there are many \\u201cincorrect samples\\u201d and \\u201cboundary samples\\u201d; choosing an intermediate alpha value (red dashed line in Fig. 1(a)-(c)) will assign large gradients to the \\u201cincorrect samples\\u201d, which will quickly push them to become \\u201cboundary samples\\u201d. The \\u201cboundary samples\\u201d will also be pushed towards the center.\\n \\nAfter the first step, the positions of the samples are shown on the right-hand side of Fig. 1(d). Continuing to use an intermediate alpha will not give enough gradient to the boundary samples. Therefore, we should use a smaller alpha (green dashed line in Fig. 1(a)-(c)) to assign larger gradients for updating the boundary samples, which will make the final embedding more compact.\", \"q4\": \"Show the importance of normalization.\", \"a4\": \"Applying l2 normalization to the classifier weights in network training helps increase the angular margin instead of the Euclidean distance between different classes. It helps to get more discriminative feature representations. We've added Appendix F to explain this and provided experimental results of training the embedding without normalization.\", \"q5\": \"Discuss methods using different losses, such as [Wan et al. 2018].\", \"a5\": \"We thank the reviewer for pointing out this reference. The reference proposes to use the Mahalanobis distance instead of the inner product in the softmax function and achieves good performance on image classification tasks. One main difference between deep metric learning and [Wan et al. 2018] is that [Wan et al. 2018] requires the compactness and spread-out properties under the Mahalanobis distance, while in deep metric learning those properties are required in Euclidean space. Therefore, [Wan et al. 2018] may not give an optimal solution to the deep metric problem. Unfortunately, since the method is not originally designed for deep metric learning, given limited rebuttal time, we weren't able to implement it for metric learning. We will definitely discuss this reference in our final version.\"}", "{\"title\": \"Reply to R1\", \"comment\": \"Q1: The proposed method is not Metric learning.\", \"a1\": \"Conventional metric learning only learns a metric between different samples. However, Deep Metric Learning first learns a low-dimensional embedding for all the samples and uses the Euclidean distance on the new embedding as a metric. The embedding and the Euclidean distance are regarded as a whole as the \\u2018metric\\u2019. This terminology and setting have been widely used in recent years (Hoffer & Ailon, 2015; Harwood et al., 2017; Yuan et al., 2017; Wu et al., 2017; Song et al., 2016; 2017).\", \"q2\": \"The idea of using temperature and the second-to-last layer\\u2019s embedding is not novel.\", \"a2\": \"We didn\\u2019t claim the idea of using temperature and the second-to-last layer\\u2019s embedding as the novelty of the paper. We have reviewed the related works on temperature and softmax embedding.
As mentioned by R2 and R3, our novelty lies in understanding how different temperature values determine the distribution of the final embedding by assigning different gradients to different samples, and in the proposed heating-up strategy. The insight is completely different from the existing literature.\", \"q3\": \"The correlation between the final performance and temperature setting is not evaluated.\", \"a3\": \"The correlation between the final performance and temperature is evaluated in Table 3. A new comparison of a learned alpha [Wang et al., 2017] versus our heating-up strategy is provided in Appendix E, showing that our proposed strategy significantly outperforms the learned approach.\", \"q4\": \"The side-effect on the learning rate setting is not analyzed.\", \"a4\": \"The idea of heating-up is to use small alpha values to \\u201cfine-tune\\u201d the network. Choosing a learning rate that is 1/10 of the starting learning rate is a common strategy used for fine-tuning. Using a large learning rate will make the network converge to results similar to directly using a small alpha for training. The best learning rate can be chosen through cross-validation. However, 1/10 of the starting learning rate generally gave good performance for all datasets in our experiments.\"}", "{\"title\": \"Heated-Up Softmax Embedding\", \"review\": \"This paper presents an interesting idea to improve the softmax embedding performance with a heated-up strategy. It is well-written and the proposed method is easy to implement. Several experiments on metric learning datasets demonstrate the effectiveness of the proposed method.\\n\\nThe motivation to find a balance between the compactness and \\\"spread-out\\\" embedding is reasonable. The major weakness is the intermediate temperature selection; it might be a little tricky. How to generalize it to other applications?\\n\\nThe authors claim that the \\\"heated-up\\\" strategy produces well-generalized features, but the rationale behind it is unclear. And there is no quantitative analysis to support this point. \\n\\nThe starting temperature aims at pushing the \\u201cincorrect\\u201d samples to \\u201cboundary\\u201d samples and pushing the \\u201cboundary\\u201d samples to \\u201ccentroid\\u201d samples. I would like to see how the ratios of #incorrect/total and #boundary/total change with different temperatures during the training process, i.e., alpha = 16, 4, 1. This experiment may help to verify the idea.\\n\\nAs mentioned in Section 3, multiple strategies could be defined to increase the temperature. It would be interesting to design a multi-step heat-up strategy. Does it help to improve the learning speed?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novelty\", \"review\": \"The introduction and the title do not match. Metric learning does not require specifying the dimension, while the embedding has to specify the reduced dimension. I feel confused that the authors mix these two concepts.\\n\\nThe objective in (1) is very close to that of t-SNE [5], which uses the KL divergence as the objective. The other update formulas are also similar. \\n\\nThis paper leverages the effect of temperature in the Softmax function to heuristically learn a compact and spread-out embedding.
However, such an idea has been widely used and investigated in Reinforcement learning [1], Knowledge distillation [2], classification [3], discrete variable optimization [4] and t-SNE visualization [5], etc. Thus, the insight about the temperature effect on the embedding from the second-to-last layer cannot be considered novel any more. Based on this, the proposed \\\"heating-up\\\" strategy to leverage its effect on the embedding is heuristic, since the temperature parameter is manually set instead of automatically learned. In this case, I expect the authors to provide more in-depth theoretical analysis. \\n\\nThe authors do not present enough experimental results on the correlation between the final performance and this temperature setting. \\n\\nBesides, as the alpha increases or decreases, the side-effect on the learning rate setting for the optimization has not been clearly analyzed, which leaves concerns about tuning performance. \\n\\n\\n[1] Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.\\n[2] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. NIPS 2015.\\n[3] Guo, Chuan, et al. \\\"On calibration of modern neural networks.\\\" ICML 2017.\\n[4] Jang E, Gu S, Poole B. Categorical reparameterization with gumbel-softmax. ICLR 2017.\\n[5] Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9(Nov): 2579-2605.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"It is a simple and interesting method, but lacks discussions and/or empirical evaluation in comparison with the prior work.\", \"review\": \"Summary:\\nThis paper proposes a novel optimization strategy regarding the softmax cross-entropy loss, to extract effective features with good generalization in the framework of metric learning.\\nThe authors focus on the \\\"temperature\\\" parameter in the softmax and, through analyzing the role of the temperature in terms of gradients, propose the approach of heating-up softmax in which the temperature is varied from low to high in training.\\nAnd, the effects of normalization such as by l2 and BatchNorm are discussed in the framework of heated-up softmax.\\nThe experimental results on metric learning tasks demonstrate the effectiveness of the proposed method in comparison with the other methods.\\n\\nComments:\\nPros:\\n- The idea of heating up the temperature in softmax is interesting, and seems novel in the literature of metric learning.\\n- The performance improvement, especially produced by batchNorm-based normalization, is shown.\\n\\nCons:\\n- The formulation of tempered softmax with normalization is already presented in [Wang et al., 2017].\\n- The reason why the heating-up approach contributes to better metric learning is not presented in a convincing way.\\n- It lacks an important ablation study to fairly validate the method.\\n- The discussion/comparison is limited to the simple softmax function.\\n\\nAlthough the reviewer likes the idea of heating up softmax, this paper can be judged as a borderline slightly leaning toward reject, due to the above weak points, the details of which are explained as follows.\\n\\n- Formulation\\nThe softmax equipped with temperature for the normalized features and weights is shown in [Wang et al., 2017].
The only difference from that work is the way the temperature is dealt with; in [Wang et al., 2017], the temperature is \"optimized\" as a trainable parameter, while it is dealt with in a hand-crafted heating-up way in this work. Honestly speaking, it is unclear which approach is better, though the optimization in [Wang et al., 2017] seems elegant as stated in that paper. The only way to validate this work against [Wang et al., 2017] is to empirically evaluate the two methods in the experiments. Such a comparison experiment is not found, and this is a main flaw of this paper.\\n\\n- Justification of the method\\nThe gradients of the softmax cross-entropy loss parameterized with a temperature T are well analyzed in Sections 3.1&3.2. But, in Section 3.3, the reviewer cannot find a clear and convincing explanation of why the temperature T should be increased during training. My question is: why don't you use alpha=4 consistently throughout the training?\\n It might be related to the process of simulated annealing (though \\\"temperature\\\" is usually cooled down in SA), and more interestingly, it would also be possible to find a connection with the work of [Guo et al., 2017]. In [Guo et al., 2017], the temperature in the softmax is optimized as a post-processing step for calibrating the classifier outputs. Though the calibration task itself is somewhat removed from the metric learning of the authors' interest, we can find in that paper an interesting result that the temperature is heated up to increase the confidence of the classifier outputs, which is quite similar to the process of fine-tuning by heating up softmax as done in this work. Therefore, the reviewer guesses that the effectiveness of heating up softmax can also be interpreted from the viewpoint of [Guo et al., 2017].\\n\\nThere is also little description of Figure 1; in particular, the reviewer cannot understand what Figure 1(d) means.\\n\\n- Ablation study\\nTo empirically resolve the above concerns, it is necessary to present an empirical comparison with the \\\"static\\\" softmax.\\nNamely, the methods of HLN/HBN should be carefully compared to LN/BN with \\\"alpha=4\\\", not only to those with alpha=16 shown in Tables 1&2; the comparison in Table 3 seems unfair since the authors apply the static softmax without normalization.\\nAnd, it would be better to show the performance of heated-up softmax \\\"without\\\" normalization to demonstrate the important role of the normalization, as done in [Wang et al., 2017].\\nIn summary, since the proposed method is composed of a heating-up approach and feature normalization, the authors are required to validate the method from those two aspects, respectively, to increase the significance of this paper.\\n\\n- Other loss functions\\nFor achieving compactness in the feature representation, the simple softmax requires both temperature and normalization. It is, however, also conceivable to employ other types of loss functions for that purpose, such as [a], which is based on the (Mahalanobis) distance while taking into account the margin between categories. The distance-based loss also embeds features into localized clusters, which satisfies the authors' objective in this work. To validate the proposed method, it is necessary to compare it with such different types of loss functions.\\n\\n[a] Wan, W., Zhong, Y., Li, T., & Chen, J. (2018). Rethinking Feature Distribution for Loss Functions in Image Classification. In CVPR 2018, pp.
9117\\u20139126.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
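To make the temperature mechanics discussed in the reviews above concrete, here is a minimal PyTorch-style sketch of a tempered softmax cross-entropy with a heating-up schedule. It is not the authors' code: the l2 normalization, the logit scaling by alpha, and the 16 -> 4 -> 1 values follow the discussion, while the function names and epoch thresholds are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def tempered_softmax_loss(features, class_weights, labels, alpha):
    # l2-normalize embeddings and class weights, then scale the cosine
    # logits by the inverse temperature alpha; a large alpha gives small
    # gradients to easy samples, a small alpha spreads gradients out.
    features = F.normalize(features, dim=1)
    class_weights = F.normalize(class_weights, dim=1)
    logits = alpha * features @ class_weights.t()
    return F.cross_entropy(logits, labels)

def heating_up_alpha(epoch):
    # Illustrative "heating-up" schedule: drop alpha (i.e. raise the
    # temperature) late in training, together with a 1/10 learning rate,
    # to fine-tune the embedding as described in the author responses.
    if epoch < 20:
        return 16.0
    if epoch < 30:
        return 4.0
    return 1.0
```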
rklaWn0qK7
Learning Neural PDE Solvers with Convergence Guarantees
[ "Jun-Ting Hsieh", "Shengjia Zhao", "Stephan Eismann", "Lucia Mirabella", "Stefano Ermon" ]
Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers.
[ "Partial differential equation", "deep learning" ]
https://openreview.net/pdf?id=rklaWn0qK7
https://openreview.net/forum?id=rklaWn0qK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Bylef-n3-E", "HkgU6zj4g4", "ByeZO-P3kN", "BJxZsitoa7", "H1eZ3PYjaX", "BygG_etiTQ", "Bkgt-vXc3X", "rklkZrg8nm", "BJg5yrONnm", "H1g2mv84om", "B1e_-B_MoX" ], "note_type": [ "comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1546596616126, 1545020093626, 1544479080873, 1542327193220, 1542326185313, 1542324330033, 1541187328926, 1540912375318, 1540814049830, 1539757859976, 1539634432007 ], "note_signatures": [ [ "~francesco_bardi1" ], [ "ICLR.cc/2019/Conference/Paper1214/Authors" ], [ "ICLR.cc/2019/Conference/Paper1214/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1214/Authors" ], [ "ICLR.cc/2019/Conference/Paper1214/Authors" ], [ "ICLR.cc/2019/Conference/Paper1214/Authors" ], [ "ICLR.cc/2019/Conference/Paper1214/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1214/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1214/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1214/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"comment\": [\"We participated in ICLR Reproducibility Challenge 2019 and you can find our full report and code, written in Python using PyTorch, on Github: https://github.com/francescobardi/pde_solver_deep_learned.\", \"This paper was very interesting and challenging, it introduced - for us at least - a novel idea on how to apply ML-techniques.\", \"We could partially confirm the results reported in the original paper, not every result was reproducible either through lack of time or certainty in how these results where achieved or measured. We did not have the opportunity to test the solver using the MultiGrid method, nor the square-Poisson problem.\", \"We have some questions regarding different aspects of the paper.\", \"1 Code\", \"We have not found any reference to the source code used to produce the results, if it is publicly available you could add a reference in the paper.\", \"2 Reset operator\", \"It is not clear if the proposed approach to enforce boundary condition can be extended to boundary conditions other than Dirichlet or to other iterative methods such as Gauss Seidel.\", \"3 Training process\", \"How many problem instances were considered to build the loss function?\", \"What is the range for the random value that is applied to each edge of the square?\", \"How do you obtain the ground truth solution?\", \"Are the weights of the convolutional kernels randomly initialized or do you set them?\", \"What optimization algorithm do you use if any?\", \"How do you propose to include the spectral radius constraint?\"], \"4_model_testing\": [\"How were the numbers reported in Table 1 obtained?\", \"How do you implement the cylindrical geometry in a finite difference framework? Did you use radial coordinates or a non-uniform grid?\"], \"title\": \"ICLR Reproducibility Challenge 2019 - Team Name: zoidberg\"}", "{\"title\": \"Reply to technical questions\", \"comment\": \"Hi Redouane,\\n\\nThank you for your question! We are also happy to hear from other people trying to solve Poisson Equation.\\n\\n1) how many filters do you consider in your Convk and U-Net, to make it quicker than your baseline\\n\\nWe only use 1 filter at each layer for both Convk and U-Net models. We haven\\u2019t tried adding more filters, but we expect it to give similar or better results.\\n\\n2) how do you deal with image boundaries? I mean how do you deal with padding (zero? no padding?) 
by keeping the boundary conditions of the problem.\\n\\nOnce we get the error terms, we pad them with zeros. We do this so that the conv layers don\\u2019t reduce the input size, and it makes sense since the error of the boundary values should be zero.\\n\\n3) do you use TF or PyTorch, and do you think that has a big impact on speed?\\n\\nWe use PyTorch. I am not entirely sure, but I don\\u2019t think it has a big impact on speed.\"}", "{\"metareview\": \"Quality: The overall quality of the work is high. The main idea and technical choices are well-motivated, and the method is about as simple as it could be while achieving its stated objectives.\", \"clarity\": \"The writing is clear, with the exception of using alternative scripts for some letters in definitions.\", \"originality\": \"The biggest weakness of this work is originality, in that there is a lot of closely related work, and similar ideas without convergence guarantees have begun to be explored. For example, the (very natural) U-net architecture was explored in previous work.\", \"significance\": \"This seems like an example of work that will be of interest both to the machine learning community, and also the numerics community, because it also achieves the properties that the numerics community has historically cared about. It is significant on its own as an improved method, but also as a demonstration that using deep learning doesn't require scrapping existing frameworks but can instead augment them.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A nice example of allowing learning without losing guarantees\"}", "{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for your helpful reviews and suggestions.\\n\\n1) \\u201cThe method seems to rely strongly on the linearity of the solver and its deformation (to guarantee the correctness of the solution). The operator H is a matrix of finite dimensions and it is not completely clear to me what the role of the multi-layer parameterization is. \\n\\u201cwhy is the deep network parameterization needed? Since no nonlinearities are present, isn't this equivalent to fixing the rank of H?\\u201d\\n\\nEven though a composition of linear functions is still linear, using d linear layers is better than one. On a grid with n^2 vertices: one convolution layer requires O(n^2) computations and has a local receptive field; one fully-connected layer requires O(n^4) computations and has a global receptive field; our deep U-Net architecture has O(n^2) computations but a global receptive field. Our hope is that the deep U-Net architecture learns a linear function with both good computation properties (O(n^2)) and convergence properties, which is impossible for one-layer models.\\n\\nOur learned network H is a convolutional operator, which does not have low rank. A low-rank H is unlikely to perform well because many different errors may be mapped to the same correction term, while a high-rank H can correct different errors differently. Our parameterization learns a high-rank H with O(n^2) computation.\\n\\n\\n2) \\u201cBased on a grid approach, the idea applies only to one- or two-dimensional problems.\\u201d\\nOur method generalizes without modification to problems of any dimension: simply replace the 2-D convolution with a k-D convolution.\\n\\n\\n3) \\u201cin the introduction, what does it mean that generic solvers are effective 'but could be far from optimal'?
Does this refer to the convergence speed or to the correctness of the solution?\\u201d\\n\\nWe meant that generic solvers like Jacobi are hand-designed and theoretically correct, but may not be optimal in terms of convergence speed. Designing a solver is a trade-off between computation-per-iteration and spectral radius. We would like to have the smallest spectral radius given a computation budget. We verify this in our experiments: human-designed solvers (e.g. Jacobi) are not Pareto optimal, and are outperformed by our learned solvers. Similar observations have also been made in other fields: learned models outperform hand-designed ones, e.g. Andrychowicz et al., 2016, Song et al., 2017.\\n\\n\\n4) \\u201cother deep learning approaches to PDE solving are mentioned in the introduction. Is the proposed method compared to them somewhere in the experiments?\\u201d\\n\\nTo the best of our knowledge, related works applying ML to PDEs directly fit the solution with deep networks, which have no correctness or generalization guarantees and are restricted to specific dimensions and geometries. Our algorithm is the first deep-learning-based method with provable correctness and generalization guarantees.\\n\\n\\n5) \\u201cgiven a PDE and some boundary conditions, is there any known method to choose the linear iterator T optimally? For example, since u* is the solution of a linear system, could one choose the updates to be the gradient descent updates of a least-squares objective such as || A u - f||^2?\\u201d\\n\\nActually, this is exactly the update rule for most existing methods (conjugate gradient, Jacobi, etc.). We compared with these methods (conjugate gradient, Jacobi) in our experiments and outperform them. For example, if we minimize the objective 1/2 u^T A u - u^T f, given that A is symmetric positive-definite, this objective has a unique minimizer, and the derivative is exactly Au - f. If we perform gradient descent on this objective with learning rate 1, we get exactly the Jacobi update.\\n\\nGradient descent may not be optimal; improving it is an area of active research (e.g. ADAM, Adagrad, \\u201cLearning to learn\\u201d [1]). We tackle a special class of optimization problems and design methods with both correctness guarantees and better performance.\\n\\n[1] Andrychowicz, Marcin, et al. \\\"Learning to learn by gradient descent by gradient descent.\\\" NIPS, 2016.\\n\\n\\n6) \\u201cgiven the `interpretation of H' sketched in Section 3.3, is there any relationship between the proposed accelerated update and the update of second-order coordinate descent methods (like Newton or quasi-Newton)?\\u201d\\n\\nOn a grid with k (= n^2) vertices, second-order methods (e.g. Newton) have optimal convergence speed (they solve linear equations with a single update), but poor computational complexity (O(k^3) to compute the inverse of A). Our method only requires O(k) computation per iteration, and we hope to achieve a good trade-off between convergence speed and computation budget by optimization.\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for your helpful reviews and suggestions.\\n\\n1) \\u201cYou need to spend considerably more space discussing the related work on using ML to improve PDE solvers. Most readers will be unfamiliar with this. You should explain what they do and how they are qualitatively different than your approach.\\u201d\\n\\nWe will add more discussion in our updated paper.
To the best of our knowledge, related works applying ML to PDEs directly fit the solution with deep networks, which have no correctness or generalization guarantees and are restricted to specific dimensions and geometries.\\n\\n\\n2) \\u201cYou do a good job in 3.3 of motivating what H is doing. However, you could do a better job of motivating the overall setup of (6). Is this a common formulation? If so, where else is it used?\\u201d\\n\\nThis formulation is a novel idea that provides correctness guarantees by leveraging a hand-designed solver: we modify the residual of a hand-designed solver. Another idea that also modifies the residual (but not of a hand-designed solver) is conjugate gradient.\\n\\n\\n3) \\u201cI\\u2019m surprised that you didn\\u2019t impose some sort of symmetry conditions on the convolutions in H, such as that they are invariant to flips of the kernel. This is true, for example, for the linearized Poisson operator.\\u201d\\n\\nGeneralization, for our model, is almost for free because of our linear ConvNet setup. Therefore, we didn\\u2019t find strong reasons to restrict the network parameters and reduce dimensionality. Enforcing symmetry introduces unnecessary overhead.\\n\\n\\n4) \\u201cValid iterators converge to a valid solution. However, can\\u2019t there be multiple candidate solutions? How would you construct a method that would be able to find all possible solutions?\\u201d\\n\\nFor most PDEs with Dirichlet boundary conditions (e.g. Poisson, Helmholtz), the solution is always unique. Thus, a valid iterator should converge to the unique solution. We currently consider PDEs that have unique solutions.\\n\\n\\n==Minor comments==\\n\\n5) \\u201cIn (9), why do you randomize the value of k? Wouldn\\u2019t you want to learn a different H depending on what computation budget you knew you were going to use downstream when you deploy the solver?\\u201d\\n\\nOur hope is to learn a generic solver for a type of PDE that can be applied to a variety of applications. Therefore, we train the model agnostic to downstream applications. Nonetheless, practitioners who know their computation budget can certainly fine-tune our iterator with a fixed k.\\n\\n6) \\u201cIn future work it may make sense to learn a different H_i for each step i of the iterative solver.\\u201d\\n\\nThank you for the suggestion. We may try this in the future; e.g., there are some methods that take the history of the iteration into account, which means having a different H for each step.\\n\\n\\n7) \\u201cWhen introducing iterative solvers, you leave it as an afterthought that b will be enforced by clamping values at the end of each iteration. This seems like a pretty important design decision. Are there alternatives that guarantee that u satisfies b always, rather than updating u in such a way that it violates G and then clamping it back? Along these lines, it might be useful to pose (2) with additional terms in the linear system to reflect G.\\u201d\\n\\nThis is the most straightforward way to satisfy the boundary condition, and most existing iterative solvers enforce boundary conditions with this reset operation. We will also explicitly add G into our update rule in our updated paper.\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thank you for your helpful reviews and suggestions.\\n\\n1) \\\"Why didn\\u2019t you try the nonlinear deep network? Is it merely for computational efficiency?
I expect that nonlinear networks might result in even better estimates of H and further reduce the number of fixed-point iterations, even though each operation of H will be more expensive. There might be some trade-off here. But I would like to see some empirical results and discussions.\\\"\\n\\nThe reason we did not use nonlinear deep networks is that it\\u2019s hard to prove correctness guarantees. Our linear iterator has a provably correct fixed point, while nonlinear iterators may have non-unique or incorrect fixed points. In addition, it is easy to prove convergence by spectral theory, while this is not the case for nonlinear operators.\\n\\n\\n2). \\\"The evaluation is only on Poisson equations, which are known to be easy. Have you tried other PDEs, such as Burger\\u2019s equations? I think your method will be more meaningful for those challenging PDEs, because they will require much more fine-grained grids to achieve a satisfactory accuracy and hence will be much more expensive. It will be great if your method can dramatically improve the efficiency of solving these equations.\\\"\\n\\nWe did additional experiments on Helmholtz equations, \\\\nabla^2 u + k^2 u = 0, which are known to be very challenging [1]. So far we have some preliminary results for the Conv1 model in a square domain: we outperform traditional methods by a similar margin. The following shows, for different values of k, the ratio of computation cost compared to Jacobi in terms of layers / flops (same as Table 1).\\nk = 1: 0.422 / 0.685\\nk = 2: 0.396 / 0.643\\nk = 3: 0.383 / 0.622\\n\\nWe leave a more thorough analysis of the Helmholtz equation for future work.\\n\\n[1] Oliver G. Ernst and Martin J. Gander. Why it is Difficult to Solve Helmholtz Problems with Classical Iterative Methods. Numerical analysis of multiscale problems, 2012.\\n\\n\\n3). \\\"I am a bit confused about the statement of Th 3 --- the last sentence \\u201cH is valid for all parameters f and b if the iterator \\\\psi converges \\u2026\\u201d I think it should be 'for one parameter'.\\\"\\n\\nIn Theorem 1 and Lemma 1, we showed that if our iterator is convergent, it converges to the correct solution, hence it is valid. In Theorem 3, we showed that if the iterator is valid for some f and b, then the iterator is valid for every f and b. Combined, these imply that the iterator is valid for every f and b if it is convergent for one f and b. We will rephrase Theorem 3 to remove the confusion.\"}", "{\"title\": \"A Good and Solid Work\", \"review\": \"This paper develops a method to accelerate the finite difference method in solving PDEs. Basically, the paper proposes a revised framework for fixed-point iteration after discretization. The framework introduces a free linear operator --- the choice of the linear operator will influence the convergence rate. The paper uses a deep linear neural network to learn a good operator. Experimental results on Poisson equations show that the learned operator achieves significant speed-ups. The paper also gives a theoretical analysis of the range of the valid linear operator (a convex open set) and guarantees of generalization for the learned operator.\\n\\nThis is, in general, a good paper. The work is solid and the results promising. Solving PDEs is no doubt an important problem, having broad applications. It will be very meaningful if we can achieve the same accuracy using much less computational power. Here, I have a few questions.\\n\\n1). Why didn\\u2019t you try the nonlinear deep network? Is it merely for computational efficiency?
I expect that nonlinear networks might result in even better estimates of H and further reduce the number of fixed-point iterations, even though each operation of H will be more expensive. There might be some trade-off here. But I would like to see some empirical results and discussions.\\n\\n2). The evaluation is only on Poisson equations, which are known to be easy. Have you tried other PDEs, such as Burger\\u2019s equations? I think your method will be more meaningful for those challenging PDEs, because they will require much more fine-grained grids to achieve a satisfactory accuracy and hence will be much more expensive. It will be great if your method can dramatically improve the efficiency of solving these equations.\\n\\n3). I am a bit confused about the statement of Th 3 --- the last sentence \\u201cH is valid for all parameters f and b if the iterator \\\\psi converges \\u2026\\u201d I think it should be \\u201cfor one parameter\\u201d.\", \"miscellaneous\": \"1)\\tTypo in eq. (7).\\n2)\\tSection 3.3: H(w) should be Hw (for consistency)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting, well-written paper\", \"review\": [\"==Summary==\", \"This paper is well-executed and interesting. It does a good job of bridging the gap between distinct bodies of literature, and is very in touch with modern ML ideas.\", \"I like this paper and advocate that it is accepted. However, I expect that it would have higher impact if it appeared in the numerical PDE community. I encourage you to consider this conference paper to be an early version of a more comprehensive piece of work to be released to that community.\", \"My main critique is that the paper needs to do a better job of discussing prior work on data-driven methods for improving PDE solvers.\", \"==Major comments==\", \"You need to spend considerably more space discussing the related work on using ML to improve PDE solvers. Most readers will be unfamiliar with this. You should explain what they do and how they are qualitatively different than your approach.\", \"You do a good job in 3.3 of motivating what H is doing. However, you could do a better job of motivating the overall setup of (6). Is this a common formulation? If so, where else is it used?\", \"I\\u2019m surprised that you didn\\u2019t impose some sort of symmetry conditions on the convolutions in H, such as that they are invariant to flips of the kernel. This is true, for example, for the linearized Poisson operator.\", \"==Minor comments==\", \"Valid iterators converge to a valid solution. However, can\\u2019t there be multiple candidate solutions? How would you construct a method that would be able to find all possible solutions?\", \"In (9), why do you randomize the value of k? Wouldn\\u2019t you want to learn a different H depending on what computation budget you knew you were going to use downstream when you deploy the solver?\", \"In future work it may make sense to learn a different H_i for each step i of the iterative solver.\", \"When introducing iterative solvers, you leave it as an afterthought that b will be enforced by clamping values at the end of each iteration. This seems like a pretty important design decision. Are there alternatives that guarantee that u satisfies b always, rather than updating u in such a way that it violates G and then clamping it back?
Along these lines, it might be useful to pose (2) with additional terms in the linear system to reflect G.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A linear method for speeding up PDE solvers with good empirical performance\", \"review\": \"Summary:\\nThe authors propose a method to learn and improve problem-tailored PDE solvers from existing ones. The linear updates of the target solver, specified by the problem's geometry and boundary conditions, are computed from the updates of a well-known solver through an optimized linear map. The obtained solver is guaranteed to converge to the correct solution and\\nachieves a considerable speed-up compared to solvers obtained from alternative state-of-the-art methods.\", \"strengths\": \"Solving PDEs is an important and hard problem and the proposed method seems to consistently outperform the state of the art. I've liked the idea of learning a speed-up operator to improve the performance of a standard solver and adapt it to new boundary conditions or problem geometries. The approach is simple enough to allow a straightforward proof of correctness.\", \"weaknesses\": \"The method seems to rely strongly on the linearity of the solver and its deformation (to guarantee the correctness of the solution). The operator H is a matrix of finite dimensions and it is not completely clear to me what the role of the multi-layer parameterization is. Based on a grid approach, the idea applies only to one- or two-dimensional problems.\", \"questions\": [\"in the introduction, what does it mean that generic solvers are effective 'but could be far from optimal'? Does this refer to the convergence speed or to the correctness of the solution?\", \"other deep learning approaches to PDE solving are mentioned in the introduction. Is the proposed method compared to them somewhere in the experiments?\", \"given a PDE and some boundary conditions, is there any known method to choose the linear iterator T optimally? For example, since u* is the solution of a linear system, could one choose the updates to be the gradient descent updates of a least-squares objective such as || A u - f||^2?\", \"why is the deep network parameterization needed? Since no nonlinearities are present, isn't this equivalent to fixing the rank of H?\", \"given the 'interpretation of H' sketched in Section 3.3, is there any relationship between the proposed accelerated update and the update of second-order coordinate descent methods (like Newton or quasi-Newton)?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Differences and problems with Deep Multigrid\", \"comment\": \"Hello!\\n\\nThank you for pointing out this unpublished but relevant work! We were not aware of it and we will certainly add a reference. Deep Multigrid has some surface resemblance to our method, but there are major differences:\\n\\n(1) Generalization: Deep Multigrid does not generalize to different grid sizes or different geometries. The learned prolongation and restriction operators are fully-connected layers, which need retraining for each grid size and geometry.
Our model generalizes (both by design and in experiments) to very different grid sizes and geometries after training on a single example (Figure 1).\\n\\n(2) Usability: Deep Multigrid only experimented on 1D grids, with no proposed generalization to 2D or 3D geometries.\\nOn a 1D grid, the matrix A is tridiagonal, and Au = f can be solved exactly by Gaussian elimination in O(n) time [1]. In contrast, our method applies without modification to any dimension (by using d-dimensional convolution).\\n\\n(3) Flexibility: Deep Multigrid only learns prolongation and restriction operators. Our U-Net model is end-to-end: it implicitly includes smoothing, prolongation, and restriction. Our approach is simpler yet more general.\\n\\n(4) Experiments: Deep Multigrid does not compare runtime with state-of-the-art solvers. Our method is faster (in wall-clock time and number of operations) than both Jacobi Multigrid and FEniCS.\\n\\n\\n[1] Randall J LeVeque. Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems, volume 98. Siam, 2007.\"}", "{\"comment\": \"Hello!\\nA similar idea is proposed in \\\"Deep Multigrid: learning prolongation and restriction matrices\\\" (https://arxiv.org/abs/1711.03825), where the authors optimize the parameters of the multigrid method with a neural network reformulation of the multigrid method and an automatic differentiation tool. Also, almost the same objective function to measure parameter quality is used, but with an explanation of how this objective relates to the spectral radius of the iteration matrix.\", \"title\": \"Deep Multigrid\"}" ] }
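As a concrete illustration of the update rules debated in this thread, the sketch below contrasts a plain gradient step on 1/2 u^T A u - u^T f with a learned linear correction applied to the residual. It is schematic, not the authors' implementation: H appears here as a dense matrix, whereas in the paper it is a deep linear convolutional network, and the baseline step matches the Jacobi update only when A is scaled to have a unit diagonal.

```python
import numpy as np

def baseline_step(u, A, f, eta=1.0):
    # Gradient step on 1/2 u^T A u - u^T f; with eta = 1 and unit-diagonal
    # A this is the Jacobi-style update discussed in the author responses.
    return u - eta * (A @ u - f)

def learned_step(u, A, f, H):
    # Schematic version of the paper's iterator: the standard linear update
    # plus a learned linear operator H applied to the current residual.
    residual = f - A @ u
    return baseline_step(u, A, f) + H @ residual
```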
BJgTZ3C5FX
Generative model based on minimizing exact empirical Wasserstein distance
[ "Akihiro Iohara", "Takahito Ogawa", "Toshiyuki Tanaka" ]
Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property. One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance. In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network. Experiments on the MNIST dataset show that our method is significantly stable to converge, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images. Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method. In addition, the proposed method enables more flexible generative modeling than WGAN.
[ "Generative modeling", "Generative Adversarial Networks (GANs)", "Wasserstein GAN", "Optimal transport" ]
https://openreview.net/pdf?id=BJgTZ3C5FX
https://openreview.net/forum?id=BJgTZ3C5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "yFrmYiBr1gF", "Syg3jNhEJE", "rygLK5fqnX", "BklBblb52Q", "rylIWjW827" ], "note_type": [ "comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1589290850983, 1543976099949, 1541184126038, 1541177341427, 1540918014414 ], "note_signatures": [ [ "~Alexander_Mathiasen2" ], [ "ICLR.cc/2019/Conference/Paper1213/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1213/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1213/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1213/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"> Indeed, it requires an exponential number of samples to even differentiate between two batches of the same Gaussian [4].\\nAre you referring to Lemma 1?\", \"title\": \"Exponential Number of Samples to Differentiate Batches of same Gaussian.\"}", "{\"metareview\": \"This method proposes a primal approach to minimizing Wasserstein distance for generative models. It estimates WD by computing the exact WD between empirical distributions.\\n\\nAs the reviewers point out, the primal approach has been studied by other papers (which this submission doesn't cite, even in the revision), and suffers from a well-known problem of high variance. The authors have not responded to key criticisms of the reviewers. I don't think this work is ready for publication in ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"lack of novelty, variance in high dimensions\"}", "{\"title\": \"promising results and idea\", \"review\": \"The paper proposed to use the exact empirical Wasserstein distance to supervise the training of generative model. To this end, the authors formulated the optimal transport cost as a linear programming problem. 
The quantitative results (empirical Wasserstein distance) show the superiority of the proposed methods.\", \"my_concerns_come_from_both_theoretical_and_experimental_aspects\": \"The linear-programming problem Eq.(4)-Eq.(7) has been studied in the existing literature.\\nThe contribution is about using this existing method to supervise a standard neural-network-parametrized generator, so I am not quite sure if this contribution is sufficient for an ICLR submission.\\nIn such a case, further experimental or theoretical study of the convergence of Algorithm 1 seems important to me.\\n \\nAs to the experiments, firstly, EWD seems to be a little bit biased since EWD is literally used to supervise the training of the proposed method.\\nOther quantitative metric studies could help justify the improvement.\\nAlso, given that the paper brings the WGAN family into the comparison, large-scale image datasets should be included, since WGANs have already demonstrated their success.\\n \\nLast things: missing parentheses in step 8 of Algorithm 1 and an over-long URL in the references.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Review for \\\"Generative model based on minimizing exact empirical Wasserstein distance\\\".\", \"review\": \"The authors propose to estimate and minimize the empirical Wasserstein distance between batches of samples of real and fake data, then calculate a (sub)gradient of it with respect to the generator's parameters and use it to train generative models.\\n\\nThis is an approach that has been tried [1,2] (even with the addition of entropy regularization) and studied [1-5] extensively. It doesn't scale, and for extremely well-understood reasons [2,3]. The bias of the empirical Wasserstein estimate requires an exponential number of samples as the number of dimensions increases to reach a certain amount of error [2-6]. Indeed, it requires an exponential number of samples to even differentiate between two batches of the same Gaussian [4]. On top of these arguments, the results do not suggest any new finding or that these theoretical limitations would not be relevant in practice. If the authors have results and design choices making this method work in a high-dimensional problem such as LSUN, I will revise my review.\\n\\n[1]: https://arxiv.org/abs/1706.00292\\n[2]: https://arxiv.org/abs/1708.02511\\n[3]: https://arxiv.org/abs/1712.07822\\n[4]: https://arxiv.org/abs/1703.00573\\n[5]: http://www.gatsby.ucl.ac.uk/~gretton/papers/SriFukGreSchetal12.pdf\\n[6]: https://www.sciencedirect.com/science/article/pii/0377042794900337\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Title claim seems wrong\", \"review\": \"The paper \\u2018Generative model based on minimizing exact empirical Wasserstein distance\\u2019 proposes\\na variant of Wasserstein GAN based on a primal version of the Wasserstein loss rather than relying\\non the classical Kantorovich-Rubinstein duality as first proposed by Arjovsky in the GAN context.\\nComparisons with other variants of Wasserstein GAN are proposed on MNIST.\\n\\nI see little novelty in the paper. The derivation of the primal version of the problem is already\\ngiven in\\nCuturi, M., & Doucet, A. (2014, January).
Fast computation of Wasserstein barycenters. In ICML (pp. 685-693).\\n\\nUsing optimal transport computed on batches rather than on the whole dataset is already done in (among\\nothers)\\n Genevay, A., Peyr\\u00e9, G., & Cuturi, M. (2017). Learning generative models with Sinkhorn divergences. AISTATS\\n Damodaran, B. B., Kellenberger, B., Flamary, R., Tuia, D., & Courty, N. (2018). DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation. ECCV\\n\\nAlso, the claim that the exact empirical Wasserstein distance is optimized is not true. The gradients, evaluated on\\nbatches, are biased. Unfortunately, the Wasserstein distance does not enjoy U-statistics similar to MMD. It is very\\nwell described in the paper (Section 3): https://openreview.net/pdf?id=S1m6h21Cb\\n\\nComputing the gradients of Wasserstein on batches might be seen as a kind of regularization, but it remains to be\\nproved and discussed.\\n\\nFinally, the experimental validation appears insufficient to me (as only MNIST or toy datasets are considered).\", \"typos\": \"Eq (1) and (2): when taken over the set of all Lipschitz-1 functions, the max should be a sup\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
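For reference, the exact empirical optimal transport cost at the center of these reviews can be computed by solving the linear program over the transport plan directly. The sketch below does this with SciPy for two equal-size batches under a squared-Euclidean cost; it is illustrative only (the authors' cost function and solver may differ) and it makes the reviewers' scaling objection tangible, since the LP has n^2 variables and the batch estimate is biased in high dimensions.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_ot_cost(x, y):
    # x, y: (n, d) batches; cost is squared Euclidean distance.
    n = x.shape[0]
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # n x n cost matrix
    # Equality constraints: rows and columns of the n x n transport plan
    # (flattened row-major) must each sum to the uniform weight 1/n.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0  # i-th row of the plan
        A_eq[n + i, i::n] = 1.0           # i-th column of the plan
    b_eq = np.full(2 * n, 1.0 / n)
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.fun
```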
rklhb2R9Y7
Reinforced Imitation Learning from Observations
[ "Konrad Zolna", "Negar Rostamzadeh", "Yoshua Bengio", "Sungjin Ahn", "Pedro O. Pinheiro" ]
Imitation learning is an effective alternative approach to learn a policy when the reward function is sparse. In this paper, we consider a challenging setting where an agent has access to a sparse reward function and state-only expert observations. We propose a method which gradually balances between the imitation learning cost and the reinforcement learning objective. Built upon an existing imitation learning method, our approach works with state-only observations. We show, through navigation scenarios, that (i) an agent is able to efficiently leverage sparse rewards to outperform standard state-only imitation learning, (ii) it can learn a policy even when learner's actions are different from the expert, and (iii) the performance of the agent is not bounded by that of the expert due to the optimized usage of sparse rewards.
[ "imitation learning", "state-only observations", "self-exploration" ]
https://openreview.net/pdf?id=rklhb2R9Y7
https://openreview.net/forum?id=rklhb2R9Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryl6OpaZe4", "Sygig_e9RQ", "SJxH9o-fRm", "S1gXhcwkCm", "H1l-XOhNam", "BkxIVC4fTm", "ryeub04zpQ", "S1eBU1kkaX", "ByeSUJ03nX", "SkeVxQuo27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544834420837, 1543272435198, 1542753164966, 1542580906660, 1541879832707, 1541717549794, 1541717504272, 1541496652756, 1541361485125, 1541272299657 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1212/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1212/Authors" ], [ "ICLR.cc/2019/Conference/Paper1212/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1212/Authors" ], [ "ICLR.cc/2019/Conference/Paper1212/Authors" ], [ "ICLR.cc/2019/Conference/Paper1212/Authors" ], [ "ICLR.cc/2019/Conference/Paper1212/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1212/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1212/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes to combine rewards obtained through IRL from rewards coming from the environment, and evaluate the algorithm on grid world environments. The problem setting is important and of interest to the ICLR community. While the revised paper addresses the concerns about the lack of a stochastic environment problem, the reviewers still have major concerns regarding the novelty and significance of the algorithmic contribution, as well as the limited complexity of the experimental domains. As such, the paper does not meet the bar for publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}", "{\"title\": \"Paper update\", \"comment\": \"We would like to thank for all reviews again. We updated the paper based on them.\\n\\nThe main update is the description of (and the results on) the partially observable environment. We wanted to design the new environment to be different from the one presented while being able to perform the same set of experiments to check that our findings are general. \\n\\nWe decided to use another grid world with agents having same action spaces considered before. However, the new environment differs a lot. First of all, it is partially observable and just a small subgrid (5x5) around the agent is inputted to the models (both the policy and the discriminator). The map is larger and consist of four small rooms with randomly located passages between them. Hence, the agent has to explore and get to know the map to reach the goal. That makes the use of LSTM for policy necessary.\\n\\nDespite all these differences, the results for the new environment are coherent with the previous results and justify the importance of self-exploration. They further confirm our intuition and provide the answer which of the discriminator inputs should be used based on the relation between the expert and the learner action spaces.\\n\\nAlthough the experiments conducted in the first version of the paper show the main properties of the proposed algorithm, we agree that the extra experiments added in the new version are important since the methods should generalise to different problems. We hope that the new set of experiments address the main concerns of the reviewers. 
Additionally, we have cleaned up our source code, which will be released upon acceptance, so our results are easily reproducible.\"}", "{\"title\": \"Response\", \"comment\": \"We thank you for your feedback.\\nWe agree that three of the references mentioned are very relevant, and they are cited extensively throughout the paper (they are also baselines which we use in the experimental section). Hence, we believe that we show enough respect for these works.\\n\\nWe fully agree (and acknowledge in the paper) that using state-only demonstrations in the GAIL framework is not a novelty. However, to the best of our knowledge, no systematic experiments studying state-only demonstrations have been done, especially for the case of different action spaces.\\nWe include baselines that are a straightforward implication of previous work (the aforementioned ones, duly cited in the paper) and we compare them to what we propose (RTGD, ATD, self-exploration). We show quantitatively how learning is achieved under different teacher-student situations.\\n\\nThird-Person Imitation Learning focuses mostly on the domain adaptation viewpoint issue, while assuming the same action spaces between student and teacher. However, we understand this is an important application of GAIL and will mention it in the Introduction.\\n\\nThe reward augmentation trick in InfoGAIL adds a surrogate reward with a fixed weight for the whole training procedure. We do not agree that self-exploration is a variant of that, since in our case the weight (zero or one) dynamically changes and is a part of the agent's input. The environment reward is also assumed to be sparse, and in our experiments it is always added only once, at the end of the trial (the last environment reward).\\nOur understanding of InfoGAIL makes us believe that the work focuses on a different aspect of imitation learning. However, similar to the Third-Person Imitation Learning work, InfoGAIL is a well-known paper that uses GAIL and we will consider mentioning it.\"}", "{\"comment\": \"I think this paper basically makes an incremental follow-up to [Kang et.al., 2018] and [Li et.al., 2017]. However, it seems that the authors do not show enough respect for these works. From the perspective of [Kang et.al., 2018], this paper simply changes GAIL over (s, a) into GAIL over (s_t, ..., s_t+n), which has been repeatedly proposed in [Bradly et.al., 2016][Josh et.al., 2017][Faraz et.al., 2018]. On the other hand, from [Li et.al., 2017], we could also consider this paper a variant of the reward augmentation trick.\\n\\nOverall, I think the novelty of this paper is quite limited and may not meet ICLR requirements.\\n\\n[Kang et.al., 2018] Policy Optimization with Demonstrations, Kang et.al., in ICML, 2018\\n[Li et.al., 2017] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations, Li et.al., in NIPS, 2017\\n[Bradly et.al., 2016] Third-Person Imitation Learning, Bradly et.al., in ICLR, 2017\\n[Josh et.al., 2017] Learning human behaviors from motion capture by adversarial imitation, Josh et.al., arXiv 1707.02201\\n[Faraz et.al., 2018] Generative Adversarial Imitation from Observation, Faraz et.al., arXiv 1807.06158\", \"title\": \"Some comment on novelty\"}", "{\"title\": \"Initial reply\", \"comment\": \"Thanks for the review and the feedback.\\nWe are especially thankful for raising the topic of exploring the \\\"new\\\" actions, i.e. $A^l \\\\setminus A^e$. Our method, in contrast to those previously presented, does not limit the learner to using the same actions as used in the expert demonstrations.
As shown in our experiments, we were able to successfully train our agent even when the action spaces are disjoint. Considering your specific example (with an 8-way king-move learner and a 4-way expert), we already have a short comment on that in the paper -- the learner on average gets to the goal quicker (with respect to the number of steps), hence it uses \\u201cnew\\u201d actions. We believe that this is an important remark and we will edit the paper to make it more clear and visible.\\nWe did not have any experiments supporting the hypothesis of being immune to domain shift in our setup. That\\u2019s a very interesting direction, but we believe it is a general question for all GAN-based methods and should be a part of another work. However, we would be glad to explore this in future research.\\nWe believe that our idea will work in any setup, but it is especially well-suited for cases when the expert and the learner have different action spaces. The Atari games are created in such a way that all buttons (actions) are important and usually necessary to perform well. However, we have started experimenting on a complex environment, ViZDoom (a 3D POMDP), where the agent is still able to perform well using just a subset of all possible actions. Preliminary results lead to similar conclusions, but we need more time to make any statement in this setting.\\nWe have not shown GAIL for comparison in Fig. 3 and Fig. 4 because the straightforward application of the GAIL method is possible only when the learner's action space is a subset of the expert's action space, which is rarely the case in our experiments.\\nThank you again for providing related work and all the minor remarks. We will update our paper according to them.\"}", "{\"title\": \"Initial reply\", \"comment\": \"We thank you for the constructive comments.\\nWe agree that these results corroborate one's intuition. And this is precisely why we think the results are interesting.\\n\\nAlthough the experiments show the main properties of the proposed algorithm, we also agree that more experiments on different tasks would definitely be helpful.\\nWe explored the method in another gridworld setting, in which the agent does not have access to the whole map, but only partial observations (we use a 5x5 subgrid surrounding the agent). We used the same action spaces for both expert and learner to be able to compare the results directly. Results in the POMDP setting are very similar to the fully-observed setting.\\nWe have started experimenting on a more complex environment, ViZDoom (a 3D POMDP). This environment is much more (computationally) demanding, so conducting a systematic set of experiments (as done for the original case) is more challenging. Preliminary results lead to similar conclusions, but we need more time to make any statement in this setting.\\n\\nWe would appreciate hearing your opinion about choices for additional experiments and suggestions on what we can add to the paper to make it better.\"}", "{\"title\": \"Initial reply\", \"comment\": \"We thank the reviewer for the detailed comments.\\n\\nWe fully agree (as acknowledged in the paper) that using state-only demonstrations in the GAIL framework is not new. However, to the best of our knowledge, no systematic experiments studying state-only demonstrations have been done. We propose baseline methods that are a straightforward implication of previous works and we compare them to our methods (RTGD, ATD, self-exploration).
We show quantitatively how all the presented methods perform in different situations.\\n\\nWe agree that in deterministic environments consecutive states encode the action performed. The choice of a deterministic environment was made on purpose. We believe that this choice makes the problem more challenging, because the discriminator is more likely to \\u2018decode\\u2019 the action spaces used by the two agents and easily discriminate based on that -- which leads to providing non-informative rewards (rewards that are not linked to the policy).\", \"this_intuition_is_confirmed_in_the_paper\": \"CSD performs badly when the action spaces are disjoint. We report results for methods that cannot decode actions (RTGD, ATD and SSD) and they perform better than CSD (unless the action spaces are not disjoint, in which case all methods perform similarly). We will modify the paper to make this more clear.\\n\\nAlthough the experiments show the main properties of the proposed algorithm, we also agree that more experiments on different tasks would definitely be helpful.\\nWe explored the method in another gridworld setting, in which the agent does not have access to the whole map, but only partial observations (we use a 5x5 subgrid surrounding the agent). We used the same action spaces for both expert and learner to be able to compare the results directly. Results in the POMDP setting are very similar to the fully-observed setting.\\nWe have started experimenting on a more complex environment, ViZDoom (a 3D POMDP). This environment is much more (computationally) demanding, so conducting a systematic set of experiments (as done for the original problem) is more challenging. Preliminary results lead to similar conclusions, but we need more time to make any statement in this setting.\\nThe ViZDoom environment dynamics implement inertia, and hence the previous action cannot be inferred from consecutive states. Also, the behavior of other agents (bots) is non-deterministic.\\n\\nWe would appreciate any suggestions on the experimental setting that could improve our work.\"}", "{\"title\": \"heuristic combining environment rewards with IRL-style rewards\", \"review\": \"The draft proposes a heuristic combining environment rewards with IRL-style rewards recovered from expert demonstrations, seeking to extend the GAIL approach to IRL to the case of mismatching action spaces between the expert and the learner. The interesting contribution is, in my opinion, the self-exploration parameter that reduces the reliance of learning on demonstrations once they have been learned sufficiently well.\", \"questions\": [\"In general, it's known that behavioral cloning, of which this work seems to be an example insomuch as it learns state distributions that are indistinguishable from the expert ones, can fail spectacularly because of the distribution shift (Kaariainen@ALW06, Ross&Bagnell@AISTATS10, Ross&Bagnell@AISTATS11). Can you comment on whether GAN-based methods are immune or susceptible to this?\", \"Would this work for tasks where the state-space has to be learned together with the policy? E.g. image captioning tasks or Atari games.\", \"Is it possible to quantify the ease of learning or the frequency of use of the \\\"new\\\" actions, i.e. $A^l \\\\setminus A^e$? Won't learning these actions effectively be as difficult as RL with sparse rewards?
Say, in a grid world where 4-way diagonal moves allow reaching the goal faster, the learner is 8-way (king moves), demonstrations come from a 4-way expert, rewards are sparse, each step receives a -1 reward and the final goal reward is large and positive -- does the learner's final policy actually use the diagonals, and when?\"], \"related_work\": [\"Is it possible to make a connection to (data or policy) aggregation methods in IL? Such methods (e.g. Chang et al.@ICML15) can also sometimes learn policies better than the expert.\"], \"experiments\": [\"why wasn't GAIL evaluated in Fig. 3 and Fig. 4?\"], \"minor\": [\"what's BCE in algorithm 1?\", \"Fig.1: \\\"the the\\\"\", \"sec 3.2: but avoid -> but avoids\", \"sec 3.2: be to considers -> be to consider\", \"sec 3.2: any hyperparameter -> any hyperparameters\", \"colors in Fig 2 are indistinguishable\", \"Table 1: headers saying which method is prior work and which is contribution would be helpful\", \"Fig. 3: if possible try to find a way of communicating the relation of action spaces between expert and learner (e.g. a subset of/superset of). Using the same figure to depict self-exploration makes it complicated to analyse.\", \"sec 3.2: wording in the last paragraph on p.4 (positive scaling won't _make_ anything positive if it wasn't before)\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"State-only demonstrations but in deterministic environments\", \"review\": \"The paper proposes to combine expert demonstrations with reinforcement learning to speed up the learning of control policies. To do so, the authors modify the GAIL algorithm and create a composite reward function as a linear combination of the extrinsic reward and the imitation reward. They test their approach on several toy problems (small grid worlds).\\n\\nThe idea of combining the GAIL reward and the extrinsic reward is not really new and quite straightforward, so I wouldn't consider this a contribution. Also, using state-only demonstrations in the framework of GAIL is not new, as the authors also acknowledge in the paper. Finally, I don't think the experiments are convincing since the chosen problems are rather simple.\\n\\nBut my main concern is that the major claim of the authors is that they don't use expert actions as input to their algorithm, but only sequences of states. Yet they test their algorithm on deterministic environments. In such a case, two consecutive states essentially encode the action, and all the information is there. Even if the action sets are different in some of the experiments, they are still very close to each other, and the encoding of the expert actions in the state sequence is probably helping a lot.
So I would like to see how this method works in stochastic environments.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"First review\", \"review\": \"This paper proposes some new angles on the problem of imitation learning from state-only observations (not state-action pairs, which are more expensive).\\nSpecifically, the paper proposes \\\"self-exploration\\\", in which it mixes the imitation reward with the environment reward from the MDP itself in a gradual manner, guided by the rate of learning.\\nIt also proposes a couple of variants of imitation rewards, RTGD and ATD in particular, which formulate the imitation rewards for random or exhaustive pairs of states in the observation data, as opposed to the rewards proposed in existing works (CSD, SSD), which are based on either consecutive or single states and which constitute the baseline methods for comparison.\\nThe authors then perform a systematic experiment using a particular navigation problem on a grid world, and inspect under what scenarios (e.g. when the action spaces of the expert and learner are the same, disjoint or in a containment relationship) which of the methods perform well relative to the baselines.\\nSome moderately interesting observations are reported, which largely confirm one's intuition about when these methods may perform relatively well.\\nThere is not very much theoretical support for the proposed methods per se; the paper is mostly an empirical study of these competing reward schemes for imitation learning.\\nThe empirical evaluation is done in a single domain/problem, and in that sense it is questionable how far the observed trends in the relative performance of the competing methods generalize to other problems and domains.\\nAlso, the proposed ideas are all reasonable but relatively simple and unsurprising, casting some doubt on the extent to which the paper contributes to the state of understanding of this area of research.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
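A schematic of the reward mixing these reviews argue about, combining a state-only GAIL-style discriminator reward with the sparse environment reward gated by a self-exploration flag. The names and the exact shaping are illustrative assumptions, not the authors' code.

```python
import numpy as np

def learner_reward(state, next_state, env_reward, discriminator, self_explore):
    # The discriminator sees only states (a consecutive pair here, as in
    # the CSD variant); -log(1 - D) is the usual GAIL reward shaping.
    d = discriminator(state, next_state)
    imitation_reward = -np.log(1.0 - d + 1e-8)
    # When self-exploration is switched on, the sparse environment reward
    # (e.g. received only at the end of an episode) is added as well.
    return imitation_reward + (env_reward if self_explore else 0.0)
```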
HyxnZh0ct7
Meta-learning with differentiable closed-form solvers
[ "Luca Bertinetto", "Joao F. Henriques", "Philip Torr", "Andrea Vedaldi" ]
Adapting deep networks to new concepts from a few examples is challenging, due to the high computational requirements of standard fine-tuning procedures. Most work on few-shot learning has thus focused on simple learning techniques for adaptation, such as nearest neighbours or gradient descent. Nonetheless, the machine learning literature contains a wealth of methods that learn non-deep models very efficiently. In this paper, we propose to use these fast convergent methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data. This requires back-propagating errors through the solver steps. While normally the cost of the matrix operations involved in such a process would be significant, by using the Woodbury identity we can make the small number of examples work to our advantage. We propose both closed-form and iterative solvers, based on ridge regression and logistic regression components. Our methods constitute a simple and novel approach to the problem of few-shot learning and achieve performance competitive with or superior to the state of the art on three benchmarks.
[ "few-shot learning", "one-shot learning", "meta-learning", "deep learning", "ridge regression", "classification" ]
https://openreview.net/pdf?id=HyxnZh0ct7
https://openreview.net/forum?id=HyxnZh0ct7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1eEpO66S4", "ByeHwtpR4E", "BkxDPnDZMV", "H1eyie3leV", "BkgTUolegV", "SJeA_NNyxE", "r1l23D05y4", "H1llepity4", "SyxWU3iYkE", "HyeglRQ_JE", "BJg6awLwk4", "rygq4xgwkN", "r1e-G4YL1N", "r1gGLd-507", "Syg19STh6Q", "SklLz3csp7", "H1e3St5u67", "SyedD2rXaQ", "Byg2xy8MpX", "Hkgvq5AZpm", "rkevXdCb6m", "BkljQHC-T7", "B1lS7rnxT7", "Syegwm5yaQ", "r1xPm1Kah7", "SJlghKO937", "r1ggxa85nm", "SkeX8K6thm" ], "note_type": [ "official_comment", "official_comment", "comment", "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1550862524338, 1549879645025, 1546906719039, 1544761494564, 1544715093104, 1544664182159, 1544378291927, 1544301799639, 1544301640915, 1544203752502, 1544148932756, 1544122418133, 1544094729345, 1543276618355, 1542407559156, 1542331406491, 1542134084261, 1541786719761, 1541721843523, 1541692047126, 1541691423070, 1541690659324, 1541616925225, 1541542743787, 1541406494547, 1541208488090, 1541201128458, 1541163339103 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer2" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "ICLR.cc/2019/Conference/Paper1211/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1211/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Camera-ready.\", \"comment\": \"The ICLR'19 camera-ready version of the paper has been uploaded.\", \"code_available_at_https\": \"//github.com/bertinetto/r2d2\"}", "{\"title\": \"Thank you for your contribution\", \"comment\": \"We agree on the importance of making the research in ML (or any field) accessible and reproducible - we are glad that initiatives such as the reproducibility challenge exist.\\n\\nWe are also glad that the authors were able to reproduce our main findings, despite not having had access to our implementation and using a different framework (we used PyTorch and we are working on finalizing the code release).\", \"in_response_to_specific_details_of_the_report\": \"- We agree that the details of stride and padding amounts are missing, and we will update the paper accordingly. 
This should also resolve the difference in feature dimension between our paper and the replication.\n- We believe that our sentence \u201cTraining is stopped when the error on the meta-validation set does not decrease meaningfully for 20,000 episodes\u201d has been misinterpreted, as the authors say: \u201cwe opted to meta-train for a fixed 20k iterations\u201d.\nWhat we meant is that we performed early stopping if the error does not decrease for a period of 20k episodes, not that we train for 20k episodes in total.\nClearly, in this way the total number of training episodes varies, but we observed that training generally stops around 60k-80k episodes. We will make this point clearer in the camera ready. This also means that our results were obtained with longer training than in the replication.\n- Regarding the sentence \u201cdifferent neural architectures should be taken into consideration when comparing results\u201d and the direct comparison with MAML in general:\nthis comment refers to the fact that we did not report results on a 32-channel embedding in our experiments, which is instead what MAML uses.\nHowever, we believe that our experiments already show that performance is not simply the result of a trivial increase in capacity.\nTo demonstrate that, we reported both a) results of our method on a 64-channel embedding and b) results of three representative baselines (protonets, MAML and GNN) with our embeddings (the * in our tables).\"}", "{\"comment\": \"We have carried out a reproducibility analysis of this interesting paper on meta-learning. Some parameters and training methodologies, which would be required for full reproducibility, are not present in the manuscript at the time of writing:\n- stride of the convolutional filters\n- padding of the convolutional filters\n- a clear stopping criterion (<-> \\\"the error on the meta-validation set does not decrease meaningfully for 20k episodes\\\"),\n\nHowever, making reasonable assumptions, we were able to reproduce the most important part of the paper (R2D2) in TensorFlow and achieve similar results. We did not reproduce the LRD2 part of the paper, as we wanted to focus on the truly differentiable closed-form solver (R2D2). Most importantly, we were able to reproduce the increase in performance of the proposed method (with the given architecture) over some reproduced baseline results, which supports the conclusions in the original paper.\n\nThe different neural network architectures should be taken into consideration when comparing results. For example, the MAML baseline of Finn et al. (2017) uses four convolutional blocks with [32, 32, 32, 32] filters, whereas this paper's four blocks employ a [96, 192, 384, 512] scheme. Because of this, we implemented R2D2 with both the architecture mentioned in the paper and the MAML baseline architecture. In our reproducibility report we show that when using the exact same baseline architecture as MAML, and the standard training procedure, the improvement in performance of the proposed method is not clear.\", \"our_full_reproducibility_report_is_available_at\": \"https://github.com/reproducibility-challenge/iclr_2019/blob/c53e6c1ea8d0e158f66b7d70681fa6ecde6a4f2b/papers/LCAX-HyxnZh0ct7/LCAX.pdf\", \"our_codebase\": \"https://github.com/ArnoutDevos/r2d2\", \"title\": \"ICLR 2019 Reproducibility Challenge key findings\"}", "{\"metareview\": \"The reviewers disagree strongly on this paper. Reviewer 2 was the most positive, believing it to be an interesting contribution with strong results. 
Reviewer 3, however, was underwhelmed by the results. Reviewer 1 does not believe that the contribution is sufficiently novel, seeing it as too close to existing multi-task learning approaches.\n\nAfter considering all of the discussion so far, I have to agree with reviewer 2 on their assessment. Much of the meta-learning literature involves changing the base learner *for a fixed architecture* and seeing how it affects performance. There is a temptation to chase performance by changing the architecture, adding new regularizers, etc., and while this is important for practical reasons, it does not help to shed light on the underlying fundamentals. This is best done by considering carefully controlled and well understood experimental settings. Even still, the performance is quite good relative to popular base learners.\n\nRegarding novelty, I agree it is a simple change to the base learner, using a technique that has been tried before in other settings (linear regression as opposed to classification); however, its use in a meta-learning setup is novel in my opinion, and the new experimental comparison (regression on top of pre-trained CNN features) helps to demonstrate the utility of its use in the meta-learning setting.\n\nWhile the novelty can certainly be debated, I want to highlight two reasons why I am opting to accept this paper: 1) simple and effective ideas are often some of the most impactful. 2) sometimes taking ideas from one area (e.g., multi-task learning) and demonstrating that they can be effective in other settings (e.g., meta-learning) can itself be a valuable contribution. I believe that the meta-learning community would benefit from reading this paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A closed form solver for the base learner is new in the meta-learning literature, and the experiments are sufficiently carried out to show its effectiveness.\"}", "{\"title\": \"We were responding to the claim that these papers had small novelty and thus our own as well, which we disagree with\", \"comment\": \"1) We wrote: \u201c\u201c[multi-task learning] is different to our work, and in general to all of the previous literature on meta-learning applied to few-shot classification (e.g. Finn et al. 2017, Ravi & Larochelle 2017, Vinyals et al. 2016, etc). 
Notably, these methods and ours take into account adaptation *already during the training process*, which requires back-propagating errors through the very fine-tuning process.\u201d\u201d\n\n2) R1 answered with: \u201c\u201cMerely because some other paper also had small novelty and got accepted in the past, I cannot see why this paper should also get accepted\u201d\u201d\n\n3) We then observed that *R1 did not refute any of our points of rebuttal* (long answer in this thread) and seems to be dismissive of the above papers, which are widely accepted by the community.\n\n> \u201c\u201c However, using a multi-task technique in a meta-learning setting cannot be treated as a novel or original contribution.\u201d\u201d\nAgain, this is not what we do - we amply addressed this point both on OpenReview (last two answers to the reviewer) and in the paper.\n\nWe would like to repeat that if this were true, the baseline experiment we described (applying ridge regression in the manner that the reviewer refers to as standard) would not have been possible, since our method and the baseline would then be the same (which they are not -- both in methodology and results).\"}", "{\"comment\": \"The 3 meta-learning papers developed new techniques and/or models for meta-learning (which have never been proposed or used in multi-task learning), while this paper applies an existing multi-task learning technique in the meta-learning setting. The contributions in these two cases are very different. I think it is misleading to indicate that the contribution of this paper is as novel as the 3 meta-learning papers.\n\nIt is fine to apply a multi-task learning technique to the meta-learning problem. To some extent, meta-learning can be explained as a generalization of multi-task learning, in the sense that meta-learning applies to any set of tasks sampled from a certain *task distribution*, while the set of tasks in multi-task learning is fixed. They both need knowledge transfer between different tasks. However, using a multi-task technique in a meta-learning setting cannot be treated as a novel or original contribution.\", \"title\": \"It is misleading to indicate that the paper is as novel as the 3 meta-learning papers\"}", "{\"title\": \"3 papers with hundreds of citations (Finn et al., Ravi & Larochelle, Vinyals et al.) cannot be dismissed as \u201csome other paper also had small novelty\u201d\", \"comment\": \"The reviewer has not refuted any of the points we made above. Namely:\n\n- That meta-learning approaches (like ours) back-propagate errors through the fine-tuning process, a major departure from standard multi-task/transfer learning.\n- That *not doing so* incurs a large performance penalty, as demonstrated by our experiments.\n\nWe invite the reviewer to address these points, rather than just reiterate a subjective judgment over the value of meta-learning. While we respect this opinion, our paper cannot be rejected based solely on the reviewer\u2019s opinion that meta-learning papers are not novel in general (compared to multi-task learning).\"}", "{\"title\": \"I respectfully disagree with Reviewer #2.\", \"comment\": \"Merely combining ridge regression (trivial, and nothing novel) inside meta-learning is not sufficiently novel in my opinion.\n\nIn essence we agree to disagree. 
I request the AC to make a decision based on both our inputs.\"}", "{\"title\": \"I still feel the novelty is very small (I'm reviewer #1)\", \"comment\": \"I disagree with Reviewer #2 and the authors about the novelty. The delta from the simple multi-task learning approach of e.g. Caruana '93 is extremely small -- the same algorithms are trivially extended to deal with meta-learning. The mere fact of using closed-form ridge regression in this setting does not feel like a sufficient contribution to warrant an ICLR paper to this reviewer. Merely because some other paper also had small novelty and got accepted in the past, I cannot see why this paper should also get accepted with minimal novel contributions.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for pointing us to this interesting paper! We agree that methods with limited inductive bias such as protonets are attractive, and there is indeed a good case for their performance scaling better with computation and data.\nWe are looking forward to trying out the proposed testbed. One possible advantage of using our R2-D2 with the deeper architectures of their setup is that we can concatenate activations from multiple layers together without increasing the computational burden of the base-learner, thanks to the Woodbury identity.\"}", "{\"comment\": \"I understand your points. Overall, I like the idea of using a closed-form base learner, which also demonstrates good performance when the backbone network is shallow. However, as a practitioner, I may not adopt the proposed method for now.\n\nIn my opinion, meta-learning is about learning a data-driven inductive bias for few-shot learning. Closed-form regression itself introduces a strong inductive bias which is not learned. Therefore, it is interesting to investigate whether the inductive bias of closed-form regression is needed when the backbone network gets deeper.\", \"as_shown_in_the_figure_3_of_https\": \"//openreview.net/pdf?id=HkxLXnAcFQ , the performance gap between different meta-learning methods diminishes as the backbone gets deeper. One interesting point in the figure is that ProtoNet typically outperforms other methods when the network is deeper.\", \"title\": \"what happens when the backbone network gets deeper\"}", "{\"title\": \"These improvements can be used in any few-shot learning method. We outperform prototypical networks in an apples-to-apples comparison.\", \"comment\": \"We thank the anonymous commenter for pointing out a GitHub repo with improvements. We note that neither data augmentation nor the optimizer schedule are mentioned at all in the associated published paper.\n\nAdditionally, the mentioned improvements are not specific to prototypical networks (or to any method for that matter), and can also be applied to ours. As such, we fail to see how this says anything about the merits of our proposal.\nIn our experiments, we compare against prototypical networks using the same setup as the original paper (Adam optimizer, halving LR every 20 epochs; no data augmentation).\nIn this fair comparison, we outperform it.\nWe would gain no knowledge by showing that \u201cproto-nets with data augmentation and optimizer improvements\u201d (as suggested) beats \u201cR2D2 with no data augmentation\u201d, or that \u201cMAML with a ResNet base\u201d beats \u201cR2D2 with 4 layers\u201d. 
These are apples-to-oranges comparisons which make any scientific conclusion very hard to draw.\n\nInstead, a proper comparison is to take the innovation of each paper -- the prototype layer in proto-nets, and the ridge regression layer in R2D2 -- and compare them, with everything else fixed. This includes data augmentation, as well as network model and initialization.\n\nCarefully controlled comparisons are a core part of the scientific method, and ignoring them will lead to unsubstantiated conclusions.\"}", "{\"comment\": \"Using a closed-form base learner is an interesting idea. However, the results are underwhelming.\", \"as_shown_in_https\": \"//github.com/gidariss/FewShotWithoutForgetting , Prototypical Networks can be quite powerful with some modifications. The modifications include:\n1. add data augmentation\n2. use the SGD with momentum optimizer\n3. scale the output of the Euclidean distance to a suitable range\n\nUsing a 4-Conv backbone with 64 channels, Prototypical Networks are able to achieve remarkable results on MiniImagenet: 1-shot: 53.30% +/- 0.79, 5-shot: 70.33% +/- 0.65.\n\nEven without data augmentation, in my experiments, Prototypical Networks can still get a 5-shot accuracy of around 68.8%.\n\nConsidering this, the proposed method has not yet demonstrated empirical results superior to Prototypical Networks.\", \"title\": \"interesting idea but underwhelming results\"}", "{\"title\": \"To all: Appendix now includes runtime analysis, 1-vs-all experiment and extended discussion\", \"comment\": [\"We would like to thank both reviewers and anonymous commenters for their feedback and participation.\", \"In light of the discussion, the Appendix of the paper has been updated:\", \"Section B offers a runtime analysis which reveals that R2-D2 is several times faster than MAML and almost as fast as a simple (fixed) metric learning method such as prototypical networks, while still allowing per-episode adaptation.\", \"Section A reports the accuracy of the 1-vs-all variant of LR-D2 (as suggested by AnonReviewer2), which is comparable to that of R2-D2.\", \"Finally, Section C extends the discussion sparked here on OpenReview about a) the nature of our contribution and b) the disambiguation from the multi-task learning paradigm.\"]}", "{\"title\": \"Re-using existing components in clever ways for new problems should be encouraged!\", \"comment\": \"I respectfully disagree with the argument regarding lack of novelty. Indeed, the authors did not invent the meta-learning framework, and they did not invent ridge regression. Yet the two of them had not been combined before in this way, and this combination is evidently beneficial. It does seem like a natural idea, but if it was so obvious, how come it wasn't done before?\n\nIt may be tempting to create complicated models to solve a problem, yielding \\\"more novel\\\" solutions. But this seems wrong if the same problem can be solved in a simpler way! I feel strongly that re-using existing components in clever ways that yield good results on new problems is an important contribution and should be encouraged.\"}", "{\"comment\": \"Thanks for your reply!\n\n> Clearly, the overall training framework is not novel and it is common in the few-shot learning literature. 
This is exactly the nature of the contribution of most approaches for few-shot classification.\\n\\nI do not agree with this statement. Simply replacing the base learner and following the standard meta learning/few-shot learning scheme sounds not novel to me. The claimed adaptation capability comes from the standard meta-learning scheme, while the claimed efficiency comes from the closed-form solver. Both are well known and common for years. \\n\\nYes MAML can be explained to be using SGD as base learner (but there are other more intuitive explanations), but they redesigned the learning procedure specifically for SGD, since SGD is a dynamic optimization algorithm rather than a model. Other meta-learning methods either proposes new algorithm or new model structure specifically for few-shot learning. BTW, I do not agree that \\\"some papers propose their methods in the similar way, so our paper also presents contribution of similar novelty\\\".\\n\\n>Our contribution is to use closed-form solvers such as ridge regression to tackle few-shot classification, which is novel in the literature and it is a non-trivial endeavor.\\n\\nUsing closed-form solver for sure can converge faster than using deep neural networks or doing second order optimization (like MAML). But this is an advantage of the existing closed-form solvers. In addition, as mentioned in your reply and paper, the fine-tuning still needs to backpropagate the error from the closed form solver to the pre-trained deep CNN. Together they still compose a deep model whose last layer is the closed-form solver, and each epoch of the fine tuning might need heavy computation (**This has been also pointed out by Reviewer 1**). Then the advantage of using shallow model is not clear: you can always find a good trade-off between fine tuning a large/small backbone model and a complex/simple base learner. Besides, logistic regression does not have a closed-form solver so the title is somehow misleading.\\n\\nOverall, I agree that using closed-form solver of a shallow model might have some practical value, especially in the case when you use a very powerful pre-trained CNN as the backbone model. However, I am not convinced that this is a novel contribution.\", \"title\": \"Using a classical linear model as base learner is not novel to me\"}", "{\"title\": \"This is a comment on a different technique than what we propose\", \"comment\": \"We thank the reviewer for the comment.\\nHowever, we believe that the low score originates from a misunderstanding of our proposal.\\nBelow, we try to bring some clarity by disambiguating between what the reviewer refers to and our method.\\nIf our interpretation of what the reviewer refers to as \\u201centirely common\\u201d is incorrect, it would be great to be provided with at least one reference, so that we can continue the conversation on the same ground.\\n\\n> \\u201cnovel contribution?\\u201d , \\u201ctraining multi-task neural nets with shared feature representation and task specific final layer is probably 20-30 years old by now and entirely common.\\u201d\\n\\u201cIt is also common freeze the feature representation learned from the first set of tasks, and to simply use it for new tasks by modifying the last layer\\u201d\\n\\nWe understand that the reviewer is hinting at the common multi-task scenario with a shared network and task-specific layers (e.g. Caruana 1993). 
He/she also refers to basic transfer learning approaches in which a CNN is first pre-trained on one dataset/task and then adapted to a different dataset/task by simply adapting the final layer(s) (e.g. Yosinski et al. \u201cHow transferable are features in deep neural Networks?\u201d - NIPS 2014; Chu et al. \u201cBest Practices for Fine-tuning Visual Classifiers to New Domains\u201d - ECCVw 2016).\n\nIf so, then this is significantly different to our work, and in general to all of the previous literature on meta-learning applied to few-shot classification (e.g. Finn et al. 2017, Ravi & Larochelle 2017, Vinyals et al. 2016, etc).\nNotably, these methods and ours take into account adaptation *already during the training process*, which requires back-propagating errors through the very fine-tuning process.\n\nWithin this setup, our main contribution is to propose an adaptation procedure based on closed-form regressors, which have the important characteristic of allowing different models for different episodes while still being fast because of 1) their convergence in one (R2-D2) or few (LR-D2) steps, 2) the use of the Woodbury identity, which is particularly convenient in the few-shot data regime, and 3) the fact that back-propagation through the closed-form regressor can be made efficient.\n\nTo better illustrate our point, we conducted a baseline experiment.\nFirst, we pre-trained the same 4-layer CNN architecture, but for a standard classification problem, using the same training samples as our method. We simply added a final fully-connected layer (with 64 outputs, like the number of classes in the training splits) and used the cross-entropy loss.\nThen, we used the convolutional part of this trained network as a feature extractor and fed its activations to our ridge-regression layer to produce a per-episode set of weights.\nOn miniImagenet, the drop in performance w.r.t. our proposed R2-D2 is very significant: 13.8% and 11.6% accuracy for the 1- and 5-shot problems, respectively.\nResults are consistent on CIFAR, though less drastic: 11.5% and 5.9%.\n\nThis confirms that simply using a \u201cshared feature representation and task specific final layer\u201d as commented by the reviewer is not what we are doing, and it is not a good strategy to obtain results competitive with the state of the art in few-shot classification.\nInstead, it is necessary to enforce the generality of the underlying features explicitly during training, which we do by back-propagating through the fine-tuning procedure (the closed-form regressors).\n\nWe would like to conclude by remarking that, probably, the source of confusion arises from the overlap that exists in general between the few-shot learning and the transfer/multi-task learning sub-communities.\nWe realize that the two have developed fairly separately while trying to solve very related problems, and unfortunately the similarities/differences are not acknowledged enough in few-shot classification papers, including our own. 
We intend to alleviate this problem in our related work section, and invite the reviewer to suggest more relevant works from this area.\"}", "{\"title\": \"There is ample precedent in the few-shot learning literature for proposing new base learners as the main contribution.\", \"comment\": \"Thank you.\n\n> \u201cI understand that the main novelty here is to apply fine tuning on the test set (of tasks sampled for training) in meta-learning, instead of on the training data of a single supervised learning task (as we normally do in supervised learning).\u201d\n\nSorry, but this is not claimed in the paper or in the answer above. Clearly, the overall training framework is not novel and it is common in the few-shot learning literature. In fact, we specifically wrote: \u201cOur training procedure (and indeed, all meta-learning methods for few-shot learning, such as MAML, SNAIL, etc) ...\u201d.\n\nThe point of our previous comment was simply to clarify why different episodes correspond to different sets of parameters.\n\n\n> \u201c\u201cchanging the model of base learners cannot be recognized as a novelty\u201d\u201d\nWe strongly disagree with the statement. This is exactly the nature of the contribution of most approaches for few-shot classification. For example, both MAML and prototypical networks use the same algorithm (SGD) in the external loop, while they vastly differ in the method used in the inner loop (SGD and nearest neighbour, respectively).\n\nOur contribution is to use closed-form solvers such as ridge regression to tackle few-shot classification, which is novel in the literature and it is a non-trivial endeavor.\n\nAs stated by AR2: \u201c[it] strikes an interesting compromise between not performing any adaptation for each new task (as is the case in pure metric learning methods [e.g. prototypical networks]) and performing an expensive iterative procedure, such as MAML or Meta-Learner LSTM, where there is no guarantee that after taking the few steps prescribed by the respective algorithms the learner has converged.\u201d\n\nBesides offering a trade-off with respect to existing techniques, our proposal also presents significant practical value in terms of performance, as outlined in our experimental section.\"}", "{\"comment\": \"Thanks a lot for your reply and explanation! I understand that the main novelty here is to apply fine tuning on the test set (of tasks sampled for training) in meta-learning, instead of on the training data of a single supervised learning task (as we normally do in supervised learning). However, I agree with AnonReviewer1: I do not think this work presents very original contributions. It applies the existing fine-tuning technique by following the standard meta-learning setting, as many other meta-learning methods already did.\n\nFine tuning is an existing technique that can be generally applied to different learning settings. The basic idea is to update a pre-trained model and continue to train it on new training instances. In supervised learning, each training instance is a data point, and the learning goal is to minimize the training error on each data point. In meta-learning, each training instance is an (n-way k-shot) classification task, and the learning goal is to minimize the validation/test error on the test set of each training task. Therefore, fine tuning in meta-learning should be applied to the test sets of training tasks (as this paper does). 
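To make this objective concrete, here is a schematic Python sketch (my own illustration, not code from the paper; sample_tasks, embed, fit_base_learner, cross_entropy and grad are hypothetical helpers):

```python
# Meta-training: the shared embedding parameters phi are updated to reduce
# the loss on each training task's *test* (query) split, measured after the
# base learner has been fit on that task's *train* (support) split.
for task in sample_tasks(task_distribution):
    X_s = embed(phi, task.support_inputs)              # few labelled shots
    W = fit_base_learner(X_s, task.support_labels)     # e.g. closed-form ridge
    X_q = embed(phi, task.query_inputs)
    loss = cross_entropy(X_q @ W, task.query_labels)   # error on the test set
    phi = phi - lr * grad(loss, phi)   # backprop THROUGH fit_base_learner
```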
In fact, in meta-learning, any training happening on the task-shared part (e.g., meta-learner or shared pre-trained model) should minimize the error/loss on the test sets of training tasks. However, these are all well-known facts, derived from the very early optimization formulation of \"learning to learn\" (although meta-learning has become a very popular topic only recently). So they are not the contributions of this paper.\n\nIn addition, as the authors mentioned, many existing meta-learning methods use the same idea; the only difference here is that the base learner for each task changes to a ridge/logistic regression model. But changing the model of base learners cannot be recognized as a novelty. Therefore, I think this is a successful application of an existing technique; it re-explains how to do fine-tuning in the meta-learning setting, but is not novel to me.\", \"title\": \"Fine tuning on the test set of training tasks is not novel\"}", "{\"title\": \"It's the procedure to generate the pre-trained convnet\", \"comment\": \"> \u201cI am confused about whether the proposed method is the same as \u2026 multiple models (e.g., logistic regression) for different tasks based on shared input features provided by a pre-trained model (e.g., CNN)\u201d\n\nThank you for participating in the discussion. This describes well only the behavior at test-time -- when facing a new task, a new regressor is learned based on pre-trained features (hence, different tasks will have different parameters). However, this leaves out a crucial detail: where does this pre-trained CNN come from?\n\nThe standard approach is to use a CNN that was pre-trained on ImageNet or another task. However, there is no guarantee that the CNN features will transfer well to unknown tasks. In the case of few-shot learning, with only 1 or 5 training samples, fine-tuning will result in extreme over-fitting.\n\nOur training procedure (and indeed, all meta-learning methods for few-shot learning, such as MAML, SNAIL, etc) trains the CNN features specifically to perform well on new, unseen tasks. \u201cPerforming well on unseen tasks\u201d is formalized as achieving a low error after fine-tuning. This means that we have to back-propagate errors through the fine-tuning procedure, which can be SGD (MAML) or a ridge/logistic regression solver (ours). The end result is a CNN that is especially trained to be fine-tuned later under the same conditions; this differs substantially from standard pre-training.\n\nThere is a nice, informal introduction to this (admittedly subtle!) distinction that was written by the authors of MAML:\", \"https\": \"//bair.berkeley.edu/blog/2017/07/18/learning-to-learn/\"}", "{\"title\": \"Our proposal demonstrates results competitive with SNAIL despite using a much simpler architecture (SNAIL uses ResNet, we just use 4 conv layers).\", \"comment\": \"We thank the reviewer for the comments and questions.\n\n> \u201cWhy can one simply treat \\\\hat{Y} as a scaled and shifted version of X\u2019W?\u201d\nIn the case of logistic regression, the scaling and shifting are not needed, and we have \\\\hat{Y}=X\u2019W. 
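To ground the algebra, here is a minimal NumPy sketch of the ridge-regression case (our own illustration, not the paper's implementation), using the Woodbury form together with the scale-and-shift calibration of eq. 6 (alpha and beta are meta-learned in the paper; here they are just placeholders):

```python
import numpy as np

def ridge_solver(X, Y, lam=1.0):
    # X: (n, d) embedded support set; Y: (n, c) one-hot regression targets.
    # Woodbury form: invert an n x n matrix instead of a d x d one, which
    # is cheap in the few-shot regime where n << d.
    n = X.shape[0]
    # Equivalent to W = (X^T X + lam * I_d)^{-1} X^T Y
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), Y)

# Calibration (eq. 6): scores = alpha * (X_query @ W) + beta,
# which are then fed to the cross-entropy loss.
```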
No calibration is needed for logistic regression because it is a classification algorithm, and directly outputs class scores. These scores are fed to the (cross-entropy) loss L.\n\nHowever, ridge regression is a regression algorithm, and its regression targets are one-hot encoded labels, which is only an approximation of the discrete problem (classification). This means that an extra calibration step is needed (eq. 6), to allow the network to tune the regressed outputs into classification scores for the cross-entropy loss L.\n\n> \u201cThe empirical performance of the proposed approach is not very promising and it does not outperform the comparison methods, e.g., SNAIL\u201d\nOur method actually outperforms SNAIL in an apples-to-apples comparison, with the same number of layers. We would like to draw the reviewer\u2019s attention to the last paragraph of the \u201cMulti-class classification\u201d subsection (page 8).\n\nThe result mentioned by the reviewer uses a ResNet, while we use a 4-layer CNN to remain comparable to prior work. SNAIL with a 4-layer CNN ([11] Appendix B) performs much worse than our method (7.4% to 10.0% accuracy improvement).\n\nEven disregarding the great difference in architecture capacity, our proposal's performance coincides with SNAIL on miniImageNet 5-way 5-shot and it is comparable on 3 out of 4 Omniglot setups. We would have liked to establish a comparison also on CIFAR, but unfortunately the official code for SNAIL hasn't been released.\", \"borrowing_the_words_of_anonreviewer2\": \"\u201cNotably, the ridge regression variant can reach results competitive with SNAIL that uses significantly more weights and is shown to suffer when its capacity is reduced.\u201d\n\nWe hope that this addresses the two concerns raised by the reviewer. We will be happy to answer any other questions about the paper.\"}", "{\"title\": \"Response to AR2\", \"comment\": \"We thank the reviewer for the insightful comments and analysis.\n\n> \u201cOne-vs-all classifiers\u201d for LR-D2\nThis is a great suggestion, and we are not quite sure how we missed it. We will update the results for 5-way classification, incorporating this method.\n\n> \u201cablation where for the LR-D2 variant SGD was used ... instead of Newton\u2019s method\u201d\nWe previously did exactly this experiment, although for the R2-D2 (ridge regression) variant. We did not include it due to space constraints. It is equivalent to MAML, which also uses SGD, but adapting only the classification layer for new tasks (instead of adapting all parameters).\n\nWe tested this variant on miniImageNet with 5 classes, with the lowest-capacity CNN (which is the most favorable model for MAML/SGD). It yields 45.4\u00b11.6% accuracy for 1-shot classification and 61.7\u00b11.0% for 5-shot classification. Comparing it to Table 1, there\u2019s a drop in performance compared to our closed-form solver (3.5% and 4.4% less accuracy, respectively), and also compared to the original MAML (3.3% and 1.4% respectively).\n\nAlthough we expect the conclusions for logistic regression (LR-D2) to be similar, we will extend the experiment to this case and report the results.\n\n> \u201cNeither MAML nor MetaLearner LSTM have been shown to be as effective as Prototypical Networks for example\u201d\nWe agree, and will amend the text. 
Their interest may lie more in their technical novelty.\n\n> Suggestions on multinomial term and sentence grammar\nThese do improve the readability of the text and will be corrected.\"}", "{\"comment\": \"IMO, shared parameters are optimized for the Base test-set (Figure 1) instead of the Base training-set, which is different from the multi-task learning setup. (I think AnonReviewer1 also raised similar issues...)\n\nAnd, I think the authors missed a reference, which is very relevant.\", \"https\": \"//arxiv.org/abs/1806.04910\", \"title\": \"shared parameters are optimized for Base test-set\"}", "{\"title\": \"results are not very promising\", \"review\": \"This paper proposes a new meta-learning method based on closed-form solutions for task-specific classifiers such as ridge regression and logistic regression (iterative). The idea of the paper is quite interesting compared to the existing metric-learning-based methods and optimization-based methods.\n\nI have two concerns about this paper. \nFirst, the motivation and the rationale of the proposed approach are not clear. In particular, why can one simply treat \\\\hat{Y} as a scaled and shifted version of X\u2019W?\n\nSecond, the empirical performance of the proposed approach is not very promising and it does not outperform the comparison methods, e.g., SNAIL. It is not clear what the advantage is.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Not clear what is novel here\", \"review\": \"Summary: The paper proposes an algorithm for meta-learning which amounts to fixing the features (i.e. all hidden layers of a deep NN), and treating each task as having its own final layer, which could be a ridge regression or a logistic regression. The paper also proposes to separate the data for each task into a training set used to optimize the last, task-specific layer, and a validation set used to optimize all previous layers and hyper-parameters.\", \"novelty\": \"This reviewer is unsure what the paper claims as a novel contribution. In particular, training multi-task neural nets with shared feature representation and task specific final layer is probably 20-30 years old by now and entirely common. It is also common to freeze the feature representation learned from the first set of tasks, and to simply use it for new tasks by modifying the last (few) layer(s), which would, according to this paper, qualify as meta-learning since the new task can be learned with very few new examples.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"After reading this paper, I am confused about whether the proposed method is the same as a widely used technique, i.e., training multiple models (e.g., logistic regression) for different tasks based on shared input features provided by a pre-trained model (e.g., CNN), which can be fine-tuned. Although a minor difference here is that the tasks are sampled from a distribution of tasks rather than a fixed set (which follows a standard meta-learning setting), the technique used already exists and is well-known.\n\nSince the proposed method is claimed to be a meta-learning approach that can quickly adapt to novel tasks, the training algorithm or the meta-learner should do something different for different tasks (i.e., be adaptive to each specific task). 
However, the CNN remains the same for different tasks, and the closed-form solvers do not have any hyper-parameters changed with the task. I am not sure if it can be recognized as a meta-learning method. It might be more suitable to categorize it as multi-task learning, where models for different tasks share the same feature extractor (the CNN here).\n\nPlease correct me if I am wrong in my understanding of the essential idea of this paper. Thanks a lot!\", \"title\": \"What is the essential difference compared to training multiple models that share a pre-trained ConvNet (fine-tuning is allowed) providing input features?\"}", "{\"title\": \"A good idea that achieves good results\", \"review\": \"This paper proposes a meta-learning approach for the problem of few-shot classification. Their method, based on parametrizing the learner for each task by a closed-form solver, strikes an interesting compromise between not performing any adaptation for each new task (as is the case in pure metric learning methods) and performing an expensive iterative procedure, such as MAML or Meta-Learner LSTM, where there is no guarantee that after taking the few steps prescribed by the respective algorithms the learner has converged. For this reason, I find that leveraging existing solvers that admit closed-form solutions is an attractive and natural choice.\n\nSpecifically, they propose ridge regression as their closed-form solver (R2-D2 variant). This is easily incorporated into the meta-learning loop, with any hyperparameters of this solver being meta-learned, along with the embedding weights, as is usually done. The use of the Woodbury equation allows rewriting the closed-form solution in a way that scales with the number of examples instead of the dimensionality of the features, therefore taking advantage of the fact that we are operating in a few-shot setting. While regression may seem to be a strange choice for eventually solving a classification task, it is used, as far as I understand, due to the availability of this widely-known closed-form solution. They treat the one-hot encoded labels of the support set as the regression targets, and additionally calibrate the output of the network (via a transformation by a scale and bias) in order to make it appropriate for classification. Based on the loss of ridge regression on the support set of a task, a parameter matrix is learned for that task that maps from the embedding dimensionality to the number of classes. This matrix can then be used directly to multiply the embedded query points (via the embedding function, which is fixed for the purposes of the episode), and for each query point, the entry with the maximum value in the corresponding row of the resulting matrix will constitute the predicted class label.\n\nThey also experimented with a logistic regression variant (LR-D2) that does not admit a closed-form solution but can be solved efficiently via Newton\u2019s Method, in the form of Iteratively Reweighted Least Squares. When using this variant, they restrict themselves to tackling the case of binary classification.\", \"a_question_that_comes_to_mind_about_the_lr_d2_variant\": \"while I understand that a single logistic regression classifier is only capable of binary classification, there seems to be a straightforward extension to the case of multiple classes, where one classifier per class is learned, leading to a total of N one-vs-all classifiers (where N is the way of the episode). 
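Concretely, I imagine something like the following one-vs-all construction (my own NumPy sketch with a few Newton/IRLS steps per class, not the paper's code, which would additionally exploit the Woodbury identity):

```python
import numpy as np

def irls_binary(X, y, lam=1.0, steps=5):
    # Binary logistic regression via Newton's method (IRLS).
    # X: (n, d) support features; y: (n,) labels in {0, 1}.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        R = p * (1.0 - p)                  # IRLS weights
        H = X.T @ (R[:, None] * X) + lam * np.eye(X.shape[1])
        g = X.T @ (p - y) + lam * w
        w -= np.linalg.solve(H, g)         # Newton update
    return w

def one_vs_all(X, y, n_way):
    # One binary classifier per class; column c scores class c.
    return np.stack([irls_binary(X, (y == c).astype(float))
                     for c in range(n_way)], axis=1)   # (d, n_way)

# Query prediction: np.argmax(X_query @ one_vs_all(X, y, n_way), axis=1)
```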
I\\u2019m curious how this would compare in terms of performance against the ridge regression variant which is naturally multi-class. This would allow to directly apply this variant in the common setting and would enable for example still oversampling classes at meta-training time as is done usually.\\n\\nI would also be curious to see an ablation where for the LR-D2 variant SGD was used as the optimizer instead of Newton\\u2019s method. That variant may require more steps (similar to MAML), but I\\u2019m curious in practice how this performs.\", \"a_few_other_minor_comments\": [\"In the related work section, the authors write: \\u201cOn the other side of the spectrum, methods that optimize standard iterative learning algorithms, [...] are accurate but slow.\\u201d Note however that neither MAML nor MetaLearner LSTM have been showed to be as effective as Prototypical Networks for example. So I wouldn\\u2019t really present this as a trade-off between accuracy and speed.\", \"I find the term multinomial classification strange. Why not use multi-class classification?\", \"In page 8, there is a sentence that is not entirely grammatically correct: \\u2018Interestingly, increasing the capacity of the other method it is not particularly helpful\\u2019.\", \"Overall, I think this is good work. The idea is natural and attractive. The writing is clear and comprehensive. I enjoyed how the explanation of meta learning and the usual episodic framework was presented. I found the related work section thorough and accurate too. The experiments are thorough as well, with appropriate ablations to account for different numbers of parameters used between different methods being compared. This approach is evidently effective for few-shot learning, as demonstrated on the common two benchmarks as well as on a newly-introduced variant of cifar that is tailored to few-shot classification. Notably, the ridge regression variant can reach results competitive with SNAIL that uses significantly more weights and is shown to suffer when its capacity is reduced. Interestingly, other models such as MAML actually suffer when given additional capacity, potentially due to overfitting.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BkfhZnC9t7
Zero-shot Learning for Speech Recognition with Universal Phonetic Model
[ "Xinjian Li", "Siddharth Dalmia", "David R. Mortensen", "Florian Metze", "Alan W Black" ]
There are more than 7,000 languages in the world, but due to the lack of training sets, only a small number of them have speech recognition systems. Multilingual speech recognition provides a solution if at least some audio training data is available. Often, however, phoneme inventories differ between the training languages and the target language, making this approach infeasible. In this work, we address the problem of building an acoustic model for languages with zero audio resources. Our model is able to recognize unseen phonemes in the target language, if only a small text corpus is available. We adopt the idea of zero-shot learning, and decompose phonemes into corresponding phonetic attributes such as vowel and consonant. Instead of predicting phonemes directly, we first predict distributions over phonetic attributes, and then compute phoneme distributions with a customized acoustic model. We extensively evaluate our English-trained model on 20 unseen languages, and find that on average, it achieves a 9.9% better phone error rate than a traditional CTC-based acoustic model trained on English.
[ "zero-shot learning", "speech recognition", "acoustic modeling" ]
https://openreview.net/pdf?id=BkfhZnC9t7
https://openreview.net/forum?id=BkfhZnC9t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygSwQO-g4", "BJgHrFCT0X", "HJlUHCjo0m", "Bkx9UfE5Am", "BylKaWEqAm", "BkgCYZV5Cm", "HkgGv8a7A7", "SkxEjxzm0X", "rke7KlGmCQ", "rkxpSgGXAQ", "H1e4b3x9hQ", "r1eK6OJKnQ", "ByeYvTNOhQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544811356956, 1543526716555, 1543384638264, 1543287378132, 1543287232983, 1543287174347, 1542866521652, 1542819996089, 1542819963276, 1542819909337, 1541176315843, 1541105856876, 1541061984805 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1210/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/Authors" ], [ "ICLR.cc/2019/Conference/Paper1210/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1210/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1210/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": [\"This paper studies the really hard problem of zero-shot learning in acoustic modeling for languages with limited resources, using data from English. Using a novel universal phonetic model, the authors show improvements compared to using an English model for 20 other languages in phone recognition quality.\", \"Strengths\", \"Reviewers agree that the problem is an important one, and the presented ideas are novel.\", \"Universal phonetic model to represent phones in any language is interesting.\", \"Weaknesses\", \"The results are really weak, to the point that it is unclear how effective or general the techniques are. The work is an interesting first step, but is not developed enough to be accepted at this point.\", \"The universal phonetic model being trained only in English might affect generalizability to languages that do not share phonetic characteristics. The authors agree partly, and argue that the method already addresses some issues since the model can already represent unseen phones. But, coupled with the high phone error rates, it is still unclear how appropriate the technique will be in addressing this issue.\", \"Novelty: Although the idea of mapping phones to attributes, and using those for ASR is not novel (e.g., using articulatory features), application for zero-shot learning is. The work assumes availability of a small text corpus to learn phone-sequence distribution, so is similar to other zero-resource approaches that assume some data (audio, as opposed to text) is available in the new language.\", \"This paper presents interesting first steps, but lacks sufficient experimental validation at this point. Therefore, AE recommendation is to reject the paper. I encourage the authors to improve and resubmit in the future.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good first step, but error rates are too high\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for updating your rate. 
After your rating update on 26th Nov, we edited the paper and also added some more discussion regarding the points from your comments that we may have missed before.\"}", "{\"title\": \"Update\", \"comment\": \"Based on the rebuttal I've changed my rating from 4 to 5.\"}", "{\"title\": \"Paper revision comment for reviewer 2\", \"comment\": \"Thanks again for the detailed comments and references! We have tried to answer almost all the detailed comments, suggestions and questions that you mentioned in our revised paper.\n\nIn detail, the changes that we made and some comments: \n1. The abstract now mentions what the baseline model is.\n2. Corrected the strong statements mentioned in the introduction with proper justification and added the missing references. We added a discussion about zero-resource speech processing tasks in the related work section and introduction, referencing 1, 2, 3 and some more work. \n3. Thanks for pointing us to references 4 and 5! We have discussed them in detail in the related work section.\n4. A language model can be integrated using WFST decoding by assuming that our language model is well resourced and considering the words seen during language model training as the words being predicted during decoding. \n5. Thanks a lot for pointing out the grammatical errors; we have now fixed them.\n6. Yes, we agree that multilingual bottleneck features would be helpful, but the problem of having \u201cunseen\u201d phones would still exist no matter how many languages we add. This paper focuses on solving this issue where the model has seen only English phones during training. In fact, this idea would lead to a better model, as it would have better coverage of the phonemes, and it would be a good extension of this paper; hence it is mentioned as part of future work.\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for your question. We are running some new experiments with the domain-robust acoustic features. Our initial experiment on a reduced dataset suggests that these features have the potential to improve performance by about 5 percent, but due to our computational limitations we could not complete our experiments on the whole dataset and have mentioned it as part of future work.\"}", "{\"title\": \"Paper revision comment for reviewer 1\", \"comment\": \"Thanks again for your comments. As suggested, we revised the paper and mention a few potential works that can be extended on top of the proposed framework. For instance, label smoothing might be a useful technique to regularize the attribute distribution or phone distribution [1]; we can also increase the coverage of our phonemes by training the model on a more diverse set of languages or by training it with better features.\n\n[1] Pereyra, Gabriel, et al. \\\"Regularizing neural networks by penalizing confident output distributions.\\\" arXiv preprint arXiv:1701.06548 (2017).\"}", "{\"title\": \"Clarification\", \"comment\": \"Will these additional experiments be included in the revised manuscript?\"}", "{\"title\": \"Reply to reviewer 2\", \"comment\": \"Thank you for the detailed comments and references! We will use them to enhance our paper by providing more discussion of related works. We discuss several points that distinguish our paper from the suggested references here:\n\n1, 2, 3: Those papers are works on zero-resource speech recognition. As you suggested, we will discuss more about the connection between our work and those papers. 
Those zero-resource works assume that no transcribed labels are available but a lot of audio data is provided for the target language. In contrast, our work assumes that neither transcribed labels nor audio are available for the test language, but we use a limited number of text sentences instead.\n\n4. We were unaware of this work and will update our related work. That work applies one-shot learning to speech recognition by proposing a generative model. As its name suggests, the work was trying to classify words with only one training sample available per word. Our work is different from this one because we are using no training speech data for the target corpus.\n\n5. Thanks for pointing us to this work. We investigated this paper further and also talked to some of the authors of the paper. We agree that the motivation behind this work is similar, but it is limited to some extent. This work proposes an extrapolation approach to predict phones for low-resource languages; however, the extrapolation mapping is done manually. Additionally, the evaluation is carried out on a Dutch/English pair, which is similar in terms of phonetics and language family. It does not show whether the approach will work for language pairs from unknown/unrelated linguistic groups. In contrast, our work proposes a generic algorithm to recognize any unknown phones by decomposing them into their phone attributes. We have shown that our approach is effective over 20 languages from different language families.\"}", "{\"title\": \"Reply to reviewer 3\", \"comment\": \"Thank you for your valuable comments!\n\nWe think that the term \u201cUniversal Phonetic Model\u201d might have confused the reviewer. We are sorry about that. The problem that we want to address is the task of zero-shot learning for speech recognition, which consists of learning an acoustic model without any resources for a given target language. We call our model \u201cUniversal Phonetic Model\u201d, because it has the ability to predict any phoneme, even the ones that are not present during training (therefore it covers a \u201cuniversal\u201d set). We achieve this by decomposing the phone label into its phone attributes (a minimal sketch of this composition is given below). \n\nOne of the weaknesses that has been pointed out is that the idea and model are not novel. However, we did not find any works that attempt the same problem with a similar model. It is possible we are unaware of related work; it would be helpful if the reviewer could give some references so that we can investigate further. Most of the work on zero-shot learning in the speech community that we found only identified \u201csimilar\u201d speech concepts or sounds, but could not ground them to phone labels, making it hard to do speech recognition. Similarly, the idea of decomposing sounds into articulatory features is old, but our work presents the first approach that actually decomposes sounds into \u201cuniversal\u201d articulatory features and recognizes speech in unseen languages using such representations.\n\nAs we mentioned to AnonReviewer1, we agree that our baseline model has too high a phone error rate to be usable in practice. Unfortunately, this is what the current CTC acoustic models provide for the task of zero-shot speech recognition. Both the baseline and UPM models had practical and competitive phoneme error rates on the test data of the 3 English datasets that were used during training. 
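As promised above, here is a minimal NumPy sketch of the attribute-to-phoneme composition (our own illustration; the binary signature matrix and the smoothing constant are simplifying assumptions, not the paper's exact acoustic model):

```python
import numpy as np

def phoneme_distribution(attr_probs, signatures):
    # attr_probs: (A,) predicted probabilities of A phonetic attributes
    # (e.g. vowel, voiced, nasal, ...).
    # signatures: (P, A) binary matrix; row p marks the attributes that
    # define phoneme p. Rows can describe phonemes never seen in training,
    # which is what enables zero-shot recognition.
    agree = signatures * attr_probs + (1 - signatures) * (1 - attr_probs)
    log_scores = np.log(agree + 1e-8).sum(axis=1)   # product over attributes
    scores = np.exp(log_scores - log_scores.max())  # stable normalization
    return scores / scores.sum()                    # distribution over phonemes
```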
However, we do believe that some of the performance reduction when using this model cross-lingually and cross-domain could be because our input features are not robust against acoustic domain mismatch. We are currently re-running the experiments with a new set of input features proposed in [1], and first results indicate that we can get even better improvements in the same settings, and on top of a much improved baseline. We believe this is due to the stronger (noise-robust and domain-invariant) overall baseline allowing for a better sharing of the linguistically informative information across languages, and we are working towards applying this idea to all the experiments, including the baseline, so that an updated version of the paper will again be consistent. \n\n\nWe do agree that a better universal phoneme recognizer can be built by training on even more languages. But we believe that our experiments show that the problem we want to address here, specifically the ability to predict unseen phonemes in a zero-shot speech recognition scenario, can be tackled with the proposed method. Using more training languages would reduce the appearance of unknown phonemes, but there will still almost always be at least a few unseen phones, which we show our model is effective in reducing.\n\n[1] S. Dalmia, X. Li, F. Metze and A. W. Black, \u201cDomain Robust Feature Extraction for Rapid Low Resource ASR Development\u201d, in Proc. SLT 2018, https://arxiv.org/pdf/1807.10984.pdf\"}", "{\"title\": \"Reply to reviewer 1\", \"comment\": \"We appreciate your time reviewing our paper! Thank you for your encouraging comments and remarks. We agree that our baseline model has too high a phone error rate to be usable in practice. We believe that this is because the input features currently being used are not robust against acoustic domain mismatch. We are currently re-running the experiments with a new set of input features proposed in [1], and first results indicate that we can get even better improvements in the same settings, and on top of a much improved baseline. We believe this is due to the stronger (noise-robust and domain-invariant) overall baseline allowing for a better sharing of the linguistically informative information across languages, and we are working towards applying this idea to all the experiments, including the baseline, so that an updated version of the paper will again be consistent.\n\n[1] S. Dalmia, X. Li, F. Metze and A. W. Black, \u201cDomain Robust Feature Extraction for Rapid Low Resource ASR Development\u201d, in Proc. SLT 2018, https://arxiv.org/pdf/1807.10984.pdf\"}", "{\"title\": \"Review\", \"review\": \"This paper presents an approach to address the task of zero-shot learning for speech recognition, which consists of learning an acoustic model without any resources for a given language. The universal phonetic model is proposed, which learns phone attributes (instead of phone labels), which allows prediction on any phone set, i.e., on any language. The model is evaluated on 20 languages and is shown to improve over a baseline trained only on English.\n\nThe proposed UPM approach is novel and significant: being able to learn a more abstract representation for phones which is language-independent is a very promising lead to handle the problem of ASR on languages with low or no resources available.\n\nHowever, the results are the weak point of the paper. 
While the results demonstrate the viability of the approach, the gain between the baseline performance and the UPM model is quite small, and it's still far from being usable in practice. \n\nTo improve the paper, the authors should discuss future work, i.e., what the next steps to improve the model are.\n\nOverall, the paper is significant and can pave the way for a new category of approaches to tackle zero-shot learning for speech recognition. Even if the results are not great, as a first step they are completely acceptable, so I recommend accepting the paper.\", \"revision\": \"The approach of using robust features is interesting and promising, as well as the idea of training on multiple languages. Overall, the authors' response addressed most of the issues, therefore I am not changing my rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting but not good enough!\", \"review\": [\"This paper proposes to train a Universal Phonetic Model for building speech recognition for new languages without any training data. It suggests using X-SAMPA to map phones from all the languages into a single phonetic space. The prediction models are designed to first predict the phonetic features and then the phones depending on the target language.\", \"Overall, the paper is quite clearly written.\", \"Strengths:\", \"It observed overall improvements for all the target languages.\", \"Weaknesses:\", \"The idea and the proposed model are not novel.\", \"All the baseline systems have relatively high phone error rates.\", \"The authors claimed to have a universal phonetic model, but actually the model was trained only with English data. Therefore, the experimental setup could be improved. In my opinion, it makes more sense to define a bunch of resource-rich languages as source and then train a real universal phonetic model.\", \"Overall, this paper lacks an analysis of what exactly is improved and why the improvements for some target languages are larger than for others.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Claims of being first not completely justified\", \"review\": \"Overview:\n\nThis paper proposes an approach for zero-shot phoneme recognition, where it is possible to recognise phonemes in a target language which has never been seen before. Rather than just training a phoneme recogniser directly on background data and then applying it to unseen data, phonetic features are first predicted, allowing phonemes not in the source language set to be predicted.\n\nMain strengths:\n\nThe paper's main strength lies in that this is a very unexplored area that could assist in the development of speech technology where it is currently not possible. The proposed model (Section 2) has also not been considered in prior work.\n\nMain weaknesses:\n\nThe paper's main weakness is in some of its claims and that it misses some very relevant literature. Detailed comments together with a minimal list of references are given below (but I would encourage the authors to also read a bit more broadly). But in short, I do not think it is that easy to claim that this is the first paper to do zero-shot learning on speech; many of the zero-resource studies where unlabelled audio is used could be seen as doing some form of zero-shot matching. 
Specifically [5] is able to predict unseen phoneme targets. Multilingual bottleneck features can be applied to languages that have never been seen before [2], and the output of phoneme recognisers trained on one language has long been applied to get output on another unseen language. The first one-shot learning speech paper [4] (to my knowledge) is also not mentioned at all. The approach in the paper also still relies on some text data from the target language; if this can then be described as \"zero-shot\" learning, then I think many of these previous studies can also make this claim.\n\nOverall feedback:\n\nThere is definitely value in this work, but it should be much better situated within the broader literature. Below I give some editorial suggestions and also outline some suggestions for further experiments.\n\nDetailed comments, suggestions and questions:\n- Abstract: It would be useful to have some details of the \"baseline model\" here already, especially since it is such a new task.\n- Introduction: \"... but they can hardly predict phones or words directly due to their unsupervised nature.\" This is a strong statement that maybe requires more justification. On the one hand, the statement is true, and the high word error rates in e.g. [3] can be cited. On the other hand, it has been shown that at the phone-distinction level, these models perform quite well and sometimes outperform supervised models [1]. Since this paper also considers phone error rate as a metric, I think care should be taken with such statements.\n- Introduction: \"While zero-shot learning has attracted a lot of attention in *the* computer vision community, this setup has hardly been studied in speech recognition research especially in acoustic modeling.\" Definitely look at some of the studies mentioned below, and also [4] specifically.\n- \"However, we note that our model can be combined with a well-resourced language model to recognize words.\" How would this be done, since I think this is actually quite a challenging task.\n- Section 2: \"... useful the original ESZSL architecture ...\" -> \"... useful in the original ESZSL architecture ...\"\n- Section 2.2: I assume the small text corpus is at the phone level (and not characters directly)? This should be clarified, and it could raise the question of whether this approach is truly \"zero-shot\".\n- Section 3.2: \"We used EESEN framework ...\" -> \"We used the EESEN framework ...\"\n- Section 4: You could look at the recent work in [2], which uses multilingual bottleneck features trained on 10 languages and applied to multiple unseen languages. It would be interesting to also train your approach on multiple languages instead of only English.\n\nMissing references:\n1. M. Heck, S. Sakti, and S. Nakamura, \"Feature Optimized DPGMM Clustering for Unsupervised Subword Modeling: A Contribution to Zerospeech 2017,\" in Proc. ASRU, 2017.\n2. E. Hermann and S. J. Goldwater, \"Multilingual bottleneck features for subword modeling in zero-resource languages,\" in Proc. Interspeech, 2018.\n3. H. Kamper, K. Livescu, and S. Goldwater, \"An embedded segmental k-means model for unsupervised segmentation and clustering of speech,\" in Proc. ASRU, 2017.\n4. B. M. Lake, C.-Y. Lee, J. R. Glass, and J. B. Tenenbaum, \"One-shot learning of generative speech concepts,\" in Proc. CogSci, 2014.\n5. O. Scharenborg, F. Ciannella, S. Palaskar, A. Black, F. Metze, L. Ondel, and M. 
Hasegawa-Johnson, \"Building an ASR system for a low-resource language through the adaptation of a high-resource language ASR system: Preliminary results,\" in Proc. ICNLSSP, 2017.\", \"edit\": \"Based on the rebuttal I've changed my rating from 4 to 5.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
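Editor's illustration (an added note, not part of the scraped forum data above or below): the Universal Phonetic Model discussion keeps returning to one mechanism -- any phone, including one unseen in training, can be scored from predicted articulatory attributes. The short Python sketch below shows that composition under a naive independence assumption. The attribute set, the toy inventory, the name phone_log_scores, and all probabilities are invented for illustration; none of this is the authors' code.

import numpy as np

# Hypothetical attribute set and target-language inventory: each phone is a
# binary signature over the attributes, so an unseen phone is just a new row.
ATTRIBUTES = ["voiced", "nasal", "bilabial", "alveolar", "stop", "fricative"]
INVENTORY = {
    "b": np.array([1, 0, 1, 0, 1, 0]),
    "m": np.array([1, 1, 1, 0, 0, 0]),
    "s": np.array([0, 0, 0, 1, 0, 1]),
    "z": np.array([1, 0, 0, 1, 0, 1]),  # e.g., absent from the training languages
}

def phone_log_scores(attr_post):
    """Combine per-attribute posteriors (one probability per attribute) into a
    log-score per phone, assuming attributes are predicted independently."""
    attr_post = np.clip(attr_post, 1e-6, 1 - 1e-6)
    return {
        phone: float(np.sum(sig * np.log(attr_post) + (1 - sig) * np.log(1 - attr_post)))
        for phone, sig in INVENTORY.items()
    }

frame = np.array([0.9, 0.1, 0.2, 0.8, 0.1, 0.85])  # made-up posteriors for one frame
scores = phone_log_scores(frame)
print(max(scores, key=scores.get))  # -> "z": an "unseen" phone wins via its attributes

This is only a sketch of the decomposition idea debated in the reviews; the actual system trains a CTC acoustic model over attributes, which this toy example does not attempt.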
rkl3-hA5Y7
Towards Decomposed Linguistic Representation with Holographic Reduced Representation
[ "Jiaming Luo", "Yuan Cao", "Yonghui Wu" ]
The vast majority of neural models in Natural Language Processing adopt a form of structureless distributed representations. While these models are powerful at making predictions, the representational form is rather crude and does not provide insights into linguistic structures. In this paper, we introduce novel language models with representations informed by the framework of Holographic Reduced Representation (HRR). This allows us to inject structures directly into our word-level and chunk-level representations. Our analyses show that by using HRR as a structured compositional representation, our models are able to discover crude linguistic roles, which roughly resemble a classic division between syntax and semantics.
[ "holographic", "representations", "linguistic representation", "representation", "models", "hrr", "towards", "representation towards", "vast majority", "neural models" ]
https://openreview.net/pdf?id=rkl3-hA5Y7
https://openreview.net/forum?id=rkl3-hA5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hkgn4rk5eE", "BJgWYOVWg4", "ByxIPyhlgV", "SkgxB0oglN", "Bygsq6lvyV", "S1g6sOHqAm", "rJxfxxdCTX", "BJezFXrWTQ", "H1xMvQSZ6m", "H1e8MXHZaX", "S1l5JmSWa7", "H1g_oGB-pX", "rkxxlMH-aX", "rJlUq1BbT7", "HygFGGDC37", "S1xJZx1i2X", "H1lF68l527" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545364788224, 1544796281397, 1544761181918, 1544760888105, 1544125842760, 1543293093166, 1542516714171, 1541653369900, 1541653338341, 1541653261864, 1541653218211, 1541653151885, 1541652967639, 1541652365621, 1541464593280, 1541234679311, 1541174976628 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/Authors" ], [ "ICLR.cc/2019/Conference/Paper1209/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1209/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1209/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Thank you all for your comments. Some more words.\", \"comment\": \"We thank all reviewers and chair for your comments. While we fully understand your concerns regarding the baseline results and related work, we hope to make it clear (once more) that:\\n1) Our intention was to make the results easily reproducible based on the widely available Tensorflow open-source implementation of LM on public datasets, with no sophisticated tricks or model modifications involved. At the same time, we strongly believe that the main contribution of our submission - decomposition of representation - does not have to be correlated with perplexity results. Better perplexity results do not guarantee decomposed representation, nor is a good decomposed representation hinged on good perplexity results.\\n2) We acknowledge Reviewer 1 for pointing out related works, however the existing approaches, although using HRR as a component, are very different from the ones we proposed in our paper. They would not naturally apply to learning disentangled linguistic features, the problem we aim to tackle and major contribution we make in our paper. Our understanding is that Reviewer 1 raised this point in debating with our claim that a naively implemented chunk-level model is intractable, and it was not his intention to directly apply the work on tree kernel HRR to the decomposition task at hand.\\n\\nWe believe that our proposed model, formulated specifically for addressing the topic of disentangled linguistic representation, is novel and effective, and provides a viable approach for future research. 
We would greatly appreciate it if our comments and previous responses are taken into more serious consideration, and decisions properly revised.\"}", "{\"metareview\": \"This paper proposes the use of holographic reduced representations in language modeling, which allows for a cleaner decomposition of various linguistic traits in the representation. Results show improvements over baseline language models, and analysis shows that the representations are indeed decomposing as expected.\n\nThe main reviewer concern was the lack of strength of the baseline, although the authors stress that they were using the default baseline from TensorFlow, which seems reasonable to me. Another concern is that there is other work on using HRR to disentangle syntax and semantics in representations for language (e.g. \"Distributed Tree Kernels\" ICML 2012, but also others) that has not been considered. \n\nBased on this, this seems like a very borderline case. Given that no reviewer is pushing strongly for the paper I'm leaning towards not recommending acceptance, but I could very easily see the paper being accepted as well.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Interesting task, but a lack of satisfaction with the results\"}", "{\"title\": \"Response to your additional comments\", \"comment\": \"Thank you again for the very careful and detailed review. We answer your questions below:\n\n1. We agree that the distributional conditions are crucial to the success of the decoding procedure. One way of evaluating the sensitivity of model performance to the degree of satisfying these conditions is that for our models with fixed bases, we draw basis embeddings from arbitrary distributions, with different means and variances, then compare the difference in model performance. Drawing embedding vectors from an arbitrary distribution with very high variance violates the HRR condition, and we expect the model performance to drop sharply in these settings.\n\n2. Yes, you are right that information can be shared between different roles, and our formulation of the model makes simplified assumptions about many subtle linguistic phenomena. We formulated the model with separate bases for different roles out of the motivation of easing the training procedure, that is, to help the model learn disentanglement. Using a set of shared bases brings difficulty into training that might lead to \u201crole collapse\u201d, in which case a single role explains everything. An ideal case would be a middle ground between the two extremes: we provide the model with a set of shared bases, and the model learns to select which bases to use for each role. That would probably require adding another latent variable conditioned on the role, and is certainly something interesting for us to explore in our future work.\n\n4. Thanks for your suggestion. We have given further clarification on this point in our revised submission.\n\n5. (i) We re-trained our models and improved the baseline results in our revised submission. Our implementation is based upon the open-source Tensorflow RNNLM implementation, and the results are comparable to the scores reported in the document. In our experiments, constraining roles to be intra-sentential simplifies evaluation on the chunk level, since most annotations are provided on a sentence-by-sentence basis.\n\n(ii). 
Our chunk-level model is trained with both a word prediction loss term and a chunk prediction loss term, and is fine-tuned on a pre-trained word-level model. The fact that the perplexity is slightly worse after fine-tuning suggests that chunk prediction might not provide much information for word prediction. Looking at some specific examples is a bit problematic since it might lead to overgeneralization if we only examine a few samples. A more systematic analysis is warranted.\n\n9. Thank you for your suggestion. Our revised submission includes an experiment which compares the predicted chunk-level roles and coarser-grained semantic role labels provided by the OntoNotes dataset. The results suggest that indeed the predicted chunk-level roles correspond with the ground-truth semantic roles most of the time.\n\n16. Our full 1B experiment with 4 roles shows that the separation of roles becomes more \u201cgradual\u201d in contrast to a two-role model. Specifically, whereas a two-role model has a clear contrast between the first role and the second, a four-role model has increasing sensitivity to semantics from the first role to the fourth, and decreasing sensitivity to verb forms. For instance, the average intra-word cosine similarity for \u2018getting\u2019, \u2018get\u2019, \u2018gets\u2019 and \u2018got\u2019 is 0.286, 0.444, 0.508, 0.510 from the first role to the fourth role. Although we believe that the corpus contains a myriad of contextual information for better separation of roles, we think a more targeted loss function might be needed, perhaps along the lines of [1]. \n\n[1] Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Tal Linzen, Emmanuel Dupoux and Yoav Goldberg. TACL 2016.\"}", "{\"title\": \"Regarding baseline results\", \"comment\": \"We thank you for reviewing our updated submission. For reproducibility, our implementation of the baseline LM is based on the open-source Tensorflow implementation of the RNNLM tutorial: https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py (we plan to open-source our code for the paper as well, and it will be easily reproducible due to compatibility with the public Tensorflow implementation). The perplexity results on PTB are reported in the official document of the Tensorflow implementation, which we quote as follows:\n\n===========================================\n| config | epochs | train | valid | test\n===========================================\n| small | 13 | 37.99 | 121.39 | 115.91\n| medium | 39 | 48.45 | 86.16 | 82.07\n| large | 55 | 37.87 | 82.62 | 78.29\n\nOur baseline perplexity results on PTB reported in the revised submission are just about the same as the \"large\" setting in the open-source implementation. We are aware that these numbers are not close to the SOTA results, but we did not adopt any of the sophisticated model modifications or training techniques from those SOTA models (for example, [1] reports PTB perplexity 56.8 using fraternal dropout, [2] reports PTB perplexity 57.3 using weight-dropped LSTM + averaged SGD). Our intention was to build the HRR-LM based upon a simple, plain RNNLM, from which the effect of enabling the HRR mechanism can be best studied without introducing too much modeling or training complication. 
We do acknowledge that there is still much room left for perplexity improvement, for both the baseline and our proposed HRR LMs.\n\nPlease also find our response to your earlier comments below.\n\n[1] Fraternal Dropout, Konrad Zo\u0142na et al., ICLR 2018\n[2] Regularizing and Optimizing LSTM Language Models, Stephen Merity et al., arXiv:1708.02182v1\"}", "{\"title\": \"Revision provides new results and analyses\", \"comment\": \"I have read the new revision, and noted the following positive points:\n1. It adds results on the 1B word dataset.\n2. It reports improved perplexity numbers for both the baseline and the proposed models. \n3. It evaluates the quality of the word embeddings via many benchmark datasets. \n4. It adds an evaluation of chunk embeddings via cluster analysis w.r.t semantic roles. \n* All these are useful quantitative analyses that I thought were missing from the previous version. \n\nOne concern that I have is that most of the results are not compared to the SOTA. I don't think it's necessary to beat the SOTA, but at least providing numbers would put results in perspective. At this point I am not sure if the baseline is indeed good enough (e.g., on PTB language modeling there are vanilla LSTM models that get <60, while the paper reports a baseline of 76). The same is true for the results in the word level analysis (section 3.3). \n\nThere are also a few lingering questions for the authors from my comment from Nov 17. Please take a look at that.\"}", "{\"title\": \"Revision available now\", \"comment\": \"We made some major revisions to our initial submission. In particular, we significantly substantiated our experimental results and analysis. Here are a few key take-aways from our revision:\n1. We retrained our models and improved the baseline performance on PTB. Perplexity is now on par with existing literature. \n2. In addition, we trained models on the One-Billion-Word benchmark data with various portions of the training set and reported results.\n3. We ran extensive experiments to both intrinsically and extrinsically evaluate the HRR models:\n\t- For intrinsic evaluation, we conducted experiments on 18 word embedding benchmark datasets and show improvements over the baseline in almost all cases.\n\t- For extrinsic evaluation, we conducted experiments on 6 downstream tasks and also show consistent improvements.\n4. We conducted experiments to quantitatively evaluate chunk-level HRR embeddings, and showed that although trained with weak supervision, chunk embeddings do correspond to some gold semantic role labels.\n\nWe have also made other revisions addressing your questions and concerns. We hope you can take a look at our revised submission, and any further feedback and discussion is always welcome.\"}", "{\"title\": \"Thank you for your detailed response; some additional comments\", \"comment\": \"Thank you for your very detailed response. I would be happy to read an updated version that takes into account the comments by the reviewers and reconsider my evaluation accordingly.\n\nBelow are some additional comments.\n\n1. My concern about the validity of using HRR was mainly referring to the conditions when decoding works in HRR, which you better explained in your response. I understand that the method might somehow work even though the distributional conditions do not hold, but I still do not understand what the implications are. It seems like a crucial assumption. 
Is there any way to evaluate or estimate the effect of the conditions not holding in your case? \n\n2. Separate bases for different filler embeddings: yes, what you say makes sense, as separating the feature spaces may be required for decomposing the representation. I can see that in clear-cut cases (e.g., two separate meanings of a word like \"bank\"). But might there be cases where it may be worth sharing information between roles? \n\n4. Decomposing representations versus composing words: thank you for clarifying this point. It seems like a confusion on my part, but perhaps a note on this point might help the confused reader. \n\n5. Weak baseline results: \n(i) I look forward to seeing the updated results with stronger baselines. On the matter of initializing the hidden state from the last batch, I'm not sure that would make a big difference in practice, but you might as well try that too. Regarding point (15), I don't see a reason to limit to intra-sentential roles, to the extent that this initialization makes a difference. \n\n(ii) On the speculation that \"chunk prediction doesn\u2019t provide much complemental information for word prediction\" - could you test that by looking at specific examples where one method works better than the others? \n\n9. Both dependency relations and semantic roles (or semantic dependencies: http://sdp.delph-in.net) would be very interesting in my opinion. You can look at the major relations or coarser categorizations if you're concerned with their diversity. \n\n16. It is rather disappointing that no additional decomposition is obtained with more than two roles. Can you provide more details? Are some roles not used at all or are some used for the same function? My guess is that PTB should have enough data for further decomposition, but it would be interesting to see if more decomposition emerges in a larger dataset.\"}", "{\"title\": \"Response to reviewer 2: Part 2\", \"comment\": \"(4) \u201cthe experimental section would benefit significantly if the paper also included evaluations on downstream tasks and/or evaluated against existing methods to incorporate structure in language models.\u201d\n\nAs for downstream tasks, due to the space limit, it\u2019s hard to fully investigate the potential benefits besides the decomposed representations which we spent most of our experimental section on. However, we are planning on running a POS tagging task using learned representations as features for a linear classifier. We are also planning on running the model on an SRL task. We believe these tasks would be good testbeds for our proposed method, and also address your concern here.\n\nAs for comparison against existing methods, we are not aware of any directly applicable approach. There are certainly many existing methods that try to incorporate structures, but mostly to enhance their representation, not decompose their representation. Moreover, the unsupervised nature of our approach makes direct comparison even harder. 
Of course, we can be totally ignorant, and we would appreciate any advice from you if you are aware of any specific comparable approach that fits the scenario here.\n\n[1] Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\n[2] Peters et al., Deep contextualized word representations\n[3] Huang et al., Tensor Product Generation Networks for Deep NLP Modeling\"}", "{\"title\": \"Response to reviewer 2: Part 1\", \"comment\": \"First, we would like to thank you for the kind words regarding our general idea. We fully acknowledge the validity of the many concerns raised here. We provide responses to them below:\n\n(1) \u201cFrom a modeling perspective, the paper seems to conflate two points\u201d:\n\nFirst of all, we agree that there are two points to be made here as you have pointed out:\n(a) the potential benefit of a role-filler approach \n(b) the architectural or computational advantage of any specific instance of such an approach. \n\nIn writing this paper, we have the following considerations. First, we use language modeling as our testbed to investigate (a). As we explained in the intro, \u201cthe versatility of language modeling [as a complementary or pretraining task] demonstrates that some linguistic regularities must be present\u201d. The recent success of BERT [1] and ELMo [2] across many tasks (including some very linguistics-oriented benchmarks) reflects this point as well. This being said, we acknowledge there are many other tasks that could be used to investigate (a) -- for instance, [3] used QA as the main task, and we personally thought about summarization on the ground that a summary has a clear designation of sentential roles (e.g., event name, location, etc). However, the simplicity of LM, coupled with its minimal necessity for supervision, convinced us to focus on LM instead.\n\nSecond, while there are many other instances of the variable-binding framework (TPR being one of the most prominent examples), we decided to investigate HRR on computational grounds. This was explained in our background section (\u201cmakes HRR a more practical choice\u201d). We will elaborate on this point more in the updated version. \n\nOur claim in the paper is based on the two considerations above. We believe that both (a) and (b) should be investigated fully, but given that this is our initial attempt, we think it\u2019s reasonable to make some simplifying assumptions.\n\nWe hope this addresses your concern.\n\n(2) \u201cIt is also not clear to me in what way we can interpret the different filler embedding\u201d\n\nWe agree that the separation of semantics and syntax is not guaranteed. However, we did not claim it to be. We stated in the intro that our model can \u201ceffectively separate certain aspects of word or chunk representation, which roughly corresponds to a division between syntax and semantics\u201d. The vagueness of our statement is precisely due to the fact that our model doesn\u2019t have a \u201csyntax training loss\u201d or \u201csemantics training loss\u201d. In light of this, we argue that it is interesting and somewhat surprising that HRR-enabled models learned to separate these two aspects without a dedicated loss term. This goes to show that an inductive bias can be beneficial.\n\nWe also agree that we need to make a more comprehensive evaluation. 
We are currently expanding our experiments to more datasets (wiki, one-billion-word, possibly some domain-specific texts, or some subset of them), and we will provide an updated version as soon as possible. On the other hand, we would like to point out that for all our experiments on PTB, we observed a consistent pattern that the first role (the one without downweighting the dot product at the start of training) always corresponds more to syntax than semantics, regardless of hyperparameter setting or random seed. We think it\u2019s because syntactic cues/signals (e.g., POS tags) are relatively easier to identify than semantic ones (e.g., topic relatedness), and therefore the first set of embeddings tends to consistently capture the more syntactic aspect. Of course, this pattern will carry more weight if our new round of experiments also confirms it. \n\n(3) \u201cIn its current form, I found the experimental evaluation not convincing\u201d\n\nWe are fully aware that our baseline seems to underperform, as pointed out by reviewer 3 as well. First, we would like to point out that contrary to common practice in the LM literature, \u201cwe do not assume that the contiguous sentences in the raw data are fed sequentially as input\u201d, and as a result \u201cwe do not initialize the hidden state of LSTM with the last state from the last batch\u201d (section 4.1). We took this approach to ensure that chunk-level representations capture only intra-sentential roles -- we do not consider discourse-level features. 
The downside of this is that we can no longer rely on information from the last sentence to help predict the current one. \n\nMeanwhile, we are also running another word-level baseline that follows the common practice in the LM literature. We will update the results shortly.\"}", "{\"title\": \"Response to reviewer 3: Part 3\", \"comment\": \"(8) \u201cThe analysis in section 4.3\u201d\n\nThank you for the kind words. We will add more categories. We are also considering adding more datasets, and possibly adding more analysis in the appendix. \n\n(9) \u201cautomatic evaluation at chunk-level is challenging\u201d\n\nWe initially refrained from extracting roles from PTB because we couldn\u2019t find any existing script to do that. Of course, extracting phrases is easy but to our best knowledge not for roles. One way is to use dependency relations. But those are usually very diverse and nuanced, and we had concerns about how well our unsupervised method would fare. Another way is to use semantic role labeling, even though a similar concern arises. We are currently expanding our experiment section to these two tasks, which also addresses your concern here.\n\n(10) one-billion-word dataset\n\nWe reproduce our response to a relevant point raised by reviewer 2 below.\n\n\u201c...we are currently expanding our experiments to more datasets (wiki, one-billion-word, possibly some domain-specific texts, or some subset of them)\u201d\n\n\n(11-14)\n\nThanks for the comments. We will address these detailed issues in the updated version. \n\n(15) \u201cWhy not initialize the hidden state with the last state from the last batch\u201d\n\nWe reproduce our response to a relevant point raised by reviewer 2 below.\n\n\u201c...We took this approach to ensure that chunk-level representations capture only intra-sentential roles -- we do not consider discourse-level features. The downside of this is that we can no longer rely on information from the last sentence to help predict the current one.\u201d\n\n(16) \u201cHave you considered using more than two roles?\u201d\n\nYes indeed. However, we did not observe further decomposition on PTB. We suspect that there are two possibilities that need more consideration. First, the model simply needs more data to achieve decomposition into even more aspects. Second, the signal from LM might not be strong enough to induce even more separated aspects. We think that the second issue is outside the scope of the current submission, and as for the first issue, we are currently running experiments on a bigger scale, and will update our results shortly.\n\n(17) \u201cwriting, grammar\u201d\n\nNoted. Thanks for pointing them out.\"}", "{\"title\": \"Response to reviewer 3: Part 2\", \"comment\": \"(5) (i) \u201cweak baseline results\u201d\n\nThis is a very valid concern, which reviewer 2 also raised (point 3). We reproduce our response below. \n\n\u201cWe are fully aware that our baseline seems to underperform... \n\nOn the other hand, we are also running another word-level baseline that follows the common practice in the LM literature. We will update the results shortly.\u201d\n\n(ii) \u201cCan you speculate or analyze in more detail why the chunk-level model doesn't perform well, and why adding more fillers doesn't help in this case?\u201d\n\nWe speculate that it\u2019s because chunk prediction doesn\u2019t provide much complemental information for word prediction, and as a result, a competing/non-beneficial chunk prediction loss doesn\u2019t help bring the word-level loss further down. This could potentially be mitigated by a more powerful model (say, a bigger and deeper model), but it might cause more overfitting on PTB. Our preliminary results showed that introducing the chunk-level loss at a later stage of training helps bring down the perplexity, but we will provide more experimental results to make a conclusive judgment. \n\n(6) (i) \u201cThis claim should be qualified by the use of attention\u201d\n\nThanks for pointing this out. We will say a bit more in a footnote. We would also like to note that the use of attention is accompanied by what is essentially a memory mechanism (multiple embeddings). This is definitely related to using HRR as associative memory.\n\n(ii) \u201cwhat do you mean by these concepts and how exactly is the current approach limited with respect to them?\u201d\n\nThanks for pointing it out, and we will add more details in the updated version. By transparency and interpretability, we mean that the operations of encoding and decoding have clear conceptual meaning. In our case, it is manifested by the explicit role-filler binding. Transferability means that some features are only transferable in certain aspects. For instance, a separation of domain-specific features from domain-invariant features would help the latter transfer more easily to other related tasks. \n\n(7) \u201cSection 3.3 was not so clear to me:\u201d\n\nApologies for the confusion. We will make it clearer in the updated version. 
\\n\\n(i) r_i^{chunk}s are indeed shared by all words, but because we also have context-sensitive weights (step 1), the associated role for the chunk would be different. As for splitting the output vectors, we meant that we use the same LSTM to predict two vectors -- one for predicting next word, the other for predicting the chunk-specific role weights a\\u2019s. Sharing the same RNN hidden state is not necessary, but we found it effective without introducing another neural network.\\n\\n(ii) As for chunk prediction, it is done by concatenating the previous two chunk embeddings as input, and feed it through a linear layer followed by tanh (page 5, paragraph Prediction). The same form of loss function is used (sum of dot products), but the negative samples (in the denominator) are taken from the same batch (page 5, paragraph Decoding). We will add more details to these two paragraphs.\\n\\n(iii) As for the chunker, we fully acknowledge its limitation. It will be ideal if chunking is done jointly with LM, but it is outside the scope of this paper. However, using a chunker makes intuitive sense, and is analogous to what we have done to the word-level model. Specifically, the word-level model needs word boundaries, which are naturally provided by whitespaces for languages like English. Similarly, the chunk-level model needs chunk boundaries, which is provided by a chunker.\"}", "{\"title\": \"Response to reviewer 3: Part 1\", \"comment\": \"Thank you for your kind words and very detailed comments.\\n\\nBefore delving into the detail, we would appreciate it if you could elaborate a bit more on the concern about \\u201cthe validity of using HRR in this scenario\\u201d mentioned in the second paragraph? (a) Did you mean using HRR in language modeling? (b) If this is case, does it concern you because the choice of this task, or because of the inadequate baseline performance? Thank you very much!\\n\\n(1) \\u201cthe conditions when the approximate decoding via correlation holds\\u201d\\n\\nThanks for pointing this out. We will explain a bit more in our updated version. For our experiments though, only the models run with fixed basis embeddings are with mean zero and variance 1/n because they are randomly sampled and fixed throughout training. I agree that word embeddings and LSTM states do not typically exhibit such a distribution (especially iid condition). However, we would also like to make two points. First, past work [1] that successfully uses HRR as associative memory, where these conditions are also not explicitly met (or at least they didn\\u2019t show it). Second, in our case, the decomposed scoring function actually acts as an (soft) enforcer that makes sure decoding works properly. The loss would only go down when the predicted filler embedding (after decoding) is close to the original filler embedding (before encoding). This is largely mediated by dot product -- the more accurate the decoding is, the bigger the value of dot product is. \\n\\nAs for other conditions, it is also required that the dimensionality of the vector be sufficiently bigger than the number of stored items. 
This obviously holds in our case since we are only using a couple of variable bindings, and we will make it more clear in the updated version.\n\n(2) \u201cLearning separate bases for different role-filler bindings is said to encourage the model to learn a decomposition of word representation\u201d\n\nIf we understand the question correctly, for each word, there are two (equal to the number of roles) filler embeddings, which have separate bases. These filler embeddings are then bound with their associated role embeddings. In this sense, base filler embeddings and role embeddings are shared across all words, but not between roles. Our earlier experiments showed that without separating these bases, decomposition of representations did not occur. We think it makes sense intuitively -- a decomposition of representation usually necessitates a separation of feature space. Does this address your concern? \n\n(3) \u201cIt's not clear to me where in the overall model the next word is predicted\u201d\n\nWe apologize for this confusion. The decoding module in 1(b) corresponds to equation 5. Instead of using one dot product as in a vanilla LSTM, we use the sum of two dot products, each of which is responsible for one role-filler binding. \n\nIndeed, the score in equation 5 is used similarly to the one in equation 2. We will make this more clear in the updated version.\n\n(4) \u201cComparison to other methods for composing words\u201d\n\nIf we understand it correctly, you are referring to the word-level model since this is where we spent most time detailing and analyzing. As we argued in the response to reviewer 2 (point 4, reproduced below), we do not find any directly comparable method to the best of our knowledge. \n\n\u201c...There are certainly many existing methods that try to incorporate structures, but mostly to enhance their representation, not decompose their representation. Moreover, the unsupervised nature of our approach makes direct comparison even harder.\u201d\n\nThe cited work you provided [2], and also Socher\u2019s recursive neural networks, deal with composing phrases from individual words, which does not concern the decomposition of word representation. Moreover, recursive neural networks need additional input such as parse trees, which is definitely outside the scope of our paper. \n\nWe would like to emphasize that the main contribution is about the decomposition/separation of representations. This decomposition, in HRR\u2019s framework, is accompanied by the initial operation of encoding/composing. Due to the space limit, we do not fully investigate the potential advantage/disadvantage of using HRR as an encoder (compared to (say) Socher\u2019s work), but rather spend most of the time using HRR to set up a model that can induce decomposition. \n\nOf course, we can be totally ignorant of other directly comparable methods. If you have any specific method in mind, we would really appreciate it if you could provide us with some pointers.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for your interest in our work and your kind words regarding the direction our paper takes. We summarize all the concerns raised and provide a point-by-point response below.\n\n(1) \u201cIn this paper, it is not clear why random vectors have not been used\u201d\n\nWe have two points to make regarding this comment. First, we did experiment with using fixed random basis embeddings, be it basis role embeddings or basis filler embeddings. 
This is denoted by models with names Fixed-* in Table 1. This is also mentioned on Page 4, right below Figure 1: \u201cwe also consider using fixed random vectors for basis embeddings\u201d. Second, if you are referring to using random vectors for not just bases, but also other trainable word-embedding related parameters (such as s^w_i), we think it is better to treat them as learnable parameters since random vectors do not cluster together in a meaningful way that corresponds to natural language. \n\n(2) \u201cBut, how can this regularization function preserve the properties of the vectors such that when these vectors are composed the properties are preserved\u201d\n\nWe agree with this characterization. However, we want to restate the point we made in response to reviewer 3 (point 1, reproduced below):\n\n\u201c...in our case, the decomposed scoring function actually acts as a (soft) enforcer that makes sure decoding works properly. The loss would only go down when the predicted filler embedding (after decoding) is close to the original filler embedding (before encoding). This is largely mediated by the dot product -- the more accurate the decoding is, the bigger the value of the dot product is.\u201d\n\nAlthough it is not our intention to design a theoretically complete model that preserves the properties all the way through, we do mean to take advantage of HRR properties, combined with black-box modeling from neural networks. We believe this is a reasonable approach to take in order to make our model viable in the world of deep learning.\n\n(3) \u201cMoreover, the sentence \u2018this is computationally infeasible due to the vast number of unique chunks\u2019 is not completely true\u201d\n\nWe meant to say that directly extending our word-level model to chunk-level is not plausible, because for the word-level model, we designate a learnable vectorial parameter to each word type. By analogy, we would have to use a learnable vectorial parameter for each unique chunk type, which renders it intractable in our case. This is the sense in which we said \u201cthis is computationally infeasible\u201d.\n\nHopefully these responses address your concerns.\"}", "{\"title\": \"Thank you all for your reviews!\", \"comment\": \"We thank all the reviewers for their insightful comments and suggestions. We are aware of the general concern about the underperformance of baseline LMs. We will provide an updated submission shortly, with extended experimental analysis and improved baselines.\"}", "{\"title\": \"Decomposed Linguistic Representation with Holographic Reduced Representations\", \"review\": \"The paper proposes a new approach for neural language models based on holographic reduced representations (HRRs). The goal of the approach is to learn disentangled representations that separate different aspects of a term, such as its semantics and its syntax. For this purpose, the paper proposes models both on the word and chunk level. These models aim to disentangle the latent space by structuring it into different aspects via role-filler bindings.\n\nLearning disentangled representations is a promising research direction that fits well into ICLR. The paper proposes interesting ideas to achieve this goal\nin neural language models via HRRs. Compositional models like HRRs make a lot of sense for disentangling structure in the embedding space. Some of the experimental results seem to indicate that the proposed approach is indeed capable of discovering rough linguistic roles. 
However, I am currently concerned about different aspects of the paper:\n\n- From a modeling perspective, the paper seems to conflate two points: a) language modeling via role-filler/variable-binding models and b) holographic models as a specific instance of variable bindings. The benefits of HRRs (compared e.g., to tensor-product based models) are likely in terms of parameter efficiency. However, the benefits from a variable-binding approach for disentanglement should remain across the different binding operators. It would be good to separate these aspects and also evaluate other binding operators like tensor products in the experiments.\n\n- It is also not clear to me in what way we can interpret the different filler embeddings. The paper seems to argue that the two spaces correspond to semantics and syntax. However, this seems in no way guaranteed or enforced in the current model. For instance, on a different dataset, it could entirely be possible that the embedding spaces capture different aspects of polysemy. However, this is a central point of the paper and would require a more thorough analysis, either by a theoretical motivation or a more comprehensive evaluation across multiple datasets.\n\n- In its current form, I found the experimental evaluation not convincing. The qualitative analysis of filler embeddings is indeed interesting and promising. However, the comparison to baseline models is currently lacking. For instance, perplexity results are far from state of the art and more importantly below serious baselines. For instance, the RNN+LDA baseline from Mikolov (2012) already achieves a perplexity of 92.0 on PTB (best model in the paper is 92.4). State-of-the-art models achieve perplexities around 50 on PTB. Without an evaluation against proper baselines I find it difficult to accurately assess the benefits of these models. While language modeling in terms of perplexity is not necessarily a focus of this paper, my concern also translates to the remaining experiments as they use the same weak baseline.\n\n- Related to my point above, the experimental section would benefit significantly if the paper also included evaluations on downstream tasks and/or evaluated against existing methods to incorporate structure in language models.\n\nOverall, I found that the paper pursues interesting and promising ideas, but is currently not fully satisfying in terms of evaluation and discussion.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Back to the past\", \"review\": \"This paper is very interesting as it seems to bring the clock back to Holographic Reduced Representations (HRRs) and their role in Deep Learning. It is an important paper as it is always important to learn from the past. HRRs have been introduced as a form of representation that is invertible. There are two important aspects of this compositional representation: base vectors are generally drawn from a multivariate Gaussian distribution, and the vector composition operation is the circular convolution. In this paper, it is not clear why random vectors have not been used. It seems that everything is based on the fact that orthonormality is imposed with a regularization function. 
But how can this regularization function preserve the properties of the vectors such that, when these vectors are composed, the properties are preserved?\n\nMoreover, the sentence \"this is computationally infeasible due to the vast number of unique chunks\" is not completely true, as HRRs have been used to represent trees in \"Distributed Tree Kernels\" by modifying the composition operation into a shuffled circular convolution.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novel approach to learning decomposable representations; some unclear parts and questionable validity; weak performance\", \"review\": \"Summary:\n========\nThis paper proposes a method for learning decomposable representations in the context of a language modeling task. Using holographic reduced representations (HRR), a word embedding is composed of a role and a filler. The embedding is then fed to an LSTM language model. There is also an extension to chunk-level representations. Experimentally, the model achieves perplexity comparable to a (weak) baseline LSTM model. The analysis of the learned representations shows a separation into syntactic and semantic roles. \n\nThe paper targets an important problem, that of learning decomposable representations. As far as I know, it introduces a novel perspective using HRR and does so in the context of language modeling, which is a core NLP task. The analysis of the learned representations is quite interesting. I do have some concerns with regard to the quality of the language model, the clarity of some of the model description, and the validity of using HRR in this scenario. Please see detailed comments below.\n\nComments:\n=========\n1. Section 2 refers to Plate (1995) for the conditions when the approximate decoding via correlation holds. I think it's important to mention these conditions and discuss whether they apply to the language modeling case. In particular, Plate mentions that the elements of each vector need to be iid with mean zero and variance 1/n (where n is the length of the vector). Is this true for the present case? Typically, word embeddings and LSTM states do not exhibit this distribution. Are there other conditions that are (not) met?\n2. Learning separate bases for different role-filler bindings is said to encourage the model to learn a decomposition of word representation. On the other hand, if I understand correctly, this means that word embeddings are not shared between roles, because s^w_i is also a role-specific vector (not just a word-specific vector). Is that a cause for concern? \n3. It's not clear to me where in the overall model the next word is predicted. Figure 1b has an LSTM that predicts filler embeddings. Does this replace predicting the next word in a vanilla LSTM? Equation 5 still computes a word score. Is this used to compute the probability of the next word as in equation 2? \n4. Comparison to other methods for composing words. Since much of the paper is concerned with composing words, it seems natural to compare the methods (and maybe some of the results) to methods for composing words. Some examples include [2] and the line of work on recursive neural networks by Socher et al., but there are many others. \n5. Perplexity results:\n- The baseline results (100.5 ppl on PTB) are very weak for an LSTM. 
There are multiple papers showing that a simple LSTM can do much better. The heavily tuned LSTM of [1] gets 59.6, but even less tuned LSTMs go under 80 ppl. See some results in [1]. This raises a concern that the improvements from the HRR model may not be significant. Would they hold in a more competitive model? \n- Can you speculate or analyze in more detail why the chunk-level model doesn't perform well, and why adding more fillers doesn't help in this case? \n6. Motivation: \n- The introduction claims that the dominant encoder-decoder paradigm learns \"transformations from many smaller comprising units to one complex embedding, and vice versa\". This claim should be qualified by the use of attention, where there is not a single complex embedding, rather a distribution over multiple embeddings. \n- Introduction, first paragraph, claims that \"such crude way of representing the structure is unsatisfactory, due to a lack of transparency, interpretability or transferability\" - what do you mean by these concepts and how exactly is the current approach limited with respect to them? Giving a bit more details about this point here or elsewhere in the paper would help motivate the work. \n7. Section 3.3 was not so clear to me:\n- In step 1, what are these r_i^{chunk}? Should we assume that all chunks have the same role embeddings, despite them potentially being syntactically different? How do you determine where to split output vectors from the RNN to two parts? What is the motivation for doing this?\n- In prediction, how do you predict the next chunk embedding? Is there a different loss function for this? \n- Please provide more details on decoding, such as the mentioned annealing and regularization. \n- Finally, the reliance on a chunker is quite limiting. These may not always be available or of high quality. \n8. The analysis in section 4.3 is very interesting and compelling. Figure 2 makes a good point. I would have liked to see more analysis along these lines. For example, more discussion of the word analogy results, including categories where HRR does not do better than the baseline. Also consider other analogy datasets that capture different aspects. \n9. While I agree that automatic evaluation at chunk-level is challenging, I think more can be done. For instance, annotations in PTB can be used to automatically assign roles such as those in table 4, or others (there are plenty of annotations on PTB), and then to evaluate clustering along different annotations at a larger scale. \n10. The introduction mentions a subset of the one billion word LM dataset (why a subset?), but then the rest of the paper evaluates only on PTB. Is this additional dataset used or not? \n11. Introduction, first paragraph, last sentence: \"much previous work\" - please cite such relevant work on inducing disentangled representations.\n12. Please improve the visibility of Figure 1. Some symbols are hard to see when printed. \n13. More details on the regularization on basis embeddings (page 4) would be useful. \n14. Section 3.3 says that each unique word token is assigned a vectorial parameter. Should this be word type? \n15. Why not initialize the hidden state with the last state from the last batch? I understand that this is done to ensure that the chunk-level models only consider intra-sentential information, but why is this desired? \n16. Have you considered using more than two roles? I wonder how figure 2 would look in this case. 
\n\n\nWriting, grammar, etc.:\n====================== \n- End of section 1: Our papers -> Our paper\n- Section 2: such approach -> such an approach; HRR use -> HRR uses; three operations -> three operations*:*\n- Section 3.1: \"the next token w_t\" - should this be w_{t+1}? \n- Section 3.2, decoding: remain -> remains \n- Section 3.3: work token -> word token \n- Section 4.1: word analogy task -> a word analogy task; number basis -> numbers of basis\n- Section 4.2: that the increasing -> that increasing \n- Section 4.3: no space before comma (first paragraph); on word analogy task -> on a word analogy task; belong -> belongs\n- Section 4.4: performed similar -> performed a similar; luster -> cluster \n- Section 5: these work -> these works/papers/studies; share common goal -> share a common goal; we makes -> we make; has been -> have been \n\nReferences\n==========\n[1] Melis et al., On the State of the Art of Evaluation in Neural Language Models\n[2] Mitchell and Lapata, Vector-based Models of Semantic Composition", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}" ] }
Hyghb2Rct7
SIMILE: Introducing Sequential Information towards More Effective Imitation Learning
[ "Yutong Bai", "Lingxi Xie" ]
Reinforcement learning (RL) is a metaheuristic aiming at teaching an agent to interact with an environment and maximizing the reward in a complex task. RL algorithms often encounter the difficulty in defining a reward function in a sparse solution space. Imitation learning (IL) deals with this issue by providing a few expert demonstrations, and then either mimicking the expert's behavior (behavioral cloning, BC) or recovering the reward function by assuming the optimality of the expert (inverse reinforcement learning, IRL). Conventional IL approaches formulate the agent policy by mapping one single state to a distribution over actions, which did not consider sequential information. This strategy can be less accurate especially in IL, a weakly supervised learning environment, especially when the number of expert demonstrations is limited. This paper presents an effective approach named Sequential IMItation LEarning (SIMILE). The core idea is to introduce sequential information, so that an agent can refer to both the current state and past state-action pairs to make a decision. We formulate our approach into a recurrent model, and instantiate it using LSTM so as to fuse both long-term and short-term information. SIMILE is a generalized IL framework which is easily applied to BL and IRL, two major types of IL algorithms. Experiments are performed on several robot controlling tasks in OpenAI Gym. SIMILE not only achieves performance gain over the baseline approaches, but also enjoys the benefit of faster convergence and better stability of testing performance. These advantages verify a higher learning efficiency of SIMILE, and implies its potential applications in real-world scenarios, i.e., when the agent-environment interaction is more difficult and/or expensive.
[ "Reinforcement Learning", "Imitation Learning", "Sequential Information" ]
https://openreview.net/pdf?id=Hyghb2Rct7
https://openreview.net/forum?id=Hyghb2Rct7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlcGPNgeN", "rkxIWmILCm", "B1elsABUA7", "r1xyYnSLRQ", "H1g1CWYinX", "H1lsdScYhm", "HkescKAB2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544730385901, 1543033598177, 1543032471591, 1543031926821, 1541276103322, 1541150066918, 1540905362983 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1208/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1208/Authors" ], [ "ICLR.cc/2019/Conference/Paper1208/Authors" ], [ "ICLR.cc/2019/Conference/Paper1208/Authors" ], [ "ICLR.cc/2019/Conference/Paper1208/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1208/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1208/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper explores the use of sequential information to improve imitation learning, essentially using recurrent networks (LSTM) instead of a simple NN in several existing imitation learning models (BC, GAIL, etc.). On the positive side, the empirical results are good, showing improvement in terms of attained rewards, convergence speed and stability. There are however some significant issues with the way the way the approach is motivated and positioned with respect to existsing work. In particular, the issue described in the paper is due to the fact they consider POMDPs (not MDPs): this should have been more clearly explained. There are also issues with the Related Work section. For these reasons, the paper is not quite ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper should discuss and account for partial observability\"}", "{\"title\": \"We thank the reviewer for valuable comments\", \"comment\": \"We thank the reviewer for valuable comments. While the idea is simple and the contribution in term of model seems small, we posed an important problem that sequential information is important in the RL-related approaches.\\n\\nWe are sorry that we have missed the connection between this work and POMDPs. The mentioned papers will be cited and discussed thoroughly. We provide a simple solution to RL in POMDPs, which is the major contribution of this work.\\n\\nRegarding \\\"lack of good experimental results\\\", we achieved much better scores in most scenarios, and in some of them, we achieved better performance than human experts.\\n\\nRegarding experimental details, we can provide more in the revised version.\\n\\nThanks again for helping us improve the quality of this paper.\"}", "{\"title\": \"We thank the reviewer for valuable comments\", \"comment\": \"We thank the reviewer for valuable comments. While our idea is straightforward, it reveals the importance of introducing sequential information into these scenarios. This topic was not clearly studied before.\\n\\nWe totally agree with, and thank the reviewer on connecting this work with POMDPs. The JMLR'11 paper will be cited and discussed.\\n\\nRegarding the comments on terminologies, we will remove the statement that RL is a metaheuristic (this is not a major statement in this paper), as well as use a better way of organizing imitation learning, behavioral cloning and inverse reinforcement learning (again, this does not harm the contribution of this paper).\\n\\nSorry for the carelessness in the Related Work section. We will improve it.\\n\\nThe last paragraph of Section 3.4 was not well written. 
Since it is straightforward, we are considering removing this paragraph, which would not harm the content of this paper. The core idea was to emphasize that our way of introducing sequential information is essentially different from the strategy of using a discounted factor.\"}", "{\"title\": \"We thank the reviewer for valuable comments\", \"comment\": \"We thank the reviewer for valuable comments. We are grateful that the reviewer recognized the value of this work: it adds sequential information, which improves the overall performance of both IRL and BC algorithms.\\n\\nIn addition, we totally agree that investigating the relationship between how the addition of sequential information adds value and the validity of the Markovian assumption is worthwhile. We will try to provide some qualitative and quantitative analysis in future revisions.\"}", "{\"title\": \"First review\", \"review\": \"This paper introduces the use of sequential information (state-action pairs) for enhancing imitation learning, using recurrent networks (LSTM) in that process. The authors motivate this by pointing out that while the state information, if Markovian, should contain all information necessary for decision making, with imperfect learners the redundant information in the sequence of state-action pairs leading to the current state can be helpful, citing some concrete examples.\\nAfter describing a number of variants of this idea, in the context of IRL, BC, etc., the authors conduct a systematic empirical evaluation to assess the effectiveness of the proposal, over the baselines, using a number of RL benchmark problems. \\nThe results are favorable and convincingly show that the proposed sequential enhancement can bring significant improvement in terms of attained rewards, convergence speed and stability in many of the tested cases. \\nOne suggestion I have is that it would be interesting to investigate how the value added by the sequential information relates to the validity of the Markovian assumption in each of the problems being considered. \\nIt is a good empirical paper demonstrating the practical use of an idea that is simple but reasonable, and in a way that is substantiated using proper cutting-edge frameworks and baselines.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Straightforward extension of learning-from-demonstration approaches to exploit recurrent neural networks\", \"review\": \"The paper puts forward the idea of using a recurrent neural network in algorithms for learning from demonstration in order to take into account sequential information. The authors test it in the inverse reinforcement learning setting and the behavioral cloning setting on different control problems.\\n\\nI feel the basic idea is really straightforward. Although some promising results are obtained in the experimental setting, I believe the contribution may not be sufficient for a publication at ICLR. Moreover, there are some issues in the writing, e.g., \\n\\n- classically, as far as I know, RL is not considered to be a metaheuristic, although I understand that someone could make the case for it.\\n\\n- although there\\u2019s not really a consensus on terminology, I think using imitation learning to define the whole class of problems encompassing IRL and behavioral cloning is not the best. Generally, imitation learning is equated to behavioral cloning.
I think a better term for this general class is learning from demonstration. For instance, there are some IRL approaches that don\u2019t try to mimic a demonstrated policy, but aim at learning an even better policy.\n\n- the issue described in the paper about the missing sequential information is due to the fact that the authors consider POMDPs and not MDPs. This should be made clearer. I think the authors should also cite the following paper:\n\n@article{ChoiKim11,\n\tAuthor = {Jaedeug Choi and Kee-Eung Kim},\n\tJournal = {JMLR},\n\tPages = {691--730},\n\tTitle = {Inverse Reinforcement Learning in Partially Observable Environments},\n\tVolume = {12},\n\tYear = {2011}}\n\n- the related work has to be reworked. Kuderer et al. (2013) is not about urban route planning, but deals with learning driving style; Mnih et al. (2015) is not about training multi-agent systems, but introduces DQN; Silver et al. (2016) is about go, not chess. Are TRPO or PPO really off-policy or asynchronous?\n\n- the last section of Sec. 3.4 sounds strange. It\u2019s not MC that assumes that the impact of an action decays with time. The discount factor comes from the choice of the total discounted reward criterion.\n\nOther comments:\n- in abstract: BL -> BC\n- notation issues in (2-5)\n- l.6-7, Algo 1: t = T_m?\n- The text should be checked for typos.", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}", "{\"title\": \"Review\", \"review\": \"The paper proposes to integrate sequential information into imitation learning techniques. The assumption is that most IL techniques learn a policy which depends on the state at time t, while the information contained in this state may not be sufficient to choose the right action (actually, this is the POMDP setting, though the notion of POMDP does not appear in the paper). The authors thus propose to use a recurrent neural network to encode the state by aggregating past information, instead of just using the features of the state at time t. They thus instantiate this idea on different methods and show that, on some problems, this approach can increase the quality of the final policy.\\n\\nActually, the contribution of the paper is a simple extension of existing methods: using a RNN instead of a simple NN in imitation learning models. First of all, when dealing with classical environments such as Atari, many papers propose to use the last N frames as a state encoding (instead of the last frame), following the same intuition. The studied setting thus corresponds to the POMDP case, and using a RNN in POMDPs is for example what is done in [Merel et al. 2017]. Moreover, the problem of imitation learning (and particularly inverse RL) in POMDPs has been of interest to many papers, like [Choi et al. 2008] for instance and many more, and it is unclear what the positioning of this paper is w.r.t. existing works. Since the paper proposes just to encode history with a RNN, the proposed solution lacks originality, and the contribution of the paper in terms of model is quite low. But the authors explain how this can be instantiated in three different settings (IRL, GAIL and BC) -- note that the section concerning the use of Adaboost is not clear and could be better described -- which can be of interest to the community.
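(For concreteness, an illustrative PyTorch sketch of what the behavioral-cloning instantiation of this idea could look like: the policy conditions on an LSTM summary of past state-action pairs instead of on s_t alone. All dimensions, names, and the regression loss are assumptions for this sketch, not details taken from the paper.)

```python
import torch
import torch.nn as nn

class RecurrentBCPolicy(nn.Module):
    """Behavioral cloning with an LSTM over past (state, action) pairs."""
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + action_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + state_dim, action_dim)

    def forward(self, history, current_state):
        # history: (batch, T, state_dim + action_dim) past state-action pairs
        # current_state: (batch, state_dim)
        _, (h, _) = self.lstm(history)           # h: (1, batch, hidden_dim)
        summary = h.squeeze(0)                   # learned summary of the past
        return self.head(torch.cat([summary, current_state], dim=-1))

# standard BC training step: regress onto the expert's action
policy = RecurrentBCPolicy(state_dim=8, action_dim=2)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
history = torch.randn(32, 10, 8 + 2)             # dummy expert trajectories
state, expert_action = torch.randn(32, 8), torch.randn(32, 2)
loss = nn.functional.mse_loss(policy(history, state), expert_action)
opt.zero_grad(); loss.backward(); opt.step()
```

The last-N-frames stacking mentioned above would correspond to replacing the learned LSTM summary with a fixed-length concatenation of recent observations; the recurrent version trades that hard window for a learned, unbounded one.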
\nConcerning the experiments, I don't understand what the split between training and testing data is. Is it state-action pairs coming from the experts, or trajectories? Moreover, I don't understand why these environments correspond to POMDP cases, and the authors have to give details on that. For instance, mountain-car is clearly not a POMDP problem in its classical shape, nor is Acrobot. As is, this makes the experiments very difficult to reproduce. The interest of using the RNN to encode history does not seem clear for each of the cases since it often degrades the final performance, so I don't know exactly what insights I can extract from the paper.\n\nPro:\n* The approach is proposed for IRL, GAIL and BC\n\nCons:\n* Lack of positioning w.r.t. POMDP literature\n* Lack of details in the experiments, and lack of good experimental results\n* Low contribution in terms of model\n\n\n[Merel et al. 2017] Learning human behaviors from motion capture by adversarial imitation\n[Choi et al.] Inverse Reinforcement Learning in Partially Observable Environments", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}" ] }
S1lhbnRqF7
Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension
[ "Rajarshi Das", "Tsendsuren Munkhdalai", "Xingdi Yuan", "Adam Trischler", "Andrew McCallum" ]
We propose a neural machine-reading model that constructs dynamic knowledge graphs from procedural text. It builds these graphs recurrently for each step of the described procedure, and uses them to track the evolving states of participant entities. We harness and extend a recently proposed machine reading comprehension(MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans. The explicit, structured, and evolving knowledge graph representations that our model constructs can be used in downstream question answering tasks to improve machine comprehension of text, as we demonstrate empirically. On two comprehension tasks from the recently proposed ProPara dataset, our model achieves state-of-the-art results. We further show that our model is competitive on the Recipes dataset, suggesting it may be generally applicable.
[ "recurrent graph networks", "dynamic knowledge base construction", "entity state tracking", "machine reading comprehension" ]
https://openreview.net/pdf?id=S1lhbnRqF7
https://openreview.net/forum?id=S1lhbnRqF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1l4mLAge4", "S1e-ClGZkV", "r1gfdkBXAm", "r1e8lyS7RQ", "S1xtKI4mAX", "H1e6Tr4X0Q", "HkeB75RZRQ", "H1gOvMYT37", "rkxUMYW6hX", "Sklcn-_c3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544771099883, 1543737544945, 1542831977520, 1542831853553, 1542829696857, 1542829509012, 1542740508686, 1541407328397, 1541376269679, 1541206450265 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1207/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1207/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1207/Authors" ], [ "ICLR.cc/2019/Conference/Paper1207/Authors" ], [ "ICLR.cc/2019/Conference/Paper1207/Authors" ], [ "ICLR.cc/2019/Conference/Paper1207/Authors" ], [ "ICLR.cc/2019/Conference/Paper1207/Authors" ], [ "ICLR.cc/2019/Conference/Paper1207/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1207/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1207/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper investigates a new approach to machine reading for procedural text, where the task of reading comprehension is formulated as dynamic construction of a procedural knowledge graph. The proposed model constructs a recurrent knowledge graph (as a bipartite graph between entities and location nodes) and tracks the entity states for two domains: scientific processes and recipes.\", \"pros\": \"The idea of formulating reading comprehension as dynamic construction of a knowledge graph is novel and interesting. The proposed model is tested on two different domains: scientific processes (ProPara) and cooking recipes.\", \"cons\": \"The initial submission didn't have the experimental results on the full recipe dataset and also had several clarity issues, all of which have been resolved through the rebuttal.\", \"verdict\": \"Accept. An interesting task & models with solid empirical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting task & models with solid empirical results.\"}", "{\"title\": \"Thanks for clarification\", \"comment\": \"Thank you for the clarification. Now the paper seems to be more clear than before. As my major concern was readability and it is now enhanced, I am raising my score to 7. Great work.\"}", "{\"title\": \"Response to Reviewer 3 comments (contd.)\", \"comment\": \"Response continued from above.\\n\\nWhy does the graph update require coreference pooling again? Don't the updates in Eq 1 and 2 take care of this? The ablation does not test this, right?\\n=====\\nWe agree that the coreference pooling in the graph update seems repetitive at first glance. We have further clarified the explanation given in the text and included another ablation experiment (row 4 of Table 5) to confirm its usefulness. This step does indeed repeat Eq. 2. In a nutshell, this is necessary because, after the recurrent and residual graph updates (Eqs 3.1 - 3.3) that propagate information across edges, we may end up with different representations for location nodes corresponding to the same location. We don\\u2019t want these representations to diverge from each other because of information propagation.\", \"to_give_you_more_detail\": \"The graph update step ensures information propagation between entities and location representations. 
Specifically, if the current location of entity \u201ce_t\u201d is predicted as \u201c\\lambda_t\u201d, the graph update steps ensure that both the entity and location representations get the same update (via eq 3.2 and 3.3). This would have been sufficient if every entity had a unique location. But multiple entities can actually exist in the same location. Let\u2019s consider this small graph below:\n\nWater --> leaf\nCO_2 --> leaf\n\nHere both water and CO_2 exist in the same location, leaf. But let\u2019s say that the MRC model picked the \u201cleaf\u201d span from sentence 1 (of the text in Fig 2) for \u201cWater\u201d and from sentence 4 for CO_2. In reality, they refer to the same location entity \u201cleaf\u201d. Now, due to eq. 3.3, the two embeddings of leaf will get two different residual updates (one corresponding to Water and the other to CO_2). Because of the different updates, the two representations of the same entity might diverge. To remedy this, we re-use the coreference matrix \u201cU\u201d we create in eq. (2), which should already have a high attention score corresponding to the two leaf locations. Thus we perform a similar operation to the intra-graph update.\n====\nAnother modeling choice that is not clear is regarding how the model processes the text -- reading prefixes of the paragraph, rather than one sentence at a time. What happens if the model is changed to read one sentence at a time?\n====\nThe \u201cprefixes\u201d that our model reads at each time step comprise all sentences up to and including the current sentence s_t. The motivation for this modeling choice was empirical. In our preliminary experiments we evaluated alternative strategies, such as (a) only considering the current sentence s_t, and (b) considering the entire paragraph at every time step. We found that operating on prefixes performed best. This is in line with the findings of Dalvi et al., 2018, where the Pro-Global model (which uses prefixes) performs better than the Pro-Local model (which operates on single sentences).\"}", "{\"title\": \"Response to Reviewer 3 comments\", \"comment\": \"Thanks for the insightful comments. We\\u2019ve tried to improve our paper based on your feedback. Most significantly, we\\u2019ve performed additional ablation studies to confirm that our modeling choices improve performance, and we provide further empirical insight on what the coreference operations do. We\\u2019ve also updated the model description and the notation in Section 4 to clarify modeling mechanisms and choices. Two important additions are a high-level summary of the model, which we give at the beginning of Section 4, and a table (Table 2) that lists what each symbol represents along with its dimensions. Below we address your concerns point-by-point.\\n\\nThe proposed method seems plausible, but some details are impressionistic and it is not clear why and whether the modeling choices do what the paper says. This is especially the case in a few places involving coreference:\\n1. The paper says at the top of page 6 that the result of Eq 1 is a disambiguated intermediate node representation.\\n2.
The self attention in Eq 2 performs coreference disambiguation which prevents different instances of the same location from being predicted for multiple entities.\nWhile these may indeed be working as advertised, it would be good to see some evaluation that verifies that after learning, what is actually happening is coreference.\n======\nBased on your comments, we\u2019ve performed additional ablations to measure the impact of the co-reference mechanisms. We find that removing any of them leads to a decrease in performance (Rows 2, 3, 4 of Table 5).\n\nTo provide more than just this quantitative insight, we\u2019ll expand here on how KG-MRC handles coreference to better motivate the modeling choices:\nThe construction of graph G_t from G_{t-1} uses co-reference disambiguation of nodes to prevent node duplication and to enforce temporal dependencies. We perform coreference disambiguation between location nodes of G_t and G_{t-1} via Eq. 1 (call this inter-graph coreference) and between the location nodes in the same graph G_t (call this intra-graph coreference) via Eq. 2. The inter-graph coreference yields new, intermediate representations for the nodes in G_t. These are further updated via the intra-graph coreference step.\n\nInter-graph co-ref: One way to think about this is that we construct a new graph G_t at every time step. Now the graph G_{t-1} might contain some location nodes which are predicted again at time step \u2018t\u2019 (e.g., in Figure 2, the leaf node already existed in G_{t-1}). Instead of replacing an old node with an entirely new node at \u2018t\u2019, we take a recurrent approach and do a gated update that preserves some information stored in the node in previous time steps while adding new information unique to time step \u2018t\u2019.\n\nIntra-graph co-ref: Inter-graph co-ref isn\u2019t enough since the MRC module makes its span predictions independently. This means that, at time step t, the model could predict the same span/location for multiple entities and add all these duplicates to the graph. Moreover, a single location might have the same surface form but be predicted from different parts of the paragraph (e.g. \u201cleaf\u201d in the 1st and the 5th sentence of the para in figure 2). The operations in Eq. 2 resolve this by performing self-attention (i.e., the predicted locations of all entities are compared to each other).\n=====\"}
\\n\\nWe agree that more detail would help readers to understand the model better. We\\u2019ve made some hopefully significant updates to Section 4 (model description and notation) to improve clarity, and we hope you\\u2019ll take the time to read the new manuscript. Two important additions are a high-level summary of the model, which we give at the beginning of Section 4, and a table (Table 2) that lists what each symbol represents along with its dimensions.\\n\\n3. What are the results when using the whole training set of Recipes ?\\n\\nWe\\u2019ve completed an experiment on the full Recipes dataset and updated the paper to describe the result (this experiment did not finish in time for the initial submission). The model\\u2019s F1 score improves from 51.64 on the partial data to 54.27 on the full data, surpassing the previous state of the art by a more significant margin.\"}", "{\"title\": \"Response to Reviewer 1 comments\", \"comment\": \"Thank you for the useful feedback. We\\u2019ve updated our paper to take it into account -- we\\u2019ve updated the model description and the notation in Section 4 to clarify our method. Two important additions are a high-level summary of the model, which we give at the beginning of Section 4, and a table (Table 2) that lists what each symbol represents along with its dimensions. We also made several updates that address your specific questions.\\n\\n1. Are e_{i,t} and lambda_{i,t} vectors? Scalars? Abstract node notations? It is not clear in the model section. Also, it took me a long time to figure out that \\u2018i\\u2019 is used to index each entity (it is mentioned later).\\n\\nThe entity and location embeddings e_{i,t} and lambda_{i,t} are d-dimensional vectors, although we also overload the symbols to refer to abstract nodes in the model\\u2019s knowledge graphs. In the updated manuscript we state both these facts explicitly and state much earlier that \\u2018i\\u2019 is the index for entities.\\n\\n2. The paper says v_i (initial representation of each entity) is obtained by looking at the contextualized representations (LSTM outputs) of entity mention in the context. What happens if there are multiple mentions in the text? Which one does it look at?\\n\\nWhen there are multiple mentions of entity i, the initial representation v_i is formed by summing the representations of each mention. We have updated the paper to clarify this (Sec 4.1).\\n\\n3. For the LSTM in the graph update, why does it have only one input? Shouldn\\u2019t it have two inputs, one for previous hidden state and the other for input?\\n\\nGood point! We\\u2019ve improved the notation used to describe the model in Section 4. The update equation now shows clearly that the LSTM takes in the concatenation of two node inputs (entity and location embeddings) along with the previous hidden state.\\n\\n4. Regarding Recipe experiments, the paper says it reaches a better performance than the baseline using just 10k examples out of 60k. This is great, but could you also report the number when the full dataset is used?\\n\\nWe\\u2019ve completed an experiment on the full Recipes dataset and updated the paper to describe the result (this experiment did not finish in time for the initial submission). The model\\u2019s F1 score improves from 51.64 on the partial data to 54.27 on the full data, surpassing the previous state of the art by a more significant margin.\\n\\n5. 
What does it mean that in training time the model \\u201cupdates\\u201d the location node representation with the encoding of the correct span. Do you mean you use the encoding instead?\\n\\nWe meant that we perform teacher-forcing to train the model. During training, we extract the context encodings for the groundtruth span and use these in downstream operations to obtain the node representations. At test time, we use the MRC module\\u2019s predicted span rather than the groundtruth.\\n\\n6. For ProPara task 2, what threshold did you choose to obtain the P/R/F1 score? Is it the threshold that maximizes F1?\\n\\nFor ProPara task 2, our model was optimized for micro averaged F1 on the development set. Tandon et al. (2018) were kind enough to provide us with their evaluation script.\"}", "{\"title\": \"Summary of updates\", \"comment\": \"Based on the insightful feedback from our reviewers, we\\u2019ve updated our paper and believe it is substantially improved. Below we summarize the general changes, and in responses to individual reviewers, we respond directly to their comments/questions.\\n\\nFull Recipes experiment (Section 5.2):\\nWe completed an experiment on the full Recipes dataset and updated the paper with the result. This experiment did not finish in time for the initial submission, so we only had results from a model trained on partial data. The model\\u2019s F1 score improved from 51.64 on the partial data to 54.27 on the full data, surpassing the previous state of the art (51.27) by a more significant margin.\\n\\nAdditional ablations (Table 5):\\nTo demonstrate more clearly the impact of several modelling choices, we\\u2019ve completed additional ablation experiments. Specifically, these measure the performance contributions of the model\\u2019s coreference operations and show that they are important.\\n\\nUpdate to results on commonsense constraints (Table 6):\\nAfter submission, we discovered a string-matching bug in the script that calculates commonsense constraint violations. Correcting this bug changes our results slightly, although the general takeaway is the same. KG-MRC still does not violate any commonsense constraints of Types 1 and 2 (as defined in ProStruct (Tandon et al., 2018)), but we find that both our model and ProStruct violate a small number of Type 3 constraints -- KG-MRC notably makes proportionally fewer violations than ProStruct (4.1% vs 6.3%). We also report violation numbers for several ablated variants of our model and find that they consistently perform worse than the full model. These results are all summarized in Table 6 of the updated manuscript.\\n\\nImproved Section 4\\nWe received feedback that additional details and notational changes in the model description would help readers to understand the model better. We, therefore, made some hopefully significant updates to Section 4 to improve clarity. Two important additions are a high-level summary of the model, which we give at the beginning of Section 4, and a table (Table 2) that lists what each variable represents along with its dimensions.\"}", "{\"title\": \"Good ideas and results, could use some work with explanation\", \"review\": \"* Summary\\nThis paper addresses machine reading tasks involving tracking the states of entities over text. To this end, it proposes constructing a knowledge graph using recurrent updates over the sentences of the text, and using the graph representation to condition a reading comprehension module. 
The paper reports positive evaluations on three different tasks.\n\n* Review\n\nThis is an interesting paper. The key technical component in the proposed approach is the idea that keeping track of entity states requires (soft) coreference between newly read entities and locations and the ones existing in the knowledge graph constructed so far.\n\nThe proposed method seems plausible, but some details are impressionistic and it is not clear why and whether the modeling choices do what the paper says. This is especially the case in a few places involving coreference:\n1. The paper says at the top of page 6 that the result of Eq 1 is a disambiguated intermediate node representation.\n2. The self attention in Eq 2 performs coreference disambiguation which prevents different instances of the same location from being predicted for multiple entities.\n\nWhile these may indeed be working as advertised, it would be good to see some evaluation that verifies that after learning, what is actually happening is coreference.\n\nWhy does the graph update require coreference pooling again? Don't the updates in Eq 1 and 2 take care of this? The ablation does not test this, right?\n\nAnother modeling choice that is not clear is regarding how the model processes the text -- reading prefixes of the paragraph, rather than one sentence at a time. What happens if the model is changed to read one sentence at a time?\n\nThat the model implicitly learns constraints from data is interesting!\n\nBottomline: The paper presents interesting ideas and good results, but would be better if the modeling choices were better explored/motivated.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}", "{\"title\": \"Meaningful contribution, but hard to read\", \"review\": \"The paper proposes a recurrent knowledge graph (bipartite graph between entities and location nodes) construction & updating mechanism for entity state tracking datasets such as the (two) ProPara tasks and Recipes. The model goes through the following three steps: 1) It reads a sentence at each time step t and identifies the location of each entity via a machine reading comprehension model such as DrQA (entities are predefined). 2) A co-reference module adjusts relationship scores (a soft adjacency matrix) among nodes, including possibly new nodes introduced by the MRC model. 3) To propagate the relational information across all the nodes, the model performs L layers of LSTM updates for each entity that attend to other nodes (where the weights come from the adjacency matrix). The model repeats the three steps for each sentence. The model is trained by directly supervising the MRC model with the correct span at each time step, which is possible because the data provides strong supervision for each sentence (not just the answer at the end).\\nThe model achieves the state of the art on the two ProPara tasks and the Recipes dataset.\\n\\nStrengths: The paper provides an elegant solution for tracking relationships between entities as time (sentence) progresses. I also agree with the authors that this line of work (dynamic KG construction and modification) is an important area of research. While the model shares a similar spirit to EntNet, I think the model has enough distinctions / contributions, especially given that it outperforms EntNet by a large margin.
The model also obtains non-trivial improvement over previous SOTA models.\n\nWeaknesses: The paper could have been written better. I had a hard time understanding it. The notations are overall confusing and not explained well. Also there are a few unclear parts, which I discuss in the questions below.\n\nQuestions:\n1. Are e_{i,t} and lambda_{i,t} vectors? Scalars? Abstract node notations? It is not clear in the model section. Also, it took me a long time to figure out that \u2018i\u2019 is used to index each entity (it is mentioned later).\n2. The paper says v_i (the initial representation of each entity) is obtained by looking at the contextualized representations (LSTM outputs) of the entity mention in the context. What happens if there are multiple mentions in the text? Which one does it look at?\n3. For the LSTM in the graph update, why does it have only one input? Shouldn\u2019t it have two inputs, one for the previous hidden state and the other for the input?\n4. Regarding the Recipes experiments, the paper says it reaches a better performance than the baseline using just 10k examples out of 60k. This is great, but could you also report the number when the full dataset is used?\n5. What does it mean that at training time the model \u201cupdates\u201d the location node representation with the encoding of the correct span? Do you mean you use the encoding instead?\n6. For ProPara task 2, what threshold did you choose to obtain the P/R/F1 score? Is it the threshold that maximizes F1?", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}", "{\"title\": \"Contributions are novel and convinced about significance\", \"review\": \"The paper addresses a challenging problem of predicting the states of entities over the description of a process. The paper is very well written, and easily understandable. The authors propose a graph structure for entity states, which is updated at each step using the outputs of a machine comprehension system. The approach is novel and well motivated. I will suggest a few improvements:\\n\\n1. The NPN model seems like a good alternative; it would be good to have a discussion about why your model is better than NPN. Also, NPN can probably be modified to output spans of a sentence. I will be curious to know how it performs.\\n\\n2. A more detailed illustration of the system / network is needed. It would have made it much easier to understand the paper. \\n\\n3. What are the results when using the whole training set of Recipes?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
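(An illustrative aside on the intra-graph coreference step debated in the reviews and responses above: a minimal numpy sketch, in the spirit of the paper's Eq. 2, of self-attention pooling over location-node embeddings. Two noisy mentions of the same location, e.g. two 'leaf' spans, are pulled toward a shared representation while a distinct location stays mostly separate. The dimensions and the scaled dot-product scoring are assumptions for this sketch, not the paper's exact formulation.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# two noisy mentions of the same location plus one distinct location
leaf_1 = rng.normal(size=d)
leaf_2 = leaf_1 + 0.6 * rng.normal(size=d)
root = rng.normal(size=d)
L = np.stack([leaf_1, leaf_2, root])      # location-node embeddings, (3, d)

# self-attention scores between all predicted locations, then pooling:
# duplicates attend strongly to each other and converge to a shared embedding
U = softmax(L @ L.T / np.sqrt(d))
L_pooled = U @ L

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(L[0], L[1]), "->", cos(L_pooled[0], L_pooled[1]))   # similarity rises
print(cos(L[0], L[2]), "->", cos(L_pooled[0], L_pooled[2]))   # remains roughly unchanged
```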
BkMiWhR5K7
Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors
[ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ]
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art. The code for reproducing our work is available at https://git.io/fAjOJ.
[ "adversarial examples", "gradient estimation", "black-box attacks", "model-based optimization", "bandit optimization" ]
https://openreview.net/pdf?id=BkMiWhR5K7
https://openreview.net/forum?id=BkMiWhR5K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJgXFDcHxV", "S1eCfrzsA7", "BkglK_U5C7", "H1gTyuSq0Q", "SJg2WDBcR7", "rkgBbCf90X", "BJg_v6iOCQ", "SJeCGji_A7", "SyglTvjdR7", "rklvTWsO07", "rke9oWZuam", "rJgK3ebOpX", "B1ead6lupQ", "BJx1HlJJpQ", "B1lDhBt52X", "B1xIAu-qhm", "HJgk8e_hFm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1545082747107, 1543345430282, 1543297144385, 1543292900972, 1543292676319, 1543282173449, 1543187807709, 1543187222329, 1543186360361, 1543184831268, 1542095265895, 1542095025461, 1542094196841, 1541496886966, 1541211566529, 1541179597929, 1538191430906 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1206/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1206/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ], [ "ICLR.cc/2019/Conference/Paper1206/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1206/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1206/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1206/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper is on the problem of adversarial example generation in the setting where the predictor is only accessible via function evaluations with no gradients available. The associated problem can be cast as a blackbox optimization problem wherein finite difference and related gradient estimation techniques can be used. This setting appears to be pervasive. The reviewers agree that the paper is well written and the proposed bandit optimization-based algorithm provides a nice framework in which to integrate priors, resulting in impressive empirical improvements.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"well written, effective and relevant work on blackbox adversarial example generation\"}", "{\"title\": \"Thanks for the update. I will increase my rating to 7\", \"comment\": \"Thanks for fixing this bug in experiments. The results look much reasonable now. I will increase my rating.\"}", "{\"title\": \"Updated again with time and data priors\", \"comment\": \"We have updated the paper again (specifically, the comparison with Tu et al) to reflect experiments we have now run with both the time and data prior (Bandits_{TD}). At 100% success rate with the same experimental design, our method now uses over 6 times fewer queries.\"}", "{\"title\": \"Another note\", \"comment\": \"Also note that due to the time constraint in getting a revision in, we actually only compared Tu et al to our second-best method, Bandits_T. 
We are running a comparison with Bandits_{TD} (both time and data priors) and will revise (if time permits) and/or report back the results.\n\n[EDIT: we have now done so, see comment above]\"}", "{\"title\": \"Thank you for pointing this out... have revised the paper and updated the result\", \"comment\": \"First of all, we would like to sincerely thank the reviewer for their continually detailed comments and thorough review---it has been a great help in improving the manuscript.\\n\\nUpon checking the code, we realized that (as the reviewer suggested) we had accidentally reproduced the _targeted_ attacks in the baseline code repository. To account for this, we modified our code to work for targeted attacks, and properly replicated the experimental setup, choosing the correct \\\\ell_2 perturbation bound, and random target classes as in Tu et al (except for the fact that we use the prepackaged Inception-v3 classifier rather than the downloaded one from Tu et al). We don't tune our hyperparameters at all, and use the same ones that we used for the untargeted attacks. \\n\\nOur method achieves 100% success rate at over 3 times the query efficiency (note that this is a higher success rate than Tu et al achieve at the same l2 perturbation bound, since there the authors only bound the mean and not the max), still establishing a significant improvement. We have uploaded a revision reflecting these changes.\"}", "{\"title\": \"Thanks for adding these results! They look very good, except for a small concern\", \"comment\": \"Dear Paper1206 Authors,\\n\\nThank you for adding these new results. Figure 7 now shows the cosine similarity under different step sizes, which looks convincing. The newly added experiments on different models (ResNet-50, VGG-16) and different datasets (CIFAR and ImageNet), as well as the comparisons to other state-of-the-art methods, make this paper look much stronger than before.\\n\\nI have a concern regarding the comparison with (Tu et al, 2018). The 100-fold reduction looks too good to be true. Can you confirm that you performed the attack under the same setting? E.g., do you run attacks with the same target labels for both methods, or run untargeted attacks for both? I think it is better to double-check this.\\n\\nI am willing to increase my rating to 7 as long as the above concern can be addressed.\\n\\nThanks,\\nPaper1206 AnonReviewer1\"}", "{\"title\": \"Revision [Reply to R2]\", \"comment\": \"Thank you again for the review. We have now posted a revision of our paper, and the summary comment above details all of the changes we've made in response to reviewer comments, including several additional experiments and comparisons with other methods.\\n\\nTo highlight the changes that are most relevant to your review:\\n\\n1) We now provide an illustration of the bound in Appendix A in the relevant query regimes.\\n\\n2 and 3) We have clarified some points in the paper based on reviewer comments and added significantly more experimental results---we hope that these results further justify the use of the full 10 pages.\"}
To address the raised points directly:\\n\\n(1) is now addressed in Figure 7 in Appendix B.3 which shows how the time-dependent trend decays with the step size---even at high step sizes the trend persists. Specifically, we plot a graph identical to Figure 2 but for many different step sizes, from norm around 0.03 all the way to 4.0.\\n\\n(2) Appendix G now shows a comparison with Tu et al ([2] in the original review). See our main comment above for a summary of the results.\\n\\n(3) We now include results from ImageNet and CIFAR, with Inception-v3, ResNet, and VGG16 in the appendices (more details in our main comment above).\\n\\nThank you again for the detailed review and the useful suggestions.\"}", "{\"title\": \"Revision\", \"comment\": \"We thank all the reviewers again for the helpful responses and revision suggestions. We have posted a revision that we believe addresses all the reviewer comments.\\n\\nIn addition to adding the suggested edits to the paper for clarity, we have now compared our approach with several datasets, baselines, and classifiers, and established a significant margin over state-of-the-art methods. Specifically, we have made the following updates:\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nQuantifying time-dependent prior\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nWe include a graph (in the omitted figures appendix) showing that the successive correlation prior (aka the time-dependent prior) holds true even up to very large step sizes. Specifically, we plot a graph identical to Figure 2 but for many different step sizes, from norm around 0.03 all the way to 4.0.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nOther threat models and datasets\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nWe have added an Appendix F corresponding to ImageNet results for VGG16 and ResNet50 classifiers (along with Inceptionv3 copied from the main text for reference). Our methods still outperform NES on these benchmarks, often by a larger margin than shown for Inception-v3 in Table 1.\\n\\nWe have added an Appendix E corresponding to a comparison of our methods and NES in the CIFAR l-infinity threat model (for L2, we could not find a reasonable maximum \\\\epsilon from recent literature) with VGG16, Resnet50, and Inceptionv3 networks. \\n\\n\\u2014\\u2014\\u2014\\u2014- \\nComparison with another baseline\\n\\u2014\\u2014\\u2014\\u2014-\", \"efficiency_compared_to_tu_et_al\": \"\\u2014\\u2014\\u2014\\u2014\\u2014\\nWe looked into (Tu et al, 2018) and (Bhagoji et al, 2017) as suggested by reviewer 1 to compare with a baseline; we chose to compare with Tu et al (AutoZOOM) since it was (a) released later, (b) uses a more standard classifier than in Bhagoji et al and (c) does not require access to an external set of representative images (unlike Bhagoji et al, which uses this set to find the PCA components). As such, we have added an Appendix comparing our method to that of Tu et al: achieving the same success rate and using the mean perturbation from Tu et al as our maximum perturbation, we achieve a 35-fold reduction in query complexity.\\n\\nEfficiency compared to Tu et al + fine tuning:\\n\\u2014\\u2014\\u2014\\u2014\\u2014-\\n Tu et al also give a \\u201cdistortion fine-tuning\\u201d technique that attempts to reduce the mean perturbation after the attack. This fine-tuning takes around 100,000 queries, and in the best case, after using around 100,000 queries, reduces the mean perturbation to 0.4e-4 per-pixel normalized, which works out to just over 10 (see Figure 3a in Tu et al). 
In Appendix F, we show that running our attack with this lower distortion budget directly gives a similar success rate, using an average of around 900 queries as opposed to 100,000, giving more than a *100-fold* reduction in query complexity.\n\n\u2014\u2014\u2014\u2014\nBound illustration\n\u2014\u2014\u2014\u2014\n- To illustrate, we give an example of our own \\ell_2 threat model, where Theorem 1 gives us a bound on the performance gap between NES and least squares, in Appendix A (after the proofs).\n\n\u2014\u2014\u2014\u2014\nEdits to paper\n\u2014\u2014\u2014\u2014\n- We noticed that our image normalization for generating Table 1 was slightly incorrect, so we have fixed it and rerun the experiment\u2014this has not changed the output significantly, and our methods still beat NES by the same margin of normalized queries. However, in the interest of correctness, we have updated the numbers in Table 1 to reflect the experiment run with correct normalization.\n- We have made the pseudocode for the bandits attack clearer, and explicitly noted how the data-dependent prior can be included, as well as justifying the boundary projection step\n- Fixed: \\nabla L \u2014> g^* in Figures\n- Fixed: Section 2.4 sentence (as pointed out by Reviewer 3)"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the detailed comments on the paper. We address the main points below:\\n\\n1. Typically, black-box adversarial attacks are executed in a multi-step fashion, i.e. by using a small number of queries per gradient estimate and taking several gradient estimate steps (Ilyas et al, the NES-based attack, for example, uses 50 queries per gradient estimate). While it may be possible to prove tighter bounds, in the 50-query regime with d=268203, the bound is actually rather tight. (Furthermore, during our own preliminary experimentation, least-squares attacks usually performed identically to NES.)\\n\\n2. Section 2.4 is meant to illustrate that without priors, we have essentially hit the limit of query-efficiency in black-box attacks. In particular, NES, which we found to be the current state-of-the-art, actually turns out to be approximately optimal, even from a theoretical perspective. This motivates us to take a new look on adversarial example generation, breaking through this optimality by introducing new information into the problem.\\n\\nWithout the proof in Section 2.4, one could reasonably hope that there are simply better gradient estimators that we can use as a drop-in replacement for NES. The theorems we prove there instead motivate our bandit optimization-based view. \\n\\n3. One iteration constitutes two queries (which are used for a variance-reduced gradient estimate via antithetic sampling). In general, the query count refers to queries of the classifier, whereas the iteration count is the number of times that we take an estimated gradient step.\\n\\nWe hope the above points clarify the reviewer's concerns, and thank the reviewer again for the detailed feedback.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the comments! We address the main points below:\\n\\n> Data dependent prior in pseudocode: Yes, it is in fact by the choice of d, but we agree this can be made clearer in the pseudocode.
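(For illustration, a minimal numpy sketch of how such a choice of dimension can encode the spatially-local data-dependent prior: the antithetic finite-difference estimate lives in a smaller latent space and is upsampled to image size only when querying the loss oracle. The helper names, the nearest-neighbor upsampling, and the toy linear oracle are assumptions for this sketch, not the paper's exact implementation.)

```python
import numpy as np

def upsample(v, small, big):
    # nearest-neighbor upsampling of a flat (small x small) "image" to (big x big)
    r = big // small
    return np.kron(v.reshape(small, small), np.ones((r, r))).ravel()

def grad_estimate_with_data_prior(loss, x, small=8, big=32, sigma=0.01, queries=50):
    """Antithetic finite-difference gradient estimate in a d' = small^2 latent space."""
    g = np.zeros(small * small)
    for _ in range(queries // 2):
        u = np.random.randn(small * small)
        delta = sigma * upsample(u, small, big)
        g += (loss(x + delta) - loss(x - delta)) * u   # two queries per sample
    return upsample(g, small, big)                     # unnormalized direction

# toy oracle: a linear loss, so the true gradient is w
w = np.random.randn(32 * 32)
est = grad_estimate_with_data_prior(lambda z: z @ w, x=np.zeros(32 * 32))
print(est @ w / (np.linalg.norm(est) * np.linalg.norm(w)))  # positive, well above chance
```

Setting small = big recovers a prior-free estimator of the same form.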
We will make sure to describe this more clearly in our final paper.\n\n> Figure 4: We will make sure to update this and be more explicit.\n\n> Figure 4c (low cosine similarity): Remarkably, for black-box attacks, though higher cosine similarity is better, the threshold for a successful adversarial attack (in terms of cosine similarity) is extremely low. In particular, for NES, the cosine similarity (as you mentioned) is almost 0, but the gradient estimates *still* result in a successful attack! We show that using our method leads to significantly better estimates of the gradient, though as one would expect in such a query-deficient domain (100s of queries vs 3x10^5-dimensional images), still pretty poor.\n\nWe will also be sure to address all of the minor comments in our final paper. We thank the reviewer again for the useful comments and suggestions.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the detailed comments; we will be sure to make these changes in the final version of the paper. As the reviewer correctly identifies, we consider the theoretical framework of online optimization as a basis for all black-box attacks to be one of our most profound contributions. That said, in order to improve the quality of the experimental results, we have addressed and added each suggested experiment. Specifically:\\n\\n1) We thank the reviewer for raising this---we initially only used the default NES step size (from Ilyas et al) to evaluate the temporal correlation. To give a fuller picture of how this temporal correlation relates to the step size, we have added a new plot in the appendix, which shows the average correlation on a trajectory as a function of the step size. \\n\\n2) To address this, we have added a table (in the Appendix) which compares our query-efficiency against that of [1] and [2]. It should also be noted, however, that both [1] and [2] can be integrated as \\\"priors\\\" on the gradient; in particular, that the gradient lies in some low-dimensional subspace. Our framework gives us a way to formalize these assumptions, and measure how empirically valid they are in order to find better and better black-box attacks.\\n\\n3) We have also added results on ResNet-50 and VGG-16 on ImageNet, and have also benchmarked our attack on all three classifiers (Inceptionv3, ResNet-50, VGG-16) on CIFAR as well.\\n\\nWe will be sure to comment again with a revision when the experiments are complete and integrated into the paper. We thank the reviewer again for the valuable suggestions.\"}", "{\"title\": \"good paper, accept\", \"review\": \"This paper formulates the black-box adversarial attack as a gradient estimation\\nproblem, and provides some theoretical analysis to show the optimality of an\\nexisting gradient estimation method (Natural Evolution Strategies) for black-box\\nattacks.\\n\\nThis paper also proposes two additional methods to reduce the number of queries\\nin black-box attacks, by exploiting the spatial and temporal correlations in\\ngradients. They consider these techniques as priors on gradients, and a bandit\\noptimization based method is proposed to update these priors. \\n\\nThe ideas used in this paper are not entirely new. For example, the main\\ngradient estimation method is the same as NES (Ilyas et al. 2017);\\ndata-dependent priors using spatially local similarities were used in Chen et\\nal. 2017.
But I have no concern with this and the nice thing of this paper is \nto put these tricks under a unified theoretical framework, which I really \nappreciate.\n\nExperiments on black-box attacks on the Inception-v3 model show that the proposed\nbandit based attack can significantly reduce the number of queries (2-4 times\nfewer) when compared with NES. \n\nOverall, the paper is well written and ideas are well presented.\", \"i_have_a_few_concerns\": \"1) In Figure 2, the authors show that there are strong correlations between the\ngradients of current and previous steps. Such correlation heavily depends on\nthe selection of step size. Imagine that the step size is sufficiently large,\nsuch that when we arrive at a new point for the next iteration, the\noptimization landscape is sufficiently changed and the new gradient is vastly\ndifferent from the previous one. On the other hand, when using a very small\nstep-size close to 0, gradients between consecutive steps will be almost the\nsame. By changing step-size I can show any degree of correlation. I am not\nsure if the improvement of Bandit_T comes from a specific selection of\nstep-size. More empirical evidence on this needs to be shown - for example, run\nBandit_T and NES with different step sizes and observe the number of queries\nrequired.\n\n2) This paper did not compare with many other recent works which claim to\nreduce query numbers significantly in black-box attacks. For example, [1]\nproposes \\\"random feature grouping\\\" and uses PCA for reducing queries, and [2]\nuses a good gradient estimator with an autoencoder. I believe the proposed method\ncan beat them, but the authors should include at least one more baseline to \nconvince the readers that the proposed method is indeed state-of-the-art.\n\n3) Additionally, the results in this paper are only shown on a single model\n(Inception-v3), and it is hard to compare the results directly with many other\nrecent works. I suggest adding at least two more models for comparison (most\nblack-box attack papers also include MNIST and CIFAR, which should be easy to\nadd quickly). These numbers can be put in the appendix.\n\nOverall, this is a great paper, offering good insights on black-box adversarial\nattacks and providing some interesting theoretical analysis. However, currently it\nis still missing some important experimental results as mentioned above, and is\nnot ready to be published as a high-quality conference paper. I conditionally\naccept this paper as long as sufficient experiments can be added during the\ndiscussion period.\n\n[1] Exploring the Space of Black-box Attacks on Deep Neural Networks, by Arjun\nNitin Bhagoji, Warren He, Bo Li and Dawn Song, https://arxiv.org/abs/1712.09491\n(conference version accepted by ECCV 2018)\n\n[2] AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking\nBlack-box Neural Networks, by Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia\nLiu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng, https://arxiv.org/abs/1805.11770\n\n==========================================\n\nAfter discussing with the authors, they provided better evidence to support the conclusions in this paper, and fixed bugs in experiments. The paper looks much better than before.
Thus I increased my rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Good paper, low confidence.\", \"review\": \"Paper formalizes the gradient estimation problem in a black-box setting, and proves the equivalence of least squares with NES. It then improves on the state of the art by using priors coupled with a bandit optimization technique.\n\nThe paper is well written. The idea of using priors to improve adversarial gradient attacks is enticing. The results seem convincing.\", \"comments\": [\"I missed how the data-dependent prior is factored into Algorithms 1-3. Is it by the choice of d? I suggest a clearer explanation.\", \"In fig 4, I was confused that the loss of the methods is increasing. It took me a minute to realize this is the maximized adversarial loss, and thus higher is better. You may want to spell this out for clarity. I typically associate lower loss with better algorithms.\", \"I am confused by Fig 4c. If I am comparing g to g*, I do expect a high cosine similarity. cos = 1 is the best. Why is the correlation so small, and why is it 0 for NES? You may also want to offer additional insight in the text explaining 4c.\"], \"minor_comments\": [\"Is table one misplaced?\", \"The symbol for \\\"boundary of set U\\\" may be confused with a partial derivative symbol\", \"first paragraph of 2.4: \\\"our estimator a sufficiently\\\" - something missing?\", \"\\\"It is the actions g_t (equal to v_t) which...\\\" referring to g_t as actions is confusing, although it may be technically correct in the bandit setting\", \"Further explain the need for the projection of algorithm 3, line 7.\", \"Fig 4: refer to the true gradient as g*\"], \"caveat\": \"Although I am well versed in bandits, I am not familiar with the adversarial training and neural network literature. There is a chance I may have misevaluated central concepts of the paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"A decent paper\", \"review\": \"UPDATE:\n\nI've read the revised version of this paper; I think the concerns have been clarified.\n\n-------\n\nThis paper proposes to employ the bandit optimization based approach for the generation of adversarial examples in the loss-accessible black-box setting. The authors examine the feasibility of using the step and spatial dependence of the image gradients as prior information for the estimation of true gradients. The experimental results show that the proposed method outperforms the natural evolution strategies method by a large margin.\n\nAlthough I think this paper is a decent paper that deserves an acceptance, there are several concerns:\n\n1. Since the bound given in Theorem 1 is related to the square root of k/d, I wonder if the right-hand side could become \\\"vanishingly small\\\", in a case such as k=10000 and d=268203. I wish the authors could explain more about the significance of this Theorem, or provide numerical results (which could be hard).\n\n2. Indeed I am not sure if Section 2.4 is closely related to the main topic of this paper, as these theoretical results seem to be not helpful in convincing the readers about the idea of gradient priors.
Also, the length of the paper is one of the reasons for the rating.\\n\\n3. In the experimental results, what is the difference between one \\\"query\\\" and one \\\"iteration\\\"? It looks like in one iteration, the Algorithm 2 queries twice?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Re: Interesting\", \"comment\": \"Thank you for the comment!\\n\\nAs a recent work (e.g. https://arxiv.org/pdf/1804.08598.pdf, https://arxiv.org/pdf/1807.04457.pdf ) has shown, adapting to the label-only setting can be implemented as a modification of the loss function---since our method represents a general framework for gradient estimation through a loss function, the same technique can be used.\"}" ] }
HJMjW3RqtX
One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL
[ "Tom Le Paine", "Sergio Gomez", "Ziyu Wang", "Scott Reed", "Yusuf Aytar", "Tobias Pfaff", "Matt Hoffman", "Gabriel Barth-Maron", "Serkan Cabi", "David Budden", "Nando de Freitas" ]
Humans are experts at high-fidelity imitation -- closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task. The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions.
[ "Imitation Learning", "Deep Learning" ]
https://openreview.net/pdf?id=HJMjW3RqtX
https://openreview.net/forum?id=HJMjW3RqtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJg2QAH4xE", "HJlj99OpCX", "Skl4vd35R7", "SJglgopFRm", "SJg865aKRm", "HkxSq9TFAX", "B1gEqF6tRm", "S1x8ZgMwCX", "H1gJp1zPCX", "S1e__pZvRX", "BJl6Z6bDCQ", "BylDJkZ_Tm", "H1xY8ktPT7", "SyeAUmWHTQ", "r1lU9VybTX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544998435949, 1543502482813, 1543321692345, 1543260903693, 1543260862363, 1543260813151, 1543260556145, 1543081981957, 1543081911393, 1543081327553, 1543081221011, 1542094559195, 1542061904876, 1541899093682, 1541629070001 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1205/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/Authors" ], [ "ICLR.cc/2019/Conference/Paper1205/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1205/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1205/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1205/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces a setting called high-fidelity imitation where the goal one-shot generalization to new trajectories in a given environment. The authors contrast this with more standard one-shot imitation approaches where one-shot generalization is to a task rather than a precise trajectory. The authors propose a technique that works off of only state information, which is coupled with an RL algorithm that learns from a replay buffer that is populated by the imitator. The authors emphasize that their approach can leverage very large deep learning models, and demonstrate strong empirical performance in a (simulated) robotics setting.\\n\\nA key weakness of the paper is its clarity. All reviewers were unclear about the precise setting as well as relation to prior work in one-shot imitation learning. As a result, there were substantial challenges in assessing the technical contribution of the paper. There were many requests for clarification, including for the motivation, difference between the present setting and those addressed in previous work, algorithmic details, and experiment details.\\n\\nI believe that a further concern was the lack of a wide range of baselines. The authors construct several baselines that are relevant in the given setting, but did not consider \\\"naive baseline\\\" approaches proposed by the reviewers. For example, behavior cloning is mentioned as a potential baseline several times. The authors argue that this is not applicable as it would require expert actions. Instead of considering it a baseline, BC could be used as an \\\"oracle\\\" - performance that could be achieved if demonstration actions were known. As long as the access to additional information is clearly marked, such a comparison with a privileged oracle can be properly placed by the reader. 
Without including such commonly known reference approaches, it is very challenging to assess the proposed method's performance in the context of the difficulty of the task. Generally, whenever a paper introduces both a new task and a new approach, a lot of care needs to be taken to build up insights into whether the task appropriately reflects the domain / challenge the paper claims to address, how challenging the task is in comparison to those addressed in prior work, and to place the performance of the novel proposed method in the context of prior work. In the present paper, on top of the task and approach being novel, the pure RL baseline D4PG is not yet widely known in the community and its performance relative to common approaches is not well understood. Including commonly known RL approaches would help put all these results in context.\n\nThe authors took great care to respond to the reviewer comments, providing thorough discussion of related work and clarifications of the task and approach, and these were very helpful to the AC in understanding the paper. The AC believes that the paper has excellent potential. At the same time, a much more thorough empirical evaluation is needed to demonstrate the value of the proposed approach in this novel setting, as well as to provide additional conceptual insights into why and in what kinds of settings the algorithm performs well, or where its limitations are.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a novel approach for a novel task, not sufficiently grounded in prior work\"}", "{\"title\": \"Motivation for high-fidelity (over) imitation\", \"comment\": \"A few reviewers have asked us for more motivation for high-fidelity imitation. As pointed out in the paper, we were inspired by the phenomenon of over-imitation in developmental psychology. In addition to the highly cited references in our manuscript, the YouTube video at (https://www.youtube.com/watch?time_continue=3&v=20Smx_nD9cw) based on Lyons et al. (2007) (http://www.pnas.org/content/pnas/104/50/19751.full.pdf) clearly illustrates this. In short, children, unlike other apes, choose to mimic all actions when copying an action sequence, even when those actions are irrelevant to achieving the goal. Other apes aim for the goal instead. Many developmental psychologists argue that, over the years, over-imitation in humans leads to their ability to master more complex tasks. MetaMimic is a simple demonstration of this.\"}", "{\"title\": \"reply\", \"comment\": \"Thank you for your reply and clarifications on the points I raised. Overall I still think this is an interesting approach, and an elegant way of dealing with the fact that actions are not available from the demonstrations. I stand by my evaluation, and think that the current version of the paper does not pass the bar for the main track of ICLR.\n\nIt's interesting to hear about the two experiments that you tried and that did not work. I would suggest putting those in the Appendix, together with your hypothesis on why this does not work.
These insights can be very valuable for other researchers.\n\nAn updated version of the paper should, in my view, have a clearer motivation for why we are interested in high-fidelity imitation of the same task but different demonstrators.\n\nI also think future work in this direction, but with different tasks instead of different demonstrators, sounds promising.\"}", "{\"title\": \"Authors' response to reviewer 4 part 4\", \"comment\": \"> 2. How is the local context considered in action generation in sec 2.1.\nThe authors reset the simulation environment to o_1 = d_1. \nThen actions are generated with \\\pi_\\\theta(o_t, d_{t+1}). \na. Is the environment reset every time step? \n\nIt is not. We start at the beginning of the episode and keep the environment running until a termination condition.\n\n> b. If not, how is the deviation of the trajectory handled over time? \n\nThe deviation of the trajectory is not handled; it is actually used as our training signal. Early in training the deviation quickly grows, making the similarity decrease, which means our agent receives lower reward. The policy is trained to maximize the expected returns, so as training progresses, the deviation of the trajectory is minimized. Doing this for a wide set of motions, and generalizing to new test-set motions, is far from trivial. This indeed is why we are very happy with our results.\n\n> c. How is the time horizon for this open-loop rollout chosen? \n\nWe roll out the policy for the length of the demonstration, which is 500 time steps. We would argue that the controller is not open loop, but closed loop. The control variable is the observation. Our policy receives a target observation and its current observation (the result of its previous action), and uses these to take a new control action.\n\n\n> 3. How is this different from using a tracking-based MPC with the same horizon? The cost can be set to the same similarity metric between states.\n\nWe think this would be an interesting experiment to run. We will consider it for future work.\n\n> 4. The architecture uses a deep but simplistic model. When the model's success is largely attributed to state similarity -- especially image similarity -- why did the authors not use image comparators, something like a Siamese model?\n\nIt depends on what you mean: a) why did we not replace our agent architecture with features learned from a Siamese model, or b) why did we not replace our similarity reward with something learned by a Siamese model. We address both.\n\n\t1.\tWe make a subtle distinction: we claim that the major component of the model's success is the ability of the model to compare the current observation and target observation in order to produce actions that achieve the target observation. For this reason we think training the whole network end-to-end for this purpose will perform better. If we pre-trained a Siamese model to learn similarity-based features, those features may not necessarily be well-suited for predicting actions, and may cap the maximum reward achievable.\n\n\t2.\tWe wanted to show our system performs quite well with a simple reward function. We agree that learning a Siamese model for the reward should work quite well and mention related approaches in the paper.\n\n\n> Suggestion:\nThe whole set of experiments is in simulation. \nThe authors go above and beyond in using Mitsuba for rendering images. But the images used are the default MuJoCo renders.
It would be nice if the authors were more forthcoming about this. All image captions should clearly state -- simulated robot results, showing the images used for agent training. The Mitsuba renders are only used for the figures but nowhere in the algorithm. So why do this at all, and if it has to be used please do it with a disclaimer. Right now this detail is rather buried in the text. \n\nWe did this to improve presentation. We like your suggestion of adding this information to the image captions.\"}", "{\"title\": \"Authors' response to reviewer 4 part 3\", \"comment\": \"> - Compare High-Fidelity Performance\nIt is used as a differentiator of this method but without experimental evidence.\nThe results showing imitation reward are insufficient. The metric should be independent of the method. An evaluation might compare trajectory tracking error: for objects, end-effector, and joint positions. This is available as privileged information since the setup is in a simulation.\n\nWe agree the proposed metrics would provide additional insight; thanks for the suggestion.\n\nHowever, we disagree that we do not provide experimental evidence that our method performs high-fidelity imitation. We highlight three ways in which we provide experimental evidence: (i) we provide evidence by generalization, as we maximize the imitation reward on a training set and show that the policy still achieves high imitation reward on unseen demonstrations; (ii) we provide qualitative results, showing traces that illustrate that the imitation policy mimics unseen demonstrations closely -- in these results one can clearly see the end-effectors, joint positions, and objects closely track the demonstration; (iii) we provide a proxy metric, the task reward. Since our imitation policy is not trained using task reward, it receives no incentive to solve the task without closely imitating the demonstration.\n\n> Furthermore, a comparison with model-based trajectory tracking with a learned or fitted model of dynamics is also very useful.\n\nThank you for the suggestion. We will consider it for future work.\n\n> * Tracking a reference (from either sim or demos) is a good idea that has been explored in the sim2real literature [2,3] and imitation learning [4]. It is not by itself novel. The authors fail to acknowledge any work in this line as well as to provide insight into why this is good and when it is valid.
For instance, with highly stochastic dynamics this may not work!\n\nWe have not claimed that tracking on its own is novel, and in fact have a section in the related work called \"Imitation by tracking\" which cites several papers that use tracking for imitation, including one of the earliest works in this area, \"Robot learning from demonstration\" by Atkeson et al. (1997). In that section we highlight some of the reasons why imitation by tracking is useful: \"Imitation by tracking has several advantages. For example it does not require access to expert actions at training time, can track long demonstrations, and is amenable to third person imitation\". We will expand upon that section by including the citations you suggested.\n\nWe agree it is quite important to test if this method works with stochastic dynamics. At the moment it is not clear whether it would. Our policy has a mechanism for compensating for drift (by comparison of its current observation and its goal observation), and it is trained to maximize its expected reward, which may be sufficient to let it deal with stochastic dynamics. We think this would be an interesting direction for future work.\n\n> - \"Diverse Novel Skills\" \nThe experiments are limited to a rather singular pick and place task with a 3-step structured reward model. It is unfair to characterize this domain as very diverse or complex from a robotics perspective. More experiments on continuous control would help.\n\nWe are planning to make this more specific: diverse novel motions.\n\n> - Bigger networks\n\"In fig. 3 we demonstrate that indeed a large ResNet34-style network (He et al., 2016) clearly outperforms\" -- but Fig 3 is a network architecture diagram. It is probably fig 6!\n\nApologies, we will fix that.\n\n> - The authors are commended for presenting a broad overview of imitation based methods in table 2\n\nThank you.\n\n> 1. How different is the imitation learner (trained with the imitation reward) from a Behaviour Cloning Policy?\n\nBehavior cloning with a small number of training trajectories is known to have difficulty when it drifts from the distribution of states found in the dataset. This problem grows more pronounced when trajectories are long. In comparison, RL with this objective helps the imitation policy stay close to the trajectories.\"}", "{\"title\": \"Authors' response to reviewer 4 part 2\", \"comment\": \"* Improved Comparisons\n- Compare with One-Shot Performance\nSince this is one of the main contributions, explicit comparison with other one-shot imitation papers needs to be quantified with a clearly defined metric for generalization. \nThis comparison should be done both for short-term tasks such as block pick and place (Finn et al, Pathak et al, Sermanet et al.) and also for long-term tasks as shown in (Duan et al. 2017 and also in the Neural Task Programming/Neural Task Graph line of work from 2018)\n\nOur paper is about one-shot high-fidelity imitation, not one-shot imitation. It is important to emphasize the word \"high-fidelity\". That is, we want to mimic a diverse set of motions as precisely as possible. We believe this is useful in high-precision engineering or surgery, where departure from a very specific desired trajectory could have dire consequences. The works that we are being asked to compare against do not target high-fidelity imitation.
Moreover, it is not clear they would be able to handle the diversity of motion in the demonstrations that we consider in our experiments (see the paragraph below). \\n\\nYu, Finn et al (2018) on One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning emphasize the difficulty of being able to generalize to new motions. In their conclusion they state \\u201cWhile our work enables one-shot learning for manipulating new objects from one video of a human, our current experiments do not yet demonstrate the ability to learn entirely new motions in one shot\\u201d. The authors go on to conclude \\u201cWe expect that more data and a higher-capacity model would likely help enable such extensions\\u201d. This was visionary. Our experiments prove that indeed we do need much higher capacity to generalize to new motions in one-shot imitation.\", \"we_highly_recommend_the_berkeley_website_of_yu_and_finn_on_one_shot_imitation_at_https\": \"//bair.berkeley.edu/blog/2018/06/28/daml/ The task for the first approach (first person imitation) is effectively a reaching task. Reaching is much simpler than stacking as it does not involve reasoning about the image and dealing with contact forces. For the second approach (third person imitation) the authors do consider something closer to our approach: pick and place. As pointed out above the generalization wrt to object variety is very impressive. To train the approach requires human and robot trajectories for the same task, with the robot trajectories labelled with actions. This is great work, but solving a different problem than the one in this paper.\\n\\nDuan et al. (2017) sample a task t~p(t) and conditioning on this sample a demo d~p(d|t). They use a 7-DOF Fetch robot with a simple open/closed binary gripper. Importantly, for them an observation is a list of (x,y,z) object positions relative to the gripper. Unlike them, we do not assume that we know the state of the world and our input is simply pixels (learning from pixels is known to be much harder). Knowing whether the gripper is closed or open is a reasonable assumption because agents (artificial and natural) have vestibular information and proprioception. However, knowing the state of the world (positions of objects wrt gripper) is a big simplifying assumption. Note too that Duan et al. (2017) have access to actions in the demonstration sequences. The same considerations apply to the one-shot imitation NIPS paper of Wang et al (2017): https://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors\\n\\nNeural Task Programming (NTP) is an ingenious extension of the Neural Programmer-Interpreters (NPI) of Reed and de Freitas (2016), whereby task specifications in the form of video demonstrations are added to each program core. The program core predicts a sub-segmentation of the video for subsequent subprograms to process. Eventually a program API is reached, its arguments are predicted, and it is executed. The tasks demonstrated are impressive, however there are important assumptions in that work that must be taken into consideration when comparing to ours. First the bottom level programs are standard robot APIs (move_to, grip, release.move_to, etc). Even more importantly, as stated in that paper \\u201cWe train the model using rich supervision from program execution traces\\u201d. Thus while we believe NTP is an important research endeavour, the setup and goals are very different from our high-fidelity one-shot imitation work. 
\n\nThe Neural Task Graph (NTG) approach relaxes the need for supervision in terms of hierarchy, but still requires the sequences of raw visual inputs and flat actions (APIs as in NTP) for supervised training. We again see this work as being very different from ours.\"}", "{\"title\": \"Authors' response to reviewer 4 part 1\", \"comment\": \"Thank you for the detailed and extensive feedback. In addition to the contribution pointed out above, we would like to emphasize that this work demonstrates that it is possible to train massive deep neural networks (larger than any previous attempt by at least an order of magnitude) for RL. Moreover, the paper shows through ablations that such architectures are essential to achieve good generalization in one-shot imitation. Smaller networks fail to generalize. We feel the problem of generalization in one-shot learning is central to AI, and as such we believe this paper presents important results showing how to advance this research frontier. A few years ago, we certainly did not know whether RL, with its considerable variance, would allow us to train such large policies. We also did not know whether big nets were necessary at all in control tasks, as pointed out by Emo Todorov and colleagues at the previous NIPS. This paper provides empirical evidence and answers to these important questions.\n\n> The main method is very similar to D4PG-fd. The off-policy method samples from a replay buffer which comprises both the demos and the agent experience from the previous learner iterates. \n\nFor clarification, the D4PGfd algorithm was introduced in this paper, with the previously existing work --- the DDPGfd of Vecerik et al (2017) --- missing the distributed and distributional aspects of the policy optimizer. Additionally, the D4PGfd method requires actions, but in contrast MetaMimic does not need access to actions, as you note above. Moreover, MetaMimic does two things: (i) one-shot high-fidelity imitation and (ii) task policy learning. The D4PGfd method only applies to task policy learning (ii). That is, it is missing an important core feature of MetaMimic.\n\n> 1. From a technical perspective, what is the advantage of training an imitation learner from a memory buffer of the total experience? \nIf the task reward is not accessed while the imitation learner is training, then the data should not be used for training the task policy learner. On the other hand, if the task reward is indeed available, then what is the advantage of not using it? \n\nExcellent question. The purpose of MetaMimic is twofold. The first goal is to deploy policies that users (say someone at a factory) can easily adapt, via demonstrations, to solve new tasks. Moreover, the case studied in this paper aims to meet the need for imitating the user's observation trajectory with high fidelity (e.g. in high-precision engineering or surgery). That is, it is not only important to accomplish the goal, but also to achieve this in a very precise and specific manner. \n\nThe second purpose of MetaMimic is to act as a general task policy learner by capitalizing on demonstrations of observations and rewards. Here, the process of following the demonstrations should be understood as an auxiliary task to address the problem of exploration. The final goal is a task policy. In this sense, the closest competitor to MetaMimic is DDPG-fd, but as pointed out, MetaMimic works from observations while DDPG-fd requires additional access to actions.
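To make this concrete, here is a minimal sketch of a single data-collection episode (the helper names env.reset_to and replay.add, and the exact form of the similarity, are hypothetical simplifications rather than our exact implementation):

```python
import numpy as np

def similarity_reward(obs, demo_obs, beta=1.0):
    # Imitation reward: a decreasing function of the Euclidean distance
    # between the agent's observation and the target demonstration frame.
    return np.exp(-beta * np.linalg.norm(obs - demo_obs) ** 2)

def collect_episode(env, policy, demo, replay):
    # The imitation policy is conditioned on the *next* demonstration
    # frame; the environment supplies the consequences of each action,
    # so demonstrator actions are never required.
    obs = env.reset_to(demo[0])
    for t in range(len(demo) - 1):
        action = policy(obs, demo[t + 1])
        next_obs, task_reward = env.step(action)
        imitation_reward = similarity_reward(next_obs, demo[t + 1])
        # Both reward channels are stored: the imitation policy trains on
        # imitation_reward, while the task policy trains on task_reward
        # from the very same transitions.
        replay.add(obs, action, imitation_reward, task_reward, next_obs)
        obs = next_obs
```

Nothing in this loop requires demonstrator actions, which is exactly the difference from DDPG-fd and D4PGfd.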
Interestingly, our results in Figure 8, using our proposed D4PG-fd method, show that both methods perform similarly, despite MetaMimic requiring less information.\\n\\nWe feel much of the confusion we\\u2019ve created comes from the fact that we are proposing a method that does two things. It can be useful to do (i) only, (ii) only or both (i) and (ii). It really depends on the deployment case. \\n\\n> 2. A comparison with a BC policy to generate more experience data for the task policy agent/learning might also be useful. \\n\\nThis may work quite well. However, BC requires expert actions while our method does not. We already provide a strong baseline for training the task policy with access to expert actions, D4PGfD.\"}", "{\"title\": \"Authors' response to reviewer 3 part 2\", \"comment\": \"> The x-axis in the figures says \\\"time (hours)\\\" - is that computation time or simulated time?\\n\\nThe x-axis here refers to computation time.\\n\\n> In 3.2, I would be interested in seeing the following baseline comparison: Learn the test task from scratch using the one available demonstration, with the RL procedure (Equation 2, but possibly without the second term to make it fair). In Figure 5, we can see that the performance on the training tasks is much better when training on only 10 tasks, compared to 500. Then why not overfit to a single task, if that's what we're interested in? \\n\\nWe have run a similar experiment before, training both policies with varying number of demonstrations. With 10 demonstrations, the task policy still learns quickly, but achieves lower reward, and qualitatively is more cumbersome, repeatedly attempting to stack one block atop the other until it finds a stable position. Often it drops the block and has to pick it back up. As the number of demonstrations increases, the max reward reached increases, and the learned task policy stacks in one smooth motion. We will rerun this experiment and have plots for the camera ready.\\n\\n> An interesting baseline for 3.3 might be an RL algorithm with shaped rewards: using an additional reward term that is the eucledian distance to the *closest* datapoint from the demonstration. Compared to the baselines shown in the results section, this would be a fairer comparison because (1) unlike D4PG we also have access to information from the demonstrations and (2) no additional information is needed like the action information in D4PGfD and (3) we don't have the need for a curriculum.\\n\\nWe actually tried a related method without much success. Every episode we sampled a demonstration, and trained an unconditional policy that was rewarded for reaching the next step of the demonstration (not knowing which demonstration was provided). This method did not take off at all, because the policy did not know what goal it was trying to reach.\\n\\nThe suggested variation may work better because the policy will be rewarded as long as it reaches any goal. We are concerned that a trivial solution would be to take no action and stay at the same observation. Alternatively it could also oscillate back and forth between two observations. Nevertheless it is an interesting experiment and we hope to have this in time for the camera-ready. Thanks for suggesting it.\\n\\n> I find the first sentence, \\\"One-shot imitation is a powerful way to show agents how to solve a task\\\" a bit confusing. I'd say one-shot imitation is a method, not a way to show how to solve a task. 
Maybe an introductory sentence like \"Expert demonstrations are a powerful way to show agents how to solve a task.\" works better?\n\nTotally agree. We\u2019ll update the text as you suggest.\n\n> Second sentence, the chosen example is \"manufacturing\" tasks - do you mean manipulation? When reading this, I had to think of car manufacturing - a task I could certainly not imitate with just a few demonstrations.\n\nIndeed. We will clarify.\n\n> Add note that with \"unconditional policy\" you mean not conditioned on a demonstration.\n[2. MetaMimic]\n- [2.1] Third paragraph: write \"Figure 2, Algorithm 1\" or split the algorithm and figure up so you can refer to them separately.\n- [2.1] Last paragraph, second line: remove second \"to\"\n\nWe will make the changes suggested in this section. Thanks for the helpful suggestions, and for having devoted your time to clearly understand our paper and provide constructive feedback.\"}", "{\"title\": \"Authors' response to reviewer 3 part 1\", \"comment\": \"Thank you very much for your feedback. We address your comments below.\n\n> The abstract oversells the contribution a bit when saying that MetaMimic can learn \"policies for high-fidelity one-shot imitation of diverse novel skills\". The setting that's considered in the paper is that of a single task, but different demonstrations (different humans from different starting points). This seems restrictive, and could have been motivated better.\n\nWe fully agree. We should have been more specific, e.g. \"policies for high-fidelity one-shot imitation of diverse novel motions in block stacking\". We plan to be much more specific in the writeup, explaining the sources of variation in the task. \n\n> Experimental results are shown only for one task; block stacking with a robot arm in simulation.\n\nThis is correct. However, there is significant variation in the motions, and since we are interested in high-fidelity imitation (and not just imitation for the purposes of solving the task), we believe this is a significant source of variation. This is in line with our experiment showing that one needs a neural net with very large capacity (the largest ever trained with RL) to generalize to novel test demonstrations in high-fidelity imitation. \n\n> Might not be a good topical fit for ICLR, but more suited for a conference like CoRL or a workshop. The paper is very specific to imitation learning for manipulation / control tasks, where we can (1) reset the environment to the exact starting position of the demonstrations, (2) the Euclidean distance between states in the demonstration and those visited by the policy is meaningful, and (3) we have access to both pixel observations and proprioceptive measurements. The proposed method is an elegant way to solve this, but it's unclear how well it would perform on different types of control problems, or when we want to transfer policies between different (but related) tasks.\n\nWe feel there are important questions of representation here that make the work interesting for ICLR. For instance, prior to this work we didn\u2019t know we could train such massive neural nets with RL, and that increasing the size of these particular models is needed for generalization. We fully agree that (1) is an important limitation of the present approach. We were more interested in motion variation (the style in which any user solves a specific task) than in, say, object variation.
Our focus is on high-fidelity imitation - if the focus is on imitation only, then the techniques pointed out in our reply to reviewer 4 are better choices (e.g. the works of Silvio Savarese, Chelsea Finn and colleagues).\n\n> Where does the \"task stochasticity\" come from? Only from the starting state, and from having different demonstrations? Or is the transition function also stochastic?\n\nCorrect, the task stochasticity only comes from the starting state, and from having different demonstrations. We called it stochastic to distinguish it from environments like Atari which are completely deterministic, i.e. without different starting conditions or different goals to achieve.\n\n> The learned policy is able to do one-shot imitation, i.e., given a new demonstration (of the same task) the policy can follow this demonstration. Do I understand correctly that this means that there is *no* additional learning required at test time?\n\nYes, that is correct. At test time, the imitation policy is able to follow (never-seen-before) trajectories very closely with no additional learning. This is what we believe is a very cool result, especially because other groups have struggled to achieve this.\n\n> It is not immediately clear to me why the setting of a single task but new demonstrations is interesting. Could the authors comment on this? One setting I could imagine is that the policy is trained in simulation, but then executed in the real world, given a new demonstration. (If that's the main motivation though, then the experiments might have to support that this is possible - if no real-world robot is available, maybe the same simulator with a slightly different camera angle / light conditions or so.)\n\nWe would argue it is interesting because it is a skill that humans have but that is nontrivial for agents. Humans can observe a demonstration and imitate it very closely using just observations. In a factory, a manager might demonstrate to a new worker what to do with a set of objects, and then the new worker repeats the task with the same objects. Of course, humans can do this in a much more general way: e.g. from a third-person perspective, and with abstract notions of perceptual similarity. Admittedly, we are all far from solving the full problem.\n\nStill, we think this is an interesting and useful step in that direction, one that demonstrates learning a complex mapping from perception to motor control through experience with an environment.\n\nWe agree that the sim-to-real version of the problem is quite interesting.\"}", "{\"title\": \"Authors' response to reviewer 2\", \"comment\": \"Thank you for taking the time to provide feedback on the paper. We have taken great care to ensure our claims are directly supported by our experiments. We will address the two claims you refer to below.\n\n> The authors claim that the method allows one-shot generalization to an unknown trajectory. To test this hypothesis, the authors only provide experiments of generalization towards trajectories of a different demonstrator on the same task of stacking cubes. I would expect experiments with truly different trajectories on a different task than stacking cubes to test the hypothesis of one-shot imitation. Until then I see no evidence for a \"one-shot\" imitation capability of the proposed method.\n\nThere is a question of terminology here, and we agree that we need to address this more precisely in the paper.
Most papers on one shot imitation sample a task t~p(t) and conditional on this sample a demonstration d~p(d|t). In this setting, accomplishing t is what matters and significant deviations in the demonstration d are tolerated. In our work, we are sampling d~p(d), and for us it is important to minimize deviations in d (i.e. we want high-fidelity). \\n\\nOther one-shot imitation learning methods mostly focus on object diversity, e.g. pushing unseen objects or placing unseen objects (see eg the excellent website of Yu and Finn: https://bair.berkeley.edu/blog/2018/06/28/daml/ ) We instead chose a difficult control task, block stacking, which allows for many diverse ways of solving the task. We focused on demonstration diversity, following a distinctly different trajectory to solve a new task instance. While generalizing to different colours and objects is difficult, generalizing to different motions in one shot imitation is equally hard. As pointed out by Yu, Finn et al (2018) \\u201cWhile our work enables one-shot learning for manipulating new objects from one video of a human, our current experiments do not yet demonstrate the ability to learn entirely new motions in one shot\\u201d. Admittedly, all the different sources of variation are important and we need to make progress in all of them. Our work clearly does not address object variety, and we do need to do this in the future.\\n\\n> That storing the trajectories of early training can act as replacement for exploration as rescue from off-policy states: This is never experimentally validated. This hypothesis could easiliy be validated with an ablation study, were the results of early would not be added to the replay buffer.\\n\\nActually, this is experimentally validated in Figure 8. Using D4PG to train the task policy, is equivalent to using our method without adding experiences from the imitation policy to the replay memory. \\n\\nWe believe this is strong evidence our method overcomes the exploration problem, because the same RL method is used, with the same hyper-parameters, same number of actors etc, but now the transitions in the replay are more likely to see task reward.\\n\\nWe will add a note to the paper to ensure this is clear. \\n\\n> High fidelity imitation: In the caption of Figure 7 the authors note that the unconditional task policy is able to outperform the demonstration videos. Thus the trajectories of the unconditional task policy allow a higher reward then the demonstrations.\\nCould the authors please comment on how the method still achieves high fidelity imitation even when the results of the unconditional task policy are added to the replay buffer? In prinicipal these trajectories allow a higher reward than the demonstration trajectories that should be imitated.\\n\\nThe task policy is able to achieve higher task reward, but at the moment we don't calculate or store its imitation reward. This is illustrated in Figure 1. The imitation policy is trying to maximize imitation reward, as a result the task policy trajectories do not interfere with the imitation policy.\\n\\n> Mainly due to the missing experimental validation of the claims made I recommend to reject the paper.\\n\\nWe hope we have made clear how our claims are supported by our experiments, and that you would reconsider your evaluation. Regardless, thank you for the feedback. It has been very valuable.\"}", "{\"title\": \"Authors' response to reviewer 1\", \"comment\": \"Thank you for the valuable feedback. 
We will address some of your comments and questions below.\n\n> The paper is really clearly written, but presenting the approach as \"high-fidelity\", \"one-shot\" learning is a bit confusing. First, it's not clear what the motivation for high-fidelity is. To me this is an artifact due to having to imitate the visual observation instead of the actions, which is a legitimate constraint, but not the original goal. Second, the one-shot learning setting consists of training on a set of stochastic demonstrations and testing on another set collected from a different person; both for the same task. Usually one-shot learning tests on slightly different tasks or environments, whereas here the goal is to generalize to novel demonstrations. It's not clear why we care about imitation per se in addition to the task reward.\n\nGiven the review scores, we can only agree with you that the paper is somewhat confusing and that we have failed to motivate high-fidelity imitation properly. We think that what makes the paper confusing is that, as it stands, it tells two stories (Hi-Fi imitation and task policies). These two stories are fundamentally linked; however, we admit the presentation did not make these links clear. We will try to address this, but we very much look forward to any advice on how to change the presentation to make it more understandable.\n\nWe used the terminology One-Shot High-Fidelity Imitation to clarify both how our method works and how it differs from existing methods in the space. First, high-fidelity is about mimicking the trajectory precisely. In precision engineering or surgery, where we don\u2019t want the actuator doing anything other than what was demonstrated, this seems like a valuable idea. Existing few-shot imitation works focus on solving tasks, but not on following the trajectory precisely (see for example https://bair.berkeley.edu/blog/2018/06/28/daml/ ). In relation to this work, Yu and Finn point out: \u201cWhile our work enables one-shot learning for manipulating new objects from one video of a human, our current experiments do not yet demonstrate the ability to learn entirely new motions in one shot\u201d. The latter is what is demonstrated in our generalization experiments. That is, given a new demonstration motion, our policy is able to follow it closely, as shown in Figure 4.\n\nSecond, we use the phrase \"one-shot\" to distinguish our method from other tracking-based methods, which can require many thousands of environment interactions to learn to track a single trajectory. In contrast, our method requires no additional environment interactions to track a novel trajectory. It achieves this in the same way one-shot methods do, by learning a policy that works well across a large dataset of \"tasks\", where each \"task\" is a demonstration.\n\n> What I find interesting is the proposed approach for learning from video demonstrations without action labels. Currently this requires an executor to render the actions to images; what if we don't have such an executor or only have a noisy / approximate renderer? In the real world it's probably hard to find a good renderer, and it would be interesting to see how this constraint can be relaxed.\n\nWe agree that relaxing the constraint of an exact environmental renderer is an interesting research direction. This would be helpful for our method, as well as many other RL-based methods.
But we think it is beyond the scope of this paper.\n\n> While the authors have shown the average rewards of the two sets are different, I wonder what's the variance of each person's demonstration.\n\nThere is notable variance between the two demonstrators, and between each demonstration. We will provide some additional examples in the appendix.\n \n> In Fig 5, on the validation set, in terms of imitation loss there isn't that much difference between the policies, but in terms of task reward, the 'red' policy goes to zero while the other policies' rewards are still similar. Any intuition for why seemingly okay imitation doesn't translate to task reward?\n\nYes, we have noticed two types of behavior that have reasonably high imitation rewards but do not successfully complete the task: (i) the policy closely imitates the arm but ignores the block entirely; (ii) the policy successfully imitates both the arm and block position in the beginning of the trajectory, but fails to place the block in a stable position during the stack. As the imitation reward increases we see these behaviors less.\n\n> Overall, I enjoyed reading the paper and the experiments are comprehensive. The current presentation angle seems a bit off though.\n\nThanks! We are really glad you enjoyed the paper and the experiments, and hope we can align the presentation a bit better.\"}", "{\"title\": \"Interesting idea to extend DDPGfD to use only state trajectories, but needs further experimental validation.\", \"review\": \"**Summary**\\n\\nThe paper looks at the problem of one-shot imitation with high accuracy of imitation. The main contributions: \\n1. A learning technique for high-fidelity one-shot imitation at test time. \\n2. Policies to improve on the expert performance through RL. \\n\\nThe main improvement of this method is that demo actions and rewards are not needed; only state trajectories are sufficient. \\n\\n\\n** Comments **\\n- The novelty of the algorithm block\\nThe main method is very similar to D4PG-fd. The off-policy method samples from a replay buffer which comprises both the demos and the agent experience from the previous learner iterates. \\n\\n1. From a technical perspective, what is the advantage of training an imitation learner from a memory buffer of the total experience? \\nIf the task reward is not accessed while the imitation learner is training, then the data should not be used for training the task policy learner. On the other hand, if the task reward is indeed available, then what is the advantage of not using it? \\n\\n2. A comparison with a BC policy to generate more experience data for the task policy agent/learning might also be useful. \\n\\n* Improved Comparisons\\n- Compare with One-Shot Performance\\nSince this is one of the main contributions, explicit comparison with other one-shot imitation papers needs to be quantified with a clearly defined metric for generalization. \\n\\nThis comparison should be done both for short-term tasks such as block pick and place (Finn et al, Pathak et al, Sermanet et al.) and also for long-term tasks as shown in (Duan et al. 2017 and also in the Neural Task Programming/Neural Task Graph line of work from 2018)\\n\\n- Compare High-Fidelity Performance\\nIt is used as a differentiator of this method but without experimental evidence.\\nThe results showing imitation reward are insufficient. The metric should be independent of the method. An evaluation might compare trajectory tracking error: for objects, end-effector, and joint positions.
This is available as privileged information since the setup is in a simulation.\\n\\nFurthermore, a comparison with model-based trajectory tracking with a learned or fitted model of dynamics is also very useful.\\n\\n- Compare Policy Learning Performance\\nIn addition to D4PG variants, a performance comparison with GAIL would ascertain that unconditional imitation is better than the SoTA. \\n\\n\\n* Tracking a reference (from either sim or demos) is a good idea that has been explored in the sim2real literature [2,3] and imitation learning [4]. It is not by itself novel. The authors fail to acknowledge any work in this line as well as to provide insight into why this is good and when it is valid. For instance, with highly stochastic dynamics this may not work!\\n\\n\\n- \"Diverse Novel Skills\" \\nThe experiments are limited to a rather singular pick and place task with a 3-step structured reward model. It is unfair to characterize this domain as very diverse or complex from a robotics perspective. More experiments on continuous control would help.\\n\\n- Bigger networks\\n\"In fig. 3 we demonstrate that indeed a large ResNet34-style network (He et al., 2016) clearly outperforms\" -- but Fig 3 is a network architecture diagram. It is probably fig 6!\\n\\n- The authors are commended for presenting a broad overview of imitation based methods in table 2\\n\\n** Questions **\\n\\n1. How different is the imitation learner (trained with the imitation reward) from a Behaviour Cloning Policy? \\n\\n2. How is the local context considered in action generation in sec 2.1. \\nThe authors reset the simulation environment to o_1 = d_1. \\nThen actions are generated with \\\\pi_\\\\theta(o_t, d_{t+1}). \\na. Is the environment reset every time step?\\nb. If not, how is the deviation of the trajectory handled over time? \\nc. How is the time horizon for this open-loop rollout chosen? \\n\\n3. How is this different from using a tracking-based MPC with the same horizon? The cost can be set to the same similarity metric between states. \\n\\n4. The architecture uses a deep but simplistic model. When the model's success is largely attributed to state similarity -- especially image similarity -- why did the authors not use image comparators, something like a Siamese model?\", \"suggestion\": \"The whole set of experiments is in simulation. \\nThe authors go above and beyond in using Mitsuba for rendering images. But the images used are the default MuJoCo renders. It would be nice if the authors were more forthcoming about this. All image captions should clearly state -- simulated robot results, showing the images used for agent training. The Mitsuba renders are only used for the figures but nowhere in the algorithm. So why do this at all, and if it has to be used please do it with a disclaimer. Right now this detail is rather buried in the text.\", \"references\": \"1. Neural Task Programming, Xu et al. 2018 (https://arxiv.org/abs/1710.01813)\\n2. Preparing for the Unknown: Learning a Universal Policy with Online System Identification (https://arxiv.org/abs/1702.02453)\\n3. Adapt: zero-shot adaptive policy transfer for stochastic dynamical systems (https://arxiv.org/abs/1707.04674)\\n4. A survey of robot learning from demonstration, Argall et al.
2009\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Insufficient evidence/experimental validation for the main claims of the paper\", \"review\": \"\", \"summary\": \"This work proposes an approach for one-shot imitation with high accuracy, called \\\"high fidelity imitation learning\\\" by the authors. Furthermore, the work addresses the common problem of exploration in imitation learning, which would help rescue the agent from off-policy states.\\n\\nReview\\n\\nIn my opinion, the main claims of this paper are not validated sufficiently in the experiments. I would expect the experiments to be designed specifically to support the claims made, but little evidence is provided:\\n\\n- The authors claim that the method allows one-shot generalization to an unknown trajectory. To test this hypothesis the authors only provide experiments of generalization towards trajectories of a different demonstrator on the same task of stacking cubes. I would expect experiments with truly different trajectories on a different task than stacking cubes to test the hypothesis of one-shot imitation.\\nUntil then I see no evidence for a \\\"one-shot\\\" imitation capability of the proposed method.\\n\\n- That storing the trajectories of early training can act as a replacement for exploration as a rescue from off-policy states: This is never experimentally validated. This hypothesis could easily be validated with an ablation study, where the results of early training would not be added to the replay buffer.\\n\\n- High fidelity imitation: In the caption of Figure 7 the authors note that the unconditional task policy is able to outperform the demonstration videos. Thus the trajectories of the unconditional task policy allow a higher reward than the demonstrations.\\nCould the authors please comment on how the method still achieves high fidelity imitation even when the results of the unconditional task policy are added to the replay buffer? In principle these trajectories allow a higher reward than the demonstration trajectories that should be imitated.\\n\\nMainly due to the missing experimental validation of the claims made, I recommend rejecting the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"learning from video demonstration; exposition is confusing / misleading.\", \"review\": \"This paper presents an RL method for learning from video demonstration without access to expert actions. The agent first learns to imitate the expert demonstration (observed image sequence and proprioceptive information) by producing a sequence of actions that will lead to similar observations (this requires a renderer that takes actions and outputs images). The imitation loss is a similarity metric. Next, the agent explores the environment with both the imitation policy and task policy being learned; an off-policy RL algorithm, D4PG, is used for policy learning. Experiments are conducted on a simulated robot block stacking task.\\n\\nThe paper is really clearly written, but presenting the approach as \\\"high-fidelity\\\", \\\"one-shot\\\" learning is a bit confusing. First, it's not clear what's the motivation for high fidelity. To me this is an artifact due to having to imitate the visual observation instead of the actions, which is a legitimate constraint, but not the original goal. 
Second, the one-shot learning setting consists of training on a set of stochastic demonstrations and testing on another set collected from a different person, both for the same task. Usually one-shot learning tests on slightly different tasks or environments, whereas here the goal is to generalize to novel demonstrations. It's not clear why we care about imitation per se in addition to the task reward.\\n\\nWhat I find interesting is the proposed approach for learning from video demonstration without action labels. Currently this requires an executor to render the actions to images; what if we don't have such an executor or only have a noisy / approximate renderer? In the real world it's probably hard to find a good renderer; it would be interesting to see how this constraint can be relaxed.\", \"questions\": [\"While the authors have shown the average rewards of the two sets are different, I wonder what's the variance of each person's demonstration.\", \"In Fig 5, on the validation set, in terms of imitation loss there isn't that much difference between the policies, but in terms of task reward, the 'red' policy goes to zero while other policies' rewards are still similar. Any intuition for why seemingly okay imitation doesn't translate to task reward?\", \"Overall, I enjoyed reading the paper and the experiments are comprehensive. The current presentation angle seems a bit off though.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well presented, but not suitable for ICLR\", \"review\": \"\", \"summary\": \"This paper proposes MetaMimic, an algorithm that does the following:\\n(i) Learn to imitate with high fidelity from one shot. The setting is that we have access to several demonstrations (only states, no actions) of the same task. During training, we have pixel observations plus proprioceptive measurements. At test time, the learned policy can imitate a single new demonstration (consisting of only pixel observations) of the same task.\\n(ii) When given access to rewards, the policy can exceed the human demonstrator by augmenting its experience replay buffer with the experience gained while learning (i). Therefore, even in a setting with sparse rewards and no access to expert actions (only states), the policy can learn to solve the task.\", \"overall_evaluation\": \"This is a good paper. In my opinion, however, it does not pass the bar for ICLR.\", \"pros\": [\"The paper is well written. The contributions are clearly listed, the methods section is easy to follow and the authors explain the choices they make. The illustrations are clear and intuitive.\", \"The overview of hyperparameter choice and tuning / importance factor in the Appendix is useful.\", \"Interesting pipeline for learning policies that can use demonstrations without actions.\", \"The results on the simulated robot arm (block stacking task with two blocks) are good.\"], \"cons\": [\"The abstract oversells the contribution a bit when saying that MetaMimic can learn \\\"policies for high-fidelity one-shot imitation of diverse novel skills\\\". The setting that's considered in the paper is that of a single task, but different demonstrations (different humans from different starting points). 
This seems restrictive, and could have been motivated better.\", \"Experimental results are shown only for one task; block stacking with a robot arm in simulation.\", \"Might not be a good topical fit for ICLR, but more suited for a conference like CoRL or a workshop. The paper is very specific to imitation learning for manipulation / control tasks, where we can (1) reset the environment to the exact starting position of the demonstrations, (2) the Euclidean distance between states in the demonstration and those visited by the policy is meaningful, and (3) we have access to both pixel observations and proprioceptive measurements. The proposed method is an elegant way to solve this, but it's unclear how well it would perform on different types of control problems, or when we want to transfer policies between different (but related) tasks.\"], \"questions\": [\"Where does the \\\"task stochasticity\\\" come from? Only from the starting state, and from having different demonstrations? Or is the transition function also stochastic?\", \"The learned policy is able to do one-shot imitation, i.e., given a new demonstration (of the same task) the policy can follow this demonstration. Do I understand correctly that this means that there is *no* additional learning required at test time?\", \"It is not immediately clear to me why the setting of a single task but new demonstrations is interesting. Could the authors comment on this? One setting I could imagine is that the policy is trained in simulation, but then executed in the real world, given a new demonstration. (If that's the main motivation though, then the experiments might have to support that this is possible - if no real-world robot is available, maybe the same simulator with a slightly different camera angle / light conditions or so.)\", \"The x-axis in the figures says \\\"time (hours)\\\" - is that computation time or simulated time?\"], \"other_comments\": [\"In 3.2, I would be interested in seeing the following baseline comparison: Learn the test task from scratch using the one available demonstration, with the RL procedure (Equation 2, but possibly without the second term to make it fair). In Figure 5, we can see that the performance on the training tasks is much better when training on only 10 tasks, compared to 500. Then why not overfit to a single task, if that's what we're interested in?\", \"An interesting baseline for 3.3 might be an RL algorithm with shaped rewards: using an additional reward term that is the Euclidean distance to the *closest* datapoint from the demonstration. Compared to the baselines shown in the results section, this would be a fairer comparison because (1) unlike D4PG we also have access to information from the demonstrations, (2) no additional information is needed, like the action information in D4PGfD, and (3) we don't have the need for a curriculum.\", \"Nitpick (no influence on score):\", \"[1. Introduction]\", \"I find the first sentence, \\\"One-shot imitation is a powerful way to show agents how to solve a task\\\" a bit confusing. I'd say one-shot imitation is a method, not a way to show how to solve a task. Maybe an introductory sentence like \\\"Expert demonstrations are a powerful way to show agents how to solve a task.\\\" works better?\", \"Second sentence, the chosen example is \\\"manufacturing\\\" tasks - do you mean manipulation? 
When reading this, I had to think of car manufacturing - a task I could certainly not imitate with just a few demonstrations.\", \"Add note that with \\\"unconditional policy\\\" you mean not conditioned on a demonstration.\", \"[2. MetaMimic]\", \"[2.1] Third paragraph: write \\\"Figure 2, Algorithm 1\\\" or split the algorithm and figure up so you can refer to them separately.\", \"[2.1] Last paragraph, second line: remove second \\\"to\\\"\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
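The shaped-reward baseline suggested in the reviews of this record, an additional reward term based on the Euclidean distance to the closest demonstration state, can be sketched in a few lines. This is an illustrative reading of the suggestion, not code from the paper; the function name, the sigma parameter, and the toy data are hypothetical.

```python
import numpy as np

def tracking_reward(state, demo_states, sigma=1.0):
    # Euclidean distance from the current state to every demonstration state.
    dists = np.linalg.norm(demo_states - state, axis=1)
    # Gaussian kernel on the closest demo state: reward is in (0, 1] and
    # peaks when the agent sits exactly on the demonstrated trajectory.
    return float(np.exp(-dists.min() ** 2 / (2 * sigma ** 2)))

demo = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])  # toy 3-step demo in a 2-D state space
print(tracking_reward(np.array([0.45, 0.12]), demo))   # near the demo -> close to 1
print(tracking_reward(np.array([3.0, -2.0]), demo))    # far from the demo -> close to 0
```

Any monotone function of the distance would serve; the Gaussian kernel is just one bounded choice that keeps the shaped term comparable in scale to a sparse task reward.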
H1eiZnAqKm
The Expressive Power of Gated Recurrent Units as a Continuous Dynamical System
[ "Ian D. Jordan", "Piotr Aleksander Sokol", "Il Memming Park" ]
Gated recurrent units (GRUs) were inspired by the long short-term memory (LSTM) unit, as a means of capturing temporal structure with a less complex memory unit architecture. Despite their incredible success in tasks such as natural and artificial language processing, speech, video, and polyphonic music, very little is understood about the specific dynamic features representable in a GRU network. As a result, it is difficult to know a priori how well a GRU-RNN will perform on a given data set. In this paper, we develop a new theoretical framework to analyze one and two dimensional GRUs as a continuous dynamical system, and classify the dynamical features obtainable with such a system. We found a rich repertoire that includes stable limit cycles over time (nonlinear oscillations), multi-stable state transitions with various topologies, and homoclinic orbits. In addition, we show that any finite dimensional GRU cannot precisely replicate the dynamics of a ring attractor, or more generally, any continuous attractor, and is limited to finitely many isolated fixed points in theory. These findings were then experimentally verified in two dimensions by means of time series prediction.
[ "Gated Recurrent Units", "Recurrent Neural Network", "Time Series Predictions", "interpretable", "Nonlinear Dynamics", "Dynamical Systems" ]
https://openreview.net/pdf?id=H1eiZnAqKm
https://openreview.net/forum?id=H1eiZnAqKm
ICLR.cc/2019/Conference
2019
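Much of the discussion in this record turns on reading the GRU update as a forward Euler step of an ordinary differential equation. A schematic of that reading, using one common GRU convention (the paper's own equations (22)-(26) may place the gates and signs differently, so this is an illustrative sketch rather than the paper's exact system):

```latex
% GRU update rearranged as a forward Euler step with \Delta t = 1.
\begin{aligned}
h_{t+1} &= (1 - z_t) \odot h_t + z_t \odot \tanh\big(W x_t + U (r_t \odot h_t) + b\big)\\
\frac{h_{t+1} - h_t}{\Delta t}\bigg|_{\Delta t = 1} &= z_t \odot \Big(\tanh\big(W x_t + U (r_t \odot h_t) + b\big) - h_t\Big)\\
\dot{h} &= z(h, x) \odot \Big(\tanh\big(W x + U (r(h, x) \odot h) + b\big) - h\Big).
\end{aligned}
```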
{ "note_id": [ "SJgDE630y4", "rJxfcnhT1V", "rylOpfu5AQ", "r1g-jzu90m", "SygKuf_qAQ", "S1xTRZu5AX", "H1glN-d507", "BJlZ9PDCnm", "HkxD98Zv2Q", "H1l8eaK1hQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544633646946, 1544567945747, 1543303871649, 1543303832625, 1543303793440, 1543303636795, 1543303464401, 1541465993310, 1540982414926, 1540492526370 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1203/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/Authors" ], [ "ICLR.cc/2019/Conference/Paper1203/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1203/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1203/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper analyses GRUs using dynamic systems theory. The paper is well-written and the theory seems to be solid.\\n\\nBut there is agreement amongst the reviewers that the application of the method might not scale well beyond rather simple 1- or 2-D GRUs (i.e., with one or two GRUs). This limitation, which is an increasingly serious problem in machine-learning papers, should be solved before the paper can be published. A very recent extension of the simulations to 16 GRUs improves this, but a rigorous analysis of higher-dimensional systems is pending and poses a considerable block for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"scaling issue\"}", "{\"title\": \"Re: Better now\", \"comment\": \"Thank you for the comment and upon thinking about it we agree that the reference to machine translation had better be left out - we will remove this remark from the camera-ready version of the manuscript.\"}", "{\"title\": \"re: Review for the expressive power of GRUs as a continuous dynamical system.\", \"comment\": \">> Relevance of studying a continuous time version of the GRU\\n\\nRegardless of dimension, the discrete time GRU-RNN (any RNN or residual network for that matter) can be seen as a forward Euler discretization of an underlying continuous dynamical system, where the topology is dependent on the parameters. As such, there is a relation between the continuous system we derived and the discrete dynamics of the GRU, which try to approximate it. This explanation has also been added to our manuscript. \\n\\nMoreover, the continuous time limit of residual networks and recurrent neural networks has recently garnered substantial interest. While most of the interest was theoretically motivated, the recent popularization of Neural Ordinary Differential Equations [4], [5] shows the feasibility and usefulness of the continuous-time limit. Our work is highly relevant to these exciting developments and hopefully will provide a useful framework for analyzing the latent representations and, as mentioned in the general introduction, for analyzing the dynamics of the gradients with respect to parameters.\\n\\n>> \\u201cIn order to show this major limitation of GRUs \\u2026\\u201d but then a 2-gru is used, which means that it\\u2019s not a general problem for GRUs with higher dim, right? 
Also, won\\u2019t approximate slow points would also be fine here? I think this language needs to be more heavily qualified.\\n\\nWe agree with this concern and have changed the language used in our manuscript to better express what is meant by a limitation, keeping in mind the use of low dimensional latent dynamics, and comparing the results to the more general GRU problem.\\n\\nThe use of a pseudo-line attractor depends on forcing the nullclines to be sufficiently close to one another in order to cause the arbitrarily slow flow in that subregion of phase space to dip below machine epsilon; a more general form of slow point.\\n\\nHowever, as a limitation, a general requirement to use approximate slow points in this system (given all functions are smooth, and all parameters can vary smoothly) is that there must exist a separate topology where the slow point in question is a saddle-node bifurcation fixed point (see bifurcation theory), which limits the number of slow points that can exist for any specific set of parameters. \\n\\n>> GRU almost always refers to the network, even though it is Gated Recurrent Unit, this means that when you write \\u2018two GRUs\\u2019, the naive interpretation (to me) is that you are speaking about two networks and not a GRU network with two units.\\n\\nWe agree, and have made an explicit note on this point in our updated manuscript so as to avoid confusion.\"}", "{\"title\": \"cnt'd\", \"comment\": \">> We are not familiar with related work on transformations from discrete to continuous dynamical systems: are the dynamics of the discrete time GRU model preserved in the transformation? If so, is there a reference for this?\\n\\nA GRU-RNN (more generally any RNN or residual network) can be seen as a forward Euler discretization of an underlying continuous time dynamical system (see [4] and references within). As such, the discrete and continuous time systems have very similar forms, as the GRU is attempting to approximate the system we analyzed. The continuous-time dynamical systems framework preserves the smooth temporal structures and ignores the possible quirky/jumpy features of discrete maps, which powers our analysis. However, generally speaking, the dynamic properties are not always preserved when converting from discrete to continuous time. For example, [3] showed that the 2D discrete GRU can exhibit chaos. However, a 2D continuous time dynamical system cannot show signs of chaos, a result of the Poincare-Bendixson theorem (J. Meiss, Differential Dynamical Systems).\\n\\n>> Are the phase portraits in the middle row of figure 8 generated by letting the discrete GRU system evolve, or is the continuous system used with the parameters of the trained GRU?\\n\\nThe dynamics shown in the middle row of figure 8 are those of the trained GRU on the continuous time system.\\n\\nThe reviewer then lists out a series of additional comments. We have gone through each individually and made the suggested corrections.\\n\\n>> Is the result by Weiss et al actually related to the result of the authors who found that 2 GRUs cannot accurately approximate a line attractor without near zero constant curvature in the phase space?\\n\\nYes, the mechanistic act of counting in the [Weiss et al.] paper using LSTMs has a continuous time analog in a line attractor, with a forcing term propelling the state parallel to the attractor. Since the GRU lacks an output gate, its hidden state acts as a hybrid between the LSTM\\u2019s cell state and output state. As a result, the GRU hidden state must exist asymptotically on a compact set, which is not true for the LSTM cell state. This limitation is necessary in proving that the finite dimensional GRU cannot exhibit a line attractor.\"}", "{\"title\": \"re:There are several issues with this interesting analysis\", \"comment\": \"Reviewer Specific Concerns:\\n\\n>> The Proof of Lemma 2 claims that h(t) achieves all values on the real set, which is false (h(t) assumes values in (-1,1)).\\n\\nWe\\u2019ve reworked this proof to avoid the previous confusion. h(t), while asymptotically bounded to (-1,1), can be initialized anywhere on the reals, whereby it will eventually fall into the designated trapping region. Since h(t) can be realized as a line of unit slope, it is unbounded and bijective, obtaining all values on the reals.\\n\\n>> Lemma 1 does not seem like a complete proof.\\n\\nLemma 1 has been rewritten in its entirety as a means to (1) improve readability, and (2) include the countably infinite case. The authors believe this approach is better suited for the paper, as proof for the countable case is contained within the argument, and no information is left out, as Taylor series approximation is not used. Moreover, we have now extended the proof to arbitrary dimensions using differential geometry arguments. These are the new Theorems 1 and 2.\\n\\n>> The authors claim that an arbitrarily close approximation of a line attractor can be created using two GRUs, but no proof is provided.\\n\\nWe apologize for the confusion on this concern. We show by existence that a 2D GRU can approximate a straight (or nearly straight) line attractor. However, an arbitrary line attractor cannot be mimicked to machine precision. We\\u2019ve adjusted the language of the paper to avoid misinterpretation.\\n\\n>> The experimental part is difficult to evaluate since there are no learning curves for the three tasks. For instance, it is difficult to judge whether the GRUs are unable to learn the dynamics of a ring attractor because of theoretical limitations or because the model has not been properly trained for the specific task.\\n\\nWe have extended our experiments to illustrate the inability of the 2D GRU to capture the dynamics of a ring attractor. We compared the k-step MSE as a function of the number of epochs and as a function of latent dimensionality. We observe that for the ring attractor the MSE decreases as the latent dimensionality increases. On the other hand, the MSE for the FitzHugh-Nagumo does not decrease appreciably as the latent dimensionality increases.\\n\\n>> The paper is easy to read, except for certain parts where it is not clear if some of the statements are true in general or just have not been proven false by the authors. It is not clear why Figure 3 is representing all possible simple fixed points and bifurcation fixed points: is there a theoretical result stating that these are the only possible topologies, or are these the only ones found? The same question applies for the 36 images in figure 9. The range of the parameters used for finding these configurations is not specified.\\n\\nThank you for bringing our attention to this. Indeed our language was ambiguous in places. We have updated the language we use in order to make these points clearer. Generally speaking, the 1D analysis is exhaustive, as nothing beyond what we show exists. Figure 3 depicts all local structures found by the authors. 
Similarly, figure 9 depicts all global structures found by the authors.\\nThe range of parameters was not specified as no set range was considered in discovering these structures. Rather, all structures were found by a combinatorial systematic procedure, by means of considering all possible ways the nullclines can intersect, given their geometric structure. We\\u2019ve added an explanation of this method in the appendix.\\n\\n>> Since the hidden state assumes values in (-1,1)^2, why is its range in most of the images (-1.5,1.5)?\\n\\nWe agree that all interesting/practical behavior takes place in this bounded region in 2D. However, in section 2 it is stated that \\u201cthe hidden state is asymptotically contained within [-1,1]^d.\\u201d As a result, the hidden state assumes values in (-1,1)^2 if and only if it is initialized within that region of phase space. The range of (-1.5,1.5) was used to improve visualization of the global dynamics, by including trajectories initialized outside the trapping region. This point is further emphasized in the updated paper.\"}", "{\"title\": \"re: a dynamical systems analysis of 1d and 2d gated recurrent units\", \"comment\": \"Reviewer Specific Concerns:\\n\\n>> Readability and validity of the continuous time system derivation.\\n\\nWe\\u2019ve added several intermediate steps to the derivation (Appendix A) as a means of improving readability. Note that a GRU-RNN (any RNN or residual network for that matter) can be seen as a forward Euler discretization of an underlying continuous time dynamical system. Under this discretization, the derivative with respect to time appears on the right-hand side of equation (23), implying both the continuous and discrete time systems are of the same functional form. Furthermore, to clarify the reason for the seemingly missing $\\\\delta t$, let us point out that the Euler discretization is valid for general ODEs with arbitrary step sizes. In the GRU it\\u2019s implicitly assumed that $\\\\delta t = 1$, which is why the time step doesn\\u2019t appear in the GRU update equation for $h_{t+1}$. In the derivation we made use of this fact, but failed to make it explicit. We amend it in the new version of the manuscript.\\n\\n>> Small typo on the top of page 4\\nWe have corrected it.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank the reviewers for their careful reading of our manuscript and many helpful suggestions. We are flattered that the reviewers found the manuscript well written and original.\\n\\nFirst, we would like to briefly emphasize the importance of our work. Originally, GRUs were designed to mitigate the difficulty of training recurrent neural networks on tasks with long temporal dependence. In their ingenious use of different gates, both LSTMs and GRUs were believed to store information until it is needed at a later time. However, prior to our analysis, little was formally known about how hidden states store information in their dynamics. We extend this understanding by exhaustively listing the types of dynamics that a GRU network can generate. These include stable limit cycles over time (nonlinear oscillations), multi-stable state transitions with various topologies, or generating stereotypical temporal responses to perturbations (homoclinic orbit). We were pleasantly surprised to discover this rich expressive power of the 2D GRU system. 
This was possible thanks to the continuous-time dynamical systems framework that allows us to focus on the smooth temporal structures and ignore the possible quirky/jumpy features of discrete maps.\\n\\nFurthermore, the analysis of 1-D and planar hidden state dynamics offers a new approach to the analysis of recurrent neural networks. Existing approaches have chosen other simplifications, ranging from the analysis of linear dynamics to mean field approaches [1], [2]. The latter has extended our understanding of the dynamics of large, randomly initialized networks. The approach championed in this work considers, in a sense, the opposite simplification to that used in the mean field analysis. Here we have considered networks with 1 or 2 neurons in the hidden layer, but have derived the classes of dynamics that these simple networks always fall into, both at random initialization and throughout training. We believe that our analysis will be helpful in the future in deepening our understanding of the learning dynamics, since the backward pass (for backpropagating gradients) requires computing a linearization of the forward dynamics. The dynamical systems perspective is intimately connected to learning, since the stability of equilibria is measured with the eigenvalues of the same linearization. Therefore understanding the topological properties of the forward dynamics gives insight into the topology of learning dynamics. In future work we hope to leverage this connection to better understand gradient dynamics during learning.\", \"major_changes\": \"We generalized previous claims about point and line attractors to arbitrary dimensions (Theorems 1 & 2).\\nNew experiments showing learning curves for higher dimensional GRU networks (Fig. 9).\\nOverall improvement in writing.\", \"references\": \"[1] M. Hardt, T. Ma, and B. Recht, \\u201cGradient Descent Learns Linear Dynamical Systems,\\u201d J. Mach. Learn. Res., vol. 19, no. 29.\\n[2] M. Chen, J. Pennington, and S. Schoenholz, \\u201cDynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks,\\u201d in International Conference on Machine Learning, 2018, pp. 873\\u2013882.\\n[3] T. Laurent and J. von Brecht, \\u201cA recurrent neural network without chaos,\\u201d Nov. 2016.\\n[4] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, \\u201cNeural Ordinary Differential Equations,\\u201d ArXiv180607366 Cs Stat, Jun. 2018.\\n[5] W. Grathwohl, R. T. Q. Chen, J. Bettencourt, I. Sutskever, and D. Duvenaud, \\u201cFFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models,\\u201d ArXiv181001367 Cs Stat, Oct. 2018.\"}", "{\"title\": \"a dynamical systems analysis of 1d and 2d gated recurrent units\", \"review\": \"This paper analyzes GRUs from a dynamical systems perspective, i.e. phase diagrams, fixed points, and bifurcations. The abstract and intro are well written and motivate the need for a more theoretical framework to understand RNNs, especially how well they are able to represent and express temporal features in the training data. The dynamical systems analysis is well presented and visualized nicely.\\n\\nMost of the paper concentrates on the 1d (one single GRU) and 2d (two GRU's) case. 
They show that 2d GRUs can be trained to adopt a variety of fixed points, can approximate a line attractor (an important feature for short-term memory), but cannot mimic a ring attractor.\", \"my_concerns_are\": [\"The derivation of the continuous time dynamical system (Appendix A) is confusing to me. Unless I'm not following the derivation correctly, should there be another \\\\Delta t in the denominator of the right-hand side of (23), from (22)? It's confusing to me that the continuous-time version in (26) has essentially the same form as the discrete-time version in (22).\", \"The applicability of this analysis to RNNs of even modest size is unclear. Generically, there's no reason to believe the intuitions from 2d should necessarily generalize to higher dimensions, and rigorous analysis of higher dimensional systems of this kind can be fairly challenging, even if one starts from a continuation analysis.\", \"Small typo: top of Page 4, figure should refer to 3A, not 2A.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"There are several issues with this interesting analysis\", \"review\": \"The authors analyse GRUs with hidden sizes of one and two as continuous-time dynamical systems, claiming that the expressive power of the hidden state representation can provide prior knowledge on how well a GRU will perform on a given dataset. Their analysis shows what kind of hidden state dynamics the GRU can approximate in one and two dimensions. In the experimental part, they show how a GRU with two hidden states trained on a multistep prediction task can learn such dynamics.\\n\\nAlthough RNNs are important for Machine Learning, the paper seems to contain flaws in the theoretical part, which seem to invalidate some of the claimed results. But we may change our rating in case of a convincing rebuttal.\\n\\nThe Proof of Lemma 2 claims that h(t) achieves all values on the real set, which is false (h(t) assumes values in (-1,1)). Nevertheless, the theorem should hold since there is always at least one intersection between h and tanh(f(h)).\\n\\nLemma 1 claims that for any choice of parameters, there exist only finitely many fixed points. However, in the proof the authors only show that the number of fixed points cannot be uncountable, without taking into consideration the possibility that there are countably many fixed points. The proof also omits steps concerning the Taylor expansion which would make the proof clearer: We suggest adding those steps in the appendix. Furthermore, when equation (12) is Taylor-expanded, the authors do not consider the case where the GRU parameters are such that the argument of the function \\u201csech\\u201d is outside its convergence radius. These might be parameters for which there are infinitely many fixed points, even if we are unable to provide a Taylor expansion. The Lemma may still be correct, but this does not seem to be a complete proof.\\n\\nThe authors claim that an arbitrarily close approximation of a line attractor can be created using two GRUs, but no proof is provided.\\n\\nThe experimental part is difficult to evaluate since there are no learning curves for the three tasks. 
For instance, it is difficult to judge whether the GRUs are unable to learn the dynamics of a ring attractor because of theoretical limitations or because the model has not been properly trained for the specific task.\\n\\nThe paper is easy to read, except for certain parts where it is not clear if some of the statements are true in general or just have not been proven false by the authors. It is not clear why Figure 3 is representing all possible simple fixed points and bifurcation fixed points: is there a theoretical result stating that these are the only possible topologies, or are these the only ones found? The same question applies for the 36 images in figure 9. The range of the parameters used for finding these configurations is not specified.\\n\\nSince the hidden state assumes values in (-1,1)^2, why is its range in most of the images (-1.5,1.5)?\", \"we_are_not_familiar_with_related_work_on_transformations_from_discrete_to_continuous_dynamical_systems\": \"are the dynamics of the discrete time GRU model preserved in the transformation? If so, is there a reference for this? Are the phase portraits in the middle row of figure 8 generated by letting the discrete GRU system evolve, or is the continuous system used with the parameters of the trained GRU?\\n\\nWe would like to see more explanations of why various topologies are useful for the applications mentioned in the paper. Given a generic dataset, how can these results help to understand how well a GRU will perform?\\n\\nWhat is the reason behind the belief that the analysis extends to higher dimensions? The effects of a 1D -> 2D extension are far from trivial - why should that be different for higher dimensions?\\n\\nThe problem the authors want to solve seems important, and some of the theoretical results are promising, but we think that this paper has to be further polished before acceptance.\\n\\nIt is possible that we will increase the score if the authors can provide clarifications on the above questions.\", \"additional_comments\": \"Introduction\\n\\n-The vanishing gradient problem was not discovered in 1994, but in 1991 by Hochreiter: \\n\\nSepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, TU Munich, 1991. Advisor J. Schmidhuber.\\n\\n- Make clear that GRU is a variant of vanilla LSTM with forget gates (where one gate is missing):\\n\\nGers et al. \\u201cLearning to Forget: Continual Prediction with LSTM.\\u201c Neural Computation, 12(10):2451-2471, 2000. \\n\\n- The intro says that GRU has become widely popular and cites Britz et al 2017, but Britz et al actually show that LSTM consistently outperforms its variant GRU in Neural Machine Translation. Please clarify this. \\n\\n- Also mention Weiss et al (\\u201cOn the Practical Computational Power of Finite Precision RNNs for Language Recognition\\u201d) who exhibited basic limitations of GRU when compared to LSTM. 
\\n\\n- Is the result by Weiss et al actually related to the result of the authors who found that 2 GRUs cannot accurately approximate a line attractor without near zero constant curvature in the phase space?\\n\\n\\nSection 2\\n\\n-Wrong brackets in equation (4)\\n-Missing bracket before citing Laurent & Brecht\\n\\nSection 4\\n\\n-\\u201cWe conjecture that the system depicted in figure 2A..\\u201d Should be figure 3A\\n- Lemma1: UZ has capital Z subscript\\n\\nSection 5.2\\n\\n-\\u201cThe added affine transformation allows for a sufficiently long subinterval\\u201d: \\u201csufficiently long\\u201d is too vague\\n\\nSection 5.3\\n\\n\\u201cA manifold with without near zero constant curvature\\u201d: should be \\u201ca manifold without near zero constant curvature\\u201d\\n\\nAppendix A\\n\\n-Wrong brackets in equation (20)\\n\\nAppendix B\\n\\n- In the proof of Theorem 1, the derivative is of (29), not of (12)\\n\\nAppendix C\", \"figure_9\": \"\\u201cwho\\u2019s initial conditions\\u201d should be \\u201cwhose initial conditions\\u201d\", \"after_rebuttal\": \"It's better now. However, the revised introduction still says: \\\"GRU has become wildly popular in the machine learning community thanks to its performance in machine translation (Britz et al., 2017) ... LSTM has been shown to outperform GRU on neural machine translation (Britz et al., 2017).... specifically unbounded counting, come easy to LSTM networks but not to GRU networks (Weiss et al., 2018).\\\"\", \"so_better_remove_the_first_statement_on_britz_et_al\": \"\\\"GRU has become wildly popular ... in machine translation (Britz et al., 2017)\\\" because they actually show why GRU is NOT wildly popular in machine translation, as correctly justified later in the same paragraph.\\n\\nPending the above revision, we'd like to increase our evaluation by 2 points, up to 6!\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review for the expressive power of GRUs as a continuous dynamical system.\", \"review\": \"Here the authors convert the GRU equations into continuous time and use theory and experiments to study 1- and 2-dimensional GRU networks. The authors showcase every variety of dynamical topology available in these systems and point out that the desirable line and ring attractors are not achievable, except in gross approximation. The paper is extremely well written.\\n\\nI am deeply conflicted about this paper. Is the analysis of 1 or 2 dimensional GRUs interesting or significant? That\\u2019s a main question of this paper. There is no question of quality, or clarity, and I am reasonably certain nobody has analyzed the GRU in this way before.\\n\\nOn the one hand, the authors bring a rigor and language to the discussion of recurrent networks that is both revealing (for these examples) and may bear fruit in the future. On the other hand, the paper is exclusively focused on 1- and 2-dimensional examples which have precisely no relevance to the recurrent neural networks as used and studied by machine learning practitioners and researchers, respectively. If the authors have proved something more general for higher dimensional (>2) cases, they should make it as clear as possible.\\n \\nA second, lesser question of relevance is studying a continuous time version. 
It is my understanding that discrete time dynamics may exhibit significantly more complex dynamical phenomena, and again practitioners primarily deploy discrete time GRUs. I understand that theoretical progress often requires retreating to lower dimensionality (e.g. linearization, etc.), but in this case it is not clear to me that the end justifies the means. On the other hand, a publication such as this will not only help to change the language of RNNs in the deep learning community, but also potentially bring more dynamical systems specialists into the deep learning field, which I thoroughly endorse.\\n\\nModerate concern\\n\\n\\u201cIn order to show this major limitation of GRUs \\u2026\\u201d but then a 2-gru is used, which means that it\\u2019s not a general problem for GRUs with higher dim, right? Also, won\\u2019t approximate slow points would also be fine here? I think this language needs to be more heavily qualified.\\n\\nMinor\\n\\nGRU almost always refers to the network, even though it is Gated Recurrent Unit; this means that when you write \\u2018two GRUs\\u2019, the naive interpretation (to me) is that you are speaking about two networks and not a GRU network with two units.\", \"side_note_requiring_no_response\": \"It might be interesting to study the dynamical portrait as a function of training for the two-d GRU.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
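The phase portraits and the trapping region debated in the record above can be explored by directly integrating the continuous-time GRU flow with forward Euler. A minimal sketch under the gate convention sketched earlier, using arbitrary random parameters rather than the trained values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_flow(h, Uz, bz, Ur, br, Uh, bh):
    """dh/dt for an autonomous (input-free) continuous-time GRU."""
    z = sigmoid(Uz @ h + bz)  # update gate
    r = sigmoid(Ur @ h + br)  # reset gate
    return z * (np.tanh(Uh @ (r * h) + bh) - h)

rng = np.random.default_rng(0)
Uz, Ur, Uh = (rng.normal(size=(2, 2)) for _ in range(3))
bz, br, bh = (rng.normal(size=2) for _ in range(3))

dt, steps = 0.01, 2000
h = np.array([1.4, -1.2])  # initialized outside the trapping region
for _ in range(steps):     # forward Euler integration of the flow
    h = h + dt * gru_flow(h, Uz, bz, Ur, br, Uh, bh)
print(h)  # the flow pulls the state toward the trapping region [-1, 1]^2
```

Sweeping the initial condition over a grid of points and plotting the resulting trajectories yields the kind of (h1, h2) phase portraits discussed in the reviews.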
HyesW2C9YQ
I Know the Feeling: Learning to Converse with Empathy
[ "Hannah Rashkin", "Eric Michael Smith", "Margaret Li", "Y-Lan Boureau" ]
Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of suitable publicly available datasets both for emotion and dialogues. This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems. Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores), compared to models merely trained on large-scale Internet conversation data. We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model.
[ "dialogue generation", "nlp applications", "grounded text generation", "contextual representation learning" ]
https://openreview.net/pdf?id=HyesW2C9YQ
https://openreview.net/forum?id=HyesW2C9YQ
ICLR.cc/2019/Conference
2019
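The author responses in this record defend BLEU (Papineni et al., 2002) as one of the automated evaluation metrics. A minimal sketch of a corpus-level BLEU computation with NLTK; the tokenization and smoothing choices are illustrative assumptions, not the paper's exact evaluation pipeline:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One (reference-list, hypothesis) pair; a real evaluation loops over the test set.
references = [[["i", "am", "so", "sorry", "to", "hear", "that"]]]
hypotheses = [["i", "am", "sorry", "to", "hear", "that"]]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```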
{ "note_id": [ "HklZlzmtx4", "HklNRaCYA7", "B1erH8iYCX", "HyxygWiUR7", "BJx59xoLCm", "S1ecLgoURQ", "BJgP-yiLRm", "S1xwkJiLAm", "Byg9rCcIA7", "Hkeif058AX", "Ske58Tc8RQ", "S1gqZa98CQ", "Hklbe_U027", "S1eSaJc53Q", "B1xDLFJTiQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545314792804, 1543265740000, 1543251517272, 1543053543119, 1543053457789, 1543053394289, 1543053054584, 1543053023153, 1543052866343, 1543052819315, 1543052625665, 1543052546099, 1541461992770, 1541214141445, 1540319567122 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1202/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/Authors" ], [ "ICLR.cc/2019/Conference/Paper1202/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1202/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1202/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers raised a number of concerns including the usefulness of the presented dataset given that the collected data is acted rather than naturalistic (and the large body of research in affective computing explains that models trained on acted data cannot generalise to naturalistic data), no methodological novelty in the presented work, and a relatively uninteresting application with very limited real-world applicability (it remains unclear whether having better empathetic dialogues would be truly crucial for any real-life application and, in addition, all work is based on acted rather than real-world data). The authors\\u2019 rebuttal addressed some of the reviewers\\u2019 concerns but not fully (especially when it comes to the usefulness of the data). Overall, I believe that the effort to collect the presented database is noble and may be useful to the community to a small extent. However, given the unrealism of the data and, in turn, very limited (if any) generalisability of the presented work to real-world scenarios, and the lack of methodological contribution, I cannot recommend this paper for presentation at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Thank you -- your response is on the wrong reviewer thread though\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe are very glad to hear that you found our additional comments useful and upgraded your score to help it get presented at ICLR. 
We indeed are eager to see a lot more development based on this dataset.\\nWe noticed that you posted your response on the thread of the review of AnonReviewer3, which might be confusing to others -- would you mind reposting on the comment thread of your own review instead?\\nWe would also very much appreciate it if you could update your initial review to reflect the new upgraded score.\\nThank you again for your time and consideration!\"}", "{\"title\": \"thanks for your additional comments - I upgraded my rating\", \"comment\": \"thanks for your additional comments - I upgraded my rating. I am hoping to see even more development based on this dataset and perhaps a longer journal paper in the future.\"}", "{\"title\": \"Response (3)\", \"comment\": \"\\u201cAre the highlighted numbers the only significant findings or just the max scores?\\u201d: in our submission, we had highlighted the maximum score, as is commonly done in papers in this community. But we indeed found that it would make the picture clearer to instead use confidence intervals in the table of human evaluations, so we have now instead highlighted results that were more than 2 standard errors of the mean away from a reference model, as a 95% confidence interval corresponds to 1.96 SEM.\\n\\nUnder-explained addition of Table 7 in the supplementary material / \\u201cThe emotion labels for all these datasets are not directly comparable so I would have liked to have seen more explanation around how these classifications were compared.\\u201d: That table provides context on performance as a benchmark compared to existing sets rather than a new result, to give a sense of relative difficulty compared to existing benchmarks for a given classification system. We agree that the emotion labels are not directly comparable, but machine learning systems trained on one task can often successfully be fine-tuned to transfer between them, e.g. as done in the DeepMoji paper which presents results for all these datasets for the same base architecture. \\n\\n\\u201cIt would also be helpful to know how more similar emotions such as \\\"afraid\\\" and \\\"anxious\\\" were scored vs \\\"happy\\\" and \\\"sad\\\" confusions\\u201d: The accuracy reported is not weighted by how \\u201cbad\\u201d the confusion is, so all the confusions are scored the same: classifying an \\u201cafraid\\u201d situation as \\u201canxious\\u201d is penalized as much as classifying it as \\u201chappy\\u201d. This is a very common trait of supervised benchmarks, although there has been a lot of work on how to improve classifiers and benchmarks by taking into account similarities between labels, e.g. 
Bengio et al 2010 [5] that construct a data-driven structured hierarchy of labels based on classification performance.\\nWhile it would be very interesting to perform this type of analysis on our data, emotion classification in itself wasn\\u2019t the focus of this paper, as emotion classification was mostly used as a way to augment the model with useful representations, so we didn\\u2019t include many experiments on emotion classification per se, and leave that for future work.\\nAs we responded to another reviewer comment, we tried to alleviate potential confusion problems by using more than one label in the prepend setting, and by using intermediate representations within models that were taken before a determination of a single emotion was output, so that there isn\\u2019t an information loss caused by the winner-take-all process -- nonetheless, refining classifiers to make better use of label relatedness could be a way to improve representations for our task.\\n\\n[1] Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proc. of ACL. pages 994\\u20131003 \\n[2] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young. 2015. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. In Proc. of EMNLP. pages 1711\\u20131721\\n[3] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversation Models. In Proc. of NAACL-HLT. pages 110-119\\n[4] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. pages 311\\u2013318\\n[5] Bengio, S., Weston, J. and Grangier, D., 2010. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems (pp. 163-171).\"}", "{\"title\": \"Response (2)\", \"comment\": \"Context vs. emotion: Thanks for the feedback; the use of \\u201ccontext\\u201d for the dialogue utterances that came before the utterance to be produced was chosen to be consistent with existing dialogue papers in the literature, so we are keeping it for that specific use and have updated the paper to use \\u201cemotion\\u201d or \\u201clabel\\u201d everywhere else.\\n\\n\\u201cwhich also seems to be the first utterance in the dialog they will start\\u201d / \\u201cyou state that the dialog model has access to the situation description given by the speaker (also later called the situational prompt) but not the emotion word prompt.\\u201d / \\u201cCalling these both prompts makes the statement about 24,850 prompts/conversations a bit ambiguous.\\u201d: thanks for making us realize this was unclear. The situation description does not have to be the first utterance in the dialog started by the Speaker (but it can be if they choose to start the conversation with it, and indeed Speakers chose to do so frequently, about 23% of the time). It is not the case that the model has access to the situation description itself; neither the models nor the Listeners do (unless the Speaker decided to start the conversation with a description matching the situation). 
We have updated the manuscript in several places to make this important point clear, and also made sure to reserve \\u201cprompt\\u201d for the emotion word, and \\u201csituation description\\u201d for the text written by the Speaker in response to this word -- thank you for pointing out that we had used these words in an inconsistent way.\\n\\nHuman ratings/annotations: indeed our phrasing was confusing, thanks for pointing that out to us. We have updated the manuscript to clarify that the dialogs were generated as prompted by an emotion word and not annotated. As for model scoring, we now use the word \\u201cratings\\u201d instead of annotations, as per your suggestion to replace the word \\u201cannotation\\u201d.\\nWe added many details about the rating procedure in the Appendix, section A4. To your specific question: we, again, used MTurk. Workers were shown one randomly subsampled example from the test set for a randomly selected model (this was done 100 times per model) and asked to rate that single response. 217 US workers participated in the rating task, and had to perform a minimum of one rating.\\nFor the human comparison task, workers were shown a dialogue context, and the two responses from a pair of models presented in a randomized order (this was done 50 times per pair of models). They had to select if they preferred one, the other, or both equally. 337 US workers participated in the model comparison task.\\n\\n\\u201cre-used the same prompt several times\\u201d: workers were not prevented from talking about the same situation multiple times for the same emotion (and indeed some chose to do so, but very rarely), but were paired with different Listeners every time. Our phrasing of \\u201ctopic overlap\\u201d was indeed confusing, so we have updated the manuscript to make it clear that we are talking about the same situation being described by the same worker in the training and testing sets.\"}", "{\"title\": \"Response (1)\", \"comment\": \"Thank you for your thoughtful feedback and for pointing out many places where more experimental details or clarifications would be useful, and where we had been inconsistent in our terminology. We addressed your points and incorporated your corrections into the updated manuscript; please find detailed responses below.\\n\\n\\u201cI think it could have been better organized.\\u201d: we indeed have extensively re-organized the paper, as detailed in our response in the general thread.\\n\\n\\u201cI would have appreciated a better explanation of the rationale for using BLEU scores. I did some online research to understand these Bilingual Evaluation Understudy Scores and while it seems like they measure sentence similarity, it is unclear how they capture \\u201crelevance\\u201d at least according to the brief tutorial that I read (https://machinelearningmastery.com/calculate-bleu-score-for-text-python/).\\u201d: We truly appreciate your thoughtfulness in taking the time to consult background information about BLEU -- we indeed should have included the standard reference for BLEU, Papineni et al. [4], which we are adding to the manuscript.\\nWe use BLEU as an evaluation metric because it has been frequently used in other dialogue generation papers as an automated evaluation (Li et al. 2016 [1] cited in our manuscript, Wen et al. 2015 [2], Li et al. 2015 [3], to name just a few), and we are adding this as well to the manuscript. 
However, we definitely agree that word overlap scores do not always align with human judgement, which has been documented in other works such as the \\u201cHow Not to Evaluate your Dialogue System\\u201d paper [Liu et al. 2016] that we mentioned in our discussion of the human evaluation set-up. This is why we include human evaluation in addition to the commonly used automated metrics.\\n\\nCrowdsourcing process: Yes, we did use Amazon MTurk for recruiting workers, and required that all of our participants came from the US. Each pair of workers contributed to at least two conversations, which could have been about the same or different emotion labels, depending on which words they were offered and which words they selected. Individual workers did not have to have conversations about all 32 emotions; instead, coverage of all emotions was ensured by offering more often the emotions that had been selected less overall. We have added a lot of details about the procedure in the manuscript (in the Appendix A2 and A4 sections, as well as in section 3). As to your specific questions: the median number of conversations per worker was 8, while the average was 61. Thus, there were definitely a handful of workers who were more actively participating. To ensure quality, we hand-checked random subsets of conversations by our most-frequent workers. They were allowed to participate in as many of these HITs as they wanted for the first ~15k conversations, then we added qualifications to limit the more \\u201cfrequently active\\u201d workers to a set number of conversations (100 per worker). We have added that information to the crowdsourcing description in the appendix, as well as a larger random sample of conversations from the dataset.\\n\\nWe would like to clarify what the HITs look like. In each HIT a worker is first taken to a screen where they are shown 3 emotion words. At first the 3 words were sampled randomly, but as the crowdsourcing data generation process went on, we showed the 3 words that had overall been picked the least so far for a first-time worker, or the 3 that had been used the least for that worker if the worker had already performed the task before, so as to ensure better coverage of all emotion labels. It is true that this makes workers select emotions that they might not spontaneously have preferred, but we observed an initial bias for situations that were easier to describe (e.g., a situation causing surprise), and we thought our dataset would be more useful for training versatile dialogue models if all emotion words were covered in a more balanced way.\\n\\n\\u201cThis would imply that some emotional situations were less preferred and potentially more difficult to write about. It would be interesting if this data was presented.\\u201d: While we are not presenting the initial imbalance, or commenting on it in the paper, its residual effect (as well as the fact that workers still had a choice between 3 words, so could effectively exclude ever working on 2) can still be observed in the slight imbalance of our set, in Table 7 of the Appendix, where the labels are ordered by decreasing frequencies.\\n \\nWorkers picked one emotion word among the 3 offered (their own choice) and wrote a description of a time they felt that way. Then, they were taken to another screen where they were paired randomly with another worker who had just completed the same process. They took turns starting two conversations. 
Each worker had to describe their situation as part of the conversation they started. After that, they answered a few brief feedback questions which helped monitor quality.\"}", "{\"title\": \"Response (2): emotion labels\", \"comment\": \"\\u201cthis is a very refined set that could get blurred at the boundaries between similar emotions.\\u201d:\\n as mentioned in the paper, we consulted existing works on emotion classification, especially works that had provided previous datasets of similar nature (e.g., Skerry and Saxe 2015). We decided to include all the emotion labels used in those previous works so that people who had used those datasets before could more easily transition to ours, possibly by using only the subset that had those emotion labels. Distinction between similar emotions was not as important to us, since our main focus was generating situations to which Listeners could react with empathy, rather than distinguishing between them. We selected a very fine-grained set of emotion labels so that researchers could group together similar emotions, as needed depending on the application they are interested in (and indeed there is a lot of work on how to cluster labels in a data-driven way, e.g. , Bengio et al 2010 [1]), though we do not try that here. We reasoned that it is easier to group together after the fact than to separate and the focus of this work is not emotion classification. We also thought that keeping emotions that are similar but suggest some intensity gradation (e.g., angry vs. furious) could even be useful down the line for tasks such as grading emotion intensity, like the task 1 of SemEval 2018.\\n\\n\\u201cdoes everyone interpret the same emotion label the same way\\u201d: no, indeed, but the agreement between humans is high enough to get good signal, and this has been quantified -- for example, see Fig 1C in Skerry and Saxe 2015 (reference in the paper) that finds an accuracy of 65% for 20 labels (all part of our set of 32), where chance would be 5%.\\nFor an explicit feature-based analysis of similarity, relevant analyses of overlap of features and similarities can also be found in Skerry and Saxe 2015 -- in particular Figure 2 shows how labels (20 labels, that are included in our list of 32) relate to appraisal features (e.g., expectedness, future, familiarity, suddenness, etc), basic emotions, and the affective circumplex.\\n\\n\\u201c will such potential ambiguities impact the work?\\nOne way to learn more about this is to aggregate related emotions to make a coarser set,\\nand compare the results.\\u201d \\u201cWhat about [multitask and ensemble]?\\u201d: With supervised fine-tuning or concatenating representations (the multitask and ensemble settings), the representation used in the model is taken before a single winner is outputted, so there isn\\u2019t an information loss caused by the winner-take-all process of outputting a single label -- however, it is definitely possible that having better clustering of emotions could focus learning on more crucial information than distinguishing whether someone is \\u201cangry\\u201d or \\u201cfurious\\u201d while they\\u2019re actually somewhere in between, and this could be tested in future work, for example in conjunction with existing methods to combine labels. 
Thanks for the suggestion.\nAs you also mention, \\u201cTo some extent this is leveraged by the prepending method (with top-K emotion predictions).\\u201d -- and indeed that was our reason for experimenting with K > 1.\n\n\\u201con using an existing emotion predictor: does it predict the same set of emotions that you are using in this work?\\u201d: all of the emotion predictors that we use from other works were trained with different sets of labels than the ones we use, and not directly on emotions (e.g., emojis); however, we fine-tune the deepmoji+ model on our set of labels. The deepmoji paper presents many experiments on transferring their emoji classification learning to multiple loosely emotion-related tasks, like sentiment classification (Table 9 in the appendix lists many of those datasets). One of those datasets is the ISEAR set, which uses labels that we did include in our list and which also starts from short situation descriptions, so we had reason to believe that deepmoji could transfer well to our task.\n\n[1] Bengio, S., Weston, J. and Grangier, D., 2010. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems (pp. 163-171).\"}", "{\"title\": \"Response (1): experiment organization and description\", \"comment\": \"Thank you for your thoughtful feedback, which was very helpful to improve the paper. Please see our response in the general thread, which details our updates. Regarding your specific notes and questions:\n\n\\u201cThe conclusions are somewhat fuzzy as there are too many effects interacting, and as a result no clear cut recommendations can be made\\u201d: thanks for pointing that out -- we have extensively reorganized our paper to make the motivations and results clearer; please also see our response in the general thread.\n\n\\u201cHow is the \\\"situation description\\\" supposed to be related to the opening sentence of the speaker? In the examples there seems to be substantial overlap.\\u201d This was indeed unclear, thanks for pointing that out -- we have now clarified this. We asked the crowdsourced workers to start the conversation by describing their situation in a conversational way. Because of this, there often is overlap. Workers sometimes stuck to the situation description closely, while others were more creative about re-wording things.\"}", "{\"title\": \"Response (2): experiments\", \"comment\": \"\\u201cbut the question the authors do not satisfactorily address is whether their explicit (and I would say sometimes ad-hoc) treatment of empathy (e.g., using emotion classifier, etc.) is crucially needed to get better empathetic dialogues [...]\\u201d: thank you very much for making that point, and making us realize that our experimental results would benefit from disentanglement. As detailed in the response in the general thread, we have extensively reworked the experimental section to make it clear (1) where benefits in empathy from training on our dataset are seen without any increase in model capacity, (2) why we found it valuable to include experiments combining our base model with external classifiers, and (3) how capacity came into play, with new experiments with larger models. Your comments have also led us to downplay the treatment of emotion and add new experiments with topic classifiers, to give a better overall picture.
\\n\\n\\u201cMore statistics in the table in terms of number of parameters and amount of in- and out-of-domain data used for each experiment would help draw a clearer picture.\\u201d We have added a section and table discussing capacity (Table 4) for the crucial experiments. For some of the experiments using a classifier trained on out-of-domain data, we have clarified our motivation for including them. We do not claim that it is surprising that it should help, rather we aim to provide empirical confirmation of whether it does, for a variety of different data sources, and by how much, so that practitioners can more easily decide which model or data to adopt.\\n\\n\\u201cdoesn\\u2019t really attempt to make major technical contribution\\u201d: we do not argue to the contrary, and it was definitely not how we had tried to cast our contribution. We have updated our manuscript with more citations to previous work, including the ones you provided, to make it even clearer that we claim no innovation on the architecture front -- rather, our goal is to show how existing methods can be used with our dataset, and how they compare.\"}", "{\"title\": \"Response (1): dataset\", \"comment\": \"Thank you for your insightful and detailed review. While we respectfully disagree with some of the points (and will detail why below), we appreciate that in most cases the fault was lying with us not having clarified those arguments in the manuscript, which we have now done. For others, your insightful questions led us to organize our experiments better and supplement them with new ones which collectively make for a clearer picture. To your detailed points, let us start with one of your last observations:\\n\\n\\u201cAbout the use of Reddit: this might not be the best background dataset, as it\\u2019s mostly strangers talking to other strangers, presumably causing the baseline to be weak on empathy. \\u201c \\nWe definitely agree that we would expect Reddit to be weak on empathy, but (1) as stated in our response in the main thread, there isn\\u2019t currently an empathy benchmark that we know of to actually quantify that, and (2) we wish for publicly available resources to train a conversation system that would respond with empathy in a reproducible way for the community.\\nThe Reddit data has the advantage of being easily publicly available, of a very large scale (1.7B comments), and has already been used to train dialogue systems (a few refs in the paper), so it\\u2019s good for the publicly available reproducible part. From interacting with Reddit-trained dialogue systems and looking at Reddit data, it indeed doesn\\u2019t seem very empathetic. There are two options from there: try and make that system more empathetic, and this is the approach we took in our work. The other one is to look for another background dataset, as you suggest. Unfortunately, there aren\\u2019t many publicly available corpora for training dialogue in a general domain, and their scale is at least an order of magnitude smaller than the Reddit one. We like your specific suggestion of \\u201cTwitter or other social-network type datasets (letting you follow people rather topics)\\u201d, and we indeed know of existing datasets from Twitter of a large scale, but they have shortcomings. First, tweets have low character limits, which is a constraint that doesn\\u2019t match the setting of general dialogue. Second, existing publicly released datasets are orders of magnitude lower in scale, and the conversations are very short. 
The Twitter corpus from Ritter et al 2010 [1] has 1.3 million conversations, 69% of which have only 2 turns. Sordoni et al 2015 [2] used more than 100 million longer Twitter conversations, but the released Triples Twitter corpus unfortunately has fewer than 5k dialogues.\\nA last point that we also clarified in our updated paper (see response in common thread) is that by nature, publicly available datasets from social media are quite different from one-on-one conversations, so even with a better background dataset from public social media, we would want to release data in a one-on-one setting.\\n\\n\\u201c exchanges that are rather clich\\u00e9 and overdone (e.g., Table 1: the label \\u201cafraid\\u201d yields a situation that is rather spooky and unlikely in the real world, and the conversations themselves are rather clich\\u00e9 and incorporate little details that would make them sound real).\\u201d: the example we had used in Table 1 indeed gives that impression. Thanks for drawing our attention to that; this impression is fortunately not representative of our dataset, which to us seems quite colorful. As stated in the general thread response, we have now added a sample of 10 randomly drawn conversations to give a better sense of what our dataset looks like. That random sample talks about 4 chicken species (Australorps, Rhode Island Reds, Barred Plymouth Rocks, Welsummer), rabies causing sensitivity to light, enchiladas, English majors, too much coffee to start a shift, Tuesday breaks to buy lottery tickets with coworkers, etc.\\n\\n\\u201cexisting real-world datasets underrepresent rare emotions (e.g., afraid), but that\\u2019s just a reflection of how these emotions are distributed in the real world.\\u201d: thank you for making us realize that we had not done a good enough job explaining the limitations of existing datasets for training empathetic conversations. We have updated the manuscript to clarify the nature of existing datasets and why they did not meet our needs. Please also refer to our response in the general thread for more clarifications on this point.\\n\\n[1] A. Ritter, C. Cherry, and B. Dolan. Unsupervised modeling of twitter conversations. In North American Chapter of the Association for Computational Linguistics (NAACL 2010), 2010.\\n[2] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015.\"}", "{\"title\": \"Paper revision updated -- summary of modifications (2)\", \"comment\": \"(2) We organized our experiments better, as comments from all three reviewers helped us see that a clearer organization would greatly benefit the paper. We have now clearly separated the experiments in two sets:\\n-- experiments showing that using our data to train models improves the performance of a conversation model trained on Reddit on multiple dimensions, without using any other type of data or additional models, or increasing the capacity of the model beyond 0.01%, with the goal of demonstrating how our data can help create better conversation models\\n-- experiments comparing many ways to combine a pretrained conversation model and external pretrained models so as to bank previously conducted training without having to conduct costly retraining. 
We hope to help practitioners who would want to combine our dataset with existing models get a sense of what empirically works better or not on this benchmark.\\nAnonReviewer2 made an insightful point regarding the need to be more precise about model capacities, and clearer as to where additional data / capacity was used. In addition to the new clear partition of experiments, we have added a table with resource and parameter counts (table 4), and a paragraph to discuss capacity in section 4.1, with new experiments with larger models.\\nWe also added experiments using supervision from topic classification instead of emotion supervision (results added to Tables 2 and 3), and downplayed the focus on emotion to instead emphasize that many other types of good representations could be leveraged.\\n(3) Comments from all three reviewers helped us considerably clarify the experimental procedure throughout the paper and with a much longer section in the appendix, to use terminology more consistently, and present the main experimental results in a clearer way.\\n\\nWe hope that reviewers will find that we have adequately addressed their thoughtful comments, and that these extensive improvements to our manuscript will convince them to raise their scores so that we can share our work and dataset with the community.\"}", "{\"title\": \"Paper revision updated -- summary of modifications (1/2)\", \"comment\": \"We are grateful to all three reviewers for their insightful and thoughtful comments which have helped us substantially improve the manuscript. We uploaded a new version. Here is a summary of the main changes -- other reviewer questions are answered in individual replies to each review.\\n\\n(1) We clarified why we believe this dataset fills an important need for which no good data exists. AnonReviewer2\\u2019s detailed comments on real-world data made us realize that we had not sufficiently explained in the paper the shortcomings of the existing labelled datasets that we are aware of. We changed the Introduction and Related work sections to make that clearer. In particular:\\n-- we gave more background on DailyDialog, one of the datasets that we discuss as having an extreme skew in emotion labeling. We clarified that DailyDialog was \\u201cobtained by crawling educational websites intended for learners of English, includes many dialogues anchored in everyday situations and has been annotated post-hoc with emotion labels, but only \\u2248 5% of the utterances have a label other than \\u201cnone\\u201d or \\u201chappy\\u201d, and dialogues are mostly limited to domains deemed appropriate for use as a language learning tool (ordering from a restaurant, asking for directions, shopping for a specific item, introductions, etc).\\u201c We would respectfully argue that there is no reason to believe that the skews in that dataset actually reflect \\u201creal world\\u201d distributions: these are the biases of emotions that writers of dialogues for learners of English collectively believe would be the most useful and acceptable as teaching material. 
Our experience as language learners is that teaching dialogues are often limited to a narrow set of mundane experiences such as asking for directions, introductions, discussing coursework, vacations, etc -- and indeed random sampling of the data yields a lot of examples of these topics.\\n-- we provided our rationale for preferring crowdsourced data to public social media data, and explained why we believe the biases in public social media data were not a distribution that we should follow for our goal of empathetic conversation: \\u201cWhile public social media content has the advantage of being spontaneous (not elicited) data, it suffers from two shortcomings when used to train a model intended for one-on-one conversation (as opposed to, say, a bot designed to post on Twitter). First, the content is extracted from a context of communication in front of large \\u201dperipheral audiences\\u201d (Goffman, 1981) which include potentially everyone with an Internet connection, where the need for curated self-presentation (Goffman, 1959) and the uncertainty as to how wide that audience may be have been shown to lead to different choices of subject matters compared to private messaging, with people sharing more intense and negative emotions through private channels (Bazarova et al., 2015; Litt et al., 2014). Second, Tweets are generally a short-form format limited to 140 characters, which is not a constraint that applies to general conversation. In this work, we attempt to generate a more balanced coverage of emotions than would appear in public social media content, within a one-on-one framing of unconstrained utterances that is closer to our ultimate goal of training a model for conversation that can respond to any emotion.\\u201d \\nAs AnonReviewer2 suggests, it would be better to gather data from conversations between people who know each other -- but because of the nature of public social media communication, we would want those conversations to be from a one-on-one setting. This makes it impossible to gather and release that type of data from \\\"real interactions\\\" without violating the privacy of users who created it. Crowdsourcing provides dialogues that afford the one-on-one, real-time circumstances, while being much more suitable for reproduction and evaluation of dialogue systems. Furthermore, by explicitly asking the crowdsourced workers to try and be empathetic, our aim is to create data that captures empathy (as opposed to entertainment value for a public social media audience, etc). We would also like to highlight an observation from R2\\u2019s review, that actually contains one of our motivations for doing this work, about how conversation models trained on Reddit would \\u201cpresumably [...] be weak on empathy.\\u201d We would indeed expect that, but to the best of our knowledge there currently aren\\u2019t empathy benchmarks, and we found very little previous work on measuring empathy. Our work tries to remedy that.\\n-- comments from reviewers made us realize that including only two dialogues did not give a good sense of the dataset. We included 10 additional dialogues picked completely randomly from our data; we only rejected samples that created formatting problems in the text. The larger sample is presented in Table 8 of the appendix. 
We hope that the colorful sample makes it clear that our dataset is not the set of cliches with no details that AnonReviewer2 feared.\"}", "{\"title\": \"Doubts about the two main contributions\", \"review\": \"The overall goal of the paper is to make end-to-end dialogue systems more empathetic, so that they can respond more appropriately and in ways that acknowledge how the users are feeling. The authors make two contributions towards that goal: (1) they introduce a crowdsourced dataset (EmpatheticDialogue) annotated with fine-grained emotion labels. (2) They show improvements on dialogue generation (in terms of empathy, but also relevance and fluency) using a multi-task objective, ensemble of encoders, and a more ad-hoc technique that consists of prepending inferred emotion labels to the input.\\n\\nIn terms of technical novelty, the work is relatively incremental: (A) The use of multi-task objectives in sequence models [1] is relatively common nowadays (there is little mathematical details in the paper, so it\\u2019s hard to see how the approach of the paper really differs from extensive related work.). (B) Prepending predictions: prepending class labels to the input is also relatively common (e.g., in multilingual NMT to select a language). [2] presents a similar approach for polite response generation, where they prepend a label using a politeness classifier.\\n\\nI also have some doubts about the two claimed contributions of the paper (the authors actually list 3 contributions in the introduction, but for convenience I lump the 2 non-data ones together):\\n\\n(1) Dataset: The dataset was crowdsourced by giving workers an emotion label (e.g., afraid) and asking them to define a situation in which that emotion might occur and inviting them to have a conversation on that situation. The problem with prompting workers for specific emotions is that this assumes they are good actors and this is likely to produce exchanges that are rather clich\\u00e9 and overdone (e.g., Table 1: the label \\u201cafraid\\u201d yields a situation that is rather spooky and unlikely in the real world, and the conversations themselves are rather clich\\u00e9 and incorporate little details that would make them sound real). The authors justify this dataset by pointing out that existing real-world datasets underrepresent rare emotions (e.g., afraid), but that\\u2019s just a reflection of how these emotions are distributed in the real world. Better subsampling strategies would enable a better balance in the distribution without having to give up on real-world data (filtering using emojis, hashtags, etc.). As the paper shows quantitative gains using this dataset, it is probably ok to use but, qualitatively, this dataset is probably not for everyone working on emotion in NLP. \\n\\n(2) Improvement in empathetic dialogue generation: The paper shows improvements across the board compared to a Transformer baseline, but the question the authors do not satisfactorily address is whether their explicit (and I would say sometimes ad-hoc) treatment of empathy (e.g., using emotion classifier, etc.) is crucially needed to get better empathetic dialogues, since the authors did not control for training data size and model capacity. Indeed, the authors exploited different amounts of data (out of-domain, or both in- and out-of-domain), different model capacities (going from baseline Transformer to model ensembles), and sometimes richer input (e.g., pre-trained emotion classifier). 
The results might only be showing that more data or more model capacity helps, which would of course not be surprising at all. The fact that generated outputs improve in all aspects (not only empathy, but in attributes completely unrelated to empathy such as fluency and relevance) suggests that the improvement is due to more data or capacity (e.g., perhaps yielding better encoder). More statistics in the table in terms of number of parameters and amount of in- and out-of-domain data used for each experiment would help draw a clearer picture.\", \"about_the_use_of_reddit\": \"this might not be the best background dataset, as it\\u2019s mostly strangers talking to other strangers, presumably causing the baseline to be weak on empathy. Twitter or other social-network type datasets (letting you follow people rather topics) *might* be better suited as it comparatively involves more exchanges between people who actually know each other and who are thus more likely to behave empathetically.\\n\\nOverall, the paper doesn\\u2019t really attempt to make major technical contribution, and instead (1) introduces a dataset and (2) makes empirical contributions, but I think there are problems with both.\", \"typos\": \"\", \"introduction\": \"\\u201cfro\\u201d\", \"references\": \"Elizaa\\n\\n[1] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser \\nMulti-task Sequence to Sequence Learning\", \"https\": \"//arxiv.org/pdf/1805.03162.pdf\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A renewed attempt for adapting dialog responses to emotional context\", \"review\": \"The paper describes a new study about how to make dialogs more empathetic.\\nThe work introduced a new dataset of 25k dialogs designed to evaluate the\\nrole that empathy recognition may play in generating better responses \\ntuned to the feeling of the conversation partner. Several model\\nset-ups, and many secondary options of the set-ups are evaluated.\", \"pros\": \"A lot of good thoughts were put into the work, and even though the techniques\\ntried are relatively unsophisticated, the work represents a serious attempt\\non the subject and is of good reference value.\\n\\nThe linkage between the use of emotion supervision and better relevancy is interesting.\\n\\nThe dataset by itself is a good contribution to the community conducting studies in this area.\", \"cons\": \"The conclusions are somewhat fuzzy as there are too many effects\\ninteracting, and as a result no clear cut recommendations can be made\\n(perhaps with the exception that ensembling a classifier model trained\\nfor emotion recognition together with the response selector is seen\\nas having advantages).\\n\\nThere are some detailed questions that are unaddressed or unclear from\\nthe writing. See the Misc. items below.\\n\\nMisc.\\n\\nP.1, 6th line from bottom: \\\"fro\\\" -> \\\"from\\\"\", \"table_1\": \"How is the \\\"situation description\\\" supposed to be related to the\\nopening sentence of the speaker? In the examples there seems to be substantial\\noverlap.\\n\\nFigure 2, distribution of the 32 emotion labels used:\\nthis is a very refined set that could get blurred at the boundaries between similar emotions.\\nAs for the creators of those dialogs, does everyone interpret the same emotion label the same way?\\ne.g. 
angry, furious; confident, prepared; ...; will such potential ambiguities impact the work?\\nOne way to learn more about this is to aggregate related emotions to make a coarser set,\\nand compare the results.\\n\\nAlso, often an event may trigger multiple emotions, which one the speaker chooses to focus on\\nmay vary from person to person. How may ignoring the secondary emotions impact the results?\\nTo some extent this is leveraged by the prepending method (with top-K emotion predictions).\\nWhat about the other two methods?\\n\\nP. 6, on using an existing emotion predictor: does it predict the same set of emotions\\nthat you are using in this work?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Attempting to improve chatbot responses with empathy - contributed dataset\", \"review\": \"Overall this paper contributes many interesting insights into the specific application of empathetic dialog into chatbot responses. The paper in particular is contributing its collected set of 25k empathetic dialogs, short semi-staged conversations around a particular seeded emotion and the results of various ways of incorporating this training set into a generative chatbot.\\n\\nWhile the results clearly do not solve the problem of automating emapthy, the paper does give insights into which methods perform better than others (Generation vs Retrieval) and explicitly adding emotion predictions vs using an ensemble of encoders.\\n\\nThere is a lot in this paper, and I think it could have been better organized.\\nI am more familiar with emotion related research and not language to language translation, so I would have appreciated a better explanation of the rationale for using BLEU scores. I did some online research to understand these Bilingual Evaluation Understudy Scores and while it seems like they measure sentence similarity, it is unclear how they capture \\u201drelevance\\u201d at least according to the brief tutorial that I read (https://machinelearningmastery.com/calculate-bleu-score-for-text-python/). I did not see the paper describing the use of this score in the references but perhaps I missed it \\u2013 could you please clarify why this is a good metric for relevance? It seems that these scores are very sensitive to sentence variation. I am not sure if you can measure empathy or appropriateness of a response using this metric.\\nFor your data collection you have 810 participants and 24,850 conversations. Are the 810 participants all speakers or speakers and listeners combined? How many conversations did each speaker/listener pair perform 32? (one for each emotion) or 64? (two for each emotion) Was the number variable? If so what is the distribution of the contribution \\u2013 e.g. did one worker generate 10,000 while several hundred workers did only three of four? Was it about even? Just for clarity \\u2013 how did you enroll participants? Was it through AMT? What were the criteria for the workers? E.g. Native English speaker, etc.\\n\\nIn your supplemental material, I found the interchanging of the words \\u201ccontext\\u201d and \\u201cemotion\\u201d confusing. The word context is used frequently throughout your manuscript: \\u201cdialog context,\\u201d \\u201csituational context\\u201d - emotions are different from situations, the situational utterance is the first utterance describing the emotion if I read your manuscript correctly. 
Table 6 should use \\u201cLabel\\u201d or \\u201cEmotion\\u201d instead of the more ambiguous \\u201cContext.\\u201d \\n\\nMy understanding is that speakers were asked to write about a time when they experienced a particular feeling and they were given a choice of three feelings that they could write about. You then say that workers are forced to select from contexts they had not chosen before to ensure that all of the categories were used. From this I am assuming that each speaker/listener worker pair had to write about all 32 emotions \\u2013 is this correct? Another interpretation of this is that you asked new workers to describe situations involving feelings that had not been chosen by other workers as data collection progressed to ensure that you had a balanced data set. This would imply that some emotional situations were less preferred and potentially more difficult to write about. It would be interesting if this data was presented. It might imply that some emotion labels are not as strong if people were forced to write about them rather than being able to choose to write about them. \\nWere these dialogs ever actually annotated? You state in section 2, Related Work \\u201cwe train models for emotion detection on conversation data that has been explicitly labeled by annotators\\u201d \\u2013 please describe how this was done. Did independent third party annotators review the dialogs for label correctness? Was a single rater or a majority vote used to decide the final label. For example, in Table 1, the label \\u201cAfraid\\u201d is given to a conversation that could also have reasonable been generated by the label \\u201cAnxious\\u201d a word explicitly used in the dialog. I am guessing that the dialogs are just labeled according to the label / provocation word and that they were not annotated beyond that, but please make this clear. \\nIn the last paragraph you state \\u201cA few works focus..\\u201d and then list 5. This should rather be \\u201cseveral other works have focused on \\u201c \\u2026 \\nConversely, you later state in section 3 \\u201cSpeaker and Listener\\u201d, \\u201cWe include a few example conversations from the training data in Table 1,\\u201d this should more explicitly be \\u201ctwo.\\u201d\\nAlso in section 3 when you describe your cross validation process, you state \\u201cWe split the conversations into approximately 80/10/10 partitions. To prevent overlap of <<discussed topics>> we split the data so that all the sets of conversations with the same speaker providing the prompt would be in the same partition. \\nIn your supplemental material you state that workers were paired. Each worker is asked to write a prompt, which also seems to be the first utterance in the dialog they will start. You state each worker selects one emotion word from a list of three which is somehow generated (randomly?) form your list of 32 . I am assuming each worker in the pair does this, then the pair has a two \\u201cconversations\\u201d one where the first worker is the speaker and another where the second worker is the speaker \\u2013 is this correct? It is not entirely clear from the description. Given that you have 810 workers and 24,850 conversations, I am assuming that each worker had more than one conversation. My question is - did they generate a new prompt / first utterance for each conversations. I am assuming yes since you say there are 24,850 prompts/conversations. For each user are all of the situation/prompts they generate describing the same emotion context? E.g. 
would one worker write ~30 conversations on the same emotion. This seems unlikely, and it seems more likely that given the number of conversations ~30 per participant is similar to the number of emotion words that you asked each worker to cycle through nearly all of the emotions or that given they were able to select, they might describe the same emotion, e.g. \\u201cfear\\u201d several times. If the same worker was allowed to select the same emotion context multiple times was it found that they re-used the same prompt several times? I am assuming that this is the case and that this is what you mean when you say that you \\u201cprevent overlap of discussed topics\\u201d between sets when you exclude particular workers. Is this correct? Or did you actually look and code the discussed topics to ensure no overlap even across workers (e.g. several people might have expressed fear of heights or fear of the dark).\\n\\nIn section 4, Empathetic dialog generator, you state that the dialog model has access to the situation description given by the speaker (also later called the situational prompt) but not the emotion word prompt. Calling these both prompts makes the statement about 24,850 prompts/conversations a bit ambiguous. A better statement would be 24,850 conversations based on unique situational prompts/descriptions (if they are in fact unique situational prompts. I am assuming they are not if you are worried about overlapping \\u201cdiscussed topics\\u201d which I am assuming are the situational prompts since the dialogs are very short and heavily keyed off these initial situational prompts)\\n\\nIn your evaluation of the models with Human ratings you describe two sets of tests. In one test you say you collect 100 annotations per model. More explicitly, did you select 100 situational prompts and then ask workers to rate the response of each model? Was how many responses was each worker shown? How many workers were used? Are the highlighted numbers the only significant findings or just the max scores? Annotations is probably not the correct word here.\\n\\nPlease also describe your process for assigning workers to the second human ratings task. \\n\\nSince the two novel aspects of your paper are the new dataset and the use of this dataset to create more empathetic chatbot responses (\\\"I know the feeling\\\") I have focused on these aspects of the paper in my review.\\n\\nI found the inclusion of Table 7 underexplained in the text. The emotion labels for all these datasets are not directly comparable so I would have liked to have seen more explanation around how these classifications were compared. It would also be helpful to know how more similar emotions such as \\\"afraid\\\" and \\\"anxious\\\" were scored vs \\\"happy\\\" and \\\"sad\\\" confusions\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
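As a concrete illustration of the BLEU metric debated throughout this thread: BLEU counts overlapping n-grams between a generated response and one or more references, which is why it tracks surface similarity rather than relevance per se. Below is a minimal sketch using NLTK's implementation; the tokenized response pair is invented for illustration and is not drawn from the dataset or the paper.

# A minimal corpus-level BLEU sketch with NLTK; example tokens are invented.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["i", "am", "so", "sorry", "to", "hear", "that"]]]  # one list of references per hypothesis
hypotheses = [["sorry", "to", "hear", "that"]]                     # tokenized model outputs
smooth = SmoothingFunction().method1  # avoids zero scores when higher-order n-grams do not match
print(corpus_bleu(references, hypotheses, smoothing_function=smooth))

Short dialogue responses rarely match references exactly, so smoothing matters in practice; this sensitivity is one reason the discussion above pairs BLEU with human ratings.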
HJei-2RcK7
Graph Transformer
[ "Yuan Li", "Xiaodan Liang", "Zhiting Hu", "Yinbo Chen", "Eric P. Xing" ]
Graph neural networks (GNN) have gained increasing research interest as a means to the challenging goal of robust and universal graph learning. Previous GNNs have assumed a single pre-fixed graph structure and permitted only local context encoding. This paper proposes a novel Graph Transformer (GTR) architecture that captures long-range dependency with global attention, and enables dynamic graph structures. In particular, GTR propagates features within the same graph structure via intra-graph message passing, and transforms dynamic semantics across multi-domain graph-structured data (e.g. images, sequences, knowledge graphs) for multi-modal learning via inter-graph message passing. Furthermore, GTR enables effective incorporation of any prior graph structure by weighted averaging of the prior and learned edges, which can be crucially useful in scenarios where prior knowledge is desired. The proposed GTR achieves new state-of-the-art results across three benchmark tasks: few-shot learning, medical abnormality and disease classification, and graph classification. Experiments show that GTR is superior in learning robust graph representations, transforming high-level semantics across domains, and bridging prior graph structure with automatic structure learning.
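The edge mechanism summarized in this abstract is easy to state in code. The following is a hypothetical sketch, not the authors' implementation: the function name, tensor shapes, and the scaled dot-product form of the learned edges are assumptions; only the weighted averaging of prior and learned edges comes from the abstract (and matches the \lambda discussed in the thread below, where \lambda is the weight on prior edges).

# Hypothetical PyTorch sketch of blending prior edges with attention-derived edges.
import torch
import torch.nn.functional as F

def combine_edges(queries, keys, prior_adj, lam=0.5):
    # queries, keys: (n, d) node features; prior_adj: (n, n) prior edge weights.
    d = queries.size(-1)
    learned = F.softmax(queries @ keys.t() / d ** 0.5, dim=-1)  # global attention edges
    return lam * prior_adj + (1.0 - lam) * learned              # lam=1 keeps only prior edges, lam=0 only learned ones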
[ "Graph neural networks", "transformer", "attention" ]
https://openreview.net/pdf?id=HJei-2RcK7
https://openreview.net/forum?id=HJei-2RcK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Sygr5GZ7g4", "Syxt-SZJ14", "H1lBD_EnC7", "ryg4gQZ5Rm", "HJeQ2ghbRQ", "H1gLOlnb0X", "r1lf7xnbCQ", "Byl9qJhW0Q", "BJehhTIgRQ", "HkevBVyAh7", "HygJ6sB52X", "HkeG9tpuhX", "HJgco12NnX", "Skg-973qoQ", "BkgjwgsViQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "comment" ], "note_created": [ 1544913549048, 1543603457234, 1543419996549, 1543275243591, 1542729898684, 1542729838011, 1542729753558, 1542729617935, 1542643124228, 1541432383416, 1541196727267, 1541097866159, 1540829090331, 1540174729103, 1539776610935 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1201/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1201/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1201/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers all agree that the work is interesting, but none have stood out and championed the paper as exceptional. The reviewers note that the paper is well-written, contributes a methodological innovation, and provides compelling experiments. However, given the reviewers' positive but unenthusiastic scores, and after discussion with PCs, this paper does not meet the bar for acceptance into ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reviewers agree that work is interesting, but reviews are borderline\"}", "{\"title\": \"Could you point me to the SOTA clarifications in the manuscript?\", \"comment\": \"Thanks for the clarifications in the comment, but I did not dispute that extra information is not available. I asked for clarifications to be added to the manuscript for your claims of being state-of-the-art on miniImageNet 5-way 1-shot which are simply not correct imho, since extra information was used. This extra information is not available to the other methods in your table, so the comparison is apples-to-oranges.\\n\\nI could not find any clarifications in the main text of the paper. Furthermore, moving the coefficient ablation to appendix further obfuscates just how important this extra information is for the result. This is worrying. I was expecting a honest discussion about how your method can achieve performance above SOTA in that domain using extra information, but is not strictly comparable and hence it is not the new SOTA. Could you please point me to sections where you cover this?\"}", "{\"title\": \"Keep my score.\", \"comment\": \"Thanks for author's feedback. Most of my concerns have been addressed. I still have one more question. When you train GTR, do you use full gradient descent or mini-batch gradient descent method?\"}", "{\"title\": \"Summary of author response\", \"comment\": \"We thank all reviewers for their constructive comments. 
We have updated the paper with Appendix A-E to address reviewers' concerns and further demonstrate Graph Transformer\\u2019s (GTR) performance. Here is a summary of these updates:\\n\\n(1) R1 expressed concern on comparison of GTR with previous methods which do not require the relations of meta-test and base classes a priori. We argue that the relation of categories GTR requires are universal and general (e.g., borrowed from WordNet taxonomy (Miller, 1995)), and the usage of which inherently follows human learning where reconciliation of data-dependent visual recognition and universal knowledge happens. Besides, WordNet taxonomy knowledge graph (Miller, 1995) is adopted in many related zero-shot learning papers (Wang et al., 2018; Kampffmeyer et al., 2018; Salakhutdinov et al., 2011) as prior meta-test and base classes relations. Our experiment proves that this reconciliation leads to stronger performance in the challenging few-shot setting, and the learning of complex causal relations among medical abnormalities and diseases which is essential for explanatory diagnosis. \\n\\n(2) To address R2\\u2019s concern on memory overhead of GTR, we provided theoretical analysis of memory usage in Appendix B, as well as experimental results on the relation between memory usage, model performance and model size in Appendix E. Our experiment shows that GTR is capable of achieving state-of-the-art performance on 1-shot classification using less memory and parameters than its direct baseline model Gidaris & Komodakis (2018). \\n\\n(3)To address R3\\u2019s concern on computational efficiency of GTR, we provided theoretical analysis on time complexity in Appendix C. Our analysis shows that with sequences and graphs as output, GTR costs constant training time, and linear to the output size and constant testing time respectively. Thanks to GTR\\u2019s global attention mechanism, it has higher computational efficiency than recurrent neural network, and is capable of scaling to large graphs. The scaling capability is further enhanced by our proposed trimming scheme which promotes sparsity of edges. \\n\\n(4) We additional conducted ablation study on weight of prior edges (\\\\lambda) in few-shot learning, and provided results in Appendix D. Our experiment shows that prior relations of meta-test classes are beneficial, and the combination of which and automatic learning leads to the best performance. \\n\\n(5) Besides the 10 baselines we have already compared with in graph classification task, we additionally compared with AWE (Ivanov & Burnaev, 2018) and FGSD (Verma & Zhang, 2017) in section 4.3 Table 4, and showed improved performance. We have also added more discussion of recent related work in Appendix A, including AWE (Ivanov & Burnaev, 2018), FGSD (Verma & Zhang, 2017), GAML (Do et al.,2018), graph2vec (Narayanan et al., 2016), graph2graph (https://openreview.net/forum?id=r1xYr3C5t7), and graph2seq (https://openreview.net/forum?id=SkeXehR9t7; https://openreview.net/forum?id=Ske7ToC5Km), and highlighted the difference and advantages of the proposed approach.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"Thanks for the constructive comments.\\n\\n1. Message passing including intra- and inter-graph is a common strategy in graph neural networks (Do et al.,2018; Gilmer et al., 2017; Schlichtkrull et al., 2018). However, the underlying design, formulation, and effectiveness of models can vary drastically. 
Our proposed GTR is novel in that it uses a multi-head global attention mechanism to enable global context encoding and parallelism. These technical merits mitigate the challenging vanishing gradients problem that RNNs suffer from due to sequential processing, and enable the long-term relationship modeling and fast training that RNNs and most GNNs with only local context encoding lack. Besides, the second-to-last paragraph in Section 1 (Introduction) details GTR\\u2019s technical merits. \n\nWe have added more discussion of relevant approaches in Appendix A, including Graph Attentional Multi-Label Learning (GAML) (Do et al., 2018), and other previous work or parallel ICLR submissions on graph transformation such as graph2vec (Narayanan et al., 2016), graph2graph (https://openreview.net/forum?id=r1xYr3C5t7), and graph2seq (https://openreview.net/forum?id=SkeXehR9t7; https://openreview.net/forum?id=Ske7ToC5Km).\n\n2. Graph Transformer is able to process large graph inputs as the sparsity among edges can be promoted. For example, in the few-shot learning experiments, we propose to trim edges by a threshold, resulting in a sparse graph representation. The sparser the graphs are, the easier it is for GTR to extract features from the most related nodes, and the faster the information flows among strongly correlated nodes. The user-defined sparsity degree controls the scalability of GTR to large graphs, and the intensity of modeling long-term relationships between nodes, which is lacking in many GNNs (Kipf & Welling, 2017; Defferrard et al., 2016). In addition, different sparsity levels in different GTR layers can also be implemented, so as to gradually distill significant high-level semantics from large low-level input graphs. \n\nIn our experiments on medical abnormality and disease classification on the CX-CHR dataset, the abnormality graph contains 155 nodes and the computation by our GTR model is fast. In contrast, in the experiments conducted in GAML (Do et al., 2018), the average numbers of nodes are only 27.68 and 25.31 for the 9cancers and 50proteins datasets, respectively, which are much smaller than ours.\n\nReferences \n[1] Dehghani, Mostafa, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and \\u0141ukasz Kaiser. \\\"Universal transformers.\\\" arXiv preprint arXiv:1807.03819 (2018).\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thanks for the constructive comments.\n\n1. We agree that for graph inputs such as knowledge graphs, positional encoding is not necessary since the input nodes are fixed and prior edges are given. We indeed do not use positional encoding in the experiments of knowledge graph transformation such as few-shot classification (section 4.1), medical abnormality and disease classification (section 4.2), and graph representation learning (section 4.3). \n\nPositional encoding can be useful for sequence data (e.g., text) and visual data (e.g., images). For example, positional encoding can be used to encode object locations in an image. There is previous work (Liu et al., 2018) on incorporating positional cues such as coordinates as visual features to enhance representation learning and downstream tasks. We are happy to add an ablation study on the effectiveness of positional encoding for visual features in our revised version. \n\n2. In terms of memory overhead, GTR scales linearly with the graph size thanks to the proposed global attention mechanism.
It does not require any additional memory compared to the standard Transformer (Vaswani et al., 2017) and recurrent neural networks. Please refer to Table 5 in the revised paper, which summarizes the memory complexity for sequence and graph outputs.\n \nWe compute the memory usage of GTR and of the baseline model Gidaris & Komodakis (2018), and conduct an ablation study on model size on 1-shot learning in Appendix E, as well as a theoretical analysis in Appendix B. Specifically, we study how memory usage and model performance change with parameter size by changing feature dimensions and hidden feature dimensions, and fixing all other hyper-parameters. The results and analysis are summarized in Table 7. Most importantly, GTR using 512 as feature dimension and hidden feature dimension (last row of Table 7) only consumes 0.0278G of memory, which is less than that used by Gidaris & Komodakis (2018) (0.0407G), and obtains much better results (60.6778\\u00b10.7095% vs. 56.32\\u00b10.86%). This demonstrates that GTR is not only effective, but also memory efficient. Furthermore, all results are still higher than those of all baseline models, maintaining GTR\\u2019s state-of-the-art performance. Additionally, the memory usage for the medical abnormality and disease classification (section 4.2) and graph classification (section 4.3) experiments is only 0.0476G and 0.0217G respectively. \n \nIn terms of experiments on large datasets, the CX-CHR dataset used in the medical abnormality and disease classification task is relatively large, as it has 33,236 patient samples and 40,411 images in total, where every patient can have multiple images. We are happy to conduct additional experiments on larger datasets. \n \nLast but not least, Graph Transformer has several technical merits that we failed to present in our initial version, such as its time efficiency: it requires significantly less time to train than baseline methods, and only constant training and testing time for graph output. We add a detailed explanation in Appendix C of our revised paper. \n\nReferences \n[1] Liu, Rosanne, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. \\\"An intriguing failing of convolutional neural networks and the coordconv solution.\\\" arXiv preprint arXiv:1807.03247 (2018).\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for your constructive comments.\n\nOur method does not necessarily need to know a priori the meta-test class names, but just the relations of the novel categories. In such a setting, any permutation of the 5 classes is still a different task instance, and thus the difficulty level of the problem is the same as that studied in previous few-shot learning works. WordNet (Miller, 1995) categorical taxonomy relations are an example we used for incorporating rich semantic knowledge in few-shot learning. This is similar to much existing work in zero-shot learning where the word embeddings of novel classes are given a priori for learning implicit relations of novel categories using pre-trained word embeddings (Socher et al., 2013; Frome et al., 2013), or explicit categorical relations such as the hierarchical structure of classes represented as knowledge graphs (Salakhutdinov et al., 2011; Deng et al., 2014). For example, (Wang et al., 2018) uses semantic embeddings of novel categories and the categorical relationships from the WordNet knowledge graph to predict classifiers for zero-shot learning.
(Kampffmeyer et al., 2018) follows this line and explores WordNet knowledge graphs of novel and base categories for zero-shot learning. In these methods, semantic features such as word embedding and knowledge graphs of novel categories are required as input. \\n\\nCompare to previous few-shot learning literature, our method indeed requires prior knowledge of novel category relations in addition to the few training samples of novel categories. However, this prior knowledge is general, universal, and can be easily obtained from rich textural semantic space (e.g., word embeddings trained from sufficiently large corpus, and WordNet taxonomy) a priori instead of hand-engineered. For example, the categorial relations in few-shot learning are borrowed from WordNet taxonomy (Miller, 1995), and statistical computed from training data in medical abnormality and disease classification. \\n\\nThe incorporation of prior categorical relations in few-shot learning highly aligns with human learning (Lake et al., 2015). For example, when teaching a baby what a bird is like, a teacher not only shows a few visual examples, but also provides verbal/textual descriptions of specific characteristics of the bird such that the baby can associate the novel category with categories that have already been learned. The learning process includes both visual understanding that is data-dependent, and contextual understanding which is universal and general. \\n\\nIntuitively, both universal and task-dependent categorical knowledge are beneficial for learning from limited examples. Graph Transformer demonstrates its superior capability of distilling information from prior universal knowledge, and effectively adapts itself for new training samples as specific tasks change. It also shows that the reconciliation of the universal knowledge-based and the task-dependent automatic learning are the key to success in empowering machine learning models in the few-shot learning task and the medical abnormality and disease classification task. \\n \\n\\nReferences \\n[1] R. Socher, M. Ganjoo, C. D. Manning, and A. Y. Ng. Zero- Shot Learning Through Cross-Modal Transfer. In ICLR, 2013. \\n[2] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, 2013. \\n[3] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to Share Visual Appearance for Multiclass Object Detection. In CVPR, 2011. \\n[4] J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-Scale Object Classification Using Label Relation Graphs. In ECCV, 2014. \\n[5] Wang, Xiaolong, Yufei Ye, and Abhinav Gupta. \\\"Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs.\\\" CVPR. 2018.\\n[6] Kampffmeyer, Michael, et al. \\\"Rethinking Knowledge Graph Propagation for Zero-Shot Learning.\\\" arXiv preprint arXiv:1805.11724 (2018).\\n[7] Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. \\\"Human-level concept learning through probabilistic program induction.\\\" Science 350.6266 (2015): 1332-1338.\"}", "{\"title\": \"Response to \\\"Missing related work in experiments\\\"\", \"comment\": \"Thanks for the comment. The two papers mentioned have conducted experiments on PROTEINS and D&D dataset for graph classification, and both of them obtained lower performance than our graph transformer (GTR). Specifically, [1] obtained 71.51 \\u00b1 4.02 on D&D, [2] obtained 77.10 on D&D and 73.42 on PROTEINS. 
Graph Transformer obtained 79.15 on D&D and 75.70 on PROTEINS, surpassing both compared models by large margins.\n\nBesides, [1] uses random walks for graph representation learning, which is limited to local context encoding, and only uses predefined edge weights, while GTR enables global context encoding via a global attention mechanism and the incorporation of both predefined and learnable edge weights. [2] uses graph spectral distances for learning graph features, which requires complex computation such as eigendecomposition, while our method does not incur such cost. Additionally, both [1] and [2] use an SVM as the algorithm for graph classification, while our method does not use any additional classifier. \n\nGraph neural modeling is an active research field with many new methods emerging. We have compared with 10 previous methods, including 4 kernel-based methods and 6 GNN-based methods, in our graph classification experiments. We are happy to include the above results, and to compare with more previous work to demonstrate the performance of our approach.\n\n[1] Ivanov et al., Anonymous Walk Embeddings, ICML 2018\n[2] Verma et al., Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs, NIPS 2017\"}", "{\"comment\": \"While the proposed solution was compared to a few algorithms, some recent state-of-the-art algorithms were omitted from the experiments section, giving a misleading impression of the performance of the authors' algorithm. At least the following papers should be included, and the differences with the authors' approach argued.\n\n[1] Ivanov et al., Anonymous Walk Embeddings, ICML 2018\n[2] Verma et al., Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs, NIPS 2017\", \"title\": \"Missing related work in experiments.\"}", "{\"title\": \"This paper proposes an interesting method for graph datasets. However, some points need to be verified.\", \"review\": \"This paper proposes a graph transformer method to learn features from data with a graph structure. It is essentially the extension of the Transformer network to graph data. Although it is not very novel, it is interesting. The experimental results confirm the authors' claims.\", \"i_have_some_concerns_as_follows\": \"1. For the sequence input, this paper proposes to use positional encoding as in the standard Transformer network. However, for graphs, edges have already encoded the relative position information. Is it necessary to incorporate this positional encoding? It's encouraged to conduct some experiments to verify it.\n\n2. It is well known that graph neural networks usually have large memory overhead. How about this model? I found that the datasets used in this paper are not large. Can you conduct some experiments on large-scale datasets and show the memory overhead?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Useful but straightforward idea\", \"review\": \"Summary\n========\nThe paper adopts the self-attention mechanism in Transformer and in message-passing graph neural networks to derive graph-to-graph mapping. Tested on few-shot learning, medical imaging classification and graph classification problems, the proposed methods show competitive performance. \n\nComment\n========\nGraph-to-graph mapping is an interesting setting and the paper presents a useful solution and interesting applications.
The paper is easy to read.\\n\\nHowever, given the recent advancements in self-attention and message-passing graph modeling under various supervised settings (graph2vec, graph2set, graph2seq and graph2graph), the methodological novelty is somewhat limited. The idea of intra-graph and inter-graph message passing, for example, has been studied in:\\nDo et al. \\\"Attentional Multilabel Learning over Graphs - A Message Passing Approach.\\\" arXiv preprint arXiv:1804.00293 (2018).\\n\\nComputationally, the current solution is not very scalable for large input and output graphs.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting bridge paper between knowledge-based and learning approaches with good synergy\", \"review\": \"I am familiar with the few-shot learning literature and this particular approach is novel as far as I know. A weight generator outputs the last layer of an otherwise pre-trained model. A combination of attention over base category weights and graph neural networks is used to parametrize the generator. Results are particularly good on 1-shot miniImageNet classification, but may not be entirely comparable with previous work. Two more interesting experiments are given and have convincingly superior results (at first glance), but I am not familiar with those domains.\\n\\nI still think the questions in my previous post need answers! I am willing to improve my score if clarifications are added to the paper.\\n\\nOverall, the paper makes a convincing point that hand-engineered graphs and knowledge can be effectively used with learning methods, even in challenging few-shot settings.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"How comparable are the few-shot learning results with other approaches?\", \"comment\": \"Thanks for an interesting paper with diverse experiments!\\n\\nI wonder how comparable the few-shot learning results are to related work, since this paper claims state-of-the-art performance but seems to use extra information about the meta-test classes, which would make the results not directly comparable. It is my understanding that WordNet information is used to an increasing extent as \\\\lambda gets closer to 1, and that a \\\\lambda value of 0 is the only truly comparable result. Furthermore, this approach seems to assume that meta-test class names are known, which is not commonly assumed in other approaches. Indeed, most approaches would have no choice but to consider any permutation of the 5 classes as different task instances, which is (arguably) a harder problem. Could you please clarify these details?\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comment!\\n\\nThe accuracies of 1-shot learning for \\\\lambda=0.0 and 1.0 are 56.08+-0.79% and 61.56+-0.72%, respectively. The highest 1-shot accuracy is 61.58+-0.71%, achieved when \\\\lambda=0.9. The accuracies of 5-shot learning for \\\\lambda=0.0 and 1.0 are 72.12+-0.64% and 72.95+-0.64%, respectively. The highest 5-shot accuracy is 73.21+-0.63%, achieved when \\\\lambda=0.7. It is worth noting that \\\\lambda indicates the weight on prior edges. So \\\\lambda=0.0 and 1.0 correspond to only using the learned edges, and only using the pre-defined edges, respectively.
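As a minimal illustration of this interpolation (a sketch in our own notation; the array names are illustrative and not from the released code), the convex combination of pre-defined and learned edges can be written as:

```python
import numpy as np

def mix_edges(adj_prior, adj_learned, lam):
    # Convex combination of edge weights, with lam the weight on the prior edges:
    # lam = 0.0 -> only learned edges; lam = 1.0 -> only pre-defined edges.
    assert adj_prior.shape == adj_learned.shape
    return lam * adj_prior + (1.0 - lam) * adj_learned

# e.g., sweeping lam over {0.0, 0.1, ..., 1.0} corresponds to the reported ablation
```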
\\n\\nBesides, the results shown in our paper were obtained with only the first round of our model, using a fixed learning rate of 0.01 during episodic training. After allowing the learning rate for episodic training to decay by a factor of 10 whenever the validation performance plateaued, we obtained the above improved results. The results demonstrate that: 1) GTR with prior category similarity improves few-shot learning performance over its direct baseline framework, Gidaris & Komodakis (2018), on both 1-shot and 5-shot learning, and achieves state-of-the-art performance on 1-shot learning; 2) 1-shot learning relies more on the prior knowledge of similarity between base and novel categories than 5-shot learning does; 3) GTR (\\lambda=0.0) is slightly lower than Gidaris & Komodakis (2018), though the difference is not statistically significant, indicating that the attention mechanisms in both models have similar effectiveness in this task. We will add more ablation studies in our revised version. \\n\\nFor the question on GTR for image input, images are usually first fed to a deep network for feature extraction, and the extracted features, such as the output of the last convolutional layer of a deep network, are then used as the input of GTR. Thus, the visual input to GTR generally has a small size, such as 5*5*128 or 16*16*256 (thus, the graph node count is 25 or 256). In the graph classification task, we adopted naive average pooling to aggregate node features for class-level graph classification. However, other techniques such as cluster-based pooling can be incorporated.\"}", "{\"comment\": \"Thanks for the Graph Transformer work.\\n\\nThe idea of using both source attention and self-attention between different graphs is quite interesting and novel. Also, I like the natural layer stacking in Figure 1 (Right). \\n\\nSpecifically, I am interested in the few-shot learning experiments. In Table 1, the performance of GTR varies with different \\\\lambda values. \\nMy question is: what is the accuracy for \\\\lambda=0 and \\\\lambda=1, since these two values indicate two special cases: only using predefined edges and only using learned edges? \\nAnother question is about GTR for image input. In the paper, \\\"each pixel is treated as graph nodes\\\"; how do you get the class-level graph representation? Is it average pooling across pixels? And will this per-pixel node strategy increase the computation cost of the graph operations?\", \"title\": \"Interesting work, questions about few-shot learning\"}" ] }
Byf5-30qFX
DHER: Hindsight Experience Replay for Dynamic Goals
[ "Meng Fang", "Cheng Zhou", "Bei Shi", "Boqing Gong", "Jia Xu", "Tong Zhang" ]
Dealing with sparse rewards is one of the most important challenges in reinforcement learning (RL), especially when a goal is dynamic (e.g., to grasp a moving object). Hindsight experience replay (HER) has been shown to be an effective solution to handling sparse rewards with fixed goals. However, it does not account for dynamic goals in its vanilla form and, as a result, even degrades the performance of existing off-policy RL algorithms when the goal is changing over time. In this paper, we present Dynamic Hindsight Experience Replay (DHER), a novel approach for tasks with dynamic goals in the presence of sparse rewards. DHER automatically assembles successful experiences from two relevant failures and can be used to enhance an arbitrary off-policy RL algorithm when the tasks' goals are dynamic. We evaluate DHER on tasks of robotic manipulation and moving object tracking, and transfer the policies from simulation to physical robots. Extensive comparison and ablation studies demonstrate the superiority of our approach, showing that DHER is a crucial ingredient in enabling RL to solve tasks with dynamic goals in manipulation and grid world domains.
[ "Sparse rewards", "Dynamic goals", "Experience replay" ]
https://openreview.net/pdf?id=Byf5-30qFX
https://openreview.net/forum?id=Byf5-30qFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryeGn_lVSV", "ryl6DQ9i-V", "r1gXqdEEg4", "BJxLjfH5C7", "Sygu7-xF0Q", "HkgNjz-UAm", "rJlLKVtS07", "H1lI5-_BAm", "S1lN5tUBCQ", "SJekV4-SC7", "r1e8PbJV0X", "HJxjXfLXRQ", "B1eDAUxO6m", "rygmG8lOTQ", "SyxSHQHwpQ", "ByxsM6u76X", "BklxR_v7p7", "BJxeUuD76X", "BkljCMD7a7", "r1eb4ZDQ67", "H1xtaCNcnX", "rJgHzRb5hX", "B1lgRLesoX" ], "note_type": [ "comment", "official_comment", "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1550219434253, 1546523493377, 1544992907074, 1543291549841, 1543205151745, 1543013020334, 1542980733971, 1542975886135, 1542969739694, 1542947878549, 1542873437602, 1542836771266, 1542092495448, 1542092299016, 1542046525340, 1541799187498, 1541793992409, 1541793863693, 1541792467143, 1541792040977, 1541193408747, 1541180941364, 1540191944055 ], "note_signatures": [ [ "~Xiaojian_Ma1" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "~Hassam_Sheikh1" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/Authors" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1200/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"I found this paper and the supplementary video are quite interesting and insightful. However, I got a few questions on some details, and hope you can help me with them.\", \"q1\": \"In the last few seconds of video#1(https://drive.google.com/file/u/0/d/10SU6vYd0m0MAtSpRbs71sTQ4G6gaoBnY/view), when the gripper strikes out the gray ball, the red ball(goal) still locates on the very left of the table. According to my understanding, the goal at that time will be striking the gray ball to reach the very left position. However, the arm chooses to strike the ball to the middle instead. It's quite non-intuitive to observe such results since I can't find any mechanism about goal prediction in DHER and its implementation. Furthermore, if it can be seen as some mysterious generalization of the trained policy, why the policies in other Dy-* tasks are mostly acting in a goal-following manner? Can you help me to figure it out?\", \"q2\": \"In DHER, it matches the desired goal traj and achieved goal traj to rewrite the goal information of the off-policy experience. 
An emerging problem is that when the feasible space is quite large, it may require a fairly large number of trajectories to make a successful match, which will hurt the sample efficiency of such off-policy methods (DPG). I wonder if it is possible to simply construct a desired goal trajectory manually instead of brute-force matching over the simulated data? The temporal alignment may be an issue, but as using simulation is allowed (it is also used in the matching procedure of DHER), I think it won't be difficult.\", \"title\": \"Some questions\"}", "{\"title\": \"Code Release\", \"comment\": \"The code includes environments and algorithms: https://github.com/mengf1/DHER .\"}", "{\"metareview\": \"This work proposes a method for extending hindsight experience replay to the setting where the goal is not fixed, but dynamic or moving. It proceeds by amending failed episodes, searching replay memory for compatible trajectories from which to construct a trajectory that can be productively learned from.\\n\\nReviewers were generally positive on the novelty and importance of the contribution. While noting its limitations, it was still felt that the key ideas could be useful and influential. The tasks considered are modifications of OpenAI robotics environments, adapted to the dynamic goal setting, as well as a 2D planar \\\"snake\\\" game. There were concerns about the strength of the baselines employed, but reviewers seemed happy with the state of these post-revision. There were also concerns regarding clarity of presentation, particularly from AnonReviewer2, but significant progress was made on this front following discussions and revision.\\n\\nDespite remaining concerns over clarity, I am convinced that this is an interesting problem setting worth studying and that the proposed method makes significant progress. The method has limitations with respect to the sorts of environments where we can reasonably expect it to work (where other aspects of the environment are relatively stable both within and across episodes), but there is lots of work in the literature, particularly where robotics is concerned, that focuses on exactly these kinds of environments. This submission is therefore highly relevant to current practice and, by reviewers' accounts, generally well-executed in its post-revision form. I therefore recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A potentially influential approach despite its limitations, well delivered and improved following feedback.\"}", "{\"title\": \"New reward shaping experiments show that the issue is related to (2) and environments.\", \"comment\": \"Thanks for discussing this. We have added new dense reward baselines, DDPG (dense-2) and DQN (dense-2), in the paper: we use the negative distance (-d) as the dense reward. However, on success, we use (-d + 1.0) as the reward, where 1.0 is a bonus.\\n\\nAccording to our results (Figures 3 and 6), DDPG (dense-2) does not work well for manipulation tasks. Thus the issue may be related to (2). These reward shaping approaches may hinder exploration. For a long period, the agent does not obtain any successful experience. \\nThere are two reasons. First, these approaches restrict how the agent moves toward the goal. For example, current reward shaping approaches seem to encourage chasing the goal. They do not work very well when the velocity is large. A good solution may be to encourage the agent to find a shortcut.
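As a minimal sketch of the two reward variants discussed in this thread (our own illustration; the threshold eps and the exact success test are assumptions and may differ from the paper's implementation):

```python
import numpy as np

def sparse_reward(achieved, desired, eps=0.05):
    # HER-style sparse reward: 0 on success, -1 otherwise
    return 0.0 if np.linalg.norm(achieved - desired) < eps else -1.0

def dense2_reward(achieved, desired, eps=0.05, bonus=1.0):
    # The "dense-2" baseline: negative distance, plus a bonus on success
    d = np.linalg.norm(achieved - desired)
    return -d + bonus if d < eps else -d
```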
However, it is not easy to design a good reward shaping function because it involves the goal velocity, including both magnitude and direction. \\nSecond, manipulation environments are very challenging. Continuous environments and dynamic goals make the exploration space huge. \\n\\nDQN (dense-2) works very well for Dy-Snake because, in our simple 2D grid-world domain, which is a discrete environment, the agent can explore the entire space in a short time. The agent quickly learns that success brings a bonus.\"}", "{\"comment\": \"As long as there are some patterns in the desired goal's motion, the proposed method should work. It means that the goal doesn't have to follow the same trajectory. Of course, when the pattern is simple, it becomes easy to learn.\", \"title\": \"On the repeatable object trajectory\"}", "{\"title\": \"Is it possible to determine if the issue is (1) or (2)\", \"comment\": \"I think it's unlikely to be (3), but indeed I can imagine the specific shaped reward you use might cause the agent to get close to the goal but not quite successfully touch it. This seems to me, though, more a symptom of a poorly shaped reward. Does the agent receive extra reward for reaching the goal? It seems that would solve the problem of the agent not completely reaching the object. Is it possible to determine if the agent tends to get close to the goal?\\n\\nMaybe one could see if the modified (dense) reward is at least progressively improving. I still find it odd that this baseline is unable to learn at all in many of these cases. I think having a strong and fair baseline would help to convince readers about the method.\"}", "{\"title\": \"No goal trajectory prediction but use RL to predict actions\", \"comment\": \"Thanks for your reply. We do not directly predict or model the future of desired goals. Given many past trajectories of desired goals, the corresponding actions, and rewards (which can be seen as scores for the taken actions), the RL agent will know which action is good and can lead to the desired goals in the future.\\nThe RL algorithm constructs a policy (normally a neural network) to determine how to choose an action based on the current state. Therefore, we can conclude that the prediction of desired goals is automatically embedded in the learned RL policy. It is achieved through the process of reward maximization by the RL agent.\"}", "{\"title\": \"Goal trajectory prediction ?\", \"comment\": \"Thank you for the clarification above, it helps a lot. Now, to catch a moving object, you have to predict one of its future positions to meet it on time. You are not learning a model of the dynamics. So how do you predict this future position? Is it based on the idea that object trajectories are repeatable, i.e. the same object will perform the same trajectory many times?\\n\\nPlease forgive this naive way of putting questions, but it may help many readers beyond me.\"}", "{\"title\": \"More details of the desired and achieved goals\", \"comment\": \"Thanks for your reply. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). In the example of a moving object that the agent must reach, the desired goals correspond to the positions of the moving object. The achieved goals correspond to the positions of the gripper, which is connected to a robotic arm.\\nThe moving object (desired goals) is a dynamic goal.
The gripper (achieved goals) is controlled by the RL agent and tries to reach the moving object.\\n\\nWe updated Figure 1 and Section 3.2 to add more details of the desired and achieved goals. In the paper, in Figure 1, we use a green curve to indicate the trajectory of achieved goals and a red curve to indicate the trajectory of desired goals.\\nIn Figure 2, for the first three tasks, the positions of the (blocked) black gripper indicate achieved goals. The red object indicates desired goals. For the fourth task, the green circle (a snake) indicates achieved goals. The red object (food) indicates desired goals.\\n\\nIn our real robotic system, there is a gripper connected to a robotic arm, and a blue block. When running a task, the positions of the gripper indicate achieved goals and the positions of the moving blue block indicate desired goals.\"}", "{\"title\": \"I still don't get it\", \"comment\": \"I'm really sorry, but after reading Section 3 several times, I'm afraid I still don't understand exactly what the authors are doing.\\n\\nLet us take the example of a moving object that the agent must reach. In this case, I'm assuming \\\"achieved goals\\\" correspond to positions of the object, known a posteriori. But what do \\\"desired goals\\\" stand for? Is the agent trying to predict the trajectory of the object?\\n\\nIt seems that the other reviewers have no problem with that, so if anybody could explain the setup to me, I would be delighted...\"}", "{\"title\": \"The topic of the paper and the time complexity of DHER\", \"comment\": \"Thanks for your interest. The paper falls under relevant ICLR topics, for example, reinforcement learning or applications in robotics or any other field. Please see https://iclr.cc/Conferences/2019/CallForPapers .\\nTake HER (Andrychowicz et al., 2017) as an example: it was published at NIPS 2017. \\n\\nFor DHER, the time complexity of the search process is O(1). In our implementation, we use two hash tables to store the trajectories of achieved goals and desired goals, respectively.\"}", "{\"comment\": \"I feel this paper is very well suited for a robotics conference rather than ICLR. I see valid concerns from the reviewers, but these are general assumptions that need to be made by robotics people to make things work. The only thing I am interested in is the overall time complexity of the DHER process, which seems to grow as the number of episodes increases.\", \"title\": \"More suitable for a robotics conference\"}", "{\"title\": \"Limitation and that the algorithm is very natural for many manipulation tasks\", \"comment\": \"Thanks for discussing the limitation of DHER. Similar to HER, we need to have a definition of the goals and know the similarity metric between goals in order to construct \\u201csuccess\\u201d from failed experiences. We have described how to use and define goals in Section 3.1 --- and we made additional revisions to make it clearer. See Sections 1 and 3.1 for the discussions.\\n\\nBecause we have the same multi-goal assumption as HER, we did not claim our method can be used for every case. However, it can still be applied to many domains if we know how to define the goals and if their trajectories intersect. \\n\\nFor a game, if its goals can be used as part of the observation and do not affect the environment dynamics, our algorithm will work. Regarding the Atari games, we did find that there is no game satisfying the multi-goal assumption.
However, our approach can potentially be used for other games where we know the similarity of goals, for example, hunting for food in a Minecraft-like grid-world mini-game. The Dy-Snake game in our work serves as a reference for the types of games that can benefit from our approach. \\n\\nThe algorithm is very natural for many manipulation tasks because we can access (sometimes noisy) object positions in manipulation. The starting point of this work was actually manipulation control.\"}", "{\"title\": \"Intuition for the dense reward baseline and the reasons for its poor performance\", \"comment\": \"Thanks for noting this. The intuition is that the agent should try its best to approach the goal as the goal is moving.\\n\\nVanilla DDPG uses sparse rewards and works better than DDPG (dense). Actually, a similar phenomenon also appeared in HER. See Figure 5 in HER (Andrychowicz et al., 2017). There may be three reasons that make the shaped and dense rewards perform poorly: (1) There is a discrepancy between our optimization objective (i.e., the shaped reward function) and the success condition (i.e., being within some radius of the goal); (2) The shaped rewards penalize some inappropriate behaviour, which may hinder exploration. It could cause the agent to learn not to touch the goal at all if it cannot manipulate it precisely; (3) Dynamic goals make the tasks difficult since the search space is huge.\"}", "{\"title\": \"Intuition for why the dense reward baseline performs so poorly\", \"comment\": \"Thanks for adding this baseline. Could the authors provide some intuition about why the new baseline, a dense reward based on the distance to the target, doesn't work at all for the tasks in Figure 3 while DHER does? It seems this kind of dense reward should at least do better than the vanilla DDPG. It's hard to see why a sparse reward with DHER should do better than this by such a substantial amount. This would help a lot to understand the advantages of the proposed method (which also relies on knowing the goal position).\"}", "{\"title\": \"The approach appears more problem-specific than claimed.\", \"comment\": \"Thanks for your response and clarifications. I would like to comment on this point:\\n\\n\\\"1) We position the paper in the context of RL with sparse rewards. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The dynamic goal problem is extended from this setting, not from all other cases. Please see paragraph 3 in Section 1 (Introduction) and paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions. \\n2) We propose a new experience replay method. The proposed algorithm can be combined with any off-policy RL algorithm, similar to HER, as shown in Figure 1.\\n3) The motivation for developing algorithms that can learn from unshaped reward signals is that they do not need domain-specific knowledge and are applicable in situations where we do not know what admissible behaviour may look like. A similar motivation is also mentioned in HER (Andrychowicz et al., 2017). We also added new experimental results about dense rewards. The results show DHER works better. See Figures 3 and 6.\", \"q1\": \"The algorithm appears very specific and not applicable to all cases with dynamic goals. \\u2026\", \"a1\": \"Please see 1) and 3) above.\\\"\\n\\nI believe this kind of motivation as a principled approach to RL with sparse rewards and no domain knowledge is an overclaim.
The HER algorithm is a heuristic one and, to the best of my understanding, requires domain-specific knowledge of how to set fake goals, which is natural in many settings such as grid worlds, for example. The moving goal case described here requires even more domain-specific knowledge, and I am not convinced it is truly \\u201cmodel-free\\u201d in most cases. To the best of my understanding, the matching phase of your method requires a domain-specific understanding of goal similarity. Is it possible to provide a dynamic goal example that is not just a simple and short trajectory in space and for which applying DHER makes sense? Could the authors, for example, explain how the algorithm would be applicable in the case of an Atari-style game where a goal would teleport or have long trajectories (non-trivial to match without a complex matching heuristic)? It seems in this case (a) one would have to obtain precise coordinate positions of the goal (this would mean one can\\u2019t just solve the problem based on pure pixels and must rely on domain knowledge) and (b) the matching algorithm itself would need to be heavily crafted with domain-specific knowledge. I think the method might be more specific than the authors claim and should be presented as such.\"}", "{\"title\": \"[1/2] The algorithm does not need to learn the dynamics. It creates success experiences by combining trajectories in the replay buffer.\", \"comment\": \"We thank the reviewer for the comments and have revised the paper accordingly. We believe the reviewer has some misunderstandings about our work. We make the following clarifications.\\n1) Regarding the dynamics of the goals, our algorithm does not need to learn them. The algorithm creates new experiences by combining two failed experiences whose goal trajectories overlap at some timestep. Please see paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions.\\n2) Our algorithm is about experience replay. The input is the past experiences. The output is newly assembled success experiences, if they exist. We updated Figure 1 to show how DHER works with an RL algorithm.\\n3) Regarding the RL environments, the proposed algorithm, and the transfer solution, we would like to open-source all of them. All results can be reproduced. We believe the dynamic goal problem in manipulation control is also interesting for other researchers.\", \"q1\": \"In order to do so, they first need to learn a model of the dynamics of the goal, and then to select in the replay buffer experience reaching the expected value of the goal at the expected time.\", \"a1\": \"Please see S1.\", \"q2\": \"how the agent learns of the goal motion ...\", \"a2\": \"Generally speaking, reinforcement learning learns a policy through trial and error. The reinforcement learning agent interacts with an environment and obtains rewards that indicate whether its actions are good or not.\\nIn our setting, the goal\\u2019s motion is part of the environment. This setting is quite normal in the real world. See our introduction and HER (Andrychowicz et al., 2017). When an RL algorithm takes an action, it will automatically and latently take the knowledge of the goal\\u2019s motion into consideration. \\nHowever, under this setting, after interacting with the environment for a long time, we still face the problem that we do not have success signals to guide policy learning. The main difficulty then lies in how to efficiently use the past experiences in the replay buffer to construct the success signals, rather than in learning the motion of the goal.
Our paper then provides a solution to this difficulty.\\nThere are many goal trajectories, and they differ from each other. Taking Dy-Reach as an example, as shown in Figure 3(a), we followed OpenAI Gym\\u2019s training settings. There are 50 epochs in total and each epoch has 100 episodes, i.e., 100 trajectories. The performance (success rate) of the learned model DDPG+DHER can reach 0.8. If the velocity of the goal is slower, the performance can reach 1.0.\", \"q3\": \"how the output of this model is taken as input to infer the desired values of the goal in the future: ...\", \"a3\": \"Our model is a kind of experience replay method. The input of our model is the past trajectories. Most of them are failures. The output of our model is assembled experiences. The assembled experiences are success experiences. We followed the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The goals are represented by positions. The model searches for the relevant experience according to the positions of the goals. If the positions of two goals overlap (within a tolerance of 0.01) at any time, then they are matched.\"}", "{\"title\": \"[2/2] Additional responses to the reviewer\\u2019s points.\", \"comment\": \"Q4: an architecture diagram\", \"a4\": \"We updated Figure 1.\", \"q5\": \"Figures 3a and 5 \\u2026 performance decreases ...\", \"a5\": \"One reason may be that it is a temporary drop and will recover later. Another reason may be that the policy trained with assembled experiences overfits to simple cases, as such experiences are assembled frequently. The overfitting to simple cases decreases overall performance. A similar pattern also appeared in other papers; see the Pushing task in Fig. 2 of HER (Andrychowicz et al., 2017).\", \"q6\": \"To me, Section 4.5 about transfer to a real robot does not bring much \\u2026\", \"a6\": \"The experiments on transferring to a real robot mainly demonstrate that dynamic goals are real-world problems and can be solved by our method. At the same time, they show that when DHER uses positions, it is robust to the real-world environment.\", \"q7\": \"In Section 4.6, the fact that DHER can outperform HER+ is weird \\u2026\", \"a7\": \"It is indeed a little surprising. It shows DHER is very efficient in some simple environments. In a simple environment, such as Dy-Snake, DHER achieves better generalization than HER+. The reason may be that HER+ uses only one way to modify a trajectory. However, DHER has different ways to create success trajectories because we can find different matching positions given a trajectory from the past experiences. The Dy-Snake environment is so simple that DHER is able to create a lot of successful experiences in a short time.\", \"q8\": \"In more details, a few further remarks ...\", \"a8\": \"We polished the paper.\", \"q9\": \"in the appendix a further experiment (dy-sliding) \\u2026 of little use\\u2026\", \"a9\": \"We removed it. We had added it because our open-source release will contain this environment and our model also works on it successfully.\", \"q10\": \"In Algorithm 1, line 26: this is not the algorithm A that you optimize, this is its critic network.\", \"a10\": \"Line 26 indicates a standard update for the RL algorithm A. It is similar to HER. Please see the last several lines of Algorithm 1 in HER (Andrychowicz et al., 2017).\\nThe key process of DHER is in lines 13 to 23.
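For intuition, a rough Python sketch of this matching step (a simplification of Algorithm 1; the function names and the rounding-based hash key are our own illustration, not the released code):

```python
def round_key(pos, tol=0.01):
    # Discretize a goal position so that positions within the tolerance share a key
    return tuple(round(x / tol) for x in pos)

def find_match(achieved_traj, desired_index):
    # achieved_traj: achieved-goal positions of a failed episode i
    # desired_index: dict mapping round_key(position) -> (episode_id j, timestep q),
    # built from the desired-goal trajectories of other failed episodes; the dict
    # lookup is what makes each search O(1), as with the authors' hash tables.
    for p, pos in enumerate(achieved_traj):
        hit = desired_index.get(round_key(pos))
        if hit is not None:
            j, q = hit
            return p, j, q  # i's achieved goal at step p matches j's desired goal at step q
    return None

# On a match, episode i's states and actions can be relabeled with episode j's
# desired-goal trajectory (aligned so the goals coincide at p and q), yielding
# an assembled "successful" experience for replay.
```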
We have added a marker at the end of Line 20.\", \"q11\": \"line 15: you search for a trajectory that matches the desired goal ...\", \"a11\": \"We use a hash table to store trajectories. We search the trajectories in the hash table and return the first one that matches.\", \"q12\": \"we assign certain rules to the goals so that they accordingly move => very unclear...\", \"a12\": \"The details are given in the next paragraph. See the second paragraph in Section 4.1. For different environments, the rules are slightly different.\", \"q13\": \"For defining the reward, you use s_{t+1} and g_{t+1}, why not s_t and g_t?\", \"a13\": \"They have the same meaning and just correspond to different timesteps. At time step t, after taking an action, the state becomes s_{t+1} and the goal becomes g_{t+1}. Thus the reward is defined based on s_{t+1} and g_{t+1}.\\nSimilarly, if the time step is t - 1 (t > 1), the reward is defined based on s_{t} and g_{t}.\", \"q14\": \"p6: the same cell as the food at a certain time step. Which time step? How do you choose?\", \"a14\": \"It means that if the snake moves to the same cell as the food at any timestep, the game is over. We only set a maximum timestep for each episode.\"}", "{\"title\": \"Experiments of the shaped reward baselines.\", \"comment\": \"Thank you for your insightful comments and feedback!\", \"q1\": \"baselines \\u2026 shaped rewards\\u2026\", \"a1\": \"We added shaped reward baselines. We use a natural distance-related (dense) reward function to train the agent. Figures 3 and 6 in the paper show that the dense rewards do not work well for dynamic goals, though they help at the beginning of learning.\", \"q2\": [\"It would be good to be more upfront about the limitations of the method \\u2026\"], \"a2\": \"We agree. In the revised paper, we provided more details about the limitations, including the goal assumption, the transfer requirements, and so on. See Sections 1 and 4.5 for more details.\", \"q3\": \"It would be interesting to see quantitative results for the simulated experiments in section 4.5.\", \"a3\": \"Thanks for your valuable suggestion. In Section 4.5, with the accurate positions, we achieve a 100% success rate over 5 trials.\", \"q4\": \"The performance of DHER on Dy-Reaching seems to degrade in later stages of training (Figures 3a and 5). Do you know what is causing it? DQN or DHER?\", \"a4\": \"One reason may be that it is a temporary drop and will recover later. Another reason may be that the policy trained with assembled experiences overfits to simple cases, as such experiences are assembled frequently. The overfitting to simple cases decreases overall performance. A similar pattern also appeared in other papers; see the Pushing task in Fig. 2 of HER (Andrychowicz et al., 2017).\"}", "{\"title\": \"There is little work addressing dynamic goals in the sparse reward setting. Update the literature review and add dense reward baselines.\", \"comment\": \"We thank the reviewer for the comments, and we would like to clarify a few important misconceptions that the reviewer has regarding our work.\\n1) We position the paper in the context of RL with sparse rewards. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The dynamic goal problem is extended from this setting, not from all other cases. Please see paragraph 3 in Section 1 (Introduction) and paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions. \\n2) We propose a new experience replay method.
The proposed algorithm can be combined with any off-policy RL algorithm, similar to HER, as shown in Figure 1.\\n3) The motivation for developing algorithms that can learn from unshaped reward signals is that they do not need domain-specific knowledge and are applicable in situations where we do not know what admissible behaviour may look like. A similar motivation is also mentioned in HER (Andrychowicz et al., 2017). We also added new experimental results about dense rewards. The results show DHER works better. See Figures 3 and 6.\", \"q1\": \"The algorithm appears very specific and not applicable to all cases with dynamic goals. \\u2026\", \"a1\": \"Please see 1) and 3) above.\", \"q2\": \"I am also wondering if for most practical cases one could construct a heuristic for making the goal trajectory a valid one (not necessarily relying on knowing exact dynamics) thus avoiding the matching step.\", \"a2\": \"It is a good idea to take domain heuristics into consideration. However, in our paper, we aim to construct a model-free method for dynamic goals to avoid the complexity of constructing goal trajectories. We agree that your idea is worth trying in the future.\", \"q3\": \"The literature review and the baselines do not appear to consider any other methods designed for dynamic goals. \\u2026\", \"a3\": \"We do not want to claim that the dynamic goal problem is a fresh problem. However, there is little work addressing dynamic goals in the sparse reward setting. As far as we know, there are no open-source RL environments for such problems. (OpenAI Gym Robotics uses fixed goals.)\", \"q4\": \"I find it difficult to believe that nobody has studied solutions to this problem and solutions specific to that don\\u2019t exist.\", \"a4\": \"Our paper focuses on addressing dynamic goals with sparse rewards. This setting has not been addressed, probably because it is difficult to learn. For example, the recently developed DDPG and HER failed in our tasks. Moreover, there are no open-source environments for dynamic goals and sparse rewards, to the best of our knowledge.\", \"q5\": \"There are several interesting ideas and a new dataset introduced, but I would like to be more convinced that the problems tackled are indeed as hard as the authors claim and to have a better literature review.\", \"a5\": \"Besides sparse rewards, we also added new experimental results about dense rewards for the dynamic goal setting. We obtained similar results. Similar to DDPG and DDPG+HER, DDPG (dense) does not work well in our tasks. For the simple Dy-Snake environment, DQN (dense) is better than DQN but not better than DQN+DHER. See Figures 3 and 6.\"}", "{\"title\": \"Interesting idea but lacking some context and experiments seem to not have any baselines targeted at the problem\", \"review\": \"The authors propose an extension of hindsight replay to settings where the goal is moving. This consists of taking a failed episode and constructing a valid moving goal by searching prior experiences for a compatible goal trajectory. Results are shown on simulated robotic grasping tasks and a toy task introduced by the authors. The authors show improved results compared to other baselines. The authors also show a demonstration of transferring their policies to the real world.\\n\\nThe algorithm appears very specific and not applicable to all cases with dynamic goals. It would be good if the authors discussed when it can and cannot be applied.
My understanding is it would be hard to apply this when the environment changes across episodes, as there need to be matching trajectories. It would also be hard to apply this for the same reason if there are dynamics changing the environment (besides the goal). If the goal were following more complex dynamics, like teleporting from one place to another, it seems it would again be rather hard to adapt this. I am also wondering if for most practical cases one could construct a heuristic for making the goal trajectory a valid one (not necessarily relying on knowing exact dynamics), thus avoiding the matching step.\\n\\nThe literature review and the baselines do not appear to consider any other methods designed for dynamic goals. The paper seems to approach the dynamic goal problem as if it were a fresh problem. It would be good to have a better overview of this field and baselines that address this problem, as it has certainly been studied in robotics, computer vision, and reinforcement learning. I find this paper hard to assess without a more appropriate context for this problem besides a recently proposed technique for sparse rewards that the authors might want to adapt to it. I find it difficult to believe that nobody has studied solutions to this problem and solutions specific to that don\\u2019t exist.\\n\\nThe writing is a bit repetitive at times and I do believe the algorithm can be more tersely summarized earlier in the paper. It\\u2019s difficult to get the full idea from the Algorithm block.\\n\\nOverall, I think the paper is borderline. There are several interesting ideas and a new dataset introduced, but I would like to be more convinced that the problems tackled are indeed as hard as the authors claim and to have a better literature review.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper proposes a way of extending Hindsight Experience Replay (HER) to dynamic or moving goals. The proposed method (DHER) constructs new successful trajectories from pairs of failed trajectories where the goal accomplished at some point in the first trajectory happens to match the desired goal in the second trajectory. The method is demonstrated to work well in several simulated environments, and some qualitative sim2real transfer results to a real robot are also provided.\\n\\nThe paper is well written and is mostly easy to follow. I liked the idea of combining parts of two trajectories, and to the best of my knowledge it is new. It is a simple idea that seems to work well in practice. While DHER has some limitations, I think the key ideas will lead to interesting future work.\\n\\nThe main shortcoming of the paper is that it does not consider other relevant baselines. For example, since the position of the goal is known, why not use a shaped reward as opposed to a sparse reward? The HER paper showed that using sparse rewards with HER can work better than shaped rewards. These findings may or may not transfer to the dynamic goal case, so including a shaped reward baseline would make the paper stronger.\", \"some_questions_and_suggestions_on_how_to_improve_the_paper\": [\"It would be good to be more upfront about the limitations of the method. For example, the results on a real robot probably require accurate localization of the gripper and cup.
Making this work for precise manipulation will probably require end-to-end training from vision, where it\\u2019s not obvious DHER would apply.\", \"It would be interesting to see quantitative results for the simulated experiments in section 4.5.\", \"The performance of DHER on Dy-Reaching seems to degrade in later stages of training (Figures 3a and 5). Do you know what is causing it? DQN or DHER?\", \"Overall, I think this is a good paper.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple and nice idea, but very unclear description and some serious flaws\", \"review\": \"In this paper, the authors extend the HER framework to deal with dynamical goals, i.e. goals that change over time.\\nIn order to do so, they first need to learn a model of the dynamics of the goal, and then to select in the replay buffer experience reaching the expected value of the goal at the expected time. Empirical results are based on three (or four, see the appendix) experiments with a MuJoCo UR10 simulated environment, and one experiment is successfully transferred to a real robot.\\n\\nOverall, the addressed problem is relevant (the question being: how can you efficiently replay experience when the goal is dynamical?), the idea is original and the approach looks sound, but it seems to suffer from a fundamental flaw (see below).\\n\\nDespite some merits, the paper mainly suffers from the fact that the implementation of the approach described above is not explained clearly at all.\\nAmong other things, after reading the paper twice, it is still unclear to me:\\n- how the agent learns of the goal motion (what substrate for such learning, what architecture, how many repetitions of the goal trajectory, how accurate is the learned model...)\\n- how the output of this model is taken as input to infer the desired values of the goal in the future: shall the agent address the goal at the next time step or later in time, how does it search in practice in its replay buffer, etc.\\n\\nThese unclarities are partly due to insufficient structuring of the \\\"methodology\\\" section of the paper, but also to insufficient mastery of scientific English. At many points it is not easy to get what the authors mean, and the paper would definitely benefit from the help of an experienced scientific writer.\\n\\nNote that Figure 1 helps in getting the overall idea, but another figure showing an architecture diagram with the main model variables would help further.\\n\\nIn Figures 3a and 5, we can see that performance decreases. The explanation of the authors just before 4.3.1 seems to imply that there is a fundamental flaw in the algorithm, as this may happen with any other experiment. This is an important weakness of the approach.\\n\\nTo me, Section 4.5 about transfer to a real robot does not bring much, as the authors did nothing specific to favor this transfer. They just tried, and it happens that it works, but I would like to see a discussion of why it works, or to have the authors show me with an ablation study that if they change something in their approach, it does not work any more.\\n\\nIn Section 4.6, the fact that DHER can outperform HER+ is weird: how can a learned model do better than a model given by hand, unless that given model is wrong?
This needs further investigation and discussion.\\n\\nIn more details, a few further remarks:\\n\\nIn related work, twice: you should not replace an accurate enumeration of papers with \\\"and so on\\\".\", \"p3\": \"In contrary, => By contrast,\\n\\nwhich is the same to => same as\\n\\ncompare the above with the static goals => please rephrase\\n\\nIn Algorithm 1, line 26: this is not the algorithm A that you optimize, this is its critic network.\", \"line_15\": \"you search for a trajectory that matches the desired goal. Do you take the first that matches? Do you take all that match, and select the \\\"best\\\" one? If yes, what is the criterion for being the best?\", \"p5\": \"we find such two failed => two such failed\\n\\nthat borrows from the Ej => please rephrase\\n\\nwe assign certain rules to the goals so that they accordingly move => very unclear. What rules? Specified how? Please give a formal description.\\n\\nFor defining the reward, you use s_{t+1} and g_{t+1}, why not s_t and g_t?\", \"p6\": \"the same cell as the food at a certain time step. Which time step? How do you choose?\\n\\nThe caption of Fig. 6 needs to be improved to be contratsed with Fig. 7.\", \"p8\": \"the performance of DQN and DHER is closed => close?\\n\\nDHER quickly acheive(s)\\n\\nBecause the law...environment. => This is not a sentence.\\n\\nMentioning in the appendix a further experiment (dy-sliding) which is not described in the paper is of little use.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HJf9ZhC9FX
Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
[ "Navid Azizan", "Babak Hassibi" ]
Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models---originally developed in the 1990's---and extend them to \emph{general} stochastic mirror descent (SMD) algorithms for \emph{general} loss functions and \emph{nonlinear} models. In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models. We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and \emph{implicit regularization} for over-parameterized linear models (in what is now being called the "interpolating regime"), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called "highly over-parameterized" nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning.
[ "optimization", "stochastic gradient descent", "mirror descent", "implicit regularization", "deep learning theory" ]
https://openreview.net/pdf?id=HJf9ZhC9FX
https://openreview.net/forum?id=HJf9ZhC9FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SyeDH4LElV", "HJgikArTR7", "HyxHcqKsCX", "HJxIZOj5RX", "ryxsZIo507", "HJg0RVi5CQ", "ryxWiLXCh7", "Skg_Iqwq37", "S1gpoI__hQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544999999499, 1543491043119, 1543375501009, 1543317501969, 1543316994572, 1543316693623, 1541449369052, 1541204560453, 1541076644919 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1199/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1199/Authors" ], [ "ICLR.cc/2019/Conference/Paper1199/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1199/Authors" ], [ "ICLR.cc/2019/Conference/Paper1199/Authors" ], [ "ICLR.cc/2019/Conference/Paper1199/Authors" ], [ "ICLR.cc/2019/Conference/Paper1199/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1199/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1199/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The authors give a characterization of stochastic mirror descent (SMD) as a conservation law (17) in terms of the Bregman divergence of the loss. The identity allows the authors to show that SMD converges to the optimal solution of a particular minimax filtering problem. In the special overparametrized linear case, when SMD is simply SGD, the result recovers a recent theorem due to Gunasekar et al. (2018). The consequences for the overparametrized nonlinear case are more speculative.\\n\\nThe main criticisms are around impact, however, I'm inclined to think that any new insight on this problem, especially one that imports results from other areas like control, are useful to incorporate into the literature. \\n\\nI will comment that the discussion of previous work is wholly inadequate. The authors essentially do not engage with previous work, and mostly make throwaway citations. This is a real pity. I would be nice to see better scholarship.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting generalization of older results in control\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their additional comments and have noted that they increased their score. We are not in disagreement with regards to SMD, or the reviewer's clarifying remarks about it. Furthermore, as also mentioned to Reviewer 3, we cannot comment on whether the implicit regularization of SMD is \\\"surprising\\\" or not.\\n\\nHowever, we do regretfully disagree with the reviewer that the paper's contributions are incremental.The reviewer bases this contention on their assertions that the implicit regularization of SMD is not surprising and that, as a machine learning researcher, they cannot appreciate the fundamental identity we show for SMD. We are not sure what to make of this last statement. The fundamental identity we show for SMD---both the local version in Lemma 4 and the global version in Lemma 5---can be regarded as a \\\"defining property\\\" of SMD, in the same way that our Eq (13), or Eqs (3.9) and (3.11) in the reference the reviewer cites, are defining properties of SMD. In other words, the SMD updates can be obtained from the identities in Lemmas 4 and 5, and therefore \\\"define\\\" the SMD updates. 
The advantage of these lemmas, especially Lemma 5, is that they give a \\\"global\\\" interpretation of what SMD does, something that is not apparent---at all---from the defining local optimization of Eq (13) or the explicit update (15). They say something about what SMD is doing, and what quantities it is preserving, which is not directly apparent from (13) or (15). In particular, the identity shows that the sum of D_{Li}(w,w_{i-1}), a certain measure of how well we are predicting the true parameter vector w, is bounded above by the sum of l(v_i), the loss of the noise. For quadratic loss, it upper bounds the energy of the prediction error by the energy of the noise.\\n\\nIn addition to yielding a novel interpretation for SMD, we show the utility of this fundamental identity, both to derive novel results and to obtain more direct proofs of existing ones. We establish the minimax optimality of SMD, which generalizes the H-infinity optimality of SGD for linear models and quadratic loss. (Perhaps this is what the reviewer contends only the robust control community would appreciate. But, even if that were so---like it or not---this is a property of SMD that no other algorithms possess. It can also be interpreted in terms of the robustness of the algorithm in a manner we describe in the paper.) We further use the fundamental identity to give a deterministic proof of convergence for fixed step-size SMD in the over-parametrized case---something that had not been done before---and re-obtain implicit regularization in a very transparent way. The identity also allows us to say quite a bit in the over-parametrized nonlinear case (as happens in deep learning), and we outline this in Section 5.2. (The nonlinear case is currently under further investigation.) We have also used the fundamental identity to give a very direct proof of the stochastic convergence of SMD when the step size is vanishing and satisfies the Robbins-Monro conditions (this has been submitted to another venue).\\n\\nFurther, as mentioned by Reviewer 1, our fundamental identity raises the question of whether such identities can be found for other, perhaps more complicated, algorithms.\\n\\nAll this appears novel to us, and we do not know what any of it has to do with being a machine learning researcher. SMD is used in machine learning and, in our view, new facts about it are expected to be of interest to machine learning researchers and practitioners.\\n\\nWe sincerely appreciate the reviewer's time and effort in reading and evaluating our paper, and we value their comments. However, we had hoped the reviewer's recommendation would be based on objective facts, rather than subjective \\\"surprise\\\" and \\\"appreciation\\\".\"}", "{\"title\": \"Response after authors rebuttals.\", \"comment\": \"I thank the authors for having responded to my questions.\\nThe description of mirror descent provided by the authors is correct. An alternative description of mirror descent is that it is similar to gradient descent with the Bregman divergence induced by the strongly convex potential function. Look at Proposition 3.2 in the following paper, which establishes this equivalence:\\nhttps://web.iem.technion.ac.il/images/user-files/becka/papers/3.pdf\\nWith this viewpoint, I remarked (in my official review) that the implicit regularization property of the SMD algorithm is not surprising.\\n\\nWhile the fundamental identity the authors prove is interesting for the robust control community, I, as a machine learning researcher, find it hard to appreciate this result.
Modulo these, the contributions are very incremental. For this reason, I cannot recommend a strong acceptance.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer for their feedback and for acknowledging the positive aspects of our work. Our responses to the reviewer\\u2019s comments follow.\\n \\n>> (1) Several results are extended from existing literature. For example, Lemma 1 and Theorem 3 have analogues in (Hassibi et al. 1996). Proposition 8 is recently derived in (Gunasekar et al., 2018). Therefore, it seems that this paper has some incremental nature. I am not sure whether the contribution is sufficient enough.<< \\n\\nAs mentioned in the paper, our results differ from these results in several aspects.\\nThe results on the fundamental identity and minimax optimality, e.g. (Hassibi et al., 1996; Kivinen et al., 2006), had never been shown in this generality, i.e., for general potential functions, general loss functions, and general models. In fact, it was not clear how to extend the results. The key insight here is that one needs to consider the Bregman divergence of the loss function. \\n\\nWhile an equivalent form of Proposition 8 has been shown in (Gunasekar et al., 2018), we would like to point out that (1) this result naturally follows from our fundamental identity, and (2) our approach readily proves (deterministic) convergence, too. (Gunasekar et al., 2018) just focuses on the KKT conditions after convergence has happened. Their approach does not allow the study of convergence.\\n \\n>> (2) The authors say that they show the convergence of SMD in Proposition 9, while (Gunasekar et al., 2018) does not. It seems that the convergence may not be surprising since the interpolating case is considered there.<< \\n\\nWe cannot comment on whether convergence in the interpolating case is \\u201csurprising\\u201d or not. What we can comment on is that proving the convergence of SGD with a fixed step size, even in the interpolating case, is not trivial. In fact, (Gunasekar et al., 2018) considered the linear interpolating case too; but to the best of our knowledge, there is no result about convergence in their paper. We further give conditions on the loss function (such as convexity, and even quasi-convexity) for SMD to converge in the linear interpolating case.\\n \\n>> (3) Implicit regularization is only studied in the over-parameterized case. Is it possible to say something in the general setting with noises?<< \\n\\nThe setting where the model is not over-parameterized and there is noise is not as simple. As we mention in the paper, when the model is not over-parameterized, SGD (or SMD) with a fixed step size cannot converge. Therefore one cannot speak of implicit regularization when convergence does not happen. \\n\\nOf course, one can get convergence if the step size is allowed to vanish to zero. In this case, convergence is not surprising, since with a vanishing step size one essentially stops updating the solution after a while. What is more interesting is what one converges to. In work that has been submitted to another venue, we have used the same fundamental identity to show that for i.i.d. noise, SGD and SMD converge to the \\u201ctrue\\u201d parameter vector, provided the vanishing step size satisfies the so-called Robbins-Monro conditions.
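For completeness (these are the standard conditions, not specific to the paper), the Robbins-Monro conditions require the step sizes $\\eta_i$ to satisfy

$$\\sum_{i=1}^{\\infty} \\eta_i = \\infty, \\qquad \\sum_{i=1}^{\\infty} \\eta_i^2 < \\infty,$$

which holds, e.g., for $\\eta_i = c/i$ with a constant $c > 0$.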
Our proof is very simple and direct and avoids ergodic averaging or appealing to stochastic differential equations, which is how the customary proofs go.\\n \\n>> (4) The discussion on the implicit regularization for over-parameterized case is a bit intuitive and based on strong assumptions, e.g., the first iterate is close to the solution set. It would be more interesting to present a more rigorous analysis with relaxed assumptions.<< \\n\\nWhile it would be nice---per the reviewer\\u2019s suggestion---to be able to prove convergence without the strong assumption that w_0 be close to the solution set, this may be a bit too ambitious and we are not sure how it can be done---or whether the statement is even true. We should reiterate our belief that this assumption is perhaps not too unrealistic in the highly over-parametrized case, because when the parameters are initialized at random around zero, w.h.p., the initial point will be close to the solution set (which is a very high-dimensional manifold). We have significantly expanded our discussions of the highly over-parametrized nonlinear case in Sec 5.2., with the hope of making the arguments more clear, all while acknowledging the fact that they are somewhat heuristic in nature.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"We thank the reviewer for their constructive feedback and for acknowledging the pros of the work. With respect to the two cons mentioned by the reviewer, we would like to make the following points.\\n \\n>> 1. The notion of optimality is w.r.t. a metric that is pretty non-standard and it was not clear to me as to why the metric is important to study (the ratio metric in eq 9).<< \\n\\nWhile the metric in (9) may be unfamiliar to the learning community, it is known in the estimation theory and control literature, and is in fact the H^{\\\\infty} norm (maximum energy gain) of the transfer operator that maps the unknown disturbances to the prediction errors. H^{\\\\infty} theory was developed to allow the design of estimators and controllers that were robust to model and disturbance uncertainty. There are connections to online learning (that have not yet been fully explored) and we remark on this in the footnote of Section 3.2. Furthermore, extending the minimax optimality results of (Hassibi et al 1996) and (Kivinen et al 2006) to general loss functions and nonlinear models had remained open and our paper shows that the correct way to formulate the minimax problem is through the Bregman divergence of the loss. Finally, the minimax optimality results of SGD and SMD can be regarded as the global defining properties of these algorithms. They are usually defined through some local optimization and/or update and it is not clear what they are doing globally---whether they are optimizing anything globally. Our results show what it is that they globally optimize.\\n \\n>> 2. The result is not very surprising since SMD is pretty much a gradient descent w.r.t a different distance metric.<< \\n\\nStochastic mirror descent (SMD) is a popular family of algorithms, which includes stochastic gradient descent (SGD) as a special case (when the potential function is the squared l2 norm), and has been studied in many papers, e.g. (Nemirovskii et al., 1983; Beck & Teboulle, 2003; Cesa-Bianchi et al., 2012; Zhou et al., 2017; Zhang and He, 2018; etc.). 
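For concreteness, a standard way of writing the SMD recursion (stated here for the reader's convenience; the notation need not match the paper's Eq. (15) exactly) is: given a strongly convex potential \\psi and instantaneous loss L_i, the update is \\nabla\\psi(w_i) = \\nabla\\psi(w_{i-1}) - \\eta\\nabla L_i(w_{i-1}), which reduces to the familiar SGD update w_i = w_{i-1} - \\eta\\nabla L_i(w_{i-1}) when \\psi(w) = \\frac{1}{2}\\|w\\|^2. 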
While each step of SMD can be viewed as transforming the variable $w$ with a mirror map, to $\\nabla\\psi(w)$, and adding the instantaneous gradient update to that variable, the updates are NOT the gradient with respect to that new variable, and therefore, it is not \\u201cgradient descent w.r.t. a different metric.\\u201d In fact, when the step size is very small, one can show that SMD updates the $w$ vector, not by the instantaneous gradient, but rather by the product of the inverse Hessian of the potential and the instantaneous gradient. \\n\\nFinally, for clarity, we would like to summarize the contributions of this work:\\n\\n1. We show that there exists a \\u201cfundamental identity\\u201d (i.e., a conservation law) which holds for SMD (and SGD) under very general conditions.\\n\\n2. Using this identity, we show that, for general nonlinear models and general loss functions, when the step size is sufficiently small, SMD (and SGD) are the optimal solution of a certain minimax filtering problem. This generalizes several results from the robust control theory literature, e.g., (Hassibi et al., 1994; Kivinen et al., 2006).\\n\\n3. We show that many properties recently proven in the literature, such as the \\u201cimplicit regularization\\u201d of SMD (and SGD) in the over-parameterized linear case---when convergence happens---(Gunasekar et al., 2018), naturally follow from this theory. The theory also allows us to establish new results, such as the convergence (in a deterministic sense) of SMD (and SGD) in the over-parameterized linear case.\\n\\n4. We finally also use the theory developed in this paper to provide some speculative arguments as to why SMD (and SGD) may have similar convergence and implicit regularization properties in the so-called ``highly over-parameterized'' nonlinear setting common to deep learning.\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"We thank the reviewer for their supportive feedback and comments. We agree that it would be nice to see whether invariant relationships of the type we have found for SMD were to hold for more complicated iterative algorithms---we are currently investigating this. To the best of our abilities, we have made every attempt to clarify the paper and add more explanations and detailed discussions (as permitted by the page limitation). We hope this removes the barriers to a higher score. Now to the specific comments:\\n \\n>> 1. Can the authors explain how is the minimax optimality result of Theorem 6 (and Corollary 7) related to the main result of the paper which is probably Proposition 8 and and 9? Is that minimax optimiality a different insight separate from the main line of the arguments (which I believe is Proposition 8 and 9)?<< \\n\\nYes, we consider the minimax optimality (Theorem 6) as a separate insight. It gives a new interpretation to SMD and shows the manner in which it is robust to uncertainty about the true parameter vector and the model of the noise sequence. It derives from the same identity (18), and extends known results in the estimation theory literature (e.g., Hassibi et al., 1996; Kivinen et al., 2006) to general SMD algorithms with general potential and general loss.\\n \\n>> 2. Is the gain in Proposition 9 over Proposition 8 is all about using loss convexity to ensure that the SMD converges and w_\\\\infty exists?<< \\n\\nYes, that is correct.\\n \\n>> 3. 
The paper has highly insufficient comparisons to many recent other papers on the idea of \\\"implicit bias\\\" like, https://arxiv.org/abs/1802.08246, https://arxiv.org/abs/1806.00468 and https://arxiv.org/abs/1710.10345. It seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes.<<\\n\\nThank you for pointing out the above references---we have added them all. We also provide a brief comparison to our results (see Sec 1.1 \\u201cOur Contributions\\u201d, as well as the discussion below Proposition 8). The main difference is that our techniques allow a (deterministic) proof of convergence of SMD for the regression problem, which was not given in prior papers (implicit regularization was shown if convergence happens).\"}", "{\"title\": \"Very insightful paper but some essential details are missing.\", \"review\": \"This is a very interesting paper and it suggests a novel way to think of \\\"implicit regularization\\\". The power of this paper lies in its simplicity and its inspiring that such almost-easy arguments could be made to get so much insight. It suggests that minimizers of the Bregrman divergence are an alternative characterization of the asymptotic end-points of \\\"Stochastic Mirror Descent\\\" (SMD) when it converges. So choice of the strongly convex potential function in SMD is itself a regularizer!\\n\\nIts a very timely paper given the increasing consensus that \\\"implicit regularization\\\" is what drives a lot of deep-learning heuristics. This paper at its technical core suggests a modified notion of Bregman-like divergence (equation 15) which on its own does not need a strongly convex potential. Then the paper goes on to show that there is an invariant of the iterations of SMD along its iterations which involves a certain relationship (equation 18) between the usual Bregman divergence and their modified divergence. I am eager to see if such relationships can be shown to hold for more complicated iterative algorithms! \\n\\nBut there are a few points in the paper which are not clear and probably need more explanation and let me list them here. ( and these are the issues that prevent me from giving this paper a very high rating despite my initial enthusiasm )\\n\\n1. \\nCan the authors explain how is the minimax optimality result of Theorem 6 (and Corollary 7) related to the main result of the paper which is probably Proposition 8 and and 9? Is that minimax optimiality a different insight separate from the main line of the arguments (which I believe is Proposition 8 and 9)? \\n\\n2.\\nIs the gain in Proposition 9 over Proposition 8 is all about using loss convexity to ensure that the SMD converges and w_\\\\infty exists? \\n\\n3. \\nThe paper has highly insufficient comparisons to many recent other papers on the idea of \\\"implicit bias\\\" like, https://arxiv.org/abs/1802.08246, https://arxiv.org/abs/1806.00468 and https://arxiv.org/abs/1710.10345. It seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Minimax optimality results are proven for SGD and SMD. 
These results demonstrate implicit regularization properties of these algorithms even when the models are trained without explicit regularization\", \"review\": \"The authors look at SGD, and SMD updates applied to various models and loss functions. They derive a fundamental identity lemma 2 for the case of linear model and squared loss + SGD and in general for non-linear models+ SMD + non squared loss functions. The main results shown are\\n1. SGD is optimal in a certain sense for squared loss and linear model.\\n2. SGD always converges to a solution closest to the starting point.\\n3. SMD when it converges, converges to a point closest to the starting point in the bregman divergence. The convergence of SMD iterates is shown for certain learning scenarios.\", \"pros\": \"Shows implicit regularization properties for models beyond linear case.\", \"cons\": \"1. The notion of optimality is w.r.t. a metric that is pretty non-standard and it was not clear to me as to why the metric is important to study (the ratio metric in eq 9).\\n2. The result is not very surprising since SMD is pretty much a gradient descent w.r.t a different distance metric.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"a nice attempt to study implicit regularization of SGD but not sure whether the contribution is sufficient\", \"review\": \"Optimization algorithms such as stochastic gradient descent (SGD) and stochastic mirror descent (SMD) have found wide applications in training deep neural networks. In this paper the authors provide some theoretical studies to understand why SGD/SMD can produce a solution with good generalization performance when applied to high-parameterized models. The authors developed a fundamental identity for SGD with least squares loss function, based on which the minimax optimality of SGD is established, meaning that SGD chooses the best estimator that safeguards against the worst-case disturbance. Implicit regularization of SGD is also established in the interpolating case, meaning that SGD iterates converge to the one with minimal distance to the starting point in the set of models with no errors. Results are then extended to SMD with general loss functions.\", \"comments\": \"(1) Several results are extended from existing literature. For example, Lemma 1 and Theorem 3 have analogues in (Hassibi et al. 1996). Proposition 8 is recently derived in (Gunasekar et al., 2018). Therefore, it seems that this paper has some incremental nature. I am not sure whether the contribution is sufficient enough.\\n\\n(2) The authors say that they show the convergence of SMD in Proposition 9, while (Gunasekar et al., 2018) does not. It seems that the convergence may not be surprising since the interpolating case is considered there.\\n\\n(3) Implicit regularization is only studied in the over-parameterized case. Is it possible to say something in the general setting with noises?\\n\\n(4) The discussion on the implicit regularization for over-parameterized case is a bit intuitive and based on strong assumptions, e.g., the first iterate is close to the solution set. It would be more interesting to present a more rigorous analysis with relaxed assumptions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
H1lqZhRcFm
Unsupervised Learning of the Set of Local Maxima
[ "Lior Wolf", "Sagie Benaim", "Tomer Galanti" ]
This paper describes a new form of unsupervised learning, whose input is a set of unlabeled points that are assumed to be local maxima of an unknown value function $v$ in an unknown subset of the vector space. Two functions are learned: (i) a set indicator $c$, which is a binary classifier, and (ii) a comparator function $h$ that, given two nearby samples, predicts which sample has the higher value of the unknown function $v$. Loss terms are used to ensure that all training samples $\vx$ are local maxima of $v$, according to $h$, and satisfy $c(\vx)=1$. Therefore, $c$ and $h$ provide training signals to each other: a point $\vx'$ in the vicinity of $\vx$ satisfies $c(\vx')=-1$ or is deemed by $h$ to be lower in value than $\vx$. We present an algorithm, show an example where it is more efficient to use local maxima as an indicator function than to employ conventional classification, and derive a suitable generalization bound. Our experiments show that the method is able to outperform one-class classification algorithms in the task of anomaly detection and also provide an additional signal that is extracted in a completely unsupervised way.
[ "Unsupervised Learning", "One-class Classification", "Multi-player Optimization" ]
https://openreview.net/pdf?id=H1lqZhRcFm
https://openreview.net/forum?id=H1lqZhRcFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxfXfTRkV", "BJle8y2LJ4", "SJleu-jZyN", "HygAnf8lyN", "ByeENYjyCX", "B1xnM_vyAX", "ryxp2idFpQ", "B1l5BauD6X", "S1lMMR0W6Q", "HyeP682Wa7", "rJl3jlGJ6X", "BJe4X2ykpm", "BJgPwCiuhQ", "S1gaKy9P37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1544634905583, 1544105799829, 1543774567574, 1543688886398, 1542596907681, 1542580243719, 1542192052876, 1542061377593, 1541692937815, 1541682879092, 1541509283801, 1541499931933, 1541090910762, 1541017476563 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1198/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer2" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1198/Authors" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1198/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new unsupervised learning scheme via utilizing local maxima as an indicator function.\\n\\nThe reviewers and AC note the novelty of this paper and good empirical justifications. Hence, AC decided to recommend acceptance.\\n\\nHowever, AC thinks the readability of the paper can be improved.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel work\"}", "{\"title\": \"Golan and El-Yaniv, NIPS 2018\", \"comment\": \"Thank you very much for pointing us to the NIPS 2018 work by Golan and El-Yaniv, which we will happily include in our next version.\\n\\nWe completely agree with AnonReviewer2 that the two methods are different in their scope and orthogonal in their contributions and are working on combining both methods. It will take more than a few days, since the implementations of the two methods were written in different frameworks.\"}", "{\"title\": \"Sticking to original rating\", \"comment\": \"Thank you for pointing out this really interesting work.\\n\\nI am aware of this paper, and don't view it as in any sense - reducing quality of the paper under review, and as a reviewer - I am sticking to the currently assigned rating (8).\\n\\nWhile it might be interesting to point readers to the NIPS work in this paper, they are completely incomparable contributions. NIPS work is an image specific method, which focuses on data augmentation (to be more precise: enforced predefined geometrical transformation invariance), while paper under review is a generic scheme which happens to be applicable to one-class classification. Both methods seem orthogonal, and would be great to see them combined in some future work.\"}", "{\"comment\": \"Your results for one-class classification in Table 1 for CIFAR-10 are significantly inferior to the state-of-the-art. 
See the following NIPS paper: http://papers.nips.cc/paper/8183-deep-anomaly-detection-using-geometric-transformations.pdf\\nTable 1 (page 9) in that paper shows outstandingly better results. Moreover, the performance of their algorithm is better for each and every class. Your average AUC for all CIFAR-10 experiments is 69.8, and the best known average AUC in that paper is 86.0.\\nGiven these results, and the very high ratings of your paper, it is crucial to include the best known numbers (in the NIPS paper) in your Table 1.\\nWe doubt that the reviewers would have given your paper such high ratings had they known about the state-of-the-art.\", \"title\": \"There are already results superior to yours for one-class classification\"}", "{\"title\": \"Thanks for revision\", \"comment\": \"Thank you for updating the paper and responding to my questions. With respect to your response, I am fine with the current version. Since the revised paper has significantly improved, I have decided to increase the score to 8, and I believe it is really solid work.\"}", "{\"title\": \"Thanks for revision\", \"comment\": \"Thank you for updating the paper and providing the missing information. Wrt. point 2, I am fine with the current formulation. I find the empirical results on the lack of mode hopping intriguing, and would strongly suggest taking a deeper look into this phenomenon in the future. For the time being, I am increasing the score to 8, as the paper presentation (and results) significantly improved, and I believe it is really solid work.\"}", "{\"title\": \"Thank you for your support and the insightful comments\", \"comment\": \"Thank you very much for the supportive and very detailed review.\\n\\nYou suggest repositioning the paper as a density estimation problem. After much consideration we decided that a more conservative approach, in which we leave the current presentation and add the new viewpoint, would serve us better at this point. Your exciting perspective is now added to the introduction and we have already received positive feedback on it from AnonReviewer3. \\n\\nFollowing your suggestion, we have moved the theoretical part to the appendix. One small remark -- going forward, and applying the dual model beyond unsupervised learning, we expect h to become more dominant than c. For example, we are exploring an event detection model where the events occur at the local maxima of h, in regions that are defined by c.\\n\\nReviewer: Multi-agent systems cannot be optimized with independent gradient descent in general (convergence guarantees are lost). Consequently many papers focus on methods that bring these properties back (e.g. Consensus Optimization or Symplectic Gradient Ascent). It would be beneficial for the reader to spend some time discussing stability of the system proposed, even if only empirically and on small problems.\\n\\nAnswer: Following the review, we became familiar with the field of convergence of multi-agent systems. Thank you for pointing us in this direction. Our method could benefit in the future from the increased stability and theoretical guarantees one can obtain with these emerging methods.\\n\\nAs requested, we tried to evaluate this empirically. We took the example from [1] of a mixture of 16 Gaussians that are placed on a 4x4 grid and applied our method, as well as variations in which we trained only c or only h. Since our method is meant to model local maxima and not entire high-probability regions, we take a standard deviation that is ten times smaller than previous work. 
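To make the toy setup concrete, a minimal sketch of such a data generator is given below (Python; the grid spacing and the exact standard deviation are illustrative placeholders rather than the precise values used in our experiments):

import numpy as np

def sample_grid_mixture(n, grid=4, spacing=2.0, std=0.02, seed=0):
    # means of the 16 Gaussians, placed on a grid x grid lattice
    rng = np.random.default_rng(seed)
    means = np.array([[i * spacing, j * spacing]
                      for i in range(grid) for j in range(grid)])
    # pick a mode uniformly at random for each sample
    idx = rng.integers(len(means), size=n)
    # add isotropic Gaussian noise around the chosen mode
    return means[idx] + std * rng.normal(size=(n, 2))
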
These results, which can be found in the latest revised version, indicate that when jointly training c and h, the former captures all the 16 modes, and h is also informative. When training each alone, training results in mode hopping.\\n\\n[1] D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel. The Mechanics of n-Player Differentiable Games. ICML, 2018.\\n\\nTo the other comments:\\n\\n1. The \\\\cdot was added to Eq. 1.\\n\\n2. Trying to add the parameters dependencies in Eq. 1 and 2 resulted in a cumbersome formulation. We therefore chose to address the dependencies with added text. Please let us know if you still prefer that we separate the equations.\\n\\n3. A three-player game is explored in two ways in the ablation of Tab. 2: (i) In the lines that say \\u201cwith G_c only\\u201d we use only G_c to generate negative points for both c and h and report results for both of these functions, and (ii) Same for \\u201cwith G_h only\\u201d, where G_h was used to generate negative points for both networks. We altered the text to better reflect this.\\n\\n4. We have added standard deviations to Tab. 1, similarly to the paper from which the baselines were taken. The results reported were already averaged over multiple runs.\\n\\n5. Expectations rather than sums -- Following the suggestion, we have replaced the sums with averages. Writing the equations as expectations would require the addition of slightly more terminology and we wish to avoid this. Note that while SGD is indeed used, every step of Alg. 1 is over all samples of the training set (since the training sets are small).\"}", "{\"title\": \"Thank you for the constructive feedback\", \"comment\": \"Thank you for pointing us in the direction of clustering.\\n\\nFollowing the review, we have submitted a new revision, in which the following changes were made:\\n\\n1. As requested, we clarified appendix A.\\n\\n2. Following the review, we considered the question of the uniqueness of the value function in the setting of Thm. 1. Two results were added: (i) a new part of the theorem showing that 2m-1 is the minimal number of hidden neurons required, and (ii) a discussion that stems from the proof of part (i) on the uniqueness of the value function in the case of the theorem.\"}", "{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you very much for your comments.\\n\\nIt is true that c and h are trained concurrently and that the training algorithm, presented as Algorithm 1, is almost symmetric between the two. However, the two networks differ for multiple reasons: (i) The structure of the two functions is different: c has one input, and h has two, and (ii) The loss is different: G_h, which is the network that generates negative points for h, generates points G_h(x) that are in the vicinity of point x.\\nThese two differences are enough to ensure that h and c take different roles: c is what AnonReviewer2 calls a characteristic function (does x belong to the set), and h is a comparator of nearby points.\\n\\nWhen there are multiple aspects that define the given set of input points, e.g., class membership and quality, c and h would assume the role that fits their structure, and not a random role. \\n\\nIn addition, due to their loss, h and c strive to become anti-correlated, which further pushes them to take different roles. 
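To make this asymmetry concrete, here is a toy sketch of the two interfaces (linear score functions for illustration only; the names and shapes are ours, not the exact networks from the paper):

import numpy as np

rng = np.random.default_rng(0)
d = 8                          # input dimension (illustrative)
w_c = rng.normal(size=d)       # parameters of the set classifier c
w_h = rng.normal(size=2 * d)   # parameters of the comparator h

def c(x):
    # one input: does x belong to the set? (sign of a score)
    return np.sign(w_c @ x)

def h(x, x_prime):
    # two inputs: is the value of x higher than that of x_prime?
    return np.sign(w_h @ np.concatenate([x, x_prime]))
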
As mentioned, these roles are not arbitrary but depend on the structure of the two functions.\\n\\nIn the revision we uploaded earlier today, we put an additional emphasis on this asymmetry.\\n\\nTo your question #1:\\n\\nWe use either c or h based on our goal. If, for the image experiments, our goal is to detect out-of-class samples, we use c. If our goal is to detect low quality images, we use h. In the cancer dataset experiment, h is more suitable for predicting the continuous value of survival we are interested in. A hypothetical scenario in which h and c play a different role in drug discovery is mentioned, for illustration, at the end of the discussion section.\\n\\nTo your question #2: \\n\\nThe results in Tab. 3 are reported for multiple experiments, which are given side by side for brevity. In the columns of the experiment \\u201c(i) class membership\\u201d we evaluate the typical one-class classification scenario, for which c is suitable. \\n\\nIn the other two scenarios, we test images from the training class vs. noisy images. In the experiment \\u201c(ii) Noise in-class\\u201d we evaluate the ability of each learned method to discriminate between images that are similar to those in the training set and images that are noisy versions of it. In this task, which is based on image quality, h, as a comparator, is more suitable. \\n\\nTo see why this is the case, consider the training of h, during which points x are compared with generated points x\\u2019 in the vicinity of x. Since the training points x are obtained from a set of real-world training images, they are likely to be of higher quality than the generated nearby points.\"}", "{\"title\": \"Thank you for the constructive dialogue\", \"comment\": \"We are grateful for the prompt response and the new comments. In response, we have revised the manuscript, incorporating all of the suggestions to the best of our ability.\\n\\nYour understanding of the manuscript is correct. We have clarified the motivating examples and have added the one-class classification application to the introduction.\\n\\nThe changes are marked in red. The new revision also addresses the writing concerns of the other reviewers (we will address these reviews separately).\"}", "{\"title\": \"Unsupervised Learning of the Set of Local Maxima\", \"review\": \"In this paper, the authors focus on the task of learning the value function and the constraints in the unsupervised case. Different from the conventional classification-based approach, the proposed algorithm uses local maxima as an indicator function. The functions c and h and two corresponding generators are trained in an adversarial way. Besides, the authors show that the proposed algorithm is more efficient than the conventional classification-based approach, and a suitable generalization bound is given. Overall, this work is theoretically complete and experimentally sufficient.\\n1.\\tThe trained c and h give different predictions in most cases. As an unsupervised method, how to deal with them?\\n2.\\tIn Table 3, why can h achieve better results when adding noise?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"definitions\", \"comment\": \"It is always the authors\\u2019 responsibility to ensure that the readers understand their work. To maximize the probability that our work would be well understood, we have collected feedback from quite a few readers. 
Yet it seems that there is still room for improvement.\\n\\nWhile taking responsibility for this, we respectfully disagree with the claim of the reviewer that \\u201cthere is no formal problem definition or statement, and the notions and terminologies in this paper are not properly defined or introduced.\\u201c As can be seen, we define the problem we study multiple times: (i) it is defined clearly in the abstract (input, goal, which functions are learned, why, and how). (ii) it is defined again at the end of the introduction in the first three paragraphs of page 2. (iii) it is redefined again at the beginning of Sec. 3, since we were worried that some readers would skip the abstract and the introduction.\\n\\nParagraphs 1-4 motivate our methods, by showing sample sets that arise in biology, man-made constructs, and weights of neural networks. The underlying value function in each case is explained: fitness or energetic efficiency in biology, an implicit value function in architecture (we mention a few possible factors), and an engineered loss in machine learning. \\n\\nIt seems that the reviewer was confused by the last example since it discusses machine learning. However, the paragraph merely describes a process that generates unsupervised samples that are the result of a local optimization process. The implication is that similarly to the other examples, viewing the learned weights of each random initialization as points in a vector space, this set of vectors is a suitable input to our method. \\n\\nIt is emphasized in the abstract and then in the introduction that the value function is learned and that the local maxima are of that unknown, to-be-learned function. The paper starts with \\u201c[the] input is a set of unlabeled points that are assumed to be *local maxima of an unknown value function* in an unknown subset of the vector space\\u201d. \\n\\nThe reviewer states that we discuss local maxima without stating the optimization problem. However, the local maxima we consider are of a function we seek to learn, not of an optimization problem. The notion of local maxima is discussed in the abstract, as it is actually applied: we learn a function h that compares the value of two points, and a local maximum is a point x such that every point x\\u2019 in the vicinity of x satisfies c(x\\u2019) = -1 or is deemed by h to be lower in value than x.\\n\\nThe notion of local maxima is also clearly defined in the intro: \\u201cIn addition, we also consider a value function v, and for every point x', such that ||x' \\u2212 x|| < eps, for a sufficiently small eps > 0, we have: v(x') < v(x)\\u201d. In practice, as mentioned early on in Sec. 3, and as is well motivated by the ambiguity of v, we learn a comparator function h and not v.\\n\\nThe reviewer says that \\u201cIt is not properly defined what is x and how to obtain it\\u201d. The points x are the training samples and the definition of x is also given multiple times: \\n(1) The abstract says \\u201call training samples x\\u201d. \\n(2) The introduction says that the points x are in the set S, which is defined as \\u201cLet S be the set of such samples from a space X\\u201d. The word \\u201csuch\\u201d clearly refers in this context to unlabeled training samples. \\n(3) This is repeated one paragraph below, at the beginning of Sec. 2, \\u201cThe input to our method is a set of unlabeled points.\\u201d \\n(4) As mentioned, we redefine x and the other concepts as soon as Sec. 3 starts, to make sure that all readers are aware of the setting. 
\\u201cRecall that S is the set of unlabeled training samples, and that we seek two functions c and v such that for all x \\\\in S it holds that: (i) c(x) = 1, and (ii) x is a local maxima of v.\\u201d By \\u201cseek\\u201d we mean learn, but since it is not the first time this is stated in the paper (even the previous paragraph mentions that the value function is learned), we used a different word.\\n\\nThe reviewer says that \\u201cThe motivation and intuition behind the formulations in (1) and (2) are hard to follow, perhaps because the goal and objective of the paper is unclear. \\u201c. However, the terms of both equations are discussed one by one below them. These explanations are directly tied to the goals and objectives that appear earlier in the paper:\\n(1) In the abstract: \\u201cLoss terms are used to ensure that all training samples x are a local maxima, according to h and satisfy c(x) = 1. Therefore, c and h provide training signals to each other: a point x\\u2019 in the vicinity of x satisfies c(x\\u2019) = \\u22121 or is deemed by h to be lower in value than x. \\u201c\\n(2) In the intro: \\u201cThis structure leads to a co-training of v and c, such that every point x\\u2019 in the vicinity of x can be used either to apply the constraint v(x\\u2019) < v(x) on v, or as a negative training sample for c. Which constraint to apply, depends on the other function: if c(x\\u2019) = 1, then the first constraint applies; if v(x\\u2019) >= v(x), then x\\u2019 is a negative sample for c\\u201d.\"}", "{\"title\": \"Interesting idea of casting one-class classification/set beloning problem onto 4 player game\", \"review\": [\"This paper describes a new form of one-class/set beloning learning, based on definition of 4 player game:\", \"Classifier player (c), which is a typical one-class classifier model\", \"Comparator player (h), which given two instances answers if first is \\\"not smaller\\\" (wrt. set belonging) than the other\", \"Classifier adversary player (Gc), which tries to produce hard to distinguish samples for (c)\", \"Comparator adversary player (Gh), which tries to produce hard to classify samples for (h)\", \"This way authors end up with cooperative-competitive game, where c and h act cooperatively to solve the problem, while Gc and Gh constantly try to \\\"beat\\\" them.\", \"Overall I find this paper to be interesting and worth presenting, however I strongly encourage authors to rethink the way story is presented so that it is more approachable by people who do not have much experience with viewing typical classification problems as games. In particular, one could completely avoid talking about \\\"sets of local maxima\\\" and just talk about the density estimation problem, with c being characteristic function (of belonging to the support) and h being comparator of the pdf.\"], \"strong_points\": [\"Novel, multi-agent in nature, approach to one-class classification\", \"Proposed method build a complex system, which can be used in much wider class of problems than just classification (due to joint optimisation of classifier and comparator)\", \"Extensive evaluation on 4 problems\", \"Nice ablation study showing that most of the benefits come from pure c/Gc game (on average 68.8% acc vs 65.2% of just c, and 69.8% of entire system) but that h/Gh players do indeed still improve (an extra 1%). It might be interesting to investigate what exactly changed in c due to existance of h in training. 
Are there any identifiable properties of the model that can now be analysed?\"], \"weak_points\": \"In general I believe that theoretical analysis is the weakest part of the paper, and while interesting - it is actually a minor point, and shows interesting properties, but not the ones that would guarantee anything in \\\"practical setup\\\". I would suggest \\\"downplaying\\\" this part of the paper, maybe moving most of it to the appendix.\", \"to_be_more_specific\": [\"Theorem 1 shows that representation can be more compact, however existance of compactness does not rely imply that this particular solution can ever be learned or that it is a good thing (number of parameters is not correlated with generalisation capabilities of the model).\", \"Lemma 1 seems a bit redundant for the story. While it is nice to be able to show generalisation bounds in general, this paper is not really introducing new class of models (since in the end c is going to be used for actual classification), but rather training regime, and generalisations bounds do not tell us anything about the emerging dynamical system. The fact that adding v does not constrain c too much seems quite obvious, and as a result I would suggest moving this section to appendix.\", \"Instead, if possible, the actual tricky mathematical bit for methods like this would be, in reviewers opinion, any analysis of learning dynamics of the system like this. Multi-agent systems cannot be optimised with independent gradient descent in general (convergence guarantees are lost). Consequently many papers focus on methods that bring these properties back (e.g. Consensus Optimisation or Symplectic Gradient Ascent). It would be beneficial for the reader to spend some time discussing stability of the system proposed, even if only empirically and on small problems.\"], \"other_remarks\": [\"eq. (1) is missing \\\\cdot\", \"it could be useful to include explicit parameters dependences in (1) and (2) so that one sees how losses really define asymmetric game between the players\", \"why do we need 4 players and not just 3, with Gc and Gh being a single player/neural network? can we consider this as another ablation?\", \"given small performance gaps in Table 1 can we get error estimates/confidence intervals there? Deep SVDD paper includes error estimates of the baseline methods\", \"since training is performed in mini batch (it does not have to be decomposible over samples) shouldn't equations be based on expectations rather than sums?\", \"-\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"hard to follow\", \"review\": \"The reviewer feels that the paper is hard to follow. The abstract is confusing enough and raises a number of questions. The paper talks about `\\\"local maxima\\\" without defining an optimization problem. What is the optimization problem are we talking about? Is it a maximization problem or minimization problem? If we are dealing with a minimization problem, why do we care about maxima?\\n\\nThe first several paragraphs did not make the problem of interest clearer. But at least the fourth paragraph starts talking about training networks (the reviewer guesses this \\\"network\\\" refers to neural network, not other types network (e.g., Bayesian network) arising in machine learning). 
This paragraph talks about random initialization for minimizing a loss function; does this mean we are considering a minimization problem's local maxima? In addition, random initialization-based neural network training algorithms like back propagation cannot guarantee giving local maxima or local minima of the problem of interest (which is the loss function for training). It is even not clear if a stationary point can be achieved. So if the method in this paper wishes to work with local maxima of an optimization problem, this may not be a proper example.\\n\\nThe next paragraph brings out a notion of value function, and it is hard to follow what it is. A suggestion is to give a much more concrete example to enlighten the readers.\\n\\nThe next two paragraphs seem to be very disconnected. It is not properly defined what is x and how to obtain it. If they are local maxima of a problem, please give us an example: what is the optimization problem, and why is this an interesting setup?\\n\\nSince the problem setup of this paper is very hard to decode, it is also very hard to appreciate why the papers in the \\\"related work\\\" section are really related.\\n\\nThe motivation and intuition behind the formulations in (1) and (2) are hard to follow, perhaps because the goal and objective of the paper is unclear.\\n\\nOverall, there is no formal problem definition or statement, and the notions and terminologies in this paper are not properly defined or introduced. This makes evaluating this work very hard.\\n\\n\\n========= after author feedback =======\\nAfter discussing with the authors through OpenReview, the reviewer feels that a lot of things have been clarified. The paper is interesting in its setting, and seems to be useful in different applications. The clarity can still be improved, but this might be more of a style matter. The analysis part is a bit heavy and overwhelming and not very insightful at this moment. Overall, the reviewer appreciates the effort put into improving the readability of the paper and would like to change the recommendation to accept.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
B1e9W3AqFX
Multi-task Learning with Gradient Communication
[ "Pengfei Liu", "Xuanjing Huang" ]
In this paper, we describe a general framework to systematically analyze current neural models for multi-task learning, in which we find that existing models expect to disentangle features into different spaces while features learned in practice are still entangled in the shared space, leaving potential hazards for other training or unseen tasks. We propose to alleviate this problem by incorporating a new inductive bias into the process of multi-task learning, namely that different tasks can communicate with each other not only by passing hidden variables but also by explicitly passing gradients. Experimentally, we evaluate the proposed methods on three groups of tasks and two types of settings (\textsc{in-task} and \textsc{out-of-task}). Quantitative and qualitative results show their effectiveness.
[ "Pretend to share", "Gradient Communication" ]
https://openreview.net/pdf?id=B1e9W3AqFX
https://openreview.net/forum?id=B1e9W3AqFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hyl4jvExlV", "Bklq1mKt0m", "ryludMtYCQ", "S1gylMtF0X", "Hkeyx04527", "rkgfxIgOh7", "rJxg0PiSnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544730523950, 1543242466484, 1543242351975, 1543242214730, 1541193190653, 1541043689680, 1540892616478 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1197/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1197/Authors" ], [ "ICLR.cc/2019/Conference/Paper1197/Authors" ], [ "ICLR.cc/2019/Conference/Paper1197/Authors" ], [ "ICLR.cc/2019/Conference/Paper1197/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1197/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1197/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a novel idea of transferring gradients between tasks to improve multi-task learning in neural network models. The write-up includes experiments with multi-task experiments with text classification and sequence labeling, as well as multi-domain experiments. After the reviews, there are still some open questions in the reviewer comments, hence the reviewer decisions were not updated.\\nFor example, the impact of sequential update in pairwise task communication on performance can be analyzed. Two reviewers question task relatedness and the impact of how and when it is computed could be good to include in the work. Baselines could be improved to reflect reviewer suggestions.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting MTL approach, but the work could be improved in the light of suggestions.\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"We thanks for your insightful and helpful comments.\\nFirst, we have made our presentation more clear based on your confusion:\\n1) Eq.(4) (8)\\n2) Description of Figure (5)\", \"some_detailed_responses_are_shown_as_follows\": \"\", \"q1\": \"\\u201cThe whole paper is assumed to address the \\\"pretend-to-share\\\" problem, while the authors never provide any evidence that such problem really exists for any other algorithm. It seems to be an assumption without any support.\\u201d\", \"a1\": \"Actually the term \\u201cpretend-to-share\\u201d is just used to describe an existing problem and related evidence has been observed in some previous work (Bousmalis et al., 2016; Liu et al., 2017). While we have claimed this in \\u201cIntroduction\\u201d section, we will make this more clear in our revised version.\", \"q2\": \"\\u201cTo my best knowledge, meta-learning, including MAML (Finn et al. 2017), can obviously solve both in-task setting and out-task setting. In some sense, I think this work is almost equivalent to MAML\\u201d\", \"a2\": \"1) First of all, we don\\u2019t claim that meta-learning methods cannot be used for in-task setting and out-of-task setting. The thing we claimed is existing methods about meta learning focus on modelling the dependencies of samples from the SAME tasks.\\n2) Most existing meta learning methods are designed for few-shot learning (including MAML), whose motivation is far different from multi-task learning. 
For multi-task learning, one of the questions we care about is \\u201chow to allow different tasks to help each other as much as possible\\u201d.\\n3) In this paper, one major contribution is that we find that an existing problem in multi-task learning can be alleviated by allowing different tasks to pass gradients. \\nTo summarize, this paper starts from a real problem existing in multi-task learning, and we propose to solve it by passing gradients between different tasks. Similarly, most existing meta-learning methods propose passing gradients within the same task for the few-shot learning scenario.\\n\\nQ3: \\u201cseveral state-of-the-art baselines including MAML and cross-stitch networks should be compared\\u201d\\n\\nA3: As we have described above, MAML has its own training setting (training set, support set, test set), which is hard to use for our tasks.\\nAdditionally, there are too many existing methods for multi-task learning; we have chosen existing models as our baselines which also focus on the \\\"pretend-to-share\\\" problem.\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"Thanks for your comments; our response to each point is listed below.\\n 1. Both pairwise and listwise communication mechanisms are designed for addressing the inconsistent updating problem of shared parameters between different tasks. The difference is that pairwise communication considers the updating consistency of parameters between two tasks, which is a relatively relaxed constraint. (In real scenarios, there are features which can be shared partially.)\\n 2. Yes, explicitly passing gradients to different tasks will take additional time, while the overall training process is still very efficient. \\n 3. Here we choose a function without learnable parameters to compute the fast weight. We have given a more detailed formulation in our revised version. \\n 4. Yes, as we have also claimed in the paper, the relatedness can be computed in a static or dynamic way. The question \\u201chow to choose weights for different tasks?\\u201d is a classic problem, and pre-computing the task relatedness has been widely used in existing work. Here, we don\\u2019t explore this further, to keep our paper more focused.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"Thanks for your comments; our response to each point is listed below.\\nThat\\u2019s a really good question. As described in our paper, the tagging tasks (POS, Chunking, NER) don\\u2019t share the same output space, and here we regard each task equally. An alternative is to learn the weights dynamically.\\nBoth pairwise and listwise communication mechanisms are designed for addressing the inconsistent updating problem of shared parameters between different tasks.\\nThe difference is that pairwise communication considers the updating consistency of parameters between two tasks, which is a relatively relaxed constraint.\\nSpecifically, the constraint of listwise communication is used to purify the intersection of all tasks\\u2019 feature spaces, while pairwise communication is better suited to the partially-shared scenario.\"}", "{\"title\": \"interesting paper on improving MTL using gradient communication\", \"review\": \"Paper summary:\\nIn this paper, the authors propose a general framework for multi-task learning (MTL) in neural models. The framework is general enough to include some of the current neural models for MTL. 
Under the framework, the authors propose a new method that could allow tasks to communicate with each other through explicit gradients. Based on the gradients being communicated, the system could adjust the updates of one task based on the gradient information of the other task. Also, prior task-relatedness information could be incorporated into the system. \\n\\nThe idea of passing gradients among tasks seems very interesting, and it is new as far as I am aware. Although the idea is simple, it seems intuitive, since purely aggregating gradient updates might have undesired cancelling effects on each other. \\n\\nThere are some questions I have about this method. \\n1.\\tI\\u2019m curious about how the sequential update in pairwise task communication affects the performance. \\n2.\\tAlso, how does the sequential-update nature of the method affect the training speed? As of now, the parameter update consists of two sequential steps which also involve changes to the traditional update rule. \\n3.\\tWhat is the fast weight for, and how is it used in (9)? It would be better if there were more details on how the update is carried out during the gradient communication.\\n4.\\tRegarding the relatedness for List-wise communication, is it possible to update the relatedness dynamically? The pre-computed relatedness might not always make sense: during the learning of the representations, the task relatedness could change in the process.\\nThe system framework for MTL introduced by the authors seems to be somewhat isolated from the method proposed. I feel that the framework is not quite easy to understand from the way it is presented. From my perspective, the effectiveness of analyzing MTL methods using the framework seems a bit limited to me, as it serves more like a way of abstracting MTL models instead of analyzing them. Therefore, I feel the content devoted to that part might be too much.\\n\\nOverall, I think the paper is interesting although the method itself is relatively simple. The direction of utilizing gradient communication among tasks seems interesting and could be further explored. But I do feel the organization of the paper is a bit too heavy on the framework instead of the methodology proposed, and more details of the proposed algorithm could be provided.\\n\\nOn a side note, I think the paper exceeds the required length limit of 10 pages if appendices are counted towards it.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"just packing existing algorithms\", \"review\": \"This paper tries to address the \\\"pretend-to-share\\\" problem by designing gradient passing schemes in which the gradient updates to specific parameters of tasks are passed to the shared parameters. Besides, the authors summarize existing multitask learning algorithms in a framework called Parameters Read-Write Networks (PRAWN).\", \"pros\": [\"The view of putting existing multi-task learning algorithms in a read-write framework is quite intriguing and inspiring.\"], \"cons\": [\"Motivation: The whole paper is assumed to address the \\\"pretend-to-share\\\" problem, while the authors never provide any evidence that such problem really exists for any other algorithm. It seems to be an assumption without any support.\", \"Method:\", \"Though the read-write framework is very interesting, the authors do not clearly present it, so that the readers can totally get lost. 
For example, what do you mean by writing {\\\\Theta^{*r}_k - \\\\theta^{swr}_k}? In the line of structural read-op, where are \\\\theta_3 and \\\\theta_4 in the column of the constituent para. ? What do you mean by writing the equation (4)? How do you define g() in equation (8)? This is a research paper which should be as clear as possible for the readers to reproduce the results, rather than a proposal only with such abstract and general functions defined.\", \"In the list-wise communication scheme, you define the task relationship in equation (11). The problem is how do you evaluate the effectiveness of such definition, since massive works in multitask learning pursue to learn the task relationship automatically to guarantee the effectiveness instead of such heuristic definition.\", \"Related works: The authors do not clearly and correctly illustrate the connections between this work and meta-learning/domain adaptation. To my best knowledge, meta-learning, including MAML (Finn et al. 2017), can obviously solve both in-task setting and out-task setting. In some sense, I think this work is almost equivalent to MAML.\", \"Experiments:\", \"First, several state-of-the-art baselines including MAML and cross-stitch networks should be compared. Specifically, for the text classification dataset, there have been a lot of domain adaptation works discovering the transferable pivots (shared features) and non-pivots (specific features), which the authors should be aware of and compare in Table 3.\", \"The Figure 5 is not clear to me, and so is the discussion. The authors try to explain that the updating direction of shared parameters for PGP-SR is an integration of two private updating directions. I tried hard to understand, but still think that Figure 5(a) is even better than Figure 5(b). The updating direction of the shared parameters is almost the same as the cyan line.\", \"Presentation: there are so many grammatical errors and typos. For example,\", \"In the introduction, \\\"...datasets, range from natural\\\" -> \\\"...datasets, ranging from natural\\\"\", \"In the related work, \\\"and they propose address it with adversarial\\\" -> \\\"and they propose to address it with adversarial\\\"\", \"In the beginning of Section 4, \\\" an general\\\" -> \\\"a general\\\"\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper proposes that models for different tasks in multi-task learning cannot only share hidden variables but also gradients.\", \"pros\": [\"The overall framework is theoretically motivated and intuitive. The idea of passing gradients for multi-task learning is interesting and the execution using fast weights is plausible.\", \"The experiments are extensive and cover three different task combinations in different domains.\", \"The results are convincing and the additional analyses are compelling.\"], \"cons\": [\"I would have liked to see a toy example or at least a bit more justification for the \\\"pretend-to-share\\\" problem that models \\\"collect all the features together into a common space, instead of learning shared rules across different tasks\\\". As it is, evidence for this seems to be mostly anecdotal, even though this forms the central thesis of the paper.\", \"I found the use of Read and Write ops confusing, as similar terminology is widely used in memory-based networks (e.g. [1]). 
I would have preferred something that makes it clearer that updates are constrained in some way as \\\"writing\\\" implies that the location is constrained, rather than the update minimizing a loss.\"], \"questions\": [\"How is the weight list of task similarities \\\\beta learned when the tasks don't share the same output space? How useful is the \\\\beta?\", \"Could you elaborate on what is the difference between pair-wise gradient passing (PGP) and list-wise gradient passing (LGP)\", \"[1] Graves, A., Wayne, G., & Danihelka, I. (2014). Neural turing machines. arXiv preprint arXiv:1410.5401.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
r1g5b2RcKm
MLPrune: Multi-Layer Pruning for Automated Neural Network Compression
[ "Wenyuan Zeng", "Raquel Urtasun" ]
Model compression can significantly reduce the computation and memory footprint of large neural networks. To achieve a good trade-off between model size and accuracy, popular compression techniques usually rely on hand-crafted heuristics and require manually setting the compression ratio of each layer. This process is typically costly and suboptimal. In this paper, we propose a Multi-Layer Pruning method (MLPrune), which is theoretically sound, and can automatically decide appropriate compression ratios for all layers. Towards this goal, we use an efficient approximation of the Hessian as our pruning criterion, based on a Kronecker-factored Approximate Curvature method. We demonstrate the effectiveness of our method on several datasets and architectures, outperforming previous state-of-the-art by a large margin. Our experiments show that we can compress AlexNet and VGG16 by 25x without loss in accuracy on ImageNet. Furthermore, our method has far fewer hyper-parameters and requires no expert knowledge.
[ "Automated Model Compression", "Neural Network Pruning" ]
https://openreview.net/pdf?id=r1g5b2RcKm
https://openreview.net/forum?id=r1g5b2RcKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HklzDghbgV", "H1xgw8mTRX", "B1eFJqN9Am", "SyehOO4qCX", "BkgC6IEc0X", "BygczIV50m", "SJgffqb2h7", "SJec_a682X", "H1eb3snHn7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544826969939, 1543480920512, 1543289312962, 1543288948381, 1543288518141, 1543288337698, 1541310985651, 1540967794143, 1540897705338 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1196/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1196/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1196/Authors" ], [ "ICLR.cc/2019/Conference/Paper1196/Authors" ], [ "ICLR.cc/2019/Conference/Paper1196/Authors" ], [ "ICLR.cc/2019/Conference/Paper1196/Authors" ], [ "ICLR.cc/2019/Conference/Paper1196/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1196/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1196/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The authors propose a technique for pruning networks by using second-order information through the Hessian. The Hessian is approximated using the Fisher Information Matrix, which is itself approximated using KFAC. The paper is clearly written and easy to follow, and is evaluated on a number of systems where the authors find that the proposed method achieves good compression ratios without requiring extensive hyperparameter tuning.\\n\\nThe reviewers raised concerns about 1) the novelty of the work (which builds on the KFAC work of Martens and Grosse), 2) whether zeroing out individual connections as opposed to neurons will have practical runtime benefits, 3) the lack of comparisons against baselines on overall training time/complexity, 4) comparisons to work which directly prune as part of training (instead of the train-prune-finetune scheme adopted by the authors).\\nIn the view of the AC, 4) would be an interesting comparison but was not critical to the decision. Ultimately, the decision came down to the concern of lack of novelty and whether the proposed techniques would have an impact on runtime in practice.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Work could be strengthened by analysis of runtime performance\"}", "{\"title\": \"Concerns Now Addressed\", \"comment\": \"Thank you for clarifying the complexity. Please also include it in the paper formally.\"}", "{\"title\": \"Thank you all for reviewing our paper.\", \"comment\": \"We would like to thank all reviewers for reviewing our paper and give insightful comments. We are open to further comments.\\n\\nPlease find more detailed responses as below.\"}", "{\"title\": \"Thanks for your comments.\", \"comment\": \"Thanks for your comments and references. Hopefully our responses can answer some of your questions.\\n\\n- It is not clear to me...\", \"answer\": \"We understand the reviewer\\u2019s concern about the practical benefit of such method. However, as we mentioned before, pruning individual weights can help on-chip inference (FPGA) or mobile device where memory is an issue and customization can be applied to facilitate inference speed. Pruning individual weights can also help make model smaller as shown in [4]. Lastly, our contribution mainly focus on providing a hyper-parameter free manner to prune the network to get a smaller size, not claiming to have faster inference speed. 
Therefore, we think gaining speed is beyond the scope of this work.\\n[4] Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding\", \"experiments\": [\"While the reported compression rates are good...\"]}", "{\"title\": \"Method Complexity and Time\", \"comment\": \"Thanks for your comments. Here we provide a brief complexity and time evaluation of our method, using AlexNet as an example.\", \"complexity\": \"Our method computes the Hessian in a block-wise manner, and the size of each block is determined by the size of that layer. The largest fully-connected layer in AlexNet is fc1, which is a 9216 x 4096 matrix. As a result, a_{l-1} in Eq(13) is a vector of size 9216, and \\\\nabla_{s_l}L in Eq(13) is a vector of size 4096. Thus, A_{l-1} for this layer is a matrix of size 9216 x 9216, and DS_l has size 4096 x 4096. It is not difficult to invert two matrices of such sizes with standard hardware (O(9216^3)).\", \"time\": \"Each pruning operation is followed by a re-training procedure (just as in other popular pruning methods). Using AlexNet as an example, the re-training procedure runs 120 epochs over the ImageNet dataset, which typically involves 1-2 days on 4 Nvidia 1080 Ti GPUs and a 32-core CPU, while the pruning operation takes around 70s on the same hardware. Therefore, we think our pruning method only brings negligible overhead.\\n\\n In general, we do not claim that our method converges faster than other pruning methods. But we think our method can automatically determine compression ratios for all layers, thus avoiding lots of tuning and trials for manually searching those ratios, which makes our method easier and faster to use in practice.\"}", "{\"title\": \"New experimental results of ResNet50 and other responses.\", \"comment\": \"We sincerely thank the reviewer for those insightful suggestions and comments; here are some of our responses.\\n\\n- Part of the novelty ...\", \"answer\": \"\\\\lambda is a global variable, which is equivalent to the global compression ratio in practice. We introduce it for easier illustration in formulae.\"}", "{\"title\": \"Second order method for pruning multiple layers\", \"review\": [\"This paper proposes a multi-layer pruning technique based on the Hessian. The main claims are that it performs better than other second-order pruning methods and that it is principled.\", \"Main concerns / comments are:\", \"Part of the novelty relies on computing the Hessian, and the algorithm targets very large networks (parameter-wise); why? Modern networks have far fewer parameters and perform better. How does it behave on those? It would be interesting to see the impact on modern networks (e.g., ResNet).\", \"Paper claims to be principled (as many others) and to be able to address multiple layers at the same time. I do believe first-order methods do that as well. Why not compare to them?\", \"Paper claims little overhead (compared to training and re-training). There is not much on that. Also, the pipeline [train-prune-retrain] can be substituted by pruning while training with little overhead as in recent papers (such as Learning with structured sparsity or Learning the number of neurons in DNN, both at NIPS 2016, or encouraging low rank at compression-aware training of DNN, NIPS 2017). Compared to those newer methods, this proposal has a drop in accuracy while those do not. Would be nice to have a discussion related to that.
Would it be possible to include this in the original training process?\", \"Experiments are shown on small datasets and non-current networks with millions of parameters, which do not reflect the current state of the art. I would be interested to see the limitations on networks that do not have fully-connected layers holding the large majority of (redundant) parameters.\", \"Compute time is not provided. Please comment on that.\", \"I am not sure if I understand the statement on 'pruning methods can not handle multiple layers'. To the best of my understanding, current pruning methods such as those mentioned above do.\", \"Different from others, the proposed method, given a desired compression ratio, can adjust the relevance of each layer. That is interesting; however, what is the motivation behind it? It would be interesting to be able to control each layer specifically to make sure, for instance, that the latency of each layer is maintained.\", \"I am confused by \\\\lambda: how does this go from a percentage to per-parameter? Is that guaranteed?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Marginally above acceptance threshold\", \"review\": \"The paper proposes a multi-layer pruning method called MLPrune for neural networks, which can automatically decide appropriate compression ratios for all the layers. It first pre-trains a network. Then it utilizes K-FAC to approximate the Fisher matrix, which in turn approximates the exact Hessian matrix of the training loss w.r.t. the model weights. The approximated Hessian matrix is then used to estimate the increment of loss after pruning a connection. The connections from all layers with the smallest loss increments are pruned and the network is re-trained to the final model.\", \"strength\": \"1. The paper is well-written and clear. \\n2. The method is theoretically sound and outperforms the state of the art by a large margin in terms of compression ratio. \\n3. The analysis of the pruning is interesting.\", \"weakness\": \"*Method complexity and efficiency are missing, either theoretically or empirically.* \\nThe main contribution claimed in the paper is that they avoid the time-consuming search for the compression ratio for each layer. However, there is no evidence that the proposed method can save time. As the authors mention, AlexNet contains roughly 61M parameters. On the other hand, the two matrices A_{l-1} and DS_l needed in the method for a fully-connected layer already have 81M and 16M entries, respectively. Is this only a minor overhead, especially when the model goes deeper?\\n\\nOverall, it is a good paper. I am inclined to accept, and I hope that the authors can show the complexity and efficiency of their method.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Hyper-parameter-free approach, but limited novelty\", \"review\": \"This paper introduces an approach to pruning the parameters of a trained neural network. The idea is inspired by the Optimal Brain Surgeon method, which relies on second derivatives of the loss w.r.t. the network parameters.
Here, the corresponding Hessian matrix is approximated using the Fisher information to make the algorithm scalable to very deep networks.\", \"strengths\": [\"The method does not require hyper-parameter tuning.\", \"The results show the good behavior of the approach.\"], \"weaknesses\": \"\", \"novelty\": [\"In essence, this method relies on the work of Martens & Grosse to approximate the Hessian matrix used in the Optimal Brain Surgeon strategy. This is fine, but not of great novelty.\"], \"method\": [\"It is not clear to me why the notion of binary parameters gamma is necessary. Instead of varying the gammas from 1 to 0, why not directly zero out the corresponding network parameters w?\", \"In essence, the objective function in Eq. 5 adds an L_1 penalty on the gamma parameters, which would be related to an L_1 penalty on the ws. Note that this strategy has been employed in the past, e.g., Collins & Kohli, 2014, \\\"Memory Bounded Deep Convolutional Networks\\\".\", \"It is not clear to me how zeroing out individual parameters will truly allow one to reduce the model afterwards. In fact, one would rather want to remove entire rows or columns of the matrix W_l, which would truly correspond to a smaller model. This was what was proposed by Wen et al., NIPS 2016, and Alvarez & Salzmann, NIPS 2016, \\\"Learning the Number of Neurons...\\\".\", \"In the past, when dealing with the Hessian matrix, people have used the so-called Pearlmutter trick (Pearlmutter, Neural Computation 1994, \\\"Fast exact multiplication by the Hessian\\\"). In fact, in this paper, the author mentions the application to the Optimal Brain Surgeon strategy. Is there a benefit of the proposed approach over this alternative strategy?\"], \"experiments\": [\"While the reported compression rates are good, it is not clear to me what they mean in practice, because the proposed algorithm zeroes out individual parameters in the matrix W_l of each layer. This does not guarantee that entire channels are removed. As such, I would not know how to make the model actually smaller in practice. It would seem relevant to show the true gains in memory usage and in inference speed (both measured on the computer, not theoretically).\"], \"summary\": \"I do appreciate the fact that the proposed method does not require hyper-parameters and that it seems to yield higher compression rates than other pruning strategies that act on individual parameters. However, the novelty of the approach is limited, and I am not convinced of its actual benefits in practice.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
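The exchange above describes the pruning criterion only at a high level: an Optimal-Brain-Surgeon-style saliency computed from a Kronecker-factored (K-FAC) approximation of each layer's Hessian, H_l ≈ A_{l-1} ⊗ DS_l, where A_{l-1} is built from layer inputs a_{l-1} and DS_l from pre-activation gradients (the matrices whose 9216 x 9216 and 4096 x 4096 sizes the authors quote for AlexNet's fc1). The sketch below illustrates that criterion for one fully-connected layer; the shapes, damping terms, and the global-quantile threshold are assumptions made for the sketch, not the authors' exact implementation.

```python
# Illustrative OBS-style saliency with a Kronecker-factored Hessian
# approximation, H ~= A (x) DS, for a single fully-connected layer.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 64, 32, 512

W = rng.normal(0, 0.1, size=(d_in, d_out))     # layer weights
a = rng.normal(size=(n, d_in))                 # layer inputs a_{l-1}
g = rng.normal(size=(n, d_out))                # pre-activation gradients

A = a.T @ a / n + 1e-3 * np.eye(d_in)          # input second moment (damped)
DS = g.T @ g / n + 1e-3 * np.eye(d_out)        # gradient second moment (damped)
A_inv, DS_inv = np.linalg.inv(A), np.linalg.inv(DS)

# OBS saliency for weight (i, j): dL ~= w_ij^2 / (2 [H^{-1}]_(ij,ij)), and the
# Kronecker structure gives [H^{-1}]_(ij,ij) = [A^{-1}]_ii * [DS^{-1}]_jj.
h_inv_diag = np.outer(np.diag(A_inv), np.diag(DS_inv))
saliency = W ** 2 / (2 * h_inv_diag)

# One global threshold over all layers' saliencies; here we prune 90% of the
# weights purely for illustration.
mask = saliency >= np.quantile(saliency, 0.9)
print("fraction of weights kept:", mask.mean())
```

Because every weight in every layer would be scored against the same global threshold, per-layer compression ratios need not be hand-set — the property MLPrune's authors emphasize.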
BkeK-nRcFX
The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks
[ "George Philipp", "Jaime G. Carbonell" ]
For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning. One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients. However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed *before* training begins and can robustly predict the performance of the network *after* training is complete. We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient (NLC). Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks. The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to. Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins.
[ "deep learning", "neural networks", "nonlinearity", "activation functions", "exploding gradients", "vanishing gradients", "neural architecture search" ]
https://openreview.net/pdf?id=BkeK-nRcFX
https://openreview.net/forum?id=BkeK-nRcFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkgyrAd7eE", "B1gtG2Ok0Q", "S1g5gqvyAQ", "SyluC942aQ", "S1x4fvjv67", "BJl6VAZUpQ", "SyeSD4KGTQ", "ByxulRy-aQ", "SygRpTy-T7", "HygxJskbpm", "rJlBqc1bTQ", "S1xAuckZa7", "BkgENCgC3X", "Bkx3LR333Q", "HygEe93_3Q", "B1eUOvyX57" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1544945207498, 1542585360647, 1542580721865, 1542372048451, 1542072076166, 1541967412813, 1541735517271, 1541631471592, 1541631429940, 1541630680428, 1541630604573, 1541630581606, 1541439019671, 1541357139577, 1541093867616, 1538615150065 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1194/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1194/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1194/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes the NonLinearity Coefficient (NLC), a metric which aims to predicts test-time performance of neural networks at initialization. The idea is interesting and novel, and has clear practical implications. Reviewers unanimously agreed that the direction is a worthwhile one to pursue. However, several reviewers also raised concerns about how well-justified the method is: in particular, Reviewer 3 believes that a quantitative comparison to the related work is necessary, and takes issue with the motivation for being ad-hoc. Reviewer 2 also is concerned about the soundness of the coefficient in truly measuring nonlinearity.\\n\\nThese concerns make it clear that the paper needs more work before it can be published. And, in particular, addressing the reviewers' concerns and providing proper comparison to related works will go a long way in that direction.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting direction but needs more work\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to respond.\\n\\nRegarding the initialization scheme of scaling the outgoing weights of a single input neuron, we would add the following: instead of whitening the data indirectly by changing the scale of a few weights, why not just whiten the data before feeding it into the network? I believe that is what is commonly done, and with good reason. Let's assume the data component x(i) of the i'th input neuron is c times larger than other data components, and say we wish to correct for this. If we scale down the outgoing weights of x_i by c, then indeed the forward signal flowing out is of the same magnitude. 
However, the gradient with respect to those weights will still be ~c times larger than with respect to other weights, because the gradient is an outer product involving the input x. Therefore, the gradient *relative* to the weight size is c^2 times larger for weights linked to x(i) than for the other weights. Therefore, likely no SGD learning rate will be suitable for both the \\\"regular-sized\\\" and \\\"small\\\" weights. All of these complications can be avoided by simply scaling down x_i before feeding it into the network. \\n\\nAt the end of the day, the paper shows that the NLC is predictive for architectures sampled as described in section C, nothing more and nothing less. Practical neural networks are made from a very large range of building blocks and hyperparameters. We cover a significant number of them in our section C sampler. However, we can't hope to cover arbitrary functions of data f. It is clearly the specific regular nature of popular networks that makes our analysis work. The more a given network deviates from the networks studied in our paper, the more judgment a practitioner would have to exercise. \\n\\nHowever, again, we stress that the scope of our experiments exceeds that of the vast majority of prior work.\\n\\n\\\"For instance, you imply in this paper that this metric would work for convolutional networks while not providing evidence for it.\\\" How? We simply write: \\\"We have no reasons to suspect our results will not generalize to CNNs, and we plan to investigate this point in future work\\\", and later we write: \\\"In future work, we plan to ... extend our study to convolutional and densely-connected networks.\\\" How is this implying that the NLC works for CNNs? We explicitly state that we need to extend our study to investigate this point.\\n\\nRegarding the correlation of the \\\"altered\\\" NLC and test error: we would have to write the code and run the experiments for this. However, changing the definition of the NLC is a revision of a magnitude that likely goes beyond the scope of what is permissible in a single review cycle, as the majority of figures and correlation numbers would have to be changed and sections 3 / A would have to be rewritten. So unless the area chair indicates that this revision would push the paper over the edge in a positive sense, we won't be making that alteration. Also, we are not convinced that making the NLC more complex (as well as more complex to compute) is worth it in this instance.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comments.\\n\\nWe agree that some of the other metrics have been demonstrated to be linked to certain quantities. For example, the Lipschitz constant has been linked to adversarial robustness and depth to the efficient representation of certain function classes. However, throughout the paper, we demonstrate a range of favorable properties of the NLC which we think make it stand out among, or at least be complementary to, other metrics.\\n\\nWe will remove the Q/S notation in the camera-ready version together with other requested changes. (Since the change is purely a formality, I'm assuming this is fine.)\", \"regarding_correlation_information\": \"the underlying claim behind all quantities associated with this concept is that non-preservation of correlations between datapoints leads to high error.
In the paper, we argue against this general point rather than any of the individual metrics, for space reasons. While we agree that a deeper discussion would be preferable, there is not enough space for this in the \"related work\" section.\\n\\nWe believe we have cited a sufficient number of papers on the topic of depth and are happy to include specific references if requested.\"}", "{\"title\": \"Re: Review response\", \"comment\": \"Yes my comments about originality were in relation to those other metrics. Those metrics have properties not demonstrated for yours, and you haven't shown that they don't have the properties that yours apparently does. For example, your metric involves the expected norm of the Jacobian, which is a quantity that has been studied a lot already.\\n\\nYes, I do still think you should get rid of the Q/S notation.\\n\\nRegarding \"correlation information\". In order to have a proper discussion you would need to define precisely what quantity you are talking about. There are various quantities discussed in those papers, some of which don't seem to involve the input distribution at all.\\n\\nRegarding old papers on benefits of depth. I believe the papers you cite contain some of the references. For example, the work of Wolfgang Maass and collaborators from the 90s.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your reply.\\n\\n### 1) Hessian / curvature\\n\\n\\\"Your only evidence that the NLC captures curvature as far as I can tell is fig. 1.\\\"\\n\\n\\\"Plotting this against the NLC would, in my eyes, be a much more rigorous way of checking whether the NLC actually captures anything about curvature.\\\"\\n\\nBut we are not trying to argue that the NLC captures curvature. We *define* nonlinearity as the inverse relative diameter of linearly approximable regions. We think it is very reasonable to define \\\"linearity\\\" as the size of linearly approximable regions, and therefore \\\"nonlinearity\\\" as the inverse of that. As far as we know, \\\"nonlinearity\\\" does not have an accepted rigorous definition for deep networks; therefore, we give it one. The definition of the NLC is well-justified as a measure of the relative diameter, both conceptually (section 3 / A) and experimentally (figure 1).\\n\\nI don't understand why there is a burden to define the word \\\"nonlinearity\\\" as a synonym for curvature, or why we have the burden to discuss curvature in our paper at all. Our paper is not about curvature. If we had called the \\\"nonlinearity coefficient\\\" instead \\\"linear region coefficient\\\", would you still object to the paper?\\n\\n\\\"It follows that your definition for the diameter ought to be proportional to the Jacobian and inversely proportional to the NLC.\\\"\\n\\nI'm not 100% sure how you get to this statement, but it is clearly incorrect. While the relative diameter is indeed inversely proportional to the NLC, it is *also* inversely proportional to the Jacobian. The NLC, by definition, is proportional to the Jacobian. Therefore the relative diameter has a similar relationship with the Jacobian as it has with the NLC. \\n\\nThe fact that the NLC approximates the relative diameter is a highly non-trivial observation and cannot be shown with a back-of-the-envelope calculation. \\\"k ~ ||J(x)||_F/||H(x)||_F, therefore k is proportional to the Jacobian\\\" makes no sense in practice, because the Jacobian and Hessian are highly dependent.
As I mentioned in my original rebuttal, curvature and relative diameter are not directly related, so looking at the Hessian does not help in determining the relationship between NLC and relative diameter.\\n\\n\\n### 2) NLC vs Jacobian\\n\\nThe pure Jacobian measured before training correlates much less with error than the NLC. On CIFAR10, the correlation is only 0.38. On waveform-noise, it is only 0.44. We are happy to include those numbers in the paper. However, such comparisons are ultimately uninteresting for 2 reasons: (1) Because the low correlation of the Jacobian is largely caused by a small number of outliers. Because of the susceptibility of the correlation measure, as well as the RMSE measure, to outliers, it does not lend itself to ranking prediction metrics. We include the correlation in the paper simply in order to statistically verify the existence of the strong relationship between NLC and test error that is evident from figure 2. (2) Because the reason the Jacobian is correlated with error is that the Jacobian is correlated with the NLC, by definition. Any overlap in correlation is caused by the trivial insight that the NLC takes into account the Jacobian in its computation.\", \"the_reason_to_prefer_the_nlc_over_the_pure_jacobian_is_precisely_because_the_nlc_is_well_justified\": \"see section 3, A, 5, 6 and figure 1. As far as we can tell, Novak et al provided little to no justification for caring about the Frobenius norm of the raw Jacobian apart from its predictiveness. It is this theoretical grounding of the NLC that makes it robust to the confounders discussed in section 6, to which the raw Jacobian is not robust. We improve upon Novak with regard to theoretical justification / grounding, robustness, as well as the usefulness of the task studied.\\n\\nFinally, aren't the RMSE measure (relative to the standard deviation of the error) and correlation equivalent? The square of the correlation measures exactly the fraction of variance explained by the predictor via a linear model. Isn't this what you are suggesting?\\n\\n\\n\\nThank you again and we look forward to continuing the discussion.\"}", "{\"title\": \"Addendum\", \"comment\": \"I would just like to make a slight modification to my reply. While I do believe that RMSE would be more appropriate than correlation for this particular task, I do not think that I was correct in saying that the NLC could be used to cull only trivially poor networks. The only reason why some networks appear trivially poor is because of the NLC in the first place, which is a success!! As I mentioned in my original review, I do overall like your raw results. Nonetheless, I do stand by the rest of the comments in my reply. I do hope that you make the exposition stronger and incorporate baselines in future versions of the manuscript.\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for your quick and detailed reply; however, it did not really address the issues that I raised in my review.\\n\\nMy purpose in bringing up the Hessian was not to suggest that it would be a useful metric in practice, and my reason for bringing up the work of Novak et al. was not to say that they had similar results. In your work, you propose \\u201cThe Nonlinearity Coefficient\\u201d (a name that you chose) along with arguments that suggest that it captures nonlinearity. My point was that 1) you have not validated that the NLC actually captures nonlinearity to a sufficient degree and 2) you have not validated that the NLC works better than other measures.
Together, these issues lead me to be concerned that your results will mislead practitioners and might do more harm than good. \\n\\nLet me address these two points separately.\\n\\n1) I agree that the Hessian is intractable for most networks used in practice. To a lesser extent I agree that the Hessian is poorly behaved for ReLU networks (in fact, I think if you add small amounts of Gaussian noise to the data it is well-defined). However, for small networks where the Hessian is well-defined and tractable it would offer an appealing way to test the NLC, but you have not done that. Moreover, conceptually the Hessian also provides a good way to think about curvature that shows the \\u201crelative diameter\\u201d you introduce to be a poor measure of nonlinearity when comparing with the NLC.\\n\\nYour only evidence that the NLC captures curvature as far as I can tell is fig. 1. Your definition for diameter is the maximum k such that 1/2(f(x\\u2019) - f(x))^Tv < (J(x)(x\\u2019-x))^Tv < 2(f(x\\u2019) - f(x))^Tv where x\\u2019 = x + C k u. Let us now consider this condition a bit more carefully. Suppose that f(x) has a well-defined Taylor series; then A(f(x\\u2019) - f(x)) = A(J(x) (x\\u2019-x) + 1/2(x\\u2019-x)H(x)(x\\u2019-x) + O((x\\u2019-x)^3)). If we assume that u and v are randomly oriented wrt the Hessian and the Jacobian then we will have equality between A(f(x\\u2019) - f(x))^Tv and (J(x)(x\\u2019-x))^Tv when k ~ ||J(x)||_F/||H(x)||_F with some additional constants. It follows that your definition for the diameter ought to be proportional to the Jacobian and inversely proportional to the NLC. This is true regardless of whether or not the NLC is related to nonlinearity at all and is only based on the fact that the metric you chose for diameter depends on the norm of the Jacobian.\\n\\nSo what would a more appropriate definition have been? Suppose you had instead considered the k where (f(x\\u2019) - f(x) - J(x)(x\\u2019-x))^2 > C for some C. Expanding to second order, you would expect k ~ C^{1/4} / ||H(x)||_F, which would not have this implicit factor of the Jacobian. Plotting this against the NLC would, in my eyes, be a much more rigorous way of checking whether the NLC actually captures anything about curvature.\\n\\n2) Given that you have not validated that the NLC actually contains information about curvature, it is natural to ask whether or not you are gaining anything aside from the Jacobian. While it is true that the results of Novak et al. were for trained networks, it is also true that they saw strong correlation between the norm of the Jacobian and generalization error. To me, since the derivation of the NLC is suspect, it is all the more important that you check how well the norm of the Jacobian correlates with generalization error (at initialization).\\n\\nActually, to that end I would like to point out something else that I found unfortunate about your exposition. In sec. 6 you compare the NLC with a number of other metrics that one might want to correlate with generalization performance. However, you do not actually do the comparison, but instead give qualitative justification for why each might not work. This would be more impactful if you actually showed that the NLC performs better than these other metrics. To me this is all the more true given that the NLC itself is poorly justified.\\n\\nAs a final point, I have to remark on the table that you presented.
I don\\u2019t know that I find correlation to be a particularly compelling metric for success in this case (and I don\\u2019t know exactly how to interpret a correlation of 0.57, for example). I think a better metric would be to treat this as a regression task and compute the RMSE for a linear model based on the NLC compared with the standard deviation of the data (only for data in the inset). In this way we can see how much the NLC buys you in terms of predicting generalization performance over not doing the prediction. Looking at the inset, I would be very dubious as a machine learning practitioner about preemptively throwing out networks in that range of NLC. At the point where I would be unwilling to do that, it seems that the NLC could at most be used to cull trivially poor networks without training (although there is some value in this).\\n\\nI do feel that this could be a solid submission down the road, but in my opinion these are serious issues that must be addressed before the results seem trustworthy.\"}", "{\"title\": \"Review response (part 1/2)\", \"comment\": \"Dear reviewer 2,\\n\\nThank you for your review.\\n\\nWe believe the standard you measure our paper by is completely unrealistic. You seem to imply that the NLC can only be useful if it is perfect, i.e. if there is no network with an initially high NLC that performs well and no network with an initially low NLC that performs badly. The space of neural networks effectively contains every function that has parameters with respect to which meaningful gradients can be computed. There is no way that a single metric could possibly explain all performance variations across a space that large. The goal of the paper is to show that the NLC is highly predictive of performance in networks built from common design paradigms. But of course the NLC can only account for some of the variance of test error. There are of course factors that influence performance that are not related to nonlinearity. For example, a linear network with a bottleneck layer of width 1 will have a high test error despite being linear. Conversely, any network that has a low training error will also have a low test error if, say, the training and test set happen to be the same, no matter how nonlinear the network is. On an image dataset a ConvNet will achieve lower error than a fully-connected net even if the nonlinearity is the same. Again, a single scalar metric cannot hope to explain all of the performance variation. However, we believe the fact that the NLC can explain a large fraction of that variation across a wide range of networks is very significant.\\n\\nWe strongly believe that the NLC's robustness to confounders and architectural design choices is a *strength* of our paper, not a *weakness*. We improve in these categories upon all the works we cited in our paper, the vast majority of which were published at top conferences (e.g. Balduzzi et al, Cisse et al, Glorot et al, He et al, Raghu et al, Schoenholz et al, Xiao et al, Yang & Schoenholz). Not only do we validate the NLC across a range of networks whose breadth exceeds most if not all related work, it is robust to many failure cases that other metrics are not robust to (section 6). In fact, a core motivation of writing our paper was precisely to advance the state of the art in robustness and breadth of applicability. If you care strongly about these things, shouldn't you welcome our paper as improving the state of the art rather than condemning it for not reaching \\\"perfection\\\"?
Finally, while our derivation of the NLC in section 3 / A uses heuristics, it is still more principled than the majority of competing metrics.\", \"regarding_the_two_failure_cases_you_mentioned\": \"We disagree with your assertion regarding ResNets. Our experience suggests ResNets with zero weights in the last tensor of a residual block generalize well no matter how deep they are. This is supported by \\\"Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks.\\\". This paper shows how to train arbitrarily deep networks. While this paper deals with plain nets rather than ResNets, the same principles apply in both cases. Note that as depth increases, we must decrease the learning rate and increase the numerical precision of the computation, as we outline in section B.1 of our paper. \\n\\nThe linear network where the weights with respect to a single input dimension have a much different size than all other weights seems to be a purely theoretical construction. I have never seen such a weight initialization used. Furthermore, this problem is easily fixable. The NLC is currently defined as / equivalent to $\\\\sqrt{\\\\frac{\\\\Tr(\\\\mathbb{E}_x\\\\mathcal{J}\\\\mathcal{J}^T)\\\\Tr(C_x)}{d_\\\\text{in}\\\\Tr(C_f)}}$. Instead, we can define the NLC as $\\\\sqrt{\\\\frac{\\\\Tr(\\\\mathbb{E}_x\\\\mathcal{J}C_x\\\\mathcal{J}^T)}{\\\\Tr(C_f)}}$. Then, the NLC is exactly equal to 1 for all linear functions. The reason we did not use this more complicated definition of the NLC in our paper was that, in practice, the weight matrix is initialized without regard to the spectrum of the input. We did not want to make the NLC more complicated without gaining practical benefits. After all, the NLC is supposed to measure nonlinearity for (especially randomly initialized) neural networks, not every possible function.\"}", "{\"title\": \"Review response (part 2/2)\", \"comment\": \"\\\"one can wonder what is correlating generalization and NLC together in the experiment section. Same remark applies to the correlation between nonlinearity and NLC.\\\"\\n\\nNLC is a measure of nonlinearity that is based on certain assumptions as explained in sections 3 / A. Figure 1 shows that those assumptions hold in practical networks. Figure 2 shows that good / bad performance is strongly associated with nonlinearity. At least one reason for this relationship is given in figure 3E, where we show that NLC is related to sensitivity of the output to small input changes.\\n\\n\\\"What were the architectures that resulted in small/high NLC?\\\"\\n\\nThe magnitude of the NLC is chiefly dependent on the linear approximability of nonlinearities, as explained in section 5. Unfortunately, an in-depth discussion of why certain architectures have a certain NLC, for a large number of architectures, goes beyond the scope of the paper. \\n\\n\\nThank you and we look forward to your response.\"}", "{\"title\": \"Review response\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your review. We address your detailed comments below. It seems that your main criticism which prevented our paper from attaining a higher rating was your assessment that \\\"The contribution is solid, although not earth shattering given previous work on such metrics.\\\" I would love to know more detail regarding this statement.
We believe that many properties demonstrated for the NLC throughout the paper (e.g. predictiveness of error when computed before training across a wide range of architectures, predictiveness of nonlinearity, robustness to confounders, relationship to linear approximability of activation functions) are either novel compared to other metrics or at least have not been demonstrated for them. If you are aware of prior work that contradicts those beliefs, we would love to know.\\n\\n\\n.\\n.\\n.\\n\\n\\n*** missing sqrt(d) factor\\n\\nYou are absolutely correct. Thank you for pointing this out. For what it's worth, we noticed this problem very soon after we submitted the paper and posted a correction on openreview. You can check our comment \\\"typo found\\\", posted on October 3rd, below. We understand that it is annoying to see conflicting definitions, and we apologize for this. The highlighted \\\"Definition 1\\\" at the top of page 3 is correct. We added the alternative definition using traces at the last moment, thereby producing typos. The reason for including the 'correct' d_in / d_out factors in the definition is to avoid susceptibility of the NLC to changes in input dimension / output dimension that do not affect nonlinearity / performance.\\n\\nIn the revision we just uploaded, we have fixed the d_in / d_out typos. If you think we should still also remove the Q/S notation, please let us know and we will upload an additional revision.\\n\\n*** relative diameter\\n\\nWe added a reference to section E.1 in the main text, where the formal definition of relative diameter is given. The informal meaning is discussed in section 3.\\n\\n*** test error vs training error\\n\\nYes, the high test error is mostly caused by bad generalization, at least on the waveform-noise dataset. Please see section B.1 for further details on this.\\n\\n*** Biased output\\n\\nAn individual neuron has biased activation values if the standard deviation of those values is much smaller than the absolute mean. The output bias as defined by the Q over S ratio is a way to average the bias across neurons in the output layer. The quantity can be written as \\\\sqrt{ E_j,x [f(x,j)^2] / (E_j,x [f(x,j)^2] - E_j [[E_x f(x,j)]^2]) }.\\n\\n*** Failure cases for correlation information\\n\\nThe problem with correlation information is that it is susceptible to adding and removing constants. Consider a simple example: performing k-means on a dataset is equivalent to performing k-means on that dataset plus a large constant. Adding the constant destroys correlation information, but does not fundamentally alter the quality of the representation, at least as long as the constant is not comparable in size to the largest representable floating point number.\\n\\nHence, any bias in the input that is removed by the network confounds correlation information. Batch normalization is just one way to do this. One can also simply initialize the trainable bias in the first layer to eliminate the bias of the dataset. \\n\\nSimilarly, if the network corrupts correlation information by introducing bias, we can for example use an error function (instead of vanilla softmax-cross entropy) that compares the output minus the bias to the labels instead of the output itself. Or we could add the same bias to the labels and use an L2 error function, for example.\\n\\nWe agree that the paper that introduced correlation information possibly did not intend to deal with batchnorm.
We do not call into question the validity of their results, but simply point out limitations of the metric.\\n\\n*** Representational benefits of depth\\n\\nAdmittedly, I am not a huge expert on the literature on representational benefits of depth. Since this is not a core topic for the paper, we hope that including 7 citations is sufficient. However, if there are other papers on depth that you think should be cited, please let us know.\\n\\n\\nThanks\"}", "{\"title\": \"Review response (part 1/2)\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for your review. We agree that the Hessian, as well as the method used in Novak et al, are interesting to consider given the context of our paper. In fact, we considered discussing both of them in the paper itself, but ultimately decided to focus only on the 5 metrics discussed in section 6, for space reasons. Please see the discussion of the Hessian and Novak paper below.\\n\\n### Using the Hessian ###\", \"the_hessian_has_a_severe_drawback_for_estimating_nonlinearity\": \"it fails completely for nonlinearities that are non-differentiable. For example, the input-output Hessian of a plain ReLU or hard tanh network is the zero tensor. Even in a batchnorm-ReLU network, the Hessian treats ReLUs as if they are linear functions and thereby grossly underestimates nonlinearity. Since ReLU is by far the most popular activation function, any nonlinearity measure would certainly have to work for ReLU to be practically useful.\\n\\nOne might approximate ReLU with a smooth function, such as 1/k log(1 + exp(kx)) with a large k, but this has further pitfalls. Consider the Hessian of a nonlinearity operation \\\\tau that is applied elementwise to a vector of pre-activations x with respect to that vector: H_\\\\tau = d^2\\\\tau / dx^2. The off-diagonal entries of this 3D tensor are zero. If \\\\tau is a close approximation to ReLU, then the diagonal entries are either very large if the corresponding entry of x is close to zero, or very small if the corresponding entry of x is not close to zero. Therefore the diagonal entries of H_\\\\tau are very sparse, and hence have high variance. Further, as \\\\tau gets closer and closer to ReLU, ||H_\\\\tau||_F converges to infinity. Finally, because of the chain rule for second derivatives, an infinite / noisy H_\\\\tau causes the Frobenius norm of the input-output Hessian of the network to also be infinite / noisy, yielding an incorrect / high-variance estimate of nonlinearity.\\n\\nWhile ReLU is a prominent failure case, these issues extend far wider. Consider an arbitrary activation function \\\\tau. Then we can approximate it by a piecewise linear activation function \\\\tau'. If the length of the linear segments used is sufficiently small, we can replace \\\\tau with \\\\tau' in any network and obtain not just the same learning behavior and final performance, but also the same linearly approximable regions. Yet the Hessian of the network using \\\\tau' grossly underestimates nonlinearity for the reasons outlined above. In fact, using similar constructions, we can maintain the nonlinearity and performance of a network but vary the Hessian almost arbitrarily. This raises the question: amongst this large space of possible Hessians, which is the correct one?\\n\\nIn addition to the conceptual issues, the input-output Hessian of a network is a 3D tensor.
We are aware of no algorithm that computes its Frobenius norm efficiently.\\n\\nIn summary, we argue that the Hessian is either outright unsuitable for nonlinearity estimation / performance prediction in deep networks, or significantly more work would need to be done to fix the issues above. In either case, the NLC is the state of the art for nonlinearity estimation.\"}", "{\"title\": \"Review response (part 2/2)\", \"comment\": \"### The method of Novak et al ###\\n\\nOur paper goes significantly beyond the scope of Novak et al, because we use the NLC computed *before* training to predict performance *after* training. Novak et al use the Jacobian *after* training to compare against performance *after* training. Predicting performance before training is much more useful because it enables architecture design / selection. Furthermore, it is also much harder. We predict the property (test error) of one network (trained network) by examining a property (NLC) of a different network (untrained network). Novak et al make inference about the property of a network (test error) from properties of that same network.\\n\\nLet us detail just one reason why our task is harder. The Novak et al paper uses the Frobenius norm of the Jacobian of the softmax units with respect to the input and compares that value to test error. We can write that Jacobian as d softmax/d input, which is the same as d softmax/d logits * d logits/d input, where 'logits' denotes the values that are fed into the softmax. Now it turns out that ||d softmax/d logits||_F tends to be smaller for a given input when the prediction of the network is correct for that input, and it tends to be larger when the prediction of the network is incorrect. This is shown in the Novak paper in figure 6. Therefore ||d softmax/d logits||_F is strongly correlated with error not because of an interesting structural property of a network, but simply because of an idiosyncratic property of the softmax: it tends to have larger gradients for less confident predictions. Hence, it is likely that the correlation between d softmax/d input = d softmax/d logits * d logits/d input and test error is also caused to a significant degree by this effect. By computing the NLC before training, we do not \\\"benefit\\\" from this spurious signal.\\n\\nTherefore, our task is not comparable to the task studied by Novak et al, and hence the raw correlation numbers are also not comparable. We would argue that if you consider the Novak paper to be an important contribution, our paper is at least an equally important contribution, because we study a task that is at least in certain ways significantly more useful.\\n\\n### Summary ###\\n\\nThe NLC is the first gradient-based metric that, when computed before training, has been shown to be predictive of test error after training through a large-scale study involving a wide variety of networks. Additional benefits include:\\n\\n- it is an accurate measure of nonlinearity *in practice* (figure 1)\\n- it is intimately related to the linear approximability of activation functions (section 5)\\n- it is more robust to confounders than comparable metrics (section 6)\\n- it is cheap to compute (section G)\\n\\nHowever, we do not claim that the NLC is the *correct* measure of nonlinearity for deep neural networks. We are happy to use heuristics in deriving the NLC as long as we attain the above benefits.
In response to your criticism that \"it feels like an extremely weak definition of nonlinearity to say that the linear approximation of a function fails when it produces values that lie outside of the co-domain of the function\", we would respond that our goal is not to define nonlinearity definitively, but to come up with a metric that has the benefits outlined above. Nonetheless, we think that our figure 1 and the shortcomings of the Hessian indicate that the NLC is the state-of-the-art in neural network nonlinearity estimation, and we think that the NLC as a performance predictor is better motivated than e.g. 'gradient explosion' or 'correlation preservation', as well as the metric used by Novak et al.\\n\\nFinally, we strongly disagree with the statement that \\\"in fig. 2A it seems like the nonlinearity coefficient varies by at least two orders of magnitude in the inset of the figure where the test accuracy really does not seem sensitive to its value.\\\" See the correlation numbers and p-values below. While the correlation in the inset is a bit lower, this is to be expected as the correlation between random variables tends to decrease if the range of one variable is restricted.\\n\\nScenario correlation p-value\\nCIFAR10 0.72 2.34e-41\\nCIFAR10 (inset) 0.68 1.88e-21\\nMNIST 0.81 1.31e-57\\nMNIST (inset) 0.63 3.60e-19\\nwave 0.67 1.82e-33\\nwave (inset) 0.57 3.60e-13\\n\\nWe thank you again for your review. We would love to discuss further and look forward to your response. Please let us know whether you want us to include the above discussions in the next revision of the manuscript or not.\"}", "{\"title\": \"An interesting proposal lacking justification.\", \"review\": \"In this paper the authors introduce a new quantity, the nonlinearity coefficient, and argue that its value at initialization is a useful predictor of test time performance for neural networks. The authors conduct a wide range of experiments over many different network architectures and activation functions to corroborate these results. The authors then extend their method to compute the local nonlinearity of activation functions instead.\\n\\nI am a bit torn on this paper. I appreciate the direction that the authors have chosen to pursue. The topic of identifying parameters that are predictive of trainability is certainly interesting and has the potential to be quite impactful. Moreover, the breadth of the experiments conducted by the authors is novel and significant. Finally, I find the overall manner in which the authors have chosen to present their data refreshingly transparent. Together, this leads me to believe that the quantity proposed by the authors might be useful to researchers.\\n\\nHaving said that, I am concerned by the authors\\u2019 exposition of the nonlinearity coefficient itself. Fundamentally, my concern stems from the fact that it seems a lot of relatively ad-hoc decisions were made in the construction of the nonlinearity coefficient and an insufficiently good job was done to compare it to other measures of nonlinearity.\\n\\nSpecifically, it feels like an extremely weak definition of nonlinearity to say that the linear approximation of a function fails when it produces values that lie outside of the co-domain of the function. Moreover, I feel as though there is already a well-defined notion of nonlinearity at a point that could be constructed by reference to the Hessian (or generally by the approximation error induced by truncating the Taylor series after the linear term).
I would like to see some comparison between these two methods. \\n\\nThis is made more troubling given that the correlation found by the authors is present but does not seem especially strong. For example, in fig. 2A it seems like the nonlinearity coefficient varies by at least two orders of magnitude in the inset of the figure where the test accuracy really does not seem sensitive to its value. Prior work (for example, [1] from last year's ICLR) has shown strong correlations between the Frobenius norm of the Jacobian and test error (see fig. 5 and fig. 6). Since the definition of the nonlinearity coefficient seems somewhat ad-hoc, I would love to see a comparison between it and just looking at the Jacobian norm in terms of predicting test accuracy.\\n\\n[1] - SENSITIVITY AND GENERALIZATION IN NEURAL NETWORKS: AN EMPIRICAL STUDY\\nRoman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Solid contribution\", \"review\": \"This paper proposes a metric to measure the \\\"nonlinearity\\\" of a neural network, and presents evidence that the value of this metric at initialization time is predictive of generalization performance.\\n\\nApart from a few problems, I think this paper is well written and thorough. The contribution is solid, although not earth shattering given previous work on such metrics. There seems to be a basic error in some of the early math, although I don't think this will qualitatively affect the results in any significant way.\\n\\n\\n-----------------\", \"detailed_comments_by_section\": \"------------------\", \"section_3\": \"It seems like a 1/sqrt(d) factor is missing from these Q_i(S_x x(i)) and Q_j(S_x f(x,j)) formulas. As far as I can tell this doesn't affect Def 1 because you seemed to use the correct formula there. \\n\\nHowever, the rewritten version with the traces doesn't seem to be correct. There should be a d_in factor in the denominator (inside the square root). This error seems unrelated to the other one. Assuming I'm correct and that this is an error, does this affect your results in the various figures? And what is the actual final definition of NLC that you used?\\n\\nIn general, it's annoying for the reader to verify that all of these forms are equivalent. And it's fiddly enough with the sqrt(d) terms constantly disappearing and reappearing in the numerators and denominators that even you made multiple errors (as far as I can tell). I would suggest making this section more rigorous and writing out everything carefully. And you probably don't need to rewrite it in so many equivalent forms with different notation unless they are useful somehow. \\n\\nThe use of the Q and S symbols feels superfluous and counterproductive. Standard notation with expectations and squares wouldn't take much more space and would be a lot clearer.\", \"section_4\": \"\\\"we plot the relative diameter of the linearly approximable regions of the network as defined in section 3\\\": but you don't seem to define \\\"relative diameter\\\" there. As far as I can tell it's only defined in Appendix E, and this is only mentioned in the caption of figure 1. It's impossible to interpret this result without knowing precisely what \\\"relative diameter\\\" is.
If you can't afford to describe this in the main paper you should at least mention that it's a different (more expensive) way of estimating the same thing that the NLC estimates.\n\nIn Figure 2, are the higher test errors due to the optimizer failing to lower the training error, or due to a greater generalization gap? I guess the Figure 3 results suggest the latter possibility, which is surprising to me. \n\n\nWhat does it mean to have a \"very biased output\"? What does that inequality mean intuitively? Should there be an absolute value on the RHS? It would be much easier to parse it if it were written in plain notation without these S and Q symbols.\", \"section_6\": \"\\\"metric also an\\\" -> \\\"metric also has an\\\"\n\nCan you generate a failure case for \"correlation information\" that doesn't involve Batch Norm layers? I don't think the authors of those works meant for their results to deal with that.\n\nNote that there are actually a lot of papers going back to the 90s that discussed and proved representational benefits of depth in neural networks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Does the nonlinearity coefficient measure nonlinearity?\", \"review\": \"I do not understand the denomination of nonlinearity coefficient provided in definition 1: although the quantity indeed does equal 1 under a whitened data distribution or an orthogonal matrix, the conjecture that it should be close to 1 does not seem to hold at all under an arbitrary data distribution. Using a similar construction to the one in section 6, we can rescale whitened input data with a diagonal matrix D with components all equal to one except for a very large one \\lambda and also multiply the input weights by D^{-1} to compensate (and have a similar function). If you look at such a construction for the linear case with identity initialization of A, the NLC is sqrt((\\lambda^2 + n - 1) (\\lambda^{-2} + n - 1)) / n, which can grow arbitrarily large with \\lambda *for a linear model*. However, because of its low capacity, we would expect a linear model to have reasonable generalization. This seems to compromise a low initial NLC as a necessary condition for reasonable generalization.\nConversely, it\u2019s possible to initialize arbitrarily large residual networks such that the resulting initial function is linear (by initializing the output weight of the incrementing block to 0). This initialization may also be done such that the initial NLC becomes close to 1. This wouldn't necessarily result in good generalization, which seems to agree with the experimental observation. \nNow given that this initial NLC is neither sufficient nor necessary to predict generalization, one can wonder what is correlating generalization and NLC together in the experiment section. The same remark applies to the correlation between nonlinearity and NLC. This is especially concerning since in the linear case, the NLC can vary depending on whether we choose to whiten the data or not, for example, so the other influencing factors need to be discovered.
What were the architectures that resulted in small/high NLC?\nThe experiment section still contains interesting bits, such as successful training of very deep architectures that are very sensitive to input perturbations, but they are not part of the main thread of the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Typo found\", \"comment\": \"On page 3, the terms that involve the trace operator Tr() are missing some d_in/d_out values. root(Tr(C_x)) should instead be root(Tr(C_x)/d_in). root(Tr(C_f)) should instead be root(Tr(C_f)/d_out). Finally, the two terms that have Tr(E_x JJ^T) and Tr(AA^T) in the numerator should also have d_in in the denominator, under the square root. Otherwise, the text remains unchanged. Apologies for any inconvenience caused.\"}" ] }
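The correlation numbers and p-values quoted in the authors' rebuttal of this record are straightforward to recompute from paired measurements. Below is a minimal sketch (ours, not the authors' code). Two details are assumptions on our part, since the rebuttal does not state them: that Pearson correlation is used, and that the NLC is logged first because it spans orders of magnitude.

import numpy as np
from scipy.stats import pearsonr

def correlation_with_pvalue(nlc_values, test_errors):
    # Correlate log-NLC with test error across trained architectures;
    # pearsonr returns both the correlation coefficient and its p-value.
    r, p = pearsonr(np.log(nlc_values), test_errors)
    return r, p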
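AnonReviewer1's closed-form example above can also be checked numerically: for the rescaled linear model described in that review, the quoted NLC expression equals 1 at lambda = 1 and grows without bound in lambda. A quick sketch (ours; it merely evaluates the reviewer's formula):

import numpy as np

def nlc_rescaled_linear(lmbda, n):
    # NLC of the linear model with one input direction rescaled by lambda,
    # per AnonReviewer1: sqrt((lambda^2 + n - 1)(lambda^-2 + n - 1)) / n.
    return np.sqrt((lmbda**2 + n - 1.0) * (lmbda**-2 + n - 1.0)) / n

for lmbda in (1.0, 10.0, 100.0, 1000.0):
    print(lmbda, nlc_rescaled_linear(lmbda, n=100))
# Prints 1.0 at lambda = 1, then values growing roughly like lambda*sqrt(n-1)/n.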
B1xY-hRctX
Neural Logic Machines
[ "Honghua Dong", "Jiayuan Mao", "Tian Lin", "Chong Wang", "Lihong Li", "Denny Zhou" ]
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.
[ "Neuro-Symbolic Computation", "Logic Induction" ]
https://openreview.net/pdf?id=B1xY-hRctX
https://openreview.net/forum?id=B1xY-hRctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SklUxuqqeV", "r1ee4DycyV", "HJxOd0tDJE", "S1e1Do49Cm", "rklvumwmCQ", "r1e_izwX0m", "S1l4CCI66Q", "rJxii0IT6Q", "H1g19nL6Tm", "Syx7d_H1Tm", "rylWbydT3Q", "rkgpGkN52Q", "r1gMP1TKnQ" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545410542395, 1544316711966, 1544162928399, 1543289686741, 1542841199018, 1542840992260, 1542446796224, 1542446755312, 1542446215291, 1541523563160, 1541402361166, 1541189396907, 1541160794212 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1193/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Authors" ], [ "ICLR.cc/2019/Conference/Paper1193/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1193/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1193/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1193/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your comprehensive comments. We will revise the paper accordingly.\", \"comment\": \"Dear AC,\\n\\nThanks for your careful reading of our manuscript and consideration. We appreciate your remarkably comprehensive comments and suggestions. We promise to provide an inclusive revision in the camera-ready version accordingly.\\n\\nMany thanks,\\nAuthors.\"}", "{\"title\": \"Thanks for your pointers\", \"comment\": \"Thanks for your pointers to the related papers. We will discuss them in the next version of our paper.\"}", "{\"comment\": \"... although it is not a differentiable model or even a neural model, the idea of learning to sort infinite arrays from short examples has been explored in the \\\"Generalized Planning\\\" literature, for example,\", \"http\": \"//rbr.cs.umass.edu/shlomo/papers/SIZaij11.pdf\", \"https\": \"//www.dtic.upf.edu/~jonsson/ker18.pdf\", \"title\": \"Potential related work\"}", "{\"title\": \"Reply to authors' response\", \"comment\": \"Thanks for the clarification about the details and the scalability. I would like to keep my rating. This is an interesting direction and worth pursuing, so I support acceptance. But it is still unclear to me how the proposed approach can move beyond toy datasets.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"1. Running time / training time.\\nThe number of examples/episodes used is shown in Table 4. We plan to add training time / inference speed in our revision. Here, we show our results on Blocks World. We train our model on 12 CPUs (Xeon E5) and a single GPU (GTX 1080), It takes 3 hours to train our model (26000 episodes). During inference, the model runs in 1.43s per episode when the number of blocks is 50.\\n\\n2. Rules are not expressed in a logical formalism.\\nThanks for the comment and suggestion --- Yes, your understanding is correct. Although the design of NLM\\u2019s neural architecture is highly motivated by FOPC logic formalism, NLM models do not explicitly encode FOPC logic forms. 
In contrast, the weights of the MLPs encode how models should perform the deduction, and the output of the NLM can be regarded as the conclusions (0/1 indicating whether we should move the block, in a Blocks World).\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"1. Model details.\nDetailed implementation details including the number of layers (a.k.a. the depth) can be found in Table 4 (Appendix B.2). As for the hyper-parameters of the MLPs, we use no hidden layer, and the hidden dimension (number of intermediate predicates) of each layer is set to 8 across all our experiments.\nWe thank the reviewer for the suggestion, and will make this information clearer in our revision. Moreover, we plan to release our code upon acceptance.\n\n2. Scalability\nIt should be clarified that scalability mentioned in the paper mainly refers to the complexity of reasoning (e.g., number of steps before producing a desired predicate), not the number of objects/entities or relations. For example, as shown in our general clarification, learning predicates that have a complex structure (such as the ShouldMove in the example) poses a scalability challenge to existing ILP methods. We also refer the reviewer to our clarification on scalability for more detailed analysis.\nIn general, we agree with the reviewer that an inductive logic system should be able to handle both complex reasoning rules (e.g., as the settings explored in our paper) and large-scale entity sets (e.g., as in knowledge graph-related literature). We hope the methods and insights we presented in this paper could help the whole community in this interesting direction.\n\n3. Permutation in MLP.\nPermutation is needed in two places. Consider an n-ary predicate at a particular layer of the NLM (call it \u201cp\u201d). As the reviewer correctly points out, the [m, m-1, \u2026, m-n+1] dimensions represent permutations in the input of p. On the other hand, the permutation before the MLP is to create new predicates that only differ from the existing one in the variable order, in order to compute compositions of these predicates; this is the second place where permutation is needed.\n\nAs an example, suppose \u201cp\u201d is the predicate HasEdge(x, y). By permuting its variables, we get another predicate, HasReverseEdge(x, y), which is TRUE if there is an edge from y to x. These two predicates can be used to compose a more complex predicate\n HasBidirectionalEdge(x, y) \u2190 HasEdge(x, y) \u2227 HasReverseEdge(x, y)\"}", "{\"title\": \"Response to AnonReviewer3 Continued\", \"comment\": \"6. The scalability discussion with ILP systems and SRL methods.\nThank you for the comment. Please see our response to the scalability claim. We will revise the paper accordingly to clarify.\n\n7. Generalization w.r.t. the number of objects.\nContrary to the reviewer\u2019s hypothesis, our results actually verify that NLM models do generalize well to larger test instances. For example, Table 2 shows that our learned model achieves 100% accuracy on test instances with more blocks, and the same holds for Table 1. We have also conducted experiments testing this ability using several trained models in extreme cases consisting of 500 blocks (1000 numbers for sorting); no failure cases were found. The models will be made public along with our code after the paper decision. This ability is one of our main findings, as highlighted in the abstract (\u201cNLMs ... generalize to arbitrarily large-scale tasks\u201d).\n\n8. 
The goal configuration of Blocks World.\nWe present the generation of Blocks World instances in Appendix B.4, and will make it clearer in the revision. The goal configuration is randomly and independently generated, in the same way as the initial configuration. One can compute that the expected optimal number of steps needed to solve the Blocks World is approximately 2m - o(m), where m is the number of blocks (50 in the test instances). An average of 84 steps therefore means the model learns a fairly good solution. The reviewer is also welcome to check our demo in the footnote of Page 8: https://sites.google.com/view/neural-logic-machines .\n\n9. MDP formulation of the Blocks World.\nThanks for the nice suggestion. We discuss the MDP formulation in section 3.4, and we will make it clearer. We input the current world and the target world as tensors describing relations between objects. At each time step, the agents take actions to move one block onto another. We use sparse rewards to train the agents: the agents get the reward only when they finish the task.\n\n10. NLM learns the underlying logical rules.\nThanks for the comment. We mean that the learned NLM generalizes well to problems of varying sizes, in the same way logical rules do. We will reword the sentence to avoid confusion, and discuss rule extraction as future work.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the many comments and pointers, and will revise our paper to further emphasize our contributions and novelties compared to previous work.\n\n1. Section 2.1 and the handling of free variables.\nSection 2.1 lists three primitive rules that serve as building blocks in later subsections to implement a Neural Logic Machine. This is necessary for providing terminology and notation used throughout the rest of the paper. We are not claiming them as novel contributions.\nSection 2.1 does *not* describe propositional logic. The rule for \u201cBoolean logic\u201d is used in NLM as a component for realizing first-order logic (probabilistically, as described in section 2.2): such rules are used to operate on predicates grounded on objects. An example in the Blocks World domain may look like:\n IsGround(A) V Clear(A) -> Placeable(A)\nwhere A is one object in the Blocks World domain; and notably, IsGround(.), Clear(.) and Placeable(.) are not manually specified but are learned by the network.\nOur model supports free variables. The arity of a predicate is its number of free variables. For example, the arity of a binary predicate is 2, and NLM uses a matrix (a tensor of dimension 2) to represent the predicate\u2019s values for all possible groundings; the 1st paragraph of section 2.2 gives further details. The three rules (eqns 1-3) keep the same number of free variables, increase it by 1, and decrease it by 1, respectively.\n\n2. The probability distributions modeled by MLPs.\nWe would like to thank R3 for the comment about \u201cjoint distribution\u201d, and briefly clarify technical details in Section 2.2 & 2.3 to avoid potential misunderstanding.\n\nLet\u2019s define the input of each layer k as H_k (each element of which is in [0, 1]) recursively in the following:\n\n(1) The initial layer is H_1 = prob(B) representing boolean values 0 or 1, where B is a set of base predicates.\n(2) For each layer k, the probabilistic boolean expression in the building block is defined above Eqn. 4:\n Expression(H_1, ... 
, H_k) ==> H_{k+1}\nwhere Expression in NLM is represented by some neural network structure. As illustrated in Figures 2 & 3, we use (a) a grouped MLP with weights \theta_k and activation \sigma, and (b) ReduceOrExpand, which together compute\n H'_k = \sigma(MLP(H_1, ... , H_k; \theta_k)),\n H_{k+1} = ReduceOrExpand(H'_k).\nThis building block keeps all elements of H_{k+1} in [0, 1], and H_{k+1} becomes the input of the next layer k+1. Therefore, such a series of building blocks is able to model a complex expression. \n\nWe will not use \u201cjoint distribution\u201d to avoid confusion, and will make this clearer in the revision.\n\n3. The difference with other approaches that encode the weights of weighted logic rules using neural networks.\nThanks for the pointers. We will cite and discuss the papers in the revision. Our work differs substantially from MLNs with weights computed by NNs, e.g., the mentioned L&F paper:\nTheir logic rules (called \u201cknowledge base\u201d in L&F) are designed by experts (see sec 2.3 of L&F). Here, our NLM uses deep NNs to learn such rules from data. The Blocks World example in our response to the scalability question shows the complexity of the rules that NLMs can handle.\nConsequently, our NLM needs to learn weights that form those rules. In contrast, MLN only needs to learn a real-valued weight for each hand-designed logic rule.\n\n4. The difference with the unrolled computation graph of MLN.\nOne of our main contributions is to use deep NNs to learn logic rules. Unrolling NN-parameterized MLNs is limited by the need for, and quality of, expert-designed logic rules.\n\n5. The encoding of objects.\nIt is unclear to us what the reviewer means by \u201cobjects are \u2026 vector encodings\u201d and hence the similarity to DeepProbLog, as we do *not* encode objects by vectors. Data representations in NLM are all tensors that encode the (probabilistic) true/false values of grounded predicates; see the 1st paragraph of section 2.2 (page 3).\"}", "{\"title\": \"Clarification on Scalability\", \"comment\": \"We thank all reviewers for their thoughts and comments. In addition to the specific responses below, here we clarify the scalability question asked by some reviewers. We will include related discussions in our revision.\n\nIt should be clarified that scalability mentioned in the paper mainly refers to the complexity of reasoning (e.g., number of steps before producing a desired predicate), not the number of objects/entities or relations. This is highlighted in #2 at the bottom of page 1: \u201cWe expect the learning system to scale with the number of logic rules. Existing logic-based algorithms like ILP suffer an exponential computational complexity with respect to the number of logic rules\u201d.\n\nKnowledge-graph tasks involve many entities (e.g. > 10M) and relations, as reviewers pointed out, but the rules involved in the reasoning steps are usually restricted. For example, the rules considered in the knowledge base reasoning work (Yang et al., 2017) are restricted to a \u201cchain-like\u201d form (their Eqn. 1), which is query(Y,X) <- Rn(Y,Zn) \u2227 ... \u2227 R1(Z1,X), where R1, ..., Rn are *known* relations in the knowledge base. 
Such knowledge-graph reasoning tasks represent an interesting yet different class of problems outside of the current scope of our paper.\n\nIn contrast, learning predicates that have a complex structure (such as the ShouldMove example below) poses a scalability challenge to existing ILP methods. In dILP [Evans et al.], for example, suppose each rule has C possible choices from the templates and R rules need to be learned; then the search space is at least O(C^R) --- the number of possible rule sets is exponential w.r.t. the number of rules. On the other hand, our method is only quadratic in the number of rules (or in this case, equivalently, the number of predicates).\n\n**********************************************************************\n A Blocks World Example \n**********************************************************************\nThis example shows what we mean by complex reasoning in the seemingly simple Blocks World domain. Suppose we are interested in knowing whether a block should be moved in order to reach the target configuration. Here, a block should be moved if (1) it is moveable; and (2) there is at least one block below it that does not match the target configuration. Call the desired predicate \u201cShouldMove(x)\u201d.\n\nInput Relations (as specified in the last paragraph of page 7):\nSameWorldID, SmallerWorldID, LargerWorldID;\nSameID, SmallerID, LargerID;\nLeft, SameX, Right, Below, SameY, Above.\nThe relations are given on all pairs of objects across both worlds.\n\nHere is one way to produce the desired predicate by defining several helper predicates, designed by \u201chuman experts\u201d:\n1. IsGround(x) \u2190 \u2200y Above(y, x)\n2. SameXAbove(x, y) \u2190 SameWorldID(x, y) \u2227 SameX(x, y) \u2227 Above(x, y)\n3. Clear(x) \u2190 \u2200y \u00acSameXAbove(y, x)\n4. Moveable(x) \u2190 Clear(x) \u2227 \u00acIsGround(x)\n5. InitialWorld(x) \u2190 \u2200y \u00acSmallerWorldID(y, x)\n6. Match(x, y) \u2190 \u00acSameWorldID(x, y) \u2227 SameID(x, y) \u2227 SameX(x, y) \u2227 SameY(x, y)\n7. Matched(x) \u2190 \u2203y Match(x, y)\n8. HaveUnmatchedBelow(x) \u2190 \u2203y SameXAbove(x, y) \u2227 \u00acMatched(y) \n9. ShouldMove(x) \u2190 InitialWorld(x) \u2227 Moveable(x) \u2227 HaveUnmatchedBelow(x)\", \"we_can_also_write_the_logic_forms_in_one_line\": \"ShouldMove(x) \u2190 (\u2200y \u00acSmallerWorldID(y, x)) \u2227 (\u2200y \u00ac(SameWorldID(y, x) \u2227 SameX(y, x) \u2227 Above(y, x))) \u2227 \u00ac(\u2200y Above(y, x)) \u2227 ((\u2203y SameWorldID(x, y) \u2227 SameX(x, y) \u2227 Above(x, y)) \u2227 \u00ac(\u2203z \u00acSameWorldID(y, z) \u2227 SameID(y, z) \u2227 SameX(y, z) \u2227 SameY(y, z)) )\n\nNote that this is only a part of the logic needed to complete the Blocks World challenge. The learner also needs to figure out where the block should be moved. The proposed NLM can learn policies that solve the Blocks World from the sparse reward signal indicating only whether the agent has finished the game. More importantly, the learned policy generalizes well to larger instances (consisting of more blocks).\n**********************************************************************\"}", "{\"metareview\": \"\", \"pros\": [\"The paper presents an interesting forward chaining model which makes use of meta-level expansions and reductions on predicate arguments in a neat way to reduce complexity. 
As Reviewer 3 points out, there are a number of other papers from the neuro-symbolic community that learn relations (logic tensor networks is one good reference there). However using these meta-rules you can mix predicates of different arities in a principled way in the construction of the rules, which is something I haven't seen.\", \"The paper is reasonably well written (see cons for specific issues)\", \"There is quite a broad evaluation across a number of different tasks. I appreciated that you integrated this into an RL setting for tasks like blocks world.\", \"The results are good on small datasets and generalize well\"], \"cons\": \"- (scalability) As both Reviewers 1 and 3 point out, there are scalability issues as a function of the predicate arity in computing the set of permutations for the output predicate computation.\\n- (interpretability) As Reviewer 2 notes, unlike del-ILP, it is not obvious how symbolic rules can be extracted. This is an important point to address up front in the text. \\n- (clarity) The paper is confusing or ambiguous in places:\\n\\n-Initially I read the 1,2,3 sequence at the top of 3 to be a deduction (and was confused) rather than three applications of the meta-rules. Maybe instead of calling that section \\\"primitive logic rules\\\" you can call them \\\"logical meta-rules\\\".\\n\\n-Another confusion, also mentioned by reviewer 3 is that you are assuming that free variables (e.g. the \\\"x\\\" in the expression \\\"Clear(x)\\\") are implicitly considered universally quantified in your examples but you don't say this anywhere. If I have the fact \\\"Clear(x)\\\" as an input fact, then presumably you will interpret this as \\\"for all x Clear(x)\\\" and provide an input tensor to the first layer which will have all 1.0's along the \\\"Clear\\\" relation dimension, right?\\n\\n-It seems that you are making the assumption that you will never need to apply a predicate to the same object in multiple arguments? If not, I don't see why you say that the shape of the tensor will be m x (m-1) instead of m^2. You need to be able to do this to get reflexivity for example: \\\"a <= a\\\".\\n\\n-I think you are implicitly making the closed world assumption (CWA) and should say so.\\n\\n-On pg. 4 you say \\\"The facts are tensors that encode relations among multiple objectives, as described in Sec. 2.2.\\\". What do you mean by \\\"objectives\\\"? I would say the facts are tensors that encode relations among multiple objects.\\n\\n-On pg. 5 you say \\\"We finish this subsection, continuing with the blocks world to illustrate the forward\\npropagation in NLM\\\". I see no mention of blocks world in this paragraph. It just seems like a description of what happens at one block, generically.\\n\\n-In many places you say that this model can compute deduction on first-order predicate calculus (FOPC) but it seems to me that you are limited to horn logic (rule logic) in which there is at most one positive literal per clause (i.e. rules of the form: b1 AND b2 AND ... AND bn => h). From what I can tell you cannot handle deduction on clauses such as b1 AND b2 => h1 or (h2 and h3).\\n\\n-There is not enough description of the exact setup for each experiment. For example in blocks world, how do you choose predicates for each layer? How many exactly for each experiment? You make it seem on p3 that you can handle recursive predicates but this seems to not have been worked out completely in the appendix. 
You should make this clear.\n\n-In figure 1 you list Move as if it's a predicate like On but it's a very different thing. On is a predicate describing a relation in one state. Move is an action which updates a state by changing the values of predicates. They should not be presented in the same way.\n\n-You use \"min\" and \"max\" for \"and\" and \"or\" respectively. Other approaches have found that using the product t-norm t-norm(x,y) = x * y helps with gradient propagation. del-ILP discusses this in more detail on p. 19. Did you try these variations?\n\n-I think it would be helpful to somewhere explicitly describe the actual MLP model you use for deduction, including layer sizes and activation functions.\n\n-p. 5. typo: \"Such a parameter sharing mechanism is crucial to the generalization ability of NLM to\nproblems ov varying sizes.\" (\"ov\" -> \"of\")\n\n-p. 6. sec 3.1 typo: \"For \u2202ILP, the set of pre-conditions of the symbols is used direclty as input of the system.\" (\"direclty\" -> \"directly\")\n\nI think this is a valuable contribution and novel in the particulars of the architecture (e.g. expand/reduce) and am recommending acceptance. But I would like to see a real effort made to sharpen the writing and make the exposition crystal clear. Please in particular pay attention to Reviewer 3's comments.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting forward chaining approach to neural deduction\"}", "{\"title\": \"Interesting approach to model FOL in NN, with concerns in scalability\", \"review\": \"This paper presents a model to combine neural networks and logic programming. It proposes to use 3 primitive logic rules to model first-order predicate calculus in the neural networks. Specifically, relations with different numbers of arguments over all permutations of the groups of objects are represented as tensors with corresponding dimensions. In each layer, an MLP (shared among different permutations) is applied to transform the tensor. Multiple layers capture multiple steps of deduction. On several synthetic tasks, the proposed method is shown to outperform the memory network baseline and shows strong generalization.\n\nThe paper is well written, but some of the contents are still a bit dense, especially for readers who are not familiar with first-order predicate calculus. \n\nThe small Python example in the Appendix helps to clarify the details. It would be good to include the details of the architectures, for example, the number of layers and the hidden sizes in each layer, in the experiment details in the appendix. \n\nThe idea of using the 3 primitive logic rules and applying the same MLP to all the permutations is interesting. However, due to the permutation step, my concern is whether it can scale to real-world problems with a large number of entities and different types of relations, for example, a real-world knowledge graph.\", \"specifically\": \"1. Each step of the reasoning (one layer) is applied to all the permutations for each predicate over each group of objects, which might be prohibitive in real-world scenarios. For example, although there are usually only binary relations in real-world KGs, the number of entities is usually >10M. \n\n2. 
Although the inputs or preconditions could be sparse, and thus efficient to store and process, the intermediate representations are dense due to the probabilistic view, which makes the (soft) deduction computationally expensive.\", \"some_clarification_questions\": \"Are there some references for the Remark on page 3? \n\nWhy is there a permutation before the MLP? I thought the [m, m-1, \u2026, m-n+1] dimensions represent the permutations. For example, suppose there are two objects, {x1, x2}. Then [0, 1, 0] represents the first predicate applied on x1 and x2, and [1, 0, 0] represents the first predicate applied on x2 and x1. Some clarifications would definitely help here. \n\nI think this paper presents an interesting approach to model FOPC in neural networks. So I support the acceptance of the paper. However, I am concerned with its scalability beyond the toy datasets.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"In this paper the authors propose a neural-symbolic architecture, called Neural Logic Machines (NLMs), that can learn logic rules.\n\nThe paper is pretty clear and well-written and the proposed system is compelling. I have only some small concerns.\nOne issue concerns the learning time. In the experimental phase the authors do not state how long training takes for different datasets.\nMoreover, it seems that the \u201crules\u201d learnt by NLMs cannot be expressed in a logical formalism, is that right? If I am right, I think this is a major difference between dILP (Evans et al.) and NLMs, and the authors should discuss that. If I am wrong, I think the authors should describe how to extract rules from NLMs.\nIn conclusion I think that, once these little issues are fixed, the paper could be considered for acceptance.\n\n[minor comments]\np. 4\n\u201ctenary\u201d -> \u201cternary\u201d\n p. 5\n\u201cov varying size\u201d -> \u201cof varying size\u201d\n\u201cThe number of parameters in the block described above is\u2026\u201d. It is not clear to me how the number of parameters is computed.\n\u201cIn Eq. equation 4\u201d -> \u201cIn Eq. 4\u201d\n\np. 16\n\u201cEach lesson contains the example with same number of objects in our experiments.\u201d. This sentence sounds odd.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"interesting directions but unclear novelty and some claims that are too strong\", \"review\": \"The paper introduces Neural Logic Machines, a particular way to combine neural networks and first-order but finite logic.\n\nThe paper is very well written and structured. However, there are also some downsides.\n\nFirst of all, Section 2.1 is rather simple from a logical perspective and hence it is not clear why this gets a special term. Moreover, why do you mix Boolean logic (propositional logic) and first-order logic? And how do you deal with the free variables, i.e., the variables that are not bound by a quantifier? The semantics you define later actually assumes that all free variables (in your notation) are bound by universal quantifiers, since you apply the same rule to all ground instances. 
Given that you argue that you want a neural extension of symbolic logic (\"NLM is a neural realization of (symbolic) logic machines\"), this has to be clarified, as it would not be an extension otherwise. \n\nFurthermore, Section 2.2 argues that we can use an MLP with a sigmoid output to encode any joint distribution. This should be proven. In particular, given that the inputs to the network are the marginals of the ground atoms. So this is more like a conditional distribution? Moreover, it is not clear how this is different from other approaches that encode the weights of weighted logical rules (e.g. in an MLN) using neural networks, see\ne.g. \n\nMarco Lippi, Paolo Frasconi:\nPrediction of protein beta-residue contacts by Markov logic networks with grounding-specific weights. \nBioinformatics 25(18): 2326-2333 (2009)\n\nNow of course, and this is the nice part of the present paper, by stacking several of the rules, we could directly specify that we may need a certain number of latent predicates. \nThis is nice, but it is not argued that this is highly novel. Consider again the work by Lippi and Frasconi. We unroll a given NN-parameterized MLN for a fixed number of forward chaining steps. This gives us essentially a computational graph that could also be made differentiable, and hence we could also have end2end training. The major difference seems to be that now objects are directly attached with vector encodings, which are not present in Lippi and Frasconi's approach. This is nice but also follows from Rocktaeschel and Riedel's differentiable Prolog work (when combined with Lippi and Frasconi's approach).\nMoreover, there have been other combinations of tensors and logic, see e.g. \n\nIvan Donadello, Luciano Serafini, Artur S. d'Avila Garcez:\nLogic Tensor Networks for Semantic Image Interpretation. IJCAI 2017: 1596-1602\n\nHere you can also have vector encodings of constants. This also holds for \n\nRobin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt:\nDeepProbLog: Neural Probabilistic Logic Programming. CoRR abs/1805.10872 (2018)\n\nThe authors should really discuss this missing related work. This should also involve\na clarification of the \"ILP systems do not scale\" statement. At least if one views statistical relational learning methods as an extension of ILP, this is not true. Probabilistic ILP aka statistical relational learning has been used to learn models on electronic health records, see e.g., the papers collectively discussed in \n\nSriraam Natarajan, Kristian Kersting, Tushar Khot, Jude W. Shavlik:\nBoosted Statistical Relational Learners - From Benchmarks to Data-Driven Medicine. Springer Briefs in Computer Science, Springer 2014, ISBN 978-3-319-13643-1, pp. 1-68\n\nSo the authors should either discuss SRL and its successes, separating SRL from ILP, or they cannot argue that ILP does not scale. In the related work section, they decided to view both as ILP, and, in turn, the statement that ILP does not scale is not true. Moreover, many of the learning tasks considered have been solved with ILP, too, of course in the ILP setting. Many ILP systems have been shown to scale beyond those toy domains. \nThis also includes the blocks world. Here relational MDP solvers can deal e.g. with BW worlds composed of 10 blocks, resulting in MDPs with several million states. And they can compute relational policies that solve e.g. the goal on(a,b) for an arbitrary number of blocks. 
This should be incorporated in the discussion of the introduction in order to avoid the wrong impression that existing methods just work for toy examples. \n\nComing back to scaling, the current examples are on rather small datasets, too, namely <12 training instances. Moreover, given that we learn a continuous approximation with a limited depth of reasoning, it is also very likely that the models do not generalize well to larger test instances. So the scaling issue has to be qualified to avoid giving the wrong impression that the present paper solves this issue. \n\nFinally, the BW experiments should indicate some more information on the goal configuration. This would help to understand whether an average number of moves of 84 is good or bad. Moreover, some hints about the MDP formulation should be provided, given that there have been relational MDPs that solve many of the probabilistic planning competition tasks. And, given that the conclusions argue that NLMs can learn the \"underlying logical rules\", the learned rules should actually be shown. \n\nNevertheless, the direction is really interesting, but there are several downsides that have to be addressed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
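The building-block recursion quoted in the authors' responses in the record above (H'_k = sigma(MLP(H_1, ..., H_k; theta_k)) followed by ReduceOrExpand) is concrete enough to sketch. Below is our own minimal illustration for unary and binary predicates only; it is not the authors' implementation, and the single shared linear layer standing in for the grouped MLP is a simplification. Max/min over an object axis play the role of soft existential/universal quantifiers, and the transpose realizes the variable permutation from the HasEdge/HasReverseEdge example.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nlm_block(H1, H2, W1, W2):
    # H1: [m, c1] unary predicate probabilities; H2: [m, m, c2] binary
    # predicate probabilities; all values in [0, 1].
    m, c1 = H1.shape
    # Expand: lift unary predicates to binary by broadcasting a new variable.
    H1_as_binary = np.broadcast_to(H1[:, None, :], (m, m, c1))
    # Reduce: binary -> unary via soft quantifiers (max ~ exists, min ~ forall).
    exists_y = H2.max(axis=1)
    forall_y = H2.min(axis=1)
    # Permute the variables of binary predicates (cf. HasReverseEdge).
    H2_perm = H2.transpose(1, 0, 2)
    # Shared weights applied along the predicate axis, identically for every
    # object tuple; this is what makes the layer size-invariant.
    unary_in = np.concatenate([H1, exists_y, forall_y], axis=-1)
    binary_in = np.concatenate([H2, H2_perm, H1_as_binary], axis=-1)
    return sigmoid(unary_in @ W1), sigmoid(binary_in @ W2)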
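The nine expert rules in the Blocks World example of the "Clarification on Scalability" note above can also be evaluated mechanically on boolean groundings, which is the same tensor view of predicates the responses describe, minus the learning. A minimal sketch (ours; the argument names are hypothetical), with each relation an n x n boolean numpy array over all objects of both worlds:

import numpy as np

def should_move(same_world, smaller_world, same_id, same_x, same_y, above):
    # rel[x, y] == True iff Rel(x, y) holds; axes follow the rules' variables.
    is_ground = above.all(axis=0)                    # 1. forall y: Above(y, x)
    same_x_above = same_world & same_x & above       # 2. SameXAbove(x, y)
    clear = ~same_x_above.any(axis=0)                # 3. forall y: not SameXAbove(y, x)
    moveable = clear & ~is_ground                    # 4. Moveable(x)
    initial_world = ~smaller_world.any(axis=0)       # 5. forall y: not SmallerWorldID(y, x)
    match = ~same_world & same_id & same_x & same_y  # 6. Match(x, y)
    matched = match.any(axis=1)                      # 7. exists y: Match(x, y)
    unmatched_below = (same_x_above & ~matched[None, :]).any(axis=1)  # 8.
    return initial_world & moveable & unmatched_below                 # 9. ShouldMove(x)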
H1lFZnR5YX
Neural Regression Tree
[ "Wenbo Zhao", "Shahan Ali Memon", "Bhiksha Raj", "Rita Singh" ]
Regression-via-Classification (RvC) is the process of converting a regression problem to a classification one. Current approaches for RvC use ad-hoc discretization strategies and are suboptimal. We propose a neural regression tree model for RvC. In this model, we employ a joint optimization framework where we learn optimal discretization thresholds while simultaneously optimizing the features for each node in the tree. We empirically show the validity of our model by testing it on two challenging regression tasks where we establish the state of the art.
[ "regression-via-classification", "discretization", "regression tree", "neural model", "optimization" ]
https://openreview.net/pdf?id=H1lFZnR5YX
https://openreview.net/forum?id=H1lFZnR5YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyefcXpegE", "H1eB8QXqam", "S1lO457jnm", "rke_Ie993Q" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544766346103, 1542234956862, 1541253680194, 1541214288504 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1192/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1192/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1192/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1192/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"While the idea of revisiting regression-via-classification is interesting, the reviewers all agree that the paper lacks a proper motivating story for why this perspective is important. Furthermore, the baselines are weak, and there is additional relevant work that should be considered and discussed.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs a stronger motivation and updated baselines\"}", "{\"title\": \"Clearly written and well thought out paper with somewhat lackluster motivation and results\", \"review\": \"This paper presents a new approach to regression via classification problem utilizing a hybrid model between a neural network and a decision tree. The paper is very well written and easy to follow. It presents results on two very similar regression tasks and claims state of the art performance on both. The paper however does not motivate its contributions sufficiently, and does not provide enough experimental results to justify their method.\\n\\nThe authors could significantly improve the paper by spending more time motivating their work. For example, it is unclear why RvC is the best strategy for the tasks they study and what other tasks one should approach from a RvC standpoint. The paper would also be significantly more compelling if the strategy was applied to more varied tasks. Furthermore the two baseline models used are 11 and 34 years old respectively and i do not believe they represent a thorough review of the potential approaches to this problem. Significant work could also be done to explore the effect of using different neural network structures for the NRT - in this paper only a fairly simple 3 layer architecture is used. \\n\\nSection 4.4 is interesting and i believe the paper would be improved if more time was spent exploring the explanability of this new proposed model. \\n\\nFinally the scan method mentioned in the conclusion could have more emphasis placed on it in the text. \\n\\nOver all the paper is well written and easy to follow but is limited by its lack of well detailed motivation and insufficient baselines and applied tasks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Failed to motivate the significance and poor experimental baselines.\", \"review\": \"Summary:\\n\\nThis paper presents a neural network based tree model for the regression via classification problem. The paper is easy to follow but it failed to give motivations for the significance of this work. I do not understand why regression via classification is any useful and what value it brings to the well studied regression problem with many different function approximators. The paper neither explain why regression via classification is any useful nor does it motivates the need for the presented model. 
The presented experiments are also not thorough: there are stronger and simpler baselines for regression, like random forests, gradient boosted trees, or kernel ridge regression, which are not evaluated or compared against. I think this work does not pass the acceptance bar at the ICLR conference.\", \"comments\": \"1. I was not aware of these age and height estimation tasks. i-vectors are the standard features for speaker recognition. Can the authors please elaborate in a line or two why i-vectors would be suitable for age and height estimation?\n\n2. The regressor function r() simply gives out the mean value of the bin. The authors could have provided details on why this choice was made and how it affects MAE.\n\n3. Each node in the NRT is successively being trained on less data. Why do all the node-specific neural networks then need the same parameter size?\n\n4. In the Conclusion the authors say, \\\"In addition, we proposed a scan method and a gradient method to optimize the tree.\\\" The authors do not very clearly mention these two methods in the text, nor are the results demonstrated in that way.\", \"miscellaneous_comments\": \"1. This line seems incomplete in Section 1: \\\"Traditional methods for defining the partition T by prior knowledge, such as equally probable intervals, equal width intervals, k-means clustering, etc. [4, 5, 3].\\\" \n\n2. The notations used inside the nodes in Figure 1 have not been defined in the paper. \n\n3. The axes in Figures 2 and 3 don't have labels. The Figure 3 caption says age, but it is for heights.\n\n4. In Section 4.4: Figure 4.4 should be Figure 4 and at one point \\\"This is visible in 4.4\\\" should be \\\"This is visible in Figure 4\\\"\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Technically interesting contribution but would need more considerations and evidences\", \"review\": \"Summary:\nThe paper presents a novel supervised-learning method for regression using decision trees and neural nets. The core idea is based on a 90s technique called \\\"regression via classification\\\": first apply discretization to the target response y by some clustering, then apply any \\\"classification\\\" to those discretized values as class labels. Because real-valued y is one-dimensional and ordered, discretization means setting thresholds that give an N-partition of the training {y_i}. The proposed method tries to jointly learn these thresholds as well as the node splitters of decision trees using neural nets. Because the node splitters are given by neural nets here, probability outputs for binary classification are also available. Regarding these probabilities as probabilistic splits at each node, the final prediction is the response y weighted by the path probabilities to the leaves. The learning is done in a greedy manner, as in standard tree learning, because exact joint learning is computationally hard. Experiments on speaker profiling illustrate the performance improvements against standard nonlinear regression such as SVR and regression trees.\", \"comment\": \"- This is a technically very interesting contribution, but several points can be considered more carefully as below.\n\n- To be honest, I am not convinced that the \\\"regression via classification (RvC)\\\" approach is still valid. 
The proposed approach is an elaborate extension of this approach, but if we want prediction performance for regression, we would use ensembles of regression trees such as random forests, GBDT, ExtraTrees, ... instead of a single CART. Or we could even directly use deep-learning-based regression. The experiments against CART and SVR are too naive in the current context of supervised learning. On the other hand, single CARTs are highly interpretable and can be a nice tool to get some interpretations of the given data. But the proposed method seems to lose this type of interpretability because it introduces node splitters given by neural nets. So the merits of the proposed approach would be somewhat unclear.\n\n- In the context of tree learning, we need to consider two things.\n\n- First of all, trees whose nodes split by general (multivariate) binary splitters are called \\\"multivariate trees\\\", but interestingly this does not always bring good prediction performance on today's quite high-dimensional data. So I guess that optimizing both the \\\"thresholds for RvC\\\" and the \\\"nonlinear node splitters\\\" cannot always improve prediction performance. Limitations and conditions would need to be clarified more carefully.\n\n- Second of all, the probabilistic treatment of decision trees as in Eq. (4) is almost like the so-called \\\"probabilistic decision trees\\\", also known as \\\"hierarchical mixtures of experts (HME)\\\" in machine learning. See the famous, widely-cited papers of Jordan & Jacobs 1994 and Bishop & Svensen 2003. This can bring joint learning of probabilistic node splitters (gating networks) and decision functions at leaves (expert networks), and is also known to bring a smoothing effect into discrete and unstable regression trees, and hence improved prediction performance. So which of the probabilistic treatment or RvC contributes to the observed improvement is unclear...\n\n- The target joint optimization of Eq. (3) is actually optimized by a number of heuristics, and it is quite unclear how it is truly optimized. In contrast, HME learning is formulated as a joint optimization (and solved by EM in the case of Jordan & Jacobs, for example).\n\n- The experiments on single datasets of a very specific speaker profiling problem could be somewhat misleading. Probably, for this specific problem, there would be other existing methods. On the other hand, if this is for benchmarking purposes, regression by neural nets and a tree ensemble (random forest or something) could be included as other baselines, and other types of regression problems could be tested.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
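Since all three reviews in the record above turn on the plain regression-via-classification recipe, a minimal sketch may help fix ideas. This is our illustration of generic RvC with ad-hoc equal-frequency bins, not the paper's neural regression tree; it also makes Reviewer 3's question about r() concrete, since r() here just returns the mean target of the predicted bin.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rvc_fit_predict(X_train, y_train, X_test, n_bins=10):
    # Ad-hoc discretization: equal-frequency bins over the training targets
    # (assumes continuous targets so every bin is non-empty).
    edges = np.quantile(y_train, np.linspace(0, 1, n_bins + 1)[1:-1])
    labels = np.digitize(y_train, edges)          # bin index in 0..n_bins-1
    bin_means = np.array([y_train[labels == k].mean() for k in range(n_bins)])
    # Any classifier can be plugged in; a tree keeps the analogy close.
    clf = DecisionTreeClassifier().fit(X_train, labels)
    return bin_means[clf.predict(X_test)]         # r() = mean of predicted bin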
ByxF-nAqYX
Locally Linear Unsupervised Feature Selection
[ "Guillaume DOQUET", "Michèle SEBAG" ]
The paper, interested in unsupervised feature selection, aims to retain the features best accounting for the local patterns in the data. The proposed approach, called Locally Linear Unsupervised Feature Selection, relies on a dimensionality reduction method to characterize such patterns; each feature is thereafter assessed according to its compliance w.r.t. the local patterns, taking inspiration from Locally Linear Embedding (Roweis and Saul, 2000). The experimental validation of the approach on the scikit-feature benchmark suite demonstrates its effectiveness compared to the state of the art.
[ "Unsupervised Learning", "Feature Selection", "Dimension Reduction" ]
https://openreview.net/pdf?id=ByxF-nAqYX
https://openreview.net/forum?id=ByxF-nAqYX
ICLR.cc/2019/Conference
2019
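The abstract above, together with the authors' responses below, pins down LLUFS's scoring step: fit LLE-style reconstruction weights W on a latent representation Z (k = 6 neighbors according to the responses), score each original feature by how badly its column violates X ~= WX, and keep the features with the lowest distortion. A minimal sketch of that step, reconstructed by us from those descriptions (not the authors' code; the regularization constant is our assumption):

import numpy as np

def lle_weights(Z, k=6, reg=1e-3):
    # Classic LLE barycentric weights: Z[i] ~= sum_j W[i, j] * Z[j].
    n = Z.shape[0]
    W = np.zeros((n, n))
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # k nearest neighbors
        G = (Z[nbrs] - Z[i]) @ (Z[nbrs] - Z[i]).T         # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)                # regularize the solve
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                          # weights sum to one
    return W

def feature_distortions(X, W):
    R = X - W @ X                 # residual of X ~= W X, one column per feature
    return (R ** 2).sum(axis=0)   # low distortion = feature fits local structure

Ranking the initial features by increasing distortion and keeping the top d then replaces the combinatorial subset search, as the responses below explain.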
{ "note_id": [ "SkxZ4XiVx4", "HygWE6dxAm", "B1giAnOxCQ", "SkerO3_x07", "Hkxcj4-lTQ", "rJxCdB-dnm", "BJgZhgKDhm", "HyeU4d1UnQ", "HJgBSRh3cQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1545020200917, 1542651176962, 1542651090780, 1542650989151, 1541571745647, 1541047670380, 1541013672547, 1540909101641, 1539259965124 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1191/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1191/Authors" ], [ "ICLR.cc/2019/Conference/Paper1191/Authors" ], [ "ICLR.cc/2019/Conference/Paper1191/Authors" ], [ "ICLR.cc/2019/Conference/Paper1191/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1191/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1191/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1191/Authors" ], [ "ICLR.cc/2019/Conference/Paper1191/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents an LLE-based unsupervised feature selection approach. While one of the reviewers has acknowledged that the paper is well-written with clear mathematical explanations of the key ideas, it also lacks a sufficiently strong theoretical foundation as the authors have acknowledged in their responses; as well as novelty in its tight connection to LLE. When theoretical backbone is weak, the role of empirical results is paramount, but the paper is not convincing in that regard.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Unconvincing novelty and empirical results\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for your review.\", \"q1\": \"\\\"First, the result of the dimensionality reduction drastically depend on the method used.\\nIt is well known that every DR method focuses on preserving certain properties of the data.\\nFor instance, PCA preserves the global structure while t-SNE works locally, maximizing the recall [1].\\nThe choice of the DR method should justify the underlying assumption of the approach.\\nI expect that the results of the experiments to change drastically by changing the DR method.\\\"\", \"a1\": \"You are right, the result depends on the DR method. However:\\ni) a linear DR does not work (as shown on the toy XOR example)\\nii) non-linear DR methods (Isomap, t-SNE, MDS, LLE) rely on the local Euclidean distance in the original space, that might be arbitrarily corrupted by random features.\\niii) The non-linear DR method in LLUFS (denoising auto-encoder with D-D/2-D/4-D/8-D/4-D/2-D neural architecture) yields stable results (e.g. 98% same features are in the top 100 selected features for all datasets w.r.t. different initialisations).\", \"q2\": \"\\\"The LLE method is based on the assumption that if the high-dimensional data is locally linear,\\nit can be projected on a low-dimensional embedding which is also locally linear.\\nTransitioning from a locally linear high-dimensional data to a lower dimension makes sense because there exists higher degree of freedom\\nin the higher dimension. 
However, making this assumption in the opposite direction is not very intuitive.\nWhy would the features that do not conform to the local linearity of the low-dimensional structure (which itself is obtained via a non-linear mapping) are insignificant?\\\"\", \"a2\": \"The approach assumes that\n- X (the initial data in D dimensions) can be mapped onto Z (latent space in D/8 dimensions) with no or little loss of information;\n- From Z, the idea is to find among X_sub (all datasets defined from X by selecting a subset of features) the best mapping in the LLE sense. From Z (dimension D/8) to X_sub, the decrease in dimensionality is still high (the evaluation considers the selected top-100 features, with 100 << D/8 except for Madelon).\", \"q3\": \"\\\"Finally, there are no theoretical guarantees on the performance of the method. Is there any guarantee that, e.g. given one noisy feature in high dimension, the method will find that feature, etc.?\\\"\", \"a3\": \"You are right, there are no theoretical guarantees for LLUFS. To the best of our knowledge, the evaluation of all unsupervised FS methods (including ours) is based on a supervised setting.\", \"q4\": \"\\\"Minor: what is the complexity of the method compared to the competing methods? What is the runtime? Is this a practical approach on large datasets?\\\"\", \"a4\": \"Theoretically: besides the complexity of learning the Auto-Encoder, the time complexity of the prior agglomerative hierarchical clustering is O(D**2) with D the number of features (up to logarithmic terms). This complexity motivates the extension proposed in the paper, to use the feature correlation within the Auto-Encoder loss to deal with redundancy (section 3.3).\nThe time complexity of the nearest neighbor search is O(D/8 n**2) with D/8 the dimension of Z and n the number of points. \nThe time complexity of computing the W matrix is O(D/8 n k**3) with k the number of neighbors, set to 6 in all problems. \nEmpirically, LLUFS is slower than LAP, SPEC, MCFS and faster than NDFS. On dataset lung (203 points, 3312 features), the respective runtimes (on a single 2.67 GHz CPU core) are:\n* 0.2 seconds for LAP.\n* 1.6 seconds for SPEC.\n* 24.5 seconds for MCFS.\n* 114.4 seconds for LLUFS (*)\n* 131.0 seconds for NDFS.\n(*) 24.4 seconds for the agglomerative clustering; 77.7 seconds for training the AutoEncoder; 12.3 seconds for the distortion step.\n\nOn dataset pixraw10P (100 points, 10,000 features):\n* 0.3 seconds for LAP.\n* 1.8 seconds for SPEC.\n* 258 seconds for MCFS.\n* 930 seconds for LLUFS (*) \n* 1646 seconds for NDFS.\n(*) 614.6 seconds for the agglomerative clustering + 300 seconds for training the AutoEncoder + 15.9 seconds for the distortion step.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your review.\", \"q1\": \"\\\"This work basically assumes that the dataset is (well) clustered. This might be true for most real world datasets,\nbut I believe the degree of clustered-ness may vary by dataset. It will be nice to discuss the effect of this. \nFor example, if most data points are concentrated on a particular area not being well clustered, \nhow much does this approach get affected? 
If possible, it will be great to formulate it mathematically, but qualitative discussion is still useful.\\\"\", \"a1\": \"Given the absence of label information, unsupervised FS algorithms rely on the assumption that there is some sort of \\\"intrinsic structure\\\" to the data.\nUnsupervised approaches [1,2,3,4,5] assume that there are some clusters, which can be well separated by an appropriate feature subset. \nAs these clusters are defined in the initial feature space, they depend on the Euclidean distance, which is arbitrarily corrupted by irrelevant features \n(except for [5], which iteratively learns a new distance during selection).\", \"llufs_proposes_another_strategy\": [\"An auto-encoder achieves the non-linear dimensionality reduction and constructs features, defining a compressed version Z of the initial data X;\", \"We now search for the subset of initial features, defining X_sub such that, if we applied LLE dimensionality reduction on Z, X_sub would be a perfect candidate (in the sense of preserving the local structure defined from W, with Z ~= WZ).\", \"The gain is that the combinatorial optimization problem of finding the best subset of features of size d can be solved in a straightforward way, as the score of each feature is its distortion: ranking the initial features by increasing distortion, the optimal set of features is the top d features.\"], \"q2\": \"\\\"For the dimension reduction, the authors used an autoencoder neural network only. What about other techniques like PCA or SVD?\\\"\", \"a2\": \"Non-separable clusters (the XOR problem) cannot be captured by a linear dimensionality reduction (PCA, SVD) method.\nIt is true that we could have used other non-linear dimensionality reduction methods (Isomap or MDS) to define a latent representation, instead of an Auto-Encoder. However, Isomap and MDS depend on the Euclidean distance in the initial feature space, and thus have the same weakness as noted in A1. \n\n\n\\\"It seems x_j is missing in the Johnson-Lindenstrauss Lemma formula.\\\"\n\nYou are right. Thank you. We fixed the typo. \n\n[1] Cai et al. (2010) \\\"Unsupervised Feature Selection for Multi-Cluster Data\\\"\n[2] Li et al. (2012) \\\"Unsupervised feature selection using non-negative spectral analysis\\\"\n[3] Li et al. (2014) \\\"Clustering-guided sparse structural learning for unsupervised feature selection\\\"\n[4] Shi et al. (2014) \\\"Robust spectral learning for unsupervised feature selection\\\"\n[5] Nie et al. 
(2016) \\\"Unsupervised Feature Selection with Structured Graph Optimization\\\"\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your review and for the reference.\", \"q1\": \"\\\"This paper seems to directly use one existing dimensionality reduction method, i.e., LLE, to explore the local structure of data\\\"\", \"a1\": [\"Actually, LLUFS uses two dimensionality reduction approaches in complementary ways, along a 2-step process:\", \"An auto-encoder achieves the non-linear dimensionality reduction and constructs features, defining a compressed version Z of the initial data X;\", \"We now search for the subset of initial features, defining X_sub such that, if we applied LLE dimensionality reduction on Z, X_sub would be a perfect candidate (in the sense of preserving the local structure defined from W, with Z ~= WZ).\", \"The gain is that the combinatorial optimization problem of finding the best subset of features of size d can be solved in a straightforward way as the score of each feature is its distorsion: ranking the initial features by increasing distorsion, the optimal set of features is the top d features.\"], \"q2\": \"\\\"Why uses LLE rather than other methods such as LE? What are the advantages?\\\"\", \"a2\": \"Linear embedding can hardly be used in the first step if we want to capture non-separable patterns (e.g. XOR) in the initial representation.\\nAs for the second step, prior work such as [1,2,3] indicate that in order to be efficient, feature scoring must reflect data structure on a local scale. \\nThis observation motivates using the proposed distorsion score over global-scale methods such as PCA.\", \"q3\": \"\\\"Authors state that the method might be biased due to the redundancy of the initial features.\\nTo my knowledge, there are some unsupervised feature selection to explore the redundancy of the initial features, such as the extended work of Li et al. (2012) \\\"Unsupervised Feature Selection via Nonnegative Spectral Analysis and Redundancy Control\\\".\\\"\", \"a3\": \"The authors of (Li et al., 2015) improve on NDFS [4] through an additional term on the feature importance matrix, penalizing the selection of correlated features.\\nAt the moment, feature redundancy is taken into account in LLUFS prior to launching the Auto-Encoder: using the feature correlation within the Auto-Encoder loss is a perspective for further work (section 3.3).\", \"q4\": \"\\\"How about the computational complexity of the proposed method?\\\"\", \"a4\": \"Theoretically: Besides the complexity of learning the Auto-Encoder, the time complexity of the prior agglomerative hierarchical clustering is O(D**2) with D the number of features (up to logarithmic terms). This complexity motivates the proposed extension (Q3).\\nThe time complexity of the nearest neighbor search is O(D/8 n**2) with D/8 the dimension of Z and n the number of points. \\nThe time complexity of computing the W matrix is O(D/8 n k**3) with k the number of neighbors set to 6 in all problems. \\nEmpirically, LLUFS is slower than LAP, SPEC, MCFS and faster than NDFS. 
On dataset lung (203 points, 3312 features), the respective runtimes (on a single 2.67 GHz CPU core) are:\\n* 0.2 seconds for LAP.\\n* 1.6 seconds for SPEC.\\n* 24.5 seconds for MCFS.\\n* 114.4 seconds for LLUFS (*)\\n* 131.0 seconds for NDFS.\\n\\n(*) 24.4 seconds for the agglomerative clustering; 77.7 seconds for training the AutoEncoder; 12.3 seconds for the distortion step.\\n\\nOn dataset pixraw10P (100 points, 10 000 features):\\n* 0.3 seconds for LAP.\\n* 1.8 seconds for SPEC.\\n* 258 seconds for MCFS.\\n* 930 seconds for LLUFS (*) \\n* 1646 seconds for NDFS.\\n(*) 614.6 seconds for the agglomerative clustering + 300 seconds for training the AutoEncoder + 15.9 seconds for the distortion step.\\n\\nQ5 \\\"Finally, the equation above Eq. 8 may be wrong.\\\"\\nYou are right. Thank you. We fixed the typo. \\n\\n[1] Cai et al. (2010) \\\"Unsupervised Feature Selection for Multi-Cluster Data\\\"\\n[2] Qian and Zhai (2013) \\\"Robust Unsupervised Feature Selection\\\"\\n[3] Liu et al. (2014) \\\"Global and local structure preservation for feature selection\\\"\\n[4] Li et al. (2012) \\\"Unsupervised feature selection using non-negative spectral analysis\\\"\"}", "{\"title\": \"Locally Linear Unsupervised Feature Selection\", \"review\": \"This paper focuses on the problem of unsupervised feature selection, and proposes a method by exploring the locally linear embedding. Experiments are conducted to show the performance of the proposed locally linear unsupervised feature selection method. There are some concerns to be addressed.\\n\\nFirst, the novelty and motivation of this paper are not clear. This paper seems to directly use one existing dimensionality reduction method, i.e., LLE, to explore the local structure of data. Why use LLE rather than other methods such as LE? What are the advantages?\\n\\nSecond, in Section 3.3, authors state that the method might be biased due to the redundancy of the initial features. To my knowledge, there are some unsupervised feature selection methods that explore the redundancy of the initial features, such as the extended work of Li et al. (2012) \\\"Unsupervised Feature Selection via Nonnegative Spectral Analysis and Redundancy Control\\\". \\n\\nThird, how about the computational complexity of the proposed method? It is better to analyze it theoretically and empirically.\\n\\nFinally, the equation above Eq. 8 may be wrong.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Locally Linear Unsupervised Feature Selection\", \"review\": \"In this paper, the authors presented Locally Linear Unsupervised Feature Selection (LLUFS), where a dimensionality reduction is first performed to extract data patterns, which are used to evaluate compliance of features to the patterns, applying the idea of Locally Linear Embedding.\\n\\n1. This work basically assumes that the dataset is (well) clustered. This might be true for most real-world datasets, but I believe the degree of clustered-ness may vary by dataset. It will be nice to discuss the effect of this. For example, if most data points are concentrated on a particular area not being well clustered, how much is this approach affected? If possible, it will be great to formulate it mathematically, but qualitative discussion is still useful.\\n\\n2. For the dimension reduction, the authors used an autoencoder neural network only. What about other techniques like PCA or SVD? 
Theoretical and experimental comparison should be interesting and useful.\\n\\n3. This paper is well-written, clearly explaining the idea mathematically. It is also good to mention the limitations and future directions of this work. It is also good to cover a corner case (the XOR problem) in detail.\\n\\n4. Minor comments:\\n - Bold face is recommended for vectors and matrices. For instance, 1 = [1, 1, ..., 1]^T, where we usually denote the left-hand 1 in bold-face.\\n - It seems x_j is missing in the Johnson-Lindenstrauss Lemma formula. As it is, \\\\sum_j W_{i,j} is constrained to be 1, so the formula does not make sense.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"The paper lacks a solid motivation\", \"review\": \"Summary: The paper proposes the LLUFS method for feature selection. The idea is to first apply a dimensionality reduction method on the input data X to find a low-dimensional representation Z. Next, each point in Z is represented by a linear combination of its nearest neighbors by finding a matrix W which minimizes ||Z - WZ||. Finally, these weights are used to assess the distortion of every feature in X by considering the reconstruction loss in the original space.\", \"comments\": \"There are multiple shortcomings in the motivation of the approach. First, the result of the dimensionality reduction drastically depends on the method used. It is well known that every DR method focuses on preserving certain properties of the data. For instance, PCA preserves the global structure while t-SNE works locally, maximizing the recall [1]. The choice of the DR method should justify the underlying assumption of the approach. I expect the results of the experiments to change drastically by changing the DR method.\\n\\nSecond, the LLE method is based on the assumption that if the high-dimensional data is locally linear, it can be projected on a low-dimensional embedding which is also locally linear. Transitioning from locally linear high-dimensional data to a lower dimension makes sense because there are more degrees of freedom in the higher dimension. However, making this assumption in the opposite direction is not very intuitive. Why would the features that do not conform to the local linearity of the low-dimensional structure (which itself is obtained via a non-linear mapping) be insignificant?\\n\\nFinally, there are no theoretical guarantees on the performance of the method. Is there any guarantee that, e.g. given one noisy feature in high dimension, the method will find that feature, etc.?\\n\\nMinor: what is the complexity of the method compared to the competing methods? What is the runtime? Is this a practical approach on large datasets?\\n\\nOverall, I do not agree with the assumptions of the paper, nor am I convinced by the experimental study. Therefore, I vote for reject.\\n\\n[1] Venna et al. \\\"Information retrieval perspective to nonlinear dimensionality reduction for data visualization.\\\" Journal of Machine Learning Research 11, no. 
Feb (2010): 451-490.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Clarifying typos\", \"comment\": \"We spotted three typos that could hinder the reader's comprehension:\\n\\n- In the notations section, the unnormalized Laplacian should read \\\"L = Delta - S\\\" (instead of \\\"L = M - S\\\")\\n\\n- At the beginning of page 3, in the formal background section, the first eigenvector should read: \\\"Xi_0 = Delta^(1/2) 1\\\" (instead of \\\"Xi_0 = M^(1/2) 1\\\")\\n\\n- In appendix 1, the Laplacian score should read: \\\"L_j = (1/sigma_j) * Sum_(i,k) (X[i,j] - X[k,j])S_(i,k)\\\"\\n\\nWe apologize for these errors and hope these clarifications prove useful.\"}", "{\"title\": \"Anonymity issue fixed\", \"comment\": \"Thank you for bringing this blunder to our attention.\\nThe paper is now properly anonymized.\"}" ] }
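The LLUFS scoring step discussed throughout the thread above (latent representation Z from an auto-encoder, k = 6 nearest neighbors, weights W minimizing ||Z - WZ||, features ranked by increasing distortion) can be summarized in a short sketch. This is our own NumPy illustration of that description, not the authors' code; the function names and the regularization constant are assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle_weights(Z, k=6, reg=1e-3):
    # Barycentric weights of each latent point over its k nearest neighbors,
    # minimizing ||z_i - sum_j W[i, j] z_j||^2 subject to sum_j W[i, j] = 1.
    n = Z.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(Z).kneighbors(Z)
    idx = idx[:, 1:]                         # drop the point itself
    W = np.zeros((n, n))
    for i in range(n):
        G = Z[idx[i]] - Z[i]                 # neighbors in local coordinates
        C = G @ G.T                          # k x k local Gram matrix
        C += reg * np.trace(C) * np.eye(k)   # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, idx[i]] = w / w.sum()           # enforce the sum-to-one constraint
    return W

def feature_distortions(X, W):
    # Distortion of each original feature under the latent-space weights:
    # low distortion means the feature complies with the local structure of Z.
    return ((X - W @ X) ** 2).sum(axis=0)

# Usage: rank features by increasing distortion and keep the top d.
# scores = feature_distortions(X, lle_weights(Z, k=6))
# selected = np.argsort(scores)[:d]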
HklKWhC5F7
How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification
[ "Suhua Lei", "Huan Zhang", "Ke Wang", "Zhendong Su" ]
Recent work has demonstrated the lack of robustness of well-trained deep neural networks (DNNs) to adversarial examples. For example, visually indistinguishable perturbations, when mixed with an original image, can easily lead deep learning models to misclassifications. In light of a recent study on the mutual influence between robustness and accuracy over 18 different ImageNet models, this paper investigates how training data affect the accuracy and robustness of deep neural networks. We conduct extensive experiments on four different datasets, including CIFAR-10, MNIST, STL-10, and Tiny ImageNet, with several representative neural networks. Our results reveal previously unknown phenomena that exist between the size of training data and characteristics of the resulting models. In particular, besides confirming that the model accuracy improves as the amount of training data increases, we also observe that the model robustness improves initially, but there exists a turning point after which robustness starts to decrease. How and when such turning points occur vary for different neural networks and different datasets.
[ "Adversarial attacks", "Robustness", "CW", "I-FGSM" ]
https://openreview.net/pdf?id=HklKWhC5F7
https://openreview.net/forum?id=HklKWhC5F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkgGeBwSxE", "HkeTVQV9RX", "SkgBFpM8CX", "HkeEEjMUCm", "SJxad5zUCX", "BJx-g_GU07", "S1xDWt79pm", "rJxmqa5s3Q", "S1gvPogc3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545069802114, 1543287604838, 1543019901307, 1543019308452, 1543019125225, 1543018473101, 1542236414831, 1541283210768, 1541176158577 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1190/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1190/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1190/Authors" ], [ "ICLR.cc/2019/Conference/Paper1190/Authors" ], [ "ICLR.cc/2019/Conference/Paper1190/Authors" ], [ "ICLR.cc/2019/Conference/Paper1190/Authors" ], [ "ICLR.cc/2019/Conference/Paper1190/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1190/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1190/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers conclude the paper does not bring an important contribution compared to existing work. The experimental study can also be improved.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I thank the authors for clarifying the contribution of the paper and for providing additional results with other measures of robustness. I have hence revised my rating.\"}", "{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for your efforts reviewing our paper and providing helpful comments and suggestions.\\n\\nFirst, since the question about the contribution is the same as the one raised by Reviewer 3, we repeat our answer here. We wish to clarify the contributions of our work in connection to Su et al., 2018. Compared to Su et al., 2018, we study robustness from a very different perspective --- we study the effect of training data on the robustness of any given model, while Su et al., 2018 study how different model architectures affect robustness. The most important contribution of our work is that, from the experimental results, we have observed that training data can significantly affect the robustness of models. Our findings motivate further work, such as how to further analyze the relationship of robustness and the distribution of training data, and how to enhance the robustness of a model by transforming the training data. \\n\\nSecond, we add in the appendix results on comparing the robustness of models in a different way by taking the mistakes of models into consideration. From our experimental results, the test accuracy usually increases with more training data. The focus of our current study is how the accuracy and robustness of models change with respect to increased training data (by randomly partitioning the original training dataset into S1 \\\\subset S2,...\\\\subset Sn, all of which are balanced). As we commented in our response to one comment in Review 2, it would be interesting future research to study how to select training data to maintain test accuracy, but deteriorate robustness.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the suggestion to explore how to explain our findings theoretically, which we agree would be interesting.\\n\\nWe wish to clarify the contributions of our work in connection to Su et al., 2018. 
Compared to Su et al., 2018, we study robustness from a very different perspective --- we study the effect of training data on the robustness of any given model, while Su et al., 2018 study how different model architectures affect robustness. The most important contribution of our work is that, from the experimental results, we have observed that training data can significantly affect the robustness of models. Our findings motivate further work, such as how to further analyze the relationship of robustness and the distribution of training data, and how to enhance the robustness of a model by transforming the training data.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for the helpful comments and suggestions. We will answer the questions as listed:\", \"q1\": \"For the motivation example I assume the following assessment holds true. Several linear functions are sampled and compose S_1, S_2, and T.\", \"a1\": \"We apologize for the unclear description in the paper. Only one linear function is used to generate natural training and testing data. In more detail, we generate the data using the function y = a * x + u, where a is a fixed value. For example, letting a = 2, we can sample the data using y = 2x + u. To generate each pair of (x, y), we randomly sample both x and u with x from the range (0, 10) and u from the range (-1, 1). Then, we compute the corresponding y using the sampled x and u.\", \"q2\": \"A single linear regression model is used to fit all the data, either S_1 or (S_1 and S_2). If that is the case the experiment is not clear to me since the single linear model can only fit the data mean, mean slope (a) and constant (mu). Since the joint dataset better captures the mean of T the error for the joint training should be lower indeed.\", \"a2\": \"Yes, in the example, the joint dataset better captures the mean of T, and the error for the joint training is lower, i.e., the mean squared error of M2 (computed on more training data) is smaller than that of M1 on T.\", \"q3\": \"However, to actually compare both values the same threshold theta should be used for both and not a percentage of their performance.\", \"a3\": \"In the example, we do actually use the same threshold \\\\theta for both M1 and M2. Our use of the symbol \\\\theta might have confused the reviewer, for which we apologize --- it is the relative error, not a percentage. In more detail, the robustness of each of the models is measured by the average distortion it can tolerate to make correct predictions for the testing data in T. Here, we say a prediction is correct if the relative error of the prediction and the label is smaller than the given threshold, \\\\theta. The same threshold is used for M1 and M2. For example, given a test sample t \\\\in T, we compute the robustness of M1 on t as follows. We continue adding bigger and bigger distortions to t by a tiny step. For each t\\u2019, we check whether the relative error between the prediction on t\\u2019 and the label exceeds the threshold \\\\theta. If it does not, we add additional distortion; if it does, we return |t\\u2019 - t| as the robustness value of M1 on t.\", \"q4\": \"I would argue that this very simple model does not provide any valuable insight into the problem due to its construction.\", \"a4\": \"Through this simple example, we want to show analytically (rather than empirically) the existence of our observed phenomenon, i.e., models trained on more data can be more accurate but less robust. 
In the example, we generate two datasets, S1 and S2 with S1 \\subset S2. Next, we compute two linear regression models M1 and M2 on S1 and S2 respectively based on a closed-form calculation. Then, we show that M2 is more accurate than M1 since the mean squared error of M2 on the testing set T is smaller than that of M1. And for each test sample t in T, we show that M1 tolerates larger distortions on t than M2 while still making correct predictions, which indicates that M2 is less robust than M1. The phenomenon, i.e., that with more training data the linear regression model can be more accurate but less robust, motivated us to conduct this extensive empirical analysis to investigate and understand if (and how much) this phenomenon holds for realistic neural network models for image classification tasks.\\n\\nThe reviewer\\u2019s last question concerns how to measure and compare the robustness of models. In this paper, we use the average distortions on the commonly correctly predicted images to compare the robustness of models. We add further results in the appendix using a different metric (including the mistakes of the models), which confirm the same pattern. Usually the test accuracy increases with more training data. It would be interesting future research to study how to select training data to maintain test accuracy, but deteriorate robustness.\"}", "{\"title\": \"Revision Summary\", \"comment\": \"We thank the reviewers for the helpful feedback, which has helped us refine our paper and guided us to clarify some important misinterpretations in the reviews, in particular, concerning the contributions of our work with respect to Su et al., 2018. We have updated our paper accordingly. A major update is the extra appendix to provide a different measurement of robustness based on questions from the reviewers.\"}", "{\"title\": \"An empirical study of the influence of training data size on model robustness\", \"review\": \"This paper conducts an empirical analysis of the effect of training data size on the model robustness to adversarial examples. The authors compared four different NN architectures using four different datasets for the task of image classification. Overall, the paper is easy to follow and clearly written.\\n\\nHowever, since Su et al., 2018, already presented similar findings, I do not see any major contribution in this paper. Additionally, I would expect the authors to conduct some more analysis of their results besides acc. and distortion levels. For example, investigate the type of mistakes the models have made, compare models with the same test acc. but different amounts of training data used to get there, some analysis/experiments to explain these findings (monitoring model parameters/gradients during training, etc.)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clear structure and presentation of the empirical evaluation but the significance of the results is not clear.\", \"review\": \"This paper empirically evaluates the effect of the training dataset size on accuracy and robustness against adversarial attacks. The methodology of the paper is generally easy to assess and the overall idea well communicated.\\n\\nFor the motivation example I assume the following assessment holds true. Several linear functions are sampled and compose S_1, S_2, and T. 
If that is the case, the experiment is not clear to me since the single linear model can only fit the data mean, mean slope (a) and constant (mu). Since the joint dataset better captures the mean of T, the error for the joint training should indeed be lower. However, to actually compare both values the same threshold theta should be used for both and not a percentage of their performance. I would argue that this very simple model does not provide any valuable insight into the problem due to its construction.\\n\\nThe experimental setup presented in Section 4 only considers examples which are classified correctly by all data subsets. However, it is crucial to also consider the mistakes of these subsequent sets. For example, the learned model for the most restrictive dataset is most likely not exposed to a complex decision boundary; therefore it will exhibit a much smoother prediction at the cost that it will simply classify many more examples as the target class. In this case using data perturbations is not even the problem since completely different examples might be classified wrongly. Although not entirely clear, it would be very useful to consider the nearest negative neighbor in the dataset in the embedding space of the classifier to capture this problem at least partially. In general, if the test accuracy is lower, the learned classifier exhibits less performance; thus, adversarial examples and distorted examples are not the main issue since it simply makes mistakes on visually different examples. Therefore, the overall analysis should be much more focused on models which achieve the same test performance but require less data to achieve this performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Empirical study of variation of accuracy and robustness of networks versus training data size\", \"review\": \"The paper presents an empirical study of how accuracy and robustness vary with increasing training data for four different data sets and CNN architectures. The main conclusion of the study is that while training accuracy generally increases with increasing training data, provided sufficient training data is available for training the network in the first place, the robustness on the other hand does not necessarily increase, and may even decrease.\\n\\nSimilar findings were presented previously in Su et al., 2018. Hence, the current paper contains incremental and marginal new findings versus the existing literature. The paper would also have been a lot stronger and significantly advanced our scientific understanding of the problem if the authors had made some attempt at trying to explain their findings theoretically. In its current form the paper does not contain sufficient contributions for acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
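The per-example robustness measure debated in the thread above (A3 of the reply to Reviewer 2: grow the distortion of a test point in small steps until the relative error of the prediction exceeds the threshold theta, then report the distortion magnitude) amounts to a few lines. A minimal sketch for the scalar linear-regression motivating example; the one-sided search, step size, and budget are our own illustrative assumptions, not values from the paper.

def robustness(predict, t, y, theta, step=1e-3, budget=10.0):
    # Smallest distortion of the input t after which the relative error of
    # the prediction exceeds the tolerance theta; returns the budget if the
    # prediction never becomes incorrect within it.
    delta = 0.0
    while delta < budget:
        if abs(predict(t + delta) - y) / abs(y) > theta:
            return delta
        delta += step
    return budget

# Example with a fitted linear model M1 (coefficients a1, u1 are hypothetical):
# rob = robustness(lambda x: a1 * x + u1, t, y_true, theta=0.1)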
ryetZ20ctX
Defensive Quantization: When Efficiency Meets Robustness
[ "Ji Lin", "Chuang Gan", "Song Han" ]
Neural network quantization is becoming an industry standard to efficiently deploy deep learning models on hardware platforms, such as CPU, GPU, TPU, and FPGAs. However, we observe that the conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise people's awareness about the security of the quantized models, and we designed a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models. We first conduct an empirical study to show that vanilla quantization suffers more from adversarial attacks. We observe that the inferior robustness comes from the error amplification effect, where the quantization operation further enlarges the distance caused by amplified noise. Then we propose a novel Defensive Quantization (DQ) method by controlling the Lipschitz constant of the network during quantization, such that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and even achieves superior robustness than their full-precision counterparts, while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ can also improve the accuracy of quantized models without adversarial attack.
[ "defensive quantization", "model quantization", "adversarial attack", "efficiency", "robustness" ]
https://openreview.net/pdf?id=ryetZ20ctX
https://openreview.net/forum?id=ryetZ20ctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1lPQ2VreE", "r1l21JD9RQ", "rJlCsNCx0X", "SJlagR_gR7", "H1lNfHKMTQ", "BkxPH1-xp7", "SylvEsdV3m", "rygxXn_fhQ", "S1e96wfRoQ", "BJeTRnNpjX", "Hyly7UCijQ", "H1lOYy3joQ", "S1gHry3ijQ", "SyxwKb2hcX", "BkgmUBVncQ", "rkx4Xm72cm", "ryly95Z29X", "ByguRKJhqQ", "Hkehlty29X", "ryxW9o4Z9m", "HklDKcVZqQ" ], "note_type": [ "meta_review", "official_comment", "comment", "official_review", "official_comment", "comment", "official_review", "official_review", "comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment" ], "note_created": [ 1545059358805, 1543298787611, 1542673574358, 1542651380909, 1541735691885, 1541570367413, 1540815662840, 1540684824224, 1540396994036, 1540340948812, 1540249111273, 1540239232503, 1540239165378, 1539256702679, 1539224907376, 1539220252039, 1539213959069, 1539205583835, 1539205363529, 1538505608946, 1538505343296 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1189/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1189/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "~Nicholas_Carlini1" ], [ "ICLR.cc/2019/Conference/Paper1189/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1189/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "ICLR.cc/2019/Conference/Paper1189/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers agree the paper brings a novel perspective by controlling the conditioning of the model when performing quantization. The experiments are convincing experiments. We encourage the authors to incorporate additional references suggested in the reviews. We recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, accept.\"}", "{\"title\": \"Re: Good research work with clear arguments well supported by rigorous experiments.\", \"comment\": \"Thank you so much for the detailed feedback and advice.\\n\\n1. We conducted an empirical study to find the reason for inferior robustness in Section 3 and Figure 3.\\n\\n2. We appreciate the reviewer for the advice. The orthogonal regularization is an effective method to regularize the Lipschitz constant of the network, but indeed, it might not be the optimal strategy. We believe that the robustness of the quantized network is related to the specific form of Lipschitz regularization, and we will try the recommended term of Lipschitz regularization to see if it works better than orthogonal regularization.\\n\\nOn the other hand, one important reason why we used the orthogonal regularization is the computation efficiency. A simple way to speed up the calculation of orthogonal regularization is row sampling. In our experiments, we found that we can achieve similar regularization effect when sampling less than 30% of the weight matrix rows to conduct orthogonal regularization at each step, saving more than 90% of the computation of calculating regularization terms (which is already small compared to network training). 
In such a case, the orthogonal regularization could be even more efficient than the power iteration.\\n\\n3. We have tried the weight clipping approach for controlling the Lipschitz constant as proposed in Wasserstein GAN (Arjovsky et al.), but it performed worse than orthogonal regularization according to our experiments. We are interested in trying out whether penalizing the norm of the Jacobian will work better.\"}", "{\"comment\": \"I am trying to reproduce the results of this paper (mainly the claims that R+FGSM training is just as effective as PGD adversarial training, see the long discussion below regarding Table 3). I have tried taking the CIFAR-10 code from ( https://github.com/MadryLab/cifar10_challenge ) and implementing the R+FGSM attack during training as described in this paper. However, I have been unable to reproduce the claim of 43% robustness at eps=8.\\n\\nWould the authors be willing to release their source code?\", \"title\": \"Unable to reproduce R+FGSM results\"}", "{\"title\": \"Good research work with clear arguments well supported by rigorous experiments.\", \"review\": \"The paper is well written with clear motivation and very easy to follow. \\nThe core idea of using an orthogonal regulariser for improving the robustness of neural network models has been presented in Cisse et al. and the authors re-use it for improving the robustness of quantised models. The main contribution of this work is in identifying that the standard quantised models are very vulnerable to adversarial noise, which is illustrated through experiments, and then empirically showing that the regulariser presented in Cisse et al. improves the robustness of quantised models with rigorous experiments. The paper adds value to the research community through a thorough experimental study as well as in industry, since quantised models are widely used and the presented model is simple and easy to use.\", \"some_suggestions_and_ideas\": \"1. It will be great if the authors could add a simple analytical explanation of why the quantised networks are not robust. \\n\\n2. The manifold of Orthogonal matrices does not include all 1-Lipschitz matrices and also the Orthogonal set is not convex. I think a better strategy for this problem is to regularise the spectral norm to be 1. Regularising the spectral norm is computationally cheaper than the Orthogonal regulariser when combined with SGD using power iterations. Moreover the regulariser part of the model becomes nice and convex.\\n\\n3. Another strategy to control the Lipschitz constant of the network is to directly penalise the norm of the Jacobian as explained in Improved Training of Wasserstein GANs (Gulrajani et al.).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Re: A simple regularization scheme that efficiently protects quantized models from adversarial attacks\", \"comment\": \"Thank you so much for the feedback. Below are our answers to the questions.\\n\\n- Lipschitz regularization has been proposed in previous papers, and the robustness of full-precision models can also be improved when combined with Lipschitz regularization. 
However, there is a significant difference between applying a Lipschitz regularization term to a full-precision model and to a quantized model: when the quantized model is trained with Lipschitz regularization, we found it more robust than the original full-precision model, and it is **even more robust** than the full-precision model trained with the same Lipschitz regularization. That is to say, when we introduce the Lipschitz regularization term, quantization itself can be used as an effective denoiser to reduce the adversarial perturbation, as much of the perturbation strength is smaller than the quantization bucket width. Thus we call the method Defensive Quantization.\\nActually, we have already shown such an effect in Table 1 of the original paper. The column *Quantize Gain* shows the adversarially attacked accuracy improvement of the quantized model compared to the full-precision model, trained with exactly the same Lipschitz regularization term. Without Lipschitz regularization, quantized models are less robust than full-precision ones (-9.1%). With Lipschitz regularization, the quantized models are consistently more robust.\\nIn short, (1) conventional quantized models are less robust. (2) Lipschitz regularization makes the model robust. (3) Lipschitz regularization + quantization makes the model even more robust. The reviewer has noticed (1)(2), but we also want to emphasize (3) in the paper. \\n\\n- Quantization has become an industry standard for deep learning hardware. Making quantized models robust is an important topic that concerns billions of AI devices. To deploy models on GPU/TPU/FPGA/Mobile phones, we need quantization to reduce the storage, inference time and energy. Many companies have provided both hardware and software to support quantized models. For example, TensorFlow-Lite supports running quantized models on mobile phones [1] to reduce inference time and storage. NVIDIA has released TensorCore [2] to support low-bit training like INT4, INT8, and binary precision for inference.\\nThe performance of fully quantized models compared to full-precision counterparts is extensively studied in previous works like XNOR-Net [3], BNN [4], DoReFa-Net [5], where weights, activations (and even gradients) are quantized to low-bit representations. With quantization, we can achieve massive compression or speed-up at the cost of little or no accuracy loss. In short, quantization is an important topic widely adopted by industry. Making quantized models robust will benefit billions of AI devices. \\n\\n- For adversarial training, we used the quantized model itself to generate white-box adversarial samples. For white-box adversarial testing, we also used the quantized model, by definition. For black-box adversarial testing, we used another full-precision model. Since we used an STE (straight through estimator) y=x + clip_gradient(x_quant - x) to compute the gradient of the quantization operator, it behaves just like the normal full-precision model during backpropagation. Therefore we believe it does not make much difference to use full-precision or quantized models.\\n\\n- Our Lipschitz regularization term only applies to layers with parameters, i.e., convolution and fully connected layers, which make up the majority of modern networks. Our experiments also show that such a regularization term is enough to make quantized models robust. 
Also, since the size of parameters is much smaller compared to activations (we usually use a large batch size like 256 for training, while we only need to maintain a single copy of parameters), the cost for calculating the regularization is negligible according to our experiments.\\nFurthermore, since non-linear layers like ReLU themselves have a Lipschitz constant <= 1, we do not need to take special care of them.\\n\\n[1] https://www.tensorflow.org/lite/performance/model_optimization#model_quantization\\n[2] https://www.nvidia.com/en-us/data-center/tensorcore/\\n[3] Rastegari et al., XNOR-Net: Imagenet classification using binary convolutional neural networks\\n[4] Courbariaux et al., Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1\\n[5] Zhou et al., DoReFa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients\"}", "{\"comment\": \"Reviewer 1, I'm sure you have a lot going on, and having only a few weeks to review several papers can be hard, but I would encourage you to re-read the paper carefully and write a more detailed review. This may not be your area -- that's okay. In fact, given that it's not your area of focus, your perspective could be very helpful in pointing out what could be improved to make the paper more generally accessible to the broader community.\\n\\nUnfortunately, what you currently have written does not come close to resembling a complete paper review. It is disrespectful to both the authors of this paper (even if you do give it a high score, they don't get any comments on how to improve it) and the other reviewers (who put effort into writing a thorough review of the paper).\", \"title\": \"Please write a more thoughtful review\"}", "{\"title\": \"A simple regularization scheme that efficiently protects quantized models from adversarial attacks\", \"review\": \"Summary:\\nThe paper proposes a regularization scheme to protect quantized neural networks from adversarial attacks. The authors observe that quantized models become less robust to adversarial attacks if the quantization includes the inner layers of the network. They propose a Lipschitz constant filtering of the inner layers' input-output to fix the issue.\", \"strengths\": \"The key empirical observation that fully quantized models are more exposed to adversarial attacks is remarkable in itself and the explanation given by the authors is reasonable. The paper shows how a simple regularization scheme may become highly effective when it is supported by a good understanding of the underlying process.\", \"weaknesses\": \"Except for observing the empirical weakness of fully quantized models, the technical contribution of the paper seems to be limited to combining the Lipschitz-based regularization and quantization. Has the Lipschitz technique already been proposed and analysed elsewhere? If not, the quality of the paper would be improved by investigating a bit more the effects of the regularization from an empirical and theoretical perspective. If yes, are there substantial differences between applying the scheme to quantized models and using it on full-precision networks? It looks like the description of the Lipschitz method in Section 4 is restricted to linear layers and it is not clear if training is feasible/efficient in the general case.\", \"questions\": [\"has the Lipschitz technique been proposed and analysed elsewhere? 
Is the robustness of full-precision models under adversarial attacks also improved by Lipschitz regularization?\", \"how popular is the practice of quantizing inner layers? Has the performance of fully quantized models ever been compared to full-precision or partially quantized models in an extensive way (beyond adversarial attack robustness)?\", \"are the adversarial attacks computed using the full-precision or the quantized models? would this make any difference?\", \"the description of the Lipschitz regularization given in Section 4 assumes the layers to be linear. Does the same approach apply to non-linear layers? Would the training be feasible in this case?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"while i am no expert on adversarial attacks in deep learning, the results are compelling imho\", \"review\": \"imho, this manuscript is clearly written, addresses a confusing point in the current literature, clarifies some issues, and provides a novel and useful approach to mitigate those issues.\\nreading the other comments online, the authors seem to have addressed those concerns as well.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"comment\": \"In the paper you report 44% accuracy for PGD training and 43% accuracy for R+FGSM training (going up to 50% with DQ). The table above shows 36% without DQ and 43% with DQ. Which is correct?\\n\\nAlso, Madry et al. uses 40 iterations of PGD to attack the trained model, not 7 (which is used during training).\", \"title\": \"Those results are different\"}", "{\"title\": \"Our attacker is not broken, and the results are correct\", \"comment\": \"We have confirmed with the authors of Madry et al. (2018) about the details of R+FGSM adversarial training in their OpenReview reply. To make a fair comparison, we re-implemented their R+FGSM training and tested the accuracy under PGD attack using their hyper-parameters (step_size=2, n_step=7, eps=8) compared to our R+FGSM adversarially trained model. The results are listed below:\\n\\n\\t\\t\\t\\t\\t\\t\\t\\toriginal\\t\\tPGD (Madry\\u2019s) attacked accuracy\\nMadry\\u2019s adv. R+FGSM training\\t\\t93.130\\t\\t2.8\\nOur adv. R+FGSM training\\t\\t\\t91.61\\t\\t36.05\\nOur adv. R+FGSM training + DQ\\t94.0\\t\\t\\t43.23\\n\\nWe can see that using R+FGSM (Madry\\u2019s) adversarial training, the resulting model indeed reaches near 0% accuracy under PGD attack, while the model trained with our R+FGSM adversarial training is resistant to the attack. And DQ further improves the robustness. The reason is not due to the attacker, since we used the same attacker, but we have a stronger defense method. We think the difference is due mainly to 2 reasons:\\n1. In Madry\\u2019s R+FGSM adversarial training, they used a fixed eps, the same as in testing (here they use eps=8). While in our method, the eps is sampled from a truncated normal distribution ranging (0, 16) during training, so that the model does not overfit to a certain epsilon. Our method can generate noise with a bigger infinity norm (up to 16) and thus provides better robustness. At the same time, it also reduces overfitting to the R+FGSM adversarial samples themselves and prevents gradient masking.\\n2. 
In Madry\\u2019s R+FGSM, they first sample a noise within [-eps, eps], and take a step with size 2*eps followed by clipping, so that the noise reaches a corner of the box. The value in the noise is one of {-eps, eps}. While with our implementation, the value of noise is one of {-eps, 0, eps}, which is more representative.\\nIn conclusion, PGD attack can break Madry\\u2019s R+FGSM adversarial training but cannot break our R+FGSM adversarial training. Therefore, we think our results are correct and the attacker is not broken.\"}", "{\"comment\": \"I'm sorry, but while technically you are right that they are different, for all intents and purposes, they are the same.\\n\\nAs you say in your paper, R+FGSM does the following: choose a constant e1, e2 so that e1<e2. For an image x first let x1 = x + e1*R where R is chosen to be randomly component-wise either -1 or 1. Then, let x2 = FGSM(x1, e).\\n\\nPGD, as described by Madry et al. (2018) does the following: choose a constant e. For an image x first let x1 = x + e*R where each entry of R is chosen *uniformly* from [-1,1]. Then, let x2 = FGSM(x1,e) (and repeat as necessary, but we're talking about 1 iteration here).\\n\\nWhile technically there is a difference (in how the initial noise sample is selected), they are essentially identical. But okay, technically different. To re-phrase my question then, your paper is claiming that if I do R+FGSM as described above, with epsilon=8, on CIFAR-10, the resulting model will have 43% accuracy? Is this the claim you are making?\", \"regarding_the_comparisons\": \"I understand you're not trying to compare the two approaches. However, in order to ensure that DQ is actually effective, it is important to also ensure that you get everything else right.\\n\\nAs of right now, it appears that your attack algorithm is somehow broken, because it should be possible to still attack a R+FGSM trained model and reduce its accuracy to near-0%. Because you can't attack this baseline that is known to be broken successfully, it raises doubts about the evaluation of your DQ defense.\", \"title\": \"R+FGSM is nearly the same as 1 step of PGD\"}", "{\"title\": \"Not related to the claim of our paper\", \"comment\": \"In the link you provided, the author uses an iterative version of R+FGSM with multiple random starts, which is different from ours.\\n\\nAs mentioned in Section 2.2.1, we used the varying attack strength following Song et al. (2016), so that we can test the model\\u2019s robustness under different strength. Although we used a smaller step size, we provided the results under eps=16 for PGD, which is a much stronger attack than Madry\\u2019s. And our DQ method consistently outperforms normal network and VQ.\\n\\nFurthermore, all our experiments are conducted under the same setting for vanilla quantization and defensive quantization. Therefore, it does not affect the conclusion that DQ is more robust than VQ. What we try to demonstrate is that DQ is more robust than VQ, but not to compare adversarial R+FGSM training with adversarial PGD training.\"}", "{\"title\": \"R+FGSM is different from 1-step PGD\", \"comment\": \"No. Even if you use n_step=1 for PGD, it is still different from R+FGSM. Please refer to equation (2) for details. Since we used an iterative PGD, we do not know how it behaves to use PGD with n_step=1.\\n\\nWe would like to stress that our paper aims to demonstrate the effectiveness of Defensive Quantization, not to compare R+FGSM adversarial training with PGD adversarial training. 
And all the experiments are conducted under the same setting for strict comparison. Comparison with other defense methods is beyond the scope of our paper.\"}", "{\"comment\": \"In this comment ( https://openreview.net/forum?id=rJzIBfZAb&noteId=HkRKZDTQM ), the authors claimed that when adversarially *training* the network using R+FGSM, it overfits to R+FGSM and is completely vulnerable to PGD.\\n\\nHowever, the result in Table 3 (R+FGSM training without DQ under PGD attacks) is contradictory and I think that the difference is because of the parameters of PGD attacks which are different from Madry et al. (2018), especially the small step-size, alpha=1. See Section 2.2.1.\", \"title\": \"It seems that the difference is due to the different parameters in attack not training\"}", "{\"comment\": \"To confirm then, the claim your paper makes is that if I take PGD adversarial training on CIFAR-10, and reduce the number of iterations to 1, that it is not less effective than before? (Before: Madry et al. use N=7 steps of PGD during training, a step size of 2/255, and a bound of eps=8/255. After: this paper proposes N=1 steps of PGD during training, a step size of 8/255, and a bound of eps=8/255.)\", \"looking_at_table_3\": \"PGD adversarial training reaches 44% accuracy on eps=8 PGD attacks, and R+FGSM adversarial training reaches 43% accuracy on eps=8 PGD attacks.\\n\\nIs this correct?\", \"title\": \"Clarifying your claims\"}", "{\"title\": \"We used PGD (Madry et al., 2018) but not BIM for adversarial training\", \"comment\": \"No, we *strictly* used the PGD as proposed by Madry et al. (2018) for adversarial training and attack evaluation, *NOT* the BIM without a random start. We mentioned R+FGSM is less likely to cause gradient masking compared to *FGSM* because of the random start, but not compared to *PGD*.\\n\\nUnder stronger attack like PGD, our experiments have consistently shown that defensive quantization outperforms vanilla quantization, and bridges the gap between efficiency and robustness.\"}", "{\"comment\": \"Reading the text more carefully, it appears you are not performing the strong version of adversarial training as proposed by Madry et al. (2018), but instead the BIM as used by Kurakin et al. (2016). The PGD implementation as proposed by Madry et al. (2018) takes a single random step and then performs PGD from there. That is to say, R-FGSM is identical to 1 step of PGD. So the extra random steps cannot be the reason for preventing gradient masking. Calling this \\\"Adversarial PGD\\\" is therefore deceptive.\", \"title\": \"R-FGSM is strictly weaker than PGD\"}", "{\"title\": \"We already demonstrated the robustness of DQ under strong attack. We added strong attack results to Figure 5 and the result is consistent.\", \"comment\": \"We used FGSM for motivation, but we used PGD in experiments (Table 2/3). Even under weak attacks, vanilla quantization suffers. Even under strong attacks, defensive quantization is more robust. We have already shown the robustness of DQ under strong attack (PGD) in Table 2 and 3, both white and black box.\\n \\nWe chose the FGSM attack for motivation because it is most transferable (Su et al., 2018) and thus best for mounting black-box attacks (Kurakin et al., 2017), while the black-box robustness is more essential for real deployed models. \\nTo address the reviewer\\u2019s concern and make our claim stronger, we added the results of PGD attack (\\u03b5=8) (Madry et al., 2018) to Figure 5. 
The corresponding results are listed below:\", \"accuracy_under_pgd_attack\": \"n_bit\\t1\\t2\\t3\\t4\\n----------- white-box -----------\\nVQ\\t\\t1.6\\t0.7\\t0.2\\t0.4\\nDQ\\t\\t1.3\\t1.0\\t2.0\\t1.1\\n----------- black-box -----------\\nVQ\\t\\t35.7\\t59.2\\t64.0\\t65.0\\t\\nDQ\\t\\t69.7\\t68.0\\t68.9\\t68.3\\n\\nFor black-box under PGD attack, our Defensive Quantization (DQ) consistently outperformed Vanilla Quantization (VQ) by a large margin. For example, under 1-bit quantization, VQ got only 35.7% accuracy, while DQ still maintains 69.7% accuracy. The trend is similar to the FGSM attacked results written in the paper.\\nFor white-box, since the models are normally trained without adversarial training, the white-box accuracy is near zero for both VQ and DQ; this is another reason why we use FGSM for the motivation.\"}", "{\"title\": \"No Contradiction with Prior Conclusion. We Observed Consistent Trends Demonstrated by Other Paper\", \"comment\": \"Thanks. R+FGSM was proposed in Tramer et al. (2018), where the authors used R+FGSM for the black-box based adversarial training. Previous work showed adversarial *FGSM* training is weaker than PGD, but not *R-FGSM*. No prior work has conducted comparisons between white-box R+FGSM and PGD adversarial training, and thus there is no conclusion on whether one of them is significantly better. In fact, compared with FGSM, R-FGSM introduced randomness, making it less likely to cause gradient masking than FGSM.\\n\\nWe do find a similar result (https://openreview.net/forum?id=ryxeB30cYX) where the author finds that randomized one-step adversarial training can achieve comparable and even better robustness than PGD adversarial training. R+FGSM adversarial training is also a randomized one-step adversarial training method, and therefore also achieves comparable or better robustness than PGD adversarial training. Since adversarial R+FGSM training is more stable, it would be possible to achieve better robustness under certain evaluations.\"}", "{\"comment\": \"Table 3 appears to show that training with R+FGSM is more robust than full PGD adversarial training, consistently across every entry, even without DQ. This contradicts much prior work, especially Madry et al. (2018). Do the authors believe there is something interesting going on to cause this?\", \"title\": \"Confusing Table 3 Results\"}", "{\"comment\": \"This paper makes many of its claims (e.g., Table 1, Figure 5) by evaluating against FGSM. Unfortunately, many times results that appear correct using FGSM do not hold true against the stronger attacks of Kurakin et al. (2017), Carlini & Wagner (2017), Madry et al. (2018). It would strengthen the paper to use one of these attacks consistently.\", \"title\": \"FGSM results are limiting\"}" ] }
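Two of the objects debated in this thread are compact enough to sketch. First, the Lipschitz control behind Defensive Quantization is an orthogonality penalty on each weight matrix; the row-sampling speed-up mentioned in the reply to Reviewer 4 is included. A minimal PyTorch sketch under our own naming; the exact weighting of the term is not taken from the paper.

import torch

def orthogonal_penalty(weight, sample_frac=1.0):
    # ||W W^T - I||^2: pushes each (flattened) layer toward an orthogonal,
    # hence roughly non-expansive, map. sample_frac < 1 applies the row
    # sampling mentioned above (~30% of rows per step).
    W = weight.reshape(weight.shape[0], -1)
    if sample_frac < 1.0:
        k = max(1, int(sample_frac * W.shape[0]))
        W = W[torch.randperm(W.shape[0])[:k]]
    gram = W @ W.t()
    eye = torch.eye(gram.shape[0], device=W.device)
    return ((gram - eye) ** 2).sum()

Second, the R+FGSM versus PGD dispute is easier to follow with both attacks written out. A sketch following the descriptions quoted in the comments (Tramer et al., 2018 and Madry et al., 2018); clipping to the valid pixel range is omitted for brevity.

import torch
import torch.nn.functional as F

def fgsm_step(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def r_fgsm(model, x, y, eps, alpha):
    # Random signed step of size alpha, then one gradient-sign step of
    # size eps - alpha (Tramer et al., 2018).
    x1 = x + alpha * torch.sign(torch.randn_like(x))
    return fgsm_step(model, x1, y, eps - alpha)

def pgd(model, x, y, eps, step, n_steps=7):
    # Uniform random start in the eps-ball, then n_steps projected
    # gradient-sign steps (Madry et al., 2018).
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        x_adv = fgsm_step(model, x_adv, y, step)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project
    return x_adv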
SJldZ2RqFX
D-GAN: Divergent generative adversarial network for positive unlabeled learning and counter-examples generation
[ "Florent CHIARONI. Mohamed-Cherif RAHAL. Nicolas HUEBER. Frédéric DUFAUX." ]
Positive Unlabeled (PU) learning consists in learning to distinguish samples of our class of interest, the positive class, from the counter-examples, the negative class, by using positive labeled and unlabeled samples during the training. Recent approaches exploit the GANs abilities to address the PU learning problem by generating relevant counter-examples. In this paper, we propose a new GAN-based PU learning approach named Divergent-GAN (D-GAN). The key idea is to incorporate a standard Positive Unlabeled learning risk inside the GAN discriminator loss function. In this way, the discriminator can ask the generator to converge towards the unlabeled samples distribution while diverging from the positive samples distribution. This enables the generator convergence towards the unlabeled counter-examples distribution without using prior knowledge, while keeping the standard adversarial GAN architecture. In addition, we discuss normalization techniques in the context of the proposed framework. Experimental results show that the proposed approach overcomes previous GAN-based PU learning methods issues, and it globally outperforms two-stage state of the art PU learning performances in terms of stability and prediction on both simple and complex image datasets.
[ "Representation learning. Generative Adversarial Network (GAN). Positive Unlabeled learning. Image classification" ]
https://openreview.net/pdf?id=SJldZ2RqFX
https://openreview.net/forum?id=SJldZ2RqFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SygMo1zllE", "HkeDSYqSkE", "rJeLfUqHJ4", "HkxbRJNCRm", "BkePN_Q0Am", "ryxSFRFuCm", "r1xj3iKdCX", "Bkej0zBdAX", "Bkgh7MSu0Q", "S1giigBd07", "SyxHcFfl0m", "HkeOL7GeAX", "B1lmrzQ937", "BkxY2b-5hX", "SygDGpKthQ", "SJlKFsUy3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1544720281570, 1544034623287, 1544033805764, 1543548873006, 1543546926772, 1543179900616, 1543179187029, 1543160531448, 1543160355928, 1543159970568, 1542625676937, 1542624079679, 1541186107274, 1541177776963, 1541147919174, 1540479872600 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1188/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/Authors" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1188/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"With positive unlabeled learning the paper targets an interesting problem and proposes a new GAN based method to tackle it. All reviewers however agree that the write-up and the motivation behind the method could be made more clear and that novelty compared to other GAN based methods is limited. Also the experimental analysis does not show a strong clear performance advantage over existing models.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Proposed model targets an interesting problem but paper could need a bit more work\"}", "{\"title\": \"Answers to your question\", \"comment\": \"The presented section \\u201cWithout discriminator batch normalization\\u201d discussing normalization techniques is also relevant for the GenPU method.\\n\\nSince very recently, the spectral normalization (SN) is a GAN state of the art normalization technique. For example, recent interesting GAN models like the SAGAN (\\u201cSelf-Attention Generative Adversarial Network\\u201d, 2018) propose to use it. \\n\\nSN is applied on the network weights. So several distribution of minibatches can be manipulated by D during the training with SN. Thus both D-GAN and GenPU can benefit to this normalization technique to accelerate and stabilize their respective adversarial trainings. We are currently testing SN on the D-GAN discriminator weights.\\nThe discussion part will consequently also discuss in the last version the SN technique. \\n\\nHowever, we claim that the D-GAN is relatively simpler to adapt to the new standard (One D and one G) GAN variants than GenPU because in practice, the D-GAN framework simply consists in adding a loss term in the GAN discriminator loss function, without adding any additional hyper-parameters. 
A GAN working without batch normalization can be adapted in this way to the proposed D-GAN framework without hyper-parameters tuning. When several changes appear in the recent GAN variants, it is more practical, from an implementation point of view, to adapt a GAN variant to the D-GAN framework than doing the contrary. This is harder to do with the GenPU algorithm implementation. For example, the SAGAN adds self-attention layers operations. The progrGAN (\\u201cProgressive growing of GANs for improved Quality, Stability, and Variation\\u201d, ICLR 2018) changes drastically the GAN training implementation. \\n\\nConcerning the computational cost. If we suppose that both GenPU and D-GAN need the same number of training epoch iterations to converge, then training five learning models instead of two is 2.5 times more computational demanding. This becomes a problem on high dimensional data. For example, the progrGAN (one D and one G) training time for CelebA-HQ is two weeks with one Tesla V100 GPU. If a user intends to do some GAN-based PU prototyping tests, then using two learning models instead of five is very interesting.\\n\\n\\nWe sincerely hope these answers clarify some motivations of the proposed approach. Do not hesitate to highlight what remains unclear. This helps us a lot to better communicate the key ideas of the proposed approach. \\n\\nWe thank you for your constructive comments.\"}", "{\"title\": \"Answers to your questions\", \"comment\": \"***\", \"1st_paragraph_answer\": \"The original GAN (Goodfellow et al., 2014) discriminator gets in input minibatches of unlabeled examples and generated examples.\\nConcerning the proposed approach, as illustrated in the figure 1 (updated version), the D-GAN discriminator gets in input minibatches of unlabeled examples, generated examples, and positive labeled examples.\\n\\nThe discriminator D and the generator G of the original GAN (Goodfellow et al., 2014) are alternately trained as follow:\\n -\\tD is trained to predict in output the value \\u201c1\\u201d for unlabeled examples, and the value \\u201c0\\u201d for the generated examples. \\n -\\tG is trained to generate examples which are considered as real by D. So the generator is trained to generate examples for which D predicts the output value \\u201c1\\u201d.\\n => Consequently, G learns to reproduce the unlabeled examples distribution p_U.\\n\\nConcerning the proposed D-GAN approach, the discriminator D and the generator G are alternately trained as follow:\\n -\\tD is trained to predict in output the value \\u201c1\\u201d for unlabeled examples, and the value \\u201c0\\u201d for both generated examples and positive labeled examples. \\n -\\tG is still trained to generate examples for which the discriminator predicts the output value \\u201c1\\u201d.\\nAs highlighted in the paragraph just below the equation 1 in the updated version, we recall that if D is trained to predict in output the value \\u201c1\\u201d for unlabeled examples, and the value \\u201c0\\u201d for positive labeled examples, then D is in fact trained to exclusively predict the output value \\u201c1\\u201d for the counter-examples (included in the unlabeled dataset).\\n => Consequently, G exclusively learns to reproduce the counter-examples distribution p_N included in the unlabeled distribution p_U.\\n\\nIn order to produce this new behavior in practice, we propose to add the loss function term \\u201cEp[log(1-D(Xp))]\\u201d in the original GAN discriminator loss function (see equation 2). 
This is justified intuitively in the updated version of the article just below the equation 2.\\n\\n\\n\\n***\", \"2nd_paragraph_answer\": \"The proposed approach does not follow the same intuition as the GenPU method.\\nThe proposed D-GAN approach is based on the original GAN implementation introduced by (Goodfellow et al., 2014). The original GAN implementation consists in training consecutively alternately D and G. So, we propose in our article to decompose the adversarial training such that we firstly analyze the discriminator behavior independently to the generator behavior. Then, we deduce the generator behavior which is naturally influenced by the discriminator behavior. \\n\\n\\n\\n***\", \"3rd_paragraph_answer\": \"In PN classification, we train a learning model by optimizing a risk taking into consideration positive and negative examples.\\nIn PU classification, we train a learning model by optimizing a risk taking into consideration positive and unlabeled examples. This is simply the reason why we used the expression \\u201cPU risk\\u201d to introduce the risk presented in the equation 1. \\nThe presented risk (equation 1) is simple and does not represent by itself our contribution. However, we do a brief analysis to deduce what is the behavior of a learning model trained with this risk. Then our contribution is to propose to introduce this behavior in the original GAN discriminator loss function such that it enables G to learn the counter-examples distribution.\"}", "{\"title\": \"Seems the authors misunderstood my point in the review\", \"comment\": \"I know the goal of two stage PU---it is anyway similar to GenPU. I just cannot comprehensively follow the logic of your use of GANs, because Figure 1 is unclear to me. In PU learning, you cannot apply GANs without any modification, otherwise you can only generate P or U data that is not your goal.\\n\\nIn GenPU, there are 2 Gs and 3 Ds, and experimentally this is the minimum number if we want to also generate P data; it can be reduced to 1G and 2Ds where 1D ensures that realP + genN approximates realU and 1D ensures that genN doesn't approximate realP (I am not an author of GenPU). In both ways, GenPU introduced the cluster assumption that is not assumed in other GANs (see Figure 2 of GenPU where p_{gn}(x) consistently has smaller supports than p_n(x), and the assumption \\\"p_{gn}(x) almost never overlaps with p_p(x)\\\" in their theoretical analysis). I personally think this idea of using GANs is intuitively for PU learning.\\n\\nOn the other hand, D-GAN has 1G and 1D and I would like the authors to explain in the introduction why 1D is enough for identifying p_n(x) from p(x) given p_p(x) where p(x) and p_p(x) are only approximately known via data. The authors only given some equations without intuition/motivation and they are still unclear to me. Note that in PU learning, there is no PU *risk* and the expected risk is shared by PN learning and PU learning; instead, PN and PU learning have their own *risk estimators* that can estimate the unique expected risk from PN data and PU data, respectively. So what do you mean by defining a new expected risk as a performance measure to be minimized?\"}", "{\"title\": \"Generality of D-GAN and GenPU\", \"comment\": \"I am not an expert on GANs, so I still cannot follow why D-GAN is easier to benefit from recent advances whereas GenPU is harder. Is it something like spectral normalization can be used in D-GAN but cannot be used in GenPU? 
If so, why?\"}", "{\"title\": \"Answers (Part 2)\", \"comment\": \"***\\n\\n\\u201cBased on the experiments, the proposed method achieves marginal improvement in terms of F1 score but sometimes also slightly lower performance than other GAN based such as PGAN, so the impact of this work to solve positive unlabelled data problem is not evident.\\u201c\", \"the_proposed_approach_overcomes_previous_gan_based_pu_methods_issues\": [\"GenPU issues: GenPU is not easily adaptable to the current GAN state of the art evolutions because of its untraditional adversarial framework. Moreover, GenPU uses prior knowledge. This is unpractical for example on some real incremental application datasets in which the fraction pi value can change continuously at each new training minibatch.\", \"PGAN issue: It has a first stage overfitting problem when it is applied on relatively simple datasets as MNIST. In fact, it is mentioned in their article: \\u201cIt is also known that a GAN is not perfect in its operation when it is applied to high dimensional data, \\u2026 Thus it is possible to estimate the non-zero distance d computed into the cost function of Db\\u201d. In other words, the PGAN exploits the GANs convergence defaults to address the PU learning problem.\", \"Globally this improvement is present, such that:\", \"D-GAN outperforms 75% of the time PGAN and RP methods on table 3.\", \"Numerically the gap is not very important, but the experimental results are consistent with the expected behavior mentioned in the method section such that the D-GAN:\", \"generates relevant counter-examples while preserving the standard GAN architecture;\", \"achieves good predictions without using prior knowledge;\", \"converges towards the PGAN performance for complex tasks (One vs. Rest mode on CIFAR-10): 75% of the time, the D-GAN gets prediction performances slightly above the PGAN method on CIFAR-10;\", \"easily outperforms in practice PGAN on simple datasets (MNIST). The overfitting issue of the PGAN for simple tasks is reduced, as illustrated on the figure 6 (b).\", \"Moreover, we will indicate that we did not need to fine-tune GAN variants hyper-parameters to get these results.\", \"***\", \"\\u201cmy main experience is in the computer vision for autonomous driving\\u201d\", \"Another motivation of the proposed approach can be linked to your area.\"], \"motivation_4\": \"Easier incremental learning applications\\nFor real applications like autonomous driving, recording unlabeled data is much more accessible than getting ground truth labeled data. Under the assumption that unlabeled data contain relevant counter-examples, using a PU learning method enables to focus the training dataset labeling effort exclusively on the samples of our class of interest.\\n\\nIf the unlabeled training dataset recorded is exploited incrementally, then the fraction of unlabeled positive samples can change at each new unlabeled minibatch. For example, if the positive class is \\u201cpedestrian\\u201d, then their proportion can drastically vary from a street to another one. In this context, PU learning methods using the unlabeled dataset prior knowledge are not suitable. 
\\n\\nPU methods not using prior knowledge like the proposed approach can solve such problems.\\n\\n\\n***\", \"to_sum_up\": [\"four different motivations where presented to justify this work;\", \"the intuition behind the D-GAN method has been clarified;\", \"This article presents both scientific contributions;\", \"The presented results demonstrate the proposed approach ability to overcome previous GAN-based PU issues.\", \"We hope the mentioned problems have been clarified in these answers. Moreover, this helped us to improve the article understanding.\", \"We sincerely thank you for your constructive review.\"]}", "{\"title\": \"Answers (Part 1)\", \"comment\": \"Thanks for your review,\\n\\nFirstly we apologize for not making the text clear enough. \\nWe hope the following answers to your respective points will clarify the proposed contributions.\\n\\n\\n***\\n\\n\\u201cThe motivation of the work is not clear\\u201d\\n\\nMotivations can be expressed as follow.\", \"1\": [\"Overcome the previous state of the art approaches disadvantages.\", \"GenPU architecture is more computational demanding (three discriminators and two generators) than standard GAN architectures (one discriminator and one generator). Furthermore, GenPU requires prior knowledge and additional loss function hyper-parameters.\", \"The PGAN method has overfitting issues on simple datasets (see figure 6. (b)) because its approach is based on GANs imperfections.\"], \"2\": \"A framework easily adaptable to GANs variants.\\n -\\tA GAN PU framework similar to standard GAN could enable a better adaptability to last and potentially future GANs variants. It is an important point because the state of the art is updated continuously but the architectures remain similar (one generator and one discriminator).\", \"3\": \"Adversarial training of GAN-based approaches enables to learn automatically relevant high level feature metrics.\\n -\\tGANs generate semantically realistic images. The most interesting aspect is probably that the error computed to evaluate the generated images quality is estimated from a high level feature point of view: the discriminator output. In this way, GANs enable relevant data augmentation.\\n\\n\\n***\\n\\n\\u201cthe novelty seems to be present\\u201d\\n\\nThe two article contributions can be highlighted as follow.\", \"contribution_1\": \"We propose to incorporate a PU risk inside the discriminator loss function.\", \"we_show_that_a_gan_can_solve_by_itself_a_positive_unlabeled_learning_task_if_the_problem_is_well_formulated\": \"We combine a PU risk with the GAN discriminator loss function. That enables the G convergence to the distribution of counter-examples included in the unlabeled dataset.\\n\\nPrevious GAN-based PU approaches do not include the PU risk in the discriminator cost function. GenPU and PGAN logics are as follow:\\n -\\tGenPU convergence is inspired by the original GAN convergence exposed by GoodFellow in 2014. The main idea is in this sentence: \\u201cDu is aimed at separating the unlabelled training samples from the fake samples of both Gp and Gn\\u201d. Thus the global system GenPU enables the convergence \\u201cpi Pgp + (1-pi) Pgn -> Pu\\u201d, with Pgp the distribution of positive samples generated by Gp, Pgn the distribution of the negative samples generated by Gn, and Pu the distribution of unlabeled samples. 
However, the same reasoning can be expressed using one single generator Gn if we replace the generated positive samples by the positive labeled samples that we have in a PU dataset. Thus training five different models is not necessary to address standard PU learning challenge where we have enough positive samples. This reasoning is different to the propose one. \\n -\\tPGAN is trained to converge towards the unlabeled dataset distribution during the first step. The PGAN exploits GANs imperfections such that the generated distribution at the adversarial equilibrium is still separable from the unlabeled samples distribution by a classifier. The PGAN does not use a PU learning risk to train its GAN part.\", \"contribution_2\": \"Highlight of a critical normalization issue discussed in the context of the proposed framework\\n\\nBatch-normalization (BN) technique cannot be used when several minibatches distributions (unlabeled, positive, and generated ones) are used to train a learning model.\\n\\nWith BN, a classifier prediction for a given sample is critically influenced by the other samples of the same minibatch. As presented in the article, the consequence with a PU learning risk is that BN does not allow a classifier to distinguish positive from negative samples (see figure 5(a) and subsections 2.3 and 3.1). These sections include the analysis of this BN effect and alternative normalization solutions, such that this effect disappears.\\n\\nIn practice, a D-GAN using BN converges towards the unlabeled samples distribution. Without, it converges exclusively towards the negative samples distribution. The normalization training impact is clearly highlighted in the figure 5. \\nTo the best of our knowledge, we are the first to highlight this critical phenomenon for the PU learning task.\\n\\n\\n***\\n\\n\\u201c intuition of the D-GAN is not clearly written.\\u201d \\n\\nThe D-GAN intuition can be expressed as follow. The discriminator D addresses to the generator G the riddle: \\n\\u201cShow me what IS unlabeled AND NOT positive.\\u201d\\nIt turns out that negative samples included in the unlabeled dataset are both unlabeled and not positive. Consequently G addresses this riddle by learning to show the negative samples distribution to D.\"}", "{\"title\": \"Answers (Part 3)\", \"comment\": \"*****\\n\\n\\n\\u201cThe novelty is to be honest incremental\\u201d\\n\\nThe D-GAN addresses the task to generate relevant counter-examples from a PU dataset in a different way than the previous GAN-based PU learning approaches. \\n\\n-\\tGenPU convergence is inspired by the original GAN convergence presented in (\\u201cGenerative adversarial nets\\u201d; 2014 NIPS). The GenPU main idea is in this GenPU article sentence: \\u201cDu is aimed at separating the unlabelled training samples from the fake samples of both Gp and Gn\\u201d. That enables the global system GenPU convergence such that \\u201cpi Pgp + (1-pi) Pgn -> Pu\\u201d, with Pgp the distribution of positive samples generated by Gp, Pgn the distribution of the negative samples generated by Gn, and Pu the distribution of unlabeled samples (real ones). However, the same reasoning can be expressed using one single generator Gn if we replace the generated positive samples by the positive labeled samples that we have in a PU dataset. Thus training five different models to address standard PU learning challenge is not necessary. This reasoning is different to the D-GAN one. 
\\n\\n-\\tPGAN is trained to converge towards the unlabeled dataset distribution during the first step. The PGAN exploits GANs imperfections such that the generated distribution at the adversarial equilibrium is still separable from the unlabeled samples distribution by a classifier. The PGAN method does not focus the generator G convergence towards the counter-examples distribution. The proposed D-GAN approach enables the G convergence exclusively towards the latter.\", \"main_contribution\": \"We propose to incorporate a PU risk inside the discriminator loss function.\", \"we_show_that_a_gan_can_solve_by_itself_a_positive_unlabeled_learning_task_if_the_problem_is_well_formulated\": \"We combine the risk Rpu with the discriminator GAN loss function. That enables the G convergence to the distribution of counter-examples included in the unlabeled dataset.\", \"the_side_contribution\": \"highlight of a Batch Normalization (BN) (Ioffe & Szegedy, 2015) critical issue.\\n A learning model manipulating different training minibatches distributions should not use BN. Alternative normalization techniques are discussed and tested in the context of the proposed framework which manages positive, unlabeled and generated training minibatches.\\n\\nThe both enumerated contributions are presented in the article. In addition, the former one presents a thinking different to the previous GAN-based PU approaches. \\n\\n\\n\\u201cThe significance is similarly poor, due to that the experiments mixed up methods for censoring PU and those for case-control PU.\\u201d \\n\\nD-GAN is compared to GenPU, nnPU, PGAN, and RP methods achieving state of the art prediction performances. More is better than less.\\n\\n\\n\\u201cF1-score is a performance measure for information retrieval rather than binary classification.\\u201d\\n\\nF1-Score metric is relevant in the context of the One vs. Rest challenge presented because:\\n-\\tThis metric is used in the RP and PGAN articles reported results. Thus we used the same metric to compare our results to the PGAN article reported results.\\n-\\tThe F1-Score evaluates the ability of a binary classifier to predict correctly the positive samples. One vs. Rest challenge focuses the attention on the examples of our class of interest: the positive class.\\nAccuracy metric has been also used in table 2 in the context of the One vs. One task, and in figure 6 to evaluate the overfitting problem.\\n\\nNonetheless, we take into consideration this comment.\\n\\n\\n\\u201cWe all know GANs are pretty good at MNIST but not CIFAR-10.\\u201d\\n\\nThanks for highlighting this point.\", \"another_interest_of_our_article_is_that_we_demonstrate_experimentally_the_contrary_on_tables_3_and_4\": \"the D-GAN achieves state of the art prediction performances on CIFAR-10 with the WGAN-GP variant.\\n\\n\\n\\u201cGenPU has a critical issue of mode collapse, and this is why GenPU reports 1-vs-1 rather than 5-vs-5 on MNIST.\\u201d\\n\\nWe agree with you. Fortunately, the GAN mode collapse issue has been drastically reduced with recent GAN variants. It turns out that the D-GAN framework maintains the conventional GAN architectures (one discriminator and one generator) like DCGAN, WGAN-GP and LSGAN-GP. This enables to adapt easily the proposed approach to such conventional variants. In other words, the D-GAN architecture is more practical to follow the current GAN state of the art evolution. 
Furthermore, fine-tuning of GANs variants hyper-parameters is not needed when these variants are used by the proposed D-GAN framework, as presented in our article. \\n\\n\\n*****\\n\\n\\nThese answers clarify the enumerated issues.\\n\\n\\n\\nTo conclude, we thank you for highlighting these interesting points of the presented article, and for taking the time to develop them, especially concerning the PU state of the art.\\n\\nWe sincerely thank you for your instructive review.\"}", "{\"title\": \"Answers (Part 2)\", \"comment\": \"Concerning the state of the art,\\n\\n \\u201cnone of these 3 papers was cited\\u201d\\n\\nWe thank you for quoting all these relevant articles. We agree with you that it is interesting to keep in mind the founders articles. We focused the attention in our article on the most recent PU learning approaches which achieve the state of the art prediction performances. The link with the quoted articles is as follow:\\n-\\tconcerning the censoring PU learning, RP can be considered as an improvement of Elk08 (\\\"learning classifiers from only positive and unlabeled data\\\", KDD 2008) method as mentioned in their article: \\u201cRank Pruning leverages Elk08 \\u2026\\u201d;\\n-\\tconcerning the Case-control PU learning, nnPU (\\u201cPositive-Unlabeled Learning with Non-Negative Risk Estimator\\u201d; NIPS 2017) addresses the overfitting problem of uPU (\\u201cConvex formulation for learning from positive and unlabeled data\\u201d; ICML 2015). uPU is an improvement of uPU-2014 (\\u201canalysis of learning from positive and unlabeled data\\u201d; NIPS 2014), such that uPU-2015 proposes a convex formulation by using \\u201cdifferent loss functions for positive and unlabeled samples\\u201d. This improvement reduces the computational cost of uPU-2014 method.\\n\\n We will add the three relevant citations that you quoted to the introduction.\\n\\n\\n\\u201cBy definition, GAN-based PU learning belongs to the latter problem setting while Rank Prune can only be applied to the former but was included as a baseline method.\\u201d\\n\\nRank Prune (RP) addresses standard PU learning problem as highlighted in the table 2 of their article.\", \"rp_is_a_baseline_method_in_the_context_of_the_presented_d_gan_method_because\": \"-\\tboth are two-stage methods such that during the first stage they prepare a PN dataset for the second stage (classifier step);\\n-\\tboth (RP and D-GAN) address the PU learning problem without the need of prior knowledge. What is more, RP achieves state of the art performances as presented in their article;\\n-\\tboth follow a similar reasoning such that: \\n o\\tthe RP method consists in selecting the confident examples during the first step;\\n o\\tthe generator G of the D-GAN method consists in learning the distribution of the samples considered as the closest to the value \\u201cy=1\\u201d (label associated to the unlabeled negative samples) by the discriminator D;\\nSo both methods exploit exclusively the samples predicted with the higher confidence.\\n\\nThis clarifies why RP can be considered as a baseline in the context of this study. D-GAN method has more points in common with RP than with nnPU method. \\n\\nOur aim is not to center the discriminator predictions for positive samples on the corresponding label value (y=0 in our case).\\nIn our case, we train G to generate samples considered by D as \\u201cy=1\\u201d. 
Thus we care about two things, which can be obtained for any fraction pi between 0 and 1, as follow: \\n-\\tthe fact that unlabeled positive samples are considered by D as distant to the label value \\u201cy=1\\u201d, thanks to the fact that they follow the same distribution as the positive labeled samples which are associated during the training to the contradictory label \\u201cy=0\\u201d. Consequently, in practice D predicts an intermediate value between \\u201cy=1\\u201d and \\u201cy=0\\u201d for the positive samples distribution;\\n-\\tthe fact that negative samples included in the unlabeled dataset are exclusively associated by D to their expected label \\u201cy=1\\u201d;\\n\\nIn this way, negative samples are considered as the most \\u201cy=1\\u201d by D. This enables G convergence towards \\u201cy=1\\u201d samples: the negative samples.\\n\\n\\n\\u201call discriminative PU methods and GenPU require to know pi for learning.\\u201d\\n\\nIf we suppose that the D-GAN method is a discriminative method, then the presented results (tables 2, 3, 4, and figures 3, 4, 6) demonstrate experimentally that the discriminative methods do not necessarily need prior knowledge to achieve state of the art predictions, as this is the case for the D-GAN.\\n\\nThus from this point of view, we can consider that the D-GAN proposed approach is a novel interesting discriminative approach not using pi for learning the counter-examples distribution.\", \"the_intuition_behind_the_d_gan_is_based_on_an_obvious_practical_phenomenon\": \"With the PU risk proposed (equation 1), the negative samples are always considered as \\u201c1\\u201d for any fraction pi value.\\n\\n\\n*****\"}", "{\"title\": \"Answers (Part 1)\", \"comment\": \"Thanks for your constructive review,\\nYour comments indicate that the text and equations are not clear enough and that some previous state of the art methods were omitted. We understand that the lack of clarity can be an issue. We made during this rebuttal period a clarification effort.\\nMoreover, please find as follow the answers to your comments.\\n\\n\\n*****\\n\\n\\n\\u201cI cannot easily follow the meanings behind the equations.\\u201d\\n\\nWe have clarified the equations.\\n\\n\\n\\u201cI cannot see why the generated data can serve as negative data.\\u201d \\u201cThis paragraph is discussing GenPU, PGAN and the proposed method, and consequently the motivation of the current paper does not make sense at least to me.\\u201d\", \"gans_are_known_to_be_relevant_because_of_their_ability_of_finding_a_boundary_between_real_and_generated_samples\": \"A GAN discriminator is trained to find autonomously the best metric to evaluate the generated samples quality. This metric is considered as more relevant than previous ones such as the auto-encoders per-pixel reconstruction loss function.\", \"the_gan_based_pu_approaches_main_idea_is_to_exploit_this_gan_benefit_to_address_a_pu_learning_problem\": \"The initial goal of GANs is to imitate the unlabeled distribution. In the context of the PU task, this goal is adapted to identify and imitate autonomously the distribution of relevant counter-examples hidden in the unlabeled dataset.\", \"the_motivation_in_this_paragraph_is_to_discuss_the_previous_gan_based_approaches_following_issues\": \"-\\tGenPU issues: GenPU is not easily adaptable to the current GAN state of the art (fast) evolutions because of its untraditional adversarial framework. Moreover, GenPU uses prior knowledge. 
This is unpractical for example on some real application incremental datasets in which the fraction pi value can change continuously at each new training minibatch.\\n-\\tPGAN issue: It has a first stage overfitting problem when it is applied on relatively simple datasets as MNIST. In fact, it is mentioned in their article: \\u201cIt is also known that a GAN is not perfect in its operation when it is applied to high dimensional data, \\u2026 Thus it is possible to estimate the non-zero distance d computed into the cost function of Db\\u201d. In other words, the PGAN exploits the GANs convergence defaults to address the PU learning problem.\\n\\nThe proposed approach overcomes the above enumerated issues while keeping their respective advantages. This is done by using a different technique: The D-GAN directly incorporates a PU learning risk into the discriminator loss function. This guides naturally the generator to converge towards the distribution of the negative samples included in the unlabeled dataset.\\n\\n\\n*****\\n\\n\\n\\u201cThe paper classified PU learning methods into two categories, one-stage methods and two-stage methods. This is interesting. However, before that, they should be classified into two categories, for censoring PU learning and for case-control PU learning.\\u201d\\n\\nPrevious relevant state of the art articles, like nnPU, classify PU learning methods in two-stage and one-stage categories. The article nnPU (\\u201cPositive-Unlabeled Learning with Non-Negative Risk Estimator\\u201d, NIPS 2017) says: \\u201cExisting PU methods can be divided into two categories based on how U data is handled. The first category (e.g., [11, 12]) identifies possible negative (N) data in U data, and then performs ordinary supervised (PN) learning; the second (e.g., [13, 14]) regards U data as N data with smaller weights.\\u201d. \\n\\nGAN-based approaches generate samples in the first step, and they perform ordinary PN learning during the second step by considering the generated samples as relevant counter-examples. RP prepares a PN dataset from a PU dataset. Thus it is relevant to classify into the same category (two-stage) RP, and GAN-based approaches (D-GAN, PGAN, GenPU). \\n\\nWe introduce these categories (one-stage/two-stage) because our goal is to focus the attention on methods which aim at producing a relevant PN dataset from a PU dataset.\"}", "{\"title\": \"Furthermore,\", \"comment\": \"***\\nPGAN score when \\u03c0p=0 (annoted as PNGAN data augmentation reference) is now indicated in tables 3 and 4 to highlight the GAN effect on CIFAR-10. \\n=> WGAN \\\"data augmentation\\\" increases the reference PN average F1-Score on CIFAR-10 from 0.68 to 0.812.\\n*** \\n\\n- Another novelty of the presented article is to highlight a critical Batch-Normalization effect on the discriminator (sections 2.3 and 3.1).\\n\\n\\n- D-GAN intuition can be expressed as follow:\\n \\u201c- Show me what IS unlabeled AND NOT positive.\\u201d \\nThis is the task asked by D to G. Negative samples are both unlabeled and not positive. Consequently G learns to show the negative samples distribution to D.\\n\\nThis article presents an interesting contribution by merging GANs and PU learning areas in this way.\\n\\n***\\n\\nYour review helped us to clarify some formulations of the proposed method. You also highlighted that the article omitted some justifications concerning the experimental results. We apologize for not making the text clear enough. We will use shorter and concise sentences in the article. 
\\n\\nThese previous answers to your respective points will contribute to improving the presentation clarity and strengthening the experiments.\\n\\nWe sincerely thank you for your review.\"}", "{\"title\": \"Answers\", \"comment\": \"Thanks for your constructive review,\\n\\n\\n1.\\na. \\u201ca better robustness counter the varying images complexity\\u201d will be replaced by \\u201ca better adaptability to the images complexity\\u201d\\nAccording to the PGAN article, PGAN should not be used for simple tasks. D-GAN works on both simple and complex tasks: D-GAN is more adaptable to the images complexity.\\n\\nb. l(D(X), \\u03b4) will be replaced by l(D(X), y=\\u03b4) and \\u201c\\u21d4\\u201d by \\u201c=\\u201d. \\n\\u03b4 is the label of positive samples: \\u03b4 substitutes both contradictory labels \\u201c0\\u201d and \\u201c1\\u201d associated to Pp in the risk Rpu (equation 1). Unlabeled positive samples are separated from unlabeled negative ones with D trained with the risk Rpu.\\n\\nc. Equation (6) will be replaced as follow.\", \"the_loss_function_ld_of_d_is_defined_as\": \"Ld = Rpu + Eg [ l(D(Xg),Yg=0) ],\\nwith Rpu the PU risk (equation(1)), Yg=0 the label associated to the samples Xg generated by G; Xg = G(z), and Eg the expectation for samples Xg.\\nWe recall Rpu = Eu [ l(D(Xu),Yu=1) ] + Ep [ l(D(Xp),Yp=0) ], with Yu=1 the label of unlabeled samples Xu with the expectation Eu, and Yp=0 the label of labeled positive samples Xp with the expectation Ep.\\nThe loss function \\u201cl\\u201d used in the proposed D-GAN framework can be the binary cross-entropy \\u201cH\\u201d such that \\u201cl=-H\\u201d. So:\\nLd = Eu [ -H(D(Xu),Yu=1) ] + Ep [ -H(D(Xp),Yp=0) ] + Eg [ -H(D(Xg),Yg=0) ].\", \"the_binary_cross_entropy_h_is_defined_as_below\": \"H(D(X),Y) = - Y log(D(X)) \\u2013 (1-Y) log(1-D(X)), with Y represents the label associated to the D input samples X. Thus H(D(X),Y=1) = - log(D(X)) and H(D(X),Y=0) = - log(1-D(X)).\\nFinally, Ld can be developed as follow:\\nLd = Eu [ log[D(Xu)] ] + Ep [ log[1-D(Xp)] ] + Eg [ log[1-D(Xg)] ].\\nThis shows the incorporation of Rpu (equation(1)) inside the D-GAN loss function (equation(5)).\\nThe role of G during the adversarial training is to generate samples considered by D as \\u201c1\\u201d. Only negative samples are considered as \\u201c1\\u201d by D thanks to the Rpu risk. This justifies intuitively the G convergence towards the negative samples distribution.\\n\\nd. The sentence part \\\"such that we have supp(Pp (Xp )) \\u2229 supp(Pn (Xn )) \\u2192 \\u2205, with supp the support function of probability distributions.\\\" will be removed. \\nWe talked about D ability to distinguish positive samples distribution Pp from negative one Pn. If D does this distinction, then G converges towards Pn. If D fails to do this task, then G converges to the unlabeled samples distribution Pu as the PGAN.\\n\\n2. We take into consideration your comment. This is not the main message of the article. Section 2.2 will be removed from the method part. \\n\\n3. \\n\\u201cresults in Table 4 and Table 3 do not compare to GenPU.\\u201d \\nWe do not compare our results to GenPU method for the challenging One vs. Rest task because: \\na. GenPU method is not reproducible. \\n - Code not provided.\\n - Implementation details are missing in their article: Three hyper-parameters (lambda_P, lambda_N and lambda_U) are introduced in the GenPU article, but the values are not specified. 
They are important for the GenPU training with respect to their role inside the GenPU cost function (GenPU equation (3)): Instructions 8, 9 and 10 of the GenPU pseudo-code (GenPU Algorithm 1) apply them directly to the prior knowledge parameters \\u03c0p and \\u03c0n (=1-\\u03c0p). That makes impossible the GenPU reproducibility. \\n\\nb. GenPU mode collapse issue does not enable to perform complex tasks as the One vs. Rest challenge.\\n\\nc. GenPU is not valorized in their article as an interesting alternative for the standard PU context where we own relatively enough positive labeled samples: GenPU article does not present results with more than 100 positive labeled samples.\\n\\nd. The goal of tables 3 and 4 is to compare methods which do not need prior knowledge.\\n\\n\\n\\u201cthe authors claim several times that the GenPU method is *onerous*\\u201d\", \"genpu_training_computational_cost_cannot_be_quantified\": \"GenPU is not reproducible and training epoch iterations needed to converge are not specified. If we consider that both standard GAN and GenPU architectures need the same number of training epochs to converge to the expected distribution, then training five models (GenPU) instead of two (D-GAN) is more computational demanding.\\nD-GAN does not add or modify hyper-parameters of GAN variants tested (GAN, DCGAN, WGAN-GP, LS-GAN). \\n\\n\\n\\u201cthe reference PN method performs worse than other PU learning methods which does not make sense.\\u201d\", \"d_gan_performs_better_than_pn_on_cifar_10_because\": \"- It learns relevant counter-examples distribution. RP article discusses the same behavior on CIFAR-10.\\n\\n - Generated images enable data augmentation. GANs latent linear interpolations result in semantic images interpolations outputs. Thus GANs learn generic representation. \\nIt is not observed on MNIST because data augmentation is difficult to produce on low-dimensional data.\\nPGAN score when \\u03c0p=0 (as for PN) will be added in tables 3 and 4 to highlight this effect. \\n\\nThis phenomenon is not straightforward, but these reasons clarify it.\\n\\n\\nWe sincerely thank you for your review.\"}", "{\"title\": \"Clear Rejection\", \"review\": \"[Summary]\\nPU learning is the problem of learning a binary classifier given labelled data from the positive class and unlabelled data from both the classes. The authors propose a new GAN architecture in this paper called the Divergent Gan (DGAN) which they claim has the benefits of two previous GAN architectures proposed for PU learning: The GenPU method and the Positive-Gan architecture. The key-equation of the paper is (5) which essentially adds an additional loss term to the GAN objective to encourage the generator to generate samples from the negative class and not from the positive class. The proposed method is validated through experiments on CIFAR and MNIST.\\n\\n[Pros]\\n1. The problem of PU learning is interesting.\\n2. The experimental results on CIFAR/MNIST suggest that some method that the authors coded worked at par with existing methods.\\n\\n[Cons]\\n1. The quality of the writeup is quite bad and a large number of critical sentences are unclear. E.g.\\na. [From Abstract] It keeps the light adversarial architecture of the PGAN method, with **a better robustness counter the varying images complexity**, while simultaneously allowing the same functionalities as the GenPU method, like the generation of relevant counter-examples.\\nb. Equation (3) and (4) which are unclear in defining R_{PN}(D, \\u03b4)\\nc. 
Equation (6) which says log[1 - D(Xp)] = Yp log[D(Xp)] + (1-Yp) log[1-D(Xp)] which does not make any sense.\\nd. The distinction between the true data distribution and the distribution hallucinated by the the generator is not maintained in the paper. In key places the authors mix one with the other such as the statement that supp(Pp (Xp )) \\u2229 supp(Pn (Xn )) \\u2192 \\u2205\\nIn short even after a careful reading it is not clear exactly what is the method that the authors are proposing.\\n\\n2. Section 2.2 on noisy-label learning is only tangentially related to the paper and seems more like a space filler.\\n\\n3. The experimental results in Table 4 and Table 3 do not compare to GenPU. Although the authors claim several times that the GenPU method is *onerous*, it is not clear why GenPU is so much more onerous in comparison to other GAN based methods which all require careful hyper-parameter tuning and expensive training. Furthermore the reference PN method performs worse than other PU learning methods which does not make sense. Because of this I am not quite convinced by the experiments.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Problem and framework not well explained\", \"review\": \"The motivation of the work is not clear but the novelty seems to be present.\\n\\nThe paper is very hard to follow as the problem description and intuition of the D-GAN is not clearly written.\\n\\nBased on the experiments, the proposed method achieves marginal improvement in terms of F1 score but sometimes also slightly lower performance than other GAN based such as PGAN, so the impact of this work to solve positive unlabelled data problem is not evident. \\n\\nI am personally not as familiar with the PU problem and existing frameworks so my confidence in the assessment is low; my main experience is in the computer vision for autonomous driving and sparse coding.\\n\\nBut my feeling is this paper is marginally below the threshold of acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Too many issues\", \"review\": \"This paper proposed another GAN-based PU learning method. The mathematics in this paper is not easy to follow, and there are many other critical issues.\\n\\n*****\\n\\nThe clarity is really an issue. First of all, I cannot easily follow the meanings behind the equations. I guess the authors first came up with some concrete implementation and then formalize it into an algorithm. Given the current version of the paper, I am not sure whether this clarity of equations can be fixed without an additional round of review or not.\\n\\nMoreover, the logic in the story line is unclear to me, especially the 3rd paragraph that seems to be mostly important in the introduction. There are two different binary classification problems, of separating the positive and negative classes, and of separating the given and generated data. I cannot see why the generated data can serve as negative data. This paragraph is discussing GenPU, PGAN and the proposed method, and consequently the motivation of the current paper does not make sense at least to me.\\n\\n*****\\n\\nThe paper classified PU learning methods into two categories, one-stage methods and two-stage methods. This is interesting. 
However, before that, they should be classified into two categories, for censoring PU learning and for case-control PU learning. The former problem setting was proposed very early and formalized in \\\"learning classifiers from only positive and unlabeled data\\\", KDD 2008; the latter problem setting was proposed in \\\"presence-only data and the EM algorithm\\\", Biometrics 2009 and formalized in \\\"analysis of learning from positive and unlabeled data\\\", NIPS 2014. Surprisingly, none of these 3 papers was cited. By definition, GAN-based PU learning belongs to the latter problem setting while Rank Prune can only be applied to the former but was included as a baseline method.\\n\\nThe huge difference between these two settings and their connections to learning with noisy labels are known for long time. To be short, class-conditional noise model corrupts P(Y|X) and covers censoring PU, mutual contamination distribution framework corrupts P(X|Y) and covers case-control PU, and mathematically mutual contamination distribution framework is more general than class-conditional noise model and so is case-control PU than censoring PU. See \\\"learning from corrupted binary labels via class-probability estimation\\\", ICML 2015 for more information where the above theoretical result has been proven. An arXiv paper entitled \\\"on the minimal supervision for training any binary classifier from only unlabeled data\\\" has some experimental results showing that methods for class-conditional noise model cannot handle mutual contamination distributions. The situation is similar when applying censoring PU methods to case-control PU problem setting.\\n\\nFurthermore, the class-prior probability pi is well-defined and easy to estimate in censoring PU, see \\\"learning classifiers from only positive and unlabeled data\\\" mentioned above. However, it is not well-defined in case-control PU due to an identifiability issue described in \\\"presence-only data and the EM algorithm\\\" mentioned above. Thus, the target to be estimated is defined as the maximal theta such that theta*P(X|Y)<=P(X) following \\\"estimating the class prior and posterior from noisy positives and unlabeled data\\\", NIPS 2016. BTW, \\\"mixture proportion estimation via kernel embedding of distributions\\\" is SOTA in class-prior estimation; the previous NIPS paper was written earlier and accepted later.\\n\\nIn summary, as claimed in the paper and shown in Table 1 in the introduction, all discriminative PU methods and GenPU require to know pi for learning. This is true, but this is because they are designed for a more difficult problem setting---learning classifiers and estimating pi are both more difficult. Lacking some basic knowledge of PU learning is another big issue.\\n\\n*****\\n\\nThe novelty is to be honest incremental and thus below the bar of ICLR. The significance is similarly poor, due to that the experiments mixed up methods for censoring PU and those for case-control PU. What is more, F1-score is a performance measure for information retrieval rather than binary classification. We all know GANs are pretty good at MNIST but not CIFAR-10. In fact, GenPU has a critical issue of mode collapse, and this is why GenPU reports 1-vs-1 rather than 5-vs-5 on MNIST. 
Even though, I still think GenPU makes much more sense than PGAN and D-GAN.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Unclear definitions\", \"comment\": \"This paper is studying the problem of PU learning which is an important and interesting problem, however I am having difficulty in reading the paper key definitions and equations are badly written. Please clarify the following:\\n\\na) Equation (6) says log[1 - D(Xp)] = Yp log[D(Xp)] + (1-Yp) log[1-D(Xp)] which does not make any sense. What are the authors saying here? \\n\\nb) Equation (3) defines R_{PN}(D, \\u03b4) in terms of l(D(X), \\u03b4) but l(D(X), \\u03b4) is not defined properly in equation (4). The left hand side of (4) has l(D(X), \\u03b4) but \\u03b4 vanishes on the right hand side of that equation. I have no idea what is going on here.\\n\\nc) The authors frequently confuse the true data distribution and the distribution hallucinated by the the generator. For example consider the expressions that \\\"supp(Pp (Xp )) \\u2229 supp(Pn (Xn )) \\u2192 \\u2205 \\\" Which distribution are the authors talking about? Is it an assumption on the true data distribution required for learning ? or this is a property of the generator's distribution. \\n\\nD) The experimental results in Table 4 and Table 3 do not compare to GenPU. Although the authors claim several times that the GenPU method is *onerous*, it is not clear why GenPU is so much more onerous in comparison to other GAN based methods which all require careful hyper-parameter tuning and expensive training. Furthermore the reference PN method performs significantly worse than other PU learning methods which does not make sense. The PN method should be much better or comparable to the performance of any PU method. Please clarify.\"}" ] }
rJedbn0ctQ
Zero-training Sentence Embedding via Orthogonal Basis
[ "Ziyi Yang", "Chenguang Zhu", "Weizhu Chen" ]
We propose a simple and robust training-free approach for building sentence representations. Inspired by the Gram-Schmidt Process in geometric theory, we build an orthogonal basis of the subspace spanned by a word and its surrounding context in a sentence. We model the semantic meaning of a word in a sentence based on two aspects. One is its relatedness to the word vector subspace already spanned by its contextual words. The other is its novel semantic meaning which shall be introduced as a new basis vector perpendicular to this existing subspace. Following this motivation, we develop an innovative method based on orthogonal basis to combine pre-trained word embeddings into sentence representation. This approach requires zero training and zero parameters, along with efficient inference performance. We evaluate our approach on 11 downstream NLP tasks. Experimental results show that our model outperforms all existing zero-training alternatives in all the tasks and it is competitive to other approaches relying on either large amounts of labelled data or prolonged training time.
[ "Natural Language Processing", "Sentence Embeddings" ]
https://openreview.net/pdf?id=rJedbn0ctQ
https://openreview.net/forum?id=rJedbn0ctQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1gnMKDPmE", "BklEbeM46Q", "H1x3zHcX6Q", "HJxL-Cv-pX", "rkegb7v1pQ", "ryxisXv53X", "ByeZqvI5n7", "H1gGaa7q3m", "SJlO1wzcnQ", "rylVjUk3sX", "Hyge8iaiqX", "SJlo7YliqQ", "BkeeXPCc9m", "S1xll3s597", "HygqgNDKcX", "SkljByVFcX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_review", "comment", "official_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1548347668107, 1541836795624, 1541805331933, 1541664253986, 1541530359980, 1541202850590, 1541199753135, 1541189049749, 1541183200377, 1540253340456, 1539197767809, 1539143971492, 1539135256328, 1539124200408, 1539040241751, 1539026755452 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1187/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "ICLR.cc/2019/Conference/Paper1187/AnonReviewer3" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "ICLR.cc/2019/Conference/Paper1187/AnonReviewer2" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1187/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1187/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"title\": \"Interesting idea but in need of more clarity\", \"metareview\": \"The paper proposes a simple approach for computing a sentence embedding as a weighted combination of pre-trained word embeddings, which obtains nice results on a number of tasks. The approach is described as training-free but does require computing principal components of word embedding subspaces on the test set (similarly to some earlier work). The reviewers are generally in agreement that the approach is interesting, and the results are encouraging. However, there is some concern about the clarity of the paper and in particular the placement of the work in relation to other methods. There is also a bit of concern about whether there is sufficient novelty compared to Arora et al. 2017, which also compose sentence embeddings as weighted combinations of word embeddings, and also use a principal subspace of embeddings in the test set. This AC feels that the method here is sufficiently different from Arora et al., but agrees with the reviewers that the paper clarity needs to be improved, so that the community can appreciate what is gained from the new aspects of the approach and what conclusions should be drawn from each experimental comparison.\", \"recommendation\": \"Reject\", \"confidence\": \"4: The area chair is confident but not absolutely certain\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Hello AnonReviewer3,\\n\\nWe appreciate your comprehensive review and questions. Please find our response below.\\n\\n(1) About re-word the categories. Thanks for your suggestion. In the revised version submitted, we categorize sentence embeddings methods into two types, one is non-parameterized methods, including GEM and SIF, that don\\u2019t depend on parameters or need training. The other type is parameterized methods, such as InferSent and QuickThoughts, that need supervised/unsupervised training to update the parameters.\\n\\n(2) About supervised tasks. We are sorry for the confusion. 
in section 3.3, we add a description of supervised tasks (first paragraph) and an analysis of results (the end of second paragraph). \\n\\n(3) On \\u201chow the baseline algorithms are tuned and/or trained on these tasks\\u201d.\\nOn the supervised tasks, the performance of baseline model \\u201cGloVe BOW\\u201d is extracted from ref[1]. On STSB dataset, results of baseline model \\u201cword2vec skipgram\\u201d and \\u201cGlove\\u201d are extracted from the official website of STSB dataset (http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). \\u201cLexVec\\u201d, \\u201cL.F.P\\u201d and \\u201cELMo\\u201d are from experiments run by us. As noted in the \\u201cexperimental settings\\u201d section in the appendix, sentences are tokenized using the NLTK wordpunct tokenizer, and then all punctuation is then skipped. Sentence vectors are just mean of word representations, and the similarity score is the cosine similarity of two vectors.\\n\\n(4) About \\\\mathbf{r}. In the line under Eq(4), we mention that \\\\mathbf{r} is the last column of R^i. And R^i is defined in Eq(3).\\n\\n(5) About GS and subspace projections. We agree that subspaces projection is more mathematically concise compared with GS. The reason why we still use GS to introduce novelty score is that GEM is motivated by the fact when a sentence is formed, different words bring in different meaning to this sentence one by one, and GS is appropriate to describe this process by yielding the orthogonal basis vectors one by one.\\n\\n(6) Although a_n and a_s are both functions of r_{-1}, they describe different quantities. Note that a_s is initially computed as q_i\\u2019s alignment with the meanings in its context. And Eq(6) shows that a_s is r_{-1}, i.e. the l_2 norm of q_i, divided by a constant. a_s is trying to quantify the absolute significance/magnitude of the new semantic meaning q_i. \\n\\nIn contrast, a_n is a function (exponential) of r_{-1} divided by l_2 norm of r, i.e. a function of the \\u201cproportion\\u201d of q_i in word w_i. Note that ||r||_2 = ||v_{w_i}||_2, and r_{-1} = ||q_i||_2. Therefore, a_n is quantifying that among all the information that w_i is trying to ship, what\\u2019s proportion of the new meaning q_i?\\n\\n(7) On fig 1. We apologize for the possible ambiguity. The sentence is represented by a sequence of blue block in the top middle, marked as w_1 \\u2026 w_{i-m} \\u2026 w_i \\u2026 w_{i+m} \\u2026 w_n. And we didn\\u2019t show the corpus in fig 1, and instead we show the top K principal vectors of X^c as those orange/yellow blocks on the right. And more descriptions are added to the caption of fig 1.\\n\\n(8) In eq(8), we change the notation \\u201cr\\u201d to \\u201ch\\u201d. Thanks for your suggestion.\\n\\n(9) On \\u201c2.4.1 is a bit confusing\\u201d.\\nWe think you referred to the matrix in the first paragraph in 2.4.1. The first paragraph is a revisit of the method in SIF. The formal desription of GEM starts from the second paragraph. We form a matrix X^c and its ith column is given by eq(7). Eq(7) is independent of a_u, a_n and a_s, and it\\u2019s computed using the singular values and singular vectors of the sentence matrix $\\\\mS$. And then we use X^c and q_i to compute a_u.\\n\\n(10) In the STS benchmark dataset, our hyper-parameters are chosen by conducting parameters search on STSB dev set at m = 7, h = 17, K = 45, and t = 3. And we use the same values for all supervised tasks. 
The integer interval of parameters search are m \\u2208 [5, 9], h \\u2208 [8, 20], L \\u2208 [35, 75] (at stride of 5), and t \\u2208 [1, 5]. And we use the same values for all supervised tasks. We add the discussion to the \\u201cexperimental settings\\u201d section in the appendix. \\n\\nThanks for your time and we hope that our response has addressed your questions. Look forward to your suggestion and evaluation.\", \"reference\": \"[1] Conneau, Alexis, et al. \\\"Supervised learning of universal sentence representations from natural language inference data.\\\" arXiv preprint arXiv:1705.02364 (2017).\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Hello AnonReviewer2,\\n\\nThank you for the detailed and careful review. We appreciate your points in favor and against.\", \"about_remarks_and_questions\": \"(1)\\nFor rows \\u201cGlove\\u201d and \\u201cword2vec\\u201d in table 1, the sentence embeddings are computed as the simple average of all word embeddings of words in the sentence.\\n\\n(2)\\nSorry if we didn\\u2019t make this clearer in the paper, but we\\u2019ve included results from Quick Thoughts and a very recent model using transformer in the first version of our paper. Quick Thoughts is denoted as \\u201cQT\\u201d, and their results are shown on table 3. \\u201cReddit + SNLI\\u201d in table 1 and table 2 is a very recent and competitive transformer model, introduced in [1] and [2]. The model uses the transformer from \\u201cattention is all you need\\u201d as the encoder. And in the revised version, we include their results on supervised tasks in table 3, denoted as USE. We also include ELMo\\u2019s performance on STS benchmark in table 1. The sentence embeddings are computed as the mean of ELMo vector of each word.\\n\\nBesides, we did comparison with other very recent and even more competitive models published around mid 2018, for example \\u201ca lar carte\\u201d and STN in table 3.\", \"comparison_with_these_models_mentioned_above\": \"On STSB dataset, GEM (77.5/82.1) clearly outperforms mean of ELMo (55.87/64.58), and is very close to the transformer model on test set (actually better than it on dev set). On supervised tasks, GEM\\u2019s performance is definitely better than some parameterized methods (like SkipThought, Sent2Vec and FastSent). And it\\u2019s still very competitive compared with parameterized SOTA models, for example, GEM is better than transformer model USE on SUBJ, MPQA, better than a lar carte on MPQA, TREC.\", \"about_novelty\": \"(1) We acknowledge that SIF is the first published work on using weighted sum of word vectors for sentence representation. And representing sentence as a composition (average, non-linear, p-mean etc.) of word vectors has been an active research topic before and after SIF (e.g. ref [3][4][5]). And we believe there are still much to explore on this direction.\\n\\n(2) On GEM\\u2019s novelty.\\nAlthough our model utilizes the idea weighted sum of word vectors, GEM is significantly different from SIF, including following aspects: \\n- To our knowledge, we are the first to adopt well-established numerical linear algebra to quantify the sentence semantic meaning and the importance of words. And this simple method proves to be competitive.\\n\\n- The weights in SIF depend on statistic of vocabularies on very large corpus (wikipedia). In contrast, the weights in GEM are directly computed from the sentence \\u201con the scene\\u201d. 
Given a sentence and its context, GEM is ready to go, independent of prior statistical knowledge of words.\\n\\n- In GEM, the components of the weights are all computed via numerical linear algebra, whereas SIF directly includes a hyper-parameter term in the weights, i.e. the smoothing term. \\n\\n- As suggested by the experiments in tables 1 and 3, GEM outperforms SIF by a significant margin.\\n\\nThanks for your time again. We hope that our response addresses your concerns. We kindly ask for your further evaluation and opinions.\", \"reference\": \"[1] Cer, Daniel, et al. \\\"Universal sentence encoder.\\\" arXiv preprint arXiv:1803.11175 (2018).\\n[2] Yang, Yinfei, et al. \\\"Learning Semantic Textual Similarity from Conversations.\\\" arXiv preprint arXiv:1804.07754 (2018).\\n[3] Wieting, John, et al. \\\"Towards universal paraphrastic sentence embeddings.\\\" arXiv preprint arXiv:1511.08198 (2015).\\n[4] Wieting, John, and Kevin Gimpel. \\\"Revisiting recurrent networks for paraphrastic sentence embeddings.\\\" arXiv preprint arXiv:1705.00364 (2017).\\n[5] R\\u00fcckl\\u00e9, Andreas, et al. \\\"Concatenated $ p $-mean Word Embeddings as Universal Cross-Lingual Sentence Representations.\\\" arXiv preprint arXiv:1803.01400 (2018).\"}", "{\"title\": \"Response\", \"comment\": \"Hi AnonReviewer1,\\n\\nThanks for reviewing the paper and recognizing the novelty in our idea! Please find our response to the four points as follows.\\n\\n(1)\\nIn the case that the length of the sentence is larger than the dimension of the word embeddings, our algorithm still works fine. Sorry for the possible confusion; here are some clarifications:\\nFirst, the novelty score and significance score are independent of the length of the sentence, so they remain well-defined.\\n\\nFor the uniqueness score, the part that depends on the length of the sentence is the coarse embedding in eq(7). For the coarse embedding, we now have a sentence matrix S of size d*n, where d is the word embedding size, n is the length of the sentence, and n > d. The full SVD of S is S = U*Sigma*V^T, where U is of size d*d, Sigma is of size d*n, and V is of size n*n. The (d+1)th through nth columns of Sigma are zero, because S has only d singular values. In this case, the upper limit in eq(7) is n instead of d, and we have the coarse embedding from the sentence matrix S. Therefore, in this case our model works fine. We\\u2019ll add an explanation of this corner case in the appendix in the revised version (will submit very soon). \\n\\n(2) \\nFirst, although we use the Gram-Schmidt process (GS), GEM is not that sensitive to the order of words, explained as follows. For GS on n incoming vectors, if the last vector is fixed, the last orthogonal basis vector computed is independent of the order of the first (n-1) vectors. In our case, the word w_i is always shifted to the last column in the context window, and we only utilize the last orthogonal basis vector, q_i, generated by GS. Therefore, no matter how the first (n-1) words in the context window are reordered, q_i is always the same. 
And those three scores stay the same for w_i.\\n\\nSecond, as suggested in the review, we ran some experiments removing unimportant stop words.\", \"s1\": \"\\\"The student is reading a physics book\\\"\", \"s2\": \"\\\"student is reading a physics textbook\\\"\\nThe cosine similarity between the sentence vectors of s1 and s2 given by GEM is 0.998\\n\\nsent1= \\\"A man walks along walkway to the store\\\"\\nsent2= \\\"man walks along walkway to the store\\\"\\ncosine similarity = 0.984\\n\\nsent1= \\\"Someone is sitting on the blanket\\\"\\nsent2= \\\"Someone is sitting on blanket\\\"\\ncosine similarity = 0.981\\n\\nThe similarity scores are all very close to 1, suggesting that the sentence embeddings barely change. \\n\\n(3)\\nWe are sorry about the confusion. By \\u201ctraining-free\\u201d, we mean that the sentence embedding model built upon word2vec-type embeddings doesn\\u2019t require training and is free of trained parameters; for example, SIF and GEM belong to this training-free type. And \\u201ctraining-required\\u201d means the embedding model needs training to update its parameters, for example skip-thoughts and InferSent. We plan to rename the two types as parameter-free and parameters-required in the revised version.\\n\\n(4)\\nWe ran some experiments to show that the value of \\\\alpha in GEM reflects the relative importance level.\\nFirst, assume that GEM originally assigns a weight alpha_i to the word w_i. On the STS benchmark test set, GEM achieves 77.5 (Pearson\\u2019s r * 100). If the weight is changed to 1/alpha_i, the performance drops to 69.59, and it falls to 32.83 if the weight is exp(-alpha_i). These results show that if we assign small weights to words to which GEM assigns high alpha values, the sentence embeddings perform very badly. This phenomenon indicates that the alpha value given by GEM reflects the relative importance level.\", \"and_we_list_some_concrete_examples_of_alpha_value_below\": \"\", \"the_sentence_to_encode\": \"\\u201cThe stock market closes lower on Friday\\u201d\\nAlpha values (sorted) by GEM are: [lower: 4.94505258, stock: 4.93871886, closes: 4.78424269, market: 4.62267853, Friday: 4.51399687, the: 3.75456615, on: 3.70935467]\\nGEM emphasizes informative words like \\u201clower\\u201d and \\u201ccloses\\u201d, and diminishes stop words like \\u201cthe\\u201d and \\u201con\\u201d.\\n\\nWe sincerely look forward to your further feedback and evaluation.\"}", "{\"title\": \"missing a lot of details in the proposed model\", \"review\": \"The paper presented a new training-free way of generating sentence embeddings. The proposed work follows the same motivation as Arora et al. (2017). A systematic analysis has been done on a number of tasks to show the strong performance (close to or higher than that of the specifically \\\"supervised\\\" strategies).\\n\\n- I suggest the authors re-word the category terms for the existing methods. Un-supervised and training-free are confusing. Unsupervised and supervised should both be in a group of training-required methods. Unsupervised in this paper is more task-agnostic but domain-specific, and supervised means extracting sentence embeddings that are prediction-task-specific. \\n\\n- The evaluation tasks are rich but not clearly stated. For instance, the supervised tasks are only discussed at a high level. It is not clear what each task is and how one should interpret the results of each experiment. The way the authors presented it suggests the details here were not important. 
It would also be good to include a discussion of how the baseline algorithms are tuned and/or trained on these tasks. Readers cannot reproduce the same results based on the current paper. \\n\\n- Notation and Math: \\n--r-1 in (4) is not clear, as \\\\mathbf{r} is not defined properly\\n--based on sec 2.2., it is easy to motivate the novelty score from subspace projection rather than QR/GS; \\n-- a_n and a_s are both functions of r_{-1}, which is the perp. energy of the words w.r.t. their contexts. Is there a fundamental difference?\\n-- Figure 1 is a little bit confusing. It is not clear what is a word and what is a sentence/corpus. \\n-- in Eq(8), better not to use r, as it can be confused with the GS coefficients. \\n-- 2.4.1 is a bit confusing: sentence embeddings c_1, \\\\ldots, c_N are introduced, but so far no sentence embedding has been formally introduced. Is this initialized from some heuristic? It is confusing in the sense that in eq (9) the c_s are defined by a_u, but a_u, defined in eq (8), depends on sigma_d, which relies on X^{c}, a function of all the c_s's. \\n-- there are several parameters for GEM; please add some discussion on how these are selected in each of the evaluated tasks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": [\"Yes, I agree that the expected thing GEM/SIF should do is to estimate the principal components on a background context corpus.\", \"But, right now, I believe that the evaluation for GEM (as well as SIF) is at an advantage in comparison to other methods, which don't use information across sentences in the test set.\", \"STSB has a training set (which can be the \\\"background\\\" context corpus), and I think the comparison would be more accurate and fair if the principal components were estimated on the basis of it.\", \"This doesn't take anything away from your particular method, which might be well placed in a scenario where you have the background corpus, but I think it's a good idea for the community to understand this nuance and have fair comparisons.\", \"Regarding two sentences: I might be wrong here, but doesn't the quality of the estimated principal components diminish with the number of data samples (two sentences versus the entire training set previously)?\", \"Thanks a lot again.\"], \"title\": \"Thanks for your prompt and detailed response\"}", "{\"title\": \"For the case of embedding only one sentence\", \"comment\": \"Hello,\\n\\nThanks for reading our paper and your question.\\n\\n1) Yes, you are right. The matrix X^c has the coarse-grained embeddings of sentences in the test set.\\n\\n2) For the corner case of embedding a set of one sentence, GEM still works out with some simple adjustments. Some possible adjustments include: first, one can have a \\\"background\\\" context corpus from the very beginning. For example, if you want to encode a sentence about politics, you can calculate the principal components on a dataset of political articles, which is then regarded as the corpus. Also, in research datasets, the training set can always serve as the corpus for the test set. Second, in real-life engineering, GEM can keep and update a cache of the coarse embeddings of past queries, and this cache can serve as the corpus. The corner case is not explicitly taken care of in the pseudo code in our paper (and the same holds for the SIF paper).\\n\\n3) If you pass in two sentences, the algorithm works fine as usual. 
Also, as pointed out in 2), one can always have a cache of coarse embeddings generated from previous queries, or simply have a background context corpus.\\n\\nThank you.\"}", "{\"title\": \"review of Zero-training Sentence Embedding via Orthogonal Basis\", \"review\": \"Paper overview: This paper proposes a new geometry-based method for sentence embedding from word embedding vectors, inspired by Arora et al (2017). The idea is to quantify the novelty, significance and corpus-wise uniqueness of each word. In order to do so, they analyze geometrically how the word vector of the target word relates to 1) the subspace created by the word-vectors in its context 2) its alignment with the meanings in its context (using SVD) 3) its presence in the whole corpus. For each of these aspects, they output a score or weight. The final sentence representation is a weighted average, using these scores, of the word vectors of the sentence.\", \"remarks_and_questions\": \"1) In table 1, Glove and word2vec are word representations; how is the sentence representation computed here? \\n 2) The authors are not comparing to what is now considered the state-of-the-art methods, such as Quick thoughts vectors (ICLR 2018, 'an efficient framework for learning sentence representations' by Logeswaran et al.), Transformer (Attention is all you need by Vaswani et al.) and ELMo (Deep contextualized word representations, by Peters et al.).\", \"points_in_favor\": \"1) Results: The method gives the best performance among non-training methods, with a +2 point improvement on average, although it cannot beat training methods (see Table 3, for instance). \\n 2) On the result tables, the std should also be reported, not just the average, so the reader can evaluate if the difference between the methods is statistically significant.\\n 3) Inference speed: the method is fast (see table 5) \\n 4) stability of the results: The method is robust to slight changes in the hyperparameters such as the size of the window, number of principal components used, etc (see Fig 2)\", \"points_against\": \"The methods presented in the paper are not novel. The main novelties are the geometrical analysis of the contribution of each word of the sentence to the sentence's overall semantic meaning, and the definition of the scores (eqs 4,6,8) that improve the weighted average sentence representation (eq 9), an idea already present in Arora et al.'s paper.\", \"conclusion\": \"Although the geometric analysis of the paper is interesting, I don't think it is sufficient to justify a paper at ICLR, unless, after comparison with the other methods proposed previously, the proposed model is still competitive and the difference is statistically significant.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Hi! Really interesting work.\\n\\nIf I understand correctly, the principal component computation takes place across sentence embeddings in the test set. So, in particular, for a downstream task like STSB, the matrix X^c would have coarse-grained embeddings of the sentences in the test set, right? (and then the principal components are calculated)\\n\\nI am not saying that this is a good/bad way, and I believe SIF also does the principal component computation in this manner. 
While this cleverly utilizes information across sentences in the test set, I guess this can be a problem when you are given just one sentence at a time and you want to compute its embedding.\\n\\nSo, would the method also work if just two query sentences are passed in and it has to measure the similarity between them?\\n\\nThanks a lot!\", \"title\": \"Regarding the principal component removal\"}", "{\"title\": \"Interesting idea with issues to resolve\", \"review\": \"This is a paper about sentence embedding based on an orthogonal decomposition of the space spanned by word embeddings. Via the Gram-Schmidt process, the sequence of words in a sentence is regarded as a sequence of incoming vectors to be orthogonalized. Each word is then assigned 3 scores: novelty score, significance score, and uniqueness score. Eventually, the sentence embedding is obtained as a weighted average of word embeddings based on those scores. The authors conduct extensive experiments to demonstrate the performance of the proposed embedding. I think the idea of the paper is novel and inspiring. But there are several issues and possible areas to improve:\\n\\n1. What if the length of the sentence is larger than the dimension of the word embedding? Some of the 3 scores will not be well-defined.\\n\\n2. The Gram-Schmidt process is sensitive to the order of the incoming vectors. A well-defined sentence embedding algorithm should not be. I suggest the authors evaluate whether this is an issue. For example, if simply removing an unimportant stop word at the beginning of the sentence changes the sentence embedding drastically, it indicates that the embedding is problematic.\\n\\n3. I\\u2019m confused by the classification between training-free sentence embedding and unsupervised sentence embedding. Don\\u2019t both of them require training word2vec-type embeddings?\\n\\n4. The definition of the three scores seems reasonable, but requires further evidence to justify. For example, by the definition of the scores, do we have any proof that the value of \\\\alpha indeed reflects the relative importance level?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clarification on structure for supervised tasks\", \"comment\": \"Hi,\\n\\nGEM uses a pretty simple and standard structure for supervised tasks, just logistic regression with one hidden layer. In terms of complexity and structure, the architecture we used is the same as SIF and [1]. Most of the evaluations are implemented using the standardized evaluation tool SentEval [2] (we will cite it in the revised version).\", \"reference\": \"[1] Wieting, John, et al. \\\"Towards universal paraphrastic sentence embeddings.\\\" arXiv preprint arXiv:1511.08198 (2015).\\n[2] Conneau, Alexis, and Douwe Kiela. \\\"SentEval: An Evaluation Toolkit for Universal Sentence Representations.\\\" arXiv preprint arXiv:1803.05449 (2018).\"}", "{\"comment\": \"Thanks for your reply! For what it's worth, I've noticed that a lot of these sentence embedding papers use different architectures for supervised tasks. Some use really simple ones, and others more complex ones. 
For example, with uSIF, the architecture was relatively simple, which is probably why the SIF scores it reports were also lower than what was reported in the original SIF paper.\\n\\nIf it's not too much work, I think it'd be worth trying your architecture on some of the other embedding types (SIF, uSIF, InferSent, etc.). I think it'd give us a much better idea of how much of a difference the embeddings are making, as opposed to the architecture.\", \"title\": \"Using the same architecture?\"}", "{\"title\": \"GEM compared with SIF and \\\"Sentences as subspaces\\\"\", \"comment\": \"Thanks for your comment! Our model (GEM) is quite different from both SIF and \\\"Sentences as subspaces\\\", as follows.\\n\\nFirst, compared with SIF, GEM generates the weight for each word in a completely different way. In SIF, the weight is a function of IDF and a hyperparameter. In GEM, the weight is computed by capturing the new semantic meaning brought in by each word (sections 2.2, 2.3, 2.4). What\\u2019s more, the principal component removal method is different in SIF and GEM. GEM proposes sentence-dependent principal component removal (SDR, section 2.5), where the principal components are generated from the coarse-grained sentence embedding matrix (section 2.4.1). In contrast, SIF removes the very same components from each sentence.\\n\\nSecond, we were aware of the \\u201clow-rank subspaces\\u201d paper mentioned in your comment. That paper developed a very interesting method to compare the similarity of a PAIR of sentences, by using the principal angles between two low-rank approximation sentence matrices. We'll cite it in the revised version. It's true that \\u201cSentences as subspaces\\u201d and our method both begin by writing sentences in matrix form. However, GEM is about generating a sentence vector for each sentence, using only the geometric properties within the single sentence being encoded, while the other one is about generating a similarity score for a sentence pair, by comparing two subspaces. And in GEM, the final representation of a sentence is a vector, not a subspace.\"}", "{\"comment\": \"Hello!\\n\\nDo you represent a sentence as a subspace? How are the observations different from https://arxiv.org/abs/1704.05358? How is it different from SIF?\\n\\nCheers!\", \"title\": \"Difference with Sentences as subspaces\"}", "{\"title\": \"Performance compared with uSIF\", \"comment\": \"Thanks for your interest! uSIF is evaluated on SST(80.7), SICK-R(83.8), SICK-E(81.1) and STSB test(79.5) (http://aclweb.org/anthology/W18-3012 ). Our model achieves 84.7, 86.5, 86.2, and 77.5, respectively. We will cite uSIF in the revised version.\"}", "{\"comment\": \"Very interesting paper! I was wondering how your method compared against uSIF (https://github.com/kawine/usif), a variant of SIF with no hyperparameter tuning. uSIF did much better than SIF on the STS tasks, so I'd be interested in seeing how it does against your method here.\", \"title\": \"Evaluation with uSIF?\"}" ] }
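A recurring theme in this thread is where the principal components used in the removal step come from (test set, training set, or a background corpus/cache). Below is a minimal numpy sketch of that step - a simplification of the SIF/GEM-style post-processing for illustration, not the authors' code - showing how even a two-sentence query can be handled once a background matrix supplies the components. The number of removed components and the random toy data are arbitrary assumptions.

```python
import numpy as np

def top_pcs(embs, n_pc=1):
    # Principal directions of an embedding matrix (rows = sentences),
    # via SVD; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(embs, full_matrices=False)
    return Vt[:n_pc]

def remove_pcs(embs, pcs):
    # Subtract each embedding's projection onto the given directions.
    return embs - embs @ pcs.T @ pcs

rng = np.random.default_rng(2)
background = rng.normal(size=(500, 300))   # corpus used to estimate the PCs
queries = rng.normal(size=(2, 300))        # e.g. a single pair of query sentences

pcs = top_pcs(background, n_pc=2)          # the debated choice: where the PCs come from
cleaned = remove_pcs(queries, pcs)         # query embeddings after removal
```

The design point debated above reduces to the choice of `background`: the test set (as evaluated in the paper), the STSB training set (as the commenter suggests), or a cache of past queries (as the authors propose for the one-sentence corner case).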
HkzOWnActX
Model-Agnostic Meta-Learning for Multimodal Task Distributions
[ "Risto Vuorio", "Shao-Hua Sun", "Hexiang Hu", "Joseph J. Lim" ]
Gradient-based meta-learners such as MAML (Finn et al., 2017) are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
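The "modulate its meta-learned prior" step in this abstract is discussed in the reviews below as FiLM-style conditioning on a task embedding. A rough, self-contained PyTorch sketch of that mechanism follows; the layer sizes, the BiGRU pooling, and the toy task format are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FiLMLinear(nn.Module):
    # A linear layer whose output is modulated by task-conditional
    # scale (gamma) and shift (beta) vectors.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)

    def forward(self, x, gamma, beta):
        return gamma * self.fc(x) + beta

d = 8
encoder = nn.GRU(input_size=d + 1, hidden_size=16, bidirectional=True)
to_film = nn.Linear(32, 2 * d)             # task code -> (gamma, beta)
layer = FiLMLinear(d, d)

# A toy "task": K labelled samples (x_k, y_k) stacked along the time axis.
K = 5
task_data = torch.randn(K, 1, d + 1)       # (seq, batch, features)
_, h = encoder(task_data)                  # h: (2, 1, 16), one state per direction
code = h.transpose(0, 1).reshape(1, -1)    # concatenate both directions
gamma, beta = to_film(code).chunk(2, dim=-1)

x = torch.randn(3, d)
out = layer(x, gamma, beta)                # modulated forward pass
```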
[ "Meta-learning", "gradient-based meta-learning", "model-based meta-learning" ]
https://openreview.net/pdf?id=HkzOWnActX
https://openreview.net/forum?id=HkzOWnActX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByglEc1-eE", "HJxPOIRvRm", "BygrA2zZCQ", "r1liF2MbRX", "SkgtIhfW0X", "Bkl3ziz-R7", "rkguWNGZCX", "rke6feGWR7", "HJeH-KpCn7", "SJe7jhshh7", "B1lwX9L537" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544776232314, 1543132783116, 1542692044628, 1542691971291, 1542691921381, 1542691603565, 1542689792430, 1542688789481, 1541490941395, 1541352603200, 1541200414946 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1186/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/Authors" ], [ "ICLR.cc/2019/Conference/Paper1186/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1186/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1186/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a meta-learning algorithm that extends MAML, particularly focusing on multimodal task distributions. The paper is generally well-written, especially with the latest revisions, and the qualitative experiments show some interesting structure recovered. The primary weakness of the paper is that the experiments are largely on relatively simple benchmarks, such as Omniglot and low-dimensional regression problems. Meta-learning papers with convincing results have shown results on MiniImagenet, CIFAR, CelebA, and/or other natural image datasets. Hence, the paper would be more compelling with more difficult experimental settings. In the paper's current form, the reviewers and the AC agree that it does not meet the bar for ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}", "{\"title\": \"Follow-up Before the Rebuttal Deadline\", \"comment\": \"We sincerely appreciate the constructive reviews provided by all reviewers. We addressed the concerns in our response and revision. We believe our contributions toward multimodal model-agnostic meta-learning are solid. We would like to kindly ask the reviewers to let us know if there is any further comment towards our revised paper, and we wish to address it before the end of the rebuttal period. Thanks!\"}", "{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their constructive feedback. We revised our paper based on reviewers' suggestions and we believe that we addressed most of the concerns. We appreciate the reviewers spending time reading the revised paper in advance. We are more than happy to address further concerns before the end of the rebuttal deadline. Please do not hesitate to let us know for any additional comments on the paper so that we can further improve the paper.\"}", "{\"title\": \"Response to AnonReviewer1 (Part 3/3)\", \"comment\": \"> The terminology of \\\"task distribution\\\" and \\\"modes\\\" thereof is used without introduction in the introduction section. The terminology \\\"model-based meta-learning/adaptation\\\" and \\\"gradient-based meta-learning/adaptation\\\" is also used without introduction here. This makes the introduction unnecessarily opaque. 
Consider the reader who is not familiar with meta-learning papers; they would have a very hard time parsing, for example, the phrase \\\"...this not only requires additional identity information about the modes, which is not always available or is ambiguous when the modes are not clearly disjoint...\\\" (pg. 1).\\n\\nWe revised the introduction and writing to be clearer about the terms we use. The first time the terms \\u201cmodel-based meta-learning\\u201d and \\u201cgradient-based meta-learning\\u201d are used, they are introduced with one-sentence summaries of their meanings and references to the literature.\\n\\n> Further, the terminology \\\"model-based\\\" seems non-standard, and is aliased with the term model-based reinforcement learning (which specifically refers to the set of RL algorithms that make use of a \\\"model\\\" of transition dynamics). Since the paper tackles a reinforcement learning benchmark, this may lead to some confusion.\\n\\nIn the original paper, the term \\u201cmodel-based meta-learning\\u201d is explained when it is first mentioned in the introduction, and this line of work is presented in the related work section. We believe its meaning is clear in the paper.\\n\\n> The paper needs to be checked over for English grammar and style.\\n> everywhere: \\\"task specific\\\" -> task-specific\\n> pg. 3: \\\"relevant but vaguely related skills\\\" this is imprecise\\n> pg. 3; \\\"our model does not maintain an internal state\\\" Is the task representation/embedding not an internal state?\\n> pg. 3: The episodic training setup, which is standard to meta-learning setups, could be much better described. The MAML algorithm could be given better intuition.\\n> Algorithm 1: \\\"infer\\\" is a misuse of terminology that usually refers to an operation in latent variable probabilistic modeling. Since the computation of \\\\tau is purely feedforward, I recommend writing \\\"compute.\\\"\\n> \\\\tau should be used in some places where v is used instead\\n\\nWe appreciate the reviewer\\u2019s advice. We revised the paper to address these points.\\n\\n[1] Finn et al. \\u201cModel-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\\u201d, ICML 2017\\n[2] Finn et al. \\u201dProbabilistic Model-Agnostic Meta-Learning\\u201d, NIPS 2018\\n[3] Kim et al. \\u201cBayesian Model-Agnostic Meta-Learning\\u201d, NIPS 2018\\n[4] Lee and Choi \\u201cGradient-Based Meta-Learning with Learned Layerwise Metric and Subspace\\u201d, ICML 2018\\n[5] Grant et al. \\u201cRecasting Gradient Based Meta-Learning as Hierarchical Bayes\\u201d, ICLR 2018\\n[6] Nichol et al. \\u201cReptile: a Scalable Meta-learning Algorithm\\u201d, arXiv 2018\\n[7] Zaheer et al. \\u201cDeep Sets\\u201d, NIPS 2017\"}", "{\"title\": \"Response to AnonReviewer1 (Part 2/3)\", \"comment\": \"> The motivation for the particular form of the task embedding computation is not given. What were the other options? Why not, for example, an order-invariant function instead of a bidirectional GRU?\\n\\nIn fact, the bidirectional GRU (BiGRU) in our model can be considered a superset of the class of order-invariant functions. According to [7], an order-invariant function that takes a set/bag of data would have the following form: \\n(1) A transformation applied to each data instance to obtain an instance embedding\\n(2) A second, different transformation applied to the sum of those instance embeddings to obtain a holistic embedding that represents all data in the set.\\nIn this case, our BiGRU first embeds instances into a feature representation. 
Though there is a dependency on previous and later instances, it could be set to zero through learning and therefore result in an instance-wise embedding function like (1). Next, we average the output of the BiGRU and perform an additional linear transformation to compute \\tau, which is the same as (2). \\nTherefore, we would like to emphasize that by using a BiGRU, our model-based meta-learner can be order-invariant or order-dependent based on the underlying structure of the training data, which provides more flexibility. The detailed discussion is included in the revised paper.\\n\\n> In all of the experiments, there is no appropriate baseline that keeps the parameter dimensionality constant, so it is unclear whether the (marginal) improvement in performance is due to added expressivity by adding more parameters rather than an algorithmic improvement. I suggest an ensembling baseline with an appropriate number of ensemble members.\\n\\nWe experimented with adding layers and adding more units to the MAML layers, and it did not improve performance on the studied problems.\\n\\n> There is no evaluation on a standard benchmark for few-shot classification (miniImageNet), and the Omniglot improvement is small.\\n\\nThe fact that miniImageNet has higher-dimensional visual data (much larger images) compared to Omniglot makes it much more difficult to learn good task embeddings using our simple task embedding network (linear projection + BiGRU). We performed additional experiments on miniImageNet during the rebuttal and found that the few-shot learning result of our model is only on par with MAML. We believe that with a better-tuned convolutional structure for learning task embeddings from high-dimensional visual data, our model could potentially improve.\\n\\n> The reinforcement learning comparison at some point compares MUMOMAML with modulation applied (therefore with access to task-specific data) to MAML with no adaptation (and therefore no access to task-specific data). This is not entirely fair.\\n\\nIn the figures we show the modulated MuMoMAML and MAML at the same step to align the gradient update steps of both algorithms, but the intention was not to compare the modulated model to the unadapted MAML. We clarified this in the caption and the text.\\n\\n> tSNE results can be misleading (e.g., see https://distill.pub/2016/misread-tsne/), and the task delineation is not extremely clean.\\n\\nWe would like to emphasize that the structure shown in our embedding visualization is consistent across a range of perplexity choices. In fact, although the distill article argues that \\\"random embedding can have some non-random sub-cluster structure\\\", it is hard to obtain clear linear structures (like in our case) consistently across different choices of perplexity. \\n\\n> Everywhere: The \\\"prior\\\" referred to in this paper is not a prior in the Bayesian sense. I suggest a more careful use of terminology.\\n\\nWe thank the reviewer for the feedback and agree that a general use of \\\"prior\\\" can be confusing. It is worth noting that here the \\\"prior\\\" is with respect to the target task, which the model has not seen or learned from before modulation or gradient-based optimization. Thus the model is, in fact, a \\\"prior\\\" model with respect to the target task. We revised the paper to make this clearer.\\n\\n> Abstract: \\\"augment existing gradient-based meta-learners\\\" You augment a specific variant of gradient-based meta-learning, MAML.\\n\\nWe updated the abstract to make this clear. 
We believe this modification can be applied to the family of gradient-based meta-learners that seek a parameter initialization and perform gradient steps to adapt to tasks.\"}", "{\"title\": \"Response to AnonReviewer1 (Part 1/3)\", \"comment\": \"We thank the reviewer for the feedback and address the concerns in detail below.\\n\\n> The motivation of uncovering latent modes of a task distribution does not align with the proposed method. The algorithm computes a continuous representation of the data from a task (which is fixed during gradient-based fast adaptation). The mode identity, on the other hand, should be a discrete variable. Such a discrete variable is never explicitly computed in the proposed method.\\n\\nThe reason we selected a continuous representation for the task identity vector is that we believe many task distributions of interest have richer structure than a distribution with clearly disjoint modes. For example, the task distribution studied in the regression experiment has multimodal structure, but the functions are parameterized in such a way that the sampled functions interpolate the modes densely (i.e. the observed data points of a quadratic function can sometimes be very similar to a sinusoidal function, as shown in Figures 7 and 8 in the paper). We believe it is a strength of our proposed method that the identity vector can help interpolate modes in a task space where some tasks fall in between the modes, as pointed out by R2. We revised the paper to better reflect this idea so that discrete mode identification is not implied.\\n\\n> The technical writing is unclear and jargon is often used without definition. Importantly, one of the central motivators of the paper, \\\"task modulation\\\", is never given a precise definition.\\n\\nWe revised the Introduction, Preliminaries and Method sections to improve the language and make them easier to follow. The term \\\"task modulation\\\" is not mentioned in the original paper. To address the reviewer\\u2019s concern, we provide a clear description of the term \\u201cmodulation\\u201d where it is mentioned.\\n\\n> The standard few-shot classification task (Omniglot) does not clearly consist of a task distribution that is multimodal, so the method is not well-motivated in this setting.\\n\\nThe value of comparing the proposed model against baselines on few-shot image classification is to verify whether the proposed model can achieve better performance when the task modes are not obvious. This intuition is stated in section 5 in the original paper.\\n\\n> The paper neglects to discuss how the proposed method could be used in the context of other methods for \\\"gradient-\\nbased meta-learning\\\" such as Ravi & Larochelle (2016). I believe the attention-based modulation and the FiLM modulation could be easily adapted to that setting. Why was this not discussed or evaluated?\\n\\nWe focus on improving the family of gradient-based meta-learners which seek a common initialization for the parameters to enable fast adaptation within a few gradient steps, including [1-6]. While Ravi & Larochelle (2016), which is included in the original paper under the category of optimization-based meta-learners, proposes to update the meta-learner with gradients by learning update rules (as mentioned in section 2), it does not seek a parameter initialization. 
As a result, it does not suffer from the issue that we aim to solve here, and therefore we do not see an obvious way to augment this work with our proposed model.\\n\\n> Conditioning has been used in the context of few-shot learning before, but this is not discussed (https://arxiv.org/abs/1805.10123, https://arxiv.org/abs/1806.07528).\\n\\nWe appreciate the suggestion and we have included the references in the revised paper.\\n\\n> The paper often confounds task representation with neural network parameter values. For example, Figure 1 depicts the adaptation of parameter values with gradients (\\\\nabla L), yet the caption describes \\\"task modes.\\\" More careful writing would disentangle these two components.\\n\\nWe determined that Figure 1 (a) was perhaps not fulfilling its purpose of providing an approachable visual overview of the method, and removed it to avoid further confusion. We revised the writing in the paper to better separate the concepts of parameters and task representation. Specifically, we changed the language so that \\\\tau is not called a parameter vector, but rather a \\u201cmodulation vector\\u201d, to reflect the fact that it is computed by the model.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the feedback and address the concerns in detail below.\\n\\n> The experiment on few-shot image classification is less convincing, with results only on the Omniglot dataset, which are only comparable to those of existing methods that are designed for a single task distribution. Why not show results on MiniImageNet or other more realistic datasets which are more likely to be multimodal?\\n\\nThe fact that miniImageNet has higher-dimensional visual data (much larger images) compared to Omniglot makes it much more difficult to learn good task embeddings using our simple task embedding network (linear projection + BiGRU). We performed additional experiments on miniImageNet during the rebuttal and found that the few-shot learning result of our model is only on par with MAML. We believe that with a better-tuned convolutional structure or pre-trained feature extractors (as a practice suggested in [1,2]) for learning task embeddings from high-dimensional visual data, our model could potentially improve.\\n\\n> It is not clear how the idea of modulation works for multimodal meta-learning. More discussions and insights can be helpful.\\n\\nOur intuition is that each block (i.e. a channel of a convolutional layer or a neuron of a fully-connected layer) of the gradient-based meta-learner network should learn to be specialized in different meta-learning tasks. To utilize this specialization to ensemble a powerful meta-learner, we apply the modulation block-wise to activate or deactivate the units of a block by estimating the mode of the task using the model-based meta-learner. In other words, we propose to first select a relevant subnetwork based on a given task to enable efficient adaptation. We revised the paper to make this intuition clear.\\n\\n> The encoding of a task relies on the order of examples, which seems undesirable for a classification or regression problem.\\n\\nIn fact, the bidirectional GRU (BiGRU) in our model can be considered a superset of the class of order-invariant functions. 
According to [3], an order-invariant function that takes a set/bag of data would have the following form: \\n(1) A transformation applied to each data instance to obtain an instance embedding\\n(2) A second, different transformation applied to the sum of those instance embeddings to obtain a holistic embedding that represents all data in the set.\\nIn this case, our BiGRU first embeds instances into a feature representation. Though there is a dependency on previous and later instances, it could be set to zero through learning and therefore result in an instance-wise embedding function like (1). Next, we average the output of the BiGRU and perform an additional linear transformation to compute \\\\tau, which is the same as (2). \\nTherefore, we would like to emphasize that by using a BiGRU, our model-based meta-learner can be order-invariant or order-dependent based on the underlying structure of the training data, which provides more flexibility. The detailed discussion is included in the revised paper. \\n\\n[1] Oreshkin et al. \\u201cTask dependent adaptive metric for improved few-shot learning\\u201d, NIPS 2018\\n[2] Qiao et al. \\u201cFew-Shot Image Recognition by Predicting Parameters from Activations\\u201d, CVPR 2018\\n[3] Zaheer et al. \\u201cDeep Sets\\u201d, NIPS 2017\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the feedback and address the concerns in detail below.\\n\\n> The novelty of the paper seems to be the combinations of MAML and FiLM, which seems a bit limited.\\n\\nWe propose a previously under-explored problem of enabling a family of meta-learners that seek a parameter initialization [1-6] to deal with multimodal task distributions. Also, we empirically demonstrate a limitation of a state-of-the-art gradient-based meta-learner (MAML) - it is impractical to find a single initialization that allows a network to adapt to diverse tasks with few gradient steps, which is also pointed out by R3. We then present a framework together with a training algorithm that aims to alleviate the problem. We propose to identify the mode of a task and activate or deactivate some parts of a network to enable efficient adaptation with few gradient steps. We experiment with different modulation methods (e.g. attention mechanisms with sigmoid and softmax) and show promising results in section 5.1, so the approach is not limited to FiLM.\\n\\n> I wonder whether the proposed method is mostly useful when there is a clear mode difference as in the synthetic regression/RL tasks of the paper. \\n\\nThe results presented in the paper demonstrate that our model can deal with overlapping modes. Specifically, in the regression experiments reported in the paper, the parameter ranges of the functions in the dataset allow for the generation of functions that are indistinguishable from functions of a different class. That is, a sinusoid with a low enough frequency will look like a linear function in the given coordinate range, especially considering that noise is added to the observation. Also, in the 2D navigation environment, sampled goals can appear between the mode centers. \\n\\n> What's the results on the mini-Imagenet? The Omniglot seems to be saturated already.\\n\\nThe fact that miniImageNet has higher-dimensional visual data (much larger images) compared to Omniglot makes it much more difficult to learn good task embeddings using our simple task embedding network (linear projection + BiGRU). 
We performed additional experiments on miniImageNet during the rebuttal and found that the few-shot learning result of our model is only on par with MAML. We believe that with a better-tuned convolutional structure or pre-trained feature extractors (as a practice suggested in [7,8]) for learning task embeddings from high-dimensional visual data, our model could potentially improve.\\n\\n> Why tau is not updated in the inner loop of Algorithm 1?\\n\\n\\\\tau is not updated in the outer or inner loop, because it is not a parameter of our model; instead, it is a set of vectors produced by our model-based meta-learner to activate or deactivate some parts of the gradient-based meta-learner according to the estimated task mode. Therefore, only the parameters of the gradient-based meta-learner (\\\\theta) are updated in the inner loop training. We revised the paper to make this clear.\\n\\n> In page 5, in 'based on the input data samples and then infers the parameter to modulate the prior model', what does the `input data samples' refers to? Is it the training data of a meta-learning task? \\n\\nYes. Input data samples (x_1, y_1, \\u2026, x_K, y_K) form a meta-learning task. We revised the paper to make this clear.\\n\\n> Do you stop gradient to the learner in MUMOMAML?\\n\\nThe gradients of the loss for a single task, which are computed in the inner loop of the algorithm, are only used for adapting the parameters of the gradient-based meta-learner. So in this sense, the inner-loop gradients are stopped with respect to the model-based meta-learner. The outer-loop loss is used to compute gradients with respect to the initial parameters of the gradient-based meta-learner and the parameters of the model-based meta-learner.\\n\\nIn [1] they experiment with \\u201cfirst-order MAML\\u201d by stopping the gradient through the inner loop update procedure when updating the initial parameters. As [1] does not report a significant difference, we do not do this.\\n\\n> page 4, 'in to' -> 'into'\\n\\nWe appreciate the reviewer pointing this out. We revised the paper accordingly.\\n\\n[1] Finn et al. \\u201cModel-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\\u201d, ICML 2017\\n[2] Finn et al. \\u201dProbabilistic Model-Agnostic Meta-Learning\\u201d, NIPS 2018\\n[3] Kim et al. \\u201cBayesian Model-Agnostic Meta-Learning\\u201d, NIPS 2018\\n[4] Lee and Choi \\u201cGradient-Based Meta-Learning with Learned Layerwise Metric and Subspace\\u201d, ICML 2018\\n[5] Grant et al. \\u201cRecasting Gradient Based Meta-Learning as Hierarchical Bayes\\u201d, ICLR 2018\\n[6] Nichol et al. \\u201cReptile: a Scalable Meta-learning Algorithm\\u201d, arXiv 2018\\n[7] Oreshkin et al. \\u201cTask dependent adaptive metric for improved few-shot learning\\u201d, NIPS 2018\\n[8] Qiao et al. \\u201cFew-Shot Image Recognition by Predicting Parameters from Activations\\u201d, CVPR 2018\"}", "{\"title\": \"Layer-wise conditioning via task-embedding for meta-learning\", \"review\": [\"Strengths:\", \"The paper identifies a valid limitation of the MAML algorithm: With a limited number of gradient descent steps from a single initialization, there is a limit to the ability of a fixed-size neural network to adapt to tasks sampled from a diverse dataset.\", \"The tSNE plots show some preliminary interesting structure for the simple regression and RL tasks, but not for the classification task.\"], \"weaknesses\": [\"The motivation of uncovering latent modes of a task distribution does not align with the proposed method. 
The algorithm computes a continuous representation of the data from a task (which is fixed during gradient-based fast adaptation). The mode identity, on the other hand, should be a discrete variable. Such a discrete variable is never explicitly computed in the proposed method.\", \"The technical writing is unclear and jargon is often used without definition. Importantly, one of the central motivators of the paper, \\\"task modulation\\\", is never given a precise definition.\", \"The standard few-shot classification task (Omniglot) does not clearly consist of a task distribution that is multimodal, so the method is not well-motivated in this setting.\", \"Experimental conclusions are weak.\"], \"major_comments\": [\"The paper neglects to discuss how the proposed method could be used in the context of other methods for \\\"gradient-based meta-learning\\\" such as Ravi & Larochelle (2016). I believe the attention-based modulation and the FiLM modulation could be easily adapted to that setting. Why was this not discussed or evaluated?\", \"Conditioning has been used in the context of few-shot learning before, but this is not discussed (https://arxiv.org/abs/1805.10123, https://arxiv.org/abs/1806.07528).\", \"The paper often confounds task representation with neural network parameter values. For example, Figure 1 depicts the adaptation of parameter values with gradients (\\\\nabla L), yet the caption describes \\\"task modes.\\\" More careful writing would disentangle these two components.\", \"The motivation for the particular form of the task embedding computation is not given. What were the other options? Why not, for example, an order-invariant function instead of a bidirectional GRU?\", \"In all of the experiments, there is no appropriate baseline that keeps the parameter dimensionality constant, so it is unclear whether the (marginal) improvement in performance is due to added expressivity by adding more parameters rather than an algorithmic improvement. I suggest an ensembling baseline with an appropriate number of ensemble members.\", \"There is no evaluation on a standard benchmark for few-shot classification (miniImageNet), and the Omniglot improvement is small.\", \"The reinforcement learning comparison at some point compares MUMOMAML with modulation applied (therefore with access to task-specific data) to MAML with no adaptation (and therefore no access to task-specific data). This is not entirely fair.\", \"tSNE results can be misleading (e.g., see https://distill.pub/2016/misread-tsne/), and the task delineation is not extremely clean. I would be more convinced if a clustering algorithm were applied.\"], \"minor_comments\": [\"The paper needs to be checked over for English grammar and style.\", \"everywhere: The \\\"prior\\\" referred to in this paper is not a prior in the Bayesian sense. I suggest a more careful use of terminology.\", \"abstract: \\\"augment existing gradient-based meta-learners\\\" You augment a specific variant of gradient-based meta-learning, MAML.\", \"pg. 1: \\\"carve on a snowboard\\\" don't know what this means\", \"The terminology of \\\"task distribution\\\" and \\\"modes\\\" thereof is used without introduction in the introduction section. The terminology \\\"model-based meta-learning/adaptation\\\" and \\\"gradient-based meta-learning/adaptation\\\" is also used without introduction here. This makes the introduction unnecessarily opaque. 
Consider the reader who is not familiar with meta-learning papers; they would have a very hard time parsing, for example, the phrase \\\"...this not only requires additional identity information about the modes, which is not always available or is ambiguous when the modes are not clearly disjoint...\\\" (pg. 1).\", \"Further, the terminology \\\"model-based\\\" seems non-standard, and is aliased with the term model-based reinforcement learning (which specifically refers to the set of RL algorithms that make use of a \\\"model\\\" of transition dynamics). Since the paper tackles a reinforcement learning benchmark, this may lead to some confusion.\", \"pg. 3; \\\"our model does not maintain an internal state\\\" Is the task representation/embedding not an internal state?\", \"pg. 3: \\\"relevant but vaguely related skills\\\" this is imprecise\", \"pg. 3: The episodic training setup, which is standard to meta-learning setups, could be much better described. The MAML algorithm could be given better intuition.\", \"everywhere: \\\"task specific\\\" -> task-specific\", \"Algorithm 1: \\\"infer\\\" is a misuse of terminology that usually refers to an operation in latent variable probabilistic modelling. Since the computation of \\\\tau is purely feedforward, I recommend writing \\\"compute.\\\"\", \"\\\\tau should be used in some places where v is used instead\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Why modulation works for meta-learning\", \"review\": \"This paper presents an interesting meta-learning algorithm that can learn from multimodal task distributions, by combining model-based and gradient-based meta-learning. It first represents a task with a latent feature vector produced by a recurrent network, and then modulates the meta-learned prior with this task-specific latent feature vector before applying gradient-based adaptation. Experimental results are shown to validate the proposed algorithm. While the idea appears to be quite novel for meta-learning, further efforts are needed to improve this work.\\n\\n1. The experiment on few-shot image classification is less convincing, with results only on the Omniglot dataset, which are only comparable to those of existing methods that are designed for a single task distribution. Why not show results on MiniImageNet or other more realistic datasets which are more likely to be multimodal? \\n\\n2. It is not clear how the idea of modulation works for multimodal meta-learning. More discussions and insights can be helpful.\\n\\n3. The encoding of a task relies on the order of examples, which seems undesirable for a classification or regression problem.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Official review\", \"review\": \"\", \"mumomaml\": \"Model-Agnostic Meta-Learning for Multimodal Task Distributions\\n\\nThis paper proposes multi-modal MAML, which alleviates the single-initialization limitation of MAML by modulating the task prior of MAML. Below are some comments.\", \"pros\": \"1. Overall, the paper is clearly written. \\n2. By using modulation, there is no need to explicitly control/know the number of modes in advance.\\n3. The multi-MAML baseline is good for an ablation study, though it is only on a synthetic regression task.\\n4. 
MUMOMAML combines the strengths of both gradient-based and model-based meta-learners.\\n\\nCons.\\n1. The novelty of the paper seems to be the combination of MAML and FiLM, which seems a bit limited.\\n2. I wonder whether the proposed method is mostly useful when there is a clear mode difference as in the synthetic regression/RL tasks of the paper. Moreover, the paper shows tasks with only two to three modes; what happens when there is a large number of modes?\\n3. What are the results on mini-ImageNet? Omniglot seems to be saturated already.\\n4. Why is tau not updated in the inner loop of Algorithm 1?\", \"minor\": \"1. page 4, 'in to' -> 'into'\\n2. On page 5, in 'based on the input data samples and then\\ninfers the parameter to modulate the prior model', what does `input data samples' refer to? Is it the training data of a meta-learning task?\\n3. Do you stop the gradient to the learner in MUMOMAML?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
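Two questions recur across these reviews: why tau is not updated in the inner loop, and where gradients are stopped. A toy second-order-MAML-style sketch of that loop structure follows - made-up shapes and a toy scalar loss, an illustration of the described scheme rather than the authors' code. Tau (here a sigmoid gate standing in for the modulation vectors) is computed once per task and held fixed, the inner loop adapts only theta, and the outer step trains both the initialization and the task encoder.

```python
import torch

torch.manual_seed(0)
d = 8
theta0 = torch.randn(d, requires_grad=True)     # meta-learned initialization
enc_w = torch.randn(d, requires_grad=True)      # task-encoder parameters
meta_opt = torch.optim.SGD([theta0, enc_w], lr=1e-2)
alpha, inner_steps = 0.1, 3

def task_loss(params, data):
    return ((params * data).sum() - 1.0) ** 2   # toy per-task loss

for _ in range(100):
    support, query = torch.randn(d), torch.randn(d)
    # Model-based step: compute the modulation once from task data.
    gamma = torch.sigmoid(enc_w * support)      # stands in for tau; never inner-updated
    theta = theta0
    for _ in range(inner_steps):                # inner loop adapts theta only
        g, = torch.autograd.grad(task_loss(gamma * theta, support),
                                 theta, create_graph=True)
        theta = theta - alpha * g
    outer_loss = task_loss(gamma * theta, query)
    meta_opt.zero_grad()
    outer_loss.backward()                       # updates theta0 AND enc_w
    meta_opt.step()
```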
H1z_Z2A5tX
DON’T JUDGE A BOOK BY ITS COVER - ON THE DYNAMICS OF RECURRENT NEURAL NETWORKS
[ "Doron Haviv", "Alexander Rivkind", "Omri Barak" ]
To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories. Consequently, RNNs are harder to train than their feedforward counterparts, prompting the development of dedicated units such as LSTM and GRU, as well as a handful of training tricks. In this paper, we investigate the effect of different training protocols on the representation of memories in RNNs. While reaching similar performance for different protocols, RNNs are shown to exhibit substantial differences in their ability to generalize for unforeseen tasks or conditions. We analyze the dynamics of the network’s hidden state, and uncover the reasons for this difference. Each memory is found to be associated with a nearly steady state of the dynamics whose speed predicts performance on unforeseen tasks and which we refer to as a ’slow point’. By tracing the formation of the slow points we are able to understand the origin of differences between training protocols. Our results show that multiple solutions to the same task exist but may rely on different dynamical mechanisms, and that training protocols can bias the choice of such solutions in an interpretable way.
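The "slow points" and their "speed" in this abstract correspond to a standard numerical recipe: gradient descent on the speed of the hidden-state map, in the spirit of Sussillo & Barak's fixed-point finding. A minimal numpy sketch follows, with a toy vanilla-tanh RNN standing in for the trained GRU/LSTM; the dimensions, learning rate, and numerical gradient are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))

def step(h):
    return np.tanh(W @ h)                  # autonomous dynamics (zero input)

def speed(h):
    return 0.5 * np.sum((step(h) - h) ** 2)

def speed_grad(h, eps=1e-6):               # numerical gradient, for brevity
    g = np.zeros_like(h)
    for i in range(N):
        e = np.zeros(N); e[i] = eps
        g[i] = (speed(h + e) - speed(h - e)) / (2 * eps)
    return g

h = rng.normal(size=N)                      # init from a state visited during a delay
for _ in range(2000):
    h -= 0.05 * speed_grad(h)               # descend the speed landscape
print("slow-point speed:", speed(h))        # near zero => nearly steady state
```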
[ "dynamics", "recurrent neural networks", "judge", "book", "cover", "rnns", "memories", "unforeseen tasks", "training protocols", "effective" ]
https://openreview.net/pdf?id=H1z_Z2A5tX
https://openreview.net/forum?id=H1z_Z2A5tX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkxgPvRRyN", "Ske-B1trJE", "SyejZFu5Am", "SkxRh_O9A7", "ryxxYu_5A7", "BylAb_uq07", "ryxm9jKHpX", "HyxmF2QohQ", "rylXT7xO2m", "BkeNoZL-om" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544640344247, 1544027961195, 1543305475222, 1543305398210, 1543305335527, 1543305221615, 1541933963126, 1541254267234, 1541043130579, 1539559836233 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1185/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/Authors" ], [ "ICLR.cc/2019/Conference/Paper1185/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1185/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1185/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper analyses the dynamics of RNNs, cq GRU and LSTM.\\n\\nThe paper is mostly experimental w.r.t. the difficulty of training RNNs; this is also caused by the fact that the theoretical foundations of the paper seem not to be solid enough. Experimentation with CIFAR10 is not completely stable.\\n\\nThe review results make the paper balance at the middle. The merit of the paper for the greater community is doubted, in its current form.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"borderline\"}", "{\"title\": \"Response to Reviewer 3: post-rebuttal comments\", \"comment\": \"We thank the referee for a prompt response and constructive comments.\", \"regarding_the_anomaly_revealed_with_the_gru_on_cifar_10\": \"further investigation of this case shows that increasing regularization leads to DeCu outperforming VoCu, as in all other scenarios.\\n\\nAs the referee requested, we will present more bifurcation portraits in the final submission.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"(New version uploaded, including improved clarity, CIFAR-10, LSTM, new bifurcation analysis)\\n\\nWe thank the reviewer for the detailed review and suggestions.\\n\\nFollowing the comments from all reviewers, we have clarified our main messages.\", \"our_main_results_are\": \"1.1) It is known and we also show: Different curricula can lead to the same performance. \\n1.2) Despite that, when extrapolating the task to new settings, differences between networks trained with these curricula arise.\\n2) The source of these differences can be traced to dynamical objects formed through training - in our case slow points.\\n3) Analyzing these slow points in the nominal task can predict behavior on the extrapolated task\\n4) Tracing the formation of these slow points through learning provides a link between training, slow point formation, stability of memories and performance\", \"answers_to_specific_comments_below\": \"(C - reviewer comment, R - response)\\n\\nC1) Firstly, only one task is used - based on object classification with images - so it is unclear how generalisable these findings are, given that the authors' setup could be extended to cover at least another task, or at least another dataset. MNIST is a sanity check, and many ideas may fail to hold when extended to slightly more challenging datasets like CIFAR-10. 
\\n\\nR1) We extended the setup to cover another dataset: CIFAR-10. \\n\\nC2) Secondly, as far as I can tell, the results are analysed on one network per setting, so it is hard to tell how significant the differences are. While some analyses may only make sense for single networks, e.g. Figure 3, a proper quantification of some of the results over several training runs would be appropriate. \\n\\nR2) We repeated the analysis for 20 realizations of MNIST (10 with GRU and 10 with LSTM), and 10 realizations of CIFAR-10 (5 GRU, 5 LSTM). All figures now include error bars.\\n\\n\\nC3) Finally, it is worth investigating LSTMs on this task. This is not merely because they are more commonly used than GRUs, but they are strictly more powerful - see \\\"On the Practical Computational Power of Finite Precision RNNs for Language Recognition\\\", published at ACL 2018. Given the results in this paper and actually the paper that first introduces the forget gate for LSTMs, it seems that performing these experiments solely with GRUs might lead to wrong conclusions about RNNs in general.\\n\\nR3) We repeated all the experiments on LSTM, finding qualitatively similar phenomena. \\n\\nC4) There are also more minor spelling and grammatical errors throughout the text that should be addressed. For example, there is a typo on the task definition on page 2 - \\\"the network should *output* a null label.\\\" \\nR4) We fixed this and many other typos and grammatical errors.\"}", "{\"title\": \"Response to Reviewer 1 - Part 2\", \"comment\": \"C8) Does the backtracking fixed/slow point algorithm assume that the location of the fixed point does not change through training?\\nR8) It assumes that it doesn't change a lot. This is a reasonable assumption in numerical continuation - which is what we do here. We also state this assumption explicitly in the text now. This assumption can break down near bifurcations, which is exactly what happens in VoCu - and we now have a new part in the text that analyzes these bifurcations, linking them to changes in performance.\\n\\nC9) Wouldn't it make more sense to investigate the back-projection of desired output at each training step? \\nR9) The new short backtracking algorithm combines back-projection with backtracking. After learning a new class, it makes more sense to do the gradient descent at the relevant training step. When we want to study the emergence of a slow point, however, backtracking is needed. In the new part of the text dealing with bifurcations, we show how this method can reveal which existing slow points give rise to new ones when a class is learned.\\n\\nC10) PTMT, PMTP, and TaCu are not described well in the main text. \\nR10) We removed these protocols, as they did not contribute much to the main message.\\n\\nC11) The phrase 'basin of attraction' is loosely used in a couple of places. \\nR11) A slow point can also have a region of attraction, albeit in the shape of a saddle rather than a basin. Eventually, the dynamics will drift away from this slow point - but there is an area of phase space that will initially lead to the slow point. We removed or modified places where this term was used inappropriately.\\n\\nC12) Fig 4 is not very informative. \\nR12) We replaced this with a more informative figure showing the dependence of accuracy on the speed of the relevant slow point, rather than just the speed itself.\\n\\nC13) Fig 5 is too small! 
\\nR13) It was enlarged, and new panels were added to address the bifurcation analysis, revealing the root cause of the glitches in speed depicted in the original submission.\\n\\nC14) page 2: input a null label -> output a null label \\nR14) fixed.\\n\\nC15) it would be interesting to see how general those findings are on other tasks, e.g., n-back task with MNIST. \\nR15) We thank the reviewer for proposing this task. Since some of the key aspects of this study - in particular, and crucially, time generalization for unforeseen delays - are not easily extendable to this case, we decided to leave this for future work.\"}", "{\"title\": \"Response to Reviewer 1 - Part 1\", \"comment\": \"(New version uploaded, including improved clarity, CIFAR-10, LSTM, new bifurcation analysis)\\n\\nWe thank the reviewer for the positive review, as well as for the detailed comments and suggestions.\\n\\nFollowing the comments from all reviewers, we have clarified our main messages.\", \"our_main_results_are\": \"1.1) It is known and we also show: Different curricula can lead to the same performance. \\n1.2) Despite that, when extrapolating the task to new settings, differences between networks trained with these curricula arise.\\n2) The source of these differences can be traced to dynamical objects formed through training - in our case slow points.\\n3) Analyzing these slow points in the nominal task can predict behavior on the extrapolated task\\n4) Tracing the formation of these slow points through learning provides a link between training, slow point formation, stability of memories and performance\", \"answers_to_specific_comments_below\": \"(C - reviewer comment, R - response)\\n\\nC1) I find the title not very informative. Connection from 'Book' to 'Curriculum' is weak.\\nR1) Our main point was that although different protocols lead to the same performance in the nominal settings, their internal dynamics \\u201cunder the hood\\u201d are different - hence the proverb.\\n\\n\\nC2) The task does not have inherent structure that requires stable fixed points to solve. In fact, since it only requires a maximum of 19 time frames, it could come up with weird strategies. Since the GRU-RNN is highly flexible, there would be many solutions. The particular strategy that was learned depends on the initial network and training strategy.\\nR2) Indeed, for short delays, transients can suffice. But the variable delay is expected to encourage a fixed point solution (Orhan and Ma, bioRxiv 2018). In the revised version, we also explicitly state several possible strategies. Our result shows that indeed different training strategies can lead to different solutions.\\n\\nC3) How repeatable were these findings? I do not see any error bars in Fig 2 nor table 1.\\nR3) We repeated the analysis for 20 realizations of MNIST (10 with GRU and 10 with LSTM), and 10 realizations of CIFAR-10 (5 GRU, 5 LSTM). All figures now include error bars.\\n\\nC4) How sensitive is this to the initial conditions? If you use the VoCu trained network as initial condition for a DeCu training, does it tighten the sloppy memory structure and make it more stable?\\nR4) Training with VoCu and subsequently with DeCu is similar to giving VoCu more training time at the final stage, with all classes introduced. As we discussed in the main text, such additional training does not enhance performance. 
Regarding sensitivity to initial conditions, we performed our analysis for several initialisations of each setting and all results between settings were alike.\\n\\nC5) I liked the Fig 2b manipulation to inject noise into the hidden states.\\nR5) Thanks!\\n\\nC6) English can be improved in many places. \\nR6) We edited the text, and hope that the English has been improved.\\n\\nC7) Algorithm 1 is not really a pseudo-code. I think it might be better to just describe it in words. This format is unnecessarily confusing and hard to understand. \\nR7) We described the algorithm (and the new short back-tracking algorithm) in words.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"(New version uploaded, including improved clarity, CIFAR-10, LSTM, new bifurcation analysis)\\n\\nWe thank the reviewer for the comments, and apologize for the lack of clarity in the previous version.\\n\\nBriefly, the distinction between memorization and processing was part of the motivation, and was used to construct the curricula. It was not, however, the main result.\\nFollowing the comments from all reviewers, we have clarified our main messages.\", \"our_main_results_are\": \"1.1) It is known and we also show: Different curricula can lead to the same performance. \\n1.2) Despite that, when extrapolating the task to new settings, differences between networks trained with these curricula arise.\\n2) The source of these differences can be traced to dynamical objects formed through training - in our case slow points.\\n3) Analyzing these slow points in the nominal task can predict behavior on the extrapolated task\\n4) Tracing the formation of these slow points through learning provides a link between training, slow point formation, stability of memories and performance\\n\\nAs for the toy setting - we opted for a setting that would allow us to parametrically extrapolate the task. Furthermore, we now expanded our results to another architecture (LSTM) and another dataset (CIFAR). \\n\\nWe hope these changes amount to both clarifying exactly what question we are addressing, and broadening the results.\"}", "{\"title\": \"Answer to Area Chair\", \"comment\": \"We thank the reviewers for their valuable comments, and are currently working to address all of them. In the meantime, we can report that using LSTM instead of GRU leads to the same qualitative results.\"}", "{\"title\": \"understanding memorization vs processing across two types of curricula in RNNs\", \"review\": \"This manuscript attempts to use a delayed classification task to understand the dynamics of RNNs. The hope is to use this paradigm to distinguish memorization from processing in RNNs, and to further probe when specific curricula (VoCu, DeCu) outperform each other, and what can be inferred from that.\", \"quality\": [\"The experimental design is sensible. However, it is rather too much a toy example, and too narrow, hence it is unclear how much these results can be generalized across RNNs\", \"Highly problematic is that the key concepts in the paper -- memorization and processing -- are not well defined. This means that the results inevitably are just interpretations rather than any sort of compelling empiricism. After a careful read of the paper, I found it difficult to take away any particular learnings, other than \\\"training RNNs is hard.\\\"\"], \"clarity\": [\"The paper is fairly straightforward, which is positive.\", \"The lack of clarity around particular definitions means that clarity is limited to the empirical results. 
If the results are incredibly compelling, that would be acceptable, but absent that (as is the case here), the paper comes across to me as rather unclear in its purpose or its takeaway message.\"], \"originality\": [\"The Barak 2013 paper seems to be the key foundation for this work. This work is sufficiently original beyond that paper.\"], \"significance\": \"- The combination of lack of clarity and limited results on a toy setting implies that the significance is rather too low.\\n\\nOverall, this is a genuine effort to explore the dynamics of RNNs. I suggest improvements can be made by either (1) working hard to clarify in the text *exactly* what question is being asked and answered, or (2) broadening the results to make a much more rigorously supported point, or (3) ideally both.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Learned memory structure differs due to training paradigm\", \"review\": \"This paper titled <don't judge a book by its cover - on the dynamics of recurrent neural networks> studies how different curriculum learning strategies result in different hidden state dynamics and impact extrapolation capabilities. By training a 200-unit GRU-RNN to report the class label of an MNIST frame hidden among noisy frames, the authors found that different training paradigms resulted in different stability of memory structures, quantified as stable fixed points. Their main finding is that training by slowly increasing the time delay between stimulus and recall creates a more stable fixed-point-based memory for the classes.\\n\\nAlthough the paper was clearly written in a rush, I enjoyed reading it for the most part. These are very interesting empirical findings, and I can't wait to see how well they generalize.\\n\\n# I find the title not very informative. Connection from 'Book' to 'Curriculum' is weak.\\n\\n# The task does not have inherent structure that requires stable fixed points to solve. In fact, since it only requires a maximum of 19 time frames, it could come up with weird strategies. Since the GRU-RNN is highly flexible, there would be many solutions. The particular strategy that was learned depends on the initial network and training strategy.\\n\\n# How repeatable were these findings? I do not see any error bars in Fig 2 nor table 1.\\n\\n# How sensitive is this to the initial conditions? If you use the VoCu trained network as initial condition for a DeCu training, does it tighten the sloppy memory structure and make it more stable?\\n\\n# I liked the Fig 2b manipulation to inject noise into the hidden states.\\n\\n# English can be improved in many places.\\n\\n# Algorithm 1 is not really a pseudo-code. I think it might be better to just describe it in words. This format is unnecessarily confusing and hard to understand.\\n\\n# Does the backtracking fixed/slow point algorithm assume that the location of the fixed point does not change through training? Wouldn't it make more sense to investigate the back-projection of desired output at each training step?\\n\\n# PTMT, PMTP, and TaCu are not described well in the main text.\\n\\n# The phrase 'basin of attraction' is loosely used in a couple of places. If there isn't an attractor, its basin doesn't make sense.\\n\\n# Fig 4 is not very informative. 
Also is this just from one network each?\\n\\n# Fig 5 is too small!\\n\\n# page 2: input a null label -> output a null label\\n\\n# it would be interesting to see how general those findings are on other tasks, e.g., n-back task with MNIST.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An intriguing but preliminary investigation into RNN dynamics and generalisation\", \"review\": \"Post-rebuttal update:\\nThe authors have clarified their main messages, and the paper is now less vague about what is being investigated and the conclusions of the experiments. The same experimental setup has been extended to use CIFAR-10 as an additional, more realistic dataset, the use of potentially more powerful LSTMs as well as GRUs, and several runs to have more statistically significant results - which addresses my main concerns with this paper originally (I would have liked to see a different experimental setup as well to see how generalisable these findings are, but the current level is satisfying). Indeed, these different settings have turned up a bit of an anomaly with the GRU on CIFAR-10, which the authors claim that they will leave for future work, but I would very much like to see addressed in the final version of this paper. In addition some of the later analysis has only been applied under one setting, and it would make sense to replicate this for the other settings (extra results would have to fit into the supplementary material).\\n\\nI did spot one typo on page 4 - \\\"exterme\\\", but overall the paper is also better written, which helps a lot. I commend the authors on their work revising this paper and will be upgrading my rating to accept.\\n\\n---\\n\\nThe authors investigate the hidden state dynamics of RNNs trained on a single task that mixes (but clearly separates) pattern recognition and memorisation. The authors then introduce two curricula specific to the task, and study how the trained RNNs behave under different deviations from the training protocol (generalisation). They show that under the curriculum that exhibited the best generalisation, there exist more robust (persisting for long time periods) fixed/slow points in the hidden state dynamics. They then extend the optimisation procedure developed by Sussillo & Barak for continuous-time RNNs in order to find these points. Finally, they use this method to track the speed of these points during the course of training, and link spikes in speed to one of the curricula which introduces new classes over time.\\n\\nUnderstanding RNNs - and in particular how they might \\\"generalise\\\" - is an important topic of research. As done previously, studying RNNs as dynamical systems is a principled way to do so. In this line of work some natural objects to look into are fixed points and even slow points (Sussillo & Barak) - how long they can persist, and how large the basins of attraction are. While I believe the authors did a reasonable job following this through, I have some concerns about the experimental setup. Firstly, only one task is used - based on object classification with images - so it is unclear how generalisable these findings are, given that the authors' setup could be extended to cover at least another task, or at least another dataset. 
MNIST is a sanity check, and many ideas may fail to hold when extended to slightly more challenging datasets like CIFAR-10.\\n\\nSecondly, as far as I can tell, the results are analysed on one network per setting, so it is hard to tell how significant the differences are. While some analyses may only make sense for single networks, e.g. Figure 3, a proper quantification of some of the results over several training runs would be appropriate.\\n\\nFinally, it is worth investigating LSTMs on this task. This is not merely because they are more commonly used than GRUs, but they are strictly more powerful - see \\\"On the Practical Computational Power of Finite Precision RNNs for Language Recognition\\\", published at ACL 2018. Given the results in this paper and actually the paper that first introduces the forget gate for LSTMs, it seems that performing these experiments solely with GRUs might lead to wrong conclusions about RNNs in general.\\n\\nThere are also more minor spelling and grammatical errors throughout the text that should be addressed. For example, there is a typo on the task definition on page 2 - \\\"the network should *output* a null label.\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJx_b3RqY7
AIM: Adversarial Inference by Matching Priors and Conditionals
[ "Hanbo Li", "Yaqing Wang", "Changyou Chen", "Jing Gao" ]
Effective inference for a generative adversarial model remains an important and challenging problem. We propose a novel approach, Adversarial Inference by Matching priors and conditionals (AIM), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model. We derive an equivalent form of the prior and conditional matching objective that can be optimized efficiently without any parametric assumption on the data. We validate the effectiveness of AIM on the MNIST, CIFAR-10, and CelebA datasets by conducting quantitative and qualitative evaluations. Results demonstrate that AIM significantly improves both reconstruction and generation as compared to other adversarial inference models.
[ "Generative adversarial network", "inference", "generative model" ]
https://openreview.net/pdf?id=rJx_b3RqY7
https://openreview.net/forum?id=rJx_b3RqY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByevvbGUxN", "HJxe7UjbxV", "HJlfIT1ZlN", "H1gK50mgAX", "S1lGhOQlC7", "HygMkrMgAm", "HkxoDveR2m", "r1l9TGz637", "BygpS0C527" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545113951160, 1544824344076, 1544777034275, 1542631057372, 1542629545719, 1542624473605, 1541437283153, 1541378753613, 1541234245162 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1184/Authors" ], [ "ICLR.cc/2019/Conference/Paper1184/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1184/Authors" ], [ "ICLR.cc/2019/Conference/Paper1184/Authors" ], [ "ICLR.cc/2019/Conference/Paper1184/Authors" ], [ "ICLR.cc/2019/Conference/Paper1184/Authors" ], [ "ICLR.cc/2019/Conference/Paper1184/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1184/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1184/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Discussion with Reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nWe have added another high-dimensional experiment in which we compared the KL-divergence achieved by different models. The result has been posted in another reply.\\n\\nIn the previous update, we have tried to address some of your concerns. Furthermore, we have added another section 4.3 to explain the connection between our method and VAE.\\n\\nDo you have any further suggestion? We would be glad to have more discussions with you!\\n\\nThank you!\"}", "{\"metareview\": \"The paper proposes a method that aims to combine the strenghts of VAEs and GANs.\\n\\nThe paper establishes an interesting bridge between GANs and VAEs. The experimental results are encouraging, even though only relatively small datasets were used. It is encouraging that the method results in better reconstructions then ALI, a related method.\\n\\nSome reviewers think that the paper contains limited novelty compared to the wealth of recent work on this topic (e.g. ALI/BiGAN). The paper's contribution is seen as incremental; e.g. the training is very similar to InfoGAN. Also, the claims of better sample quality over ALI seem insufficiently supported by the data.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Second-round response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nWe really appreciate that you considered our updates of the paper and increased the score!\\n\\nIn the new response below, we have designed a high-dimensional experient and compared our method AIM with ALI, VEEGAN, and VAE.\\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"experiment_settings\": \"The latent code z is of dimension 16, and the data X is of dimension 256. The relation between z and X is X = Az + N(0, 0.01^2 * I), where A is a Gaussian matrix with i.i.d entries from N(0, 0.05^2). Note that A was generated at first and then fixed thereafter. We then randomly sampled 200,000 z\\u2019s from the standard Normal distribution (16-dim), and then mapped them to the data space using A. Then we chose 100,000 of the samples for training, 50,000 for validation, and 50,000 for testing. The results are summarized below. 
The KL-divergence is calculated using the ITE package [1], following the Adversarial Variational Bayes [2] paper.\\n\\n\\n                          first epoch                      best KL                best epoch/total epochs\\n                      z              X               z              X \\n---------------------------------------------------------------------------------------------------------------------------\\nAIM              0.04         76.54          0.31          2.02                  31 / 100\\n---------------------------------------------------------------------------------------------------------------------------\\nALI                0.07         62.83          0.37          5.66                  30 / 100\\n---------------------------------------------------------------------------------------------------------------------------\\nVEEGAN      0.07         47.95          0.33         11.41                 36 / 100\\n---------------------------------------------------------------------------------------------------------------------------\\nVAE              2.16        619.30         0.02          9.15                  92 / 100\\n---------------------------------------------------------------------------------------------------------------------------\\n\\nIn the \\\"first epoch\\\" column, we report the KL-divergence of the first epoch. We choose the \\\"best epoch\\\" to be the epoch at which KL(z, z_fake) + KL(x, x_fake) attains its minimum on the validation set, and then report the KL-divergence of the \\\"best epoch\\\" on the test set.\\n\\nFrom the table, we observe that VAE achieves a much smaller KL on z. One important reason could be its explicit minimization of KL(z, z_fake). AIM and the other two adversarial methods have similar z-space KL-divergence.\\n\\nIn the x-space, AIM has the best result, followed by ALI. VEEGAN has a slight improvement on z-space KL, but sacrifices the performance on x. Since in this case X only has one mode, our hypothesis is that VEEGAN makes a trade-off between alleviating mode collapse and maintaining the generating quality within each mode. However, this needs more experiments to confirm. VAE does not perform well in x-space, and its convergence seems much slower than that of the other methods. The KL on x may continue decreasing if we run for hundreds more epochs. But within 100 epochs, it does not give competitive results even though the experiment settings are very close to the VAE assumptions. However, our X does have covariance between different dimensions, while VAE assumes independent features to use the L2 loss. From our experiment, this violation indeed has a large impact on the performance.\\n\\nOverall, the experiment results are on par with our reports in the paper. AIM not only effectively infers the code z, but also uses the inference mechanism to further improve generating quality.\", \"references\": \"[1] Szabo, Zoltan. \\\"Information theoretical estimators (ite) toolbox.\\\" 2013.\\n[2] Mescheder, Lars, Sebastian Nowozin, and Andreas Geiger. \\\"Adversarial Variational Bayes: Unifying variational autoencoders and generative adversarial networks.\\\" ICML, 2017.\"}", "{\"title\": \"Author Response to AnonReviewer3\", \"comment\": \"We thank Reviewer 3 for the encouraging feedback and the precise summary of our work.\\n\\nFor a fair comparison, we only conduct experiments on the same datasets used in the related papers. But the architecture of our model (especially the generator and discriminator) can be easily replaced by more advanced state-of-the-art GANs for larger and more complicated datasets.\\n\\nFYI, we have added another Section 4.3 to explain the interesting relation between our method and VAE. 
And we have also added more experiments to Section 5.3.\"}", "{\"title\": \"Author Response to AnonReviewer2\", \"comment\": \"We thank Reviewer 2 for the constructive feedback. Here is our point-to-point response to the comments and questions raised in this review:\\n\\n1. Section 4:\\n- q(z) seems to be undefined. Is it the aggregated posterior?\\n\\nWe are sorry for the confusion. Yes, q(z) is the aggregated posterior. In this paper, p() stands for the distribution on the generator, and q() stands for the distribution on the encoder.\\n\\n- How is equation (1) related to the ELBO that is used for training VAEs?\\n\\nTo better explain the relation between (1) and VAE, we have added a new Section 4.3. \\n\\nTo summarize, equation (1) is our method\\u2019s objective. It means that AIM performs marginal distribution matching in the latent space and conditional distribution matching in the data space. But this objective cannot be optimized directly, so we transform the problem using (3). It turns out VAE can be derived in a similar manner. Specifically, the objective of VAE can be understood as equation (5), which is like the \\u201creverse version\\u201d of equation (3). By reverse, we mean that VAE can be explained as performing marginal distribution matching in the data space and conditional distribution matching in the latent space. Note that the RHS of (5) is the well-known VAE form, i.e. a regularization on z (I_vae) plus a reconstruction term on x (II_vae). But we actually get it from a perspective different from the ELBO. The ELBO is a lower bound of the log-likelihood of the data, and we maximize the ELBO in order to maximize the log-likelihood. However, in equation (5), we do not have any inequality and are not directly trying to increase the likelihood. Instead, the LHS of (5) is the summation of the KL-divergence between the conditionals on z and between the marginals on x. This is the quantity that VAE tries to minimize, from our perspective motivated by (3).\\n\\n2. Some relevant references are missing: I\\u2019d love to see a discussion of how this loss relates to other VAE-GAN hybrids.\\n\\nThank you for bringing these works to our attention. We have cited and discussed them in our updated draft. We also added another adversarial inference paper (Adversarial Variational Bayes). They are discussed in the second and fourth paragraphs of the Related Work section. An empirical comparison with VEEGAN has also been added to Section 5.3.\\n\\n3. Section 5.1\\n\\nWe use MSE as the measure of how well our model reconstructs the samples. While there may not exist an absolutely perfect measure, we think the relative improvement on one measure still provides lots of information. For example, the best baseline model in Section 5.1 has MSE 0.080 on MNIST, while our model has only 0.026. On CIFAR-10, the best baseline MSE is 0.416, while ours is only 0.019. Moreover, all these improvements on MSE do not come with any compromise on generation. In fact, our model even improves the generating performance over a GAN with the same architecture.\\n\\nBut per the reviewer\\u2019s request, we add another measure called \\u201cpercentage of high-quality samples\\u201d to our mode-collapse experiment, motivated by the experiments in VEEGAN. The results are summarized in Table 2. We observe that the best baseline model covers about 24.6 modes with 40% of its generated samples being of high quality, while our model can cover all 25 modes with more than 80% high-quality samples. 
This, together with the inception score, provides strong evidence that AIM can generate higher-quality samples.\\n\\n4. Section 5.4\\n\\nWe are not sure which error bars the reviewer was referring to. But for better illustration, we have summarized the results in a table (Table 2). From that, we can see that our model covers all of the 25 modes every time with high-quality samples, and can indeed reliably reduce mode collapse.\\n\\n5. Minor issue\\n\\nThank you for pointing out this typo =) We have corrected it!\"}", "{\"title\": \"Author Response to AnonReviewer1\", \"comment\": \"We thank Reviewer 1 for the deep and insightful review. Here is our point-to-point response to the comments and questions raised in the review:\\n\\n1. \\u201cThe space of adversarially trained latent variable models has grown quite crowded in recent years.\\u201d\\n\\nAlthough there has been large progress on the topic of adversarial inference in recent years, some big issues are still not well addressed and limit the effectiveness of the inference mechanism in adversarial frameworks.\\n\\nFirstly, to the best of our knowledge, all of the works that attempt to incorporate an inference mechanism into GANs suffer from deteriorated generation performance. This is supported by the paper [1], in which the authors conducted extensive experiments to compare many state-of-the-art models with DCGAN. The results show that GAN variants with inference networks perform worse than the standard DCGAN on image generation.\\n\\nSecondly, the inference performance is also very limited, and as the data distribution becomes more complicated, this issue will be more severe. For example, ALICE\\u2019s reconstruction performance on CIFAR-10 is much worse than that on MNIST.\\n\\nTo the best of our knowledge, we are the first to successfully handle these two issues simultaneously. For generation performance, our model AIM does not deteriorate the generation performance but actually further improves it compared with a GAN with the same architecture. For inference, AIM consistently achieves better results even on complicated distributions. \\n\\n2. \\u201cI would like to understand better why it is that latent variable (z) reconstruction gives rise to better x-space reconstruction.\\u201d\\n\\nWe have added a new Section 4.3 to demonstrate the connection between our model and VAE. Specifically, the objective of VAE can be understood as equation (5), which is like the \\u201creverse version\\u201d of equation (3). By \\\"reverse\\\", we mean that VAE can be explained as performing marginal distribution matching in the data space and conditional distribution matching in the latent space, while our model performs marginal distribution matching in the latent space and conditional distribution matching in the data space. \\n\\nNote that latent z reconstruction alone does not guarantee a better data space reconstruction, just as a stand-alone x reconstruction in VAE will not work without the help of regularization on z. Our method has two simultaneous conditions on the generator, encoder, and discriminator: the generator has to generate samples that can fool the discriminator, and the encoder has to bring these generated samples back to their latent codes. So the generator needs to not only generate samples that look real, but also map the latent codes to the \\u201ccorrect\\u201d locations (e.g. modes). 
Otherwise, the encoder will have a hard time mapping the samples back (more precisely in our case, it will have a low likelihood).\\n\\nAnother interesting property of (5) is that it actually provides a new perspective on VAE\\u2019s objective, different from the maximum likelihood point of view. Note that there is also no inequality in (5), unlike the ELBO approach. We can get VAE\\u2019s objective by decomposing the summation of the KL-divergence between the posteriors on z and between the marginals on x.\\n\\n3. I did not find the claims of better sample quality of AIM over ALI to be well supported by the data. In this context, it is not entirely clear what the significant difference in inception scores represents, though on this, the results are consistent with those previously published.\\n\\nThe 2D Gaussian mixture result will probably provide some insight here. From Table 2, we can see that ALI\\u2019s generated samples only cover on average 16 (out of 25) modes while our method\\u2019s can cover 25 every time. The ALI result we show in Figure 3 is the best-covering result they report, and we include it only to give more insight into the difference between our method and the joint distribution matching scheme.\\n\\nFrom the quantitative results in Table 1, we also observe that AIM gives a higher inception score than ALI, and this happens when AIM also has a much lower reconstruction error. The main takeaway message is that, compared to the joint distribution matching of ALI, the separate marginal and conditional matching of AIM leads to better reconstruction and generation. Actually, from Figure 2, the reconstructions of ALI are not always faithful even on the MNIST dataset. We think this is because it is still very hard for the adversarial game to discover the dependency relation between x and z.\", \"reference\": \"[1] Distribution matching in variational inference.\"}", "{\"title\": \"A Review on Adversarial Inference by Matching Priors and Conditionals\", \"review\": \"The goal of this work is to develop a generative model that enjoys the strengths of both GAN and VAE without their inherent weaknesses. The paper proposes a learning framework in which a generating process p is modeled by a neural network called the generator, and an inference process q by another neural network, the encoder. The ultimate goal is to match the joint distributions p(x, z) and q(x, z), and this is done by attempting to match the priors p(z) and q(z) and matching the conditionals p(x|z) and q(x|z). As both q(z) and q(x|z) are impossible to sample from, the authors mathematically expand this objective criterion and rewrite it to depend only on p(x|z), q(x) and q(z|x), which can be easily sampled from. In the main part of the work, the authors use f-divergence theory (Nowozin et al., 2016) to present the optimization problem as a minmax optimization problem that is learned via an adversarial game, using training and inference algorithms proposed by the authors. In experiments, the authors consider both reconstruction and generation tasks using the MNIST, CIFAR-10 and CelebA datasets. Results show that the proposed method yields better MSE reconstruction error as well as higher inception scores for the generated examples, compared to a standard GAN and a few other methods.\\n\\nThis work establishes an important bridge between the VAE and GAN frameworks, and has a good combination of theoretical and experimental aspects. Experimental results are encouraging, even though only relatively simple and small datasets were used. 
Overall, I would recommend accepting the paper for presentation in the conference.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Ok paper, some nice comparisons, but too similar to existing models\", \"review\": \"This paper presents a variant of the adversarial generative modeling\\nframework, allowing it to incorporate an inference mechanism. As such it is\\nvery much in the same spirit as existing methods such as ALI/BiGAN. The\\nauthors go through an information theoretic motivation but end up with the\\nstandard GAN objective function plus a latent space (z) reconstruction\\nterm. The z-space reconstruction is accomplished by first sampling z from\\nits standard normal prior and pushing that sample through the generator to\\nget a sample in the data space (x), then x is propagated through an encoder\\nto get a new latent-space sample z'. Reconstruction is done to reduce the\\nerror between z' and z.\", \"novelty\": \"The space of adversarially trained latent variable models has\\ngrown quite crowded in recent years. In light of the existing literature,\\nthis paper's contribution can be seen as incremental, with relatively low novelty. \\n\\nIn the end, the training paradigm is basically the same as InfoGAN, with\\nthe difference being that, in the proposed model, all the latent\\nvariables are inferred (in InfoGAN, only a subset of the latent\\nvariables are inferred). This difference was a design decision on the part of the InfoGAN\\nauthors and, in my opinion, does not represent a significantly novel\\ncontribution on the part of this paper.\", \"experiments\": \"The experiments show that the proposed method is\\nbetter able to reconstruct examples than does ALI -- a result that is not\\nnecessarily surprising, but is interesting and worth further\\ninvestigation. I would like to understand better why it is that latent\\nvariable (z) reconstruction gives rise to better x-space reconstruction.\\n\\nI did not find the claims of better sample quality of AIM over ALI to be\\nwell supported by the data. In this context, it is not entirely clear what\\nthe significant difference in inception scores represents, though on this, the\\nresults are consistent with those previously published.\\n\\nI really liked the experiment shown in Figure 4 (esp. 4b), it makes the\\ndifferences between AIM and ALI very clear. It shows that relative to ALI,\\nAIM sacrifices coherence between the \\\"marginal\\\" posterior (the distribution\\nof latent variables encoded from data samples) and the latent space\\nprior, in favor of superior reconstructions. AIM's choice of trade-off is\\none that, in many contexts, one would be happy to take, as it ensures that\\ninformation about x is not lost -- as discussed elsewhere in the paper.\\nI view this aspect of the paper as by far the most interesting. \\n\\nSummary:\\nOverall, the proposed AIM model is interesting and shows promise, but I'm\\nnot sure how much impact it will have in light of the existing literature\\nin this area. 
Perhaps more ambitious applications would really show off the\\npower of the model and make it stand out from the existing crowd.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea, but needs more work\", \"review\": \"UPDATE (after author response):\\n\\nThank you for updating the paper; the revised version looks better and the authors addressed some of my concerns. I increased my score.\\n\\nThere's one point that the authors didn't clearly address: \\\"It might be worth evaluating the usefulness of the method on higher-dimensional examples where the analytic forms of q(x|z) and q(z) are known, e.g. plot KL between true and estimated distributions as a function of the number of dimensions.\\\" Please consider adding such an experiment.\\n\\nThe current experiments show that the method works better on low-dimensional datasets, but the method does not seem to be clearly better on more challenging higher dimensional datasets. I agree with Reviewer1 that \\\"Perhaps more ambitious applications would really show off the power of the model and make it stand out from the existing crowd.\\\" Showing that the method outperforms other methods would definitely strengthen the paper.\\n\\nSection 5.4: I meant error bars in the numbers in the text, e.g. 13 +/- 5.\\n\\n---------\\n\\nThe paper proposes a new loss for training deep latent variable models. The novelty seems a bit limited, and the proposed method does not consistently seem to outperform existing methods in the experiments. I'd encourage the authors to add more experiments (see below for suggestions) and resubmit to a different venue.\", \"section_4\": [\"q(z) seems to be undefined. Is it the aggregated posterior?\", \"How is equation (1) related to the ELBO that is used for training VAEs?\"], \"some_relevant_references_are_missing\": \"I'd love to see a discussion of how this loss relates to other VAE-GAN hybrids.\", \"veegan\": \"Reducing mode collapse in GANs using implicit variational learning, https://arxiv.org/pdf/1802.06847.pdf\\n\\n\\nSection 5.1:\\n- The quantitative comparison measures MSE in pixel space and inception score, neither of which is a particularly good measure of how well the conditionals match. I'd encourage the authors to consider other metrics such as log-likelihood.\\n\\n- It might be worth evaluating the usefulness of the method on higher-dimensional examples where the analytic forms of q(x|z) and q(z) are known, e.g. plot KL between true and estimated distributions as a function of the number of dimensions.\\n\\nSection 5.4: \\n- The error bars seem quite high. Is there a reason why the method cannot reliably reduce mode collapse?\", \"minor_issues\": [\"CIFAT-10 -> CIFAR-10\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SygD-hCcF7
Dimensionality Reduction for Representing the Knowledge of Probabilistic Models
[ "Marc T Law", "Jake Snell", "Amir-massoud Farahmand", "Raquel Urtasun", "Richard S Zemel" ]
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting. We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification. When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE. We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning. We also provide a finite sample error upper bound guarantee for the method.
[ "metric learning", "distance learning", "dimensionality reduction", "bound guarantees" ]
https://openreview.net/pdf?id=SygD-hCcF7
https://openreview.net/forum?id=SygD-hCcF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxWi7NeeN", "SklYQlZsR7", "HkgXAnlcRQ", "Syxw33eqRQ", "B1gOV3x90Q", "ryxpkAks3m", "HylnV5jchQ", "H1gTSCXcnX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544729496883, 1543340064872, 1543273675432, 1543273647098, 1543273520374, 1541238245096, 1541220916356, 1541189188549 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1181/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1181/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1181/Authors" ], [ "ICLR.cc/2019/Conference/Paper1181/Authors" ], [ "ICLR.cc/2019/Conference/Paper1181/Authors" ], [ "ICLR.cc/2019/Conference/Paper1181/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1181/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1181/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces an approach for reducing the dimensionality of training data examples in a way that preserves information about soft target probabilistic representations provided by a teacher model, with applications such as zero-shot learning and distillation. The authors provide an extensive theoretical and empirical analysis, showing performance improvements in zero shot learning and finite sample error upper bounds. The reviewers generally agree this is a good paper that should be published.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper\"}", "{\"title\": \"good point distinguishing input distributions and target distributions\", \"comment\": \"I admit that I was confusing input distributions with target distributions. Thanks for the clarification. Figure 4 confirms that data points need not be Gaussian shaped. I also mistakenly thought t-SNE plots on the right were DRPR plots before. Unimodality assumption of class distributions is still relevant, but I am not too concerned about it for two reasons: 1) t-SNE is not good at modeling multimodal distributions either both conceptually and empirically (ex: Figure 4), 2) it would be a straightforward to extend DRPR to allow multimodal distributions by allowing multiple centers to map to a single distributions. I will adjust my rating accordingly.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank you for your positive review and helpful suggestions. We address the comments in the following and have updated the paper accordingly:\\n\\n- \\u201cComments: - When trying to understand the proposed method, I found it useful to expand out the full objective function and derive the gradients w.r.t. to f_i. If my maths were correct, the gradient of the objective w.r.t. f_i can be written as the difference between the expected gradient of the divergence w.r.t Y and the expected gradient of the divergence w.r.t. the posterior cluster assignment probabilities. Though not surprising in and of itself, the authors might consider including this equation as it really helped me understand what the learning algorithm was doing.\\u201d\\n\\nWe thank the reviewer for this idea. As suggested, we have added the gradient of our optimization problem wrt the example f_i in the new Equation (5). To simplify its formulation, we consider that the matrix of centroids M does not depend on F (which is the case in the zero-shot learning task) and that priors are all equal. 
The gradient does depend on both Y and the posterior cluster assignment probabilities.\\nMore precisely, the magnitude of the gradient depends on both of these scores (which means that a cluster with a high score Y_ic will be given more importance). \\nThe gradient tries to make f_i closer to each centroid while separating it from all the centroids depending on their predicted scores as well.\\n\\nWe have added a new paragraph called \\u201cGradient interpretation\\u201d which discusses the gradient.\\n\\n- \\u201cThe authors might consider adding a more complete description of the zero-shot learning task. My understanding of the task was that there are text descriptions of each category and at test time new text descriptions are added that were not in the training set. The goal is to map an unseen image to a class based on the text descriptions of the classes. A couple of sentences explaining this in the first paragraph of section 4.2 would help those who are not familiar with this zero-shot learning setup.\\u201d\\n\\nThank you for this clear and succinct description of our zero-shot learning scenario. We have added this to the first paragraph of Section 4.2 as suggested.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank you for sharing your concerns about the paper. We will clarify your concerns regarding the assumptions made by DRPR, and the classification and distillation problems for zero-shot learning.\\n\\n- \\u201cDRPR is making a strong assumption that representations of data points that belong to the same class form a unimodal distribution. I don't believe this is a realistic assumption.\\u201d \\n\\nWhile DRPR makes some assumptions on the distribution in the learned low-dimensional space, we would like to emphasize that DRPR makes no assumption on the input probability distribution in general.\\nIt is true that the toy dataset illustrates a special case where both the original and low-dimensional representations follow a unimodal Gaussian distribution (for each cluster). \\nThe goal of the toy experiment was to illustrate some weaknesses of t-SNE on a problem that is easy to visualize. We chose this toy dataset because it was easy to visualize the original 3D points themselves (input of t-SNE in Fig. 2 (b)), and also the soft assignment scores of the different points wrt the different clusters in the original space (input of t-SNE in Fig. 2 (c)). These soft probability scores are the target of our model.\", \"our_algorithm_is_similar_to_t_sne_in_the_sense_that_it_assumes_some_distribution_of_the_data_in_the_low_dimensional_space\": \"t-SNE considers a Student-t distribution to compute similarity between pairs of points. Any kind of probability distribution can be used for the input space: t-SNE considers by default a conditional probability based on a Gaussian similarity between pairs of points, but any other kind of distribution can be given as input.\\nDRPR is given target probability scores that can be computed from any distribution. In the zero-shot learning task, those scores indeed come from Gaussian mixtures. 
However, the targets in the visualization experiment come from neural networks trained with cross entropy and do not follow a Gaussian distribution.\\n\\n- \\u201cMost deep-learning based classification models in the literature use softmax; we need stronger inductive bias to improve the performance of the model.\\u201d \\n\\nWe agree that, in the usual classification task where the training and test categories are the same, learning a fully connected layer + softmax regression leads to state-of-the-art performance. However, the output dimensionality of the learned model is high, and it is then difficult to interpret what has been learned by the model. Applying visualization techniques such as t-SNE has been proposed to interpret such complex models. Nonetheless, to the best of our knowledge, no visualization techniques exploit the fact that softmax classifiers have soft probabilistic interpretations.\\n\\n- \\u201cWhen we distill one model into another, the performance generally improves even when the same exact model is both the teacher and the student (Furlanello et al., ICML 2018). Therefore, it would be interesting to compare against distillation with baseline models themselves.\\u201d\\n\\nWe thank the reviewer for this suggestion. We have compared our method to two distillation strategies proposed in (Furlanello et al., ICML 2018) in the zero-shot learning task. We used the Prototypical Network as the teacher since it obtains the best performance.\\nIn the first strategy, for each example, we preserve only the predicted category of the teacher model and convert the target into a one-hot vector. This actually corresponds to applying a second \\u201clayer\\u201d of prototypical network, which is a special case of our method when the targets are hard assignments.\\nIn the second strategy, we preserve the predicted category and permute the scores of the categories with lower scores.\\nIn both cases, the accuracy scores decreased relative to using only the Prototypical Network (without our method): to about 55% on Birds and 60% on Flowers.\\n\\nThis difference may be explained by the fact that (Furlanello et al., ICML 2018) consider a task where the categories are the same during training and test. There, applying wrong predictions based on a pre-trained teacher does not seem to affect accuracy.\\nIn our case, the sets of training, validation and test categories are all different. Preserving relevant relative scores seems more crucial.\\n\\n- \\u201cVisualization analysis focuses on how class-relationships are preserved rather than faithful representation of each data point, which is a wrong target\\u201d\\n\\nThe \\u201cwrong\\u201d target depends on the task.\\nEvery dimensionality reduction framework has some criterion that it seeks to optimize. PCA finds a linear transformation that maximizes the variance. t-SNE tries to preserve local neighborhoods in both the high- and low-dimensional spaces, which results in a poor preservation of inter-cluster distances. There is not a clear definition of \\u201cfaithful representation\\u201d since it always depends on what is meant to be represented. \\nThe goal of DRPR is to find a low-dimensional space that best preserves the soft predicted scores of a classifier.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We would like to thank you for the feedback on the paper. 
We try to clarify your concerns regarding the convergence guarantee and why we need to compute the prior at each iteration.\\n\\n- \\u201cThe algorithm that is presented is quite natural, though no guarantees that it will converge to something relevant were given.\\u201d\\n\\u201cDoes this algorithm really minimize the discrepancy?\\u201d\\n\\nIt is not clear what \\u201csomething relevant\\u201d refers to, so we try to explain the result of Theorem 1 in the appendix. The theorem provides a finite-sample upper bound on the quality of the minimizer of the empirical loss, as defined in Eq. (6). \\nThe quality of the solution is measured according to the discrepancy (\\\\Delta), which is the expected KL-divergence between the teacher and the student. The teacher is defined by \\\\phi, and provides a distribution over k clusters.\\nBut since we only have access to n data points, the student minimizes the empirical discrepancy \\\\Delta_n. The minimizer is \\\\hat{g}, and it induces a distribution \\\\psi_{\\\\hat{g}}.\\nTheorem 1 shows that the expected KL-divergence between the teacher \\\\phi and the student \\\\psi_{\\\\hat{g}} is upper bounded by two terms:\\n1. The best possible student within the function class G, from which \\\\hat{g} is selected.\\n2. Some estimation error terms that depend on the number of samples n and some properties of the function space G.\\n\\nWe emphasize that our theoretical result does not concern the convergence of the optimization procedure, as noted in Footnote 5.\", \"a_related_question_of_the_reviewer_is_whether_the_algorithm_really_minimizes_the_discrepancy\": \"The answer is that the algorithm minimizes the empirical version of the discrepancy. But the theory shows that doing so leads to a guarantee on the quality of the resulting estimator, according to the true discrepancy (which takes an expectation instead of an average over a finite number of data points).\\n\\n- \\u201cEspecially, the ease of substituting \\\\bar{Y}_{ic} with Y_{ic} in the algorithm is unclear (roughly speaking, the latter means that the E-step is omitted in EM).\\u201d\\n\\nWe thank the reviewer for pointing out this lack of clarity in our first submission. We assume that the reviewer refers to the statement \\u201cIt is worth noting that we never apply the EM algorithm during training\\u201d. We do apply the E-step of the EM algorithm at each gradient descent iteration, but only once, unlike the standard EM algorithm.\\n\\nMore precisely, at each iteration, we know the optimal desired values of the M-step variables (i.e. the centroid matrix M and the prior vector \\\\pi) as a function of the ground truth assignment matrix Y and the current representations of the mini-batch F. We then use these optimal values of M and \\\\pi (formulated in step 4 of Algorithm 1) to compute the E-step, which corresponds to the predicted assignment matrix \\\\Psi formulated in Eq. (3) (which you are denoting as \\\\bar{Y}_{ic}). \\nBy definition of our optimization problem in Eq. (4), we minimize the average KL divergence between the rows of Y and the rows of \\\\Psi.\\n\\nWe have updated the sentence accordingly.\\n\\nWe would also like to emphasize that exploiting the target assignment matrix Y to compute the optimal M-step variables (i.e. the centroid matrix) is commonly done in the supervised (hard) clustering literature [A,B].\\n\\n- \\u201cIf matrix Y in the algorithm is fixed, why do we need to compute \\\\pi in the loop? 
Isn't it going to be the same?\u201d\n\nWe think that you mean that if Y stays the same at every iteration, then \\pi, which depends only on Y, should also stay the same at each iteration. This statement is correct.\nHowever, in the case where we train a neural network via mini-batches (e.g. in the zero-shot learning task or the hard clustering task in Section C.3), Y corresponds to the target assignment matrix of the mini-batch.\nSince each iteration then considers a different mini-batch F, the matrices Y and \\pi then also change.\n\nNonetheless, as mentioned in our paper, the priors can also be assumed to be all equal and then ignored.\nThe matrix \\pi can also be calculated according to the target assignments of the whole training dataset and then fixed.\n\n[A] Lajugie et al., Large-Margin Metric Learning for Constrained Partitioning Problems, ICML 2014\n[B] Law et al., Deep Spectral Clustering Learning, ICML 2017\"}", "{\"title\": \"What do we really minimize?\", \"review\": \"The paper deals with a problem formulation adjacent to that of sufficient dimension reduction: given a training set of pairs (x_i,y_i), how to reduce the dimension of the first element, i.e. map x_i --> f(x_i), so that the f(x_i)'s still have all the information to recover the y_i's.\n\nIn the paper, the output y_i is a probability distribution over k labels that softly describes the inclusion of example i into k classes.\n\nThey consider a nonlinear case, i.e. the mapping f is taken from a prespecified set of mappings, parameterized by Theta (e.g. a neural network). Then by \"recovering y_i\" they mean that the EM algorithm on {f(x_i)} will result in a clustering of the data into k soft clusters similar to the given {y_i}.\n\nThe algorithm that is presented is quite natural, though no guarantees that it will converge to something relevant were given. Theoretical analysis deals with a question --- how far the empirical discrepancy could be from the true expected one. Especially, the easiness of substituting \\bar{Y}_{ic} with Y_{ic} in the algorithm is unclear (roughly speaking, the latter means that the E-step is omitted in EM). If the matrix Y in the algorithm is fixed, why do we need to compute \\pi in the loop? Isn't it going to be the same? Does this algorithm really minimize the discrepancy?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"strong inductive bias of the model may not be appropriate for visualization\", \"review\": \"Authors propose a method of embedding training data examples into low-dimensional spaces such that mixture probabilities from a mixture model on these points are close to probability predictions from the original model in terms of KL divergence. Authors suggest two use-cases of such an approach: 1) data visualization, and 2) zero-shot learning. For the visualization use-case, authors compare against other dimensionality reduction methods with qualitative analysis on a synthetic problem, as well as evaluation metrics such as Neighborhood-Preservation Ratio and Clustering Distance Preservation Ratio. 
For the zero-shot use-case, they take pre-trained models on two zero-shot tasks, and improve the accuracy by using probability outputs from the pre-trained models as targets.\n\nRegarding the benefit of using the proposed method for visualization, DRPR makes a strong assumption that representations of data points that belong to the same class form a uni-modal, Gaussian distribution (since authors don't experiment with distance functions other than L2). This inductive bias comes with a strong benefit when the assumption is true - as demonstrated in the toy dataset experiment - but when it is not true, the visualization would strongly distort the underlying structure of the model. And I don't believe this is a realistic assumption, because there has to be a reason that most deep-learning based classification models in the literature don't always use a model like (3) or Prototypical Networks instead of a typical fully-connected + softmax layer, unless the data size is small and we need a stronger inductive bias to improve the performance of the model. That is, we usually don't think unimodality is the right assumption, even with learned representations. I suspect that while DRPR might be good at visualizing relationships between class labels - especially which class can be easily confused with another - it would be worse at faithfully representing each data point, especially the ambiguity of class labels on individual ones. I would argue, however, that faithful representation of each data point is more important for scatter plots than relationships between classes, because the latter can be more effectively analyzed with other methods such as confusion matrices. As is typical in most dimensionality reduction papers, I would encourage authors to consider more types of synthetic datasets in which nonlinearity and multimodality are critical to be learned. I don't believe the quantitative evaluations in Tables 1 and 2 are very meaningful, because DRPR's objective function is much better aligned with these metrics than the others'. \n\nZero-shot experiments show a promising lift over the baseline pre-trained models. The kind of bias we should be careful about, however, is that when we distill one model into another, the performance generally improves even when the same exact model is both the teacher and the student (Furlanello et al., ICML 2018, https://arxiv.org/abs/1805.04770). Therefore, it would be interesting to compare against distillation with the baseline models themselves.\n\nPros:\n- Extensive theoretical and empirical analysis\n- Simple idea that generalizes to multiple use-cases, which implies robustness of the approach as a methodology\n\nCons:\n- The unimodal assumption is likely not realistic, which would result in misleading visualization of data\n- Visualization analysis focuses on how class-relationships are preserved rather than faithful representation of each data point, which is a wrong target\n- The synthetic experiment is conducted on a single, overly simplistic dataset; more examples are needed to understand the capabilities of the model in more detail\n- The bias of knowledge distillation is not controlled\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Excellent paper with strong motivation, interesting proposed method, and comprehensive empirical results\", \"review\": \"Overall, I thought this was an excellent paper. 
The idea is well-motivated, the presentation is clear, and the evaluations are comprehensive and provide insight into the behavior of the proposed methods (I will not comment on the theoretical analysis, as it is entirely contained in the supplemental materials). I was honestly impressed by the sheer volume of content in this paper, particularly since I found none of it to be superfluous. Frankly, this paper might be better served as two papers or a longer journal paper, but that is hardly a reason not to accept it. I strongly recommend acceptance and have only a couple of comments on presentation.\n\nComments:\n- When trying to understand the proposed method, I found it useful to expand out the full objective function and derive the gradients w.r.t. f_i. If my maths were correct, the gradient of the objective w.r.t. f_i can be written as the difference between the expected gradient of the divergence w.r.t. Y and the expected gradient of the divergence w.r.t. the posterior cluster assignment probabilities. Though not surprising in and of itself, the authors might consider including this equation as it really helped me understand what the learning algorithm was doing.\n- The authors might consider adding a more complete description of the zero-shot learning task. My understanding of the task was that there are text descriptions of each category and at test time new text descriptions are added that were not in the training set. The goal is to map an unseen image to a class based on the text descriptions of the classes. A couple of sentences explaining this in the first paragraph of section 4.2 would help those who are not familiar with this zero-shot learning setup.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
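To make the training step discussed in the DRPR responses above concrete, here is a minimal sketch of one iteration — illustrative only, not the authors' code. It assumes a squared-Euclidean distance and a Gaussian-kernel form for the predicted assignments \Psi (both assumptions for illustration); the M-step quantities (centroids M, priors \pi) are computed in closed form from the target assignments Y and the mini-batch features F, followed by the single E-step described in the "Response to AnonReviewer3". It also realizes the gradient structure AnonReviewer1 derived: with this form, \nabla_{f_i} KL(Y_i || \Psi_i) = \sum_c (Y_{ic} - \Psi_{ic}) \nabla_{f_i} d(f_i, M_c), i.e., the difference between the expected distance gradient under Y and under \Psi.

```python
import torch

def drpr_step(F, Y, eps=1e-8):
    """One training iteration: F is an (n, d) mini-batch of learned
    representations, Y an (n, k) matrix of target soft assignments."""
    # Closed-form M-step: centroids are Y-weighted means of F; priors
    # are the average assignment mass per cluster.
    M = (Y.t() @ F) / (Y.sum(dim=0).unsqueeze(1) + eps)   # (k, d)
    pi = Y.mean(dim=0)                                    # (k,)
    # Single E-step: predicted soft assignments Psi from distances and priors.
    d2 = torch.cdist(F, M) ** 2                           # (n, k)
    logits = torch.log(pi + eps) - 0.5 * d2
    log_psi = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Objective: average KL(Y_i || Psi_i) over the mini-batch.
    return (Y * (torch.log(Y + eps) - log_psi)).sum(dim=1).mean()
```

No inner EM loop is run; backpropagating through this loss is what updates the network producing F, matching the response's point that the E-step is applied only once per gradient iteration.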
SygvZ209F7
Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
[ "Will Xiao", "Honglin Chen", "Qianli Liao", "Tomaso Poggio" ]
The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures.
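As a concrete reading of the sign-symmetry rule summarized in this abstract — a minimal illustrative sketch assuming a single linear layer, not code from the paper — the forward pass uses W while the backward pass substitutes B = sign(W^T) when propagating the error to the layer's input:

```python
import torch

class SignSymmetricLinear(torch.autograd.Function):
    """Forward pass uses W; the error sent back to the input uses only
    the signs of W, so feedback shares signs but not magnitudes."""
    @staticmethod
    def forward(ctx, x, W):
        ctx.save_for_backward(x, W)
        return x @ W.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, W = ctx.saved_tensors
        grad_x = grad_out @ torch.sign(W)  # B = sign(W^T), not W^T
        grad_W = grad_out.t() @ x          # weight update as usual
        return grad_x, grad_W
```

Because each backward step keeps only the sign structure of W, stacked layers can propagate an error whose magnitude, and even direction, differs from backprop's — the point debated at length in the comments below.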
[ "biologically plausible learning algorithm", "ImageNet", "sign-symmetry", "feedback alignment" ]
https://openreview.net/pdf?id=SygvZ209F7
https://openreview.net/forum?id=SygvZ209F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "e7i1WpdBG7", "B1l91_a0J4", "rygygUR5Am", "Sye4Oht90Q", "BJxkToUNA7", "HyegvkufCm", "r1xjm1dzAX", "SJexARPG0X", "BkxHwKZA6Q", "Bkxf7H_W6X", "r1lbbE_W67", "HJg3Sz_WTm", "HJx1pgdbTQ", "S1ekcnvWp7", "HJels3U-67", "BkeAYwE03X", "SkguetB9h7", "BJg34EAt27", "S1eEJYfCsX" ], "note_type": [ "comment", "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1737361548967, 1544636386067, 1543329254944, 1543310443580, 1542904758770, 1542778712416, 1542778659008, 1542778567638, 1542490460921, 1541666074155, 1541665785236, 1541665347689, 1541664951075, 1541663879000, 1541659799809, 1541453701682, 1541196015785, 1541166132444, 1540397275713 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1180/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "ICLR.cc/2019/Conference/Paper1180/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1180/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1180/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1180/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"I have the same question similar with anonymous reviewer: \\\"Assuming weight transport?\\\"\\n\\nWeight transport is \\\"backprop seems to require rapid information transfer back along axons from each of its synaptic outputs\\\" as mentioned in [1]. Although author mentioned that \\\" The sign (excitatory or inhibitory) of a synapse is determined by the identity of neurotransmitters, which is a fixed, intrinsic property of a synapse and indeed of all synapses emanating from the same presynaptic neuron (i.e., Dale's Law; Dale, 1935). Therefore, in the brain, sign-symmetry can be easily implemented without the need to worry about sign-transport (Figure 4b). \\\", the problem of \\\"weight transport\\\" is not about \\\"if it has exact value or the sign of synapses\\\" , it foucus on the transmition direction. i.e. information is just allowed to transmit from presynapses to postsynapses. \\n\\nSo I may worry about the biological plausibility of sign-symmetry (SS) algorithm.\\n\\n[1] Lillicrap T P, Cownden D, Tweed D B, et al. Random synaptic feedback weights support error backpropagation for deep learning[J]. Nature communications, 2016, 7(1): 13276.\", \"title\": \"Weight transport concern\"}", "{\"metareview\": \"This heavily disputed paper discusses a biologically motivated alternative to back-propagation learning. In particular, methods focussing on sign-symmetry rather than weight-symmetry are investigated and, importantly, scaled to large problems. The paper demonstrates the viability of the approach. If nothing else, it instigates a wonderful platform for debate.\\n\\nThe results are convincing and the paper is well-presented. 
But the biological plausibility of the methods needed for these algorithms can be disputed. In my opinion, these are best tackled in a poster session, following the good practice at neuroscience meetings.\n\nAs an aside, the application of the approach to ResNet should be questioned. The skip-connections in ResNet may be anything but biologically relevant.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"worth discussing more\"}", "{\"title\": \"Practical and biologically-relevant differences\", \"comment\": \"There are two issues here. The first is whether the performance of XNOR-Net predicts the performance of SS. Saying \u201cgradient computation in XNOR-Net is exact in form\u201d means that because symmetrical (binary) weights are used in the forward and backward pass in XNOR-Net, credit assignment on the weights is still accurate, just like in regular backpropagation. In contrast, in SS, the error gradient is not guaranteed to have either the right magnitude or the right sign, as shown by the example in the previous reply.\n\nNow, from a purely practical standpoint, it is good that XNOR-Net accurately calculates the gradient and impressive that XNOR-Net has binary W in addition to binary B. However, neither of them helps address the second issue of biological plausibility. As the initial comment points out, biological synapses do not necessarily have binary weights either for W or for B. However, if W and B can freely vary, weight symmetry as in XNOR-Net (or backprop in general) cannot be guaranteed\u2014creating the \u201cweight transport problem\u201d\u2014and gradient calculation will no longer be accurate as in XNOR-Net or backpropagation. The performance of SS is thus unexpected because it does not limit W and B to be binary or require them to be wholly symmetrical (only symmetrical in sign), yet can still guide learning. Hence, although XNOR-Net is \u201cmore restricted,\u201d this restriction actually helps guarantee weight symmetry and, in turn, accurate error estimation.\n\nOn the point of biological plausibility, another concern with XNOR-Net is that during its training, each binary weight in W still uses an underlying real-valued buffer to allow fine-grained updates of W (arXiv:1603.05279v4, section \u201cTraining Binary-Weights-Networks\u201d). This weight duality during inference and during update seems rather biologically problematic.\n\nThank you for the comment. It is quite useful to clarify the superficial similarity between XNOR-Net and SS.\"}", "{\"comment\": \"I'm very interested in your work, which takes a biological perspective.\nBut I couldn't follow some points of what you have said: \n\"In XNOR-Net, although its feed-forward and feedback weights are binary, they are still exactly symmetrical, making gradient computation exact in form.\"\nBinary weights are symmetrical, so isn't it a good thing? \nAs mentioned by other commenters, in SS, the backprop weights are B = sign(W^{T}); although W is not binary, the B for error propagation is binary, so can we consider XNOR-Net a more restricted version where W is also binary?\nBesides, what do you mean by \"in form\"? 
If it means standardization, isn't it good for computation?\nHoping for your reply.\", \"title\": \"the difference between XNOR-net and sign-symmetry\"}", "{\"title\": \"Thank you for the comment\", \"comment\": \"As you point out, it is true that in our current implementation of sign-symmetry, there is still the issue of \"sign transport,\" where feedforward weights have to inform feedback weights of their signs. We were originally motivated to test sign-symmetry because biological synapses do not switch signs: The sign (excitatory or inhibitory) of a synapse is determined by the identity of neurotransmitters, which is a fixed, intrinsic property of a synapse and indeed of all synapses emanating from the same presynaptic neuron (i.e., Dale's Law; Dale, 1935). Therefore, in the brain, sign-symmetry can be easily implemented without the need to worry about sign-transport (Figure 4b). This is why, in the discussion (Section 4.3), we suggest testing sign-fixed and sign-consistent neural networks; but they are beyond the scope of this paper.\n\nRegarding the effectiveness of sign-symmetric feedback weights, we think our results are not predicted by work like Rprop. The crucial difference is that previous work uses sign information of the *gradient* but still computes gradients exactly using backpropagation/the chain rule. In contrast, because we use asymmetric feedforward/feedback weights, neither the magnitude *nor the sign* of the gradient is guaranteed to be preserved. Reusing the example from our other replies, consider an input connected by respective weights {1, -0.5} to two outputs receiving gradient {1, 1.5}. The gradient on the input computed by BP will be 0.25; that computed by SS will be -0.5. They do not share their sign! Therefore, our surprising finding is that even inaccurate propagation of error like this can still support learning; we are working on understanding how. (Unlike feedback alignment, we do not observe forward and backward weights becoming more aligned, but they also start more aligned by construction (Figure 3a).)\n\nRegarding comparison to other biologically-plausible algorithms, no existing proposal of a biologically-plausible algorithm solves all the implausibilities of backpropagation at once. Instead, they each address a subset of the issues. We made the discussion more explicit in Section 4.3, comparing algorithms on what problems they solve and what problems remain.\n\nOn all three points, please also see our reply to Reviewer 1 and the public comment on 11/08.\n\nThank you for raising these thoughtful points for discussion!\"}", "{\"title\": \"Thank you very much for the review\", \"comment\": [\"Thank you again for the review. We really appreciate your detailed and constructive comments.\", \"We have applied the two suggested changes to the text.\", \"Re: AlexNet with feedback alignment, we are currently testing this condition and expect to include partially completed training in the final review submission and fully completed training in the final draft. As expected, its performance is slightly lower than ResNet-18 trained with FA.\", \"We have applied your excellent suggestions to Figures 1, 2, and 3.\", \"Regarding the small change in alignment angle, our hypothesis is that sign-symmetry does not depend on alignment of weight matrices to learn. Instead, the difference in absolute weight magnitudes between BP- and SS-trained models suggests that something else is at play. 
We are still analyzing the training dynamics to understand how SS guides learning.\"]}", "{\"title\": \"Thank you very much for the review\", \"comment\": \"Regarding writing, we have lessened the weight placed on comparison to Bartunov et al. and have made citation styles consistent. Thank you very much for suggesting these changes.\n\nRegarding suggested experiments, due to time limits we are not able to extensively test the suggested conditions. We have tested smaller batch sizes ({128, 64} vs. 256) for several training epochs, and we observe very little difference in performance for these initial epochs; in our experience initial performance is a fairly good indicator of final performance (1 epoch is still 5-20k iterations). We have also tested the effect of dropout by reintroducing it to layers fc6 and fc7 in AlexNet (in the modified AlexNet we originally tested, we removed dropout because we used BatchNorm [https://arxiv.org/abs/1502.03167]); or, adding dropout before the fc layer in ResNet-18. In both cases, dropout led to slightly slower training in the first few epochs, although we do not know whether it will lead to improved converged performance. Thank you for suggesting these conditions for better characterizing the behavior of SS.\"}", "{\"title\": \"Thank you very much for the review\", \"comment\": \"Again, thank you for the thoughtful and detailed review. We agree with most of your comments, and have edited the writing to more clearly discuss our contribution as it relates to other work in the BP arena. Please see the revised manuscript for changes to the text. As a summary, we more clearly discuss the following:\n\n1) The significance and limitation of sign-symmetry\nAs the reviewer points out, the present SS algorithm removes weight magnitude transport but still requires forward and backward weights to communicate their signs. We think it was not so clear before this work that the sign of the feedback weights is more important than their magnitude. While the direction of the gradient is sufficient for training as demonstrated by signSGD and related work, what we retain is not the sign of the gradient but rather the sign of the weights. As a consequence, as the error signal is propagated down the layers, the gradient may lose not only its magnitude but also its sign as compared to the backpropagated gradient. Consider the following simple example: an input is connected respectively by weights {1, -0.5} to two outputs receiving gradient {1, 1.5}. The gradient on the input computed by BP will be 0.25; that computed by SS will be -0.5. They do not share their direction! Hence, SS is not simply a coarser gradient update, but represents coarser error propagation that is nonetheless effective.\n\nNevertheless, the requirement for sign communication is an additional assumption as compared to FA, representing a qualitative cost in \"plausibility.\" We more explicitly discuss the \u201cdegree\u201d of implausibility of sign-symmetry compared to other algorithms in the revised discussion section.\n\n2) Molecular mechanisms\nAlthough we agree that biological molecular mechanisms are rich enough to implement a large variety of schemes, we still think there is something to be said about the simplicity or ease with which one scheme can be implemented compared to another--as judged by, e.g., the number of unique genes or interactions needed. 
Although we have not tried to devise a scheme for implementing BP, our intuition is that it will be much more difficult, if only because more information needs to be communicated (on the order of 10 bits for a magnitude compared to 1 bit for a sign). Moreover, in the truly biological case of consistent neurons, implementing sign-symmetry is rather easy (Figure 4b).\n\nHence, the purpose of Figure 4 is to illustrate only that sign-symmetry can be achieved relatively *simply* in the brain (only 2 orthogonal ligand-receptor pairs in the case of Figure 4b), not that it is the only or even the most *likely* implementation. We would further like to remark that it is currently difficult to verify/falsify this sort of inter-areal wiring scheme in the brain given the current limits of connectomics. It is challenging to a) image large tissue sizes and b) trace axons over long distances. As a frame of reference, a paper from a year ago examined tissue sizes of ~500 microns and axon lengths of ~250 microns (doi.org/10.1038/nature24005); recent work pushes the limit to ~1 mm (doi.org/10.1038/s41592-018-0049-4). In comparison, in mice a visual area spans 0.5 to several mm (doi.org/10.1016/j.neuron.2011.12.004); in primates it is tens of millimeters. In this light, although Figure 4 is a thought experiment, it also represents a falsifiable hypothesis that, just like the proposed scheme for feedback alignment (Lillicrap et al. 2016, their Figure S3), can be tested with experimental data potentially in the near future.\n\n3) Sign switching\nWe agree that removing sign switching will probably greatly benefit sign-symmetry (since it removes the need for sign-transport). We have run experiments where weights do not switch sign, but find the preliminary results difficult to interpret and insufficient to report. On the other hand, we think sign switching is a core issue for any algorithm aiming at biological plausibility, but FA does not address it; nor do Bartunov et al. despite their carefully controlled architecture. That is not a criticism. How could addressing this point improve biological plausibility, unless we can also remove inconsistent neuron outputs (i.e., observe Dale's Law)? In general, there are many elements of biological implausibility in current deep learning settings, as discussed in Section 4.3, \"Towards a more biologically plausible learning algorithm.\" To make practical progress, we think it is still meaningful to make stepwise advances. What we contribute is that imprecise error propagation (both in magnitude and sign) is still very useful for guiding learning.\n\nWe are grateful to the reviewer for tracing out the nuances in the problem of biological plausibility, and hope we have sufficiently incorporated them into the revised manuscript and tightened our argument.\"}", "{\"title\": \"Weight-transport, unitary weight, and XNOR-Net\", \"comment\": \"Thank you very much for your thoughtful comments. Here is a more detailed reply after we've run additional experiments to address your concerns.\n\n1) Weight-transport\nWe do not claim to completely solve the problem of \"transport.\" However, we do address weight-transport by eliminating the need to synchronize magnitude (many bits of information) and only asking forward and backward weights to share signs (1 bit of information). This requires a much looser connection between the two, and indeed makes it much easier to devise an implementation for SS in the brain. 
Although one could perhaps concoct a scheme to achieve precise weight symmetry, it will likely be much more difficult and complex because more information needs to be shared. Relatedly, although the implementation in Figure 4 is ad hoc, its purpose is only to show that it's relatively *simple* to implement sign-symmetry in the brain (especially with consistent neurons), not that it is *likely* implemented in this way in the brain. Hence, we chose not to grasp for speculative neuroscientific evidence and overstretch our claims for Figure 4. \n\n2) Unitary weights\nWe have run additional experiments where the feedback weight is B = sign(W) * R, where W is the feedforward weight, R is a random weight matrix (as in Feedback Alignment), and * is elementwise multiplication. This setting achieves similar performance to SS and BP in Figure 1, consistent with our interpretation that sign-symmetry works because of sign symmetry, not because of any special property of the weight magnitudes.\n\n3) XNOR-Net\nThank you for bringing up binary weight networks and signSGD for discussion. We omitted discussing them because they are not motivated by biological plausibility of the learning algorithm, and hence although they are superficially similar to SS, they are fundamentally different. SignSGD has been discussed in the previous comment; it still computes exact gradients, only using gradient signs during update. In XNOR-Net, although its feedforward and feedback weights are binary, they are still exactly symmetrical, making gradient computation exact in form.\n\nIn contrast, consider this simple case in sign-symmetry, where input h_0 is connected by {w_0, w_1} to output {h_10, h_11}. If {w_0, w_1} = {1, -0.5} and grad_output = {1, 1.5}, grad_input = 0.25 in BP but -0.5 in SS. Not only is the SS gradient imprecise, it is in a different direction than the BP gradient! Hence SS is fundamentally different from both XNOR-Net and signSGD.\n\nWe are grateful for all three comments above, and we will add them to the discussion in the paper to make our contribution clearer.\"}", "{\"title\": \"Thanks for the comment!\", \"comment\": \"Thanks a lot for such a detailed and constructive comment.\n\nMany of the concerns are similar to: https://openreview.net/forum?id=SygvZ209F7&noteId=HJels3U-67&noteId=HJels3U-67\n\nAnd we answered some of them. We are going to provide more replies soon.\"}", "{\"title\": \"Thank you very much for the review\", \"comment\": \"As a quick comment, we really appreciate your very detailed feedback! We are working heavily on a hopefully equally detailed reply! :D\"}", "{\"title\": \"Thank you very much for the review\", \"comment\": \"Thank you very much for your encouraging review.\n\nRegarding datasets with small sample size, many such experiments can be found in [1]. We did not formally repeat them but observe similar conclusions.\n\nThis is a quick reply and we are working on a more detailed version!\n\n[1] Liao, Q., Leibo, J. Z., & Poggio, T. (2015). How Important is Weight Symmetry in Backpropagation? arXiv 2015, AAAI 2016\"}", "{\"title\": \"Thank you very much for the review\", \"comment\": \"We are very excited to see your encouraging review! We really appreciate your super detailed comments. 
\n\nThis is a quick reply and we are working on detailed replies to all comments!\"}", "{\"title\": \"signSGD\", \"comment\": \"Thanks for the feedback!\n\nsignSGD seems to be very similar to (if not the same as) the \"Batch Manhattan\" (BM) approach first used in [1], which is discussed in this paper and [1].\n\nOne central question in biologically-plausible training of neural networks is how different (from SGD) the weight updates can be while maintaining good performance. How much noise can SGD tolerate if evolution wants to implement an approximated SGD in the brain?\n\nWith signSGD/BM, we can see that as long as the direction of the weight update is the same as standard SGD, the performance is quite good. As your comment said, this might be only mildly surprising.\n\nWith sign-symmetry feedback, however, the gradients are propagated imprecisely at *every* layer, leading to drastically different update directions in many early layers of the network. Without the results of this paper and [1], it is much less clear whether this drastic level of divergence from SGD can still lead to good performance. \n\nAlthough not completely eliminating the problem of weight transport, the results of this paper constitute an important step in that direction, showing that this non-trivial level of discrepancy from SGD can be tolerated to achieve good performance on large-scale tasks like ImageNet. It is good news for evolution --- it has more flexibility in implementing approximated SGD in the brain.\n\n[1] Liao, Q., Leibo, J. Z., & Poggio, T. (2015). How Important is Weight Symmetry in Backpropagation? arXiv 2015, AAAI 2016\n\nFootnote: This is just a quick reply by one author. We are working on replies to all other comments. Thank you all very much for constructive comments!\"}", "{\"comment\": \"The sign-symmetry method does not solve the weight transport problem. It just shows that a coarser kind of transport may be sufficient for practical purposes. The biological mechanism concocted in Figure 4 to show how it *may* be implemented is completely ad hoc and without any empirical support (one may as well concoct a similar scheme for standard backprop). Also, however that scheme is supposed to work (which is not explained clearly in the text, by the way), it has to show why the feedback weights have to be *exactly* +1 or -1, which is what the sign-symmetry algorithm assumes (again biologically completely unrealistically). The scheme only appears to show how sign consistency can be achieved, not why the weights have to be exactly +1 or -1.\n\nAs another commenter pointed out, the success of the sign-symmetry method in practical applications is also not surprising, given the success of the signSGD method: https://arxiv.org/abs/1802.04434 (a paper the authors unfortunately do not discuss or cite) and especially the success of the very similar (and even more restrictive) binary weight architectures (such as the XNOR-Net: https://arxiv.org/abs/1603.05279 and a whole slew of other work that followed it), again an entire literature not even mentioned in this paper.\n\nIn conclusion, the main claim of this paper (that \"biologically plausible learning algorithms can scale to large datasets\") is misleading and the main result is not novel.\", \"title\": \"misleading title, misleading claims, main result not novel\"}", "{\"comment\": \"It's nice to see more interest around the question of biologically-motivated deep learning! 
I've been wondering a couple of things about the central claim of the manuscript. As I understand it, the manuscript is aimed at examining the question of whether biologically motivated algorithms can scale to large and difficult datasets. The abstract frames this question in particular around the problem of 'weight transport' and biologically motivated algorithms that do away with this issue. I may be missing something, but it seems to me that the approach suggested in the manuscript still makes liberal use of weight transport. That is, the proposed approach uses backward matrices that are constructed dynamically in terms of the forward weights via:\n\nB = sign(W^{T})\n\nIs this true? Even though this throws away magnitude information, this operation still transports lots of weight information from the forward path to backward synapses. Thus, the approach appears to assume weight transport.\n\nIt might still be an interesting datapoint to know that backward passes constructed in this way are effective. Though I would have said that this wasn't particularly surprising, since sign information is well known to be the crucial information for learning: for example, aggressive gradient clipping works well in many instances and as early as the 1990s Rprop (resilient propagation [1]) was shown to work very effectively by discarding the magnitudes of gradients (and keeping just the sign information). \n\nSeveral of the other biologically-motivated algorithms that are referenced in the manuscript aim to get rid of weight transport, e.g. by learning useful backward weights (Difference Target Prop). So, is it reasonable to compare the approach in this manuscript to other algorithms that don't use weight transport? 'Sign-symmetry' seems to exist in a very different category, in that it takes weight transport for granted. If I understand correctly, the wiring diagram in Figure 4 is meant to suggest why it would be ok to take weight transport for granted in the brain. But I would have said that existing empirical evidence speaks against this outlined implementation. At the very least, I found myself wanting citations that would strengthen the claim.\n\nIn sum, it seems like what could be said given the evidence presented in the manuscript is that if there were an algorithm that could successfully construct backward B matrices with the correct signs (i.e. matching sign(W^{T})) without weight transport, then this hypothetical algorithm would be successful on large scale datasets. This is interesting in its own right, but at first blush this statement seems far from the central claim of the manuscript that existing biologically-plausible algorithms already scale to large data sets? But I may have missed something in my reading of the work, and would be happy to be corrected on details.\n\n[1] Martin Riedmiller and Heinrich Braun: Rprop - A Fast Adaptive Learning Algorithm. Proceedings of the International Symposium on Computer and Information Science VII, 1992\", \"title\": \"Assuming weight transport?\"}", "{\"title\": \"An important step in our understanding of biologically plausible learning.\", \"review\": \"Summary: The authors are interested in whether particular biologically plausible learning algorithms scale to large problems (object recognition and detection using ImageNet and MS COCO, respectively). In particular, they examine two methods for breaking the weight symmetry required in backpropagation: feedback alignment and sign-symmetry. 
They extend results of Bartunov et al. (2018) (which found that feedback alignment fails on particular architectures on ImageNet), demonstrating that sign-symmetry performs much better, and that preserving the error signal in the final layer (but using FA or SS for the rest) also improves performance.\n\nThe paper is clear, well motivated, and significant in that it advances our understanding of how recently proposed biologically plausible methods for getting around the weight symmetry problem work on large datasets.\n\nIn particular, I appreciated: the clear introduction and explanation of the weight symmetry problem and how it arises in the context of backprop, the thorough experiments on two large scale problems, the clarity of the presented results, and the discussion about future directions of study.\n\nMinor comments:\n- s/there/therefore in the first paragraph on page 2\n- The authors claim that their conclusions \"largely disagree with results from Bartunov et al. 2018\". I would suggest a slight rewording here: the authors' results *extend* our understanding of Bartunov et al. 2018. They do not disagree in the sense that this paper also finds that feedback alignment alone is insufficient to train large models on ImageNet.\n- Figure 1: I was expecting to see a curve for performance of feedback alignment on AlexNet\n- Figure 1: The colors are hard to follow. For example, the two shades of purple represent the two FA models, which makes sense, but then there are two separate hues (black and blue) for the sign-symmetry models. Instead, I would suggest keeping black (or gray) for backpropagation (the baseline), and then using two hues of one color (e.g. light blue and dark blue) for the two sign-symmetry models. This would make it easier to group the related models.\n- Figure 2: Would be nice if these colors (for backprop/FA/SS) matched the colors in Figure 1.\n- Figure 3: Why is there such a small change in the average alignment angle (2 degrees)? I found that surprising.\n- Figure 3: The right two panels would be clearer on the same panel. That is, instead of showing the std. dev. separately, show it as the spread (using error bars) on the plot with the mean. This makes it easier to get a sense of whether the distributions overlap or not.\n- Figure 3 (b/c): Could also use the same colors for BP/SS as Figs 1 and 2.\n- Figure 3 (caption): I think the blue/red labels in the caption are mixed up for panel (a).\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Nice alternative to backprop\", \"review\": \"In the submitted manuscript, the authors compare the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures, with the aim of testing biologically-plausible learning algorithms as alternatives to the more artificial backpropagation.\nThe obtained results are promising and quite different from those in (Bartunov et al., 2018) and lead to the conclusion that biologically plausible learning algorithms in general and sign-symmetry in particular are effective alternatives for ANN training.\n\nAlthough not all the included ideas are fully novel, the manuscript shows notable originality, paving the way for what can be a major breakthrough in deep learning theory and practice in the next few years. 
The paper is well written and organised, with the tackled problem well framed into the context. The suite of experiments is broad and diverse and overall convincing, even if the performances are not striking. The biological interpretation and the proposal for the construction in the brain are very interesting.\n\nA couple of remarks: I would be interested in understanding the robustness of the sign-symmetry algorithm w.r.t. for instance dropout and (mini)batch size, and to see the behaviour of the algorithm on datasets with small sample size; second, there is probably too much stress on comparing w/ (Bartunov et al., 2018), while the manuscript is robust enough not to need such motivation.\n\nMinor: refs are not homogeneous; first-name citations are not consistent.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"the claims, conclusion, and general writing need to be better situated in the context of the concerns in the field\", \"review\": \"This work adds to a growing literature on biologically plausible (BP) learning algorithms. Building off a study by Bartunov et al. that shows the deficiencies of some BP algorithms when scaled to difficult datasets, the authors evaluate a different algorithm, sign-symmetry, and conclude that there are indeed situations in which BP algorithms can scale. This seemingly runs counter to the conclusions of Bartunov et al.; while the authors state that their results are \"complementary\", they also state that the findings \u201cdirectly conflict\u201d with the results of Bartunov, concluding that BP algorithms remain viable options for both learning in artificial networks and the brain.\n\nTo reach these conclusions the authors report results on a number of experiments. First, they show successful training of a ResNet-18 architecture on ImageNet using sign-symmetry, with their model performing nearly as well as one trained with backpropagation. Next, they demonstrate decent performance on MS COCO object detection using RetinaNet. Finally, they end with a discussion that seeks to explain the differences between their approach and the approach of Bartunov et al., and with a potential biological implementation of sign symmetry.\n\nOverall the clarity of the writing is sufficient. The algorithm is properly explained, and there are sufficient citations to reference prior work. The results are generally clear (though there is an incomplete experiment, I agree with the authors that it is unlikely for the preliminary results to change). I believe that there is enough detail for this work to be reproducible. The work is also sufficiently novel in that experiments using sign-symmetry on difficult datasets have not been undertaken, to my knowledge.\n\nUnfortunately, the clarity and rigor of the *scientific argument* is insufficient for a number of reasons. These will be enumerated below.\n\nFirst, the explicit writing and underlying tone of the paper reveal a misrepresentation of the scientific argument in Bartunov et al. The scientific question in Bartunov et al. is not a matter of whether BP algorithms can be useful in purely artificial settings, but rather whether they can say anything about the way in which the brain learns. 
In this work, on the other hand, there seem to be two scientific questions: first, to assess whether BP algorithms can be useful in artificial settings, and second, to determine whether they can say anything about how the brain learns, as in Bartunov (indeed, the authors\u2019 conclusions highlight precisely these two points). Unfortunately, the experiments and underlying experimental logic push towards addressing the first question, and use this as evidence towards a conclusion to the second question. More concretely, experiments are run on biologically problematic architectures such as ResNet-18, often with backpropagation in the final layer (though admittedly this doesn\u2019t seem to be an important detail with sign-symmetry, for reasons explained below). This is fine under the pretense of answering the first question, but to seriously engage with the results of Bartunov et al. and assess sign-symmetry\u2019s merit as a BP algorithm for learning in the brain, the work requires the algorithms to be tested under similar conditions before claiming that there is a \u201cdirect conflict\u201d. To this end, though the authors claim that the conditions under which Bartunov et al. tested are \u201csomewhat restrictive\u201d, this logic can equally be flipped on its head: the conditions under which this paper tests sign-symmetry are not restrictive enough to productively move in the direction of assessing sign-symmetry\u2019s usefulness as a description of learning in the brain, and so the conclusion that the algorithm remains a viable option for describing learning in the brain is not sufficiently supported. On the other hand, I think the conclusions regarding the first question -- whether sign-symmetry can be useful in artificial settings -- are fine given the experiments.\n\nSecond, the work does not sufficiently weigh the \u201cdegree\u201d of implausibility of sign-symmetry compared to the other algorithms, and implicitly speaks of feedback alignment, target propagation, and sign-symmetry as equally realistic members of a class of BP algorithms. Of course, one doesn\u2019t want to go down the road of declaring that \u201calgorithm A is more plausible than algorithm B!\u201d, but the nuances should at least be seriously discussed if the algorithms are to be properly compared. In backpropagation the feedback connections must be similar in sign and magnitude. Sign-symmetry eliminates the requirement that the connections be similar in magnitude. However, this factor is arguably the less important of the two (the direction of the gradient is more important than the magnitudes), and we are still left with feedback weights that somehow have to tie their sign to their feedforward counterparts, which is not an issue in target propagation or feedback alignment. The authors try to explain away this difficulty with an appeal to molecular biology, which leads into my third point.\n\nThird, the appeal to molecular mechanisms to explain how sign-symmetry can arise is not rigorous. There is a plethora of molecular mechanisms at play in our cells; indeed, there are enough mechanisms to hand-craft *any* sort of circuit one likes. Thus, it is somewhat vacuous to conclude that a particular circuit can be \u201ceasily implemented\u201d in the brain simply by appealing to a hand-crafted circuit. For this argument to hold one needs to appeal to biological data to demonstrate that such a circuit either a) exists already, or b) most probably exists because of reasons X, Y, Z. 
Unfortunately there is no biological backing, rendering this argument a possibly fun thinking exercise, but not a serious scientific proposal. But perhaps most problematic, the argument leaves the problem of sign-switching in the feedforward network to \u201cfuture work\u201d. This is perhaps *the most* important problem at play here, and until it is answered, these arguments don\u2019t have sufficient impact.\n\nAltogether the scientific argument of this work needs tightening. The tone, the title, and the overall writing should be modified to better tackle the nuances underlying the arguments of biologically plausible learning algorithms. The claims and conclusions need to be more explicit, and the work needs to be better situated in the context of both the previous literature and the important questions at play for assessing biologically plausible learning algorithms.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
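A quick numeric check of the worked example that recurs in the author responses above (an input connected by weights {1, -0.5} to two outputs receiving gradients {1, 1.5}); the snippet is illustrative only, but it reproduces the claimed values and shows that sign-symmetry feedback can flip the direction of the backpropagated gradient:

```python
import numpy as np

W = np.array([[1.0], [-0.5]])   # feedforward weights: 2 outputs <- 1 input
g = np.array([[1.0], [1.5]])    # gradient arriving at the two outputs

print((W.T @ g).item())           # backprop feedback W^T g:        0.25
print((np.sign(W).T @ g).item())  # sign-symmetry sign(W)^T g:     -0.5
```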
rkxw-hAcFQ
Generating Multi-Agent Trajectories using Programmatic Weak Supervision
[ "Eric Zhan", "Stephan Zheng", "Yisong Yue", "Long Sha", "Patrick Lucey" ]
We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulable way. We present a hierarchical framework that can effectively learn such sequential generative models. Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime. In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.
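To illustrate the "programmatically produced weak labels" mentioned in this abstract — a hypothetical sketch only; the speed threshold, units, and grid size are assumptions rather than values from the paper — a macro-intent labeling function can be as simple as tagging each timestep with the court region where the agent next becomes stationary, the heuristic the authors describe in the responses below:

```python
import numpy as np

def label_macro_intents(traj, speed_thresh=0.5, cell=5.0):
    """traj: (T, 2) positions of one agent. Returns (T, 2) integer grid
    cells: at each step, the cell of the next position at which the
    agent is (nearly) stationary -- a weak label for long-term intent."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    stationary = np.append(speed < speed_thresh, True)  # treat the end as a stop
    labels = np.empty(traj.shape, dtype=int)
    goal = traj[-1]
    for t in range(len(traj) - 1, -1, -1):              # scan backwards in time
        if stationary[t]:
            goal = traj[t]
        labels[t] = np.floor(goal / cell)
    return labels
```

Such a function plays the role of weak supervision: the resulting labels stand in for variables that would otherwise be latent in the training data, as the metareview below puts it.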
[ "deep learning", "generative models", "imitation learning", "hierarchical methods", "data programming", "weak supervision", "spatiotemporal" ]
https://openreview.net/pdf?id=rkxw-hAcFQ
https://openreview.net/forum?id=rkxw-hAcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkgzW8tkeV", "BygisxQbCm", "ryeTwe7bAm", "S1lglem-0Q", "r1lOYyQWRX", "Bkgtp0yHaX", "BJgg0sJQpm", "rygJogZanm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544685049720, 1542693026934, 1542692964892, 1542692840116, 1542692735875, 1541893824645, 1541761992109, 1541374102771 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1178/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1178/Authors" ], [ "ICLR.cc/2019/Conference/Paper1178/Authors" ], [ "ICLR.cc/2019/Conference/Paper1178/Authors" ], [ "ICLR.cc/2019/Conference/Paper1178/Authors" ], [ "ICLR.cc/2019/Conference/Paper1178/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1178/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1178/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents generative models to produce multi-agent trajectories. The approach of using a simple heuristic labeling function that labels variables that would otherwise be latent in training data is novel and and results in higher quality than the previously proposed baselines.\\nIn response to reviewer suggestions, authors included further results with models that share parameters across agents as well as agent-specific parameters and further clarifications were made for other main comments (i.e., baselines that train the hierarchical model by maximizing an ELBO on the marginal likelihood?).\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Generative models to produce coordinated multi-agent behavior\"}", "{\"title\": \"AnonReviewer3 Response\", \"comment\": \"Thank you for reviewing our paper and providing insightful feedback. We respond to your main points below.\\n\\n> \\u201cWhat is more interesting is that a heuristic labeling function is sufficient to label macro-intents that lead to learning realistic basketball offense and swarm behavior.\\u201d\\n\\nYes, we are very excited at the new lines of research that this opens up. One can envision many settings in which users wish to have diverse and detailed control over what\\u2019s being generated. We believe models with this degree of control can be learned by incorporating labeling functions defined by users according to their preferences. We are very excited about future work in this direction.\\n\\n> \\u201cAre any \\u2026 baselines \\u2026 equivalent to training the hierarchical model by maximizing an ELBO on the marginal likelihood?\\u201d\\n\\nIf we understand the reviewer\\u2019s question, then VRAE-mi (previously named VRNN-mi) does exactly this by introducing a global latent variable (in place of macro-intent weak labels) and maximizing the ELBO as well as the mutual information between the global latent variable and the trajectory. We will update the paper to make this more clear.\\n\\n> \\u201c... could not the other models have higher likelihoods?\\u201d\\n\\nYes, a higher ELBO does not imply a higher true likelihood, as it depends on the tightness of the bound. Computing the exact likelihood is infeasible, but it can be approximated with importance sampling. However, we note that likelihoods do not necessary correspond to quality of generated samples, as evidenced by our experiments and by [1]. Furthermore, reporting ELBOs is often sufficient when quantitatively comparing models [2,3,4].\\n\\n\\u201c> ... 
98% statistical significance.\u201d\n\nWe performed a one-sample t-test, where the null hypothesis is that the gains come from a zero-mean distribution (which would mean that both models are preferred equally).\n\n[1] Theis et al. A note on the evaluation of generative models.\n[2] Chung et al. A recurrent latent variable model for sequential data.\n[3] Fraccaro et al. Sequential neural models with stochastic layers.\n[4] Goyal et al. Z-forcing: training stochastic recurrent networks.\"}", "{\"title\": \"AnonReviewer2 Response\", \"comment\": \"Thank you for reviewing our paper and providing insightful feedback. We respond to your main points below.\n\n> \u201c... how would an intermediate baseline model where a set of parameters are shared and each agent also has an independent set of parameters perform?\u201d\n\nFollowing your suggestion, we trained such a model where the positions of all players are fed into a single GRU network, but independent networks are used to compute latent variables for each agent. This is a mix between VRNN-single and VRNN-indep, which we will call VRNN-mixed, and achieves an ELBO of 2331 and similar statistics to VRNN-indep (we will update Table 1 and Table 3). We\u2019ve also included some generated samples at (https://bit.ly/2S66iO9). However, we emphasize that this model remains fundamentally different from our solution, as our solution provides a degree of controllability and interpretability (through macro-intents) not offered by these baselines. \n\n> \u201cHow is the threshold for macro-intent generation selected?\u201d\n\nThe threshold is chosen such that it qualitatively matches realistic basketball behavior (i.e. when a basketball player is considered stationary). However, this is a very interesting question raised by the reviewer regarding the effect of labeling functions on the stability and robustness of the model. One can imagine other domains where labeling functions come from a variety of sources, some of which are noisy or redundant. Designing an algorithm that can process these labels and incorporate them into sample generation is a new line of research that we are very excited about.\n\n> \u201c... could using separate [macro-intent] vectors for each agent \u2026 give the same result?\u201d\n\nIn the basketball setting, individual macro-intents are in fact sufficient for generating corresponding trajectories. However, this is mainly an architectural detail that is domain-dependent and not the most important part of our contributions. For example, one can also define macro-intents that cannot be factorized for each agent, such as friendly/unfriendly behavior in the Boids model included in our experiments.\n\n> Minor Comments\n\nThe results come from sampling from the posterior distribution. The average standard deviation of the learned posterior distribution is around 0.08 per latent dimension. The learned likelihood of the data is very peaked (standard deviation often less than 0.01). The macro-intent RNN model achieves a log-likelihood of 2180, which is an improvement over the RNN-gauss model but still worse than all VRNN models. We will update the paper to correct the typos.\"}", "{\"title\": \"AnonReviewer1 Response\", \"comment\": \"Thank you for reviewing our paper and providing insightful feedback. 
We respond to your main points below.\\n\\n> \\u201cThe evaluations are not very strong due to toy task setup.\\u201d\\n\\nWe emphasize that, although we use a 2D perspective of the game of basketball, this setting of modeling multi-agent tracking data is still highly non-trivial for the following reasons:\\n- Such data is often fine-grained and spans long time horizons.\\n- Models must reason over the space of all possible multi-agent trajectories, which is exponentially large w.r.t. the number of agents and time horizon.\\n- Expert behavior is often inherently non-deterministic (being unpredictable on offense) and current methods struggle to accurately capture such multimodal behavior. \\n- Modeling the coordination between agents is crucial for generating realistic trajectories (e.g. executing a specific offensive play in basketball).\\n\\nOur approach provides an efficient solution that addresses all the aforementioned challenges, whereas current state-of-the-art baselines perform very poorly in this task (e.g. players going out of bounds, players not moving cohesively, etc.). See (http://bit.ly/2DAu1Ub) for some comparisons, which is the same link provided in the footnote on page 5. Lastly, we comment that coaches and sports analysts evaluate team strategies using a 2D view of the game, so our solution in this space is practically relevant.\"}", "{\"title\": \"General comments to reviewers\", \"comment\": \"We thank all reviewers for their insightful comments and will make updates to the paper as needed. We briefly summarize our contributions below.\\n\\nWe work in a novel sequential modeling setting in which the target phenomenon (coordinated multi-agent behavior) is inherently non-deterministic and multimodal. Current approaches do not scale to the complexity of this problem because the space of all possible multi-agent trajectories is exponentially large w.r.t. the number of agents, and the agents are often highly coordinated. \\n\\nWe propose an efficient solution that uses a simple labeling function in sequential generative models to learn a macro-intent latent variable that encodes long-term intent and captures the coordination between agents. Our results demonstrate that our model generates trajectories of significantly higher quality than current baselines. Lastly, we highlight that our approach provides a degree of control and interpretability not offered by other baselines; the macro-intent variables are well understood (since they originate from a heuristic labeling function) and their effect on generated samples can be easily analyzed. \\n\\nWe believe that this work opens a new line of research into algorithms that can provide users with various degrees of control during sample generation. Current alternatives involve learning latent variables in a fully unsupervised fashion and inspecting them after training for interpretable features. Our work uses labeling functions to directly control sample generation in ways that can be specified by the user. For example, the labeling function we used for basketball allows users to control where they want players to go (see Figure 6a in our paper). We are very excited about future work in this direction.\"}", "{\"title\": \"Paper proposes multi-agent sequential generative models. This is influential beyond toy simulations presented in the paper.\", \"review\": \"Very strong paper, building on top of variational RNNs for multi-agent sequential generation. The dialogue use case mentioned in the Discussion is indeed very exciting. 
The approach extends VRNN to a hierarchical setup with high level coordination via a shared learned latent variable. The evaluations are not very strong due to the toy task setup; however, the approach is clear and impactful.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Hierarchical latent variables with weak supervision help learning a global coordination between cooperative agents.\", \"review\": [\"This paper proposes training multiple generative models that share a common latent variable, which is learned in a weakly supervised fashion, to achieve high level coordination between multiple agents. Each agent has a separate VRNN model which is conditioned on the agent\\u2019s own trajectory history as well as the shared latent variable. The model is trained to maximize the ELBO objective and log-likelihood over macro-intent labels. Experiments are conducted over a basketball gameplay dataset (to model the trajectories of the offensive team members) and a synthetic dataset. The results show that the proposed model is on par with the baseline models in terms of ELBO while showing that it can model multi-modality better and is preferred more by humans.\", \"In general, the paper is well written and the overall framework captures the essence of the problem that the authors are trying to solve.\", \"Furthermore, incorporating an auxiliary latent variable to model the coordination between multiple agents is interesting.\", \"I have several comments related to the strength of the baselines and the contribution of individual components in the proposed model.\", \"Major Comments\", \"It seems that VRNN-single and VRNN-indep are two models on the far two ends of a spectrum. To understand the contribution of the shared macro-intent, how would an intermediate baseline model where a set of parameters are shared between agents and each agent also has an independent set of parameters perform? This could be accomplished by sharing the parameters of the first layer of GRU networks and learning the second layer parameters independently.\", \"How is the threshold for macro-intent generation selected? How does this parameter affect the overall performance? Since the smoothness of the segments between two macro-intents depends on this parameter, I wonder about its effect on the learned posterior distribution.\", \"Rather than using the prediction of the macro-intent RNN as a single global vector (\\\\hat{g}_t), could using separate vectors for each agent (corresponding blocks of \\\\hat{g}_t) as inputs to VRNN give the same results? Since the macro-intent RNN is already aware of all the macro-intents, it would be interesting to see if individual macro-intents are sufficient for VRNN to generate corresponding trajectories.\", \"Minor Comments\", \"Do results in Table (1) come from sampling or from using the mode of the distributions? 
How peaked are the learned posterior distributions?\", \"What is the performance of the macro-intent RNN model?\", \"In Eq (2), \\u201c<=T\\u201d should be \\u201c<=t\\u201d (as in Eq (11) in Chung 2015).\", \"In Page 6, bullet point 4: it should be \\u201cexcept we maximize the mutual information\\u2026\\u201d\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Heuristic labeling enables learning of hierarchical model without needing to marginalize over latent variables\", \"review\": \"# Summary\\n\\nThe paper proposes training generative models that produce multi-agent trajectories using heuristic functions that label variables that would otherwise be latent in training data. The generative models are hierarchical, and these latent variables correspond to higher level goals in agent behavior. The paper focuses on basketball offenses as a motivating scenario in which multiple agents have coordinated high-level behavior. The generative models are RNNs where each output is fed into the decoder of a variational autoencoder to produce observed states. The authors add an intermediate layer to capture the latent variables, called macro-intents. The parameters are learned by maximizing an evidence lower bound.\\n\\nExperiments qualitatively and quantitatively show that the hierarchical model produces realistic multi-agent traces.\\n\\n# Comments\\n\\nThe paper presents a sensible solution for heuristically labeling latent variables. It is not particularly surprising that the model then learns useful behavior because it no longer has to maximize the marginal likelihood over all possible macro-intents. What is more interesting is that a heuristic labeling function is sufficient to label macro-intents that lead to learning realistic basketball offenses and swarm behavior.\\n\\nAre any of the baselines (VRNN-single, VRNN-indep, and VRNN-mi) equivalent to training the hierarchical model by maximizing an ELBO on the marginal likelihood? I do not think this comparison is done, which might be interesting to quantify how much of a difference heuristic labeling makes. Of course, the potentially poor fit of a variational distribution would confound the results.\\n\\n# Minor things\\n\\n1) In the caption of Table 1, it says \\\"Our hierarchical model achieves higher log-likelihoods than baselines for both datasets.\\\" Are not the reported scores evidence lower-bounds? So it achieves a higher evidence lower bound, but without actually computing the true likelihood, could not the other models have higher likelihoods?\\n\\n2) Under \\\"Human preference study\\\" it says \\\"All judges preferred our model over the baselines with 98% statistical significance.\\\" I am not familiar with this terminology. Does that mean that a p value for some null hypothesis is .02?\\n\\n3) Something is wrong with the citation commands. Perhaps \\\\citep should be used.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SyxwW2A5Km
Learning Representations of Categorical Feature Combinations via Self-Attention
[ "Chen Xu", "Chengzhen Fu", "Peng Jiang", "Wenwu Ou" ]
Self-attention has been widely used to model sequential data and has achieved remarkable results in many applications. Although it can be used to model dependencies without regard to positions of sequences, self-attention is seldom applied to non-sequential data. In this work, we propose to learn representations of multi-field categorical data in prediction tasks via the self-attention mechanism, where features are orderless but have intrinsic relations over different fields. In most current DNN-based models, feature embeddings are simply concatenated for further processing by networks. Instead, by applying self-attention to transform the embeddings, we are able to relate features in different fields and automatically learn representations of their combinations, which are known as the factors of many prevailing linear models. To further improve the effect of feature combination mining, we modify the original self-attention structure by restricting the similarity weight to have at most k non-zero values, which additionally regularizes the model. We experimentally evaluate the effectiveness of our self-attention model on non-sequential data. Across two click-through rate prediction benchmark datasets, i.e., Criteo and Avazu, our model with top-k restricted self-attention achieves state-of-the-art performance. Compared with the vanilla MLP, the gain from adding self-attention is significantly larger than that from modifying the network structures, which most current works focus on.
[ "Learning Representations", "Feature Combinations", "Self-Attention" ]
https://openreview.net/pdf?id=SyxwW2A5Km
https://openreview.net/forum?id=SyxwW2A5Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJe5S_CWJN", "r1x4CIwo27", "HklbEe9Fhm", "Bye_X4Dt2m" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1543788610387, 1541269195951, 1541148713360, 1541137440285 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1177/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1177/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1177/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1177/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"All reviewers agree in their assessment that this paper is not ready for acceptance into ICLR and the authors did not respond during the rebuttal phase.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}", "{\"title\": \"Good results, better justification for the novelty needed.\", \"review\": \"Summary\\nThe paper proposes to apply self-attention mechanism from (Vaswani et.al.) to the task of click-through rate prediction, which is a task where one has input features which are a concatenation of multiple one-hot vectors (referred to as fields). The paper finds that applying the self-attention mechanism outperforms state of the art approaches for the task on two benchmark datasets. It then proposes a small modification to the self-attention mechanism, retaining only the top-k attentions to sparsify attention, and finds that it leads to marginal improvements.\\n\\nStrengths\\n+ The paper is fairly well written, and the contributions are succinctly summarized.\\n+ The proposed approach appears to get state of the art results on click-through rate prediction.\\n+ The results contain clear ablations of the approach.\\n\\nNegatives\\n1. It is not clear why the skip connection is needed. Especially, using the skip connection the way it is done in Eqn. 4 is a bit odd since we are adding positive quantities to each other, meaning that across multiple rounds, the magnitude of the attended feature will keep increasing. Perhaps this is the reason why performance deteriorates after attending thrice?\\n\\n2. Calling top-K a regularizer is somewhat misleading as it is a fundamentally different model class, as opposed to a regularizer that imposes a soft constraint on the kind of solutions that should be preferred in our hypothesis class. The current paper does not show with enough clarity if the improvements with top-k are because it is a better model for the data or because it is a better regularizer. One way to do this would be to systematically look at the difference between training and validation losses with and without top-k and show that the difference is smaller when the model is regularized. \\nMore generally, it would be ideal to show what kind of a constraint the top-k attention places on the hypothesis class of the original model. For example, the dropout paper shows that dropout, in the linear case is equivalent to L2 regularization (in expectation). (*)\\n\\n3. It would be interesting to report how often there is an overlap in the top k indices chosen across multi-head attentions.\\n\\n4. What are the relative number of parameters in each of the models for which the results are reported? Are we ensuring that a similar number of parameters are used to report all the results in say, Table. 1.? Also, it would be good to report error bars for the results in Table. 1 since the differences seem to quite small. 
(*)\\n\\n\\nPreliminary Evaluation\\nThe paper is a fairly straightforward application of self-attention to the task of click-through rate prediction. The major modeling novelty is in using top-k attentions for the click-through task, the interestingness/validity of which needs to be demonstrated more clearly to understand if this heuristic might apply to other models and other datasets. Important points for the rebuttal are marked with a (*) above.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Official review\", \"review\": \"Summary:\\nThe authors apply the self-attention mechanism, a.k.a. the transformer, to improve the representations of multi-field categorical features in recommendation systems. Unlike the previous approaches in which multi-field features are simply concatenated, the proposed method more actively combines those features, improving the final performance.\", \"strengths\": [\"It is reasonable to apply the permutation-invariant self-attention mechanism to the multi-field features as orders of the fields should not matter.\", \"The method achieves the state-of-the-art performance on two datasets.\"], \"weaknesses\": [\"The paper lacks technical novelty as it does not propose any novel technique. Rather, it simply applies an existing technique to a new type of dataset.\", \"More extensive analyses on the learned representation would improve the paper.\", \"As the authors argue, the method can be used upon other existing state-of-the-art networks. Showing the improvement on other methods would improve the paper. Currently, the authors only present improvement on a simple MLP.\"], \"questions\": \"To apply the self-attention, the embeddings of the field features should be projected in the same space. I wonder if this physically makes sense. I wonder how they are embedded in the features and relate to each other. I would suggest including some analysis on the features while putting some rows of Table 3 in the appendix since many of these rows are not directly related to the method itself.\\n\\nOverall, I like the idea of the paper. However, the paper lacks technical novelty and presents only limited experiments and analysis. I would suggest the authors include more analyses on the learned representations.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Concern of invalid evaluation and a weak contribution\", \"review\": \"Quality:\\n- In 4.4, the authors have vigorously explored the space of hyperparameters. However, they do not describe how to determine the hyperparameters, e.g., by setting aside a validation set from a part of the training set and determining the hyperparameters using this validation set; instead, the authors split the two datasets into only training and test sets, respectively. Without this procedure, the results may overfit to the test set via repeated experiments. Even though the used datasets are of few-million scale, this procedure guarantees a minimum requirement for a reliable outcome from the proposed model. I firmly recommend that the authors update their results using a validation set to determine the hyperparameters and then report on the test set. 
Please describe these experimental details to ensure that the performed experiments are valid.\", \"clarity\": [\"Overall, the writing can be improved via proof-reading and polishing the sentences. In the Introduction section, \\\"there is little work applying...\\\" can be specified or rephrased as \\\"it is underexplored to apply\\\", and \\\"input features are not independent\\\" should specify what they are not independent of. Moreover, it is unclear what the authors want to argue with the last two sentences in the second paragraph of the Introduction section: \\\"The combinations in linear models are then made by cross product over different fields. Due to the sparsity problem, the combinations rely on much manual work of domain experts.\\\"\", \"The authors use top-k restriction (Shazeer et al., 2017) to consider sparse relationships among the features. For this reason, have you tried to use the L1 loss on the probability distributions, which are the outputs of the softmax function?\", \"In 4.5, the authors said they \\\"are in most concern of complementarity.\\\" What is the reason for this idea and why not the \\\"relevance\\\"?\", \"In Table 4, I'm afraid that I don't understand the content (three numbers in parentheses) of the third column. How does each input x_i or x_j, or a tuple of them, get its own CTR?\"], \"originality_and_significance\": [\"They apply self-attention to learn multiple categorical features to predict Click-Through-Rate (CTR) with a top-k non-zero similarity weight constraint to adapt to their categorical inputs. Due to this, the scientific contribution to the corresponding community is highly limited to providing empirical results on the CTR task.\", \"The authors argue that \\\"most of current DNN-based models simply concatenate all feature embeddings\\\"; however, this argument might be an over-simplified statement for the existing models in section 2.\", \"Similar works can be found but were not cited: [1] proposes a general framework for self-attention to exploit sequential (time-domain) and parallel (feature-domain) non-locality. [2] learns bilinear attention maps to integrate multimodal inputs using skip-connections and multiple layers on top of the idea of low-rank bilinear pooling.\"], \"pros\": [\"Strong empirical results on two CTR tasks using the previous works of self-attention and top-k restriction techniques.\"], \"cons\": [\"This work largely lacks originality since the proposed method heavily relies on the two previous works, self-attention and top-k restriction. They apply them to multiple categorical features to estimate CTR; however, their application seems to be monotonic, without a novel idea of task-specific adaptation.\"], \"minor_comments\": \"- In Figure 1, \\\"the number of head\\\" -> \\\"the number of heads\\\".\\n\\n\\n[1] Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local Neural Networks. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'18).\\n[2] Kim, J.-H., Jun, J., & Zhang, B.-T. (2018). Bilinear Attention Networks. In Advances in Neural Information Processing Systems 32 (NIPS'18).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rke8ZhCcFQ
ATTACK GRAPH CONVOLUTIONAL NETWORKS BY ADDING FAKE NODES
[ "Xiaoyun Wang", "Joe Eaton", "Cho-Jui Hsieh", "Felix Wu" ]
Graph convolutional networks (GCNs) have been widely used for classifying graph nodes in the semi-supervised setting. Previous works have shown that GCNs are vulnerable to perturbations of the adjacency and feature matrices of existing nodes. However, it is unrealistic to change the connections of existing nodes in many applications, such as existing users in social networks. In this paper, we investigate methods attacking GCNs by adding fake nodes. A greedy algorithm is proposed to generate adjacency and feature matrices of fake nodes, aiming to minimize the classification accuracy on the existing ones. In addition, we introduce a discriminator to classify fake nodes from real nodes, and propose a Greedy-GAN algorithm to simultaneously update the discriminator and the attacker, to make fake nodes indistinguishable from the real ones. Our non-targeted attack decreases the accuracy of GCN down to 0.10, and our targeted attack reaches a success rate of 0.99 for attacking the whole datasets, and 0.94 on average for attacking a single node.
[ "Graph Convolutional Network", "adversarial attack", "node classification" ]
https://openreview.net/pdf?id=rke8ZhCcFQ
https://openreview.net/forum?id=rke8ZhCcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HklpwF9BlV", "ryeL9Aae6X", "Hke6uV79nm", "HJl0oTyFhX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545083237190, 1541623438484, 1541186677286, 1541107110104 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1176/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1176/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1176/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1176/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"While the main idea of the paper is nice, the reviewers are not satisfied with the clarity of the material and the execution.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}", "{\"title\": \"Should be supported with stronger experiments and be more clearly presented\", \"review\": \"This paper presents an idea of adding fake nodes to attack a graph network model, by a GAN style trainning procedure.\\n\\nHowever I concern about the experimental parts, which are only evaluated on small settings. \\n\\nPlus, the notations are inconsistant, whereas the objective function in (3) has nothing to do with $X_{fake}$. I tend to believe that this should be a typo.\\n\\nThe greedy optimization should generally be highly costed, although it works well for learning sparse representation in previous literature, however, in the graph setting, I am not sure that this is a good fit for $O(|V|^2)$ variables. Perhaps the author need to argue why this is efficient, or to propose other methods.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting idea, but the improvement over existing work is unclear\", \"review\": \"The authors propose a new adversarial technique to add \\u201cfake\\u201d nodes to fool a GCN-based classifier. The basic approach relies on a greedy heuristic to add edge/node features, and the authors also present a GAN-based approach, which allows the model to add \\u201cfake\\u201d nodes that are not easily distinguishable from regular nodes. The primary motivation behind the idea of adding \\u201cfake\\u201d is that it is unrealistic to change the features/edges of existing nodes. Experimental results show that adding a large number (20% in most cases) of fake nodes can significantly degrade accuracy of a GCN, and results show that the GAN-based approach is somewhat effective at making the \\u201cfake\\u201d nodes less distinguishable. In terms of strengths, the GAN-based approach is well-motivated and it appears that the authors were thorough in their experiments on Cora/Citseer (e.g., with a number of ablation/sensitivity studies).\\n\\nHowever, while interesting, this paper has a number of areas where it could be substantially improved:\\n\\n1) With regards to the motivation: It is not clear what substantive technical novelty there is in the idea of \\u201cadding fake nodes\\u201d, compared to existing approaches that simply modify existing nodes in an adversarial way. Intuitively, the approach of Zugner et al can already handle this case of \\\"adding new nodes\\\". One just adds a set of nodes with random/null edges/features to the graph, treats this as their \\u201cattacker node\\u201d set and then runs Zugner et al's greedy algorithm. 
Some clarification on why this simple application of Zugner et al's approach does not work, and/or empirical results using their method as a baseline, would be useful. (Also, Zugner et al was published in KDD 2018, so the citation should be corrected). \\n\\n2) In Zugner et al, they derive approximations and algorithms that allow them to compute the score of adding/removing an edge in constant time. The greedy approach in this work appears quite expensive as every greedy update requires an expensive gradient computation. Some discussion of computational complexity would improve the paper. \\n\\n3) Results are only provided on two small datasets (presumably due to the large computational cost of the approach). These two very small datasets are not indicative of many real-world scenarios, and additional results on larger (and more diverse) datasets would greatly strengthen the paper. \\n\\n4) Adding 20% fake nodes seems like a prohibitively large number. Even 5% fake nodes is extremely large. It is unclear what real-world applications could admit such drastic numbers of fake nodes, and some comments on this would greatly strengthen the paper. \\n\\n5) The GAN method is interesting and well-motivated, but it is not clear if this method offers any utility beyond the \\u201cdistribution matching\\u201d approach of Zugner et al (Section 4.1 of their paper). A comparison between these methods is necessary to justify the utility of the proposed GAN-greedy approach.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice but straightforward idea to attack graph CNNs; paper not always well-written\", \"review\": \"The main idea of this paper is that a 'realistic' way to attack GCNs is by adding fake nodes. The authors go on to show that this is not just a realistic way of doing it but that it can be done in a straightforward way (both attacks to minimize classification accuracy and GAN-like attacks to make fake nodes look just like real ones).\\n\\nThe idea is neat and the experiments suggest that it works, but what comes later in the paper is mostly rather straightforward, so I doubt whether it is sufficient for ICLR. I write \\\"mostly\\\" because one crucial part is not straightforward but is, on the contrary, incomprehensible to me. In Eq (3) (and all later equations), shouldn't X' rather than X be inside the formula on the right? Otherwise it seems that the right hand side doesn't even depend on X' (or X_{fake}). \\nBut if I plug in X', then the dimensions for weight matrices W^0 and W^1 (which actually are never properly introduced in the paper!) don't match any more. So what happens? To calculate J you really need some extra components in W^0 and W^1. Admittedly I am not an expert here, but I figure that with a bit more explanation I should have been able to understand this. 
Now it remains quite unclear...and I can't accept the paper like this.\\n\\nRelatedly, it is then also unclear what exactly happens in the experiments: do you *retrain* the network/weights or do you re-use the weights you already had learned for the 'clean' graph?\", \"all_in_all\": \"\", \"pro\": [\"basic idea is neat\"], \"con\": \"- development is partially straightforward, partially incomprehensible.\\n\\n(I might increase my score if you can explain how eq (3) and later really work, but the point that things remain rather straightforward remains).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
H1gL-2A9Ym
Predict then Propagate: Graph Neural Networks meet Personalized PageRank
[ "Johannes Gasteiger", "Aleksandar Bojchevski", "Stephan Günnemann" ]
Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online.
[ "Graph", "GCN", "GNN", "Neural network", "Graph neural network", "Message passing neural network", "Semi-supervised classification", "Semi-supervised learning", "PageRank", "Personalized PageRank" ]
https://openreview.net/pdf?id=H1gL-2A9Ym
https://openreview.net/forum?id=H1gL-2A9Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hyx7QsljBE", "SkgQgtR9BN", "BkxZow25SE", "BkxZGnQNE4", "rke40msQE4", "HJxbLZhZx4", "rylzwtw3JV", "ryeRGUehk4", "ryl2um9KJV", "HkxoIiqdk4", "B1xn8s4o6m", "HJg43sI_6Q", "S1lvvM8OTX", "BkxnHzUdaQ", "HkeSQM8Oa7", "S1ga6ZIuT7", "SygzJkminQ", "S1e-m_U5nX", "Bkxy1bhPnX" ], "note_type": [ "comment", "official_comment", "comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1550678811252, 1550670058832, 1550661529425, 1549184008709, 1549149131863, 1544827208924, 1544481114217, 1544451606009, 1544295283631, 1544231762848, 1542306644138, 1542118316155, 1542115934862, 1542115907758, 1542115868611, 1542115781131, 1541250777964, 1541199896828, 1541026006880 ], "note_signatures": [ [ "~Benedek_Rozemberczki1" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "~Benedek_Rozemberczki1" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "~Benedek_Rozemberczki1" ], [ "ICLR.cc/2019/Conference/Paper1175/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1175/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/Authors" ], [ "ICLR.cc/2019/Conference/Paper1175/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1175/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1175/AnonReviewer3" ] ], "structured_content_str": [ "{\"comment\": \"Referenced it on the Pytorch github repo. Loved this paper.\", \"title\": \"Thank You!\"}", "{\"title\": \"Reference implementation published\", \"comment\": \"Thank you for your interest and effort in reimplementing our model in PyTorch!\\n\\nWe've just published a reference implementation at https://github.com/klicperajo/ppnp .\"}", "{\"comment\": \"I tried to reproduce the results and created an implementation.\", \"https\": \"//github.com/benedekrozemberczki/APPNP\", \"title\": \"Attempt to reproduce results.\"}", "{\"title\": \"Later this month\", \"comment\": \"We're planning to release source code for the model along with the camera ready version later this month.\"}", "{\"comment\": \"Is there publicly available code for the paper?\", \"title\": \"Code\"}", "{\"metareview\": \"There were several ambivalent reviews for this submission and one favorable one. Although this is a difficult case, I am recommending accepting the paper.\\n\\nThere were two main questions in my mind.\\n1. Did the authors justify that the limited neighborhood problem they try to fix with their method is a real problem and that they fixed it? If so, accept.\\n\\nHere I believe evidence has been presented, but the case remains undecided.\\n\\n2. If they have not, is the method/experiments sufficiently useful to be interesting anyway?\\n\\nThis question I would lean towards answering in the affirmative.\\n\\nI believe the paper as a whole is sufficiently interesting and executed sufficiently well to be accepted, although I was not convinced of the first point (1) above. 
One review voting to reject did not find the conceptual contribution very valuable but still thought the paper was not severely flawed. I am partly down-weighting the conceptual criticism they made. I am more concerned with experimental issues. However, I did not see sufficiently severe issues raised by the reviewers to justify rejection.\\n\\nUltimately, I could go either way on this case, but I think some members of the community will benefit from reading this work enough that it should be accepted.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"borderline paper\"}", "{\"title\": \"Re: thanks for your response\", \"comment\": \"Dear reviewer,\\n\\nThank you for clarifying your review and reconsidering and upgrading your score!\\n\\nWe would like to point out that Laplacian feature propagation is just that very basic PPR-based baseline you wanted to see -- it uses PPR-like feature propagation in combination with logistic regression.\\n\\nSince we both agree that LASAGNE falls into a different category of methods and that we use PPR in a very different way (to propagate information instead of sampling contexts for a skip-gram model), we are not quite sure what work you are referring to that reduces the novelty value of our method. Our model's simplicity might make it seem like a minor contribution but it also makes the model easy to implement, train, optimize, extend and scale. E.g. note that GNNs with many layers suffer from difficulties in gradient-based training, while our method (thanks to the decoupling of the propagation step) does not, making it more suitable to use in practice.\"}", "{\"title\": \"The issues of limited range and oversmoothing\", \"comment\": \"** Issue of Limited Range **\\n\\nEvidence that larger neighborhoods are beneficial is shown e.g. in Figures 4 and 5 of the paper. Figure 4 shows how the accuracy increases dramatically on Cora-ML and PubMed as we increase the number of propagation steps beyond 2. Figure 5 shows that the optimal \\u03b1 lies between 0.05 and 0.2. For these values, between 86% and 51% of the influence comes from neighborhoods using more than 2 propagation steps.\\n\\nFurthermore, larger neighborhoods are especially important in the sparsely labelled setting, as shown by Li, Han and Wu (AAAI 2018) and in Figure 3 of our paper. This figure shows that our method can handle small training sets best and outperforms GCN by 6 percentage points in this setting.\\n\\nXu et al. (ICML 2018) have also found the limited range to be an issue, especially for nodes in the periphery. Very little information will reach these nodes with only 2 hops and a higher range is therefore critical for classifying them.\\n\\n** Oversmoothing and attention-like mechanisms **\\n\\nAn attention-like mechanism for working with multiple different neighborhood sizes was already investigated in previous work by Xu et al. in the jumping knowledge networks (JK) model. However, for most experiments they still achieved best performance when using only 2-3 layers. In our own experiments we have found JK to perform best with only 3 layers and in the paper we show that our new model significantly outperforms it.\\n\\nIn earlier experiments we have also tested attention over different neighborhood sizes in combination with our model, but found that learning the attention weights is problematic and mostly overfits on the node itself. 
Please note that our personalized PageRank uses an *implicit* attention scheme on the different neighborhoods with weights \\u03b1(1-\\u03b1)^k (for the k-step neighborhood), which we have found to perform significantly better than any other weighting scheme we have tested. This implicit attention mechanism might be one reason why our model performs so well.\\n\\nWe have also experimented with increasing the number of layers in GAT (which uses attention for its node aggregation function), but were not able to successfully increase its number of layers beyond the original 2. \\n\\nFinally, different node aggregation functions were used by e.g. GraphSAGE, which also performs best when using no more than 2 layers and therefore shows the same problem of limited range.\"}", "{\"title\": \"thanks for your response\", \"comment\": \"Dear authors,\\n\\nI would like to thank you for the detailed response(s) to the review(s) you have received. I would also like to make a related comment: I agree with your comments overall, modulo any confusion there may have been. Your experimental setup was clear from the first time I read your nice paper; that is why I mentioned in one of my comments that \\\" While according to the authors\\u2019 categorization of the existing methods in the intro, LASAGNE falls under the \\u201crandom walk\\u201d family of methods\\\". Perhaps I should have made it clearer in my review that personally, as a reviewer, I would have liked to see some basic classification baseline that is related to PPR; that was my main point and why I made two possible suggestions.\\n\\nI have upgraded my score. I want to clarify that my non-acceptance score, as my review title summarized from early on, was not due to this baseline comparison fact (besides, you compared with other state-of-the-art related methods), but due to the fact that I personally found the contribution to be (on the one hand *interesting* but on the other hand) limited from a novelty perspective.\"}", "{\"title\": \"clarification\", \"comment\": \"I believe the reviewer here meant \\\"substantial and practically meaningful\\\" and not \\\"statistically significant.\\\"\\n\\nYour point about graph diameter is a good one. However, I am wondering if you can elaborate a bit on your argument in section 2 where you say:\\n\\n\\\"There are essentially two reasons why a message passing algorithm like GCN can\\u2019t be trivially expanded to use a larger neighborhood. First, aggregation by averaging causes oversmoothing if too many layers are used. It, therefore, loses its focus on the local neighborhood (Li et al., 2018). Second, most common aggregation schemes use learnable weight matrices in each layer. Therefore, using a larger neighborhood necessarily increases the depth and number of learnable parameters of the neural network (the second aspect can be circumvented by using weight sharing, which is typically not the case, though).\\\"\\n\\nIt seems fine to use weight sharing to deal with the second issue and I believe it isn't that uncommon. However, the oversmoothing issue could be a larger problem. Couldn't this be dealt with using attention-like mechanisms or different aggregation functions like max instead of sum (or intermediate functions)?\\n\\nAn average diameter of 10, the largest for datasets you explore, might not be enough to be problematic. Keeping in mind that I have not carefully read the paper, only skimmed it, can you succinctly summarize what evidence you have that limited range is an important issue in practice? 
I agree with the premise that it could be (because tying network depth or recurrent sequence length to neighborhood size is somewhat arbitrary), but I am wondering how best to demonstrate this is an issue and your approach is a successful solution on an important problem of practical interest.\"}", "{\"title\": \"Re: Reviewer2\", \"comment\": \"Thank you for your quick response!\\n\\nIf we understand you correctly, all your points above are referring to the study of larger graphs to ensure a large diameter (since, as mentioned in your first comment, a large diameter requires more propagation steps). Note, however, that the graph diameter usually shrinks with graph size (see e.g. Leskovec 2005). Thus, instead of studying even larger graphs one should analyze graphs with sufficiently large diameter. Indeed, the graphs we have already studied in our paper have an average diameter between 5 and 10 (see Table 1 of the revised version). Thus, a few GCN layers cannot cover the entire graph.\\n \\nOur experiments further show that denser graphs with a smaller diameter (e.g. Microsoft Academic) require a higher alpha (see Figure 5). Your discussion actually prompted us to adjust alpha on this dataset to better reflect the graph\\u2019s underlying characteristics (see Section 6 of the revised version).\\n\\nFurthermore, we are not sure what exactly you mean by \\u2018significant\\u2019 -- and why you have the impression that our results are not significant. In our paper and comments we use the term significant in the mathematical sense of statistical significance. The results clearly show that our method\\u2019s improvements are significant with a p-value of 0.05, as we have shown in our rigorous evaluation (for small and large graphs as well as graphs with different diameters).\"}", "{\"title\": \"Re: Re: Reviewer2\", \"comment\": \"Thanks for your reply!\", \"to_reiterate_my_questions\": \"1) The graph with ~10k nodes would be the limit for your exact algorithm, as the results are missing in Table 2. But since you have the approximation with power-iteration-like layers, it would be better if you could target large graphs. \\n\\n2) And I expect your algorithm would benefit more on large graphs. This is the case where the PageRank could be more effective in propagating information than parameterized message passing operators. So that's why it is important to do large-scale experiments to show the truly 'significant' gains. \\n\\n3) Here are several good large datasets you may want to take a look at: https://snap.stanford.edu/data/\"}", "{\"title\": \"Re: Reviewer3\", \"comment\": \"Thank you for your review and feedback!\\n\\nYou are right, nothing prevents the model from using the standard transition matrix. During model development, however, we have found that the added self-loops of the GCN-matrix are beneficial to performance. The symmetrical normalization actually doesn't make any difference in the limit k->infinity. However, we found this style of normalization to be beneficial for the finite-step approximation.\"}", "{\"title\": \"Re: Reviewer2\", \"comment\": \"Thank you for your review and feedback!\\n\\nThe connection to the GNN-framework is certainly interesting and we\\u2019ve added it in the revised version of the paper (in Section 3, after introducing APPNP). However, our main contribution is not the usage of fixed-point iterations for node classification, which have already been used e.g. in label propagation and belief propagation algorithms. 
Our contribution is the improvement of GCN-like models by solving the limited range problem through the development and thorough evaluation of an end-to-end trained model utilizing one specific fixed-point iteration.\\n\\nAs you correctly noticed, the exact model is not applicable to larger data -- this is exactly the reason why we have developed its approximation. The discussion can be found under \\\"efficiency analysis\\\" in Section 3. We have edited the experimental section to make this more clear. Furthermore, we would like to highlight that we have already performed an analysis on large graphs. As shown in Table 1, our experimental evaluation includes two graphs with 20k nodes, which follows the suggestion you gave (>10k nodes).\\n\\nPlease note that we have already compared our model to jumping knowledge networks (JK), which is similar to the GNN that uses proper gating/skip connections you suggested. As we show in the experimental section, we significantly outperform this model.\\n\\nYou state that we show \\\"some marginal gains\\\". However, we show that our results are significant. Previous methods have reported \\u201clarge\\u201d gains that actually were not statistically significant and vanish when thoroughly evaluated, as we show in the paper. We paid a lot of attention to performing a fair comparison and a rigorous statistical analysis of our results, which shows that we significantly outperform previous models. The different evaluation may make the improvements seem smaller. But in fact they are larger than those reported in previous, less careful evaluations. We have edited the section to further clarify this. Furthermore, we\\u2019ve included a reference to the work by Dai et al.\"}", "{\"title\": \"Re: Reviewer1\", \"comment\": \"Thank you for your review and feedback!\\n\\nWe want to clarify that the principle and task performed by LASAGNE are fundamentally different from ours. The LASAGNE method learns individual node embeddings in an unsupervised setting. Our goal is not to learn individual node embeddings but to learn a transformation from attributes to class labels in the semi-supervised setting, as graph convolutional network (GCN)-like models do. Moreover, LASAGNE only considers structural information. Generally, it has been shown that approaches that consider both structure and attributes outperform methods that only consider the structure (see e.g. Kipf Welling 2017). Therefore, we only compare with methods that consider both, but we added a reference to LASAGNE in the paper.\\n\\nWe feel that this confusion was due to a bad framing of our model. To make things clearer we have decided to rename the model and replace the term \\u201cembedding\\u201d with \\u201cprediction\\u201d in the revised version (see also our general comment).\\n\\nWe cannot run the proposed baseline, since as we clarified above we do not learn any personalized PageRank embeddings to begin with. However, we do already include a comparatively simple baseline, which is the bootstrapped Laplacian feature propagation. This method propagates features in a similar way as we do and then uses a one-vs-all classifier. We significantly outperform this baseline.\\n\\nIn the revised version of the paper we clarified that the datasets are similar in that they contain bag-of-words features and use scientific networks. However, these graphs have very different numbers of nodes, edges, features, and classes, and different topology, as shown in Table 1. 
The datasets you suggested from the LASAGNE paper are not suitable for the kind of semi-supervised classification we consider since they do not contain node attributes.\\n\\nThank you for suggesting the interesting experiment of varying neural network depth! The investigated datasets do not benefit from deeper networks. You can find the results in Figure 11 of the updated version of the paper.\"}", "{\"title\": \"Title and name change\", \"comment\": \"Dear reviewers, dear commenters,\\nWe feel that the term \\\"embedding\\\" that we used in our work (and paper\\u2019s title) might be a source of confusion, which is why we have decided to replace it with \\u201cprediction\\u201d and rename the model. We want to clarify that we do NOT learn individual node embeddings as done in node embedding methods. We propagate the predictions as part of the end-to-end trained model. Please keep in mind that we did NOT change any part of the model except for the name.\"}", "{\"title\": \"Interesting but limited contribution\", \"review\": \"The thrust behind this paper is that graph convolutional networks (GCNs) are constrained by construction\\nto focus on small neighborhoods around any given node. Large neighborhoods introduce in principle\\na large number of parameters (though, as the authors point out, weight sharing is an option to avoid this issue), \\nand, even worse, oversmoothing may occur. Specifically, Xu et al. (2018) showed that for a k-layer GCN one can \\nthink of the influence score of a node x on node y as the probability that a walker that starts at x, \\nlands on y after k steps of random walk (modulo some details). \\n\\nTherefore, as k increases the random walk reaches its stationary distribution, forgetting any local information that is useful, \\ne.g., for node classification. To avoid this problem, the authors propose the following: use personalized PageRank\\ninstead of the standard Markov chain of PageRank. In PPR there is a restart probability, which allows \\ntheir algorithm to avoid \\u201cforgetting\\u201d the local information around a walk, thus allowing for an arbitrary \\nnumber of steps in their random walk. The authors define two methods, PEP and PEPa, based on PPR. The latter \\nmethod is faster in practice since it approximates the PPR. \\n\\nA key advantage of the proposed method is the separation of the node embedding part from the propagation scheme. In this sense, \\nfollowing the categorization of existing methods into three categories, PEP is a hybrid of message passing algorithms\\nand random walk based node embeddings. The experimental evaluation tests certain basic properties of the proposed method. One interesting performance feature of \\nPEP and PEPa is that they can perform well using few training examples. This is valuable especially when obtaining labeled\\nexamples is expensive. Finally, the authors compare their proposed methods against state-of-the-art GCN-based methods. \\n\\nSome remarks follow. \\n\\n- The idea of using PPR for node embeddings has been suggested in recent prior work \\u201cLASAGNE: Locality and structure aware graph node embeddings\\u201d \\nby Faerman et al. While according to the authors\\u2019 categorization of the existing methods in the intro, LASAGNE \\nfalls under the \\u201crandom walk\\u201d family of methods, the authors should compare against it. \\n \\n- Continuing the previous point, even simpler baselines would be desirable. 
How inferior is, for instance, \\nan approach on one-vs-all classification using the approximate personalized PageRank node embedding and \\nsupport vector machines? \\n \\n- Also, the authors mention \\u201csince our datasets are somewhat similar\\u2026\\u201d. Please clarify with respect to \\nwhich aspects? Also, please use datasets that are different. For instance, see the LASAGNE paper for \\nmore datasets that have different numbers of classes. \\n\\n- In the experiments the authors use two layers for fair comparison. Given that one of the advantages of the \\nproposed method is the ability to have more layers without suffering from the GCN shortcomings \\nwith large neighborhood exploration, it would be interesting to see an experiment where the number of layers is a variable.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review on \\\"Personalized Embedding Propagation: Combining Neural Networks on Graphs with Personalized PageRank\\\"\", \"review\": \"This paper proposed a variant of graph neural networks, which added additional PageRank-like propagations (with constant aggregation weights), in addition to the normal message-passing-like propagation layers. Experiments on some benchmark transductive node classification tasks show some empirical gains.\\n\\nUsing more propagations with constant aggregation weights is an interesting idea to help propagate the information in a graph. However, this idea is not completely new. In the very first graph neural network [1], the propagation is done until convergence. If the operator in each layer is a contraction map, then according to the Banach Fixed Point theorem [2], a unique solution can be guaranteed. The constant operator used in this paper is thus a special case of this contraction map.\\n\\nAlso, the closed form solution in (3) is not practical. It may not be suitable for large graphs (e.g., graphs with >10k nodes). And that\\u2019s why this approach is not suitable for the Pubmed and Microsoft datasets. The PEP_A is more practical. However, in this case I\\u2019m curious how it would compare with a GNN having the same number of layers, but with proper gating/skip connections like ResNet. \\n\\nThe experiments show some marginal gains on the small graphs. However, I think it would be important to test on large graphs. Since small graphs typically have small diameters, several GNN layers would already cover the entire graph, and the additional propagation done by PageRank here might not be super helpful. \\n\\nFinally, I think the authors should properly cite another relevant paper [3], which uses fixed point iteration to help propagate the local information. \\n\\n[1] Scarselli et al., \\u201cThe Graph Neural Network Model\\u201d, IEEE Transactions on Neural Networks, 2009\\n[2] Mohamed A. Khamsi, An Introduction to Metric Spaces and Fixed Point Theory\\n[3] Dai et al., Learning Steady-States of Iterative Algorithms over Graphs, ICML 2018\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Idea is interesting; experiments are convincing\", \"review\": \"This paper proposes a GCN variant that addresses a limitation of the original model, where embeddings are propagated over only a few hops. 
The architectural difference may be explained as follows: GCN interleaves the individual node feature transformation and the single-hop propagation, whereas the proposed architecture first transforms the node features, followed by a propagation with an (in)finite number of hops. The propagation in the proposed method follows personalized PageRank, where in addition to following direct links, there is a nonzero probability of jumping to a target node.\\n\\nI find the idea interesting. The experiments are comprehensive, covering important points including data split, training set size, number of hops, teleport probability, and an ablation study. Two interesting take-home messages are that (1) GCN-like propagation without teleportation leads to degrading performance as the number of hops increases, whereas propagation with teleportation leads to converging performance; and (2) the best-performing teleport probability generally falls within a narrow range.\", \"question\": \"The current propagation approach uses the normalized adjacency matrix proposed by GCN, which is, strictly speaking, not the transition matrix used by PageRank. What prevents the model from using the transition matrix? Note that this matrix naturally handles directed graphs.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ryl8-3AcFX
Environment Probing Interaction Policies
[ "Wenxuan Zhou", "Lerrel Pinto", "Abhinav Gupta" ]
A key challenge in reinforcement learning (RL) is environment generalization: a policy trained to solve a task in one environment often fails to solve the same task in a slightly different test environment. A common approach to improve inter-environment transfer is to learn policies that are invariant to the distribution of testing environments. However, we argue that instead of being invariant, the policy should identify the specific nuances of an environment and exploit them to achieve better performance. In this work, we propose the “Environment-Probing” Interaction (EPI) policy, a policy that probes a new environment to extract an implicit understanding of that environment’s behavior. Once this environment-specific information is obtained, it is used as an additional input to a task-specific policy that can now perform environment-conditioned actions to solve a task. To learn these EPI-policies, we present a reward function based on transition predictability. Specifically, a higher reward is given if the trajectory generated by the EPI-policy can be used to better predict transitions. We experimentally show that EPI-conditioned task-specific policies significantly outperform commonly used policy generalization methods on novel testing environments.
[ "Reinforcement Learning" ]
https://openreview.net/pdf?id=ryl8-3AcFX
https://openreview.net/forum?id=ryl8-3AcFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HklRzRxmeV", "r1lti-b707", "Hye8juQs6m", "Ske9HN1b6X", "rklS-V1Wp7", "SklMCmJ-aX", "SkluH4n6nX", "S1xdZ3l5nX", "HkevaWpO27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544912405924, 1542816161457, 1542301854113, 1541628994480, 1541628924590, 1541628874167, 1541420095808, 1541176319612, 1541095870854 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1174/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1174/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1174/Authors" ], [ "ICLR.cc/2019/Conference/Paper1174/Authors" ], [ "ICLR.cc/2019/Conference/Paper1174/Authors" ], [ "ICLR.cc/2019/Conference/Paper1174/Authors" ], [ "ICLR.cc/2019/Conference/Paper1174/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1174/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1174/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes an approach for probing an environment to quickly identify the dynamics. The problem is relevant to the ICLR community. The paper is well-written, and provides a detailed empirical evaluation. The main weakness of the paper is the somewhat small originality over prior methods on online system identification. Despite this, the reviewer's agreed that the paper exceeds the bar for publication at ICLR. Hence, I recommend accept.\\n\\nBeyond the related work mentioned by the reviewers, the approach is similar to work in meta-learning. Meta-RL and multi-task learning has typically been considered in settings where the reward is changing (e.g. see [1],[2],[3],[4], where [4] also uses an embedding-based approach). However, there is some more recent work on meta-RL across varying dynamics, e.g. see [5],[6]. The authors are encouraged to make a conceptual connection between this approach and the line of work in model-based meta-RL (particularly [5] and [6]) in the final version of the paper.\\n\\n[1] Duan et al. https://arxiv.org/abs/1611.02779\\n[2] Wang et al. CogSci '17 https://arxiv.org/abs/1611.05763\\n[3] Finn et al. ICML '17 https://arxiv.org/abs/1703.03400\\n[4] Hausman et al. ICLR '17: https://openreview.net/forum?id=rk07ZXZRb\\n[5] S\\u00e6mundsson et al. https://arxiv.org/abs/1803.07551\\n[6] Nagabandi et al. https://arxiv.org/abs/1803.11347\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the feedback. The changes have rendered the contributions more clear and in particular the ablation studies are appreciated with respect to clarification of individual contributions of framework aspects. The previous rating stays appropriate due to similarity to existing approaches and ablated performance results.\"}", "{\"title\": \"New version\", \"comment\": [\"We thank all the reviewers for reviewing our paper. We have updated the paper with the following changes according to the suggestions:\", \"Added a reference in the introduction section (R1)\", \"Added a reference in the related work section (R3)\", \"Highlighted the difference of our reward comparing to the curiosity reward in the approach section 4.1.2 (R3)\", \"Additional experiment results for ablation studies in Table 1 and additional discussion in section 5.4. 
(R1, R3)\", \"Provided more details of the simulation environments in Appendix A\", \"Provided more implementation details of the baselines in Appendix D (R3)\", \"Minor changes to fit in 8 pages\"]}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your detailed review and for finding our work interesting. We will focus our response on the novelty of our paper and clarify some key contributions.\\n\\nNovelty in reward formulation compared to intrinsic motivation:\\nFor an intrinsic motivation/curiosity reward, the reward would be the error in prediction. This would encourage the policy to explore unexplored regions of state space. However, in our work, the reward is the difference in prediction error of two models. The first model predicts directly while the second model is conditioned on the EPI-trajectory. This is an important distinction which causes our EPI-policy to extract information to improve prediction rather than to explore. Using errors in prediction modelling as a surrogate for environment information gathering is, to the best of our knowledge, novel to our work. We attempted to highlight this difference in the last paragraph of our \\u2018Related Work\\u2019 section. However, we will further highlight this difference in the approach section.\\n\\nNovelty compared to UP-OSI (Yu et al. 2017):\\nUP-OSI is an approach for explicit system identification. It takes in trajectories optimized for a specific task and tries to predict the environment parameters directly. But those trajectories for solving a task may not be optimal for disentangling the effects of different environment parameters (Lowrey et al. 2018). Hence this could make explicit prediction impractical for a large number of entangled environment parameters. \\nInstead, our EPI policy is optimized to extract the underlying parameters represented via an embedding. This also reduces the burden of exactly disentangling the environment parameters, while providing sufficient information to learn a task. Furthermore, our results demonstrate that having a separate policy to extract the embedding is a better strategy for obtaining higher rewards.\\n\\nAblation studies:\\nAs per your suggestion, we have run the ablation experiment without using any training tricks (i.e. \\u201cNo Vine Data\\u201d + \\u201cNo Regularization\\u201d). Hopper achieves 1237 pts, which is 66 pts worse than using all the tricks. However, it still beats all baselines. For the Striker, we get a 0.324m final distance, which is 0.162m worse than using all the tricks. Although we still beat 5/7 baselines, this is a significant loss in performance and highlights the importance of using both the Vine data and regularization. The ablation results of the individual training tricks can be seen in Section 5.4.\\n\\nDescription of baselines:\\nFor baselines, we will add more information in the Appendix. The MLP policies, both the baselines and ours, have two hidden layers with 32 hidden units each and relu activations. This ensures that the capacities of the networks are the same. The Recurrent baseline uses an LSTM with 32 hidden units from rllab. All of these network sizes are defaults from the rllab toolbox. For the UP-OSI baseline, we followed the original paper and ran 5 iterations for training OSI.\\n\\nRelated work:\\nWe thank the reviewer for this reference and we will include it in the Related Work section. \\n\\nThank you again for reviewing our paper. We would be happy to provide any further clarifications. 
We will upload an updated version of the paper with the appendix and more details in a few days.\", \"references\": \"Lowrey, Kendall, et al. \\\"Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system.\\\" Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2018 IEEE International Conference on. IEEE, 2018.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your review and for finding our paper well-written and clear. We are happy to address any other concerns or questions about our work.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your review and for finding our work interesting. We will add appropriate references in the introduction. For example, Braun et al. show that humans indeed adapt online within a single trial to perform a task in unpredictable environments rather than using a fixed policy. We will add similar references to the paper.\\n\\nWe are also working on expanding the discussion to give more details on the results. We will upload a new version in a few days. If you have any specific suggestions or additional questions, we will be happy to address them.\", \"reference\": \"Daniel A Braun, Ad Aertsen, Daniel M Wolpert, and Carsten Mehring. Learning optimal adaptation strategies in unpredictable motor tasks. Journal of Neuroscience, 29(20), pp.6472-6478, 2009.\"}", "{\"title\": \"great paradigm, but paper could be more efficient\", \"review\": \"Some argumentation might better be supported by a reference, such as:\\n\\n\\\"When humans are tasked to perform in a new environment, we do not explicitly know what param-\\neters affect performance. Instead, we probe the environment to gain an intuitive understanding of\\nits behavior (Fig. 1). The purpose of these initial interactions is not to complete the task imme-\\ndiately, but to extract information about the environment. This process facilitates learning in that\\nenvironment. Inspired by this observation,\\n\\\"\\n\\nThe overall idea is interesting; the implementation is correct, via TRANSITION PREDICTION MODELS.\\n\\nMore space could be given to more detailed results; use the appendix to swap out some text.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting work about environment generalization for reinforcement learning\", \"review\": \"This paper proposes an \\u201cEnvironment-Probing\\u201d Interaction (EPI) policy used as an additional input for reinforcement learning (RL). This EPI allows extracting environment representations and implicitly understanding the environment in order to improve generalization on novel testing environments.\", \"pros\": \"This paper is well written and clear, and the contribution is relevant to ICLR. Although I am not familiar with RL, the contribution seems novel and the model performances are compared with strong and appropriate baselines.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Learning policies for probing environment parameters to accelerate task learning. 
Interesting though limited novelty.\", \"review\": \"The submission presents a reinforcement learning method for exploring/probing the environment to determine an environment\\u2019s properties and exploit these during later tasks. The method relies on jointly learning an embedding that simplifies prediction of future states and a policy that maximises a curiosity/intrinsic-motivation-like reward to learn to explore areas where the prediction model underperforms. In particular, the reward is based on the difference between prediction based on the learned embedding and prediction based on a prior collected dataset, such that the reward encourages collecting data with a large difference between the prediction accuracies of the two models. The subsequently frozen policy and embedding are then used in other domains in a system-identification-like manner, with the embedding utilised as input for a final task policy. The method is evaluated on a striker and a hopper environment with varying dynamics parameters and shown to outperform a broad set of baselines.\\n\\nIn particular, the broad set of baselines and the small ablation study performed on the proposed method are quite interesting and beneficial for understanding the approach. However, the ablation study could be more detailed with respect to the additional training variations (Section 4.1.3; e.g. without all training tricks). Additionally, information about the baselines should be extended in the appendix since, e.g., different capacities alone could have an impact when the performances of different algorithms are comparably similar. In particular, additional information about the training procedure for the UP-OSI (Yu et al. 2017) baseline is required, as the original approach relies on iterative training and it is unclear if the baseline implementation follows the original implementation (similar to Section 4.1.3.). \\n\\nOverall the submission provides an interesting new direction on learning system identification approaches that, while quite similar to existing work (Yu et al. 2017), provides increased performance on two benchmark tasks. The contribution of the paper lies in its detailed evaluation and, overall, in beneficial details of the proposed method. The novelty of the submission is, however, limited, as the method is highly similar to current approaches.\", \"minor_issues\": \"- Related work on learning system identification:\\nLearning to Perform Physics Experiments via Deep Reinforcement Learning\\nMisha Denil, Pulkit Agrawal, Tejas D Kulkarni, Tom Erez, Peter Battaglia, Nando de Freitas\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
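The reward structure described in the review above and in the authors' response to AnonReviewer3 — the gap between the errors of an unconditioned transition model and of one conditioned on an embedding of the probing trajectory — can be sketched as below. All module and variable names are hypothetical; this is an illustration of the stated mechanism, not the authors' implementation.

    import torch

    def epi_reward(plain_model, epi_model, embed, epi_traj, s, a, s_next):
        """Reward for the EPI policy: how much conditioning on an embedding
        of the probing trajectory improves next-state prediction.  Note it
        is a difference of errors, not a raw error as in curiosity rewards."""
        with torch.no_grad():
            z = embed(epi_traj)                               # environment embedding
            err_plain = ((plain_model(s, a) - s_next) ** 2).mean()
            err_cond = ((epi_model(s, a, z) - s_next) ** 2).mean()
        return (err_plain - err_cond).item()                  # > 0 iff embedding helps

A curiosity-style reward would return err_plain itself and push the policy toward poorly modelled states; returning the difference instead rewards trajectories that are informative about the environment's dynamics, which is the distinction the authors emphasize.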
S1x8WnA5Ym
Learning Diverse Generations using Determinantal Point Processes
[ "Mohamed Elfeki", "Camille Couprie", "Mohamed Elhoseiny" ]
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic-looking images. A fundamental characteristic of generative models is their ability to produce multi-modal outputs. However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution. In this paper, we draw inspiration from the Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher quality samples. DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity. We use the DPP kernel to model the diversity in real data as well as in synthetic data. Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data. In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme. Embedded in adversarial training and in a variational autoencoder, our Generative DPP approach shows a consistent resistance to mode collapse on a wide variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality. Our code will be made publicly available.
[ "Generative Adversarial Networks" ]
https://openreview.net/pdf?id=S1x8WnA5Ym
https://openreview.net/forum?id=S1x8WnA5Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJeWmZCRJV", "rJgZqXnAyE", "S1xHtzNJyE", "HJgrdf4yy4", "S1laIMVJJN", "Hklz3wLUAQ", "BJx4OI8IAX", "SkgACBULRm", "rkgO-i9H6Q", "Skx_mWw7TX", "S1lVMpOypX", "SyxGMY50h7" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1544638745062, 1544631176740, 1543615100972, 1543615084953, 1543615061119, 1543034793729, 1543034475828, 1543034325741, 1541937920401, 1541792031690, 1541537035719, 1541478666336 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1173/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1173/Authors" ], [ "ICLR.cc/2019/Conference/Paper1173/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"followup\", \"comment\": \"We wish all the reviewers happy holidays and we understand that this is a busy time. Motivated by the ICLR spirit to interact more during the review process, we aim to interact more with the reviewers. We worked hard to address the comments and make the paper stronger based on the reviewer's feedback that we really appreciate. Thank so much for the reviewers and ICLR organizers for the great efforts to make it a great experience both in the process and the venue that gathers many great people.\"}", "{\"metareview\": \"The paper proposes GAN regularized by Determinantal Point Process to learn diverse data samples.\\n\\nThe reviewers and AC commonly note the critical limitation of novelty of this paper. The authors pointed out\\n\\n\\\"To the best of our knowledge, we are the first to introduce modeling data diversity using a Point process kernel that we embed within a generative model. \\\"\\n\\nAC does not think this is convincing enough to meet the high standard of ICLR.\\n\\nAC decided the paper might not be ready to publish in the current form.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty\"}", "{\"title\": \"follow-up\", \"comment\": \"May we ask the reviewer, if we were able to address the main concerns that the reviewer had through our rebuttal and revision of the paper? Are there any further issues that the reviewer wants us to address? If so, we would appreciate your feedback and further discussion.\"}", "{\"title\": \"follow-up\", \"comment\": \"May we ask the reviewer, if we were able to address the main concerns that the reviewer had through our rebuttal and revision of the paper? Are there any further issues that the reviewer wants us to address? If so, we would appreciate your feedback and further discussion.\"}", "{\"title\": \"follow-up\", \"comment\": \"May we ask the reviewer, if we were able to address the main concerns that the reviewer had through our rebuttal and revision of the paper? Are there any further issues that the reviewer wants us to address? 
If so, we would appreciate your feedback and further discussion.\"}", "{\"title\": \"Addressing your comments\", \"comment\": \"We would like to thank you for the insightful comments, and we address them as follows:\\n\\n[Related Work] Thank you, we added the mentioned references.\\n\\n[DPP Motivation] Since this is a common comment from reviewers we posted an official reply that clarifies our motivation. Please refer to it for further clarification.\\n\\n[Differentiability of Our Loss] Since L_S and L_D are symmetric real matrices, our regularizer is differentiable. Please refer to \\\"On differentiating Eigenvalues and Eigenvectors by Magnus (1985) - Section 3\\\" for a proof of the differentiability of the eigendecomposition obtained for symmetric real matrices. In practice, we used the built-in function \\\"self_adjoint_eig\\\" in the TensorFlow implementation and the \\\"symeig\\\" operator in PyTorch, and both of their implementations are differentiable.\\n\\n[Experimental Setting] We use the same experimental setting as Unrolled-GAN and WGAN-GP. Both approaches target stabilizing generative training and alleviating mode collapse. We selected this setting because WGAN-GP is the current state-of-the-art stabilization method for adversarial training and is highly cited (734 citations). Also, its implementation is open-source, which guarantees a fair comparison. \\nWe also included the results of applying our method to the VEEGAN experimental setting in Table 5 in Appendix C. Our method continues to outperform all baselines for both experimental settings.\\nFinally, we added the evaluation of our method using the NDB/K metric proposed by \\\"On GANs and GMMs\\\" in Table 7 in Appendix C, as suggested.\"}", "{\"title\": \"Addressing the comments\", \"comment\": \"Thanks for your constructive, thorough comments; we address each in detail.\\n\\n[DPP Motivation] We responded to this in a separate post. \\n\\n[Analysis of Eigendecomposition] If an eigenvalue is zero, then it will zero out its corresponding eigenvector. This is because eigenvectors are weighted by their corresponding real eigenvalues, as illustrated in the second term of Eq. 5.\\n\\n[Eigendecomposition time-efficiency] The eigendecomposition of an n x n matrix requires O(n^3 + n^2 log^2(n) log(b)) runtime within a relative error bound of 2^-b, as shown in \\\"The Complexity of the Matrix Eigenproblem\\\", STOC, 1999. In our loss, we perform two eigendecompositions: L_S_B and L_D_B, corresponding to the fake and true kernels respectively. Therefore, the runtime of our loss is O(n^3), where n is the batch size. \\nNormally the batch size does not exceed 1024 for most training paradigms because of memory constraints. In our case, it is 512 for synthetic data and 64 or 16 for real data. Hence, the eigendecomposition does not account for a significant delay in the method. \\nTo further verify this claim, we measured the fraction of each iteration's time taken by the eigendecompositions. We obtained 11.61% for synthetic data, 9.36% for Stacked-MNIST data and 8.27% for CIFAR-10. We also show the average iteration running time of all baselines in Appendix C, Table 5. Our method is the closest to the standard DCGAN running time, and faster than the rest of the baselines by a large margin.\\n\\n[Related Work] We included a comparison with DeLiGAN, and our method continues to outperform the rest of the baselines. Regarding comparison with methods that explicitly tackle the mode collapse problem: 
We note that we are already comparing with Unrolled-GAN, VEEGAN, ALI, and RegGAN, all of which were originally introduced to solve the mode collapse problem.\\n\\n[Novelty] To the best of our knowledge, we are the first to introduce modeling data diversity using a point process kernel that we embed within a generative model. Furthermore, we show the effectiveness of our approach using two common generative models: VAEs and GANs. We assess the performance of our method on a battery of synthetic data as well as small-scale and large-scale real images. We evaluated our method using different metrics and various experimental settings to ensure robustness to hyperparameter effects.\\n\\n[Minor Issues] Thank you, we addressed the mentioned points. Cos(v, w) is the cosine similarity between eigenvectors of real data and eigenvectors of fake data.\"}", "{\"title\": \"DPP Motivation and Additional Experiments\", \"comment\": \"We thank all the reviewers for their valuable feedback. In this post, we cover questions related to the motivation and additional experiments we performed to address the reviewers\\u2019 concerns. Additionally, we have improved Section 4 and replaced Figure 1 in the paper to better clarify the mentioned points. The rest of this post is organized as follows.\\n\\n(a) Motivation/Idea\\n(b) From Motivation to Loss\\n(c) Additional Experiments\\n\\n[a: Motivation/Idea] \\nDPP is an elegant probabilistic model featuring a diverse sampling characteristic. Although sampling from DPPs is computationally inefficient, evaluating the probability of a produced sample under a DPP is relatively much faster. We rely on this observation to teach our generator G to generate diverse examples. The generator G produces a sample of size B (the batch size), whose diversity we can encourage to improve by backpropagating the DPP diversity metric through the generator parameters. The DPP metric is maximized by producing a batch of orthogonal vectors, which might lead the generator to produce unrealistic images. This is not an issue in the conventional use of DPPs in subset selection problems such as video summarization, since the selected frames are guaranteed to be realistic. In our generation context, in order to keep the generations on the real image manifold, we instead match the diversity of the real images to that of the fake images by ensuring the closeness of the real eigenvalues to the fake eigenvalues. We also encourage the realism of the structure by matching the real/fake eigenvectors weighted by their corresponding real eigenvalues (their importance), which we found vital in our ablation for the same purpose (see Table 2). \\n\\n[b: From Motivation to Loss] \\nAs illustrated in Section 3, DPP involves creating a positive semidefinite kernel (L_S) which captures the pairwise similarity between the items of a subset S. The determinant of the kernel L_S was shown to correlate with the diversity of subset S (Eq. 1). Therefore, there is a direct correlation between the kernel L_S and the diversity within subset S.\\nNonetheless, we are not aiming to merely increase the diversity within subset S (i.e., min det(L_S)). Adding this term to the adversarial loss as a regularizer would be equivalent to synthesizing a repulsion model that drives all generated samples apart from each other toward maximum diversity, as shown in the ablation study (Table 2).\\nInstead, we are using the kernel L_S to model the diversity within two sets: real data and fake data. 
Then, we encourage the generator to synthesize fake data that has similar diversity to that of the real data. To simplify learning the matrix L_D_B, we choose to learn the major characteristics of the kernel that capture its structure: its eigenvalues and eigenvectors (Eq. 5).\\n\\n\\n[c: Additional Experiments] \\nAdditionally, we added the following experiments to show the effectiveness of our approach:\\n1) [Reviewer 3] We added an additional ablation with a regularizer (min det(L_S)) on the adversarial loss in Table 2, showing that all the components of our loss are important to achieve the best performance. \\n2) [Reviewer 1] We compared with DeLiGAN in Table 3 (~250 modes fewer than ours, 1.0 lower than ours on the Inception score metric).\\n3) [Reviewer 1] We computed the average iteration time of all baselines in Table 3, and we report them in Table 5, Appendix C. Evidently, GDPP-GAN has a running time indistinguishable from DCGAN, and these are the fastest models to train. \\n4) [Reviewer 3] We repeated the experiments of Table 3, using the more challenging experimental setting of VEEGAN. We report the results in Table 5, Appendix C. In both settings, our method consistently outperforms the other baselines as evaluated on CIFAR-10 and Stacked-MNIST. \\n5) [Reviewer 3] We evaluated GDPP-GAN using the NDB/K evaluation metric in Table 7, Appendix C. \\n6) [Reviewer 2-CelebA] We demonstrate that our loss scales to deeper networks and more complex datasets by integrating our GDPP loss within a state-of-the-art model, Progressive Growing GAN, and applying it to a large-scale real-image dataset, CelebA. Quantitatively, the addition of our loss shows consistent improvement of the Sliced Wasserstein Distance and, qualitatively, fewer artifacts in the generations; refer to Figure 11 and Table 4.\\n7) [GDPP on VAE and GAN] We embedded the GDPP loss in a VAE and showed that our loss is invariant to the generation approach. In both the GAN and the VAE, our loss is shown to significantly improve the performance of the original generative approach. In fact, applying our loss to the VAE on the Stacked-MNIST dataset almost doubles the number of captured modes (623 vs. 341) and cuts the KL-divergence roughly in half (1.3 vs. 2.4).\\n\\nIf you have any further comments or questions, please notify us. \\nThank you!!\"}", "{\"title\": \"The authors\\u2019 motivation from DPP is arguable\", \"review\": [\"This paper proposes generative adversarial networks regularized by a Determinantal Point Process (DPP) to learn a diverse data space. DPP is a probabilistic model that encourages diversity within a dataset. The authors observe that previous generative models have a mode-collapse problem, and they add a generative DPP (GDPP) loss (eq (5)) as a diversity regularizer. Experiments show the GDPP loss is practically helpful for learning under synthetic multi-modal data and real-world image generation.\", \"The paper is well written, and the motivations and main contributions are easy to comprehend. And the experimental results seem to be interesting. However, there are some arguable issues:\", \"The main contribution is adding the GDPP loss to the original generative models. The authors claim that the GDPP loss (eq (5)) is motivated by the DPP, but I think it does not utilize DPP characteristics at all. The proposed loss is closer to eigenvalue/eigenvector matching than to a DPP. It does not seem to capture DPP properties even assuming the training is perfect. 
In particular, DPP measures the similarity as the volume of the spanned space, while the GDPP loss uses the cosine similarity.\", \"The GDPP loss is a function of eigenvalues/eigenvectors of kernels, which are generated from the internal features of the discriminator. I am curious how to compute the eigenvalues/eigenvectors. Also, the gradient of functions of eigenvalues/eigenvectors is not straightforward, as it takes at least cubic time complexity in the dimension. It is better to clarify the time complexity for computing the loss and its gradients.\", \"In addition, if the feature kernel is not full rank, it is rank-deficient, i.e., some eigenvalues can be zero. Do you compute the loss over all eigenvalues, or only over some eigenvectors?\", \"In section 5, the analysis of time-efficiency is not sufficient. The authors report the performance varying the number of iterations. However, since the loss computes eigenvectors/eigenvalues, the cost per iteration should be larger than that of other competitors. It is natural to compare the elapsed time or the number of FLOPs.\", \"Although the proposed method shows the best results for the experiments, it is desirable to compare to more diversity-encouraging generative models, e.g., DeLiGAN [1]. In addition, I could not recognize the effectiveness of the proposed method in the experiments on image datasets.\", \"Overall, I think the proposed idea is interesting, but the authors\\u2019 motivation from DPP is arguable. In addition, I do not find enough novelty.\"], \"minor_issues\": [\"What is cos(v,w)? Please specify the definition of this.\", \"Where is Fig. 2k? Please add the sub-index in Figure 2.\", \"[1] Gurumurthy, Swaminathan, Ravi Kiran Sarvadevabhatla, and R. Venkatesh Babu. \\u201cDeLiGAN: Generative Adversarial Networks for Diverse and Limited Data.\\u201d CVPR. 2017\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The connection between the proposed regularizer and the DPP is not precise.\", \"review\": \"For training GANs, the authors propose a regularizer that is inspired by DPP, which encourages diversity. This new regularizer is tested on several benchmark datasets and compared against other mode-collapse mitigating approaches.\\n\\n \\nIn section 2, important references to recent techniques to mitigate mode collapse are missing, e.g.\\nBourGAN (https://arxiv.org/abs/1805.07674)\\nPacGAN (https://arxiv.org/abs/1712.04086)\\nD2GAN (https://arxiv.org/abs/1709.03831)\\n\\nAlso related is evaluation of mode collapse as in \\nOn GANs and GMMs (https://arxiv.org/abs/1805.12462) \\n\\nThe actual loss proposed in (5) and (6) seems far from the motivation explained in Eq (3), which treats the generator as a point process that resembles a DPP. This conceptual gap makes the proposed explanation w.r.t. the DPP unsatisfactory. A more natural approach would be to simply add $det(L_{S_B})$ itself as a regularizer. An extensive experimental comparison with this straightforward regularizer is in order. \\n\\nIt is not immediately clear whether the proposed diversity regularizer $L_g^{DPP}$ in (5) is differentiable in general, as it involves computing the eigenvectors. Elaborate on the implementation of the gradient update with respect to this new regularizer.\", \"experiments\": \"1. The results in Table 3 for stacked-MNIST are very different from the VEEGAN paper's. Explain why a setting different from the VEEGAN experiments was used. \\n\\n2. 
Similar experiments have been done in the Unrolled-GAN paper. Add the experiments from that setting also. \\n\\n3. In general, split the experiments into two parts: one where the same setting as in the literature is used (e.g. VEEGAN, Unrolled GAN) and the results are compared against those reported in those papers, and another where new settings are studied and the experiments of the baseline methods are also run by the authors. This is critical to differentiate such cases, as the hyperparameters of competing algorithms might not have been tuned as rigorously as those of the proposed method. This improves the credibility of the experimental results, eventually leading to reproducibility.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Addressing the comments and describing our additional experiments\", \"comment\": \"Thanks a lot for your constructive comments. We reply to each of the aforementioned points separately. Additionally, we updated our manuscript with new results from applying GDPP to a Variational Autoencoder and to progressive-growing GANs on the CelebA dataset.\\n\\n[Eigenvalue and eigenvector computation] Correct, we perform an eigendecomposition to obtain the eigenvalues and eigenvectors.\\nIn general, there are two factors that contribute to a potential overhead for the eigendecomposition: the batch size and the dimensionality of the extracted features. The larger the batch size or feature dimensionality, the larger the kernel to be decomposed, which constitutes a computational overhead. However, if the data is complex and large, a large batch size will not fit in memory, but a larger feature representation will be needed. And if the data is simple and small, larger batch sizes will fit in memory and a smaller feature representation will suffice. Therefore, the computational overhead should be similar across variable types of data.\\nIn practice, we computed the average ratio of the GDPP computation to the total iteration time. We obtained 11.61% for synthetic data, 9.36% for Stacked-MNIST data and 8.27% for CIFAR-10.\\n\\n[Interpreting GDPP loss (Eq. 6)] Intuitively, for a positive semidefinite matrix, eigenvectors represent the orientations of distortion within the data, and eigenvalues represent the magnitude of distortion along each eigenvector. That is why we have an L2 loss that favors matching the magnitudes of the eigenvalues (magnitude loss), and a cosine-similarity loss that aims to match the orientations of the eigenvectors (structure loss).\\n\\n[Batch size effect] Increasing the batch size is subject to memory constraints, but theoretically, it should improve the performance up to a limit. In synthetic data, we used a batch size of 512. For Stacked-MNIST and CIFAR-10, we used a batch size of 64. In CelebA, we used a batch size of 16, and plan to present results with a larger batch size.\\n\\n[Quality of the discriminator features] In order for the adversarial network to distinguish real from fake, it learns to extract discriminative features of each data sample. When training with the GDPP loss, we use those features to compute it. Specifically, we compute the eigenvectors and eigenvalues of the DPP kernel constructed from those features. 
However, we backpropagate this loss only to the generator, not to the discriminator.\\n\\n[Quality on CIFAR10] It is true that DCGANs reach an Inception score greater than 6 in a conditioned setting, whereas we applied GDPP to the more challenging setup of unsupervised adversarial training. Using the same architecture as WGAN-GP (Gulrajani et al., 2017), with only half the number of training iterations and the standard adversarial training paradigm, our method generates qualitative results similar to the unsupervised WGAN-GP, with a higher Inception Score and a lower Inference-via-Optimization. We also note that the Inception score values greatly depend on the architecture used: using deeper networks would probably improve performance, but the ranking should stay the same. \\n\\n[Diversity in GANs] Unrolled-GAN, VEEGAN, and ALI are methods that were explicitly introduced to address mode collapse. We presented these methods in the related work section and compared against them in the experiments section.\\n\\n[More datasets & VAE] We now present additional results with the state-of-the-art progressive growing architecture on CelebA, demonstrating that our loss scales across deeper networks and more complex datasets. We also show that it performs similarly in a Variational Autoencoder, showing GDPP to be a model- and architecture-invariant loss.\"}", "{\"title\": \"This paper integrates DPP with GAN and promotes diversity in learning generator distribution\", \"review\": [\"The paper proposes to introduce DPP into the vanilla GAN loss and uses it as a way to regularize the generator to produce more diverse outputs, in order to combat the mode-collapse problem. Since the proposed method is added as a simple loss regularizer, the approach does not introduce additional parameters and, therefore, fewer training difficulties. The results on synthetic data seem promising, but insufficient evaluation is performed on real and larger datasets where mode collapse is more likely to happen.\", \"Method\", \"The proposed methods seem sensible. But there are some critical details missing from the current text that prevent me from assessing this paper clearly.\", \"How are the \\\\lambda and v in Eq.6 calculated? It seems to me that you need to estimate the eigenvalues and eigenvectors at every iteration of your training. I am aware that many DPP-based models suffer from scalability issues. Could you discuss the potential overhead of this procedure? Also, in the experiments you claim \\\"DPP-GAN is cost-free, we observe that the training of GDPP-GAN has an indistinguishable running time than the DCGAN, which are the fastest models to finish training\\\", which is hard to believe. Could you give more details and analysis of the overhead here?\", \"In Eq.6, why are there both \\\"a diversity magnitude loss L_m\\\" and a \\\"diversity structure loss L_s\\\"? What do they each specifically try to capture? Could you give a geometric interpretation of this part?\", \"What is the batch size used in your experiments on MNIST and CIFAR-10? It seems to me that the effectiveness of GDPP would rely on the batch size used since, per my understanding, you estimate the DPP kernel using the current batch of samples (generated or real).\", \"Despite the fact that GDPP aims to avoid introducing parameters, it is not very intuitive how using the features output by D as the DPP features would work. 
D, as the discriminator itself, is trained to distinguish real from fake, while also mimicking the eigenvalues/eigenvectors of real data. How would these two goals be reconciled by the same set of parameters?\", \"Experiments\", \"The results on the synthetic data seem promising, but the results on MNIST and CIFAR-10 are not impressive enough:\", \"The visual quality of Figure 9 does not look very appealing. I believe many simple DCGAN variants can produce better-quality images\", \"Why does your DCGAN baseline on CIFAR-10 only report an Inception score around 5 (with high variance, see Figure 5 and Table 3)? I believe vanilla DCGANs can easily attain an IS of 5.5 to 6, as reported in most recent GAN literature.\", \"More visual results on CIFAR10 should have been presented in order to demonstrate that DPP does generate images with as many classes as exist in CIFAR-10 (which is 10)\", \"The results could be much more convincing if the authors could show the generation results and evaluation metrics on larger/more realistic datasets other than CIFAR-10 and MNIST. See the 2018 GAN literature for what datasets to use.\", \"Presentation\", \"Most parts of this paper are well written. There are a few typos and grammatical errors across the text, which I believe are easily fixable. There are some missing details that hinder the understanding of some technical parts of the paper. See above for detailed comments.\", \"Other\", \"Promoting diversity in (deep) generative models isn't a new topic. It would be good if the authors could establish connections to/differences from this line of relevant work.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
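The loss debated throughout this thread — under the eigen-matching reading of Eq. 5/6 given in the authors' replies — can be sketched as below: build DPP-style kernels from discriminator features of real and fake batches, eigendecompose them with a differentiable routine (torch.linalg.eigh, the modern counterpart of the symeig / self_adjoint_eig operators the authors mention), and match eigenvalue magnitudes plus real-eigenvalue-weighted eigenvector orientations. The exact normalizations here are guesses for illustration, not the paper's definition.

    import torch
    import torch.nn.functional as F

    def gdpp_loss(phi_fake, phi_real):
        """phi_*: (B, d) discriminator features of a batch.  Gradients flow
        to the generator only, so real features are detached upstream."""
        L_fake = phi_fake @ phi_fake.t()              # (B, B) DPP-style kernels
        L_real = phi_real @ phi_real.t()
        evals_f, evecs_f = torch.linalg.eigh(L_fake)  # differentiable eigendecomposition
        evals_r, evecs_r = torch.linalg.eigh(L_real)
        magnitude = F.mse_loss(evals_f, evals_r)      # match eigenvalue magnitudes
        cos = F.cosine_similarity(evecs_f, evecs_r, dim=0)
        weights = evals_r / evals_r.max().clamp_min(1e-8)
        structure = -(weights * cos).sum()            # match weighted orientations
        return magnitude + structure

The eigh call is O(B^3) in the batch size B, which is the cost the authors argue is negligible for B <= 512; note also that the gradient through eigenvectors is well-defined only for distinct eigenvalues, which bears on the reviewers' differentiability questions.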
S1xLZ2R5KQ
Maximum a Posteriori on a Submanifold: a General Image Restoration Method with GAN
[ "Fangzhou Luo", "Xiaolin Wu" ]
We propose a general method for various image restoration problems, such as denoising, deblurring, super-resolution and inpainting. The problem is formulated as a constrained optimization problem. Its objective is to maximize the a posteriori probability of the latent variables, and its constraint is that the image generated by these latent variables must be the same as the degraded image. We use a Generative Adversarial Network (GAN) as our density estimation model. Convincing results are obtained on the MNIST dataset.
[ "gan", "posteriori", "submanifold", "general image restoration", "latent variables", "maximum", "general", "denoising", "inpainting" ]
https://openreview.net/pdf?id=S1xLZ2R5KQ
https://openreview.net/forum?id=S1xLZ2R5KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygEH5ra14", "Hkge4MIj0m", "HklKrfHc0X", "HyghIbr50m", "Bye3WZB907", "S1gQAUDB27", "HygHWpHH2m", "BJe2djKE3X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544538684103, 1543361064502, 1543291457345, 1543291219789, 1543291139782, 1540875978953, 1540869373156, 1540819827672 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1172/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1172/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1172/Authors" ], [ "ICLR.cc/2019/Conference/Paper1172/Authors" ], [ "ICLR.cc/2019/Conference/Paper1172/Authors" ], [ "ICLR.cc/2019/Conference/Paper1172/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1172/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1172/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a framework of image restoration by searching for a MAP in a trained GAN subject to a degradation constraint. Experiments on MNIST show good performance in restoring the images under different types of degradation.\\n\\nThe main problem as pointed out by R1 and R3 is that there has been rich literature of image restoration methods and also several recent works that also utilized GAN, but the authors failed to make comparison any of those baselines in the experiments. Additional experiments on natural images would provide more convincing evidence for the proposed algorithm.\\n\\nThe authors argue that the restoration tasks in the experiments are too difficult for TV to work. It would be great to provide actual experiments to verify the claim.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lack of comparison with recent baselines\"}", "{\"title\": \"Comments after authors' response\", \"comment\": \"Follow-up comments after authors' response.\\n\\nFor Q1, I mentioned in the previous comments that the missing reference is Yeh et al., ICASSP 2018, not the CVPR 2017 paper. \\n\\nYeh, Raymond A., et al. \\\"Image Restoration with Deep Generative Models.\\\" 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.\\n\\nIn Yeh et al. paper, the author did use the pre-trained generator to obtain the results. In addition, the ICASSP 2018 version did contain various image restoration tasks. \\n\\nFor Q2, TV might not be the best solution for all these tasks, but it is a strong baseline. Reviewer #3 also mentioned that. The paper should include the results instead of just saying that it does not work. In addition, Yeh et al. paper should be the right baseline method to compare with. \\n\\nThe rebuttal did not address my concerns. The score remains the same.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your thoughtful review. 
We will address your concerns in turn.\", \"q1\": \"So you provide the general framework where somebody has to specify only the F?\", \"a1\": \"Yes, and that is the motivation of this work, to avoid training new models for slightly different situations.\", \"q2\": \"The efficiency of the method is highly based on the ability of the GAN to approximate well the prior distribution of the noise-free images.\", \"a2\": \"Yes, so we use WGAN-GP, a strong and elegant implementation, as our trained GAN.\", \"q3\": \"Is parameter Omega estimated individually for each degraded image?\", \"a3\": \"Yes.\\n\\nThank you again for your positive reviews which give me some confidence, I really appreciate it.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your thoughtful review. We will address your concerns in turn.\", \"q1\": \"The idea is very related to Yeh et al.\\u2019s work which is not mentioned at all.\", \"a1\": \"The entire first paragraph of our related work section is focused on Yeh et al.\\u2019s work. As we explained in the paragraph, there is a major theoretical flaw in their method. Yeh et al. (2017) use the discriminator loss of a trained GAN as an indicator of how realistic their restoration is. However, Goodfellow et al. (2014) already prove that the discriminator is unable to identify how realistic an input is after several steps of training, if the GAN has enough capacity. Ideally the generator will have all the information of the data distribution while the discriminator will have none. That is why we use the generator of a trained GAN as an implicit probability density model in our method.\\n\\nAnother difference between their work and ours is that they focus only on the image inpainting problem, while our method applies to various image restoration problems.\", \"q2\": \"Total variation regularization can also handle different degradations.\", \"a2\": \"We think you underestimate the difficulty of those restoration problems. Please check the degraded images in Table 3. These images are damaged so badly that TV cannot recover any meaningful thing. As a handcrafted prior, TV performs much worse than our data-driven baseline method in these tasks.\", \"q3\": \"Does the proposed method learn the image inpainting mask as well? What are the parameters of the degradation in the applications?\", \"a3\": \"The image inpainting mask is known and fixed.\\n\\nWe use four different kinds of degradation to test the generality of our method. The first three kinds of degradation are 7\\u00d7 downsampling, making a 14\\u00d714 square hole in the center of the image, and adding Gaussian white noise with a standard deviation of 1.0, respectively. The last kind of degradation is a composition of a series of degradations applied in order, which are (a) adding linear motion blur by at most 14 pixels in any direction, (b) 4\\u00d7 downsampling, (c) adding uniform noise between -0.05 and 0.05, (d) randomly removing 10% of the pixels.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your thoughtful review. We will address your concerns in turn.\", \"q1\": \"The degradation function F is challenging to obtain in real world scenarios.\", \"a1\": \"Many state-of-the-art approaches, including SRCNN and SRGAN, have their own implicitly defined degradation functions. They use their function F to generate training samples during their training process, while we use our explicitly defined function F during the inference process. 
If the assumed degradation function F is not exactly the function in real scenarios, both these state-of-the-art approaches and our method will suffer. So it is unfair to criticize our motivation just because we explicitly write out the degradation function F.\", \"q2\": \"TV can also be applied for different restoration tasks, and it is easier to optimize.\", \"a2\": \"We think you underestimate the difficulty of those restoration problems. Please check the degraded images in Table 3. These images are damaged so badly that TV cannot recover any meaningful thing. As a handcrafted prior, TV performs much worse than our data-driven baseline method in these tasks.\"}", "{\"title\": \"The motivation is not convincing. The final model is too difficult to be optimized. The experimental results are also too weak for evaluation.\", \"review\": \"This paper proposed a framework to incorporate GAN into the MAP inference process for general image restoration.\\n\\nFirst, the motivation of the proposed framework is not convincing for me. That is, the authors assumed that they have a degradation function F, and the entire inference process is based on this known function. However, in real world scenarios, it is actually challenging to obtain exact degradation information. Thus we may only apply the proposed model on a few tasks with exactly known F.\\n\\nSecond, due to the norm-based constraints, the authors actually need to optimize a highly nonconvex optimization problem. Moreover, due to the trace-based loss function, the computational cost will also be very high. Please notice that standard MAP-based methods only need to solve a simple convex optimization model (e.g., TV) and these methods can also be applied for different restoration tasks. Actually, we only need to specify particular fidelity terms for different tasks. Moreover, very recent works have also successfully incorporated both generative and discriminative network architectures (e.g., [1,2]) into the optimization process. Therefore, I cannot find any advantage in the proposed method, compared with these existing MAP-based image restoration approaches.\\n\\nFinally, the experimental part is also too weak to evaluate the proposed method. As I have mentioned above, actually a lot of methods have been developed to address general image restoration tasks. Some works actually also incorporate generative and/or discriminative networks into the MAP inference process for these tasks. Thus I believe the authors must compare their method with these state-of-the-art approaches. Moreover, the authors should conduct experiments on state-of-the-art benchmarks, including natural images. This is because the digit images in MNIST do not have rich texture and detailed structures, and thus are not very challenging for standard image restoration methods. \\n\\n\\n[1]. Kai Zhang, Wangmeng Zuo, Shuhang Gu, Lei Zhang: Learning Deep CNN Denoiser Prior for Image Restoration. CVPR 2017: 2808-2817\\n[2]. Jiawei Zhang, Jin-shan Pan, Wei-Sheng Lai, Rynson W. H. Lau, Ming-Hsuan Yang: Learning Fully Convolutional Networks for Iterative Non-blind Deconvolution. CVPR 2017: 6969-6977\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Nice presentation but missing important reference\", \"review\": \"This paper proposed a general method for image restoration based on GAN. 
In particular, the latent variable z is optimized based on the MAP framework, and the results are obtained as G(z). This method looks like a reasonable way to achieve good results. However, the idea is very related to Yeh et al.\\u2019s work, which has already been published but is not mentioned at all.\\n\\nYeh, Raymond A., et al. \\\"Image Restoration with Deep Generative Models.\\\" 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.\\n\\nBoth the proposed method and Yeh et al.\\u2019s method optimize the latent variable z of the generator using MAP, although the loss functions are slightly different. In addition, the applications are very similar: image inpainting, denoising, super-resolution etc. Yeh et al.\\u2019s method should be the right baseline instead of the nearest neighbor algorithm. \\n\\nIn addition, the results seem very weak. There are tons of algorithms for image inpainting, denoising, and super-resolution, but the proposed method was not compared with them. The paper claims that only the nearest neighbor algorithm can handle different degradations. This is not true. For example, total variation regularization can do all these tasks.\", \"some_other_comments\": \"What are the parameters of the degradation in the applications? For example, in image inpainting, does the proposed method learn the mask as well? So is it blind inpainting?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea, but the paper needs improvements.\", \"review\": \"The authors propose a method for image restoration, where the restored image is the MAP estimate. A pretrained GAN is utilized to approximate the prior distribution of the noise-free images. Then, the likelihood induces a constraint which is based on the degradation function. In particular, the method tries to find the latent point for which the GAN generates the image which, when degraded, will match the given degraded image. Also, an optimization algorithm is presented that solves the proposed constrained optimization problem.\\n\\nI find the paper very well written and easy to follow. Also, the idea is pretty clean, and the derivations are simple and clear. Additionally, Figures 2 and 3 are very intuitive and nicely explain the theory. However, I think that there are some weaknesses (see comments):\", \"comments\": \"#1) I do not understand exactly what the \\\"general method\\\" means. Does it mean that you propose a method where you can just change F so as to solve a different degradation problem? So you provide the general framework where somebody has to specify only the F?\\n\\n#2) Clearly, the efficiency of the method is highly based on the ability of the GAN to approximate well the prior distribution of the noise-free images.\\n\\n#3) There are several equations that can be combined so as to save white space for discussing further technical details. For instance, Eqs. 2 and 3 can easily be combined using the proportionality symbol, and Eqs. 8-11 actually show the same thing.\\n\\n#4) I think that the function F has to be differentiable, and this should be mentioned in the text. Also, I believe that some actual (analytic) examples of F should be provided, at least in the experiments. The same holds for the p(Omega). 
Is this parameter Omega estimated individually for each degraded image?\\n\\n#5) Before Eq. 8 the matrix V is a function of z and should be presented as such in the equations.\\n\\n#6) I believe that it would be nice to include a magnified image of Fig. 3, where the gradient steps are shown. Also, my understanding is that the optimization goal is first to find a feasible solution, and then to find the point that maximizes f. I think that this can be clarified in the text.\\n\\n#7) The optimization steps seem to be intuitive; however, there is not any actual proof of convergence. Of course, the example in Figure 3 is very nice and intuitive, but it is also rather simple. I would suggest, at least, including some empirical evidence of convergence in the experiments.\\n\\n#8) In the experiments I think that at least one example of F and p(Omega) should be presented. Also, what do the numbers in Table 4 show? Which is the best value that can be achieved? Do these numbers correspond to several images, or to a single image? \\n#9) I think that MNIST is almost a toy experiment, since the crucial component of the proposed method is the prior modeling with the GAN. I believe that a more challenging experiment should be conducted, e.g. using the CelebA dataset.\", \"minor_comments\": \"#1) In the paragraph after Eq. 4 the equality p_r(x)=p_G(x) is a very strong assumption. I would suggest using the \\\\simeq symbol instead.\\n\\n#2) After Eq. 6 the \\\"nonnegative\\\" should be \\\"nonzero\\\".\\n\\n#3) Additional density estimation models can be used, e.g. VAEs or GMMs. In particular, I believe that a VAE would provide an easier way to approximate the prior than the GAN.\\n\\n#4) In Section 2 paragraph 2, it is not clear what the sentence \\\"However, they only ... and directly\\\" means.\\n\\nIn general, I find both the proposed model and optimization algorithm interesting. Additionally, the idea is nicely presented in the paper. Most of my comments are improvements which can be easily included. The two things that make me more skeptical are the convergence of the proposed algorithm and the experiments. MNIST is a relatively simple experiment, and I would like to see how the method works on more challenging problems. Also, I think that additional methods to compute the image prior should be included in the experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
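The procedure the three reviews above are assessing — search the generator's latent space for the most probable z whose generated image, after the known degradation F, matches the observation — can be sketched with a simple quadratic-penalty relaxation. The paper uses its own constrained optimizer, so treat this as an illustrative simplification under assumed inputs G, F, and y:

    import torch

    def map_restore(G, F, y, z_dim, mu=10.0, steps=500, lr=0.05):
        """MAP restoration on the GAN manifold: maximize the latent prior
        (standard normal here) subject to F(G(z)) == y, with the constraint
        relaxed into a penalty term weighted by mu."""
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            neg_log_prior = 0.5 * (z ** 2).sum()        # -log p(z) up to a constant
            constraint = ((F(G(z)) - y) ** 2).sum()     # degradation-consistency term
            (neg_log_prior + mu * constraint).backward()
            opt.step()
        return G(z).detach()                            # restored image

As reviewer 2's point #4 anticipates, F must be differentiable for this to work, and this relaxation says nothing about the convergence of the paper's actual constrained scheme (their point #7).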
S1MB-3RcF7
Multi-objective training of Generative Adversarial Networks with multiple discriminators
[ "Isabela Albuquerque", "João Monteiro", "Thang Doan", "Breandan Considine", "Tiago Falk", "Ioannis Mitliagkas" ]
Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary. Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average. In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and diversity, and computational cost than previous methods.
[ "Generative Adversarial Networks", "Multi-objective optimization", "Generative models" ]
https://openreview.net/pdf?id=S1MB-3RcF7
https://openreview.net/forum?id=S1MB-3RcF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HylSlRTGeV", "BkxqMAxkl4", "r1eIdLBA14", "ByeCEZOlJE", "BklSxbdxkE", "HJxRbLlRa7", "H1gifE_paQ", "Byg-vvVt6X", "Ske4BPVYTX", "rygTVw4KTQ", "r1elJcCVpX", "HJxOmF0ET7", "B1gaR7CNpX", "HyeH5yCEaX", "rJeo4xjp37", "rkxn9hTqhX", "rJgbpOntnX", "Bke7IvPfnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1544900076913, 1544650257610, 1544603246125, 1543696694377, 1543696621034, 1542485509554, 1542452243271, 1542174552625, 1542174523945, 1542174517082, 1541888472235, 1541888288226, 1541886933112, 1541885837052, 1541414963375, 1541229715977, 1541159096618, 1540679499016 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1171/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ], [ "ICLR.cc/2019/Conference/Paper1171/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1171/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1171/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1171/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers found that paper is well written, clear and that the authors did a good job placing the work in the relevant literature. The proposed method for using multiple discriminators in a multi-objective setting to train GANs seems interesting and compelling. However, all the reviewers found the paper to be on the borderline. The main concern was the significance of the work in the context of existing literature. Specifically, the reviewers did not find the experimental results significant enough to be convinced that this work presents a major advance in GAN training.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A well written paper on multi-discriminator GAN training, but just below the bar in empirical results\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We first thank the reviewer for her/his reply.\\n\\nThe more discriminators one can use the better. This is inline with findings in [1 - Theorem A.2]. We found 24 discriminators to work very well across different datasets and models.\\n\\nAs we have previously mentioned, the comparison between 1 and 24 discriminators is unfair by design in terms of computational cost. This is exactly the reason why many of our experiments are focused on comparing different multiple-discriminators methods so as to emphasize the importance of our contribution under this setting. The same \\u201dunfairness\\u201d is also observed if one compares different generators (DCGAN-like vs. ResNet-based in [2 - Table 2] and [3 - Table 3], for example). This is exactly the point we want to make. 
If one has the available resources, adding discriminators is a practical approach to trading the extra computational cost for additional quality/diversity.\\n\\nIf we understood correctly the experiment suggested by the reviewer, we would have to modify the generator's or discriminator\\u2019s architectures to compensate for the extra discriminators. Although very interesting, this experiment would not support any claim we make. We clarify once more that our focus is on improving the performance of a given generator.\\n\\nWe empirically support the claim that a given generator will achieve better performance with multiple discriminators when compared to its single-discriminator counterpart. This was consistently observed using generators of various sizes on different datasets.\\n\\nFor a given generator, using the random projections setting from [1], increasing the number of discriminators will:\\n\\n1-Improve sample diversity\\n2-Yield higher sample quality in terms of FID (and Inception score as well)\\n3-Make it easier to find a working set of hyperparameters (all default Adam hyperparameters usually yield great improvements w.r.t. the single-discriminator case)\\n\\nBy looking at the multiple-discriminators setting through the lens of multi-objective optimization, one can see that optimizing for the average loss will yield solutions at any part of the Pareto front, while hypervolume maximization prefers central regions. This, we believe, is where the benefit of using this approach over the average loss lies, since it will enforce the assumptions of [1 - Theorem A.2] (cf. discussion with reviewer 2).\\n\\nThe reviewer mentioned that \\u201cAgain, if we do not have controlled experiment results, how can we really know whether the performance gain is significant enough so that the added cost is tolerable?\\u201d It is not possible to tell in absolute terms whether the added cost is \\u201ctolerable\\u201d or not. In our experiments, we were able to train generative models of image data of up to 256x256x3 against 24 discriminators on a single common GPU (NVIDIA GTX 1080ti). Given that, we claim the method is practical, theoretically grounded, and yields relevant performance gains, which we believe is a useful finding for the community to be aware of.\\n\\n[1] Neyshabur, Behnam, Srinadh Bhojanapalli, and Ayan Chakrabarti. \\\"Stabilizing GAN training with multiple random projections.\\\" arXiv preprint arXiv:1705.07831 (2017).\\n[2] Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.\\n[3] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved Training of Wasserstein GANs. In Advances in Neural Information Processing Systems (pp. 5767-5777).\"}", "{\"title\": \"reply\", \"comment\": \"I thank the authors for the detailed replies.\\n\\n\\\"if one accounts for the several training runs required for hyperparameters search, is reduced.\\\" This is a good point, although it's an empirical (even case-by-case) finding. Figure 5 shows its insensitivity w.r.t. \\\\delta in your case; how about the learning rate in your case? How about the learning rate in other cases? In your case, the improvement from 8 to 16 discriminators is small, but from 16 to 24 it is significant. Then how many discriminators are enough to obtain this insensitivity? 
If all these questions have to be answered empirically (through experiments), then the savings in hyperparameter search are discounted...\\n\\n\\\"While the increase in cost in terms of FLOPS and memory is unavoidable\\\" Why is it unavoidable? Can we design an experiment to compare the Inception/FID of single and multiple discriminators, with the same FLOPs and/or memory? Or, with the same Inception/FID, compare the FLOPs and/or memory? The results in the paper are not enough to answer this controlled-experiment question. The added Table 6 cannot, either. By the way, this seems to be a typo: \\\"This effect was consistently observed considering 4 different well-known approaches, namely DCGAN (Radford et al., 2015), Least-square GAN (LSGAN) (Mao et al., 2017), and HingeGAN (Miyato et al., 2018). \\\"\\n\\n\\\"We wanted to show the added cost would translate into performance gain.\\\" Again, if we do not have controlled experiment results, how can we really know whether the performance gain is significant enough so that the added cost is tolerable? \\\"the added cost would translate into performance gain.\\\" is the basic requirement for a valid method, but tells nothing about whether the method is good. \\n\\nTherefore, I prefer to keep my rating. A fair comparison between single and multiple discriminators is my main concern.\"}", "{\"title\": \"Feedback\", \"comment\": \"Thank you once more for your time and suggestions. We hope to have addressed your concerns and would appreciate it if you would take the results included during the rebuttal into consideration when reviewing your score. We are looking forward to hearing back from you and are open to discussing any further concerns.\"}", "{\"title\": \"Feedback\", \"comment\": \"Thank you once more for your time and suggestions. We hope to have addressed your concerns and would appreciate it if you would take the results included during the rebuttal into consideration when reviewing your score. We are looking forward to hearing back from you and are open to discussing any further concerns.\"}", "{\"title\": \"Response to comment\", \"comment\": \"Thank you for your comment and for your time reading the updated version of our manuscript.\\n\\n\\u201c2. The idea of using multi-objective optimization to improve stability is really interesting. Maybe I missed something, but there seems to be a large jump from \\\"GANs are unstable\\\" to \\\"multi-objective optimization should help\\\". Is there anything (say, a theoretical result or conceptual explanation) in the literature to fill the gap?\\u201d\\n\\n**For the \\u201cGANs are unstable\\u201d part, we build upon two results introduced in [1], namely: \\n\\n1-In Theorem A.1, it is shown that marginals along random projections will likely have a higher overlap. This means that, in the projected space, the generator\\u2019s samples and the real samples will be more alike. This avoids a common failure mode in GAN training, in which the discriminators quickly learn how to distinguish real and generated samples. Thus, training in a lower-dimensional randomly projected space will be easier compared to training on the original data.\\n\\n2-In Theorem A.2, an upper bound is proven showing that if approximation of the projected data distribution is achieved along a sufficient number of random projections (each corresponding to a discriminator), the distribution induced by the generator approximates the real data distribution (in the original space). 
\\n\\nAs we see, the authors in [1] propose to trade one hard problem for a number of easier subproblems in order to ameliorate training instability. \\n\\n**Regarding the \\\"multi-objective optimization should help\\\" aspect, we proposed a more suitable optimization framework for the described setting by looking at it through the lens of multi-objective optimization and applying the hypervolume maximization approach. We compared previously proposed multiple-discriminators approaches as well as Multiple Gradient Descent, and showed hypervolume maximization to yield a better compromise between cost in time and sample quality.\\n\\nWe further respectfully point the reviewer to the last paragraph of Section 4.1, which we post herein for convenience. There, we aim to further motivate our contribution by comparing the proposed method with simply minimizing for the average loss: \\n\\n\\u201cThe upper bound proven by [1] assumes that the marginals of the real and generated distributions are identical along all random projections. Average loss minimization does not ensure equally good approximation between the marginals along all directions. In case of a trade-off between discriminators, i.e. if decreasing the loss on a given projection increases the loss with respect to another one, the distribution of losses can be uneven. With HV on the other hand, especially when \\\\eta is reduced throughout training, overall loss will be kept high as long as there are discriminators with high loss. This objective tends to prefer central regions of a trade-off, in which all discriminators present a roughly equally low loss.\\u201d\\n\\nWe hope to have appropriately clarified your concerns. \\n\\n[1] Neyshabur, Behnam, Srinadh Bhojanapalli, and Ayan Chakrabarti. \\\"Stabilizing GAN training with multiple random projections.\\\" arXiv preprint arXiv:1705.07831 (2017).\"}", "{\"title\": \"Further updates\", \"comment\": \"In this post we describe a number of new experiments that we performed in response to the reviewers\\u2019 questions. We believe that these results strengthen the relevance of the discussed framework and thank the reviewers; their helpful suggestions were very useful in improving our work.\\n\\n1-As suggested by Reviewer 1, we ran experiments on CIFAR-10 at its standard resolution (32x32) for a clear comparison with previous approaches. Results, now shown in Appendix C.4 - Table 7, include a comparison with SNGAN [1], in which we show that adding multiple discriminators with HVM in a DCGAN-like setting yields relevant improvements in both FID and Inception Score.\\n\\n2-Following the suggestion of Reviewer 2, we included Table 6 in Appendix C.2, in which we compare single- vs. multiple-discriminator settings of 3 GANs in terms of FID and computational cost. 
Results support the claim that the added cost yields higher quality samples, which was consistently observed across the different settings.\\n\\n3-To further address Reviewer 1\\u2019s concern as to whether our method scales up to higher-resolution datasets, we added generated images of size 256x256 obtained by a generator trained with 24 discriminators on the Cats dataset, containing only 1740 training examples, as similarly done in [2]. Notice that adding the multiple-discriminator setting allowed us to successfully train the same generator that was shown in [2] to be unable to yield natural-looking samples (see Figures 4 and 5 in [2]). Generated samples are now presented in Appendix E.\\n\\nWe highlight that all experiments performed within this work were executed on single-GPU hardware, which indicates that the multiple-discriminator setting is a practical approach.\\n\\nWe believe that our new, stronger results address issues brought up by the reviewers (also cf. individual reviewer responses) and hope that the reviewers will kindly consider our improvements for their final evaluation.\\n\\n[1] Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.\\n[2] Jolicoeur-Martineau, Alexia. \\\"The relativistic discriminator: a key element missing from standard GAN.\\\" arXiv preprint arXiv:1807.00734 (2018).\"}", "{\"title\": \"Cost-performance analysis and comparison with existing reported scores\", \"comment\": \"We added the suggested computational cost analysis in terms of FLOPS and memory consumption for a complete training step to Appendix C.2. We compared DCGAN, LSGAN, and HingeGAN with their corresponding 24-discriminator versions. We also reported the best FID obtained during training. In summary, these results show that the introduced extra computational cost yields a relevant improvement on the best FID in all cases. We highlight that the increase in performance is solely due to the use of the multiple-discriminators set-up, as all the other aspects were kept unchanged for the same GAN type.\\n\\nRegarding the comparison with other existing results, we hope that Appendix C.4 will address the reviewer\\u2019s concerns. We ran our method with 24 discriminators using a DCGAN-like generator as described in [1] on CIFAR-10 in its more commonly used version (32x32). We compared the models in terms of FID and Inception Score (using original implementations in TensorFlow) with the results reported in [1] for SNGAN, DCGAN, and WGAN-GP. Furthermore, we implemented our version of SNGAN [1] and, in this case, we also reported the best FID-ResNet obtained during training. This experiment shows that using our approach improves the performance of a simple DCGAN-like generator to yield FID and Inception Score on par with the values reported in [1].\\n\\nWe believe the added results helped us to strengthen our contribution, and we once more thank the reviewer for the thoughtful feedback. \\n\\n[1] Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.\"}", "{\"title\": \"Comparing to the broader established literature\", \"comment\": \"We want to let the reviewer know that the suggested experiments were included in the newest version of our manuscript. 
We point the reviewer to:\\n\\nAppendix D.2 for generated CelebA samples at 128x128 under a varying number of discriminators.\\nAppendix E for generated Cats samples at 256x256. Notice that we used only 1740 training samples.\\nAppendix C.4, in which we included results on the original CIFAR-10 for a clearer comparison with other methods. We thus show that adding multiple discriminators with HVM training shifts the performance of a vanilla DCGAN-like generator to scores in line with [1].\\n\\nWe hope to have addressed the reviewer\\u2019s concerns.\\n\\n[1] Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.\"}", "{\"title\": \"Summary of modifications\", \"comment\": [\"We thank the reviewers for their time reading our paper and for providing useful feedback. We summarize the main corresponding modifications in the updated version of the manuscript in the following:\", \"Updated stacked MNIST results with extra evaluation for test sets of different sizes. Evaluations with 10000 and 26000 images (as usually reported) are now shown in Table 1.\", \"Added to Appendix G a plot of minimum FID vs. wall-clock time for MNIST experiments in order to aid the understanding of Figure 3, as suggested by Reviewer 1.\", \"Illustration added to Appendix F to make Section 4.1 easier to follow.\", \"Added samples obtained on CelebA 128x128 with 6, 8, and 10 discriminators to Appendix D.2 in order to show that the proposed method scales up to higher-resolution datasets.\"]}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the thoughtful comments and feedback.\\n\\nWe quote the reviewer and address the respective comments below.\\n\\n\\u201cThe details and motivations of the Hypervolume Maximization (HVM) method [...]\\u201d\\nWe thank the reviewer for pointing this out. Given the limitation in space in the main text, we added two figures to Appendix G in the hope that this will make clearer how the hypervolume interacts with the nadir point's coordinate values.\\n\\n\\u201cSignificance:\\nUnclear. This work in isolation appears to present an improvement over prior work [...]\\u201d\\n\\nWe included samples of generators trained on CelebA at 128x128 in Appendix D for different numbers of discriminators. Architectures correspond to the ones used for the 64x64 case, with 1 extra conv. layer in both models.\\n\\nIn order to further emphasize the significance of our contributions, we would like to highlight the following points.\\n1-In the coverage evaluation performed on top of stacked MNIST, the results reported were computed using 10k generated samples, while results reported in previous literature are computed employing a sample of size 26k. Both scenarios are now included in the last uploaded version, and after repeating the evaluations using 26k images we were able to cover the maximum number of 1000 modes with 16 and 24 discriminators, and 776.8+-6.4 modes with 8 discriminators.\\n\\n2-Regarding WGAN-GP, our implementation obtained worse FID-ResNet (trained on CIFAR-10) than DCGAN. On the other hand, Inception Score and FID with the Inception model were both better with WGAN-GP, as reported in the literature.\\n\\nCons\\n1-\\u201cPerformance of GANs is highly dependent on both model size and compute [...]\\u201d\\n\\nWe focused on going from the single- to the multiple-discriminator case, while keeping the generator architecture and training setting unchanged. 
This is done to isolate the effect of the added discriminators. We acknowledge the fact that different architectures will benefit differently from the added discriminators; however, we observed similar effects in all cases considered within this work.\\n\\nWe further highlight that the multiple-discriminator setting is not an alternative to other training schemes for GANs, but rather a complementary training strategy that can (and should, in our view) be used together with other methods. As such, our experiments are intended to (i)-show the effect given by the addition of discriminators, and (ii)-show that hypervolume maximization provides an effective \\u201cpolicy\\u201d to assign importance to different discriminators.\\n\\n2-\\u201cThe paper lacks experiments beyond toy-ish tasks [...]\\u201d\\n\\nWe decided to report Inception Scores as a ratio with respect to DCGAN since they are not directly comparable to most of the values reported in the literature. In order to keep a consistent comparison with our main baseline, Neyshabur et al. (2017), we decided to employ exactly the same architecture, which was designed for inputs of size 64x64. We thus upscaled CIFAR-10, which changes the range of the scores. The Inception Score obtained by DCGAN on the 64x64 rescaled version of CIFAR-10 was 4.0697+-0.0861 (10 runs with 10k samples). Moreover, aiming to better contextualize our contribution with respect to other approaches, we are running experiments with the 32x32 version of CIFAR-10.\\n\\n3-\\u201cFigure 3 is slightly strange in that the x axis is time to best result instead of just overall wallclock time. [...]\\u201d\\n\\nFollowing the reviewer\\u2019s suggestion, we added the suggested plot to Appendix F.\\n\\nAll the models are trained with a fixed budget in terms of iterations (93800, corresponding to 100 epochs with a batch size of 64). Our goal was indeed to emphasize a trade-off between faster convergence to a sub-optimal FID vs. later convergence to a better value. AVG and GMAN were not able to further improve FID after a few training iterations. On the other hand, HV was able to further improve the achieved best FID, and MGD could take even more advantage of the available training budget, as it was able to decrease the FID almost until the end of training.\\n\\nAdditional comments:\\n1-\\u201cIn section 3.1 Eq 5 appears to be wrong. [...]\\u201d\\nIndeed, the minus signs on the betas are typos in the definition of the alpha_k\\u2019s (we also double-checked our implementation and it is correct). We fixed this in the updated version of the manuscript. \\n\\n2-\\u201cIn Fig 6 FID scores computed on a set of 10K samples are shown. [...]\\u201d\\n\\nWe agree, and to further investigate this we compared FID values obtained for real data using three different architectures, namely Inception-V3, ResNet18 and VGG-16. The model to calculate FID using Inception-V3 was trained on Imagenet. More specifically, we first compute the statistics of the training partition of CIFAR-10 and then compute the FID for the test set. Obtained values were 3.1796, 0.0319, and 0.0255, respectively, with very small variation. 
As pointed out by the reviewer, since CIFAR-10 has only 10 classes, using 10k real samples to calculate FID should have a smaller sampling error in comparison with Imagenet.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the suggestions and constructive feedback.\\n\\nIn the following, we quote the reviewer and respond to each specific concern directly below.\\n\\n\\u201cOverall, the proposed method is empirical and the authors show its performance by experiments.\\u201d\\n\\nWe acknowledge that the bulk of the evidence for adopting our method is empirical. However, we specifically build upon earlier guarantees introduced by Neyshabur et al. (2017), showing that when approximation along a sufficient number of projections (and discriminators as a consequence) is achieved, the distribution induced by the generator converges to the real data distribution. We thus introduce a more suitable optimization framework to ensure approximation along as many projections as possible, which is not enforced by simply optimizing for the loss average if there is some trade-off along different projections.\\n\\n\\u201cAlthough the presentation of the Hypervolume maximization [...] form with other previous methods.\\u201d\\n\\nWe apologize for the lack of clarity in this section. Section 3.2 is a brief review of the formal definition of the hypervolume for the more general multi-solution case. We tried to make the single-solution case, employed in our work, clearer and more intuitive in Section 4.1, and illustrated it with an example in Fig. 1, in which a single solution l has its hypervolume highlighted for a given nadir point \\\\eta. Maximizing the highlighted volume implies minimizing l1 and l2 simultaneously. We added an illustration to Appendix F which might be useful for understanding the loss behavior throughout training.\\n\\n\\u201cFirst, I want to discuss the significance [...] performance improvement. Maybe I\\u2019m wrong.\\u201d\\n\\nWe agree with the reviewer. The computational complexity of training GANs under a multiple-discriminator setting is higher by design in terms of both FLOPS and memory, compared with single-discriminator settings. However, such a setting is not an alternative to the recent advances in single-discriminator training, but rather a complementary method which can be used together with other methods.\\n\\nWe would also like to make a few practical remarks regarding the use of multiple discriminators:\\n\\n1-While using multiple discriminators may increase the cost of a single training run, the overall cost of training, if one accounts for the several training runs required for hyperparameter search, is reduced. We observed such behavior in our experiments, as reported in Fig. 5. The reduced variation in training outcomes makes it faster to find a stable training setting when more discriminators are employed. From our point of view, multiple-discriminator settings should be employed along with any training scheme of choice, if enough resources are available. 
As an example, which will be added to the manuscript as soon as we conclude the new experiments, adding discriminators yields the following relative improvements in terms of FID: DCGAN - 55.21%; LSGAN - 57.93% (we are currently running similar experiments on other GANs such as WGAN-GP and HingeGAN).\\n\\n2-While the increase in cost in terms of FLOPS and memory is unavoidable, wall-clock time can be made close to the single-discriminator case since training with respect to different discriminators can be implemented in parallel. The extra cost in time introduced by other frameworks such as WGAN or SNGAN cannot be recovered.\\n\\n3-All our experiments were performed in single-GPU settings, which supports the claim that multiple-discriminator training is practical enough to be employed in several common use cases.\\n\\nThe main conclusion we were able to draw from our experiments is that employing multiple discriminators is a practical approach allowing us to trade extra capacity (and thereby extra computational cost) for higher quality and diversity of generated samples when compared to the single-discriminator equivalent setting, while avoiding mode-collapse and divergence during training for a wider set of hyperparameters.\\n\\n\\u201cFrom my perspective, a fair comparison [...] FLOPS and memory consumption.\\u201d\\n\\nWe understand the concern in terms of fairness of comparison and thank the reviewer for the valuable comment. However, the experiments in the paper were designed to show the effect of adding the extra complexity (in terms of the total number of parameters) specifically through increasing the number of discriminators (and using random projections+HV loss) on the generated samples. We wanted to show that the added cost would translate into a performance gain.\\n\\n\\u201cThe reported FID is computed from [...] comparison with existing reported scores.\\u201d\\n\\nWe reported in Table 5 in Appendix C the Inception Score and FID with the Inception model trained on Imagenet, relative to DCGAN. We highlight that our scores are not directly comparable with values reported in other works since we used an upscaled version of CIFAR-10 at 64x64 in order to use the same setting as our main baseline, Neyshabur et al. (2017).\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the feedback and for taking the time to read our paper. We are glad that the reviewer found our method interesting and hope that the following response, added to the new results included in the manuscript, will make her/him more confident about our contributions.\\n\\nRegarding the mentioned trade-off, we understood that the reviewer is referring to the addition of \\u201ccapacity\\u201d (in terms of the number of parameters) on the discriminator side, as in the multiple-discriminators settings, or on the generator side, as in the cited reference (Brock et al. 2018). We would like to point out that such approaches have different goals: while adding capacity on the generator side is intended to yield generators able to scale to higher resolution settings and higher quality samples, multiple discriminators are aimed at stabilizing training, avoiding common issues such as mode-collapse and divergence, which make the final performance of the generator highly dependent on careful hyperparameter tuning. If enough resources are available, both approaches should be used jointly. 
As we also observed multiple-discriminator training to yield higher quality and diversity when compared to single-discriminator equivalents, we believe higher-scale settings would also benefit.\\n\\nRegarding the insufficiency of results, we would like to respectfully highlight that we presented quantitative and qualitative results (comparing with both single- and multiple-discriminator GANs) on 4 datasets (namely, MNIST, CIFAR-10, Stacked MNIST, and CelebA), with consistent conclusions. We also included samples at a higher resolution for CelebA at 128x128 in Appendix D.2, and are currently running experiments to compare different versions of GANs in the single- vs. multiple-discriminator settings. Some preliminary results, which will be added to the manuscript as soon as we conclude the new experiments, show that adding discriminators yields the following relative improvements in terms of FID: DCGAN - 55.21%; LSGAN - 57.93% (we are currently running similar experiments on other GANs such as WGAN-GP and HingeGAN [2]). Moreover, we would highly appreciate it if the reviewer could suggest any further experiments in order to increase her/his confidence in our results.\\n\\n[2] Miyato, Takeru, et al. \\\"Spectral normalization for generative adversarial networks.\\\" arXiv preprint arXiv:1802.05957 (2018).\"}", "{\"title\": \"A comparison of various weighting approaches to multi-discriminator training\", \"review\": \"Clarity:\\nThe work is a clear introduction/overview of this area of research. The reviewer enjoyed the connections to Multiple-Gradient Descent and clear distinctions/contrasts with previous approaches to weighting the outputs of multiple discriminators. All in all, the paper is quite clear in what its contributions are and how it differs from previous approaches. The details and motivations of the Hypervolume Maximization (HVM) method (especially as it relates to and interacts with the slack method of picking the nadir point) were a bit harder to follow intuitively given the standalone information in the paper.\", \"originality\": \"Adapts a technique to approximate MGD called HVM (Miranda 2016) and applies it to multi-discriminator training in GANs. As far as the reviewer is aware, this is a novel application of HVM to this task and well motivated under the MGD interpretation of the problem.\", \"significance\": \"Unclear. This work in isolation appears to present an improvement over prior work in this sub-field, but it is not obvious that the findings in these experiments will continue to be robust in more competitive settings. For instance, the worst performing model on CIFAR10 in the experiments run, WGAN-GP, also holds near-SOTA Inception scores on CIFAR10 when appropriately tuned. Without any experimental results extending beyond toy datasets like MNIST and CIFAR10, the reviewer is not confident whether fundamental issues with GAN training are being addressed or just artifacts of small scale setups. 
Closely related previous work (Neyshabur 2017) scaled to 128x128 resolution on a much more difficult dataset - Imagenet Dogs - but the authors did not compare in this case.\", \"quality\": \"Some concerns about details of experiments (see cons list and significance section for further discussion).\", \"pros\": [\"The work provides a clear overview of previous work on approaches using multiple discriminators.\", \"The connections of this line of work to MGD and the re-interpretation of various other approaches in this framework is valuable.\", \"The author provides direct comparisons to similar methods, which increases confidence in the results.\", \"On the experiments run, the HVM method appears to be an improvement over the two previous approaches of softmax weighting and straightforward averaging for multiple discriminators.\"], \"cons\": [\"Performance of GANs is highly dependent on both model size and compute expended for a given experiment (see Miyato 2018 for model size and training iterations and Brock 2018 for batch size). Training multiple discriminators (in this paper up to 24) significantly increases compute cost and effective model size. No baselines controlling for the effects of larger models and batch sizes are done.\", \"The paper lacks experiments beyond toy-ish tasks like MNIST and CIFAR10 and does not do a good job comparing to the broader established literature and contextualizing its results on certain tasks such as CIFAR10 (reporting ratios to a baseline instead of absolute values, for instance). The absolute inception score of the baseline DCGAN needs to be reported to allow for this. Is the Inception Score of the authors' DCGAN implementation similar to the 6 to 6.5 reported in the literature?\", \"Figure 3 is slightly strange in that the x axis is time to best result instead of just overall wallclock time. Without additional information I cannot determine whether it is admissible. Do all models achieve their best FID scores at similar points in training? Why is this not just a visualization of FID score as a function of wallclock time? A method which has lower variance or continues to make progress for longer than methods which begin to diverge would be unfairly represented by the current Figure.\"], \"additional_comments\": \"In section 3.1 Eq 5 appears to be wrong. The loss of the discriminator is presented in a form to be minimized, so exponentiating the negative loss in the softmax weighting term as presented will do the opposite of what is desired and assign lower weight to higher loss discriminators. \\n\\nIn Fig 6 FID scores computed on a set of 10K samples are shown. The authors appear to draw the line for the FID score of real data at 0. But since it is being estimated with only 10K samples, there will be sampling error resulting in a non-zero FID score. The authors should update this figure to show the box-plot for FID scores computed on random draws of 10K real samples. I have only worked with FID on Imagenet, where FID scores for random batches of 10K samples are much higher than 0. 
I admit, however, that there is some chance the value is extremely low on CIFAR10, which would make this point irrelevant.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The idea is natural and interesting, the presentation is clear, but short of analysis on the computational cost (FLOPS and memory consumption)\", \"review\": \"This paper studies the problem of training Generative Adversarial Networks employing a set of discriminators, as opposed to the traditional game involving one generator against a single model. Specifically, this paper claims two contributions:\\n1.\\tWe offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research in GAN variations and MGD, commonly employed as a general solver for multi-objective optimization.\\n2.\\tWe propose a new method for training multiple-discriminator GANs: Hypervolume maximization, which weighs the gradient contributions of each discriminator by its loss.\\n\\nOverall, the proposed method is empirical and the authors show its performance by experiments. \\n\\nFirst, I want to discuss the significance of this work (or this kind of work). As surveyed in the paper, the idea of training Generative Adversarial Networks with a set of discriminators has been explored by several previous works, and has shown some performance improvement. However, this idea (methods along this line) is not popular in GAN applications, like image-to-image translation. I guess the reason may be that the significant increase in computational cost (both in FLOPS and memory consumption) due to multiple discriminators destroys the benefit of the small performance improvement. Maybe I\\u2019m wrong. In Appendix C Figure 10, the authors compare the wall-clock time of DCGAN, WGAN-GP, and the multiple-discriminator approach, and claim that the proposed approach is cheaper than WGAN-GP. However, WGAN-GP is more expensive because its loss function involves gradients, while the proposed method's does not. If directly compared with DCGAN, we can see an obvious increase in wall-clock time (FLOPS). In addition, the additional memory consumption is hidden there, which is a bigger problem in practice when the discriminators are large. SN-GAN has roughly the same computational cost and memory consumption as DC-GAN, but its Inception score and FID are much better. From my perspective, a fair comparison is under roughly the same FLOPS and memory consumption. \\n\\nThe paper is well-written. The method is well-motivated by the multi-objective optimization perspective. Although the presentation of the Hypervolume maximization method (Section 3.2) is not clear, the resulting loss function (Equation 10) is simple, and shares the same form as other previous methods. The hyperparameter \\\\eta is problematic in the new formulation. The authors propose Nadir Point Adaptation to set this parameter. \\n\\nThe authors conduct extensive experiments to compare different methods. The authors emphasize that the performance is improved with more discriminators, but it would be good to include a comparison of the computational cost (FLOPS and memory consumption) at the same time. There are some small questions about the experiments. The reported FID is computed from a pretrained classifier that is specific to the dataset, instead of the commonly used Inception model. 
I recommend the authors also measure the FID with the Inception model, so that we have a direct comparison with existing reported scores.\\n\\nOverall, I found that this work is empirical, and I\\u2019m not convinced by its experiments about the advantage of multiple-discriminator training, due to the lack of a fair computational cost comparison with single-discriminator training.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting methods, ok results\", \"review\": \"The paper investigates the use of multi-objective optimization techniques in GAN-setups where there are multiple discriminators. Using multiple discriminators was proposed in Durugkar et al, Arora et al, Neyshabur et al and others. The twist here is to focus on the Pareto front and to import multiple gradient descent and hypervolume-maximization based methods into GANs.\\n\\nThe results are decent. The authors find that optimizing with respect to multiple discriminators increases the diversity of samples at a computational cost. However, just scaling up (and carefully optimizing) can yield extremely impressive samples, https://arxiv.org/abs/1809.11096. It is unclear how the tradeoffs in optimizing against multiple discriminators stack up against bigger GANs. \\n\\nFrom my perspective, the paper is interesting because it introduces new methods into GANs from another community. However, the results themselves are not sufficient for publication.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to comment about relevant reference\", \"comment\": \"Hi! Thank you for pointing out this interesting reference. We will surely include it in our related works section.\"}" ] }
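Since several of the exchanges above turn on how hypervolume maximization reweights the discriminators, a compact sketch may help. Assuming positive per-discriminator generator losses l_k (e.g., -log D_k(G(z))) and the multiplicative-slack nadir-point adaptation the authors describe, the single-solution hypervolume objective reduces to a sum of shifted logarithms. The function and variable names below are our own paraphrase for illustration, not the authors' reference implementation.

```python
import torch

def hypervolume_loss(losses, slack=1.1):
    """Single-solution hypervolume maximization over per-discriminator losses.

    losses: 1-D tensor, the generator's loss l_k against each discriminator k
        (assumed positive, e.g., -log D_k(G(z)), so that eta - l_k > 0).
    slack: multiplicative slack delta > 1 for the adapted nadir point.
    """
    eta = slack * losses.max().detach()    # adapted nadir point, kept out of the graph
    return -torch.log(eta - losses).sum()  # minimizing this maximizes prod_k (eta - l_k)
```

Differentiating gives each l_k an effective weight 1/(eta - l_k), so discriminators the generator is currently losing against dominate the update — the "prefer central regions of the trade-off" behavior the authors argue for over plain loss averaging.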
ByxHb3R5tX
Universal Successor Features for Transfer Reinforcement Learning
[ "Chen Ma", "Dylan R. Ashley", "Junfeng Wen", "Yoshua Bengio" ]
Transfer in Reinforcement Learning (RL) refers to the idea of applying knowledge gained from previous tasks to solving related tasks. Learning a universal value function (Schaul et al., 2015), which generalizes over goals and states, has previously been shown to be useful for transfer. However, successor features are believed to be more suitable than values for transfer (Dayan, 1993; Barreto et al., 2017), even though they cannot directly generalize to new goals. In this paper, we propose (1) Universal Successor Features (USFs) to capture the underlying dynamics of the environment while allowing generalization to unseen goals and (2) a flexible end-to-end model of USFs that can be trained by interacting with the environment. We show that learning USFs is compatible with any RL algorithm that learns state values using a temporal difference method. Our experiments in a simple gridworld and with two MuJoCo environments show that USFs can greatly accelerate training when learning multiple tasks and can effectively transfer knowledge to new tasks.
[ "Reinforcement Learning", "Successor Features", "Successor Representations", "Transfer Learning", "Representation Learning" ]
https://openreview.net/pdf?id=ByxHb3R5tX
https://openreview.net/forum?id=ByxHb3R5tX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1lwK_BNWV", "S1xvb_FYxN", "SJxab7-DlN", "HJgWDYRUlV", "BJxfPRLUgE", "Skl17Kw7g4", "Byl9e_uzxN", "B1eu52GGx4", "Hyx0GKC3JN", "SJx1orD4J4", "S1gu-BwN1N", "r1lXpKtc0Q", "r1eieYYqAm", "SkxU9_YqR7", "S1gyhwFc0X", "HyxSyWWGA7", "H1gYhUfITQ", "rkxVB9fBaX", "ByeGy5GBTX", "SJe7jR_gam", "BJeFxqwC3Q", "H1gc-aXshm" ], "note_type": [ "comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1546045566716, 1545340926605, 1545175812825, 1545165144898, 1545133657850, 1544939799023, 1544878066364, 1544854671706, 1544509717530, 1543955862731, 1543955712479, 1543309755451, 1543309555318, 1543309454269, 1543309223344, 1542750429126, 1541969584602, 1541904956335, 1541904857802, 1541602971261, 1541466608861, 1541254401931 ], "note_signatures": [ [ "~Shane_Gayal1" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "~Shane_Gayal1" ], [ "ICLR.cc/2019/Conference/Paper1169/Area_Chair2" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "~Shane_Gayal1" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/Authors" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1169/AnonReviewer3" ] ], "structured_content_str": [ "{\"comment\": \"Can you elaborate more on applying this method with A3C. Especially the modification of the Advantage function? When calculating the advantage function in A3C , scalar rewards and state-value function is used. With this method are you proposing to modify only the value function computation in the Advantage function with USF?\", \"title\": \"Modification of Advantage function and Critic in A3C\"}", "{\"title\": \"HER experiment explanation\", \"comment\": \"Indeed, here HER does not appear to offer a significant improvement when compared with Multi-goal DQN. There are several possible reasons. (1) In our experiments, the tertiary goals are not necessarily on the optimal path of any training goal (i.e., the tertiary goals have a chance of never being experienced during training). (2) There is a chance that the system will train on an irrelevant goal (that is not in any of the goal sets), which is not true in the non-HER case. This can potentially outweigh the benefits of using HER in terms of the performance on the tertiary goals. (3) Even when the tertiary goal appears in some training episodes, such a goal may not be sampled from the buffer very frequently. In our experiment, half of the data we train on is from the current goal, while the other half is from goals used with HER. 
The probability of one particular tertiary goal being sampled is not very high. (4) The results appearing in the paper use an epsilon of 0.25. Further increasing this can potentially degrade the training performance and significantly increase training time, which most likely outweighs the benefits delivered by HER. Nevertheless, our experiments on all three environments suggest that, in the provided environments, USF can work well whereas HER seems to struggle considerably more.\"}", "{\"comment\": \"I have noted some interesting facts about the USF architecture proposed by the authors. I will write them down since I also want to know whether I am correct.\\n\\n1. Previous methods of SF-RL are mainly based on DQN architectures, and they do not highlight any zero-shot transfer learning ability. \\n\\n2. Previous networks have used an autoencoder loss to learn the state representation (Kulkarni et al., Zhan et al.) while training on the DQN task. This can add some instability to the system when the state representation is complex. Barreto et al. (2018) have replaced state representation vectors by assuming the current state is a linear combination of scalar rewards for a set of base tasks. With Barreto et al. there should be some base tasks. Ma et al. first introduced the USF idea, where the state representation vector is extracted from an auto-encoder. When working with visual states, this can be troublesome. In contrast to previous work on SF-RL and USF-RL, the authors have used a more straightforward way to predict the state representation with USF, which is more scalable. \\n\\n3. In previous SF-RL work, the reward prediction vector is also trained by regressing scalar rewards while training on the DQN baseline. In USF-RL (Ma et al.), a goal-oriented reward vector is produced by a neural network that takes the goal as input. However, it is still hard to train the goal-oriented reward vector prediction network due to the sparsity of the reward structure. For example, in a navigation task, once training starts the agent will see many negative rewards for a long time and decidedly fewer positive rewards (if it is A3C, the network will get updated by many agents). This makes the trained weights of the network unstable. In this paper, the authors proposed to use scalar rewards as they are and trained the reward vector prediction network with the Q loss.\\n\\n4. This combines general value function approximators (GVFA) with USF, which is useful for large-scale deep reinforcement learning frameworks like A3C. In A3C we can train different agents on different goals: say, in a navigation task with different targets, we can train this whole architecture while learning general patterns. \\n\\n\\nI also have a few questions regarding training this with complex representations like images for a navigation task in robotics. \\n\\n\\n1. Let's say we have a fixed number of targets where each target is semantically different from the others, and both the states and targets are represented as images. So, in the architecture proposed by the authors, we would need CNNs for the goal embedding, state embedding, and reward vector prediction networks. How should one proceed in this kind of situation? Is it scalable? \\n\\n2. Do we maintain two CNNs for the state embedding network and the goal embedding network? Can't we use the same network, which would be more like a Siamese network? I think this can be a problem when training since, practically, we only train with a given number of goals. \\n\\n3. What is the best way to calculate the Advantage in the A3C setting? To calculate the advantage in A3C, we need a scalar reward at each time step. If we replace the scalar reward with a linear combination of state representations, the A3C agent can become unstable because we use the advantage to update the policy directly. So can we use the scalar reward as it is to calculate the advantage while the USF replaces the value function?\", \"title\": \"Interesting idea which enables using USF with large-scale RL methods like A3C and IMPALA\"}", "{\"metareview\": \"In considering the reviews and the author response, I would summarize the evaluation of the paper as follows: The main idea in the paper -- to combine goal-conditioning with successor features -- is an interesting direction for research, but is somewhat incremental in light of the prior work in the area. Most of the reviewers generally agreed on this point. While a relatively incremental technical contribution could still result in a successful paper with a thorough empirical analysis and compelling results, the evaluation in the paper is unfortunately not very extensive: the provided tasks are very simple, and the difference from prior methods is not very large. All of the tasks are equivalent to either grid worlds or reaching, which are very simple. Without a deeper technical contribution or a more extensive empirical evaluation, I do not think the paper is ready for publication in ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reasonable but somewhat incremental result\"}", "{\"title\": \"HER experiment details\", \"comment\": \"Thank you for the additional experiments. I must say I'm quite surprised by HER's poor performance. There is a large performance gap between the current and tertiary goals despite HER's ability to evaluate arbitrary goal-performance off-policy. Indeed, it looks as though this gap is as high for HER as it is for the methods that don't exploit knowledge of the goal-reward function. Could you explain why this is the case? Does the gap shrink when training under a more exploratory policy (e.g. high epsilon)?\"}", "{\"title\": \"Yes, USFs can work with general actor-critic methods.\", \"comment\": \"```Thanks for your interest. Yes, USFs can work with general actor-critic methods. Please refer to the MuJoCo experiments in Sec.3.2 with DDPG where an \\\"actor\\\" component is included for the policy. USFs can be similarly applied to the computation of the Advantage in A3C. The modification is straightforward with our method.```\"}", "{\"comment\": \"Can we calculate the Advantage function with this architecture and use it with A3C?\", \"title\": \"Can this method work with Actor-Critic methods?\"}", "{\"title\": \"Clarification to HER applicability\", \"comment\": \"While we acknowledge that the HER setting (goal representations with arbitrary-goal evaluation) is common in the literature, our work was primarily influenced by the original, also common UVFA setting (goal representations with given-goal evaluation). In addition to the UVFA paper, there exist several other works that consider the UVFA setting (see, for example, [1] and [2]). 
However, to ensure a fair evaluation of our method, we did compare it both under the UVFA setting and, following your original observation, under the HER setting (see Appendix E where we compare HER with HER + USFs). Our results indicated that USFs may improve performance in this setting as well.\\n\\n[1] Zhu, Yuke, et al. \\\"Target-driven visual navigation in indoor scenes using deep reinforcement learning.\\\" Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.\\n[2] Mankowitz, Daniel J., et al. \\\"Unicorn: Continual Learning with a Universal, Off-policy Agent.\\\" arXiv preprint arXiv:1802.08294 (2018).\"}", "{\"title\": \"HER applicability\", \"comment\": \"I agree that HER requires being able to evaluate arbitrary-goal completion rather than just having the environment evaluate the given goal. However, I don't believe I've ever come across this distinction in the literature, as it seems rare to have access to both the goal representation and given-goal evaluation but not arbitrary-goal evaluation.\\n\\nAll of your clarifications were quite informative, but I'm afraid I'm still not convinced by the overall research contribution.\"}", "{\"title\": \"Reply to minor clarification\", \"comment\": \"Thank you for your feedback. We also included this answer in our main rebuttal:\\n\\nBoth (3) and (4) optimize phi. There is no weight sharing in the methods described in the paper. We have changed the notation in Figures 1, 6, and 7 to reflect this. Sorry for the confusion. Appendix A now provides both a detailed description and the model architecture of the Multi-goal DQN baseline. Essentially, the USFs architecture is built upon the Multi-goal DQN architecture by replacing the action value with successor features and adding a component to learn w.\\n\\nWe have attempted to address your concerns in the main rebuttal posted under your original review. Please let us know if there are any other questions or concerns you would like us to respond to.\"}", "{\"title\": \"Reply to minor clarification\", \"comment\": \"Thank you for your feedback. We also included this answer in our main rebuttal:\\n\\nIf you mean that the goal appears in the transition, then yes. However, the training mini-batch is sampled directly from the replay buffer, in which case the sampled transitions are likely not to share a goal.\\n\\nWe have attempted to address your concerns in the main rebuttal posted under your original review. Please let us know if there are any other questions or concerns you would like us to respond to.\"}", "{\"title\": \"We thank the reviewers for their valuable feedback and constructive suggestions.\", \"comment\": [\"With the advice of the reviewers, we added the following content to the original submission:\", \"We have provided an illustration of the architecture and pseudo-code of the Multi-goal DQN baseline (Appendix A).\", \"We have empirically justified our objective function by comparing it with an obvious alternative (Appendix D).\", \"We have provided some additional experiments demonstrating the compatibility of USFs with Hindsight Experience Replay (Appendix E).\", \"We have provided more experimental details and the hyperparameters we used for these experiments (Appendix F).\", \"We have rewritten the related work section in an attempt to be more comprehensive and to provide a more careful analysis of prior work.\", \"We would like to emphasize that USFs learn the dynamics of the environment under *optimal* policies for the given goals. 
While the original SFs for each goal can capture certain knowledge about the underlying dynamics of the specific task, USFs are able to learn the shared dynamics.\"]}", "{\"title\": \"Thank you for your insightful review and important observations!\", \"comment\": \"[Novelty about extending SFs to be goal-conditioned; model architecture]\\nAppendix A now provides both a detailed description and the model architecture of the Multi-goal DQN baseline. Essentially the USFs architecture is built upon the Multi-goal DQN architecture by replacing the action value with successor features and adding a component to learn w.\\n\\nThere is no weight sharing in the methods described in the paper. We have changed the notations in Figure 1, 6 and 7 to reflect this. Sorry for the confusion.\\n\\n[Q-learning loss and reward-prediction loss as training objectives]\\nWe have now added Appendix D which provides an empirical comparison of learning using Eq.(1)+(4) versus learning using Eq.(3)+(4) with the same USFs architecture. We hope this comparison can justify our proposed objective and show the importance here of using a loss based on action values instead of reward-prediction.\\n\\n[Comparison to UVFA]\\nAs we discussed with Reviewer#2, the original UVFA uses a two-stage learning procedure, which is unstable in practice. However, there seems to be a common adoption of UVFA which uses end-to-end learning and a goal as input. This is our Multi-goal DQN baseline. We believe that this is a fair proxy for UVFA and by comparing against it, we show the advantage of our method over UVFA.\\n\\n[Results will be less impressive when using HER?]\\nOne issue with HER is that it requires access to or must estimate multiple reward functions in each transition. Such a setting is fundamentally different from the setting we evaluate USFs in. However, we note that USFs can be used in conjunction with HER. As we now show in Appendix E, USFs can provide a considerable improvement over HER alone.\"}", "{\"title\": \"Thank you for your thorough review and insightful questions!\", \"comment\": \"Clarity:\\n\\n[Unclear description about the Multi-goal DQN baseline and difference of our method]\\nIn response to your comment, we have added a detailed algorithm description and the model architecture of the Multi-goal DQN baseline to Appendix A. The most notable difference between Multi-goal DQN and our method is that Multi-goal DQN directly learns the action values, while USFs learn successor features and goal-specific features whose inner product estimates the action values. This difference results in both a markedly different architecture in addition to a distinct loss function.\", \"originality_and_significance\": \"[Successor Features have appeared before]\\nBoth [2,4] do indeed use SFs for control tasks but, unlike ours, their method does not perform direct generalization. The GPI theorem only guarantees that the current policy is as good as all previous policies and that the agent will improve upon it, instead of directly generalizing from previous policies. For more detailed comparison, please refer to the comment #2 for \\\"other comments/questions\\\".\\n\\n[This way of learning \\\\phi has been proposed before]\\nTo clarify, the specific way of learning \\\\phi using one layer of neural networks is, to the best of our knowledge, originally proposed in DeepSR (Kulkarni et al. 2016), which we have explicitly cited in our revised paper (Sec.2.2). 
Notably, differing from both [3] and DeepSR, we do not use an auto-encoder.\\n\\n[Comparison with UVFA]\\nAs we noted previously, and also according to Mankowitz et al. [5], the Multi-goal DQN baseline in our paper can be considered as a common adoption of UVFA. Therefore, we have already shown the advantage of USFs over UVFA to some degree.\\n\\n\\nOther comments/questions:\\n\\n1. We use the actual reward, not a fictitious reward generated by our model. \\n2. To compare with the SFs & GPI appearing in [2,4], we have attempted to apply the algorithm in [4] to our gridworld setting, but unfortunately, it fails to learn during the test phase (second stage). Recall that in our experiment, the training phase amounts to a multi-task setting, and test phase is also a multi-task setting. So transfer must occur from a set of tasks to another set of tasks. SFs & GPI is specifically restricted to transferring between a multi-task setting to a single test task. As such it will naturally fail in our test phase.\\n\\nTo elaborate on this a bit further, a critical difference between our USFs and the framework of SFs & GPI in [2,4] is that SFs & GPI depends on the assumption that SFs are only a function of the policy and not of the task/goal (see Eq.(4) in [4]). If this assumption does not hold then applying SFs learned from one goal directly to another goal (as in GPI) can be problematic. For example, consider our gridworld environment in which an episode ends after the agent reaches the goal. Using one-hot state features and under the optimal policy, the true SFs \\\\psi^{\\\\pi_1} will include nothing except for an optimal path from the current state to the goal state g_1 (i.e., only the cells on the path will be non-zero). When we use these SFs for a different goal g_2, \\\\psi^{\\\\pi_1} is no longer accurate because the agent has to continue moving after reaching g_1. In other words, \\\\psi^{\\\\pi_1} fails to represent the true future state visitations when we deploy \\\\pi_1 for g_2. If we update all the SFs as done in [2], then these SFs will be specific to one particular test goal. As a result, SFs & GPI can only deal with one test task at a time and so is not suitable for our experimental setting, in which we will encounter a different test goal in *each episode*. In contrast to this, USFs depend on the goal g through the discount function \\\\gamma_g (see last equation on page 2). Thus, USFs can automatically be adjusted according to a goal, even under the same policy.\\n\\nOne additional difference between USFs and the SFs & GPI framework from [2,4], is that, unlike USFs, the SFs & GPI cannot actually produce a good zero-shot policy. In order to have a zero-shot policy, we have to know the goal-specific features w. In our model, we have a component specifically for computing w given g (see the right-most part of Fig.1). However, in [2,4], given a new goal, w is first randomly initialized and then gradually improved while exploring the new task. While we can certainly use this random w and GPI to compute a zero-shot policy, the resulting policy would be no better than random.\", \"follow_up_clarification\": \"If you mean that the goal appears in the transition then yes. However, the training minibatch is sampled directly from the replay buffer in which case the transitions sampled are likely not to share a goal.\\n\\n\\n[5] Mankowitz, Daniel J., et al., Unicorn: Continual Learning with a Universal, Off-policy Agent. 
arXiv preprint arXiv:1802.08294 (2018).\"}", "{\"title\": \"Thank you for your insightful review and thought-provoking questions!\", \"comment\": \"1. 2. We did analyze what Multi-goal DQN and USFs learn in our domains, but we concluded that our environments are ill-suited to provide a particularly interesting answer to this question; we have generally observed USFs learning faster than DQN, but in these domains, we have not found any strong characteristic differences between the eventual policies they develop. We hope to later apply this method to more complicated domains such as the Arcade Learning Environment, but, while we are excited about the prospects of how USFs might behave when applied to such domains, we felt that our limited computational resources would be better used to provide a fairer comparison with our baseline. As such we opted to leave the more complicated domains as future work.\\n\\n3. As to why we believe that Multi-goal DQN with USFs outperforms vanilla Multi-goal DQN, first note that in the first phase of each of our experiments the setting amounts to multi-task learning. So knowledge is being transferred in both the first and second phase of our experiments. In general, we believe that when decomposing the action-values into successor features and goal-specific features, the dynamics of the world learned by the successor features transfer more easily between goals than the action-values alone. We hypothesize that the more dissimilar the values of proximal states are under different goals, the more this dissociation benefits the transfer process. This hypothesis is supported by the larger gap in performance under the room reward structure than under the constant reward structure.\\n\\nFollowing your suggestion, we have rewritten the related work section in an attempt to give a more comprehensive and a more careful analysis of prior work. Also, thank you for bringing the graphical issue to our attention. We have redrawn the figures accordingly.\"}", "{\"title\": \"Clarification\", \"comment\": \"Thanks for the addition & clarification -- it is indeed helpful! And yes, what I meant by 'common adoption' is end-to-end training of an architecture like the one in Figure 6 (appendix).\\n\\np.s. Just a minor follow up clarification on the training protocol for Alg. 2: was the training done on-goal (line 7 & seems to suggest that, but just checking)?\"}", "{\"title\": \"Clarification\", \"comment\": \"Indeed, I was referring to using Eqn. (3) and (4) instead of (1) and (4). Specifically, Eqn. (1) is normally the only thing optimizing phi (barring an aux loss like reconstruction). Do both (3) and (4) optimize phi, or are there some stop gradients?\\n\\nThe weight sharing / weight tying refers to Figure 1, whereby theta_psi is used to embed both the goal and the state. It also looks like theta_psi is used to combine phi(s) and phi(g), but I imagine that is a typo or I'm misinterpreting the figure. The root of my question is basically wondering if the architecture, rather than the losses associated with SF, are what is improving performance. Clarifying the architecture used in each condition would help -- perhaps they are all similar enough that ablations wouldn't be needed.\"}", "{\"title\": \"Request for Minor Clarification\", \"comment\": \"Thanks for your feedback. We have two brief questions regarding your comments before we can feel confident addressing all your comments later in our full rebuttal. 
Firstly, when you stated we were using a Q-learning loss instead of a reward-prediction loss, do you mean we're using Eqn. (3) and (4) as our objective rather than Eqn. (1) and (4)? Secondly, can you elaborate on what you mean by extra weight sharing? We've added a short section to the appendix elaborating on our Multi-goal DQN baseline, which we hope will help clarify your comment.\"}", "{\"title\": \"Request for Minor Clarification\", \"comment\": \"Thanks for your feedback. We have one quick question regarding one of your comments before we can feel confident addressing all your comments later in our full rebuttal. From what we've observed, we've been led to believe that the common adoption of UVFA in the literature amounts to the Multi-goal DQN baseline we use in our work. As you pointed out, we failed to adequately explain what the algorithm we're calling Multi-goal DQN is. As such, we've added a short section to the appendix elaborating on our Multi-goal DQN baseline. Can you confirm that this is the common adoption of UVFA that you would have expected we would compare with? If not, is there any chance you could direct us to some work that describes the common adoption?\"}", "{\"title\": \"Official review: Interesting direction, but in this version, fairly incremental and missing crucial links & comparisons to the related literature\", \"review\": \"\", \"summary\": \"This paper proposes a generalisation of the SFs framework to a goal-conditioned representation that could, in principle, generalise over a collection of goals at test time. This is akin to universal value functions [1] (and more generally GVFs). Although I like the idea and it seems a very interesting direction for generalisation to new goals, I do think the execution, the particular instantiation and the (lack of) in-depth evaluation against (at least some of the) existing methods in the literature -- including UVFAs [1] and the different ways SFs have been used for generalisation [2,3,4] -- are unfortunately letting it down.\", \"clarity\": [\"Reasonably well-written, easy to follow. A couple of things in the experimental section can be improved:\", \"It's not totally clear to me what their baseline Multi-goal DQN is. Does it have the same architecture as Figure 1, but just using (2)?\", \"In the plots, is the only difference between DQN and DQN+USF that the second has the additional loss L_{\\\\psi}? Or is there any other difference?\"], \"originality_and_significance\": \"I'm a bit split here: I like in principle the idea, but I think this instantiation is (fairly) incremental with respect to the current literature. Even the claimed contributions are a bit thin. The suitability of SFs to any TD-based learning comes from SRs/SFs satisfying a Bellman eq., which was pointed out, explored and paired with control algorithms before [2,4]. Also, the particular way of learning the features \\\\phi, without going through the rewards, was already proposed and explored in [3]. That might be a missing reference. \\n\\nThe experiments seem to show slight improvements with respect to a baseline (Multi-DQN). It is not clear to me exactly what this is or whether it would dominate even something like vanilla UVFAs. I think this is a missing and somewhat mandatory comparison.
I know the authors noted that it was because 'UVFAs are prone to instabilities and may require further prior knowledge', but I think that refers only to the two-stage (factorisation) procedure proposed in the original paper, not the common adoption in the literature. At the end of the day, the proposed architecture in Fig. 1 is a kind of UVFA, just with a bit more structure, so it would be surprising to me if UVFAs would actually fail in these environments. But if that's the case, that's a very interesting data point that the additional structure actually helps considerably beyond the incremental advantage exemplified here. \\n\\n\\nOther comments/questions:\\n\\n1) Clarification on the training procedure. Is the value function $Q(s,a,g)$ trained via eq. (3) with i) the actual reward (coming from the environment) or ii) the 'fictitious' reward coming from r(s,a,s'|g) = \\\\phi(s,a,s')^T w(g)? Note that these are very different and only one ensures compatibility between the rewards and the value functions in learning.\\nThe SFs will give you the value function for the reward r(s,a,s'|g) = \\\\phi(s,a,s')^T w(g), and if this is not aligned with the real reward, the corresponding value function obtained via SFs will not be the value function optimising the real reward. As far as I can see, there's no criterion that forces this to be the case.\\n\\n2) Comparison with SF transfer literature. Although discussed in the related work section, there is no quantitative comparison to the way SFs were shown to transfer knowledge [2,4], via evaluation and (generalised) policy improvement. Because these ways of generalisation are very different, it's not clear how they would stack up against each other, or in which scenarios one would be more appropriate than the other.\", \"to_give_a_more_concrete_example\": \"The training procedure in 3.1 makes sure that there's fairly good coverage of the whole state-space by sampling goals conditioned on the room. Now if one would train SFs on these train tasks only (even independently), we would have policies that would know how to go to any of the rooms. And for the test tasks we would have the evaluation of these policies on the collection of goals. This means that, applying the methodology of transfer in [2,4], we would get zero-shot policies that reach any of the states encountered on the paths to the 12 goals used in the train phase. And even if the test goals are not part of this collection, it stands to reason that a policy that can already go to the goal's room would be easily adaptable to reaching the test goal -- i.e., the evaluation of the policy that already reached that room is a good starting point for the improvement step [4].\", \"note\": \"I am willing to reconsider when/if the above have been reconciled/resolved.\", \"references\": \"[1] Schaul, T., Horgan, D., Gregor, K. and Silver, D., 2015, June. Universal value function approximators. In International Conference on Machine Learning (pp. 1312-1320).\\n\\n[2] André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4055-4065, 2017.\\n\\n[3] Machado, M.C., Rosenbaum, C., Guo, X., Liu, M., Tesauro, G. and Campbell, M., 2018.
Eigenoption Discovery through the Deep Successor Representation, International Conference on Learning Representations, 2018.\\n\\n[4] Barreto, A., Borsa, D., Quan, J., Schaul, T., Silver, D., Hessel, M., Mankowitz, D., Zidek, A. and Munos, R., 2018, July. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In International Conference on Machine Learning (pp. 510-519).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting but not enough depth\", \"review\": \"I like the idea of Universal Successor Features, it seems a bit incremental but I think it is worth exploring. There is some missing aspects and better comparison that can be made for the paper. I believe for the final camera ready these comparisons should be added. Specifically the related work section seems to not be in a desired depth.\\n\\nFrom experiments perspective, there is sufficient experiments that can demonstrate the value of the model. It is a simple model but an elegant application and correctly used for the purpose of the tasks in the paper. I have the following questions which their answers may be good additions to the paper:\\n\\n1. Have you tried analyzing what successor features and goal-specific features learn? For example, one point of addressing this is: what does the agent seem to avoid or do, under your framework (but not normal DQN). \\n2. The tasks in this paper seemed a bit simplistic, how does the model work on more complex applications (games)? It is hard to establish proper comparison, even though your claims are sufficiently supported. \\n3. What is your explanation of cases where blue is under green? One could assume they would meet eventually like top-left in Figure 3. \\n\\nI strongly suggest a rewrite of the related works section and a redo of the graphics. Using PDF may help with odd aspect ratio for text (Fig 4).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting and clear, but contribution small and with many experimental omissions.\", \"review\": \"In this paper the authors propose an extension to successor features (SF). Akin to UVFAs, they condition on some goal state by concatenating to the current state after some shared preprocessing. The authors claim three contributions: 1) introducing the USF, 2) proposing an appropriate deep learning architecture for it, and 3) showing experimentally that USFs improve transfer both within a goal set and to novel goals.\\n\\nClaims 1) and 2) don't seem particularly noteworthy. Extending SF to be goal-conditioned is very straightforward, doesn't leverage anything unique to the SF formalism (e.g. the reward weights w already encode a goal in some sense), and doesn't attempt to extend its theoretical grounding. The architecture is likewise unsurprising, and the lack of ablations or alternatives make it seem rather unmotivated.\\n\\nThe usage of a Q-learning loss instead of a reward-prediction loss for updating phi is mentioned without citation. This seems quite novel, and could be a significant contribution if its advantage was demonstrated experimentally.\\n\\nThe experiments appear to show a significant advantage for USFs. 
For the training-goal-set advantage, it would be useful to know the architecture of multi-goal DQN. One hypothesis is that the extra weight-sharing is what is giving USFs an edge, and this should be ruled out. It is briefly mentioned that UVFAs weren't considered due to their stated instability, but its unclear how they differ from the multi-goal DQN.\\n\\nThe novel-goal results are impressive at first glance, but there is a glaring omission. Hindsight experience replay (HER) is mentioned but not evaluated, and would very likely trivialise the train/test goal-set distinction (unless the test goals were never previously visited). As these results are the primary contribution of this paper, this must be addressed prior to publication acceptance.\", \"edit\": \"The addition of HER experiments push this up a bit (5-->6). I'm still concerned about how significant the contribution is (as it is a straightforward extension to SFs), but the empirical results are now quite strong.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HygBZnRctX
Transferring Knowledge across Learning Processes
[ "Sebastian Flennerhag", "Pablo G. Moreno", "Neil D. Lawrence", "Andreas Damianou" ]
In complex transfer learning scenarios new tasks might not be tightly linked to previous tasks. Approaches that transfer information contained only in the final parameters of a source model will therefore struggle. Instead, transfer learning at at higher level of abstraction is needed. We propose Leap, a framework that achieves this by transferring knowledge across learning processes. We associate each task with a manifold on which the training process travels from initialization to final parameters and construct a meta-learning objective that minimizes the expected length of this path. Our framework leverages only information obtained during training and can be computed on the fly at negligible cost. We demonstrate that our framework outperforms competing methods, both in meta-learning and transfer learning, on a set of computer vision tasks. Finally, we demonstrate that Leap can transfer knowledge across learning processes in demanding reinforcement learning environments (Atari) that involve millions of gradient steps.
[ "meta-learning", "transfer learning" ]
https://openreview.net/pdf?id=HygBZnRctX
https://openreview.net/forum?id=HygBZnRctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJgUsOYklN", "Hye_eUxDk4", "BkemR3cMkE", "rkezle-C0X", "BkextEnBCQ", "Byxo0m2rRm", "Skgx-yRE0Q", "SyxtTCp4Am", "SJgQTudgAX", "BkeSD__xR7", "HJeAEd_eR7", "SylzzddgRm", "HyeuqwdeCm", "HkeqwPulCQ", "H1xWiy_q2Q", "Bkevhwv927", "HJgKnIJY27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544685725530, 1544123888075, 1543838923086, 1543536617996, 1542993015832, 1542992851182, 1542934263729, 1542934208883, 1542650043264, 1542649948532, 1542649909874, 1542649865750, 1542649744039, 1542649698117, 1541205913151, 1541203887274, 1541105328934 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1168/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/Authors" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1168/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes an approach for learning to transfer knowledge across multiple tasks. It develops a principled approach for an important problem in meta-learning (short horizon bias). Nearly all of the reviewer's concerns were addressed throughout the discussion phase. The main weakness is that the experimental settings are somewhat non-standard (i.e. the Omniglot protocol in the paper is not at all standard). I would encourage the authors to mention the discrepancies from more standard protocols in the paper, to inform the reader. The results are strong nonetheless, evaluating in settings where typical meta-learning algorithms would struggle. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"meta review\"}", "{\"title\": \"Thank you for your update\", \"comment\": \"Dear reviewer,\\n \\nThank you for taking the time to consider our rebuttal and revised manuscript.\\n \\nYou raise good points and we will address these in a final version of the paper; we have added a sentence following the stabilizer describing how it affects the meta gradient, and to answer your question about the norm in the Jacobian approximation, it is indeed the Schatten 1-norm.\"}", "{\"title\": \"Following up on rebuttal\", \"comment\": \"Dear reviewer,\\n \\nFollowing our rebuttal and discussion with R1 and R2, we hope that you find your main concerns addressed. 
Please let us know if there are any other questions we can answer.\"}", "{\"title\": \"Thank you for the updates and clarifications\", \"comment\": \"The manuscript has been improved substantially, thus I updated my score.\\n\\n1. On page 6, there is still no explicit formula to show how \\\\mu (the stabilizer) is applied to the meta-gradient.\\n\\n2. In Appendix B, what is the ||_1 norm on the Jacobian? We need to be clear about matrix norms, because _1 can mean Schatten 1-norm, vector-induced 1-norm, etc.\\n\\nMinor:\\n- As in Algo.1, the meta-gradient is applied to theta not psi, so it would make more sense for Thm.1 (and the proof in Appendix A) to use theta instead of psi (also to avoid potential confusion).\\n- Correct me if I am wrong: for general p, in the meta-gradient (Eq.8), the last term should have a single exponent (p-2) on the L_2 norm instead of p(p-2). Moreover, the coefficient before the expectation should be p instead of 2 (this does not affect the algorithm though, since we have \\\\beta to control the step size). \\n- In Appendix B, in the first equation, \\\\alpha^{i^2} is misleading; maybe use (\\\\alpha^i)^2.\\n- In Appendix A, right before \\\"with p = 2 defining...\\\", there is a \\\\psi^0_{s,+1} that should be \\\\psi^0_{s+1}.\"}", "{\"title\": \"Summary of revisions 2 and 3\", \"comment\": \"Dear reviewers,\\n\\nPlease note that we have made minor revisions since our initial rebuttal, summarized below:\\n\\n- Revision 2 added details to experiments, as requested by R3\\n- Revision 3 fixed some typos, improved the explanation of the stabilizer, and addressed R1's comment wrt the proof\"}", "{\"title\": \"Thank you for your diligence\", \"comment\": \"We are very impressed with your diligence and grateful for your input – your comments are very helpful in improving our manuscript! We are further grateful for your willingness to engage in the rebuttal and revise your review. Please see below for responses to your comments and questions.\\n \\n> Further baselines on Omniglot, miniImagenet would strengthen the paper\\n \\nWe respect your position, and given time and resources we would have been happy to oblige. We respectfully disagree with regard to miniImagenet being \\\"less favourable\\\" to Leap. Since Reptile outperforms MAML on miniImagenet, and Leap can be reduced to Reptile, Leap's performance is \\\"lower bounded\\\" by Reptile. While we take your point that it would be interesting to see how much of a boost other configurations could provide in a few-shot setting, given the feedback we received, we chose to prioritize other parts of the paper as we felt that would add more value.\\n \\n> Pareto optimality on convex loss surfaces\\n \\nWe have found that giving people a visual crutch helps them to understand how Leap behaves. More generally, Leap converges to a locally Pareto optimal point. As a property though, we agree that it's not particularly interesting, which is why we don't emphasize it in the manuscript.\\n \\n> the stabilizer is still in the meta-gradient\\n \\nWe noticed this as well and have fixed it: we agree that it should not be part of equation 8.\\n \\n> confusing claim: \\\"the stabilizer reduces emphasis on the gradient of f(\\\\theta)\\\"\\n \\nApologies for the confusion; our choice of words was somewhat unfortunate. We have revised the manuscript to clarify this point. The weight placed on the task gradient is indeed larger, but as you note, \\\\mu guarantees we follow the descent direction.
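To make this concrete, using your simplification $S_{\\tau}^{i} = I$: the weight placed on $g_{i}$ is $\\alpha_{\\tau}^{i} - \\Delta f_{\\tau}^{i}$ without the stabilizer and $\\alpha_{\\tau}^{i} + \\vert \\Delta f_{\\tau}^{i} \\vert$ with it, so for a hypothetical anomalous step with $\\alpha_{\\tau}^{i} = 0.1$ and $\\Delta f_{\\tau}^{i} = +0.5$ the former is $-0.4$ (an ascent contribution) while the latter is $+0.6$.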
What we meant here is that \\mu reduces the weight placed on following that anomalous line segment, instead attempting to avoid that neighborhood in the updated gradient path.\\n \\n> I believe there might be a small mistake in the proof of Theorem 1. Nevertheless, even if this were the case, I think it would not affect the conclusion.\\n \\nThank you for pointing this out, we overloaded the definition of g (note the use of g(z), as opposed to g(\\\\hat{z}), in the inner product). We have removed this overloading and explicitly define g(z), largely as you propose.\\n \\nThank you for a careful read, all typos fixed.\"}", "{\"title\": \"Thank you for taking the time to address the review in great detail (II/II)\", \"comment\": \"# MINOR POINTS ABOUT THE REVISED MANUSCRIPT\\n\\n1. The claim that \\\"if both tasks have convex loss surfaces there is a unique optimal initialization that achieves Pareto optimality in terms of total path distance\\\", while true, might not be so helpful, since initialization should in theory be irrelevant for convex losses.\\n\\n2. Currently, Eq. 8 is derived assuming the stabilizer $\\\\mu$ is included in the loss. However, the stabilizer is only introduced afterwards. This might be confusing for some readers if they attempt to derive Eq. 8 themselves when they first encounter it, prior to finishing reading page 6 entirely.\\n\\n3. While I think the role of the stabilizer heuristic is much more clearly explained now, there is a claim that still confuses me slightly. In the last paragraph of page 6, it is said that \\\"The stabilizer ... reduces the weight placed on the gradient of $f_{\\\\tau}(\\\\theta_{\\\\tau}^{i})$\\\". However, under the simplifying assumption $S_{\\\\tau}^{i} = I$, one would have $g_{i} := \\\\nabla f_{\\\\tau}(\\\\theta_{\\\\tau}^{i})$ and $\\\\Delta \\\\theta_{\\\\tau}^{i} = -\\\\alpha_{\\\\tau}^{i} g_{i}$. Then, without stabilizer, the \\\"weight\\\" of $g_{i}$ would be $\\\\alpha_{\\\\tau}^{i} - \\\\Delta f_{\\\\tau}^{i}$ while with stabilizer, the \\\"weight\\\" of $g_{i}$ would then be $\\\\alpha_{\\\\tau}^{i} + \\\\vert \\\\Delta f_{\\\\tau}^{i} \\\\vert$, which in principle could be larger (in magnitude) than the weight without stabilizer. Nonetheless, if this is not mistaken, it would be clear that the stabilizer ensures $g_{i}$ is never effectively followed in the ascent direction in rare cases when $\\\\Delta f_{\\\\tau}^{i}$ is large and positive.\\n\\n4. I believe there might be a small mistake in the proof of Theorem 1. Nevertheless, even if this were the case, I think it would not affect the conclusion.\\n\\nIn the middle of page 15, a derivation implies that $\\\\langle h_{\\\\tau}^{i} - z_{\\\\tau}^{i}, z_{\\\\tau}^{i} - x_{\\\\tau}^{i} \\\\rangle = -\\\\alpha_{\\\\tau}^{i} \\\\langle g(z_{\\\\tau}^{i}), z_{\\\\tau}^{i} - x_{\\\\tau}^{i}\\\\rangle$. However, I believe this ignores the contribution of the extra dimension corresponding to the loss function values. That is, $\\\\langle h_{\\\\tau}^{i} - z_{\\\\tau}^{i}, z_{\\\\tau}^{i} - x_{\\\\tau}^{i} \\\\rangle = \\\\langle \\\\hat{h}_{\\\\tau}^{i} - \\\\hat{z}_{\\\\tau}^{i}, \\\\hat{z}_{\\\\tau}^{i} - \\\\hat{x}_{\\\\tau}^{i} \\\\rangle + \\\\left(f_{\\\\tau}(\\\\hat{h}_{\\\\tau}^{i}) - f_{\\\\tau}(\\\\hat{z}_{\\\\tau}^{i})\\\\right)\\\\left(f_{\\\\tau}(\\\\hat{z}_{\\\\tau}^{i}) - f_{\\\\tau}(\\\\hat{x}_{\\\\tau}^{i})\\\\right)$.
Nevertheless, I think that using $\\\\langle \\\\hat{h}_{\\\\tau}^{i} - \\\\hat{z}_{\\\\tau}^{i}, \\\\hat{z}_{\\\\tau}^{i} - \\\\hat{x}_{\\\\tau}^{i} \\\\rangle = -\\\\alpha_{\\\\tau}^{i} \\\\langle g(\\\\hat{z}_{\\\\tau}^{i}), \\\\hat{z}_{\\\\tau}^{i} - \\\\hat{x}_{\\\\tau}^{i}\\\\rangle$ and $\\\\left(f_{\\\\tau}(\\\\hat{h}_{\\\\tau}^{i}) - f_{\\\\tau}(\\\\hat{z}_{\\\\tau}^{i})\\\\right)\\\\left(f_{\\\\tau}(\\\\hat{z}_{\\\\tau}^{i}) - f_{\\\\tau}(\\\\hat{x}_{\\\\tau}^{i})\\\\right) = \\\\left(-\\\\alpha_{\\\\tau}^{i} {\\\\nabla f_{\\\\tau}^{i}(\\\\hat{z}_{\\\\tau}^{i})}^{T} g(\\\\hat{z}_{\\\\tau}^{i}) + O(\\\\alpha_{\\\\tau}^{i})\\\\right) \\\\left(f_{\\\\tau}(\\\\hat{z}_{\\\\tau}^{i}) - f_{\\\\tau}(\\\\hat{x}_{\\\\tau}^{i})\\\\right)$ should still allow bounding $\\\\alpha_{\\\\tau}^{i}$ from above to ensure the objective function decreases.\\n\\nI believe a similar issue (the contribution of the extra dimension not being explicitly shown) might also have occurred in the last step of the proof, when bounding $\\\\vert\\\\vert h_{\\\\tau}^{i} - z_{\\\\tau}^{i} \\\\vert\\\\vert^{p}$ from above by ${\\\\alpha_{\\\\tau}^{i}}^{p} \\\\vert\\\\vert g(\\\\hat{z}_{\\\\tau}^{i}) \\\\vert\\\\vert^{p}$. But as with the previous case, I don't think this would affect the actual argument being made.\\n\\n\\n# TYPOS\", \"page_2\": \"\\\"... our framework can be _extend_ to learn ...\\\"\\n\\\"... initialization. _Differences_ schemes represent ...\\\"\", \"page_4\": \"\\\"... Leap converges on _an_ locally Pareto optimal ...\\\"\\n\\\"... and _progress_ via ...\\\"\", \"page_5\": \"\\\"... and _construct_ baseline gradient ... \\\"\", \"pages_8_and_9\": \"length metric ($d_{2}$) and energy metric ($d_{1}$) -> length metric ($d_{1}$) and energy metric ($d_{2}$)\", \"page_9\": \"\\\"27 games that _has_ an action space ...\\\"\\n\\nPages 9 and 19 (Tables 1 and 3):\\n\\nNo pre-training AUC for the Facescrub task in bold, but the value for PNs is smaller.\", \"page_18\": \"\\\"... (until _convergenve_) ...\\\"\"}", "{\"title\": \"Thank you for taking the time to address the review in great detail (I/II)\", \"comment\": \"# HIGH-LEVEL ASSESSMENT (UPDATED)\\n\\nAfter reading the author rebuttal and going through the revised manuscript, I believe the authors have successfully addressed the vast majority of concerns I had about the original version of the paper. \\n\\nBased on the current version of the article, I lean strongly towards acceptance and have modified my score accordingly.\\n\\n# STATE OF PREVIOUSLY RAISED MAJOR POINTS\\n\\n1. In my original review, I raised issues regarding the way LEAP was motivated and derived; an opinion also voiced by Reviewer 2. \\n\\nI believe Section 2 of the revised manuscript has greatly improved in terms of clarity while simultaneously being more general.\\n\\nI apologise for the mistaken sign in $\\\\Delta \\\\theta_{\\\\tau}^{i}$ in the subsequent analysis. In hindsight, I should have definitely caught that error based on the very unintuitive conclusions that ensue!. The fact that LEAP reduces to Reptile when minimising the expected energy of the \\\"non-augmented\\\" gradient flow makes perfect sense and helps understand what LEAP's \\\"place\\\" is alongside MAML and Reptile. \\n\\nThe authors have also extended LEAP to minimise either the length or the energy of the gradient path, rather than minimising only the energy. This possibility was loosely mentioned in the original manuscript, but not implemented. 
As pointed out in their rebuttal, minimising the length of the gradient path instead of the energy implicitly \\\"normalises\\\" the magnitude of the gradient w.r.t. the initialisation $\\\\theta_{0}$ across tasks (Eq. 8), which might make LEAP most robust against heterogeneity in the scale of task losses.\\n\\nThe new ablation studies included in Sections B and C of the Appendix are also a great addition to study/justify empirically some of the more heuristic aspects of the paper.\\n\\n2. The original review also raised some concerns regarding Theorem 1 and its proof; a point also raised by Reviewer 2.\\n\\nThe statement of Theorem 1 and, most importantly, its proof, have been almost entirely rewritten. To the best of my knowledge, I believe the revised version is correct (potential minor inconsequential caveats described below), and is now much clearer and easy to follow.\\n\\n3. Besides carrying out the new ablation studies, the authors have introduced two additional baselines in Section 4.2 and now report aggregated results for 10 different seeds in Section 4.3.\\n\\nI still believe that having included additional baselines also in Sections 4.1 and 4.3, as well as evaluating LEAP in a \\\"less favourable\\\" few-shot learning scenario, could have further strengthened the paper. Nevertheless, given the time (and possibly compute) constraints, the revised manuscript also improved considerably in terms of experimental results and, most importantly, already provides sufficient evidence that LEAP can outperform existing approaches when tasks are sufficiently diverse.\"}", "{\"title\": \"Summary of revisions in light of reviews\", \"comment\": [\"[This is a top-level reply with only a summary of our changes, please see our answer to each individual reviewer thread for details]\", \"Dear Reviewers, thank you for throughout and thoughtful feedback and for being overall positive about our work. We have worked through our manuscript and have made several additions (including new experiments) that clarifies the link between the theory and the algorithm, provides further insight into both, and significantly strengthens our experimental results. We hope these additions address any questions raised and address any concerns you may have. In particular, we have\", \"Expanded section 2 to provide further insights into the framework and our proposed solution algorithm.\", \"Generalized Leap to allow for the use of either the energy or length metric as measure of gradient path distance.\", \"Re-organized the proof of theorem 1 to address concerns about completeness and clarity.\", \"Added ablation study with respect to (a) the inclusion of the loss in the task manifold, (b) the use of the energy or length metric, and (c) the use of a regularizer/stabilizer. In short, the more sophisticated the meta objective, the better Leap performs. The length metric converges faster, but final performance is largely equivalent. Adding the loss to the task manifold improves performance, while the stabilizer speeds up convergence.\", \"Added ablation study with respect to the Jacobian approximation, as a function of the learning rate. We find that we can use relatively large learning rates without significant deterioration of the approximation.\", \"Added HAT and Progressive Nets as baselines on Multi-CV. Neither of them outperforms Leap.\", \"Report confidence intervals on Atari games. 
We find that Leap does better than a random initialization by more consistently exploring useful parts of parameter space.\", \"Please see the answers to individual reviewers below for particular comments.\"]}", "{\"title\": \"Thank you for your review\", \"comment\": \"We thank you for your review. We understand your sentiment and hope that our revised paper will alleviate any concern you may have. More specifically,\\n\\n> 1) The details of the experiments such as parameter configurations are missing\\n\\nThank you for pointing out that further experimental details are needed. We will add further details this week to ensure our results are fully replicable. \\n\\n> 2) Include more state-of-the-art transfer learning methods\\n\\nWe have added results for Progressive Nets (Rusu et al., 2017), which is a rather demanding baseline as it has more than 8 times as many parameters as Leap, and HAT (Serra et al., 2018), whose paper inspired our setup. We find that they do not change any of our conclusions.\\n\\n> 3) use some commonly used datasets\\n\\nWe would like to point out that all datasets used in our paper are frequent in transfer learning work of various kinds; the point we are making here is that Leap is a general-purpose framework that can tackle any of them. In particular, Omniglot is frequently used in few-shot learning (Vinyals et al., 2016, Snell et al., 2017, Finn et al., 2017, Nichol et al., 2017), while all datasets in the Multi-CV experiment are common in various forms of transfer learning (Serra et al., 2018, Zenke et al., 2018, Zagoruyko et al., 2017 (https://arxiv.org/abs/1612.03928)). Similarly, Atari is a notoriously difficult transfer learning problem (Schwarz et al., 2018, Rusu et al., 2017).\"}", "{\"title\": \"Thank you for insightful comments (II/II)\", \"comment\": \"> 3) \\\\Theta is not very well-defined\\n\\nWe fully understand your sentiment; we think it's caused by a misunderstanding stemming from the way we describe this constraint. We have updated the paper (section 2.2) to make the following explanations clearer. Intuitively, the purpose of \\\\Theta is to provide an upper bound on what we, as modellers, consider as good performance. Mathematically, we characterize this as some \\\\epsilon bound on the global optimum. However, the only relevant bound is the level of performance we could achieve through our second-best option, i.e. starting from a random initialization or from fine-tuning. This level of performance is what \\\\Theta is about. As such, the global minimum is redundant in the definition, and we have revised the definition of \\\\Theta to avoid it, instead emphasizing that \\\\Theta is defined by the performance we could otherwise achieve.\\n\\n> 4) Experiments\\n\\nThank you for these comments. We were aware of the need for multiple seeds for the RL experiments and have updated our results with averages over 10 seeds. Notably, we find that Leap outperforms a random initialization because it more consistently finds good exploration spaces.
\\n\\nPlease also note that we have made further additions to our experimental section (as per our top-level reply) as requested by other reviewers.\"}", "{\"title\": \"Thank you for insightful comments (I/II)\", \"comment\": \"We are grateful for your insightful comments and glad that you like many aspects of the paper. We understand that your concerns are related to some theoretical parts, so we hope that our clarifications below, extra experiments and appropriate amendments to the paper will resolve your concerns fully.\\n\\n> 1a) the sign of the f part is flipped which is not sound\\n\\nWe believe that this concern comes from the fact that we omitted clarifying the role of this term, which is only a practical regularizer. Hence we apologize and sympathize with your comment. To resolve your concern, let us first say that the regularizer is not essential but, rather, only an optional stabilizer that practically allows for the use of larger step sizes. We have added an ablation study (appendix C) where we show that the regularizer yields faster convergence in terms of meta-gradient steps; however, the final performance is largely equivalent.\\n\\nOur revised manuscript provides a more thorough motivation (section 2.3) that we hope you agree with: in short, the motivation for the stabilizer is that in stochastic gradient descent, the gradient path can be rather volatile. As you say, the gradient path is what it is. As long as it converges, so will Leap (with or without the regularizer). But if we could reduce the noise inherent in SGD, Leap could converge faster, and the stabilizer is a heuristic to do that. Other heuristics can certainly be used, or none at all.\\n\\n\\n> 1b) Second, replacing the Jacobian with the identity matrix is also questionable\\n\\nWe have added a new ablation study (appendix B) which shows that the approximation is quite tight, even for relatively large learning rates. With our best-performing (inner loop) learning rate, we find the approximation to be accurate to the fourth decimal. We hope that you will find this study satisfying, although we also hope that you appreciate that the question of which Jacobian approximation is better to use is out of the scope of our paper and does not affect the main point of our work. \\n\\nMore generally, any meta-learner that optimizes over the inner training process must approximate the Jacobian in order to scale, and the identity assumption is a frequently used approach that works well in practice. The purpose of this paper is to present a new way of framing meta-learning such that it can scale, leveraging existing approaches to the approximations we must make. Our approach relies on prior work by Finn et al. (2017), who found that the assumption works well, and Nichol et al. (2017), who found a similar empirical result, and further showed formally that detaching the Jacobians still optimizes the original objective (approximately).\\n\\nImportantly, we can control the precision of this approximation through the learning rate and the number of gradient steps: for any given number of gradient steps (yielding an upper bound on i), we can choose \\\\alpha so as to ensure the approximation is sufficiently accurate to allow meta-learning.
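To give a flavour of this check (a toy example, not the actual appendix B setup): for a quadratic task loss f(\\theta) = 0.5 \\theta^T A \\theta, i gradient steps yield the Jacobian (I - \\alpha A)^i exactly, so the deviation from the identity can be computed directly:

```python
import numpy as np

# Toy check of the identity approximation J_i ~ I for a quadratic loss
# f(theta) = 0.5 * theta^T A theta, where i SGD steps give J_i = (I - alpha A)^i.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T / 5.0                                        # random PSD curvature

for alpha in (0.001, 0.01, 0.1):
    J = np.linalg.matrix_power(np.eye(5) - alpha * A, 20)  # 20 inner steps
    print(alpha, np.abs(J - np.eye(5)).max())            # deviation grows with alpha
```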
Our ablation study (appendix B) shows that the resulting restriction on \\alpha is not severe.\\n\\nIn summary, we believe that our (admittedly sub-optimal) treatment of the Jacobian is well motivated and in line with existing methods; we do agree that it can be improved, but this is out of our paper's scope. \\n\\n\\n> 2) The proof of Theorem 1 is not complete\\n\\nWe sympathize with your concern and agree that the presentation of the proof can be made clearer. We have taken your suggestions into account and re-organized the proof to directly establish the desired result, d(\\\\psi^0_{s+1}) < d(\\\\psi^0_s), and have included several commentaries to ensure each step of the proof is clearly linked to the overall objective.\\n\\nAs for your specific questions:\\n\\\\beta_s is assumed to be sufficiently small to allow for gradient descent on the current baseline. The proof needs to establish that a new baseline generated from the updated initialization has a shorter gradient path length.\\nThere was indeed a typo in the theorem, leading to a missed term (hence the vector).\"}", "{\"title\": \"Thank you for a thorough review! (II/II)\", \"comment\": \"> 2a) Theorem 1 only asserts convergence to a stationary point\\n \\nCorrect, apologies for our imprecision: we have updated the paper to reflect that Leap converges to a limit point in \\\\Theta. Our point was that gradient descent on the pull-forward objective is equivalent to gradient descent on the original objective, which we now state explicitly in section 2.3.\\n \\n> 2b) The proof of Theorem 1 may be incomplete\\n \\nThe final \\\"leap\\\" is implicit and unfortunately not clearly explained. We have substantially re-organized the proof to prove the desired inequality, d(\\\\psi^0_{s+1}) < d(\\\\psi^0_s), directly. We have also added commentary to more clearly explain each step of the proof, to avoid any confusion as to what is being established.\\n \\n> 3) Unfair comparisons / lack of baselines\\n\\nTo address concerns about lacking strong baselines, we have added two related baseline methods that do not regularize with respect to previous tasks, HAT (Serra et al., 2018) and Progressive Nets (Rusu et al., 2017), to the Multi-CV experiment. We found that neither HAT nor Progressive Nets matches Leap's performance. \\n\\nWe hope that you appreciate that the continual learning problem is very different from the type of multitask learning we are considering here. The point we are trying to make in our paper is that MAML, Reptile, and similar methods cannot scale to problems that require more than a handful of gradient steps, while Leap can. As such, we believe that treating Omniglot, a standard few-shot learning problem where meta-learning does well, as a multi-shot learning problem is highly relevant. We are not arguing that Leap is superior at few-shot learning, though it could be.\\n \\nPlease also note that we have made further additions to our experimental section (as per our top-level reply) as requested by other reviewers.\"}", "{\"title\": \"Thank you for a thorough review! (I/II)\", \"comment\": \"Thank you for such a thorough review! We are very grateful for your feedback and are excited to use it to improve our paper. Together with our added clarifications and new experiments, we hope that we now address your concerns in full. If not, we are looking forward to discussing more.
Please see details below.\\n \\n> 1a) including the loss in the task manifold can make learning unstable if tasks have losses on different magnitudes.\\n \\nLeap is indeed sensitive to differing scales across task objective functions. However, this sensitivity is not due to incorporating the loss in the task manifold, and would exist even if it were omitted. It arises from the fact that the meta gradient is an average over task gradients, which gives tasks with larger gradients (on average) greater influence. As such, this is a general problem applying equally to similar methods, like MAML and Reptile.\\n \\nWe were aware of this, and after submission we experimented with formulations that alleviate this issue. In fact, using the approximate gradient path length (as opposed to the energy) yields a meta gradient that scales all task gradients by a task-specific norm that avoids this issue. This is an important improvement, and we are grateful for your insight here. We have generalized Leap (section 2) to allow for a meta-learning objective under either the energy metric or the length metric. In appendix C, we have added a new thorough ablation study across design choices and find that while Leap converges faster under the length metric (in terms of meta training steps), final performance is equivalent.\\n \\n> 1b) Including the loss in the task manifold is not optional, as suggested by the paper, but essential, to produce loss-minimizing meta-gradients.\\n \\nWe are very grateful for the time you have taken to investigate this issue! Unfortunately, your argument is based on an incorrect derivation, as there is a small mistake in the second equality in C_1: you replace \\\\theta_{i+1} - \\\\theta_i with \\\\alpha g_i, but that would imply gradient ascent. The right identity is \\\\theta_{i+1} - \\\\theta_i = - \\\\alpha g_i (see eq. 1).\\n \\nCorrecting for this, C_1 is not only aligned with the Reptile gradient, it *is* the Reptile gradient (this exact equivalence breaks down when we use the length metric as the meta objective, or if we were to jointly learn other aspects of the gradient update, e.g. learning rate / preconditioning). \\n\\nOur newly added ablation study in appendix C shows that Leap can converge even if we remove the loss from the task manifold, but does so at a significantly slower rate and learns a less useful initialization. Including the loss is a key feature of our framework, because it tells Leap how impactful a gradient step is: a gradient step that has a large influence on the loss will be given greater attention, allowing Leap to \\\"prioritize\\\". The importance of this information is clearly illustrated in the Omniglot experiment, where Leap does significantly better than Reptile.\\n \\nThis ability to prioritize is also what motivated us to add a regularizer, which is perhaps better called a stabilizer. Leap prioritizes large loss deltas, so if the learning rate is too large, or the gradient estimator very noisy, it could happen that we get a large increase in the loss, which would then be prioritized by Leap. Being an anomaly, this doesn't derail Leap; in the end, Leap follows the entire gradient path (see appendix C). As such, the stabilizer is not critical, but it does speed up training and allows the use of more aggressive learning rates.
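As a minimal sketch of this prioritization (using your simplifications J_i = I and S = I with a fixed \\alpha, and the stabilizer applied; hypothetical names, not a faithful reproduction of Algorithm 1), the accumulated meta-gradients compare as follows:

```python
import numpy as np

def meta_gradients(grads, losses, alpha):
    # grads[i] is the task gradient g_i; losses[i] is f(theta_i), i = 0..K.
    # Both accumulants are subtracted from theta_0, up to the meta step size.
    reptile = np.zeros_like(grads[0])
    leap = np.zeros_like(grads[0])
    for i, g in enumerate(grads):
        delta_f = losses[i + 1] - losses[i]
        reptile += alpha * g                # every inner step weighted equally
        leap += (alpha + abs(delta_f)) * g  # stabilized weight: steps that move
    return reptile, leap                    # the loss a lot get more pull
```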
Finally, as you point out, our formulation is just one heuristic, others may be better.\"}", "{\"title\": \"A new transfer learning method for knowledge transfer between distinct tasks\", \"review\": \"In this paper, the authors study an important transfer learning problem, i.e., knowledge transfer between distinct tasks, which is usually called 'far transfer' (instead of 'near transfer'). Specifically, the authors propose a lightweight framework called Leap, which aims to achieve knowledge transfer 'across learning processes'. In particular, a method for meta-learning (see Algorithm 1) is developed, which focuses on minimizing 'the expected length of the path' (see the corresponding term in Eqs.(4-6)). Empirical studies on three public datasets show the effectiveness of the proposed method. Overall, the paper is well presented.\\n\\nSome comments/suggestions:\\n(i) The details of the experiments such as parameter configurations are missing, which makes the results not easy to be reproduced.\\n\\n(ii) For the baseline methods used in the experiments, the authors are suggested to include more state-of-the-art transfer learning methods in order to make the results more convincing.\\n\\n(iii) Finally, if the authors can use some commonly used datasets in existing transfer learning works, the comparative results will be more interesting.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of Transferring Knowledge across Learning Processes\", \"review\": \"\\\\documentclass[10pt]{article}\\n\\\\usepackage{geometry}[1in]\\n\\\\usepackage{amsfonts}\\n\\\\usepackage{amssymb}\\n\\\\usepackage{amsmath}\\n\\\\usepackage{enumerate}\\n\\\\usepackage{indentfirst}\\n\\n\\\\begin{document}\\n\\t\\n\\t\\\\section*{SUMMARY}\\n\\t\\n\\tThe article proposes Leap, a novel meta-learning objective aimed at outperforming state-of-the-art approaches when dealing with collections of tasks that exhibit substantial between-task diversity.\\n\\t\\n\\tSimilarly to prior work such as MAML [1] or Reptile [2], the goal of Leap is to learn an initialization $\\\\theta_{0}$ for the model parameters, shared across tasks, which leads to good and data-efficient generalization performance when fine-tuning the model on a set of held-out tasks. In a nutshell, what sets Leap apart from MAML or Reptile is its cost function, which explicitly accounts for the entire path traversed by the model parameters during task-specific fine-tuning -- i.e., ``inner loop'' optimization --, rather than mainly focusing on the final value attained by the model parameters after fine-tuning. More precisely, Leap looks for an initialization $\\\\theta_{0}$ of the model parameters such that the energy of the path traversed by $\\\\gamma_{\\\\tau}(\\\\theta) = (\\\\theta, f_{\\\\tau}(\\\\theta))$ while fine-tuning $\\\\theta$ to optimize the loss $f_{\\\\tau}(\\\\theta)$ of a task $\\\\tau$ is minimized, on average, across $\\\\tau \\\\sim p(\\\\tau)$. Thus, it could be argued that Leap extends Reptile, which can be informally understood as seeking an initialization $\\\\theta_{0}$ that minimizes the average squared Euclidean distance between $\\\\theta_{0}$ and the model parameters after fine-tuning on each task $\\\\tau \\\\sim p(\\\\tau)$ [2, Section 5.2], by using a distance function between initial and final model parameters that accounts for the geometry of the loss surface of each task during optimization. 
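\n\t\n\tFor concreteness (in notation I introduce formally in the Major Points below), the quantity Leap minimises over the initialisation $\\theta_{0}$ is\n\t\\[\n\t\\mathbb{E}_{\\tau \\sim p(\\tau)}\\left[\\sum_{i=0}^{K_{\\tau} - 1}{\\left\\vert\\left\\vert \\gamma_{\\tau}(\\theta_{i+1}) - \\gamma_{\\tau}(\\theta_{i}) \\right\\vert\\right\\vert^{2}}\\right],\n\t\\]\n\twhich splits into one term over the model parameters and one over the loss values; these are the $C_{\\tau, 1}$ and $C_{\\tau, 2}$ written out in Major Point 1.b.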
\\n\\t\\n\\tThe final algorithm introduced in the paper considers however a variant of the aforementioned cost function, motivated by its authors on the basis of stabilising learning and eliminating the need for Hessian-vector products. The resulting approach is then evaluated on image recognition tasks (Omniglot plus a set of six additional computer vision datasets) as well as reinforcement learning tasks (Atari games).\\n\\t\\n\\t\\\\section*{HIGH-LEVEL ASSESSMENT}\\n\\t\\n\\tThe article proposes an interesting extension of existing work in meta-learning. In a slightly different context (meta-optimization), recent work [3] pointed out the existence of a ``short-horizon bias'' which could arise when using meta-learning objectives that apply only a small number of updates during ``inner-loop'' optimization. This observation is well-aligned with the motivation of this article, in which the authors attempt to complement successful methods like MAML or Reptile to perform well also in situations where a large number of gradient descent-based updates are applied during task-specific fine-tuning. Consequently, I believe the article is timely and relevant.\\n\\t\\n\\tUnfortunately, I have some concerns with the current version of the manuscript regarding (i) the proposed approach and the way it is motivated, (ii) the underlying theoretical results and, perhaps most importantly, (iii) the experimental evaluation. In my opinion, these should ideally be tackled prior to publication. Nonetheless, I believe that the proposed approach is promising and that these concerns can be either addressed or clarified. Thus I look forward to the rebuttal.\\n\\t\\n\\t\\\\section*{MAJOR POINTS}\\n\\t\\n\\t\\\\subsection*{1. Issues regarding proposed approach and its motivation/derivation}\\n\\t\\n\\t\\\\textbf{1.a} Section 2.1 argues in favour of studying the path traversed by $\\\\gamma_{\\\\tau}(\\\\theta) = (\\\\theta, f_{\\\\tau}(\\\\theta))$ rather than the path traversed by the model parameters $\\\\theta$ alone. However, this could in turn exacerbate the difficulty in dealing with collections of tasks for which the loss functions have highly diverse scales. For instance, taking the situation to the extreme, one could define an equivalence class of tasks $[\\\\tau] = \\\\left\\\\{\\\\tau \\\\mid f_{\\\\tau}(\\\\theta) = g(\\\\theta) + \\\\mathrm{constant} \\\\right\\\\}$ such that any two tasks $\\\\tau_{1}, \\\\tau_{2} \\\\in [\\\\tau]$ would essentially represent the same underlying task, but could lead to arbitrarily different values of the Leap cost function. \\n\\t\\n\\tGiven that Leap is a model-agnostic approach, like MAML or Reptile, and thus could be potentially applied in many different settings and domains, I believe the authors should study and discuss (theoretically or experimentally) the robustness of Leap with respect to between-task variation in the scale of the loss functions and, in case the method is indeed sensitive to those, propose an effective scheme to normalize them.\\n\\t\\n\\t\\\\textbf{1.b} The current version of the manuscript motivates defining the cost function in terms of $\\\\gamma_{\\\\tau}(\\\\theta) = (\\\\theta, f_{\\\\tau}(\\\\theta))$ rather than the model parameters $\\\\theta$ alone in order to ``avoid information loss'', making it seem that this modification is ``optional'' or, at least, not critical. Nevertheless, taking a closer look at the Leap objective and the meta-updates it induces, I believe it might actually be essential for the correctness of the approach. 
I elaborate this view in what follows. Let us write the Leap objective for a task $\\\\tau$ as\\n\\t\\\\[\\n\\tF_{\\\\tau}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) = \\\\underbrace{\\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\left\\\\vert\\\\left\\\\vert u^{(i+1)}_{\\\\tau}(\\\\widetilde{\\\\theta}_{0}) - u^{(i)}_{\\\\tau}(\\\\theta_{0}) \\\\right\\\\vert\\\\right\\\\vert^{2}}}_{C_{\\\\tau, 1}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})} + \\\\underbrace{\\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\left( f_{\\\\tau}\\\\left(u^{(i+1)}_{\\\\tau}(\\\\widetilde{\\\\theta}_{0})\\\\right) - f_{\\\\tau}\\\\left(u^{(i)}_{\\\\tau}(\\\\theta_{0})\\\\right) \\\\right)^{2}}}_{C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})},\\n\\t\\\\]\\n\\twhere $\\\\widetilde{\\\\theta}_{0}$ denotes a ``frozen'' or ``detached'' copy of $\\\\theta_{0}$ and $u^{(i)}_{\\\\tau}$ maps $\\\\theta_{0}$ to $\\\\theta_{i}$, the model parameters after applying $i$ gradient descent updates to $f_{\\\\tau}$ according to Equation (1) in the manuscript. Then, differentiating $C_{\\\\tau, 1}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})$ and $C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})$ with respect to $\\\\theta_{0}$ separately yields:\\n\\t\\\\begin{align*}\\n\\t\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 1}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) &= -2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{J_{i}^{T}\\\\left(\\\\theta_{i+1} - \\\\theta_{i} \\\\right)} = -2 \\\\alpha \\\\sum_{i=0}^{K_{\\\\tau} - 1}{J_{i}^{T} g_{i}} \\\\\\\\\\n\\t\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) &= -2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\left(f_{\\\\tau}(\\\\theta_{i+1}) - f_{\\\\tau}(\\\\theta_{i})\\\\right) J_{i}^{T}g_{i}} = -2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\Delta f^{i}_{\\\\tau} J_{i}^{T}g_{i}}\\n\\t\\\\end{align*}\\n\\twhere $J_{i} = J_{\\\\theta_{0}}u^{(i)}_{\\\\tau}(\\\\theta_{0})$ denotes the Jacobian of $u^{(i)}_{\\\\tau}$ with respect to $\\\\theta_{0}$, $g_{i} = \\\\left. \\\\nabla_{\\\\theta} f_{\\\\tau}(\\\\theta)\\\\right\\\\rvert_{\\\\theta=\\\\theta_{i}}$ denotes the gradient of the loss function $f_{\\\\tau}$ evaluated at $\\\\theta_{i}$ and $\\\\Delta f^{i}_{\\\\tau} = f_{\\\\tau}(\\\\theta_{i+1}) - f_{\\\\tau}(\\\\theta_{i})$ stands for the change in the loss function after the $i$-th update. To simplify the exposition, a constant ``inner-loop'' learning rate and no preconditioning were assumed, i.e., $\\\\alpha_{i} = \\\\alpha$ and $S_{i} = I$.\\n\\t\\n\\tFurthermore, the article claims that all Jacobian terms are approximated by identity matrices (i.e., $J_{i} = I$) as suggested in Section 5.2 of [1], leading to the following approximations:\\n\\t\\\\begin{align*}\\n\\t\\t\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 1}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) \\\\approx -2 \\\\alpha \\\\sum_{i=0}^{K_{\\\\tau} - 1}{ g_{i}} \\\\\\\\\\n\\t\\t\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) \\\\approx -2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\Delta f^{i}_{\\\\tau} g_{i}}\\n\\t\\\\end{align*}\\n\\t\\n\\tInterestingly, it can be seen that the contribution to the meta-update of the energy of the path traversed by the model parameters $\\\\theta$, $g_{\\\\mathrm{Leap},1} =\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 1}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})$, actually points in exactly the opposite direction than the meta-update of Reptile, given by $g_{\\\\mathrm{Reptile}} = \\\\sum_{i=0}^{K_{\\\\tau} - 1}{g_{i}}$ (e.g. Equation (27) in [2]). 
In summary, if the Leap objective was defined in terms of $\\\\theta$ rather than $(\\\\theta, f_{\\\\tau}(\\\\theta))$, minimising the Leap cost function should maximise Reptile's cost function and viceversa. It is only the term $g_{\\\\mathrm{Leap},2} =\\\\nabla_{\\\\theta_{0}} C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})$ that presumably ``re-aligns'' $g_{\\\\mathrm{Reptile}}$ and $g_{\\\\mathrm{Leap}} = g_{\\\\mathrm{Leap},1} + g_{\\\\mathrm{Leap},2}$. Indeed, \\n\\t\\\\[\\n\\tg_{\\\\mathrm{Leap}} = 2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\left(-\\\\Delta f^{i}_{\\\\tau} - \\\\alpha \\\\right) g_{i}}\\n\\t\\\\]\\n\\twill have positive inner product with $g_{\\\\mathrm{Reptile}}$ if each gradient update yields a sufficient decrease in the loss $f_{\\\\tau}$, that is, $\\\\Delta f^{i}_{\\\\tau} < -\\\\alpha$.\\n\\t\\n\\tMoreover, I also wonder if this is the reason why the authors introduce the ``regularization'' term $\\\\mu_{\\\\tau}^{i}$, which as it currently stands in the manuscript, does not seem to relate in a particularly intuitive manner to the original objective of minimising the energy of $\\\\gamma(t)$. By introducing $\\\\mu_{\\\\tau}^{i}$, the term $C_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0})$ becomes\\n\\t\\\\[\\n\\t\\tC^{\\\\prime}_{\\\\tau, 2}(\\\\theta_{0},\\\\widetilde{\\\\theta}_{0}) = \\\\sum_{i=0}^{K_{\\\\tau} - 1}{-\\\\mathrm{sign} \\\\left( f_{\\\\tau}\\\\left(u^{(i+1)}_{\\\\tau}(\\\\widetilde{\\\\theta}_{0})\\\\right) - f_{\\\\tau}\\\\left(u^{(i)}_{\\\\tau}(\\\\theta_{0})\\\\right) \\\\right) \\\\left( f_{\\\\tau}\\\\left(u^{(i+1)}_{\\\\tau}(\\\\widetilde{\\\\theta}_{0})\\\\right) - f_{\\\\tau}\\\\left(u^{(i)}_{\\\\tau}(\\\\theta_{0})\\\\right) \\\\right)^{2}},\\n\\t\\\\]\\n\\tleading to $g^{\\\\prime}_{\\\\mathrm{Leap},2} = 2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\vert \\\\Delta f^{i}_{\\\\tau} \\\\vert g_{i}}$ and \\n\\t\\\\[\\n\\tg^{\\\\prime}_{\\\\mathrm{Leap}} = 2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\left(\\\\vert \\\\Delta f^{i}_{\\\\tau} \\\\vert - \\\\alpha \\\\right) g_{i}}.\\n\\t\\\\]\\n\\tIn turn, this relaxes the sufficient condition under which Leap and Reptile lead to meta-updates with positive inner product, namely, it changes the condition $\\\\Delta f^{i}_{\\\\tau} < -\\\\alpha$ by a less restrictive counterpart $\\\\vert \\\\Delta f^{i}_{\\\\tau} \\\\vert \\\\ge \\\\alpha$.\\n\\t\\n\\tIf these derivations happen to be correct, then I believe the way Leap is currently motivated in the article could be argued to be slightly misleading. What seems to be its main inspiration, accounting for the path that the model parameters traverse during fine-tuning, does not seem to be what drives the meta-updates towards the ``correct'' direction. Instead, the component of the objective due to the path traversed by the loss function values appears to be more important or, at least, not optional. Furthermore, I believe the regularization term $\\\\mu_{\\\\tau}^{i}$ should be better motivated, as the current version of the manuscript does not seem to justify its need clearly enough.\\n\\t\\n\\tFinally, under the assumption that the above is not mistaken, I wonder whether further tweaks to the meta-update, such as $g^{\\\\prime\\\\prime}_{\\\\mathrm{Leap}} = 2 \\\\sum_{i=0}^{K_{\\\\tau} - 1}{\\\\mathrm{max}\\\\left(\\\\vert \\\\Delta f^{i}_{\\\\tau} \\\\vert - \\\\alpha, 0 \\\\right) g_{i}}$, could perhaps turn out to be helpful as well.\\n\\n\\t\\\\subsection*{2. 
Theoretical results}\\n\\t\\n\\t\\\\textbf{2.a} Theorem 1 currently claims that the Pull-Forward algorithm converges to a local minimum of Equation (5). However, due to the non-convexity of the objective function, only convergence to a stationary point is established.\\n\\t\\n\\t\\\\textbf{2.b} Most importantly, I am not entirely certain that the proof of Theorem 1 is complete in its current form. As I understand it, using the notation introduced by the authors in Appendix A, the following identities hold:\\n\\t\\\\begin{align*}\\n\\t\\tF(\\\\psi_{s};\\\\Psi_{s}) &= \\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert h_{\\\\tau}^{i} - z_{\\\\tau}^{i} \\\\vert\\\\vert^{2} \\\\\\\\\\n\\t\\tF(\\\\psi_{s+1};\\\\Psi_{s}) &= \\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert h_{\\\\tau}^{i} - x_{\\\\tau}^{i} \\\\vert\\\\vert^{2} \\\\\\\\\\n\\t\\tF(\\\\psi_{s};\\\\Psi_{s+1}) &= \\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert y_{\\\\tau}^{i} - z_{\\\\tau}^{i} \\\\vert\\\\vert^{2} \\\\\\\\\\n\\t\\tF(\\\\psi_{s+1};\\\\Psi_{s+1}) &= \\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert y_{\\\\tau}^{i} - x_{\\\\tau}^{i} \\\\vert\\\\vert^{2}.\\n\\t\\\\end{align*}\\n\\t\\n\\tThe bulk of the proof is then devoted to show that $\\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert y_{\\\\tau}^{i} - z_{\\\\tau}^{i} \\\\vert\\\\vert^{2} = F(\\\\psi_{s};\\\\Psi_{s+1}) \\\\ge \\\\mathbb{E}_{\\\\tau,i} \\\\vert\\\\vert y_{\\\\tau}^{i} - x_{\\\\tau}^{i} \\\\vert\\\\vert^{2} = F(\\\\psi_{s+1};\\\\Psi_{s+1})$. However, I do not immediately see how to make the final ``leap'' from $F(\\\\psi_{s+1};\\\\Psi_{s+1}) \\\\le F(\\\\psi_{s};\\\\Psi_{s+1})$ to the actual claim of the Theorem, $F(\\\\psi_{s+1};\\\\Psi_{s+1}) \\\\le F(\\\\psi_{s};\\\\Psi_{s})$.\\n\\t\\n\\t\\\\subsection*{3. Experimental evaluation}\\n\\t\\n\\t\\\\textbf{3.a} The experimental setup of Section 4.1 closely resembles experiments described in articles that introduced continual learning approaches, such as [4]. However, rather than including [4] as a baseline, the current manuscript compares against meta-learning approaches typically used for few-shot learning, such as MAML and Reptile. Consequently, I would argue the combination of experimental setup and selection of baselines is not entirely fair or, at least, it is incomplete.\\n\\t\\n\\tTo this end, I would suggest to (i) include [4] (or a related continual learning approach) as an additional baseline in the experiments currently described in Section 4.1 as well as (ii) perform a new experiment to compare the performance of Leap to that of MAML and Reptile in few-shot classification tasks using OmniGlot and/or Mini-ImageNet as datasets.\\n\\t\\n\\t\\\\textbf{3.b} The Multi-CV experiment described in Section 4.2 currently does not have strong baselines other than Leap. If possible, I would suggest including [5] in the comparison, as it is the article which inspired this particular experiment.\\n\\t\\n\\t\\\\textbf{3.b} Likewise, the same holds for the experiment described in Section 4.3. In this case, I would suggest comparing to [4] for the same reason described above.\\n\\t\\n\\t\\\\section*{MINOR POINTS}\\n\\t\\n\\t\\\\begin{enumerate}\\n\\t\\n\\t\\\\item In Section 2.1, it is claimed that \\\"gradients that largely point in the same direction indicate a convex loss surface, whereas gradients with frequently opposing directions indicate an ill-conditioned loss landscape\\\". 
Nevertheless, convex loss surfaces can in principle be ill-conditioned as well.\\n\\t\\n\\t\\\\item Introducing a mathematical definition for the metric \\\"area under the training curve\\\" could make the experiment in Section 4.1 more self-contained.\\n\\t\\n\\t\\\\item Several references are outdated, as they cite preprints that have since been accepted at peer-reviewed venues.\\n\\t\\n\\t\\\\item The reinforcement learning experiments in Section 4.3 would benefit from additional runs with multiple seeds, and the subsequent inclusion of confidence intervals.\\n\\t\\n\\t\\\\item I believe certain additional experiments could be insightful. For example, (i) studying how sensitive the performance of Leap is to parameter of the ``inner-loop'' optimizer (e.g. choice of \\n\\toptimizer, learning rate, batch size) or (ii) describing how the introduction of $\\\\mu_{\\\\tau}^{i}$ affects the performance of Leap.\\n\\t\\n\\t\\\\end{enumerate}\\n\\t\\n\\t\\\\section*{TYPOS}\\n\\t\\n\\t\\\\begin{enumerate}\\n\\t\\n\\t\\\\item The first sentence entirely in page 6 appears to have a superfluous word.\\n\\t\\n\\t\\\\item The Taylor series expansion in the proof of Theorem 1 is missing the $O(\\\\bullet)$ terms (or a $\\\\approx$ sign).\\n\\t\\n\\t\\\\item Also in the proof of Theorem 1, if $c_{\\\\tau}^{i} = (\\\\delta_{\\\\tau}^{i})^{2} - \\\\alpha_{\\\\tau}^{i}\\\\xi_{\\\\tau}^{i}\\\\delta_{\\\\tau}^{i}$, wouldn't $\\\\omega = \\\\underset{\\\\tau, i}{\\\\mathrm{sup}} \\\\langle \\\\hat{x}^{i}_{\\\\tau} - \\\\hat{z}^{i}_{\\\\tau}, g(\\\\hat{x}^{i}_{\\\\tau}) - g(\\\\hat{z}^{i}_{\\\\tau})\\\\rangle + \\\\xi_{\\\\tau}^{i}\\\\delta_{\\\\tau}^{i}$ instead?\\n\\t\\n\\t\\\\end{enumerate}\\n\\n \\\\section*{ANSWER TO REBUTTAL}\\n Please see comments in the thread.\\n\\n\\t\\n\\t\\\\section*{REFERENCES}\\n\\t\\n\\t\\\\begin{enumerate}[ {[}1{]} ]\\n\\t\\t\\\\item Finn et al. ``Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.'' International Conference on Machine Learning. 2017.\\n\\t\\t\\\\item Nichol et al. ``On First-Order Meta-Learning Algorithms.'' arXiv preprint. 2018\\n\\t\\t\\\\item Wu et al. ``Understanding Short-Horizon Bias in Stochastic Meta-Optimization.'' International Conference on Learning Representations. 2018.\\n\\t\\t\\\\item Schwarz et al. ``Progress \\\\& Compress: A scalable framework for continual learning.'' International Conference on Machine Learning. 2018.\\n\\t\\t\\\\item Serr{\\\\`a} et al. ``Overcoming Catastrophic Forgetting with Hard Attention to the Task.'' International Conference on Machine Learning. 2018.\\n\\t\\\\end{enumerate}\\t\\n\\\\end{document}\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea, sufficient empirical evidence, with certain questionable tricks\", \"review\": \"This paper proposes Leap, a meta-learning procedure that finds better initialization for new tasks. Leap is based on past training/optimization trajectories and updates the initialization to minimize the total trajectory lengths. Experiments show that Leap outperforms popular alternatives like MAML and Reptile.\\n\\nPros\\n- Novel idea\\n- Relatively well-written\\n- Sufficient experiment evidence\\n\\nCons\\n- There exist several gaps between the theory and the algorithm\\n\\nI have several concerns.\\n1. The idea is clearly delivered, but there are several practical treatments that are questionable. 
The first special treatment is that on page 5, when the objective is increased instead of decreased, the sign of the f part is flipped, which is not theoretically sound. It is basically saying that when we move from psi^i to psi^{i+1} with increased objective, we lie to the meta-learner that it is decreasing. The optimization trajectory is what it is. It would be beneficial to see the effect of removing this trick, at least in the experiments. Second, replacing the Jacobian with the identity matrix is also questionable. Suppose we use a very small but constant learning rate alpha for a convex problem. Then J^i=(I-G)^i goes to the zero matrix as i increases (G is small positive). However, instead, the paper uses J^i=I for all i. This means that the contributions for all i are the same, which is unsubstantiated.\\n\\n2. The proof of Thm1 in Appendix A is not complete. For example, \\\"By assumption, beta is sufficiently small to satisfy F\\\", which I do not understand the inequality. Is there a missing i superscript? Isn't this the exact inequality we are trying to prove for i=0? As another example, \\\"if the right-most term is positive in expectation, we are done\\\", how so? BTW, the right-most term is a vector so there must be something missing. It would be more understandable if the proof includes a high-level proof roadmap, and frequently reminds the reader where we are in the overall proof now.\\n\\n3. The set \\\\Theta is not very well-defined, and sometimes misleading. Above Eq.(6), \\\\Theta is mathematically defined as the intersection of points whose final solutions are within a tolerance of the *global* optimum, which is in fact unknown. As a result, finding a good initialization in \\\\Theta for all the tasks as in Eq.(5) is not well-defined.\\n\\n4. About the experiments. What is the \\\"Finetuning\\\" in Table 1? Presumably it is multi-headed but it should be made explicit. What is the standard deviation for Fig.4? The claim that \\\"Leap learns faster than a random initialization\\\" for Breakout is not convincing at all.\\n\\nMinors\\n- In Eq.(4), f is a scalar so abs should suffice. This also applies to subsequent formulations.\\n- \\\\mu is introduced above Eq.(8) but never used in the gradient formula.\\n- On p6, there is a missing norm notation when introducing the Reptile algorithm.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
r1xrb3CqtQ
Latent Domain Transfer: Crossing modalities with Bridging Autoencoders
[ "Yingtao Tian", "Jesse Engel" ]
Domain transfer is a exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (ex. image-to-image, video-video), utilizing similar or shared networks to transform domain specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (ex. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (ex. variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.
[ "Generative Model", "Latent Space", "Domain Transfer" ]
https://openreview.net/pdf?id=r1xrb3CqtQ
https://openreview.net/forum?id=r1xrb3CqtQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByljjOTMx4", "SJgnipVK0Q", "B1epHTEY0Q", "S1x-p2EFA7", "SJgdto_epX", "H1xi2iv5hm", "H1gpdRy5hm", "r1eOUNxx5X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1544898723149, 1543224740019, 1543224645436, 1543224504879, 1541602176049, 1541204915050, 1541172853121, 1538421839914 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1166/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1166/Authors" ], [ "ICLR.cc/2019/Conference/Paper1166/Authors" ], [ "ICLR.cc/2019/Conference/Paper1166/Authors" ], [ "ICLR.cc/2019/Conference/Paper1166/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1166/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1166/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1166/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies the problem of heterogeneous domain transfer, for example across different data modalities.\\n\\nThe comments of the reviewers are overlapping to a great extent. On the one hand, the reviewers and AC agree that the problem considered is very interesting and deserves more attention.\\n\\nOn the other hand, the reviewers have raised concerns about the amount of novelty contained in this manuscript, as well as convincingness of results. The AC understands the authors\\u2019 argument that a simple method can be a feature and not a flaw, however this work still does not feel complete. Even within a relatively simple framework, it would be desirable to examine the problem from multiple angles and \\\"disentangle\\\" the effects of the different hypotheses \\u2013 for example the reviewers have drawn attention to end-to-end training and comparison with other baselines. The points raised above, together with improving the manuscript (as commented by reviewers) would make this work more complete.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting problem, but work does not feel complete\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your time and insight in your review. We\\u2019ve done our best to address your key points below.\\n\\n> The technical parts are weak since the authors use the existing method with to some extent evolution. \\n\\nWe would like to highlight that the problem this paper addresses (cross-modal domain transfer) is difficult and, to the best of our knowledge, relatively unexamined in the literature. We believe it is actually a desirable feature, and not a fault, that the proposed method is fairly straightforward and easy to implement. From a technical standpoint, the main contribution is not a single new model with which to perform domain transfer, but showing it is possible to \\u201cglue together\\u201d the plethora of existing (and yet to be invented) models with small, simple, and efficient bridging models. While we have limited ourselves to several easily quantifiable problems for this paper, nothing about the proposed methods is limited to these models or datasets.\\n\\n> The proposed method can transfer the positive knowledge. However, for the transfer learning, one concerned and important issue is that some negative knowledge information can be also transferred. So how to avoid the negative transferring? Some necessary discussions about this should be given in the manuscript.\\n\\n\\nThank you for the suggestion. 
Transfer learning does indeed share some surface similarities to the proposed work in that it uses pretrained networks. We would like to highlight, however, that transfer learning is actually quite distinct from domain transfer in that the pretrained networks are used as feature extractors for a new task, while in this work the pretrained networks are used for the same task on which they were trained (generating samples from a given distribution). Since no information is passing between the pretrained networks, the features learned in one domain are not informing the solution of generation in the other domain. \\n\\n> There are many grammar errors in the current manuscript. The authors are suggested to improve the English writing.\\n\\n\\nWe agree with your assessment and apologize for the rushed condition of the initial submission. You will hopefully find that the updated draft has been extensively revised and restructured to improve the clarity of the writing and the arguments.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your time and expertise in your review, we've addressed the key points below:\\n\\n> (i) The technical novelty (considering the two-step solution) is limited though the studied problem is very interesting.\\n\\nWe would like to highlight that the problem this paper addresses (cross-modal domain transfer) is difficult and, to the best of our knowledge, relatively unexamined in the literature. We believe it is actually a desirable feature, and not a fault, that the proposed method is fairly straightforward and easy to implement. \\n\\nSimilarly, we believe the two-step training actually has some important advantages over end-to-end training. First, this approach allows us to combine models that use dramatically different training procedures. We demonstrate that in this paper by transferring between a maximum-likelihood trained VAE and an adversarial-trained GAN. Second, for large generative models that take weeks to train, it would be infeasible to retrain the entire model for each new domain mapping. As a small example from this paper, training the bridging autoencoder from MNIST->SC09 takes ~1 hour on a single gpu, while retraining the SC09 WaveGAN takes ~4 days. We have also restricted ourselves to intuitive class-level mappings for the purpose of quantitative comparisons in this paper, but in a creative application it is likely each user would prefer their own unique mapping between domains. \\n\\n> \\u201cThe authors are suggested to put the proposed solution in the context of transfer learning, which may better show the significance of this work. Currently, such a discussion and comparison is missing.\\u201d\\n\\nThank you for the suggestion. Transfer learning does indeed share some surface similarities to the proposed work in that it uses pretrained networks. We would like to highlight, however, that transfer learning is actually quite distinct from domain transfer in that the pretrained networks are used as feature extractors for a new task, while in this work the pretrained networks are used for the same task on which they were trained (generating samples from a given distribution). Since no information is passing between the pretrained networks, the features learned in one domain are not informing the solution of generation in the other domain. \\n\\n> \\u201cThere are many grammar errors throughout the whole paper. 
The authors are suggested to significantly improve the linguistic quality.\\u201d\\n> \\u201cA section of Conclusions is missing.\\u201d\\n\\n\\nWe agree with your assessment and apologize for the rushed condition of the initial submission. You will hopefully find that the updated draft has been extensively revised and restructured to improve the clarity of the writing and the arguments, including adding a conclusion section.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your time and insight in your review. We've done our best to address your concerns with paper revisions and in the comments below:\\n\\n> \\u201cThe paper is not well-organized, the structure of the paper need improving.\\u201d\\n\\nWe agree with your assessment and thank you for your helpful suggestions. The updated draft has been extensively revised and restructured. For example, following your advice, we have moved the new related work section to follow the methods, and added more details to the figure and table captions to make their explanations self contained. \\n\\n> \\u201cThe technical implementation of the proposition is somewhat trivial. Why the generative model should be pre-trained. Why not try in the end-to-end way. \\u201c\\n\\nWe would like to highlight that the problem this paper addresses (cross-modal domain transfer) is difficult and, to the best of our knowledge, relatively unexamined in the literature. We believe it is actually a desirable feature, and not a fault, that the proposed method is fairly straightforward and easy to implement. \\n\\nThe point about end-to-end training is well-taken. For simpler problems, like MNIST <-> MNIST, and MNIST<-> Fashion MNIST, end-to-end training is indeed tractable. However, we would like to highlight some advantages of the multi-step approach. First, this approach allows us to combine models that use dramatically different training procedures. We demonstrate that in this paper by transferring between a maximum-likelihood trained VAE and an adversarial-trained GAN. Second, for large generative models that take weeks to train, it would be infeasible to retrain the entire model for each new domain mapping. As a small example from this paper, training the bridging autoencoder from MNIST->SC09 takes ~1 hour on a single gpu, while retraining the SC09 WaveGAN takes ~4 days. We have also restricted ourselves to intuitive class-level mappings for the purpose of quantitative comparisons in this paper, but in a creative application it is likely each user would prefer their own unique mapping between domains. \\n\\n\\n> \\u201cThe authors argue that CycleGAN suffers from some drawback. Why do not the authors compare with CycleGAN in this paper?\\u201d\\n\\nThank you for the observation that we could use better external baselines to compare against for domain transfer. We have added comparisons to pix2pix and CycleGAN for MNIST <-> Fashion MNIST. We find lower transfer accuracies and image quality (which we calculate with Frechet Inception Distance), which can be seen in Table 2 and Appendix C. The MNIST <-> MNIST scenario involved transferring between pretrained models with different initial conditions which is not directly comparable and has been omitted. 
In MNIST <-> SC09, the two domains were too distinct to provide any reasonable transfer with existing methods.\\n\\nAs we mentioned in the paper, we also tried to train a CycleGAN between latent spaces, but weren\\u2019t unable to train the model at all, as the reconstruction loss was often trivially satisfied between models trained with the same Gaussian prior. This was an important finding for us, and gave us motivation to look at other methods for modeling transfer between latent spaces. \\n\\n> \\u201cthe authors also need to compare with more state-of-the-art methods, such as StarGAN.\\u201d\\n\\nAs mentioned above, thank you for pointing out the need for more baselines and we have now included comparisons to pix2pix and CycleGAN. We agree that StarGAN is an impressive model for multi-domain transfer, however, unlike the rest of the methods we compare, it requires additional target label information to be provided by the user at transfer time, which we feel makes CycleGAN a more natural comparison. Also, like CycleGAN, to the best of our knowledge these techniques still rely on structural similarities between domains and do not work as well for multi-modal transfer.\\n\\n> \\u201cSome implementation details are not clearly stated. ...how many labeled samples are used in Table 2?\\u201d\\n\\nAs part of the paper revisions, we have done our best to make all the implementation details more explicit. For example, in the caption table 2, we discuss that we use all available labels (60k for MNIST<-> Fashion MNIST, 16k for MNIST <-> SC09). Table 3 then performs a comparison as the amount of data labels are reduced.\"}", "{\"title\": \"The technical part is weak\", \"review\": \"The authors demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). Some detailed comments are listed as follows,\\n1. The technical parts are weak since the authors use the existing method with to some extent evolution. \\n\\n2 The proposed method can transfer the positive knowledge. However, for the transfer learning, one concerned and important issue is that some negative knowledge information can be also transferred. So how to avoid the negative transferring? Some necessary discussions about this should be given in the manuscript.\\n\\n2 There are many grammar errors in the current manuscript. The authors are suggested to improve the English writing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A two-step solution for heterogeneous domain transfer (e.g., image-to-audio)\", \"review\": \"In this paper, the authors study an interesting problem, i.e., heterogeneous domain transfer such as knowledge transfer between an image domain and a speech/audio domain. In particular, the proposed solution contains two major steps: (i) pre-train each domain via VAE or GAN, and (ii) train a conditional VAE in semi-supervised manner in order to bridge two domains (see Section 2.2). 
Experiments on three public datasets (including three cross-domain settings) show the effectiveness of the proposed two-step solution.\\n\\nSome Comments/suggestions:\\n(i) The technical novelty (considering the two-step solution) is limited though the studied problem is very interesting.\\n\\n(ii) The authors are suggested to put the proposed solution in the context of transfer learning, which may better show the significance of this work. Currently, such a discussion and comparison is missing.\\n\\n(iii) There are many grammar errors throughout the whole paper. The authors are suggested to significantly improve the linguistic quality.\\n\\n(iv) A section of Conclusions is missing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"poor organization, trivial techical implementation\", \"review\": \"In this paper, the authors have proposed a cross domain transferring methods, supervised by three category of losses. The experiments somewhat demonstrate the effective of this method. However, this paper still suffers from some drawbacks as below:\\nThe paper is not well-organized, the structure of the paper need improving. For example, the related work is put almost at the end of the paper and the tables and figures are hard to follow sometimes.\\nThe technical implementation of the proposition is somewhat trivial. Why the generative model should be pre-trained. Why not try in the end-to-end way. \\nThe experiments are not convincing. The authors argue that CycleGAN suffers from some drawback. Why do not the authors compare with CycleGAN in this paper? By the way, the authors also need to compare with more state-of-the-art methods, such as StarGAN.\\nSome implementation details are not clearly stated. For example, the authors say \\u201cOur goal can thus be stated as learning transformations that preserve locality and semantic alignment, while requiring as few labels from a user as possible.\\u201d So, how many labeled samples are used in Table 2?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A link to audio samples\", \"comment\": \"There was a mistake in the original that it didn't include a link to the audio samples. They are anonymously available here:\", \"https\": \"//drive.google.com/drive/u/8/folders/12u6fKvg0St6gjQ_c2bThX9B2KRJb7Cvk\\nApologies.\"}" ] }
SyzVb3CcFX
Time-Agnostic Prediction: Predicting Predictable Video Frames
[ "Dinesh Jayaraman", "Frederik Ebert", "Alexei Efros", "Sergey Levine" ]
Prediction is arguably one of the most basic functions of an intelligent system. In general, the problem of predicting events in the future or between two waypoints is exceedingly difficult. However, most phenomena naturally pass through relatively predictable bottlenecks---while we cannot predict the precise trajectory of a robot arm between being at rest and holding an object up, we can be certain that it must have picked the object up. To exploit this, we decouple visual prediction from a rigid notion of time. While conventional approaches predict frames at regularly spaced temporal intervals, our time-agnostic predictors (TAP) are not tied to specific times so that they may instead discover predictable "bottleneck" frames no matter when they occur. We evaluate our approach for future and intermediate frame prediction across three robotic manipulation tasks. Our predictions are not only of higher visual quality, but also correspond to coherent semantic subgoals in temporally extended tasks.
[ "visual prediction", "subgoal generation", "bottleneck states", "time-agnostic" ]
https://openreview.net/pdf?id=SyzVb3CcFX
https://openreview.net/forum?id=SyzVb3CcFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rylvZgZqxE", "B1x59ApR14", "SJxRR70saQ", "BJlRH2cjp7", "Skl_eodiT7", "SJgSfKuoTX", "Hkl7KkHj6X", "BJxcsIMF6m", "Byxjv_mHam", "B1g2eZUqnm", "H1eLJv-cn7", "BkevZnkq2Q" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545371646996, 1544638097860, 1542345685629, 1542331461626, 1542322927818, 1542322444958, 1542307707296, 1542166177947, 1541908579289, 1541198068062, 1541179102290, 1541172223343 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "ICLR.cc/2019/Conference/Paper1165/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "~Alexander_Neitz1" ], [ "ICLR.cc/2019/Conference/Paper1165/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1165/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1165/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1165/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Clarifying contribution and real-data experiments\", \"comment\": \"Thank you, we are glad to have the opportunity to present our work at ICLR 2019!\", \"a_couple_of_clarifications_for_interested_readers\": \"(i) This paper's contribution is not about maintaining prediction uncertainty through VAEs. Instead, the idea is to allow predictors to select which timesteps to make predictions about. We show that this not only improves prediction quality but also consistently predicts semantically coherent changepoints that can be used, for instance, as subgoals for planning.\\n(ii) We do in fact have results for real videos (the BAIR pushing dataset) both in our paper and on the website.\\n\\nThank you,\\n(On behalf of the authors)\"}", "{\"metareview\": \"The paper introduces a new and convincing method for video frame prediction, by adding prediction uncertainty through VAEs. The results are convincing, and the reviewers are convinced.\\n\\nIt's unfortunate however that the method is only evaluated on simulated data. Letting it loose on real data would cement the results and merit oral representation; in the current form, poster presentation is recommended.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"solid work, would merit from more experimentation\"}", "{\"title\": \"Adding a para pointing out as concurrent work\", \"comment\": \"Thanks for bringing this to our notice, and congrats on the great paper! We have added a brief paragraph in related work pointing this out as concurrent work with similar ideas.\\n> \\\"Concurrently with us, Neitz et al 2018 also propose a similar idea that allows a predictor to select when to predict, and their experiments demonstrate its advantages in specially constructed tasks with clear bottlenecks. In comparison, we propose not just the basic time-agnostic loss (Sec 3.1), but also improvements in Sec 3.2 to 3.4 that allow time-agnostic prediction to work in more general tasks such as synthetic and real videos of robotic object manipulation. 
Our experiments also test the quality of discovered bottlenecks in these scenarios and their usefulness as subgoals for hierarchical planning.\\\"\"}", "{\"title\": \"(R1 review response) Limitations para added, possible additions from appendices to main text?\", \"comment\": \"Thank you for your thoughtful feedback. We address your concerns below.\\n\\n* \\u201cBeyond the method\\u2019s explanation, I found the experiment section to be poorly structured. The figures are small and difficult to follow \\u2013 looking at all the figures it felt that \\u201cmore is actually less\\u201d. Many of the evidence required to understand the method are only included in the appendices. However, having spent the time to go back and forth, I believe the experiments to be scientifically sound and convincing.\\u201d\\n\\nThank you for taking the time to go through the appendices to understand our submission better, and apologies for having made this necessary. While we tried to keep the main manuscript concise, this was intended to aid readability and comprehension rather than hinder it. Is there any particular information you would suggest as particularly important to move up from appendices to the main text? We could also use some part of the remaining space to Figs 5, 6, 7, 8, 9 (showing prediction results from various methods) larger. Accounting for changes after other reviewer feedback, we now have about 1.2 pages left to reach the 10-page maximum. \\n\\n---------------------------------------------\\n* \\u201cI would have liked a discussion in the conclusion on the method\\u2019s limitation. This reviewer believes that the approach will struggle to deal with cyclic motions. In this case the discovered bottlenecks might not be the most useful to predict, as these will correspond to future frames (not nearby though) that are visually similar to the start (in forward) or to the start/end (in bidirectional) frames. \\u201d\\n\\nThank you for this suggestion. We have added a discussion of limitations to Sec 5, including this point. Yes, TAP in cyclic cases may not work in its current form. In particular, we expect that TAP will converge to the easy solution of repeating the input frame, since it will recur in the course of a cyclic motion. One solution is to use $\\\\mathcal{E}\\u2019$ in the generalized minimum formulation of Sec 3.2 to express a preference for predicting frames that are different from the input frames. Another possibility is to preprocess states in some way to get TAP to work meaningfully. For instance, if the state is modified by appending the state visitation count, then the \\u201ccyclicity\\u201d of the trajectory would be destroyed so that bottlenecks are once again meaningful. \\n\\n---------------------------------------------\\n* \\u201cAn additional loss to reward difficult-to-predict frames (though certain compared to other times) might be an interesting additional to conquer more realistic (rather than synthetic) video sequences.\\u201d\\n\\nOur generalized minimum formulation (Sec 3.2) already permits expressing additional objectives apart from ease of prediction. For instance, when we set $w(t)$ in future frame prediction to increase linearly with time as in Sec 4, we are indicating that while farther-away frames may be harder to predict, we would still prefer to predict those, so long as the prediction error is not too high. The idea of preferring predictions that are different from the input frames may also help address this point (cf. 
response to question about cyclicity above). Please also note that we already show results on BAIR pushing videos (non-synthetic video sequences), but we agree that results on videos from a broader domain would be interesting.\"}", "{\"title\": \"BAIR pushing videos added\", \"comment\": \"Thank you for these suggestions. We have now added BAIR pushing videos to the website.\\n\\nKTH has some of the cyclic structure that R1 references in their review, so it may not be a good fit for TAP out of the box (we are adding a note on this limitation in our conclusions section). We will nevertheless attempt this soon, after prioritizing official review responses.\"}", "{\"title\": \"(R3 review response) Generality of planning subgoals provided by Time-Agnostic Prediction\", \"comment\": \"Thank you for your feedback and careful observations.\\n\\n-------------------------------------------------------------------------\\n* \\u201cThe hierarchical planning evaluation experiment seems like it would clearly favor TAP compared to a fixed model (why would the middle prediction in time of the fixed model correspond to reasonable planning goals?).\\u201d\", \"our_hierarchical_planning_experiment_is_designed_to_evaluate_the_utility_of_a_subgoal_as_follows\": \"a planner spends half its time budget trying to move towards the subgoal, and the remaining half moving towards the end goal. Good subgoals are those that lead to better goal-reaching performance. Since the planner spends exactly half its time planning towards the subgoal, the middle frame is the most obvious choice for a subgoal---this is why FIX targets the middle frame. We will make this clearer in the text.\\n\\nHowever, if the above argument is not convincing, please let us know if the following experiment would help alleviate your concern. Rather than *predict* the middle frame, we could use the ground truth middle frame as a subgoal---let us call this baseline \\u201dGT-MIDDLE\\u201d. More specifically, we would sample a trajectory and the task would be to get from its start state to its end state, using the true middle image in that trajectory as a subgoal. This would answer the question: \\u201cif we were *perfectly* able to predict the middle frame, would that serve as a useful subgoal for planning?\\u201d If GT-MIDDLE is shown to work well, then we could conclude that predicting the middle frame would therefore provide a reasonable baseline.\\n\\nPlease let us know if you believe that such an experiment would be informative and help to address your concern. Otherwise, do you perhaps have an alternative suggestion for a subgoal generation baseline?\\n\\n-------------------------------------------------------------------------\\n* \\u201cFurthermore, for certain tasks and environments it seems like the uncertain frames might be the ones that correspond to important subgoals. For example, for the BAIR Push Dataset, usually the harder frames to predict are the arm-object interactions, which probably would correspond to the relevant subgoals.\\u201d\\n\\nAs shown in the paper, in all our experiments, the low-uncertainty predictions from TAP have corresponded to semantically coherent task decompositions and intuitive subgoals. We believe there are good reasons to expect this to hold generally. Suppose that we had a training dataset that consisted of all possible trajectories between every pair of start and goal states. 
For example, in the maze in Fig 1, where start and goal states are fixed, suppose our training dataset included *every* possible successful trajectory. Then the easiest-to-predict frames would correspond to frames that occur in *every* possible trajectory between start and goal, which are intuitively good subgoals to aim for. In the maze of Fig 1, these would be the asterisks. Note that other positions in the maze (which would be more uncertain because they do not occur in all trajectories) could also be valid subgoals. But while the asterisks are guaranteed to lie on the shortest path from start to goal, more uncertain subgoals could lead to detours from this shortest path.\\n\\nMore specifically for the BAIR pushing case in your comment, some example BAIR pushing videos are shown at https://sites.google.com/view/ta-pred/home#h.p_MyPmVZLyHypx. They consist of random arm motions, so that the arm is never continuously in contact with objects throughout the video. When an object is displaced in a video, our method usually produces images of the arm initiating or ending contact with that object, as shown in Fig 8. These correspond to low-uncertainty bottlenecks---the arm *must* have come into contact with that object, no matter how precisely the pushing motion occurred. We believe these states are also the most relevant subgoals in these cases, since they represent states that any successful trajectory has to pass through, as argued above.\\n\\nIt is possible to imagine other more difficult settings than BAIR pushing, where bottlenecks would correspond to images of an arm dynamically pushing an object, which, we agree, is a harder prediction task. Since TAP is designed to select *relatively* easier frames, this would not affect it adversely; it would continue to predict the easiest among those difficult frames. \\n\\n--------------------------------------------------------\\nOverall, we broadly agree it is difficult to rigorously claim that TAP *always* discovers meaningful subgoals, since there is no agreed-upon notion of what constitutes a good subgoal. In our responses above, we argue that TAP naturally targets one reasonable notion of a good subgoal --- a state that would occur in a large fraction of trajectories between start and goal states. We will qualify the subgoals claim more clearly in this way in the text if R3 agrees that this would be appropriate.\"}", "{\"comment\": \"Very exciting work! It would be great if you could include a brief comparison to the method proposed in the paper \\\"Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models\\\" (Neitz et. al, NIPS 2018; https://arxiv.org/abs/1808.04768 ).\", \"title\": \"Related work\"}", "{\"title\": \"(R2 review response) Clarifications and pdf updates\", \"comment\": \"Thank you for your detailed questions and suggestions. We address your concerns below.\\n-------------------------------------------------------------------------\\n* \\\"What results does figure 4 present? Are they only for the grasping sequence? Please specify. \\\"\\n\\nYes, Fig 4 is only for the grasping sequence. As stated in the para under \\u201cForward prediction\\u201d on Pg 6, this is a scatter plot of minimum l1 error versus the closest-matching step for various models. 
We have changed the caption to present this information near the figure.\\n\\n-------------------------------------------------------------------------\\n* \\\"In connection with the previous comment, I think the results would be more readable if the match-steps were normalized for each sequence (at least for Figure 4). There would be a clearer mapping between fixT methods and the normalized matching step (e.g., we would expect fix0.75 to achieve a matching step of 0.75 instead of 6 / ? ).\\\" \\n\\nThank you for pointing this out. We agree this does aid readability. We now present normalized match-step in both the figure as well as the table, for uniformity.\\n\\n-------------------------------------------------------------------------\\n* \\u201cThe statement \\u201cthe genmin w(t) preference is bell-shaped and varies from 2/3 at the ends to 1 at the middle frame\\u201d is vague.\\u201d\\n\\nAgreed that this information should be more clearly presented. We omitted this in the submission to save space, but have included this in Appendix E now. In our experiments, w(t) was constructed as follows: the weight would rise linearly from baseval=0.66 at the first frame to 1.0 at the fifth frame, then stay at 1.0 for T-10 frames. From the (T-5)-th frame, it would drop linearly to baseval once more at the last frame. The only hyperparameter we tuned was baseval (search over 2/3 and 1/3).\\n\\n-------------------------------------------------------------------------\\n* \\u201cSection 4, Bottleneck discovery frequency. I am not entirely convinced by the measuring of bottleneck states. You say that a distance is computed between the predicted object position and the ground-truth object position. If a model were to output exactly the same frame as given in context, would the distance be zero? If so, doesn\\u2019t that mean that a model who predicts a non-bottleneck state before or after the robotic arm moves the pieces is estimated to have a very good bottleneck prediction frequency? I found this part of the paper the hardest to follow and the least convincing.\\u201d\\n\\nA model that output the same frame as given in context would actually incur a heavy error. The distance is computed between the predicted object positions and the ground truth object positions *when exactly one of the two objects has been moved*, which may be reasonably assumed to be the \\u201cbottleneck state\\u201d in this task. This means that outputting the starting context frame or the ending context frame would both produce heavy distance errors: the entire displacement of the first object, or the entire displacement of the second object, respectively.\\n\\nUnfortunately, the details of this measurement are quite involved, so we were forced to relegate this to Appendix E (now Appendix F after revisions). We have now added the example suggested by your comment to the paragraph in Section 4, to serve as an intuitive representative of the behavior of our method. Please let us know if this helps make things clearer.\\n\\n----------------------------------------------------------------------------\\n* \\\"I\\u2019m curious as to why you called the method in section 3.2 the \\u201cGeneralized minimum\\u201d? It feels more like a weighted (or preference weighted) minimum to me and confused me a few times as I was reading the paper (GENerative? GENeralized? what\\u2019s general about it?). Just a comment.\\\"\\n\\nThe generalization here has to do with the relationship between Eq 4 and 5 in the submission. 
To call it \\u201cgeneralized minimum\\u201d is indeed not the most precise since there may be many other ways to generalize the minimum operator, but we choose to call it this for want of a better, concise term.\\n\\nHere is the case for considering it a generalization of the minimum. A standard minimum over i of a function f(i) can be written as: min_i f(i) = f({argmin_i f(i)}), as in Eq 4 in the submission. \\n\\nNow note that there are two occurrences of f(i) in the RHS expression above. The \\u201cgeneralized minimum\\u201d of Eq 5 generalizes this by allowing those two functions to be different as long as they are defined over the same domain: genmin_i f(i) = f({argmin_i g(i)}). \\n\\nRestating in words, the standard minimum value of a function may be defined as the value of the function at *its own argmin*. Instead, the *generalized* minimum of a function f(.) with respect to a function g(.), is the value of f(.) evaluated at *the argmin of g(.)*. This is how \\u201cgenmin\\u201d generalizes \\u201cmin.\\u201d\\n\\nPlease also take a look at the updated pdf portions and let us know if you have any further comments. Thank you.\"}", "{\"comment\": \"This approach seems simple as well as effective for me. However, I have two suggestions to improve the message of this paper.\\n\\nFirst of all, this paper uses few robot simulations and the BAIR pushing (robot arm) dataset, which are relatively easy to memorize/predict compared to the real-world videos. Thus, to reassure this concern, I recommend the author to add results on human action dataset, like Human 3.6M or KTH (if this model takes a long time to train) for example. I believe such an experiment would strengthen this work.\\n\\nAlso, on the website ( https://sites.google.com/view/ta-pred ) shared in this paper, it only includes videos for the simulation task. So, I believe it would be helpful to understand the effectiveness of this work if the author shares the videos from the BAIR pushing (and the human action) dataset on the supplementary website also.\", \"title\": \"Two suggestions: (1) Add an experiment on real videos (e.g. human action) and (2) upload sample videos for the BAIR pushing dataset on the supplementary website.\"}", "{\"title\": \"simple yet effective\", \"review\": \"The authors present a method on prediction of frames in a video, with the key contribution being that the target prediction is floating, resolved by a minimum on the error of prediction. The authors show the merits of the approach on a synthetic benchmark of object manipulation with a robotic arm.\", \"quality\": \"this paper appears to contain a lot of work, and in general is of high quality.\", \"clarity\": \"some sections of the paper were harder to digest, but overall the quality of the writing is good and the authors have made efforts to present examples and diagrams where appropriate. Fig 1, especially helps one get a quick understanding of the concept of a `bottleneck` state.\", \"originality\": \"To the extent of my knowledge, this work is novel. It proposes a new loss function, which is an interesting direction to explore.\", \"significance\": \"I would say this work is significant. There appears to be a significant improvement in the visual quality of predictions. 
In most cases, the L1 error metric does not show such a huge improvement, but the visual difference is remarkable, so this goes to show that the L1 metric is perhaps not good enough at this point.\\n\\nOverall, I think this work is significant and I would recommend its acceptance for publication at ICLR. There are some drawbacks, but I don\\u2019t think they are major or would justify rejection (see comments below). \\n\\n\\nI\\u2019m curious as to why you called the method in section 3.2 the \\u201cGeneralized minimum\\u201d? It feels more like a weighted (or preference weighted) minimum to me and confused me a few times as I was reading the paper (GENerative? GENeralized? what\\u2019s general about it?). Just a comment.\\n\\nWhat results does Figure 4 present? Are they only for the grasping sequence? Please specify. \\n\\nIn connection with the previous comment, I think the results would be more readable if the match-steps were normalized for each sequence (at least for Figure 4). There would be a clearer mapping between fixT methods and the normalized matching step (e.g., we would expect fix0.75 to achieve a matching step of 0.75 instead of 6 / ? ).\\n\\nSection 4, Intermediate prediction. The statement \\u201cthe genmin w(t) preference is bell-shaped\\u201d is vague. Do you mean a Gaussian? If so, you should say \\u201ca Gaussian centered at T/2 and tuned so that \\u2026\\u201d\\n\\nSection 4, Bottleneck discovery frequency. I am not entirely convinced by the measuring of bottleneck states. You say that a distance is computed between the predicted object position and the ground-truth object position. If a model were to output exactly the same frame as given in context, would the distance be zero? If so, doesn\\u2019t that mean that a model that predicts a non-bottleneck state before or after the robotic arm moves the pieces is estimated to have a very good bottleneck prediction frequency? I found this part of the paper the hardest to follow and the least convincing. Perhaps some intermediate results could help prospective readers understand better and be convinced of the protocol\\u2019s merits.\", \"typos\": \"Appendix E, 2nd paragraph, first sentence: \\u201c... generate an bidirectional state\\u201d --> \\u201cgenerate A bidirectional state\\u201d\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Very interesting proposal and well-written method, though the experiments section is poorly structured\", \"review\": \"Revision\\n----------\\nThanks for taking the comments on board. I like the paper, before and after, and so do the other reviewers. Some video results might prove more valuable to follow than the tiny figures in the paper and supplementary. Adding notes on limitations is helpful for understanding future extensions.\\n\\n-----------------------------------------------\", \"initial_feedback\": \"---------------------\\nThis is a very exciting proposal that deviates from the typical assumption that all future frames can be predicted with the same certainty. Instead, motivated by the benefits of discovering bottlenecks for hierarchical RL, this work attempts to predict \\u2018predictable video frames\\u2019 \\u2013 those that can be predicted with certainty, through minimising over all future frames (in forward prediction) or over the whole sequence (in bidirectional prediction). 
Additionally, the paper tops this with a variational autoencoder to encode uncertainty, even within those predictable frames, as well as a GAN for pixel-level generation of future frames. \\n\\nThe first few pages of the paper are a joy to read and convincing by default without looking at experimental evidence. I do not myself work in video prediction, but having read in the area I believe the proposal is very novel and could make a significant shift in how prediction is currently perceived. It is a paper that is easy to recommend for publication based on the formulation novelty, topped with VAEs and GANs as/when needed.\\n\\nBeyond the method\\u2019s explanation, I found the experiment section to be poorly structured. The figures are small and difficult to follow \\u2013 looking at all the figures it felt that \\u201cmore is actually less\\u201d. Much of the evidence required to understand the method is only included in the appendices. However, having spent the time to go back and forth, I believe the experiments to be scientifically sound and convincing.\\n\\nI would have liked a discussion in the conclusion on the method\\u2019s limitations. This reviewer believes that the approach will struggle to deal with cyclic motions. In this case the discovered bottlenecks might not be the most useful to predict, as these will correspond to future frames (not nearby though) that are visually similar to the start (in forward) or to the start/end (in bidirectional) frames. An additional loss to reward difficult-to-predict frames (though certain compared to other times) might be an interesting addition for conquering more realistic (rather than synthetic) video sequences.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good novel contribution\", \"review\": \"Summary:\\nThe paper reformulates the task of video prediction/interpolation so that a predictor is not forced to generate frames at fixed time intervals, but instead it is trained to generate frames that happen at any point in the future. The motivation for such an approach is that there might be future states that are highly uncertain \\u2013 and thus, difficult to predict \\u2013 that might not be useful for other tasks involving video prediction such as planning. The authors derive different loss functions for such Time-Agnostic Prediction (TAP), including extensions to the Variational AutoEncoders (VAE) and Generative Adversarial Networks (GAN) frameworks, and conduct experiments that suggest that the frames predicted by TAP models correspond to \\u2018subgoal\\u2019 states useful for planning.\", \"strengths\": \"[+] The idea of TAP is novel and intuitively makes sense. \\nIt is clear that there are frames in video prediction that might not be interesting/useful yet are difficult to predict; TAP allows skipping such frames.\\n[+] The formulation of the TAP losses is clear and well justified. \\nThe authors do a good job at showing a first version of a TAP loss, generalizing it to express preferences, and its extension to VAE and GAN models.\", \"weaknesses\": \"[-] The claim that the model discovers meaningful planning subgoals might be overstated. \\nThe hierarchical planning evaluation experiment seems like it would clearly favor TAP compared to a fixed model (why would the middle prediction in time of the fixed model correspond to reasonable planning goals?). 
Furthermore, for certain tasks and environments it seems like the uncertain frames might be the ones that correspond to important subgoals. For example, for the BAIR Push Dataset, usually the harder frames to predict are the arm-object interactions, which probably would correspond to the relevant subgoals.\\n\\nOverall I believe that the idea in this paper is a meaningful novel contribution. The paper is well-written and the experiments support the fact that TAP might be a better choice for training frame predictors for certain tasks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
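The "generalized minimum" discussed in the author/reviewer exchange above amounts to evaluating one function at the argmin of another. Below is a minimal NumPy sketch of the operator as described in that exchange; the per-frame losses f and the preference weights w are hypothetical values chosen only for illustration.

```python
import numpy as np

def genmin(f_vals, g_vals):
    """Generalized minimum: the value of f evaluated at the argmin of g.
    With g_vals == f_vals this reduces to the standard minimum,
    min_i f(i) = f(argmin_i f(i))."""
    assert len(f_vals) == len(g_vals)  # both defined over the same index set
    return f_vals[int(np.argmin(g_vals))]

# Hypothetical per-frame prediction losses, and preference weights w(t) that,
# as in the response above, equal 1 in the middle and drop to 2/3 at the ends.
f = np.array([0.45, 0.50, 0.40, 0.60, 1.00])
w = np.array([2 / 3, 1.0, 1.0, 1.0, 2 / 3])

print(genmin(f, w * f))  # 0.45: f at the frame that minimizes the weighted loss
print(f.min())           # 0.40: the plain minimum can select a different frame
```

With w identically 1 the two prints coincide; the bell-shaped w simply biases the selected frame toward the middle of the sequence, which is the preference behavior debated above.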
BJg4Z3RqF7
Unsupervised Adversarial Image Reconstruction
[ "Arthur Pajot", "Emmanuel de Bezenac", "Patrick Gallinari" ]
We address the problem of recovering an underlying signal from lossy, inaccurate observations in an unsupervised setting. Typically, we consider situations where there is little to no background knowledge on the structure of the underlying signal, no access to signal-measurement pairs, nor even unpaired signal-measurement data. The only available information is provided by the observations and the measurement process statistics. We cast the problem as finding the \textit{maximum a posteriori} estimate of the signal given each measurement, and propose a general framework for the reconstruction problem. We use a formulation of generative adversarial networks, where the generator takes as input a corrupted observation in order to produce realistic reconstructions, and add a penalty term tying the reconstruction to the associated observation. We evaluate our reconstructions on several image datasets with different types of corruptions. The proposed approach yields better results than alternative baselines, and comparable performance with model variants trained with additional supervision.
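One concrete reading of this abstract is: reconstruct x_hat = G(y) from a corrupted observation y, re-corrupt the reconstruction with a freshly sampled measurement, ask a discriminator to tell real measurements from simulated ones, and add a penalty tying the reconstruction to its observation. The PyTorch sketch below shows a single schematic training step under our own simplifying assumptions (flat signals, an additive-noise measurement process, toy two-layer networks, placeholder data, and a hand-picked penalty weight lam); it is not the paper's exact architecture or losses.

```python
import torch
import torch.nn as nn

# Toy instantiation (our assumption): signals are flat vectors and the known
# measurement process is additive noise, y = F(x, theta) = x + theta.
def F(x, theta):
    return x + theta

def sample_theta(shape):            # sample corruption parameters from p_Theta
    return 0.3 * torch.randn(shape)

n = 64
G = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))  # reconstructor
D = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 1))  # measurement critic
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 10.0  # weight of the term tying the reconstruction to its observation

y = torch.randn(32, n)  # a batch of real measurements (placeholder data)

# Discriminator step: real measurements vs. re-corrupted reconstructions.
y_sim = F(G(y), sample_theta(y.shape))
d_loss = bce(D(y), torch.ones(32, 1)) + bce(D(y_sim.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the critic while staying consistent with the observation.
y_sim = F(G(y), sample_theta(y.shape))
g_loss = bce(D(y_sim), torch.ones(32, 1)) + lam * ((y_sim - y) ** 2).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```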
[ "Deep Learning", "Adversarial", "MAP", "GAN", "neural networks" ]
https://openreview.net/pdf?id=BJg4Z3RqF7
https://openreview.net/forum?id=BJg4Z3RqF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Byg9LP8XlV", "HkgUbQ_WgN", "rJesRWZs1V", "BJguc0OE0Q", "BJlcvR_VRm", "H1xoXRON0m", "H1xo8T_4RQ", "BylDLypM0X", "SklY001CnQ", "BylSgJYp2Q", "B1lziIvI3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544935250127, 1544811262286, 1544389075404, 1542913679947, 1542913633925, 1542913571019, 1542913363274, 1542799183512, 1541435089198, 1541406445185, 1540941465582 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1163/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1163/Authors" ], [ "ICLR.cc/2019/Conference/Paper1163/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1163/Authors" ], [ "ICLR.cc/2019/Conference/Paper1163/Authors" ], [ "ICLR.cc/2019/Conference/Paper1163/Authors" ], [ "ICLR.cc/2019/Conference/Paper1163/Authors" ], [ "ICLR.cc/2019/Conference/Paper1163/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1163/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1163/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1163/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a GAN-based method to recover images from a noisy version of it. The paper builds upon existing works on AmbientGAN and CS-GAN. By combining the two approaches, the work finds a new method that performs better than existing approaches.\\n\\nThe paper clearly has new interesting ideas which have been executed well. Two of the reviewers have voted in favour of acceptance, with one of the reviewer providing an extensive and detailed review. The third reviewer however has some doubts which were not resolved completely after the rebuttal.\\n\\nUpon reading the work myself, I am convinced that this will be interesting to the community. However, I will recommend the authors to take the comments of Reviewer 2 into account and do whatever it takes to resolve issues pointed by the reviewer.\\n\\nDuring the review process, another related work was found to be very similar to the approach discussed in this work. This work should be cited in the paper, as a prior work that the authors were unaware of.\", \"https\": \"//arxiv.org/abs/1812.04744\\nPlease also discuss any new insights this work offers on top of this existing work.\\n\\nGiven that the above suggestions are taken into account, I recommend to accept this paper.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good work, but a few issues should be addressed in the camera-ready version.\"}", "{\"title\": \"Code Release\", \"comment\": \"We have released the code used in this paper : https://github.com/UNIR-Anonymous/UNIR\"}", "{\"title\": \"post rebuttal comments\", \"comment\": \"The authors have addressed my comments well.\\nI think the paper is a great contribution on solving inverse problems using GANs and I think it should be accepted. \\n\\nI am also concerned by the behavior of Anonreviewer3 who is ignoring the requests of the area chairs (and myself) to write a detailed review and I will request that the TPC chairs to take this into consideration for the future.\"}", "{\"title\": \"Response to comments\", \"comment\": \"Thank you for your feedback. 
We have taken note of your comments and have been actively working to take them into account.\\nYou raised two main questions: one concerning the measurement process, and the second concerning the need to test the model on additional datasets.\\n\\nConcerning the first question, we have rewritten the sections explaining the measurement process (please see also the general comments about the measurement process above). Below is an extract from Section 2.1, \\u201cProblem Setting\\u201d, of the updated paper version:\\n\\n\\u201cSuppose there exists a signal X ~ p_X we wish to acquire, but we only have access to this signal through lossy, inaccurate observations Y ~ p_Y. The measurement process is modeled through a stochastic operator F mapping signals X to their associated observations Y. We will refer to F as the measurement process, which corrupts the input signal. F is parameterized by a random variable \\\\Theta following an underlying distribution p_\\\\Theta we can sample from, which represents the factors of corruption. Thus, given a specific signal x, we can simulate its measurement by first sampling \\\\theta from p_\\\\Theta, and then computing F(x; \\\\theta). Additional sources of uncertainty, e.g. due to unknown factors, can be modeled using additive i.i.d. Gaussian noise \\\\Eps ~ \\\\mathcal{N}(0, \\\\sigma^2 I), so that the overall acquisition process becomes: Y = F(X; \\\\Theta) + \\\\Eps.\\nDifferent instances of F will be considered, e.g. random occlusions, information acquisition from a sparse subset of the signal, or overly smoothing out and corrupting the original distribution with additive noise. In such cases, the factors of corruption \\\\Theta might respectively represent the position of the occlusion, the coordinates of the acquired information, or simply the values of the additive noise.\\u201d\\n\\n\\nFor the different measurement process instances, also called corruptions, please refer to the Corruptions section (4.2) in the Experiments section.\\n\\nAs for the second remark, we have added experiments conducted on two additional datasets: LSUN Bedrooms and Recipe1M. The results are provided in Section 5 and in Appendix 3. Overall, this confirms the good results of the model already obtained on the first dataset.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you very much for your review and comments: they are very much appreciated.\\n\\n\\u201cIf I understand correctly, this is the 'Conditional AmbientGAN' approach that is used as a baseline. This is a sensible approach given prior work. However, the authors show that their method ('Unpaired Supervision') performs significantly better compared to the Conditional AmbientGAN baseline. This is very surprising and interesting to me. Please discuss this a bit more? As far as I understand the proposed method is a merging of AmbientGAN and CS-GAN, but much better than the naive separation. Could you give a bit more intuition on why?\\u201d\\n\\n\\nIndeed, this is correct. The Conditional AmbientGAN baseline combines the approaches of AmbientGAN and CS-GAN. First, a generative model G of the data is learned without having access to samples of the signal distribution using the AmbientGAN framework. Then, in order to reconstruct the signal from a corrupted measurement y, we look for an input vector z of G that produces a simulated measurement G(z) that looks like y, by minimizing the Euclidean distance between G(z) and y. 
This method suffers from several drawbacks, which we believe can explain the poor results:\\n\\n* First drawback: suboptimality of the generator. In theory, if the generator were optimal, under suitable conditions for the measurement process F, it would generate outputs belonging to the manifold of uncorrupted images (that we shall name M). Thus, projecting a measurement onto M should recover an uncorrupted image. However, this is never the case: in practice, GANs suffer from a number of problems. This means that it is possible that images from the manifold of generated images do not correspond to true samples: applying gradient descent to minimize the aforementioned distance tends to generate images similar to the corrupted images y, and not to uncorrupted images x. Our model does not suffer from this problem because it maximizes the log-likelihood and the prior term jointly. If G generates a signal that does not belong to M in order to maximize the log-likelihood term (similarly to what happens with the Conditional AmbientGAN baseline), the discriminator will easily be able to detect this, and consequently the reconstruction network G is corrected in order to avoid this behaviour.\\n\\n* Second drawback: the Euclidean distance used in Conditional AmbientGAN is not appropriate in the general case considered in the paper. The natural thing to do would be to find a reconstruction from M that maximizes the likelihood p(y|x). If the corruption in the measurement process corresponds to iid additive noise, it is possible to show that the problem reduces to minimizing the Euclidean distance between x and y, as in Conditional AmbientGAN. However, this is not necessarily the case for other measurement processes. Indeed, in the general formulation, the likelihood is intractable; it requires marginalizing over the noise variables \\\\theta, and for each SGD step we would need to approximate it, which would be very costly. The likelihood term in our cost function better reflects the true likelihood.\\n\\n\\n\\nIn the appendix, where is the proposed method in Fig. 5-8?\\nFig. 5-8 (now 11-14) are samples from our baselines. The corresponding samples from our model were in Figures 9 to 14. We are adding our model to Figures 5-8 (11-14). Note that we are now providing samples from other datasets (see general comments).\\n\\nDoes the proposed method outperform Deep Image Prior? \\n\\nOur experiments show that for strong corruption functions, DIP yields poor results compared to our model (see Figures 11-14). One of the main explanations is that it does not capture semantic information from the other images of the dataset. \\n\\nFor the measurement processes Patch-Band, Remove-Pixel, and Remove-Pixel-Channel, Deep Image Prior (DIP) has access to the corruption parameter \\\\theta of the associated measurement (we have used the inpainting formulation of DIP). In other words, it has access to the mask, as opposed to our model. We have conducted experiments where DIP does not have the mask (normal formulation of DIP), and have observed very poor results (which were actually quite similar to the poor results of Conditional AmbientGAN).
We have thus submitted a revised version of the paper taking into account your comments and answering your questions. Please see also the general comments. Specifically, we have:\\n\\n* Rewritten Section 2.1 (Problem Setting), describing the abstract measurement process and the role of theta, taking into account your comments.\\n* Modified the Method section (Section 3) in order to make the explanations more straightforward and less abstract. In particular, we moved some mathematical results to the appendix for a more fluent reading.\\n* Added experiments on two additional datasets: LSUN and Recipe-1M (Section 4.1 + Appendix C). They illustrate the behavior of the model and of the baselines on image datasets with different characteristics and confirm the good results obtained by our model.\\n* Provided additional details on the hyperparameters and the architecture for overall reproducibility (Section 4.1). Note that we will be releasing the code shortly.\\n* Added details regarding the specific measurement instances (also called corruptions) used in the experiments (Section 4.2, Corruptions).\\n* Added details on the different baselines in Section 4.3 (+ figures visually describing them in the appendix).\\n\\nTo answer your question regarding the structure of the measurement process: the measurement (or corruption) process described in equation (1) is assumed known. This means that, as in most problem formulations for signal recovery, the structure of the stochastic function F is known. For example, let us consider the additive Gaussian noise case: F(X, \\\\Theta) = X + \\\\Theta, where X is the signal random variable to be recovered, and \\\\Theta is the noise random variable (also called the corruption parameter) whose underlying distribution p_\\\\Theta is Gaussian. This distribution p_\\\\Theta is assumed known, although for a specific measurement, we do not know the precise value \\\\theta that contributed to its corruption. In other cases, typically when the measurement process induces a more structured corruption such as our Patch-Band corruption, which randomly places a band occluding the original image (introduced in Section 4.2), \\\\Theta follows a uniform distribution taking its values from the space of pixel coordinates. To simulate this corruption process, one samples a \\\\theta from the prior p_\\\\Theta, and uses it to corrupt the signal x, resulting in measurement y = F(x, \\\\theta). In this case, F places a band using \\\\theta as the position of the top of the band. This is exactly the same formulation as the one used for AmbientGAN: the associated corruption parameter \\\\Theta for \\u201cDropPatch\\u201d, which is very similar to our \\u201cPatchBand\\u201d, corresponds to the position of the occluding patch (refer to the official implementation [1]). Note that it would also be possible to sample the size of the box, if its size varies in the corrupted data.\\n\\nPaired/Unpaired variant explanation:\\n\\n\\nFor the two model variants that use the additional information, *Unpaired and Paired Variant*, we have added additional details in the Baselines Section 4.3, and additional figures describing them in the Baselines Appendix C. Below is an extract of the Baselines section of the updated paper:\", \"unpaired_variant\": \"\\u201cHere, we have access to samples of the signal distribution p_X. This means that although we have no paired samples from the joint p_X,Y, we have access to unpaired samples from p_X and p_Y. 
This baseline is similar to our model, although instead of discriminating between a measurement from the data y and a simulated measurement \\\\hat{y}, we directly discriminate between samples from the signal distribution and the output of the reconstruction network \\\\hat{x}.\\u201d\", \"paired_variant\": \"\\u201cThis baseline has access to signal-measurement pairs (y, x) from the joint distribution p_X,Y. Given an input measurement y, the reconstruction is obtained by regressing to the associated signal x using an MSE loss. In order to avoid blurry samples, we add an adversarial term to the objective in order to constrain G to produce realistic samples, as in Pix2Pix [2]. The model is trained using the same architectures as our model, and the hyperparameters have been found using cross-validation.\\u201d\\n\\n\\n[1]: https://github.com/AshishBora/ambient-gan/blob/master/src/commons/measure.py#L176\\n[2]: https://phillipi.github.io/pix2pix/\"}", "{\"title\": \"General Comments and Paper Revision\", \"comment\": \"Thanks to all the reviewers for their comments and suggestions. We tried to take all of them into account; we reorganized the paper accordingly and hope to now provide all the required clarifications. We address below some general comments/questions raised by the reviewers and then give detailed answers for each review.\\n\\nThe model presentation has been rewritten, highlighting the main ideas and results (Section 3) while deferring some mathematical details to Appendix A. We have added figures illustrating the different components of the model (Fig. 1, 2, 3).\\nDetails on the model parameters used for the experiments are provided in Section 4.1, details on the corruption processes used for the experiments in Section 4.2, and the baselines used for comparison are described quite extensively in Section 4.3.\\nWe performed tests on two additional datasets (LSUN Bedrooms and Recipe-1M). The three datasets have different characteristics; these experiments thus illustrate the model behavior in these different contexts. In the initial version, tests were performed on the CelebA dataset only, and two reviewers mentioned that this was too limited.\\n Finally, the reviewers raised questions on the nature of the perturbation mechanism (the F(x;theta) function in the text). We agree that the description might have been unclear. This is now fully described in Section 2.1. In a few words, we suppose that there exists a signal x we wish to reconstruct, but we only have access to x through lossy measurements y. The measurement process is modeled by a stochastic function with corruption parameters theta associated with a prior distribution p_Theta. The observations y are then supposed to be generated as y = F(x; theta). We have added discussions in the text, explaining the instances of F and p_Theta associated with the different types of corruptions used in the experiments.\"}", "{\"title\": \"please write a detailed review\", \"comment\": \"Reviewer 3, your review is in my opinion below the threshold needed for a top scientific conference.\\nPlease read the paper more carefully and write a detailed, quality review. We are all working very hard to make the noisy review process better.\"}", "{\"title\": \"UNSUPERVISED ADVERSARIAL IMAGE RECONSTRUCTION\", \"review\": [\"The authors address the problem of recovering an underlying signal from lossy and inaccurate measurements in an unsupervised fashion. 
They use a GAN framework to recover plausible signals from the measurements in the data.\", \"The authors need to test other datasets; the CelebA dataset alone is too limited.\", \"Similarly, experiments with different corruption processes are required.\", \"What is the definition of F? The \\\"measurement process\\\" is not clearly defined.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"image reconstruction from noisy samples\", \"review\": \"This is a very interesting paper that achieves something that seems initially impossible:\\nto learn to reconstruct clear images from only seeing noisy or blurry images. \\n\\nThe paper builds on the closely related prior work AmbientGAN, which shows that it is possible to learn the *distribution* of uncorrupted samples using only corrupted samples, again a very surprising finding. \\nHowever, AmbientGAN does not try to reconstruct a single image, only to learn the clear image distribution. The key idea that makes this possible is knowledge of the statistics of the corruption process: the generator tries to create images that, *after they have been corrupted*, look indistinguishable from real corrupted images. This surprisingly works and provably recovers the true distribution under a very wide set of corruption distributions, but tells us nothing about reconstructing an actual image from measurements. \\n\\nGiven access to a generative model for clear images, an image can be reconstructed from measurements by maximizing the likelihood term. This method (CS-GAN) was introduced by Bora et al. in 2017. Therefore, one approach to solving the problem that this paper tackles is to first use AmbientGAN to get a generative model for clear images and then use CS-GAN with the learned GAN. If I understand correctly, this is the 'Conditional AmbientGAN' approach that is used as a baseline. This is a sensible approach given prior work. However, the authors show that their method ('Unpaired Supervision') performs significantly better compared to the Conditional AmbientGAN baseline. This is very surprising and interesting to me. Please discuss this a bit more? As far as I understand the proposed method is a merging of AmbientGAN and CS-GAN, but much better than the naive separation. Could you give a bit more intuition on why?\\n\\nI would also like to add that the authors can use their approach to learn a better AmbientGAN. After getting their denoised images, these can be used to train a new AmbientGAN, with cleaner images as input, which should be even better, no?\\n\\nIn the appendix, where is the proposed method in Fig. 5-8?\\n\\nDoes the proposed method outperform Deep Image Prior?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting but confusing\", \"review\": \"This paper presents a method to reconstruct images using only noisy measurements. This problem is practically interesting, since the noiseless signal may be unavailable in many applications. The approach combines ideas from recent developments in compressed sensing and GANs. 
However, the model\\u2019s presentation is confusing, and many important details of the experiments are missing.\", \"pros\": \"- The problem is interesting and important\\n- The combination of compressed sensing and GANs for image reconstruction is novel\", \"cons\": \"- The model structure is unclear: for example, what is the role of the variable \\\\theta? Section 2.1 says it is known, but the algorithm samples from its prior(?). Since there is no further explanation with respect to the experiments, I am not sure how the values of \\\\theta or its distributions were determined. Although \\\\theta is formally similar to the \\\\theta parameters of the measurement function in AmbientGAN, this interpretation is at odds with the example given in the paper (below eq.1, saying \\\\theta can be positions or sizes).\\n- A few important details of the model are missing. For example, what is the exact structure of the measurement function F?\\n- The baseline models are a bit confusing. More detail about unpaired vs. paired supervision would also be helpful for understanding how these baseline models use the additional information.\\n- Although the paper mentions that parameters are obtained via cross-validation, it would still be helpful to describe a few important ones (e.g., neural network size, weight \\\\lambda) for comparison with other models.\\n- The experiments on only the CelebA dataset are too limited.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
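The measurement model debated throughout the reviews above, y = F(x; theta) + eps with theta ~ p_Theta, is easy to make concrete. The NumPy sketch below implements a Patch-Band-style corruption as described in the author responses; the band height, noise level, and uniform prior over row positions are our placeholder choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_band(x, theta, band_h=8):
    # Occlude a horizontal band; the corruption parameter theta is the row
    # position of the top of the band, as in the author responses above.
    y = x.copy()
    y[theta:theta + band_h, :] = 0.0
    return y

def measure(x, sigma=0.05, band_h=8):
    # Simulate y = F(x; theta) + eps: sample theta from its prior p_Theta
    # (uniform over valid row positions here), corrupt, then add Gaussian noise.
    theta = int(rng.integers(0, x.shape[0] - band_h))
    return patch_band(x, theta, band_h) + sigma * rng.standard_normal(x.shape)

x = rng.random((64, 64))  # placeholder "signal"
y = measure(x)            # lossy, inaccurate observation of x
```

Note that the prior p_Theta is assumed known and samplable, but the particular theta behind a given real measurement y is not observed, which is exactly the distinction the reviewers asked about.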
rylV-2C9KQ
Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks
[ "Reinhard Heckel", "Paul Hand" ]
Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate natural images well. Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets. In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding. Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising. The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization. This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations.
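The layer structure named in this abstract translates almost line-for-line into code. The PyTorch sketch below is our reading of it: per layer, upsampling, a pixel-wise (1x1) linear combination of channels, ReLU, and channel normalization, with a sigmoid output; the bilinear upsampling mode, the 8x8 input size, and the use of BatchNorm2d (which, with a single image, normalizes each channel over its pixels) are our assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class DeepDecoder(nn.Module):
    def __init__(self, k=64, d=6, out_ch=3):
        super().__init__()
        # 1x1 convolutions = pixel-wise linear combinations of channels.
        self.mixes = nn.ModuleList(nn.Conv2d(k, k, 1, bias=False) for _ in range(d))
        self.norms = nn.ModuleList(nn.BatchNorm2d(k) for _ in range(d))
        self.out = nn.Conv2d(k, out_ch, 1, bias=False)

    def forward(self, z):
        for mix, norm in zip(self.mixes, self.norms):
            z = Fn.interpolate(z, scale_factor=2, mode="bilinear", align_corners=False)
            z = norm(Fn.relu(mix(z)))  # mix channels, ReLU, channel-wise norm
        return torch.sigmoid(self.out(z))

z0 = torch.randn(1, 64, 8, 8)  # fixed random input tensor, never optimized
net = DeepDecoder()            # 8 * 2**6 = 512, so the output is 1 x 3 x 512 x 512

# The weights (not the input) are fitted to a single target image, e.g. for
# denoising; underparameterization itself acts as the barrier to overfitting.
target = torch.rand(1, 3, 512, 512)  # placeholder noisy image
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(100):  # more iterations in practice
    opt.zero_grad()
    loss = ((net(z0) - target) ** 2).mean()
    loss.backward()
    opt.step()
```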
[ "natural image model", "image prior", "under-determined neural networks", "untrained network", "non-convolutional network", "denoising", "inverse problem" ]
https://openreview.net/pdf?id=rylV-2C9KQ
https://openreview.net/forum?id=rylV-2C9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxI6UMklE", "H1xSIp171N", "HJeslXQCTX", "rkxS0tgca7", "r1gzyKy_a7", "Hkg5wOyOaX", "HJea7OJdpX", "S1llxt2cnQ", "BkeNja15nm", "BJeYeRM0jm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544656574124, 1543859533079, 1542497010998, 1542224333457, 1542088922019, 1542088802346, 1542088740696, 1541224680127, 1541172636236, 1540398576708 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1162/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1162/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1162/Authors" ], [ "ICLR.cc/2019/Conference/Paper1162/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1162/Authors" ], [ "ICLR.cc/2019/Conference/Paper1162/Authors" ], [ "ICLR.cc/2019/Conference/Paper1162/Authors" ], [ "ICLR.cc/2019/Conference/Paper1162/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1162/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1162/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"In this work, the authors propose a simple, under parameterized network architecture which can fit natural images well, when fed with a fixed random input signal. This allows the model to be used for a number of tasks without requiring that the model be trained on a dataset. Further, unlike a recently proposed related method (DIP; [Ulyanov et al., 18]), the method does not require regularization such as early-stopping as with DIP.\\nThe reviewers noted the simplicity and experimental validation, and were unanimous in recommending acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Simple model which achieves good results\"}", "{\"title\": \"Updating the score\", \"comment\": \"I see, thanks.\\n\\nI updated the score, thanks for the improvement in the paper!\\n\\nNote, however, that even though I liked the improvements, the discussion didn't feel very right to me (in particular the applicability of the theorem) and I was considering not updating the score because of that.\"}", "{\"title\": \"response\", \"comment\": \"Thanks for the feedback!\\n\\nRegarding the second concern, yes we do agree with the estimate. The theorem uses the assumption: k^2 log(n_0) / n <= 1/32. This is equal to k^2 <= n/ (32 \\\\log(n) ), and with the parameters of the described case, this indeed says that k^2 <= 2.56. However, the constant 1/32 is not optimal and can be made larger, e.g., by increasing the probability in the statement. Then the bound on k would be less restrictive. The condition essentially says that k^2, the number of parameters, should be smaller (by a logarithmic factor) than the dimension of the output. In this regime the decoder is underparameterized, and throughout the paper, we operate in the regime where the decoder is underparameterized.\\n\\nThanks for asking your colleagues, that is certainly helpful! We do think that the architecture is quite different: our architecture is an underparameterized network without convolutions and an decoder-like structure, while the architecture of the DIP is overparameterized and has an encoder-decoder structure with skip connections. 
We think the setup is different in that we do not require regularization; however, if setup refers to fitting an untrained model, then the setup is indeed the same.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for addressing the first concern and for the other improvements; it is much better this way!\\n\\nI didn't get your response about the second concern though. Do you agree with my estimate that you would need no more than 2.5 parameters in the described case for the theorem to work (I could easily be wrong there)? If so, how is that \\\"the regime in which the deep decoder operates throughout the paper\\\"?\\n\\nI also want to share the following information with you without asking you to act on it; it's up to you to decide if it's important :)\\nI asked three colleagues to read the abstract and introduction (and nothing more) of the updated paper and 1 out of 3 thought that you are doing something completely different from DIP, i.e. different setup AND architecture. He was surprised to learn that it is not the case (which may indicate that my first concern is not fully addressed, but maybe it's fine; you can't expect everyone to understand everything).\\nAnd also, another 1 of the 3 thought that by non-convolutional you meant fully-connected.\"}", "{\"title\": \"response\", \"comment\": \"Many thanks for the detailed review!\\n\\nMain comments:\\n\\n1/ The DIP approach critically relies on regularization in order to make the method work (both by adding random noise to the input in each optimization step and by early stopping). \\nAs the first reviewer noted, ``In fact, the DIP of Ulyanov et al. can hardly be considered \\\"a model\\\" (or a prior, for that matter), and instead should be considered \\\"an algorithm\\\", since it relies on the early stopping of a specific optimization algorithm''. \\n\\nHowever, we follow the reviewer's suggestion, make clear that the idea of using a deep network without learning as an image model is not new, and have rewritten the item to ``The network itself acts as a natural data model. Not only does the network require no training (just as the DIP); it also does not critically rely on regularization, for example by early stopping (in contrast to the DIP).''\\nBefore that, in the introduction of both the original and revised versions, we have a paragraph devoted to the DIP explaining that Ulyanov et al. introduced the idea of using a deep neural network without learning as an image model.\\n\\n2/ Regarding the theoretical contribution: We fully agree that a limitation of the theorem is that it pertains to a one-layer version of the decoder. We are currently extending this to the multilayer case, but still have to address a technical difficulty in counting the number of different sign pattern matrices.\\nRegarding the assumptions: The proposition uses the assumption that k^2 log(n_0) / n <= 1/32. Here, the constant 1/32 is not optimal. k^2 is essentially the number of parameters of the model, and n is the output dimension.\\nThe proposition is only interesting if k^2 log(n_0) / n <= 1/20 even without this assumption (due to the right-hand side of the lower bound); therefore, this assumption is not restrictive. \\n\\nThe bound is applicable if the number of parameters k^2 is smaller than the output dimension divided by a logarithmic term, i.e., it allows the number of parameters to scale almost linearly in the output dimension. 
This is the regime in which the deep decoder operates throughout the paper.\\n\\nWe agree that many natural noise patterns have structure, and that those can be better approximated with deep models, and are thus more difficult to remove.\\n\\n3/ We have added the sentence ``In the default architectures with $d=6$ and $k=64$ or $k=128$, we have that N = 25,536 (for k=64) and N = 100,224 (for k=128) out of an RGB image space of dimensionality 512\\\\times512\\\\times3=786,432 parameters.'' to specify the number of parameters. \\nThanks for the suggestion to try second-order methods like LBFGS; we have tried LBFGS in response to the reviewer's comment. It converges in significantly fewer iterations, but each iteration is so much more expensive that overall it optimizes more slowly than Adam or gradient descent.\\n\\nMinor comments:\\n\\n1/ Figure 4: We have added labels and the sentence ``Early stopping can mildly enhance the performance of DD; to see this note that in panel (a), the minimum is obtained at around 5000 iterations and not at 50,000.'' in the caption to clarify. \\nAlso, we have added the sentence ``Models are fitted independently for the noisy image, the noiseless image, and the noise.'', and rewrote the paragraph. Thanks for pointing this out!\\nWe agree that here we present only results for one image, but we did carry out simulations for many images, and those plots are qualitatively the same for all the images considered. Thus our conclusions about the model do not only hold for one image.\\n\\n2/ Normalization is applied channel-wise. Let z_{ij} be the j-th column in the i-th layer. Then z_{ij} is normalized independently of any of the other channels.\\n\\n3/ We have reworded the corresponding paragraphs to make clear that while we do not use convolutions, and thus this is not strictly speaking a convolutional neural network, it shares many structural similarities with a convolutional neural network, as pointed out by the reviewer.\\n\\n4/ The equation is correct in that the parameter choices in the paper are such that the deep decoder has far fewer model parameters N than its output dimension. Thus N is much less than n.\\n\\n5/ We agree that it is not optimal to use unintroduced notation at this point, but we made this compromise so that we can illustrate the performance of the deep decoder without introducing its details, while giving the reader the chance to later see exactly which parameters we used.\\n\\n6/ Unfortunately, choosing k=6 is too small to have a small representation error, i.e., to represent the image well. We have, however, not hand-selected the 8 images shown out of the 64, and the remaining 56 images look very similar. We have all the images in the Jupyter notebook that comes with the paper.\\n\\n7/ Great question: it is faster to optimize the deep decoder since the Adam/SGD steps are cheaper, but it indeed seems to require slightly more iterations for best performance than the DIP.\"}", "{\"title\": \"response\", \"comment\": \"Many thanks for the detailed review!\\n\\n1/ We agree that there are many elements of our architecture that are similar to those of a convolutional network; however, the network does not perform convolutions. 
\\nInstead, the network does have pixelwise linear combinations of channels, and just like in a convolutional neural network the weights are shared among spatial positions.\\nNonetheless, they are not convolutions because they provide no spatial coupling between pixels, despite how pixelwise linear combinations are sometimes called `1x1 convolutions.' '',\\nand we have also added a subsection comparing the compression performance of our architecture to that of a decoder with convolution layers. In a sense, what the deep decoder is doing is separating multiple roles that proper convolutional layers fill: the DD breaks apart the spatial coupling inherent to convolutions from their channel dependence and equivariance. Further, it says that the spatial coupling need not be learned or fit to data, and can be directly imposed by upsampling.\\n\\n2/ Yes, the upsampling analysis in Figure 5 also extends to two-dimensional images. We agree that natural images are only approximately piece-wise smooth after all, and the deep decoder only provides an approximation of natural images (albeit a very good one).\\n\\n3/ We agree and have changed `batch normalization' to `channel normalization' throughout.\\n\\n4/ Great point; we have added the sentence ``The optimal $k$ trades off those two errors; larger noise levels require smaller values of $k$ (or some other form of regularization). \\nIf the noise is significantly larger, then the method requires either choosing $k$ smaller or another means of regularization, for example early stopping of the optimization.\\nFor example, $k=64$ or $128$ performs best out of $\\\\{32,64,128\\\\}$ for a PSNR of around 20dB, while for a PSNR of about 14dB, $k=32$ performs best.''\\n\\n5/ We do not mention the standard deviation, but do specify the SNR throughout (e.g., in Table 1, column identity). We have clarified this in the caption of the table.\\n\\n6/ It essentially produces smooth noise then. The weights learned by the deep decoder pertain to the source noise tensor. We have added a corresponding figure to the Jupyter notebook for reproducing Figure 6.\"}", "{\"title\": \"response\", \"comment\": \"Many thanks for the review!\\nGood point regarding the negative results; we have added a subsection in the revised paper entitled ``A non-convolutional network'', where we compare to a convolutional decoder and conclude that ``Our simulations indicate that, indeed, linear combinations yield more concise representations, albeit not by a huge factor.''.\\n\\nRegarding the minor points, we have reworded the paragraph on regularizing, and changed `compression ratio' to `compression factor', and reworded such that `large compression factor' means large compression.\"}", "{\"title\": \"A more principled DIP, interesting contribution.\", \"review\": \"In this paper, the authors propose a method for dimensionality reduction of image data. They provide a structured and deterministic function G that maps a set of parameters C to an image X = G(C). The number of parameters C is smaller than the number of free parameters in the image X, so this results in a predictive model that can be used for compression, denoising, inpainting, superresolution and other inverse problems.\\n\\nThe structure of G is as follows: starting with a small fixed, multichannel white noise image, linearly mix the channels, truncate the negative values to zero and upsample. 
This process is repeated multiple times and finally the output is squashed through a sigmoid function for the output to remain in the 0..1 range.\\n\\nThis approach makes sense and the model is indeed more principled than the one taken by Ulyanov et al. In fact, the DIP of Ulyanov et al. can hardly be considered \\\"a model\\\" (or a prior, for that matter), and instead should be considered \\\"an algorithm\\\", since it relies on the early stopping of a specific optimization algorithm. This means that we are not interested in the minimum of the cost function associated to the model, which contradicts the very concept of \\\"cost function\\\". If only global optimizers were available, DIP wouldn't work, showing its value is in the interplay of the \\\"cost\\\" function and a specific optimization algorithm. None of these problems exist with the presented approach.\\n\\nThe exposition is clear and the presented inverse problems as well as demonstrated performance are sufficient.\\n\\nOne thing that I missed while reading the paper is more comment on negative results. Did the authors try any version of their model with convolutions or pooling and find it not to perform as well? Measuring the number of parameters when including pooling or convolutions can become tricky; was that part of the reason?\\n\\nMinor:\\n\\\"Regularizing by stopping early for regularization,\\\"\\n\\nIn this paper \\\"large compression ratios\\\" means little compression, which I found confusing.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A very interesting paper with good analysis and decent experiments\", \"review\": \"Brief summary:\\n\\nThis paper presents a deep decoder model which, given a target natural image and a random noise tensor, learns to decode the noise tensor into the target image by a series of 1x1 convolutions, ReLUs, layer-wise normalizations and upsampling. The parameters of the convolutions are fitted to each target image, where the source noise tensor is fixed. The method is shown to serve as a good model for natural images for a variety of image processing tasks such as denoising and compression.\", \"pros\": \"- an interesting model which is quite intriguing in its simplicity.\\n- good results and good analysis of the model\\n- mostly clear writing and presentation (a few typos etc., nothing too serious).\", \"cons_and_comments\": \"- The authors say explicitly that this is not a convolutional model because of the use of 1x1 convolutions. I disagree, and I actually think this is important for two reasons. First, though these are 1x1 convolutions, because of the up-sampling operation and the layer-wise normalizations the influence of each operation goes beyond the 1x1 support. Furthermore, and more importantly, is the weight-sharing scheme induced by this - using convolutions is a very natural choice for natural images (no pun intended) due to the translation-invariant statistics of natural images. I doubt this would have worked so well had it not been modeled this way (not to mention this allows a small number of parameters).\\n- The upsampling analysis is interesting but it is only done on synthetic data - will the result hold for natural images as well? This should be easy to try and will allow a better understanding of this choice. 
Natural images are only approximately piece-wise smooth after all.\\n- The use of the name \\\"batch-norm\\\" for the layer-wise normalization is both wrong and misleading. This is just channel-wise normalization with some extra parameters - no need to call it this way (even if it's implemented with the same function) as there is no \\\"batch\\\".\\n- I would have loved to see actual analysis of the method's performance as a function of the noise standard deviation. Specifically, for a fixed k, how would performance increase or decrease, and vice versa - for a given noise level, how would k affect performance?\\n- The actual standard deviation of the noise is not mentioned in any of the experiments (as far as I could tell).\\n- What does the decoder produce when taking a trained C on a given image and changing the source noise tensor? I think that would shed light on what structures are learned and how they propagate in the image, possibly more than Figure 6 (which should really have something to compare to because it's not very informative out of context).\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Overall a nice paper\", \"review\": \"The paper builds upon Deep Image Prior (DIP) - work which shows that one can optimize a neural generator to fit a single image without learning on any dataset, and the output of the generator (which approximates the image) can be used for denoising / super resolution / etc. The paper proposes a new architecture for the DIP method which has far fewer parameters, but works on par with DIP. Another contribution of the paper is a theoretical treatment of (a simplified version of) the proposed architecture showing that it can\\u2019t fit random noise (and is thus maybe better suited for denoising).\\n\\nThe paper is clearly written, and the proposed architecture has two cool properties: it\\u2019s compact enough to be used for image compression; and it doesn\\u2019t overfit, thus making early stopping unnecessary (which was crucial for the original DIP model).\\n\\nI have two main concerns about this paper.\\nFirst, it is somewhat misleading about its contributions: it's not obvious from the abstract/introduction that the whole model is the same as DIP except for the proposed architecture. Specifically, the first contribution listed in the introduction makes it look like this paper introduces the idea of not learning the decoder on the dataset (the one that starts with \\u201cThe network is not learned and itself incorporates all assumptions on the data.\\u201d).\\n\\nMy second concern is about the theoretical contribution. On the one hand, I enjoyed the angle the authors took in proving that the network architecture is underparameterized enough to be a good model for denoising. On the other hand, the obtained results are very weak: only a one-layer version of the model is analysed, and the theorem applies only to networks with fewer than some threshold number of parameters. Roughly, the theorem states that if for example we fix any matrix B of size e.g. 256 x k and matrix U of size 512 x 256 and then compute U relu(B C) where C is the vector of parameters of size k x 1, AND if k < 2.5 (i.e. if we use at most 2 parameters), then it would be very hard to fit 512 iid Gaussian values (i.e. min_C ||U relu(B C) - eta|| where eta ~ N(0, 1)). 
This restriction of the number of parameters to be small is only mentioned in the theorem itself, not in the discussion of its implications.\\nAlso, the theorem only applies to iid noise, while most natural noise patterns have structure (e.g. JPEG artifacts, broken pixels, etc.) and thus can probably be better approximated with deep models.\\n\\nSince the paper manages to use very few parameters (BTW, how many parameters in total do you have? Can you please add this number to the text?), it would be cool to see if second-order methods like LBFGS can be applied here.\", \"some_less_important_points\": \"Fig. 4 is very confusing.\\nFirst, it doesn\\u2019t label the X axis.\\nSecond, the caption mentions that early stopping is beneficial for the proposed method, but I can\\u2019t see it from the figure.\\nThird, I don\\u2019t get what is plotted on the different subplots. The text mentions that (a) is fitting the noisy image, (b) is fitting the noiseless image, and (c) is fitting noise. Is it all done independently with three different models? Then why does the figure say test and train loss? And why does the DIP loss go up? It should be able to fit anything, right? If not, and it\\u2019s a single model that gets fitted on the noisy image and tested on the noiseless image, then how can you estimate the level of noise fitting? ||G(C) - eta|| should be high if G(C) ~= x.\\nAlso, in this quote \\u201cIn Fig. 4(a) we plot the Mean Squared Error (MSE) over the number of iterations of the optimizer for fitting the noisy astronaut image x + \\u03b7 (i.e., FORMULA ...\\u201d the formula doesn\\u2019t correspond to the text.\\nAnd finally, the discussion of this figure makes claims about the behaviour of the model that seem to be too strong to be based on a single-image experiment.\\n\\nI don\\u2019t get the details of the batch normalization used: with respect to which axis are the mean and variance computed?\\n\\nThe authors claim that the model is not convolutional. But first, it\\u2019s not obvious why this would be a good thing (or a bad thing for that matter). Second, it\\u2019s not exactly correct (as noted in the paper itself): the architecture uses 1x1 convolutions and upsampling, which combined give a weak and underparametrized analog of convolutions.\\n\\n> The deep decoder is a deep image model G: R^N \\u2192 R^n, where N is the number of parameters of the model, and n is the output dimension, which is typically much larger than the number of parameters (N << n).\\nI think it should be vice versa, N >> n\\n\\nThe following footnote\\n> Specifically, we took a deep decoder G with d = 6 layers and output dimension 512\\u00d7512\\u00d73, and choose k = 64 and k = 128 for the respective compression ratios.\\nuses unintroduced (at that point) notation and is very confusing.\\n\\nIt would be nice to have a version of Figure 6 with k = 6, so that one can see all feature maps (in contrast to a subset of them).\\n\\nI\\u2019m also wondering: is it harder to optimize the proposed architecture compared to DIP? The literature on distillation indicates that overparameterization can be beneficial for convergence and final performance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
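Collecting the quantities scattered through the exchange above, the simplified one-layer model and the claim under discussion can be restated compactly in LaTeX. The constant c is schematic, the high-probability statement is over the noise, and the exact constants and conditions are in the paper:

```latex
\[
  G(C) = U\,\operatorname{relu}(BC), \qquad
  B \in \mathbb{R}^{n_0 \times k},\;
  U \in \mathbb{R}^{n \times n_0},\;
  C \in \mathbb{R}^{k},
\]
\[
  \frac{k^2 \log n_0}{n} \le \frac{1}{32}
  \;\Longrightarrow\;
  \min_{C} \, \lVert G(C) - \eta \rVert_2^2 \;\ge\; c\,\lVert \eta \rVert_2^2
  \quad \text{w.h.p. for } \eta \sim \mathcal{N}(0, I_n),
\]
```

In words: an underparameterized one-layer decoder cannot absorb more than a fixed fraction of the energy of iid noise, which is the sense in which the architecture "can't fit random noise" and hence denoises without early stopping.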
S1xNb2A9YX
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images
[ "Sanjana Srivastava", "Guy Ben-Yosef", "Xavier Boix" ]
The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. Ullman et al. (2016) show that a slight modification of the location and size of the visible region of the minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such drops in accuracy due to changes of the visible region are a common phenomenon between humans and existing state-of-the-art deep neural networks (DNNs), and are much more prominent in DNNs. We found many cases where DNNs classified one region correctly and the other incorrectly, though they only differed by one row or column of pixels, and were often bigger than the average human minimal image size. We show that this phenomenon is independent of previous works that have reported a lack of invariance to minor modifications in object location in DNNs. Our results thus reveal a new failure mode of DNNs that also affects humans, to a much lesser degree. They expose how fragile DNN recognition ability is in natural images even without adversarial patterns being introduced. Bringing the robustness of DNNs in natural images to the human level remains an open challenge for the community.
[ "dnns", "deep neural networks", "natural images", "humans", "minimal images", "fragile object recognition", "visible region", "human ability", "objects" ]
https://openreview.net/pdf?id=S1xNb2A9YX
https://openreview.net/forum?id=S1xNb2A9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Byg5N1djkE", "SyxRF01G1V", "BygMhU1Gy4", "r1lABXCFAX", "SylXYGAtAm", "Hyx5WGAtRX", "rJl_o4iG6Q", "Hyl7zKrTnm", "ByxajLCdhQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544417074146, 1543794310030, 1543792298042, 1543263045760, 1543262842882, 1543262721963, 1541743775838, 1541392651475, 1541101220564 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1161/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1161/Authors" ], [ "ICLR.cc/2019/Conference/Paper1161/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1161/Authors" ], [ "ICLR.cc/2019/Conference/Paper1161/Authors" ], [ "ICLR.cc/2019/Conference/Paper1161/Authors" ], [ "ICLR.cc/2019/Conference/Paper1161/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1161/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1161/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper characterizes a particular kind of fragility in the image classification ability of deep networks: minimal image regions which are classified correctly, but for which neighboring regions shifted by one row or column of pixels are classified incorrectly. Comparisons are made to human vision. All three reviewers recommend acceptance. AnonReviewer1 places the paper marginally above threshold, due to limited originality over Ullman et al. 2016, and concerns about overall significance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview\"}", "{\"title\": \"differences vs. Ullman et al 2018\", \"comment\": \"Note that evaluating human FRIs on DNNs (Ullman et al.) is quite different from extracting FRIs from DNNs (our paper). These two experiments investigate different things:\\n\\n*evaluating human FRIs on DNNs (Ullman et al.): Are DNNs affected by human FRIs? The answer was no.\\n*extracting FRIs on DNNs (our paper): do DNNs have their own set of FRIs (different from humans FRIs)? The answer is yes. We showed for the first time that DNNs also have FRIs.\\n\\nSo, Ullman et al. asks about the transferability of human FRIs to DNNs, while we ask if DNNs have their own FRIs.\", \"quoting_from_our_paper_in_the_introduction\": \"\\\"Ullman et al. (2016) show that DNNs are unable to recognize human minimal images, and the DNN\\ndrop in accuracy for these minimal images is gradual rather than sharp. This begs the question of\\nwhether the sharp drop in accuracy for minimal images is a phenomenon exclusive to human vision,\\nor there exist distinct but analogous images that produce a sharp drop in DNN accuracy.\\\"\"}", "{\"title\": \"differences vs. Ullman et al 2018\", \"comment\": \"Thanks the authors for your comments!\\n\\nI agree with some comments and have a question regarding the difference between this work and Ullman et al.\\n\\n> Ullman et al. analyses minimal images for human vision, while we analyze minimal images for DNNs\\n\\nIsn't it that Ullman et al. 
already compared simulations vs human vision, and showed that DNNs do not recognize FRIs at the human level?\"}", "{\"title\": \"rebuttal\", \"comment\": \"We thank the reviewer for her/his valuable comments and questions, which we address below.\", \"reviewer\": \"\\u201cGiven what we learned from the adversarial example research area, the contribution of this work is low because results might not be too surprising.\\u201d\", \"authors\": \"Note that we have shown a new type of adversarial example that arises without the need of artificial perturbations. As shown in Section 4, previous works on adversarial examples without artificial perturbations use zero-padding and can be alleviated with architectures with large pooling regions, while FRIs are entirely natural images and cannot be similarly alleviated.\"}", "{\"title\": \"rebuttal\", \"comment\": \"We thank the reviewer for her/his valuable comments and questions, which we address below.\", \"reviewer\": \"\\u201chuman vision is kind of different: it makes multiple passes over the same images at multiple scales, so this might contribute significantly to these differences.\\u201d\", \"authors\": \"DNNs and human vision are different and investigating these differences will help developing better DNNs and understanding human vision. In the paper, we have shown that minimal images are a common phenomenon among DNNs and humans, which opens a new line of research for studying the commonalities and differences between DNNs and humans. To illustrate how to proceed in this line of research, we added an experiment to show the effect of multiscale and the eccentricity dependence of human vision in FRIs. Fig. A.4 shows the FRIs for a scale invariant architecture that processes multiple scales in parallel and is eccentricity dependent (Chen et al. 2017), trained in CIFAR-10. We can see that this architecture alleviates FRIs compared to the architectures we previously tested, but there is still much to do to completely close the gap between DNNs and humans.\"}", "{\"title\": \"rebuttal\", \"comment\": \"We thank the reviewer for her/his valuable comments and questions, which we address below.\", \"reviewer\": \"\\u201cIt is interesting to see the sensitivity of DNNs that are trained for the task of object detection to FRIs, like sensitivity of R-CNN to FRIs.\\u201d\", \"authors\": \"We added Figure A.12 in the paper to show qualitative examples of FRIs for the YOLO object detector. This result illustrates that object detectors also suffer from FRIs, as they are based on DNNs. Quantifying how much the accuracy of the detectors is due to FRIs is an interesting follow up of our paper.\"}", "{\"title\": \"Interesting paper but requires additional experiments\", \"review\": \"Ullman et al. showed that slight changes in location or size of visible regions in minimal recognizable images can significantly impair human ability to recognize objects. This paper is a follow-up of Ullman et al. paper, with focus on sensitivity of DNNs to certain regions in images. 
In other words, a slight change of such regions\u2019 size or location in the image can significantly affect the DNN's ability to recognize them, even though these changes are not noticeable to humans.\", \"comments_and_questions\": [\"This paper provides an in-depth study of fragile recognition in DNNs.\", \"Visualizing activations of different layers of the DNN for Loose shift/shrink FRIs can potentially provide more details on why the final output of the DNN is significantly different for two visually similar images.\", \"Naively augmenting training data with crops of small FRI sizes can potentially harm and confuse the DNN in classifying training samples, as many small patches in training images are background and do not contain the target object. It is interesting to see the sensitivity of DNNs that are trained for the task of object detection to FRIs, like the sensitivity of R-CNN to FRIs. In this case, augmenting training data with crops of small FRI sizes can be properly done since ground-truth bounding boxes can determine which region is foreground and which region is background.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting progress in measuring the fragility of deep neural network based recognition.\", \"review\": \"This paper is a more thorough follow-up to a previous work by Ullman et al. that compared minimally recognizable patches for humans with those for deep neural networks. This paper exhibits that a wide range of architectures features the same fragility and that these effects can be combated by better training methodology and different pooling architectures. Still, even with those changes, deep CNNs possess more fragile behavior than human vision. One of my criticisms is that human vision is kind of different: it makes multiple passes over the same images at multiple scales, so this might contribute significantly to these differences. Still, this paper makes a lot of interesting observations and analyses and represents a first methodological study of this phenomenon.\\n\\nA novelty of this work is that it is the first paper that methodologically analyses FRIs for DNNs, a research area which might shed new light on the understanding of how vision systems work, the source of misrecognitions, and the limitations of recognition systems.\\n\\nIn light of the changes to the paper and the clarification on the novelty aspect of this research, I suggest this paper be accepted as it constitutes novel research in understanding how DNNs recognize image content and its similarities and differences to human vision.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ok paper; an important question on upscaling image crops\", \"review\": \"Thanks to the authors for an interesting work!\\nThe paper studies the differences between human and DNN vision by means of minimal images (i.e. smallest image crops that can be correctly classified).\", \"there_are_a_few_notable_take_away_messages\": \"1. DNNs are not invariant to even tiny (1-2 px) translations of small image crops.\\n - It would be more insightful if the authors added a comparison between DNN sensitivity to tiny translations of small image crops vs. full-size images (i.e. translation-based adversarial examples https://openreview.net/forum?id=BJfvknCqFQ ).\\n2. 
The smaller the image crops, the more sensitive DNNs become (here, more FRIs)\\n3. DNNs and human vision misclassify the image crops differently: (1) DNNs have almost twice as many FRIs and (2) FRIs of humans and DNNs differ in location.\", \"questions\": \"- \\\"After extracting the region from the image, the region is resized to be of the size required by the network.\\\"\\nWould upscaling, say, a small 28x28 crop into a 224x224 image here naturally negatively impact the DNN's predictive performance?\\nThat is, because typically image classifiers are trained on one (or a few) fixed resolution(s) of images.\\n\\nOne hypothesis here is that fragile recognition may arise because the test image resolution does not match the training image resolution.\\nHumans, on the other hand, have been trained on images of variable resolutions.\\n\\nAn alternative to upscaling here is to zero-pad the crop region. Can you help us understand your choice of upscaling here?\\n\\n+ Originality\\nThe originality is limited as it is a close extension of Ullman et al. 2016.\\n\\n+ Clarity\\nThe paper is well written and presented.\\n\\n+ Significance\\nThis work extends our understanding of the differences between DNNs and human vision.\\nHowever, given what we learned from the adversarial example research area, the contribution of this work is low because the results might not be too surprising.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
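To make the crop-scanning and upscaling discussion above concrete, here is a hypothetical sketch, not the authors' code: it scans fixed-size crops of an image with an off-the-shelf torchvision classifier, upscales each crop to the network's input resolution (the procedure quoted in the last review), and flags locations where a one-pixel shift flips the top-1 prediction. The choice of resnet18, the crop size, and the single right-shift criterion are all illustrative assumptions.

```python
# Hedged sketch of a fragile-recognition scan with an assumed classifier.
import torch
from torchvision import models, transforms

model = models.resnet18(pretrained=True).eval()
prep = transforms.Compose([
    transforms.Resize((224, 224)),  # upscale the crop to the input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def top1(crop):
    # Top-1 class index for a single PIL crop.
    with torch.no_grad():
        return model(prep(crop).unsqueeze(0)).argmax(1).item()

def fragile_locations(img, size=60):
    # Naive O(width * height) scan: yield (x, y) where the crop at (x, y)
    # and its one-pixel right neighbour get different top-1 labels.
    w, h = img.size
    for y in range(h - size):
        for x in range(w - size - 1):
            if top1(img.crop((x, y, x + size, y + size))) != \
               top1(img.crop((x + 1, y, x + 1 + size, y + size))):
                yield x, y
```

Swapping the `Resize` step for zero-padding would let one test the alternative preprocessing the review proposes.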
Hkg4W2AcFm
Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision
[ "José Lezama" ]
A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior results, and avoiding adversarial training.
[ "disentangling", "autoencoders", "jacobian", "face manipulation" ]
https://openreview.net/pdf?id=Hkg4W2AcFm
https://openreview.net/forum?id=Hkg4W2AcFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hkxf2PcZlN", "H1gnj_npCm", "SyxiD3BG0Q", "rJlI1wrzAX", "HklRMVBzA7", "S1lLdzSG0X", "BJxo72kq3X", "r1gqJD2u2X", "SyxL9vbI27", "rygfJ69oqm", "B1xizrM-q7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544820650027, 1543518371925, 1542769763025, 1542768350433, 1542767638175, 1542767213981, 1541172258823, 1541093090209, 1540917134337, 1539185882427, 1538495763113 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1160/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1160/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1160/Authors" ], [ "ICLR.cc/2019/Conference/Paper1160/Authors" ], [ "ICLR.cc/2019/Conference/Paper1160/Authors" ], [ "ICLR.cc/2019/Conference/Paper1160/Authors" ], [ "ICLR.cc/2019/Conference/Paper1160/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1160/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1160/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1160/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new way to tackle the trade-off between disentanglement and reconstruction, by training a teacher autoencoder that learns to disentangle, then distilling into a student model. The distillation is encouraged with a loss term that constrains the Jacobian in an interesting way. The qualitative results with image manipulation are interesting and the general idea seems to be well-liked by the reviewers (and myself).\\n\\nThe main weaknesses of the paper seem to be in the evaluation. Disentanglement is not exactly easy to measure as such. But overall the various ablation studies do show that the Jacobian regularization term improves meaningfully over Fader nets. Given the quality of the results and the fact that this work moves the needle in an important (albeit hard to define) area of learning disentangled representations, I think would be a good piece of work to present at ICLR so I recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview\"}", "{\"title\": \"Better paper now.\", \"comment\": \"The authors have done a good work to improve their submission and addressed my concerns (e.g., Tab 1 and Appendix is good). I have increased the rating by 1.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you very much for your detailed review.\\n\\nWe answer each item below.\\n\\n> \\\"There are not enough quantitative results [...]\\\"\\n\\nWe added quantitative comparisons for both the unsupervised and supervised\\ntasks. The quantitative measure consists in evaluating, via an external\\nclassifier, how well the latent units condition the specified factor of\\nvariation in the generated image. \\n\\nIn the MNIST example we measure how well the first two latent units can\\nmanipulate the digit class in the images generated by the student models. The\\nresults are presented in the new Table 1, showing that the student with Jacobian\\nsupervision obtains a better trade-off between disentanglement and reconstruction.\\n\\nIn the facial attribute manipulation task we used a pre-trained attribute\\nclassifier provided by the authors of Fader Networks. 
Using the classifier, we\\nmeasure whether, by manipulating the latent unit corresponding to one attribute, we can\\nchange the presence or absence of that attribute in the generated image. We do\\nthis for all attributes and for all images in the test set. The results are\\nshown in Table 2 and Figure 4.\\n\\nFor comparison, we trained two Fader Networks models to manipulate all\\nattributes. The training did not converge and the resulting manipulation and\\nreconstruction performance is inferior to our method. Besides the quantitative\\ncomparison, this can also be seen qualitatively in the new Figures 7 and 8.\\n\\n\\n> \\\"[...] compare the reconstructions of the teacher and the student on the same\\n image.\\\"\\n\\nWe added a new figure to the appendix showing a comparison between the\\nreconstructions obtained by the teacher and by the student (new Figure 7). It\\nshows that the student model is better at reconstructing fine image details. The\\ncomparison also includes a Fader Networks model trained to manipulate multiple\\nattributes, and shows that its reconstruction is distorted.\\n\\n> \\\"Also it\\u2019s not clear whether Figure 4 are results from the student model or\\n the teacher model. [...]\\\"\\n\\nSorry for this lack of clarity. Figure 4 shows results by the student model\\ntrained with Jacobian supervision. We clarified this in the manuscript.\\n\\n> \\\"[...] ablation studies for each of the different losses [...]\\\"\\n\\nWe added ablation studies for both unsupervised and supervised tasks in the new\\nsection A.3 in the appendix (page 14). Unless otherwise noted, the weights of\\nthe losses were found by evaluation on separate validation sets.\\n\\n> \\\"[...] The higher order terms in the Taylor expansion in (2) and (3)\\n can only be ignored when ||y_2 - y_1|| is small [...]\\\"\\n\\nIndeed, because of the higher order terms, even assuming (5) and (6) hold, (7)\\nis only an approximation. Note however that the norm of the approximation error\\nin (7) is that of the difference between the higher order terms of the teacher\\nand the student, namely ||o^T(||y_2-y_1||) - o^S(||y_2-y_1||)||. This might be\\nlower than the individual higher order terms, especially if both decoders\\nrespond similarly to variations in $y$. Currently, our justification is mainly\\nempirical. We also considered weighting the loss by a factor reciprocal\\nto ||y_2-y_1||, to give less importance to pairs of samples for\\nwhich ||y_2-y_1|| is large. Another option we contemplated is, for the Jacobian\\nsupervision, to consider a blurred version of the student, so that it has the\\nlow resolution of the teacher. The formulation still holds and this would also\\nmake (6) easier to enforce. In informal experiments we observed no significant\\nadvantage w.r.t. the current approach, which is simpler. We\\nleave these possible avenues of improvement as future work.\\n\\n> \\\"[...] you say that \\u201cAfter training of\\n the student with d=1 is finished, we consider it as the new teacher\\u201d. Here do\\n you append z to y when you form the new teacher?\\\"\\n\\nYes, this is correct. We clarified this in the text. \\n\\n> On page 6 in the paragraph for prediction loss, you say \\u201cThis allows the\\n decoder to naturally \\u2026\\\" of the attributes\\u201d. I guess you mean this allows the\\n model to give realistic interpolations between y=-1 and 1?\\n\\nWe intended to say that we do not require the prediction to be binary values, as\\nif we used the cross-entropy loss, but any real value. 
Thus, the decoder can\\nread the amount of attribute variation from this variable, and not only if the\\nattribute is present or not.\\n\\n> \\\"[...] \\u201cHere we could have used any random values in lieu of y_2\\u201d [...]\\\"\\n\\nWe wanted to say that the $y$ part in the fabricated latent code could be\\nrandom, but instead we sample it from the data (copy from another sample). \\nWe clarified this in the text.\\n\\n> \\\"typo: conditionnning -> conditioning\\\"\\n\\nThank you.\\n\\n> \\\"I would be inclined to boost the score up to 7 if the authors include some\\n quantitative results along with more thorough comparisons to Fader Networks\\\"\\n\\nThank you. We hope the additional quantitative and qualitative results can\\nconvince you of the superior performance of our method with respect to Fader\\nNetworks, for multiple attributes manipulation.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you very much for reviewing our work.\\n\\n\\nTo address your main concern, we added quantitative comparisons by using external\\nclassifiers to assess the conditioning of the disentangled factors.\", \"we_believe_the_new_quantitative_results_strongly_support_our_two_main_claims\": \"1) Our model outperforms Fader Networks by achieving better reconstruction and\\n multiple attribute manipulation.\\n2) Once a disentangling teacher model has been obtained, the proposed Jacobian\\n loss allows to add latent units that help improving the reconstruction while\\n maintaining the disentangling.\\n\\n\\nWe address each of your concerns below.\\n\\n\\n> \\\"e.g., it is not clear to me why Fig. 5(e) generated by proposed approach is\\n\\u201cmore natural\\u201d than Fig. 5(d)\\\" \\n\\nWe realize that this is a very subjective remark so we removed this claim from\\nthe image caption. The intent of Fig. 5 is to show that even for single\\nattribute manipulation and reconstruction, our proposed method performs similar\\nor better than Fader Networks. For multiple attributes, a Fader Network model\\ndoes not converge and has a poorer reconstruction and attribute manipulation\\nperformance. Besides the new quantitative results in Table 2 and Figure 4, this\\nis also shown qualitatively in the new Figures 7 and 8 in the appendix.\\n\\n> \\\"Also in the paper there are five hyperparameters (Eqn. 14) and the center\\nclaim is that using Jacobian loss is better. However, there is no ablation study\\nto support the claim and/or the design choice.\\\"\\n\\nWe show quantitatively in the new Table 2 and Figure 4 that using the Jacobian\\nsupervision performs better than the cycle-consistency loss, in terms of the\\ndisentanglement versus reconstruction trade-off. To measure the disentangling\\nperformance of the models, we manipulate the latent variables aiming to change\\nthe presence or absence of each attribute, and check with an external classifier\\nthat the attribute is indeed changed. We used a pre-trained classifier provided\\nby the authors of Fader Networks.\\n\\n> \\\"From my opinion, the paper should show the performance of\\nsupervised training of attributes, the effects of using Jacobian loss and/or\\ncycle loss, the inception score of generated images, etc.\\\"\\n\\nWe included ablation studies in the appendix (new Section A.3, page 14). These\\nshow the separate and combined use of Jacobian and cycle-consistency losses for\\ntraining the student (Table 5). Their combination actually works OK. 
For the\\nsake of simplicity we keep only the Jacobian loss, and the cycle-consistency\\nloss is only used to train the disentangling by the teacher.\\n\\nNote that by using an external classifier, the measure we obtain is in some\\nsense similar to an inception score.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you very much for reviewing our work.\\n\\nWe chose MNIST for the unsupervised disentangling experiment because the two\\nprincipal factors of variation are related to the digit class and thus it served\\nas a very good pedagogic example.\\n\\nTo address your first concern, we conducted further experiments for the\\nunsupervised disentanglement on the Street View House Numbers (SVHN)\\ndataset. The results are shown in the appendix (Section A.5, page 17). In this\\ncase, the two principal factors are related to the shading of the digit image\\nand not to the class. However, we found that later in the progressive discovery\\nof factors of variation, the algorithm learns factors that are quite related to\\nthe digit class (ninth and tenth factors). Then, the final student model is able\\nto manipulate the class of the digit while approximately maintaining the style\\nof the digit (Figure 11).\\n\\nTo address your second concern, we added quantitative experiments for the\\nunsupervised example of Section 3 (new Table 1). These were obtained by using an\\nexternal MNIST classifier to assess the digit class manipulation. The results\\nshow that the Jacobian supervision indeed allows a more advantageous traversing\\nof the disentanglement versus reconstruction trade-off.\\n\\nFinally, we also added quantitative results for the CelebA experiments, showing\\nthe advantage of our method with respect to Fader Networks (new Table 2 and\\nFigure 4).\"}", "{\"title\": \"Revised version\", \"comment\": \"We thank the reviewers for their constructive comments which helped us\\nto significantly improve our submission.\", \"we_did_the_following_modifications_to_address_the_reviewers_concerns\": \"1) We addressed the lack of quantitative results, which was an\\nimportant concern shared among all reviewers. By using external classifiers on\\nthe generated images, we were able to assess the degree of disentangling and\\nconditioning of the models and thus we were able to consistently quantify their\\ntrade-off between disentanglement and reconstruction.\\n\\nWe believe the resulting quantitative results further support our approach. 
In\\nparticular, we quantitatively demonstrate superior performance to Fader\\nNetworks in the facial attribute manipulation task.\\n\\n2) We extended the unsupervised experiments by including results on the SVHN\\n dataset (Section A.5 in the appendix, page 17).\\n\\n3) We added further qualitative comparison with Fader Networks on image\\n reconstrucion and attributes manipulation (Section A.4 in the appendix, page\\n 15).\\n\\n4) We added ablation studies for the different components in the loss functions\\n (Section A.3 in the appendix, page 14).\\n\\n5) We replaced Figure 3(b) by a more informative graph showing the traversal of\\n the disentanglement-reconstruction trade-off in the new Figure 4.\\n\\n\\n\\nBesides the modifications suggested by the reviewers, we also did the following\", \"changes\": \"6) We made minor modifications to the manuscript aiming to improve our\\n exposition.\\n\\n7) We use a model with different hyperparameters in Figure 1 and we corrected\\n the values of two hyperparameters in the model of Section 4.\\n\\n8) We added one missing reference (Burgess et al., 2018, NIPS workshops).\\n\\n9) We moved Table 3 to the appendix.\"}", "{\"title\": \"Nice results on image manipulation\", \"review\": \"The paper aims to learn an autoencoder that can be used to effectively encode the known attributes/ generative factors and this allows easy and controlled manipulation of the images while producing realistic images.\\n\\nTo achieve this, ordinarily, the encoder produces latent code with two components y and z where y are clamped to known attributes using supervised loss while z is unconstrained and mainly useful for good reconstruction. But his setup fails when z is sufficiently large as the decoder can learn to ignore y altogether. Smaller sized z leads to poor reconstruction.\\n\\nTo overcome this issue, the authors propose to employ a student teacher training paradigm. The teacher is trained such that the encoder only produces y and the decoder that only consumes y. This ensures good disentanglement but poor reconstruction. Subsequently, a student autoencoder is learned which has a much larger latent code and produces both y and z. The y component is mapped to the teacher encoder\\u2019s y component using Jacobian regularization.\", \"positives\": \"The results of image manipulation using known attributes is quite impressive. The authors propose modifications to the Jacobian regularization as simple reconstruction losses for efficient training. The approach avoids adversarial training and thus is easier to train.\", \"negatives\": \"Unsupervised disentanglement results are only shown for MNIST. I am not convinced similar results for unsupervised disentanglement can be obtained on more complex datasets. Authors should include some results on this aspect or reduce the emphasis on unsupervised disentanglement. Also when studying this quantitative evaluation for disentanglement such as in beta-VAE will be nice to have.\", \"typos\": \"\", \"page_3\": \"tobtain -> obtain\", \"page_5\": \"conditionning -> conditioning\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Idea is neat and qualitative results are impressive, but the paper is quite lacking in quantitative results and comparisons to other methods.\", \"review\": \"Summary: The paper proposes a method to tackle the disentanglement-reconstruction tradeoff problem in many disentangling approaches. 
This is achieved by first training the teacher autoencoder (unsupervised or supervised) that learns to disentangle the factors of variation at the cost of poor reconstruction, and then distills these learned representations into a student model with extra latent dimensions, where these extra latents can be used to improve the reconstructions of the student autoencoder compared to the teacher autoencoder. The distillation of the learned representation is encouraged via a novel Jacobian loss term that encourages the change in reconstructions of the teacher and student to be similar when the latent representation changes. There is one experiment for progressive unsupervised disentangling (disentangling factor by factor) on MNIST data, and one experiment for semi-supervised disentangling on CelebA-HQ.\", \"pros\": [\"I think the idea of progressively capturing factors of variation one by one is neat, and this appears to be one of the first successful attempts at this problem.\", \"The distillation appears to work well on the MNIST data, and does indeed decrease the reconstruction loss of the student compared to the teacher.\", \"The qualitative results on CelebA-HQ look strong (especially apparent in the video), with the clear advantage over Fader Networks being that the proposed model is a single model that can manipulate the 40 different attributes, whereas Fader Nets can only deal with at most 3 attributes per model.\"], \"cons\": [\"There are not enough quantitative results supporting the claim that the model is \\u201ceffective at both disentangling and reconstruction.\\u201d The degree of disentanglement in the representations is only shown qualitatively via latent interpolation, and only for a single model. Such qualitative results are generally prone to cherry-picking and it is difficult to reliably compare different disentangling methods in this manner. This calls for quantitative measures of disentanglement. Had you used a dataset where you know the ground truth factors of variation (e.g. dSprites/2D Shapes data) for the unsupervised disentangling method, then the level of disentanglement in the learned representations could be quantified, and thus your method could be compared against unsupervised disentangling baselines. For the semi-supervised disentanglement example on CelebA, you could for example quantify how well the encoder predicts the different attributes (because there is ground truth here) e.g. report RMSE of the y_i\\u2019s on a held out test set with ground truth. A quantitative comparison with Fader Networks in this manner appears necessary. The qualitative comparison on a single face in Figure 5 is nowhere near sufficient.\", \"There is quantitative evidence that the reconstruction loss decreases when training the student, but here it\\u2019s not clear whether this quantitative difference makes a qualitative difference in the reconstructions. Getting higher fidelity images is one of the motivations behind improving reconstructions, so It would be informative to compare the reconstructions of the teacher and the student on the same image.\", \"In the CelebA experiments, the benefit of student training is not visible in the results. In Figure 5 you already show that the teacher model gives decent reconstructions, yet you don\\u2019t show the reconstruction for the student model (quantitatively you show that it improves in Figure 3b, but again it is worth checking if it makes a difference visually). 
Also it\u2019s not clear whether Figure 4 shows results from the student model or the teacher model. I\u2019m guessing that they are from the student model.\n- These quantitative results could form the basis of doing ablation studies for each of the different losses in the additive loss (for both unsupervised & semi-supervised tasks). Because there are many components in the loss, with a hyperparameter for each, it would be helpful to know what losses the results are sensitive to for the sake of tuning hyperparameters. This would be especially useful should I wish to apply the proposed method to a different dataset.\n- I think the derivation of the Jacobian loss requires some more justification. The higher order terms in the Taylor expansion in (2) and (3) can only be ignored when ||y_2 - y_1|| is small compared to the coefficients, but there is no validation/justification regarding this.\n\nOther Qs/comments:\n- On page 5 in the last paragraph of section 3, you say that \u201cAfter training of the student with d=1 is finished, we consider it as the new teacher\u201d. Here do you append z to y when you form the new teacher?\n- On page 6 in the paragraph for prediction loss, you say \u201cThis allows the decoder to naturally \u2026. of the attributes\u201d. I guess you mean this allows the model to give realistic interpolations between y=-1 and 1?\n- bottom of page 6: \u201cHere we could have used any random values in lieu of y_2\u201d <- not sure I understand this?\n- typo: conditionnning -> conditioning\n- I would be inclined to boost the score up to 7 if the authors include some quantitative results along with more thorough comparisons to Fader Networks\n\n************ Revision ***********\n\nThe authors' updates include further quantitative comparisons to Fader Networks and ablation studies for the different types of losses, addressing the concerns I had in the review. Hence I have boosted my score up to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Need more quantitative experiments to justify the claims.\", \"review\": \"This paper proposed a novel approach for learning disentangled representations from supervised data (x as the input image, y as different attributes), by learning an encoder E and a decoder D so that (1) D(E(x)) reconstructs the image, (2) E(D(x)) reconstructs the latent vector, in particular for the vectors that are constructed by mingling different portions of the latent vectors extracted from two training samples, (3) the Jacobian matrix matches and (4) the predicted latent vector matches with the provided attributes. In addition, the work also proposes to progressively add latent nodes to the network for training. The claim is that using this framework, one avoids GAN-style training (e.g., Fader network) which could be unstable and hard to tune.\n\nAlthough the idea is interesting, the experiments are lacking. While previous works (e.g., Fader network) have both qualitative (e.g., image quality when changing attribute values) and quantitative results (e.g., classification results of generated images with novel combinations of attributes), this paper only shows visual comparison (Fig. 4 and Fig. 5), and its comparison with Fader network is a bit vague (e.g., it is not clear to me why Fig. 5(e) generated by proposed approach is \u201cmore natural\u201d than Fig. 
5(d), even if I check the updated version mentioned by the authors' comments). Also in the paper there are five hyperparameters (Eqn. 14) and the center claim is that using Jacobian loss is better. However, there is no ablation study to support the claim and/or the design choice. From my opinion, the paper should show the performance of supervised training of attributes, the effects of using Jacobian loss and/or cycle loss, the inception score of generated images, etc. \\n\\nI acknowledge the authors for their honesty in raising the issues of Fig. 4, and providing an updated version.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"thank you for your interest\", \"comment\": \"(1) In equation (1) y_i refers to an arbitrary dimension in the input space of the\\ndecoders. Both T and S decoders have the same input space for the specified\\nvariables, namely $\\\\mathds{R}^k$. In the paper we use the superscript when we\\nwant to indicate the value was produced by one of the encoders.\\n\\n(2) Please refer to our answer to item (5) below for a quantitative\\ncomparison. Yes, epoch 0 in Fig.1 (d) corresponds to the teacher. We will\\nclarify it.\\n\\n(5) We quantified the level of disentanglement as follows: we evaluated how well\\nthe first two hidden variables ($k$=2), maintain the encoding of the digit class\\nin the student models. We take two images of different digits from the test set,\\nfeed them to the encoder, swap their corresponding 2D subpart of the latent code\\nand feed the fabricated latent codes to the decoder. We then run a pre-trained\\nMNIST classifier in the generated image to see if the class was correctly\\nswapped.\\n\\n| model | $d$ | recons. MSE | swaps OK |\\n|---------------------------------+-----+---------------+-----------|\\n| teacher | 0 | 3.66e-2 | 80.6% |\\n| student w/ Jac. sup. (*) | 14 | 1.38e-2 | 57.2% |\\n| student wo/ Jac. sup. | 14 | 1.12e-2 | 32.0% |\\n| student wo/ Jac. sup | 10 | 1.40e-2 | 41.4% |\\n|---------------------------------|------|--------------|------------|\\n| random weights | 14 | 1.16e-1 | 9.8% |\\n\\nWe observe that at the same level of reconstruction performance (~1.4e-2), the\\nstudent with Jacobian supervision maintains a better disentangling of the class\\n(under this metric) than the student without it. We will include a figure\\nshowing that the reconstruction-disentanglement trade-off traversed by varying\\n$d$ is indeed more advantageous for our model. Note that the first two variables\\ndo not encode perfectly the digit class. This advantage in the trade-off is much\\nlarger in the application of Section 4.\\n\\n(*) Note: this model was trained with $\\\\lambda_{diff} = 0.1$ instead of $1.0$ as\\nthe one currently in the paper. The figure will be updated for this model.\\n\\n(4) We evaluated the disentangling measure (described in (5)), on the\\nMNIST test set, for the student with Jacobian supervision:\\n\\n| xcov weight | $d$ | recons. MSE | swaps OK |\\n|-------------+-----+-------------+----------|\\n| 1e-3 | 14 | 1.38e-2 | 57.2% |\\n| 1e-2 | 14 | 1.46e-2 | 56.3% |\\n| 1e-1 | 14 | 1.49e-2 | 56.6% |\\n\\n(6) Thank you for remarking this important point. 
In this paper we use the\", \"word_disentangling_to_refer_to_both_aspects\": \"a) each latent unit in the specified part is sensitive to one generative factor\\nb) the value of each of these latent units conditions the generated output such\\nthat it varies the corresponding generative factor\\n\\nWe will clarify this in the manuscript and revise the text to make sure it is\\ncoherent.\\n\\n(7) See item (8)\\n\\n(8) We evaluated quantitatively how well the output is conditioned to the specified\\nfactors, similarly to the procedure described in item (5). To do this, for each\\nimage in the CelebA test set, we tried to flip each of the 32 disentangled\\nattributes, one at a time (e.g. eyeglasses/no eyeglasses). We did the flipping\\nby setting the latent variable y_i to sign(y_i)*-1*\\\\alpha, with \\\\alpha >0 a\\nmultiplier to exaggerate the attribute, found in a separate validation set for\\neach model (\\\\alpha=40 for all).\\n\\nTo verify that the attribute was indeed flipped in the generated image, we used\\nan external classifier trained to predict each of the attributes. We used the\\nclassifier provided by the authors of Lample et al. (2017), which was trained\\ndirectly on the CelebA dataset.\", \"the_results_are_as_follows\": \"| model | $d$ | flips OK | recons. MSE |\\n|------------------------------+--------+------------+---------------|\\n| teacher | 2048 | 73.1% | 1.82e-3 |\\n| student w/ Jac. sup. | 8192 | 72.2% | 1.08e-3 |\\n| student wo/ Jac. sup. | 8192 | 42.7% | 1.04e-3 |\\n|------------------------------+--------+------------+---------------|\\n| Lample et al., 2017 | 2048 | 43.1% | 3.08e-3 |\\n| random weights | 2048 | 20.2% | 1.01e-1 |\\n\\nAt approximately the same reconstruction performance, the student with Jacobian\\nsupervision is significantly better at flipping attributes than the student\\nwithout it. \\n\\nWe also trained a Fader Networks model (Lample et al., 2017) with the same\\nhyperparameters and training epochs as our teacher model. The result suggests\\nthat the adversarial discriminator acting on the latent code harms the\\nreconstruction and that the conditionning is worse than with our teacher model.\\n\\n(9) We will add to the appendix the result of trying the same experiment as in\\nFigure 4, but using the student model without Jacobian supervision. It will be\\nclear from this experiment that the latter cannot effectively control most of\\nthe attributes.\"}", "{\"comment\": \"Dear authors, this is an interesting paper but I have a few questions and concerns:\\n\\n(1) In equation (1) could you explain why y is used instead of y^S and y^T? Is y supposed to refer to some Oracle factors? And if so, it is not clear what assumption the authors are making later in the paper to relate y to y^S and y^T.\\n\\n(2) In Figure 1., the authors claim that the student obtains better reconstruction than the teacher, however is there any quantitative comparison? It is not clear if Figure 1.(d) is sufficient to show this? Does epoch 0 correspond to the teacher? If it does, it would be good to say this explicitly.\\n\\n(3) The derivation of Equation (7) is clear and very easy to follow. 
\\n\\n(4) Is it possible to quantify the contribution of L_{xcov} to the model?\\n\\n(5) The authors say that: \\n`Once the student model is trained, it generates a better reconstructed image than the teacher model, thanks to the expanded latent code, while maintaining the conditionning of the output that the teacher had.\\u2019\\n\\nThe authors have not quantified the level of \\u2018conditionning\\u2019 (disentanglement) for either the student or the teacher, so it is not clear if this claim is well backed, or the extent to which this is true. It would be hard for other researchers to build on this work, without having methods to qualitatively compare models. Higgins et al. ICLR 2017 propose one method for measuring disentanglement.\\n\\n(6) A more serious concern is that the term disentanglement as defined in the abstract:\\n\\n`where a subset of the latent variables is constrained to correspond to specific factors'\\n\\n is not clear nor is it consistently used throughout the paper. When the authors disentangle MNIST, they appear to be searching for linear separability, and when they disentangle CelebA they appear to be trying to assign one factor of variation (attribute) to each unit of y^T. Additionally, the paper refers more to \\u2018conditionning\\u2019 than disentanglement, it would be nice to rectify or explain this discontinuity between the main body of the text and the title.\\n\\n(7) Reconstruction results in Figure 4. appear to be very good, however there is no quantitative evaluation nor comparison with other models.\\n\\n(8) Additionally, while most of the results in Figure 4. are visually pleasing, there are no quantitative results. From these visual results it is not clear how reliably (or consistently) the model is able to edit the correct attribute? \\n\\n(9) The authors say that:\\n`In comparison, a student model with enlarged latent code but that continues with the training procedure as the teacher, without Jacobian supervision, achieves good reconstruction but loses the effective conditionning on the attributes.\\u2019\\n\\nThere are no quantitative (or qualitative) results to demonstrate that the disentanglement is worse in the `student model with enlarged latent code'.\", \"title\": \"Interesting paper, but I have a few questions and concerns\"}" ] }
H1l7bnR5Ym
ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees
[ "Hao He", "Hao Wang", "Guang-He Lee", "Yonglong Tian" ]
Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN). In this paper, we propose a novel probabilistic framework for GANs, ProbGAN, which iteratively learns a distribution over generators with a carefully crafted prior. Learning is efficiently triggered by a tailored stochastic gradient Hamiltonian Monte Carlo with a novel gradient approximation to perform Bayesian inference. Our theoretical analysis further reveals that our treatment is the first probabilistic framework that yields an equilibrium where generator distributions are faithful to the data distribution. Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments for GANs.
[ "Generative Adversarial Networks", "Bayesian Deep Learning", "Mode Collapse", "Inception Score", "Generator", "Discriminator", "CIFAR-10", "STL-10", "ImageNet" ]
https://openreview.net/pdf?id=H1l7bnR5Ym
https://openreview.net/forum?id=H1l7bnR5Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1e8v-B-x4", "rJeRLZfI14", "rJg-57eh07", "BJlIfBJ2RX", "Skg9rHqqC7", "Hyx9hN59A7", "B1lq-8vRa7", "Hkgyn9ICTm", "B1g1aeEapX", "Byg6_RX6Tm", "Hkxok8co27", "ryxAXLZYhm", "SJxci3qu3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544798558219, 1544065365817, 1543402377368, 1543398669751, 1543312706213, 1543312561923, 1542514177967, 1542511270893, 1542434999460, 1542434420551, 1541281251228, 1541113382009, 1541086369989 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1159/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1159/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/Authors" ], [ "ICLR.cc/2019/Conference/Paper1159/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1159/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1159/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new method that builds on the Bayesian modelling framework for GANs and is supported by a theoretical analysis and an empirical evaluation that shows very promising results. All reviewers agree that the method is interesting and the results are convincing, but that the model does not really fit in the standard Bayesian setting due to a data dependency of the priors. I would therefore encourage the authors to reflect this by adapting the title and making the differences more clear in the camera-ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting new model with good performance\"}", "{\"title\": \"Thanks for your helpful comments.\", \"comment\": \"Following is our response to your updated comments.\\n\\n=== Bayesian GAN prior in toy experiment ===\\n\\nWe agree that using a broad normal prior would be a better choice. We have rerun our experiment with the normal prior (with mean 0, std 1) and got similar results to those in the uniform prior case. We will certainly include the new result in our next version of the paper.\\n\\nFollowing is a remark on the modification we make on the toy model to use the normal prior. We reparameterize the generator and discriminator. A generator with parameter theta^g produces a data distribution p(x_i; theta^g) = exp(theta^g_i) / sum_j exp(theta^g_j). Under a normal prior N(0,1), its prior probability is p(theta^g) = \\\\prod_j exp(- theta^g_j * theta^g_j / 2). A discriminator with parameter theta^d has a score function D(x_i; theta^d) = sigmoid(theta^d_i). Under a normal prior N(0,1), its prior probability is p(theta^d) = \\\\prod_j exp(- theta^d_j * theta^d_j / 2).\\n\\n=== why to strip the normal prior ===\\n\\nIt is a very good point. Thank you for your insightful comment on it. \\n\\nIt is true that the prior is crucial for a Bayesian model and should encode domain knowledge. What our empirical analysis shows is that a Gaussian prior is not helpful in the task of Bayesian GANs. 
At least, the normal prior does not show an advantage over the non-informative prior. Intuitively, putting a normal prior is very similar to having an L2 regularization when training a neural network. It looks helpful in the sense of robustness to overfitting. However, we remark that, unlike typical supervised learning where model fitting is connected to generalization performance, an \"overfitting\" model that matches the data distribution perfectly is desirable for GANs.\\n\\nSince the normal prior does not work, we need a more involved prior that makes the Bayesian modeling work. Our solution is the \\u201cunorthodox\\u201d generator prior you mentioned. Although it looks rather \\u201cunorthodox\\u201d at first blush, this generator prior is standard in the following senses: (1) As we previously explained to R3, our Bayesian model actually includes two separate models, one for the generator and one for the discriminator. Hence from the generator\\u2019s perspective, the generator is the \\u2018model\\u2019 and the discriminator is the \\u2018data\\u2019, and the other way around from the discriminator's perspective. Note that the real data distribution we want to learn is actually a third-party component; it is, therefore, proper to involve the real data in the prior. (2) Our generator prior encodes our prior belief in the sense that the generator distribution should be stable if the discriminator cannot distinguish the synthetic data and the real data well.\\n\\n=== robustness to overfitting ===\\n\\nWe want to emphasize that the setting we are handling is different from the traditional prediction settings where Bayesian methods are commonly applied. When dealing with classification or regression, robustness to overfitting is quite important. However, in the GAN computation setting, the overfitting issue is not the main concern since the goal is to produce a distribution that matches the real data distribution perfectly, rather than to generalize to unseen data.\"}", "{\"title\": \"rating update\", \"comment\": \"Thanks for your responses!\\n\\nI really like the way you clarify the differences in expectation over objective vs. objective of expected values. However, if you compare your method to the BGAN, you should use the priors as defined in their paper (i.e. broad normals in theirs vs. uniform in yours) when characterizing the BGAN.\\n\\nAs I mentioned in my initial review, I think you effectively strip all priors from the model formulation that are independent of data. In my understanding, this is the basis of any Bayesian model: a data-independent prior that encodes prior belief on the parameters and is then updated by the data via the likelihood.\\nThe implicit uniform prior on your discriminator distribution might be interpreted as non-informative (one could argue it to be a Jeffreys prior), but the definition of the prior on the generator distribution being the posterior state of the last update step is rather unorthodox and, more importantly, relies on data. \\nThis might strip the inherent property of robustness to overfitting from the model, which should be one of the main reasons to formulate a Bayesian model in the first place, and I think that more elaboration on why this is still a Bayesian model (as they claim in the paper title) is needed here.\\n\\nI totally agree with Rev3 that this is a nice model with impressive results, I'm just not convinced by the explanation of why this is. 
As is argued in the original Bayesian GAN paper, having uniform priors is effectively the same as using a classical GAN. One could see your approach as a clever (probabilistic) extension of the optimization procedure of the classical GAN.\\n\\nGiven that you have clarified on other of my concerns, I updated my rating.\"}", "{\"title\": \"agree on reviewers comments re. Bayesian interpretation\", \"comment\": \"> More elaboration on why this is still a Bayesian model (as they claim in the paper title) is needed here.\\n\\nAgreed, the paper offers a Bayesian model for the adversarial training computation.\\nWhich is good.\\nIt does not offer a traditional Bayesian model which conditions the prior on the observed data.\\nThis should be clarified.\"}", "{\"title\": \"Thanks again for your thoughtful comments.\", \"comment\": \"#1 You have a good Bayesian model of the GAN computation, but it is still not a Bayesian model of the unsupervised inference task.\\n\\nYes, you are right. In this work, we aim to develop a better Bayesian model of the GAN computation. Generally, Bayesian models for unsupervised inference tasks could be a larger topic.\\n\\n#2 I want to see results on the big data sets.\\n\\nThanks for being positive of our work. We have included the results on STL-10 and ImageNet in our revision of the paper (e.g., Table 4 and Figure 4 of Section 5.2). As mentioned in the general response above, our model does provide better performance on both datasets with significant improvements of FID scores. We hope that with the additional results, the experimental work in the current version is more conclusive.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for the insightful comments and helpful suggestions. Here we summarize the major changes we did in the revision of our paper.\\n\\n1. Adding experiment results on STL-10 and ImageNet.\\n\\nWe follow R3\\u2019s suggestion to compare our models and baselines (MGAN, BGAN) on the larger datasets. Our model does provide better performance on both datasets. Especially, the improvement of FID scores looks significant. We include the new experiment results (Table 4 and Figure 4) in Section 5.2.\\n\\n2. Updating Inception score and FID results on CIFAR-10.\\n\\nThanks to R3\\u2019s help, we find the discrepancy between FIDs given by the PyTorch model and the Tensorflow model. We have switched to the official Tensorflow model for evaluation and updated all results in Table 3 (Section 5.2). We also put a remark in Section B.1 (of the appendix) to make it clearer.\\n\\n3. Emphasizing the difference between our model and Bayesian GAN.\\n\\nR2 suggests that we elaborate more about the difference between our likelihood design (objective value of expectation) and Bayesian GAN\\u2019s likelihood (expectation of objective value). We revise Section 4.2 to explain the differences both in the likelihood and in the prior more clearly.\\n\\n4. Adding a toy experiment to demonstrate different convergence behavior of our model and Bayesian GAN (Figure 1).\\n\\nWe include a new toy experiment on categorical distributions as empirical support for the superior convergence property of our model over the Bayesian GAN. \\n\\nIn our toy experiment, the data is sampled from a finite discrete space (more specifically, a categorical distribution). It is ideal to examine the Bayesian formulation in a finite case since the posterior can then be computed analytically and does not have error caused by inference algorithms. 
We try different combinations of likelihoods and priors in the experiment and compare their learned distributions. \\n\\nIn Figure 1, we visualize the generated data distributions of different models after they converge. The results show that only when using the combination of our likelihood and our prior can the model converge to the correct equilibrium. The full details of the experiment are included in Section D (of the appendix). This example also serves as an illustration of the convergence issue of Bayesian GAN.\", \"minor_changes\": \"1. Change the term \\u2018hit error\\u2019 to \\u2018hit distance\\u2019 (e.g., in Table 2) to avoid the potential misunderstanding of its meaning.\\n\\n2. Add a few sentences in Section 4.1 to explain why Theorem 1 does not hold for Bayesian GAN.\"}", "{\"title\": \"Updating Inception and Frechet Inception Distance Results on CIFAR10. (Table 3 in the paper)\", \"comment\": \"Previously, our FID results are computed using a PyTorch Implementation (https://github.com/mseitzer/pytorch-fid). Note that there exists a large discrepancy between the FID results conducted by PyTorch Inception model and Tensorflow model. Hence, to facilitate the comparison with previous paper, we decide to reevaluate by the official Tensorflow FID computation code (https://github.com/bioinf-jku/TTUR).\\n\\nHere are the updated results.\\n\\n Inception scores (higher is better)\\n GAN-MM & GAN-NS & WGAN & LSGAN \\nDCGAN & 6.53 & 7.21 & 7.19 & 7.36 \\nMGAN & 7.19 & 7.25 & 7.18 & 7.34\\nBGAN & 7.21 & 7.37 & 7.26 & 7.46\\nours-PSA & 7.75 & 7.53 & 7.28 & 7.36\\n\\n FIDs (lower is better)\\n GAN-MM & GAN-NS & WGAN & LSGAN \\nDCGAN & 35.57 & 27.68 & 28.31 & 29.11 \\nMGAN & 30.01 & 27.55 & 28.37 & 30.72\\nBGAN & 29.87 & 24.32 & 29.87 & 29.19\\nours-PSA & 24.60 & 23.55 & 27.46 & 26.90\\n\\nNote that we are reporting the results with the highest \\u2018Inception score - 0.1 FID\\u2019 for each model. Thus the Inception scores results are also updated.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for the insightful comments.\\nFollowing is our response to your concerns.\\n\\n=== experiments ===\\n\\nWe will include results on STL-10 and ImageNet in the revision, or a later version if our machines cannot catch up the rebuttal deadline. Compared with Bayesian GAN, actually, we did a more thorough study on the choice of objective function, and our synthetic dataset is harder and more illustrative.\\n\\nHere we clarify the discrepancy between our quantitative evaluation of MGAN and that of the original paper. We actually use the official open-sourced code of MGAN with the same configurations (model architectures, training data). The discrepancy comes from the inception model used to compute FID. We compute FID with PyTorch Inception model (https://github.com/mseitzer/pytorch-fid.). The original MGAN paper did not say which inception model they have used. Our guess is that they used the Tensorflow inception model (https://github.com/bioinf-jku/TTUR). We observed FID computed by PyTorch model is much lower than that computed by the Tensorflow model, because of the different weights of the pre-trained models. A similar phenomenon has been recently observed for Inception Score [1]. To favor a more complete comparison, we will update our FID results by switching to the Tensorflow version.\\n\\nWe had posted the updated results in the comment. In our experiments, the MGAN with GAN-NS objective has the same setting with original MGAN. 
The Inception score and FID we get are 7.25 and 27.55, which are both worse than the scores reported in the original paper, 8.33 and 26.7. We train MGAN with the officially released code under the configuration reported in the MGAN paper (Table 4 in the appendix). The scores we report are the best we could obtain over several training trials.\\n\\n[1] Barratt, Shane, and Rishi Sharma. \\\"A Note on the Inception Score.\\\" arXiv preprint arXiv:1801.01973 (2018).\\n\\n=== Bayesian formulation ===\\n\\nOur method has two separate Bayesian models, one for the generator and one for the discriminator. Take the Bayesian perspective for the generator as an example. The likelihood defined in the first equation of Eqn 2 gives the probability of observing some fixed discriminator distribution for some generator parameter, i.e., p(D^{(t)} | \\\\theta_g). Combined with the prior of the generator parameter q^{(t)}(\\\\theta_g), it is a Bayesian model from a strict perspective. Indeed, to see the correspondence with \\u2018model parameter\\u2019 and \\u2018data\\u2019 in classic Bayesian theory, our generator is the \\u2018model\\u2019 and the discriminator is the \\u2018data\\u2019. We estimate the generator distribution from the observed discriminator distribution.\\n\\nThe novelty relative to classic Bayesian models lies in the inference procedure. We integrate the two standard Bayesian models into a dynamical system: the two Bayesian problems are solved alternately. From a game-theoretic point of view, each optimization problem is the best-response strategy of the corresponding player, and the equilibrium yields a generator distribution that produces the target data distribution. \\n\\n=== Why time-series modelling ===\\n\\nThe problem is not a time-series problem. We simply solve it in an iterative manner (akin to SGD, which can iteratively solve both time-series and non-time-series problems). Our goal is to find the equilibrium of the generator and discriminator distributions, where they satisfy each other\\u2019s posterior under our Bayesian criterion. It is, however, possible to find the equilibrium via an iterative scheme. We will make this part clearer in the revision.\\n\\n=== A clarification about theorem 1 ===\\n\\nIt is indeed true that Theorem 1 only gives an analysis of the optimal solution in an asymptotic scenario. Unfortunately, it is, to the best of our knowledge, the strongest property that has been obtained in the recent literature on GANs [2, 3, 4, 5, 6]. However, please note that Bayesian GAN does not even possess such an asymptotic property, and our analysis in Section 4.2 reveals the difficulty of avoiding this problem. In contrast, our method is the first Bayesian method to establish such a property. \\n\\n[2] Goodfellow, Ian, et al. \\\"Generative adversarial nets.\\\" (NIPS 2014)\\n[3] Hoang, Quan, et al. \\\"MGAN: Training generative adversarial nets with multiple generators.\\\" (ICLR 2018)\\n[4] Arjovsky, Martin, Soumith Chintala, and L\\u00e9on Bottou. \\\"Wasserstein generative adversarial networks.\\\" (ICML 2017)\\n[5] Mao, Xudong, et al. \\\"Least squares generative adversarial networks.\\\" (ICCV 2017)\\n[6] Zhao, Junbo, Michael Mathieu, and Yann LeCun. \\\"Energy-based generative adversarial network.\\\" (ICLR 2017)\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Dear AnonReviewer2,\\n\\nThank you for the feedback.
\\nFollowing is our response to your concerns.\\n\\n=== convergence of Bayesian GAN ===\\n\\nThe convergence of Bayesian GAN is indeed a problem; identifying and addressing it is one of our key contributions. Bayesian GAN has a subtle difference from the original GANs during learning. To compute the posterior, Bayesian GAN cannot be learned by vanilla gradient descent methods, but is learned by SGHMC. In the SGHMC framework, the gradient is always perturbed by injected white noise. Thus, if the gradient from the discriminator is always zero, the generator distribution will converge to a Gaussian distribution instead of staying unchanged.\\n\\nIn contrast, we fix this issue with a well-crafted prior for the generator distribution. Intuitively, the gradient from the prior helps counteract the noise and prevents degeneracy of the generator distribution towards a Gaussian distribution. Please note that Theorem 1 does not hold without introducing a suitable prior for the generator.\\n\\n\\n=== expectation of objective value vs. objective value of expectation ===\\n\\nThis difference is another very critical improvement over Bayesian GAN. We will make it clearer in the revision of the paper.\\n\\nAs shown in Eqn 8, to compute the likelihood, Bayesian GAN takes the expectation after computing the GAN objective value, while, as shown in Eqn 2, we compute the GAN objective value after the expectation. This subtle adjustment is crucial. Theorem 1 will not hold if the likelihood is defined as the expectation of the loss value, as Bayesian GAN does. Intuitively, because the expectation \\\\E_{q_g}[p_{gen}(x;\\\\theta_g)] is equivalent to the data distribution p_model(x) produced by the generator distribution, it makes sense to compute the GAN objective over it rather than in the reversed order (as in Bayesian GAN). Besides, it\\u2019s easy to see that the gradients of the two different likelihoods are different since, for a given function f, the gradient of \\\\sum_i f(x_i) is usually different from that of f(\\\\sum_i x_i).\\n\\n=== clarification on incompatibility ===\\n\\nThe incompatibility corresponds to the incompatibility between two conditional distributions that cannot belong to the same joint distribution. We identify a theoretical flaw of Bayesian GAN under a very simple setting (when using only a single Monte-Carlo sample) that leads to incompatible conditionals of the generator and discriminator. Moreover, we are not very certain about the concern \\u201cthe used posteriors are conditional distributions with non-identical conditioning sets. I doubt that the argument still holds under this setting.\\u201d Further explanation of \\u201cnon-identical conditioning sets\\u201d would be appreciated.\\n\\n=== relationship between hit error and coverage ===\\n\\nBy our definition, \\u2018hit error\\u2019 is the average distance between the generated data points (projected into a low-dimensional space) and the low-dimensional hyperplane that the ground-truth mode lies in, while the \\u2018coverage error\\u2019 measures the similarity between the distribution of the projected data points and the ground-truth data distribution, which is uniform.\\n\\nNote that these two metrics are actually orthogonal to each other, due to the fundamental difference between projection distances (\\u2018hit error\\u2019) and how the projections are distributed (\\u2018coverage error\\u2019). It\\u2019s possible to get the same projection distances in a scattered or dense way. It\\u2019s also possible to get the same projections from different projection distances.
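To make the orthogonality of the two metrics concrete, here is a minimal NumPy sketch in the spirit of the definitions above. The paper's exact formulas are not reproduced in this thread, so the projection setup, the histogram-based coverage measure, and all names below are illustrative assumptions:

```python
import numpy as np

def hit_distance(points, basis):
    """Average distance from generated points to the mode hyperplane
    spanned by the orthonormal columns of `basis` (illustrative only)."""
    proj = points @ basis @ basis.T            # component inside the hyperplane
    return np.linalg.norm(points - proj, axis=1).mean()

def coverage_error(points, basis, bins=20):
    """Crude L1 distance between the distribution of in-plane projection
    coordinates and a uniform target (illustrative only)."""
    coords = points @ basis                    # coordinates within the hyperplane
    hist, _ = np.histogramdd(coords, bins=bins, density=True)
    return np.abs(hist - hist.mean()).mean()
```

Note that `hit_distance` only looks at the residual orthogonal to the hyperplane, while `coverage_error` only looks at the in-plane coordinates, which is exactly why the two quantities can vary independently.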
\\n\\nWe will change the terminology \\u2018hit error\\u2019 to \\u2018hit distance\\u2019 to make it clearer in our revision.\\n\\n=== further analysis of our inference algorithm ===\\n\\nThe momentum interpretation seems an interesting direction for obtaining a formal explanation of such approximations, but we do not have a concrete analysis yet and leave it as future work.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Dear AnonReviewer1,\\n\\nThank you for agreeing with the significance of our contribution and voting to accept our paper. We will address the typos.\\n\\nWe make an additional remark here, which might be interesting. Bayesian modeling has been introduced in several mini-max problems in the deep learning community, such as adversarial (robust) learning [1] and GANs. However, most prior works pose the Bayesian method as a heuristic without theoretical analysis. This work presents an important initial step toward a rigorous study of modernized Bayesian approaches. \\n\\n[1] Nanyang Ye, Zhanxing Zhu. Bayesian Adversarial Learning. 32nd Annual Conference on Neural Information Processing Systems (NIPS 2018)\"}", "{\"title\": \"experimental work now conclusive\", \"review\": \"PRIOR COMMENT: This paper should be rejected based on the experimental work.\\nExperiments need to be reported for larger datasets. Note the MGAN\\npaper reports results on STL-10 and ImageNet as well.\", \"note\": \"thanks for your good explanation of the Bayesian aspects of the model ...\\nyes I agree, you have a good Bayesian model of the GAN computation, but it\\nis still not a Bayesian model of the unsupervised inference task. This is a somewhat\\nminor point, and should not in any way influence the worth of the paper ... but clarification\\nin the paper would be nice.\", \"lemma_1\": \"Very nice observation!! I was trying to work that out,\\nonce I got to Eqn (3), and you thought of it. \\n\\nAlso, you do need to explain 3.2 better. The BGAN paper, actually, is\\na bit confusing from a strict Bayesian perspective, though for\\ndifferent reasons. The problem you are looking at is not a\\ntime-series problem, so it is a bit confusing to be defining it as\\nsuch. You talk about an iterative Bayesian model with priors and\\nlikelihoods. Well, maybe that can be *defined* as a probabilistic\\nmodel, but it is not in any sense a Bayesian model for the estimation\\nof $p_{model}$.\", \"now_my_apologies_to_you\": \"I could make somewhat related statements\\nabout the theory of the BGAN paper, and they got to publish theirs at\\nICLR! But they did do more experimentation.\\n\\nOh, and some smaller but noticeable grammar/word usage issues.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Stripping the priors from Bayesian GANs\", \"review\": \"Summary\\n=========\\nThe paper extends Bayesian GANs by altering the generator and discriminator parameter likelihood distributions and their respective priors. \\nThe authors further propose an SGHMC algorithm to collect samples of the resulting posterior distributions on each parameter set and evaluate their approach on both a synthetic data set and the CIFAR-10 data set.
\\nThey claim superiority of their method, reporting a higher distance to mode centers of generated data points and better generator space coverage for the synthetic data set, and better Inception scores for the real-world data.\\n\\nReview\\n=========\\nAs an overall comment, I found the language poor, at times misleading.\\nThe authors should have their manuscript proof-read for grammar and vocabulary.\", \"examples\": \"- amazing superiority (page 1, 3rd paragraph)\\n- Accutally... (page 1, end of 3rd paragraph)\\n- the total mixture of generated data distribution (page 3, mid of 3.1)\\n- Similarity we define (page 3, end of 3.1)\\n- etc.\\nOver the whole manuscript, determiners are missing.\\n\\nThe authors start out with a general introduction to GANs and Bayesian GANs in particular, \\narguing that it is an open research question whether the generator converges to the true data generating distribution in Bayesian GANs.\\nI do not agree here. The Bayesian GAN defines a posterior distribution for the generator that\\nis proportional to the likelihood that the discriminator assigns to generated samples.\\nThe better the generator, the higher the likelihood that the discriminator assigns to these samples.\\nIn the case of a perfect generator, the discriminator is equally unable to distinguish real and generated samples and consequently degenerates to a constant function.\\nUsing the same symmetry argument as the authors, one can show that this is the case for Bayesian GANs.\\n\\nWhile defining the likelihood functions, the iterator variable t is used without introduction.\\n\\nFurther, I am confused by their argument of incompatibility.\\nFirst, they derive a Gibbs-style update scheme based on single samples for generator and discriminator parameters using\\nposteriors in which the noise has been explicitly marginalized out by utilizing a Monte Carlo estimate.\\nSecond, the used posteriors are conditional distributions with non-identical conditioning sets.\\nI doubt that the argument still holds under this setting.\\n\\nWith respect to the remaining difference between the proposed approach and Bayesian GAN,\\nI'd like the authors to elaborate on where exactly the difference between expectation of objective value\\nand objective value of expectation lies.\\nSince the original GAN objectives used for crafting the likelihoods are deterministic functions,\\nrandomness is introduced by the distributions over the generator and discriminator parameters.\\nI would have guessed that expectations propagate into the objective functions.\\n\\nIt is, however, interesting to analyze the proposed inference algorithm, especially the introduced posterior distributions.\\nFor the discriminator, this corresponds simply to the likelihood function.\\nFor the generator, the likelihood is combined with some prior for which no closed-form solution exists.\\nIn fact, this prior changes between iterations of the inference algorithm.\\nThe resulting gradient of the posterior decomposes into the gradient of the current objective and the sum over all previous gradients.\\nWhile this is not a prior in the Bayesian sense (i.e.
in the sense of an actual prior belief), it would be interesting to have a closer look at the effect this has on the sampling method.\\nMy educated guess is that this conceptually adds to the momentum term in SGHMC and thus slows down the exploration of the parameter space and results in better coverage.\\n\\nThe experiments are inspired by the ones done in the original Bayesian GAN publication.\\nI liked the developed method to measure coverage of the generator space, although I find the\\nterm hit error misleading.\\nGiven that the probabilistic methods all achieve a hit rate of 1, a lower hit error actually points to worse coverage.\\nI was surprised to see that hit error and coverage are not consistently negatively correlated.\\nAdding statistics over several runs of the models (e.g. 10) would strengthen the claim of superior performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A Bayesian GAN where the data distribution is an equilibrium\", \"review\": \"Mode collapse in the context of GANs occurs when the generator only learns one of the\\nmultiple modes of the target distribution. Mode collapse can be tackled, for instance, using the Wasserstein distance instead of the Jensen-Shannon divergence. However, this sacrifices accuracy of the generated samples.\\n\\nThis paper is positioned in the context of Bayesian GANs (Saatci & Wilson 2017) which, by placing a posterior distribution over the generative and discriminative parameters, can potentially learn all the modes. In particular, the paper proposes a Bayesian GAN that, unlike previous Bayesian GANs, has theoretical guarantees of convergence to the real distribution.\\n\\nThe authors put likelihoods over the generator and discriminator with logarithms proportional to the traditional GAN objective functions. Then they choose a prior on the generative parameters which is the output of the last iteration. The prior over the discriminative parameters is a uniform improper prior (constant from minus to plus infinity). Under these specifications, they demonstrate that the true data distribution is an equilibrium under this scheme. \\n\\nFor the inference, they adapt the Stochastic Gradient HMC used by Saatci & Wilson. To approximate the gradient of the discriminator, they take samples of the generator parameters. To approximate the gradient of the generator, they take samples of the discriminator parameters, but they also need to compute a gradient of the previous generator distribution. However, because this generator distribution is not available in closed form, they propose two simple approximations.\\n\\nOverall, I enjoyed reading this paper. It is well written and easy to follow. The motivation is clear, and the contribution is significant. The experiments are convincing enough, comparing their method with Saatci's Bayesian GAN and with the state-of-the-art GANs that deal with mode collapse. It seems an interesting improvement over the original Bayesian GAN, with theoretical guarantees and an easy implementation.\", \"some_typos\": [\"The authors argue that compare to point mass...\", \"The authors argue that, compared to point mass...\", \"Theorem 1 states that any the ideal generator\", \"Theorem 1 states that any ideal generator\", \"Assume the GAN objective and the discriminator space are symmetry\", \"Assume the GAN objective and the discriminator space have symmetry\", \"Eqn.
8 will degenerated as a Gibbs sampling\", \"Eqn. 8 will degenerate as a Gibbs sampling\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJeQbnA5tm
Noisy Information Bottlenecks for Generalization
[ "Julius Kunze", "Louis Kirsch", "Hippolyt Ritter", "David Barber" ]
We propose Noisy Information Bottlenecks (NIB) to limit mutual information between learned parameters and the data through noise. We show why this benefits generalization and allows mitigation of model overfitting both for supervised and unsupervised learning, even for arbitrarily complex architectures. We reinterpret methods including the Variational Autoencoder, beta-VAE, network weight uncertainty and a variant of dropout combined with weight decay as special cases of our approach, explaining and quantifying regularizing properties and vulnerabilities within information theory.
[ "information theory", "deep learning", "generalization", "information bottleneck", "variational inference", "approximate inference" ]
https://openreview.net/pdf?id=HJeQbnA5tm
https://openreview.net/forum?id=HJeQbnA5tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlw3s0beE", "HkxD7LKWxE", "BygQkyDWx4", "rJxxMZL-x4", "rygrcAAAyE", "ryxS38C_am", "SkEt8Cdp7", "rkx7sBROTX", "r1xY4JK62Q", "SJllCbbahQ", "BkgYfE5o3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544838063419, 1544816159081, 1544806107222, 1544802568137, 1544642189338, 1542149804622, 1542149755513, 1542149531413, 1541406512763, 1541374408192, 1541280784860 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1158/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1158/Authors" ], [ "ICLR.cc/2019/Conference/Paper1158/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1158/Authors" ], [ "ICLR.cc/2019/Conference/Paper1158/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1158/Authors" ], [ "ICLR.cc/2019/Conference/Paper1158/Authors" ], [ "ICLR.cc/2019/Conference/Paper1158/Authors" ], [ "ICLR.cc/2019/Conference/Paper1158/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1158/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1158/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a regularization method that introduces an information bottleneck between parameters and predictions.\\n\\nThe reviewers agree that the paper proposes some interesting ideas, but those idea need to be clarified. The paper lacks in clarity. The reviewers also doubt whether the paper is expected to have significant impact in the field.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas, lacking in clarity.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you so much for your prompt clarification.\\n\\nThe meaning of \\\\theta depends on the context, for supervised learning it would be the parameters of the model (e.g. a neural network), for a latent variable model in unsupervised learning it would also include the latents (which we have denoted as Y in the paper). The source of confusion here is certainly that \\\\theta -> D is only the model we derive ours from, and we obtain our Markov chain model by **replacing** the original model parameters with a noisy version of \\\\theta, not by appending it. In other words, the model \\\\theta -> D you have in mind is not a part of our model, it is only our starting point from which we motivate our approach.\\n\\nFor example in deep supervised learning, we typically have some neural network parameterized by \\\\theta for which we learn point estimates. In contrast, in our model we use a noisy version \\\\tilde{\\\\theta} **in place** of the parameters, and we only learn the mean of this noisy version. Intuitively, the network never sees the exact value of this learned mean, but only noise-corrupted versions of it, which limits how much information we can learn about D. The conditional distribution given by the network stays the same, hence p'(D | \\\\tilde{\\\\mu}) is identical to the original p(D | \\\\theta).\", \"to_further_illustrate_this_point_from_another_perspective\": \"If you were to generate data from our model, you would first sample a \\\\theta from the prior, sample a noisy version \\\\tilde{\\\\theta} to be used as your network parameters, and finally sample your data from the network (conditioned on some input in the case of supervised learning). 
We emphasize that once \\tilde{\\theta} has been sampled conditioned on \\theta, the latter is never used again. Hence the Markov property is fulfilled, as the data only depends on \\tilde{\\theta}.\"}", "{\"title\": \"About steps 5 and 6\", \"comment\": \"I am glad that you broke down the steps, as it makes it easier for me to explain my concern, which is mainly about steps (5) and (6).\\n\\nIn theory, if you have a chain like X -> Y, you can add anything like Z in the middle and claim the conditional dependence structure. But my concern is with the way that you actually construct the specific model in the paper. \\n\\nTo my understanding, \\\\theta is the latent variable of the model where D comes from, therefore you have \\\\theta -> D. Based on the description in the paper, \\\\tilde{\\\\theta} also comes from \\\\theta, so we should have \\\\theta -> \\\\tilde{\\\\theta}. If we really want to combine these two components together, we can only get \\\\tilde{\\\\theta} <- \\\\theta -> D.\\n\\nOn the other hand, if you want to show the Markov assumption holds in \\\\theta -> \\\\tilde{\\\\theta} -> D, I would like to see how you define p(D | \\\\tilde{\\\\theta}) precisely without using any information from \\\\theta. By precisely, I mean not the theoretical factorization like p(D, \\\\tilde{\\\\theta}, \\\\theta) = p(D | \\\\tilde{\\\\theta})p(\\\\tilde{\\\\theta}|\\\\theta)p(\\\\theta). The only thing relevant to this question is p'(D | \\\\tilde{\\\\mu}) = p(D | \\\\theta), which I don't think is correct (regardless of the typo).\"}", "{\"title\": \"Request for clarification\", \"comment\": \"Thank you very much for your response. Unfortunately, it is not clear to us where exactly you see the mistake in our method. We will therefore outline some basic statements about the DPI as well as try to concisely summarize our method, and would be very grateful if you could point out a specific statement that you believe to be wrong. We are still convinced that our paper is correct and would like to figure out from which point the misunderstanding arises.\", \"dpi\": \"(1) The DPI applies to any Markov chain X->Y->Z for random variables X, Y, Z.\\n(2) It guarantees that I(X,Y) >= I(X,Z).\\n(3) If we can construct a probabilistic model that contains such a Markov chain, we obtain a limit on the mutual information as given in (2).\", \"noisy_information_bottlenecks\": \"(4) We consider two use cases with the following probabilistic models:\\n(4a) Supervised learning having the form \\\\theta->Y<-X. We group (X,Y) into the combined data variable D.\\n(4b) Unsupervised learning of the form \\\\theta->X. Here the data D is just X. In the paper, we specifically refer to latent variable models with a latent Y, which we can absorb into \\\\theta.\\n(5) This leads to a common dependence structure \\\\theta->D for both cases, where D represents all data and \\\\theta all learned variables. Our method is applicable to probabilistic models of this general form.\\n(6) We replace the parameters \\\\theta with a noisy version \\\\tilde{\\\\theta}, such that D is conditionally independent of \\\\theta given \\\\tilde{\\\\theta} and \\\\tilde{\\\\theta} only depends on \\\\theta, giving us a Markov chain \\\\theta->\\\\tilde{\\\\theta}->D.\\n(7) The DPI applies to this Markov chain, i.e. I(\\\\theta, \\\\tilde{\\\\theta}) >= I(\\\\theta, D).\\n(8) By choosing the noise distribution and the prior on \\\\theta conveniently, i.e.
Gaussian, we can calculate I(\\theta, \\tilde{\\theta}) explicitly, giving us an upper bound on the typically intractable I(\\theta, D).\\n(9) We show that these Noisy Information Bottlenecks are already present in Gaussian mean-field variational inference.\"}", "{\"title\": \"about the DPI\", \"comment\": \"Maybe I didn't make my point clear. By saying that this paper uses the DPI in a wrong way, I mean we cannot define an arbitrary chain and claim that it follows the assumption of DPI. The specific way of constructing \\\\theta -> \\\\tilde{\\\\theta} -> D as explained in this paper does not make sense to me.\\n\\nSince the DPI is the foundation of this paper, I don't think this paper is ready to be accepted yet.\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"Thank you very much for your encouraging review.\\n\\n> I read the paper and understand it, for the most part. The idea is to interpret some regularization techniques as a form of noisy bottleneck, where the mutual information between learned parameters and the data is limited through the injection of noise. While the paper is a pleasant read, I find it difficult to assess its importance and the applicability of the ideas presented beyond the analogy with the capacity computation. Perhaps other referees will have a clearer opinion.\\n\\nThe main contribution of our paper is indeed to establish a connection between variational inference and regularization by observing that Gaussian mean field introduces an upper bound on the mutual information between data and model parameters. Reinterpreting mean field as point estimation in a noisy model allows us to quantify observed regularizing effects. We show links to existing regularization strategies and validate the usefulness for regularization in targeted experiments.\\n\\nWhile the focus of our present work lies on establishing links between existing directions of research, we believe that our information-theoretic perspective on regularization opens up plenty of avenues for future work, both in supervised and unsupervised learning. \\n\\nFor example, we are interested in improving extraction of unsupervised representations by controlling the amount of extracted information. In particular, we aim to mitigate latent collapse, a problem reported for example in language generation [1] and autoregressive image generation [2], which is currently mitigated with ad-hoc strategies such as KL annealing. Intuitively, if all information can be stored in the model itself, there is little incentive to use a per-sample latent. This is also known as the information preference problem, as briefly discussed at the end of section 2.1. Therefore, limiting mutual information of the data with the model might offer a robust mitigation strategy. Additionally, we believe that the approach can lead to improved representations through disentanglement, as done by beta-VAE [3].
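For reference, the beta-VAE objective alluded to here, in its standard published form (this is the formulation from the beta-VAE literature, not an equation taken from the present paper):

```latex
\mathcal{L}_{\beta\text{-VAE}}
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
    - \beta\,\mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right),
\qquad \beta > 1,
```

where weighting the KL term with beta > 1 tightens the per-sample information bottleneck on the latent code, which is what connects it to the discussion above.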
Our formal connection to beta-VAE derived in Appendix C offers a promising information-theoretic perspective on their empirical results.\\n\\nMore generally, we want to explore non-MAP inference on noise-injected models, as this would allow for using highly expressive variational distributions while enjoying the information-theoretic guarantees of simpler approximate distributions, as motivated in section 3.3.\\n\\nSince these directions are rather orthogonal, we think that sharing our theoretical framework with the community in an independent piece of work is the most effective way of communicating our ideas.\\n\\n> I'd be interested to hear if the authors see a connection between their formalism and the one of Reference prior in Bayesian inference (Bernardo et al https://arxiv.org/pdf/0904.0156)\\n\\nReference priors are opposite to our work in the sense that they maximize the amount of information data provides about the parameters, while we aim to find models to limit it. Also, see [4] for the relation of Fisher information to generalization.\\n\\nReferences\\n[1] Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R. & Bengio, S. (2015). Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.\\n[2] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D. & Courville, A. (2016). Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013.\\n[3] Burgess, C. P., Higgins, I., Pal, A., Matthey, L., Watters, N., Desjardins, G. & Lerchner, A. (2018). Understanding disentangling in beta-VAE. arXiv preprint arXiv:1804.03599.\\n[4] Ly, A., Marsman, M., Verhagen, J., Grasman, R. P. & Wagenmakers, E. J. (2017). A tutorial on Fisher information. Journal of Mathematical Psychology, 80, 40-55, page 30\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you very much for the highly constructive review.\\n\\n> I think this is a very interesting direction, but the present paper is somewhat unclear. In particular, the example in section 3.1 says that a noisy information bottleneck is introduced, but then says that the modified and unmodified models have \\\"training algorithms that are exactly equivalent.\\\" I think this example needs to be clarified.\\n\\nWe realized that the naming was very confusing and consequently renamed \\\\tilde\\\\theta to \\\\tilde\\\\mu in the noise-injected model. Now, \\n - the original, noise-free model p has the structure \\\\theta -> D (no bottleneck), while \\n - the adapted, noise-injected model p\\u2019 has the structure \\\\mu -> \\\\tilde\\\\mu -> D (containing a bottleneck).\\nHereby, \\\\tilde\\\\mu is a noise-corrupted version of the new parameters \\\\mu, and we obtain a limit on the mutual information between \\\\mu and D. We simplified Figures 2 and 8 to make this clearer.\\n\\nTo better characterize Gaussian mean field inference on the original model, we aim to find an inference procedure on p\\u2019 so that both algorithms result in exactly the same outcome, i.e., the same calculations are executed when running the corresponding program. We show that there is such an inference procedure on the noisy model, and it has the character of MAP. Note that we only end up with equivalence if the generative and inference models are adapted simultaneously.
Hereby, \\mu (the mean of the Gaussian q) and \\theta (the original parameter in p) correspond to \\mu (the MAP point-mass of q\\u2019) and \\tilde\\mu (the noise-injected version of \\mu in p\\u2019).\\n\\n> Many of the parameters here are also unclear and not properly defined/introduced. What is the relationship between \\theta and \\tilde\\theta exactly?\\n\\nIn this example, \\theta and \\tilde\\theta never appear in the same model (they are part of p and p\\u2019, respectively). We realized that this is confusing and have therefore renamed \\tilde\\theta to \\tilde\\mu.\\n\\n> In this simple model, can we not calculate the mutual information directly (i.e., without the bottleneck)?\\n\\nThis is an excellent question. In fact, we believe that trying to construct noise-free deep models with a specific mutual information between data and parameters for the purpose of generalization would be an interesting research direction. Due to nonlinearities in typical deep models, it is at least not obvious how to calculate the mutual information between data and parameters. The main challenge here would certainly be to come up with an effective estimator. Relatedly, one would have to design priors and architectures to achieve a specific mutual information.\\n\\n> The connection between mutual information and generalization has been studied in several contexts [see, e.g., the references in this paper [...]] and further exploration is desirable. This paper is giving an information-theoretic perspective on existing variational inference methods. Such a perspective is interesting, but needs to be further developed and explained. Specifically, how can mutual information in this context be formally linked to generalization/overfitting?\\n\\nWe updated section 2.2 to relate to the references you mentioned. They explore the link between limiting mutual information and the generalization error mostly in theory (and in particular for adaptive analysis). In contrast, we deploy this principle in a practical model structure that is easily applicable to many existing deep and variational learning approaches, and provide empirical evidence of the validity of our framework.\\n\\n>Also, the definition of mutual information used in this paper uses the inferred distribution q (e.g., in eq. 2), which is somewhat unusual. As a result, constraining the model will alter the mutual information and I think the effect of this should be remarked on.\\n\\nWe want to emphasize that we do use the standard definition of mutual information. Therefore, the bottleneck implied by Eq. 5 is purely a property of the generative model and not influenced by the approximate inference distribution q.\\nEq. 2 is only introduced to provide additional motivation for our approach, as it allows us to characterize overfitting in variational inference. The guarantee derived in section 2.2 ties this quantity back to the mutual information from Eq. 5.\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you very much for the constructive review.\\n\\n\\nSummary of our response\\n-------------------------------------\\n\\nWe are certain that the data processing inequality is used correctly. As you stated, the DPI implies for any Markov chain X -> Y -> Z that I(X,Y) >= I(X,Z).
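As a concrete illustration of how this bound becomes explicit in the Gaussian case (a standard Gaussian-channel computation, stated here as background rather than quoted from the paper): for a prior \theta ~ N(0, \sigma_p^2 I_d) and a corruption \tilde{\theta} = \theta + \epsilon with independent noise \epsilon ~ N(0, \sigma_n^2 I_d),

```latex
I(\theta; D) \;\le\; I(\theta; \tilde{\theta})
  \;=\; \frac{d}{2}\,\log\!\left(1 + \frac{\sigma_p^2}{\sigma_n^2}\right),
```

so the signal-to-noise ratio \sigma_p^2/\sigma_n^2 directly controls the capacity of the bottleneck; this is consistent with the signal-to-noise reading of the analysis mentioned in the reviews below.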
Contrary to the suggestion in the review, our model is defined in the form \\theta -> \\tilde{\\theta} -> D, as shown in Figure 1a.\\n\\nFollowing your feedback, we updated sections 2.1 and 2.3 for more clarity.\\n\\n\\nDetailed response\\n-------------------------------------\\n\\nWe interleave parts of the review with our detailed response for ease of reading.\\n\\n> [...] the major problem of this paper is that it uses the data processing inequality (DPI) in a **wrong** way. As in (Cover and Thomas, 2012), which is also cited in this paper, DPI is defined on a Markov chain X -> Y -> Z and we have I(X,Y) >= I(X,Z). However, based on the definition of \\\\theta and \\\\tilde{\\\\theta} given in the first sentence of section 2.3, the relation between \\\\theta, \\\\tilde{\\\\theta} and D should be: D <- \\\\theta -> \\\\tilde{\\\\theta} (if it is a generative model) or D -> \\\\theta -> \\\\tilde{\\\\theta} (if a discriminative model).\\n\\nResponse: The aim of section 2.1 is to motivate limiting mutual information for the purpose of generalization. We link generalization problems reported in the literature to the introduced information measure. The information necessary to identify or distinguish between training samples is quantified by the empirical entropy, and we call this the identity of the samples. We updated the section to address all of your feedback.\"}", "{\"title\": \"Interesting to read but might lack depth\", \"review\": \"I read the paper and understand it, for the most part. The idea is to interpret some regularization techniques as a form of noisy bottleneck, where the mutual information between learned parameters and the data is limited through the injection of noise.\\n\\nWhile the paper is a pleasant read, I find it difficult to assess its importance and the applicability of the ideas presented beyond the analogy with the capacity computation. Perhaps other referees will have a clearer opinion.\\n\\nI'd be interested to hear if the authors see a connection between their formalism and the one of Reference prior in Bayesian inference (Bernardo et al https://arxiv.org/pdf/0904.0156)\", \"pro\": \"nicely written, clear interpretation of regularization as a noise-injection technique, explicit link with information theory and Shannon capacity.\", \"con\": \"not clear to me how strong and wide the implications are, beyond the analogies and the reinterpretation\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting ideas, but unclear how to interpret\", \"review\": \"This paper studies \\\"Noisy Information Bottlenecks\\\". The overall idea is that, if the mutual information between learned parameters and the data is limited, then this prevents overfitting. It proposes to create a \\\"bottleneck\\\" to limit the mutual information. Specifically, the bottleneck is created by having the data depend on a noisy version of the parameters, rather than the true parameters, and invoking the information processing inequality. The paper gives an example of Gaussian mean field inference. Ultimately, the analysis boils down to looking at a signal-to-noise ratio of the algorithm, which looks very much like regularization.\\n\\nI think this is a very interesting direction, but the present paper is somewhat unclear.
In particular, the example in section 3.1 says that a noisy information bottleneck is introduced, but then says that the modified and unmodified models have \\\"training algorithms that are exactly equivalent.\\\" I think this example needs to be clarified. Many of the parameters here are also unclear and not properly defined/introduced. What is the relationship between $\\\\theta$ and $\\\\tilde\\\\theta$ exactly? In this simple model, can we not calculate the mutual information directly (i.e., without the bottleneck)?\\n\\nThe connection between mutual information and generalization has been studied in several contexts [see, e.g., the references in this paper and https://arxiv.org/abs/1511.05219 https://arxiv.org/abs/1705.07809 https://arxiv.org/abs/1712.07196 https://arxiv.org/pdf/1605.02277.pdf https://arxiv.org/abs/1710.05233 https://arxiv.org/pdf/1706.00820.pdf ] and further exploration is desirable. This paper is giving an information-theoretic perspective on existing variational inference methods. Such a perspective is interesting, but needs to be further developed and explained. Specifically, how can mutual information in this context be formally linked to generalization/overfitting? Also, the definition of mutual information used in this paper uses the inferred distribution q (e.g., in eq. 2), which is somewhat unusual. As a result, constraining the model will alter the mutual information, and I think the effect of this should be remarked on.\\n\\nOverall, I think this paper has some interesting ideas, but those need to be fleshed out and clearly explained in a future revision.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"this paper uses DPI in a wrong way\", \"review\": \"This paper proposes a justification for one observation about VAEs: \\\"restricting the family of variational approximations can, in fact, have a positive regularizing effect, leading to better generalization\\\". The explanation given in this work is based on the Gaussian mean-field approximation.\\n\\nI had trouble understanding some parts of this paper, since some of the sentences do not make sense to me. For example\\n\\n- the sentence under eq. (2)\\n- the sentence \\\"Bacause the identity of the datapoint can never be learned by ...\\\" What is the identity of a data point?\\n\\nIt looks like section 2.1 wants to show the connections between eq. (2) and other popular inference methods. Somehow, those connections are not clear to me.\\n\\nBesides some issues in the technical details, the major problem of this paper is that it uses the data processing inequality (DPI) in a **wrong** way.\\n\\nAs in (Cover and Thomas, 2012), which is also cited in this paper, DPI is defined on a Markov chain X -> Y -> Z and we have I(X,Y) >= I(X,Z). \\n\\nHowever, based on the definition of \\\\theta and \\\\tilde{\\\\theta} given in the first sentence of section 2.3, the relation between \\\\theta, \\\\tilde{\\\\theta} and D should be: D <- \\\\theta -> \\\\tilde{\\\\theta} (if it is a generative model) or D -> \\\\theta -> \\\\tilde{\\\\theta} (if a discriminative model). In either case, I don't think we can have the inequality in eq. (5).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }