forum_id (stringlengths 9-20) | forum_title (stringlengths 3-179) | forum_authors (sequencelengths 0-82) | forum_abstract (stringlengths 1-3.52k) | forum_keywords (sequencelengths 1-29) | forum_decision (stringclasses, 22 values) | forum_pdf_url (stringlengths 39-50) | forum_url (stringlengths 41-52) | venue (stringclasses, 46 values) | year (stringdate 2013-01-01 00:00:00 to 2025-01-01 00:00:00) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
BJlpCsC5Km | Learning Gibbs-regularized GANs with variational discriminator reparameterization | [
"Nicholas Rhinehart",
"Anqi Liu",
"Kihyuk Sohn",
"Paul Vernaza"
] | We propose a novel approach to regularizing generative adversarial networks (GANs) leveraging learned {\em structured Gibbs distributions}. Our method consists of reparameterizing the discriminator to be an explicit function of two densities: the generator PDF $q$ and a structured Gibbs distribution $\nu$. Leveraging recent work on invertible pushforward density estimators, this reparameterization is made possible by assuming the generator is invertible, which enables the analytic evaluation of the generator PDF $q$. We further propose optimizing the Jeffrey divergence, which balances mode coverage with sample quality. The combination of this loss and reparameterization allows us to effectively regularize the generator by imposing structure from domain knowledge on $\nu$, as in classical graphical models. Applying our method to a vehicle trajectory forecasting task, we observe that we are able to obtain quantitatively superior mode coverage as well as better-quality samples compared to traditional methods. | [
"deep generative models",
"graphical models",
"trajectory forecasting",
"GANs",
"density estimation",
"structured prediction"
] | https://openreview.net/pdf?id=BJlpCsC5Km | https://openreview.net/forum?id=BJlpCsC5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skgr1ubWe4",
"Byluja-qAm",
"rJl5v6b9Am",
"H1l_R2b9R7",
"rJera-pdRQ",
"SyeCe585nm",
"HklfE4KKh7",
"SyxS1DdK37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544783837112,
1543277983748,
1543277921521,
1543277775555,
1543193021363,
1541200374325,
1541145641757,
1541142237405
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper936/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper936/Authors"
],
[
"ICLR.cc/2019/Conference/Paper936/Authors"
],
[
"ICLR.cc/2019/Conference/Paper936/Authors"
],
[
"ICLR.cc/2019/Conference/Paper936/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper936/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper936/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper936/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes to define the GAN discriminator as an explicit function of a invertible generator density and a structured Gibbs distribution to tackle the problems of spurious modes and mode collapse. The resulting model is similar to R2P2, i.e. it can be seen as adding an adversarial component to R2P2, and shows competitive (but no better) performance. Reviewers agree, that these limits the novelty of the contribution, and that the paper would be improved by a more extensive empirical evaluation.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Intersting idea, but novelty is limited and experimental analysis could be extended.\"}",
"{\"title\": \"Algorithm box added and paper revised for clarity\", \"comment\": \"Thank you for your comments. Please note that we have revised\\nthe manuscript to address them.\\n\\n1. typo\\n=======\\n\\nYes, the extra sup is a typo. Thanks for pointing it out.\\n\\n2. Section 2.3 is confusing\\n===========================\\n\\nWe apologize for the confusion. We have tweaked this section in the revised manuscript (which is now Section 3.1) and have added an algorithm box for clarity. \\n\\nV_\\\\phi(x) = log \\\\nu_\\\\phi(x) can be any arbitrary function of x with parameters \\\\phi, although in order to gain the benefits of regularization, V_\\\\phi should be structured appropriately for the domain. For example, if x is an image, then V_\\\\phi(x) could be a CNN with convolution weights \\\\phi.\\n\\nWe emphasize that \\\\exp V_\\\\phi(x) does not need to be explicitly normalized over x. In theory, the V_\\\\phi that maximizes the objective (the equation now called Eq. 1 in the latest revision) is automatically normalized, because the optimal value of \\\\exp V_\\\\phi is p (the data distribution). Since p is normalized, the optimal \\\\exp V_\\\\phi must also be normalized. \\n\\nNote that in practice, \\\\exp V_\\\\phi may not be normalized due to incomplete optimization; however, optimizing it still has the desired effect of pushing up V_\\\\phi where the data is present and pushing it down elsewhere. This seems to be sufficient to make the method work in practice.\\n\\nHow is the inner loop connected to Itakura-Saito divergence minimization?\\n=========================================================================\\n\\nThis is an important point that we have clarified in our updated draft. Eq. 7 (in the latest paper revision) is equivalent to the inner sup_\\\\nu in Eq. 1, in the sense that the argmin_\\\\nu of Eq. 7 is equal to the argmax_\\\\nu of Eq. 1. The only difference is that the sign has been flipped (converting it to a min) and a constant factor of -1-E_{x ~ q} log p(x) has been added. The latter factor does not depend on the parameters \\\\phi of the energy \\\\nu_\\\\phi and hence does not affect optimization over \\\\phi. Since Eq. 7 is equal to the Itakura-Saito divergence between p and \\\\nu, the maximizer of Eq. 1 in \\\\nu is equal to the minimizer of the Itakura-Saito divergence between p and \\\\nu in \\\\nu.\\n\\nOne other subtle issue worth mentioning is our particular definition of the Itakura-Saito divergence, which is parameterized by a \\\"weighting function\\\" q. This is a natural choice for two reasons: first, it allows us to express the divergence as an expectation (with respect to q), thus allowing us to optimize it via SGD; second, it ensures the divergence is bounded if there is a constant offset in the integrand. Intuitively, a constant offset may occur since \\\\nu is not assumed to be normalized.\\n\\nThe Itakura-Saito divergence and its weighted variant may both be derived as Bregman divergences. We would be happy to provide more details in an appendix if desired.\\n\\n3. Algorithm box?\\n=================\\n\\nThank you for this suggestion. We have added an algorithm box, which we believe has significantly enhanced the clarity of the presentation and demonstrates that the method is actually fairly simple to implement in practice.\"}",
"{\"title\": \"Introduction has been revised around a toy example\", \"comment\": \"We apologize for the delayed response, although we note that the discussion period is still open, as far as we understand. We appreciate your insightful comments and have uploaded a revised manuscript taking them into account. In particular, we have rewritten the introduction around results from an illustrative toy experiment (Fig. 1), which we believe significantly strengthens the motivation and readability of the work.\\n\\nWhat are \\\"spurious modes\\\"?\\n==========================\\n\\nPlease note that we used the term \\\"spurious modes\\\" as a somewhat imprecise shorthand and have deemphasized this language in the latest draft. A more accurate description of the phenomenon we address is the inappropriate placement of model distribution mass outside the support of the data distribution, since the local maximum property of a mode is not necessary for the manifestation of the phenomenon. Fig. 1a shows a failure case of the baseline method where model mass is placed outside the support of the data distribution, but the model does not clearly exhibit multiple modes.\\n\\nThe mental image of a spurious mode comes from the following illustrative example. Consider minimizing KL(p,q) over q for fixed data PDF p. Now consider the value of KL(p,q'), where q' = 0.5 * (p + \\\\eta), assuming \\\\eta is such that supp(p) \\\\cap supp(\\\\eta) = \\\\emptyset. It is easy to show that KL(p,q') = log 2, and generalizing the argument shows that KL(p,q') = log N if q' is a mixture of N equally-likely components such that only one is equal to p, and the rest are disjoint from p. Visualizing q' as a mixture of components mostly disjoint from p is what gives rise to the image of spurious modes. It is also practically useful to think of KL(p,q') as penalizing q' only for the log of the number (~N) of spurious modes that it includes.\\n\\nWe preferred to use related citations to provide evidence for the \\\"spurious mode\\\" phenomenon to save space, but we can include the technical argument above if those citations are considered insufficient.\\n\\nAre \\\"spurious modes\\\" real? Does our method fix them?\\n===================================================== \\n\\nIn addition to the arguments above and the toy example, which show the existence of the phenomenon and the ability of our method to fix it, we would also like to point out that the synthetic experiments (Table 1) clearly demonstrate that C3PO significantly reduces the number of model-generated examples falling outside the support of the data distribution. We can measure this by calculating the percent of generated samples that go off-road, as samples from the training data never go off-road. Table 1 shows that about 99% of C3PO's samples stay on-road while retaining a high log-likelihood value (indicating good mode coverage), whereas only 92% of the baseline model's samples stay on-road. 
See Section 4.2 for details.\\n\\nHow novel is our work?\\n======================\\n\\nAlthough it is true that our technical innovation over R2P2 largely consists of applying Fenchel-variational inference to the KL(p,q) term and proposing to regularize the resulting model by imposing Gibbs structure on the variational function, we believe that to view our work only in these terms would be overly reductive, overlooking its potential impact.\", \"our_work_bridges_a_gap_between_two_camps_in_the_generative_modeling_community\": \"namely, deep generative models, which underestimate the importance of regularization; and classical energy-based probabilistic models, which have not yet realized how recent variational methods can help sidestep the obstacle of partition function estimation. Our work bridges this gap by showing how one method can be seen as both a regularized deep generative model (Sections 1-2) and a novel way to learn an unnormalized graphical model without explicit partition function estimation, via iterative Itakura-Saito divergence minimization (Section 3.1). We believe these insights are novel and will be interesting to a sizable fraction of the ICLR community.\"}",
"{\"title\": \"We address both spurious modes and mode loss. Please see revised paper.\", \"comment\": \"Thank you for your comments. Please see the revised paper, which\\nincorporates the points brought up by you and the other reviewers.\\n\\n1. Is \\\"mode removal\\\" really necessary?\\n======================================\\n\\nUnfortunately, the old introduction may have caused some confusion. We have rewritten the introduction and included a toy experiment to illustrate the main concepts.\\n\\nOur method attempts to prevent both the potential failure modes of deep generative models---specifically, mode collapse and \\\"spurious modes.\\\" We do this by starting from a baseline model (min_q KL(p,q), i.e., \\\"max likelihood\\\") that does not suffer from mode collapse, and then adding a loss component (KL(q,p)) that prevents the \\\"spurious mode\\\" problem of the baseline model. However, it is also perfectly valid to think of our method as starting with the loss that prevents spurious modes (KL(q,p)), but suffers from mode loss, and then adding the component that prevents mode loss KL(p,q). Either way, we argue that preventing both failure modes is important.\\n\\nPlease also see comments to AnonReviewer1 on \\\"spurious modes.\\\"\\n\\n2. Doesn't R2P2 perform just as well?\\n=====================================\\n\\nOur method optimizes an objective that is essentially a trade-off between covering the support of the data as well as possible and not generating any samples outside the support of the data. The results show that C3PO does a better job of managing this trade-off than R2P2: in Table 1, BEVWorld1K, C3PO decreases H(q,p) by 12.9 nats while suffering an increase of only 3.3 nats in H(p,q) compared to R2P2. Please note that this is a logarithmic scale: decreasing H(q,p) by 12.9 nats means that C3PO's samples are about e^12.9 = 400,000 times more likely under the data than R2P2's samples. The BEVWorld1 scenario is more of a sanity check, as it includes only one train/test scene, while BEVWorld1K features 100 training scenes and 1000 test scenes.\\n\\n3. Is invertibility a strong requirement?\\n=========================================\\n\\nOur work builds on several recent papers that demonstrate the feasibility of using invertible generators to enable efficient evaluation of the generator PDF, including [8,18,29,A] (see latest paper revision and [A] below for citations). Also, [31] notes that invertibility of the generator is a particularly natural assumption for the application of trajectory forecasting. As noted in the paper, we borrow the generator architecture of [31] for our work. \\n\\n[A] Diederik P Kingma and Prafulla Dhariwal. \\\"Glow: Generative Flow\\nwith Invertible 1x1 Convolutions.\\\" In: NIPS 2018.\\n\\n4. Imposing structure on the model PDF\\n======================================\\n\\nWe agree that this wording was unfortunate and have revised the introduction to improve overall clarity. We meant to have q represent an abstract model PDF, which could either be implemented as the distribution of a generator's outputs or as a structured Gibbs distribution. Our claim is that regularizing a 'q' represented as the distribution of a generator's outputs is more difficult than regularizing a 'q' represented as an unnormalized Gibbs distribution.\\n\\nThis is illustrated in Fig. 1. Fig. 
1a shows the effect of poor regularization of the generator PDF combined with an inadequate loss: artifacts appear in the generated samples due to the peculiarities of the generator structure. We remedy this by using variational inference to train an unnormalized Gibbs distribution \\\\nu to mimic the data PDF p, and then penalizing the generator for generating samples where \\\\nu is low. It is much easier to control the \\\"shape\\\" of \\\\nu than it is to control the shape of the generator's PDF, because the latter is determined by the peculiarities of the generator structure, whereas the shape of \\\\nu is specified directly. By controlling the shape of \\\\nu, we can get different regularization effects on the generator.\\n\\nPlease also see response to AnonReviewer1.\\n\\n8. What\\u2019s the meaning of \\u201cRoad %\\u201d\\n=================================\\n\\nIn the synthetic experiment, we generate a dataset consisting of simulated overhead road maps and corresponding paths. All of the training paths stay on the roads. \\u201cRoad %\\u201d specifies the percent of generated paths that stay on the road. Like KL(q,p), this is a measure of the method\\u2019s ability to generate samples that are likely under the true data distribution. We will clarify this in the next version of the manuscript.\"}",
"{\"title\": \"No Reply. Original Rating.\", \"comment\": \"The authors did not reply. In this situation, I stand by my original review.\"}",
"{\"title\": \"Regularization of GANs to remove spurious modes - but is this what is needed?\", \"review\": \"Summary: The paper tries to answer the problems of regularizing GANS. They reparametrize the discriminator to be an explicit function of two densities: the generator probability density function q and a structured Gibbs distribution v.\", \"comments\": \"\", \"1\": \"This paper focuses on mode coverage problems, where spurious modes of learned model(q) not supported by target model(p) are pruned off. It is not clear why this is a significant problem. GAN trained models typically suffer from mode collapsing, requiring additional noise injection to support generation of diverse data. This work seems to argue that the opposite is worth paying attention to, focusing on removal of modes.\", \"2\": \"The implementation of the architecture is similar to R2P2, except for the introduction of a new adversarial component. But according the evaluation in table 1 and table 2, we see that baseline model R2P2 performs better in -H(p,q) and for -H(q, pKDE) the value is near equal to their model.\", \"3\": \"They assume the generator is invertible, which enables the analytic evaluation of the q. But no supporting evidence or design architecture for the statement above is provided.\", \"4\": \"The explanation of imposing structure on the model distribution is not clear. In the introduction they first claim \\u201cwe cannot impose structure directly on the joint distribution of a GAN\\u2019s outputs.\\u201d But after they claim \\u201cwe submit that regularizing the structure of a GAN\\u2019s generator and discriminator is generally more difficult than imposing meaningful structure directly on the model distribution, which we will refer to as q. These two statements conflict because the model distribution is a joint distribution of GAN\\u2019s outputs.\", \"6_typos\": \"1)\\tin equation(1) we should minimize q for all the terms. \\n2)\\tin equation(1) first term is unrelated with v. \\n3)\\tin equation(1) the sup is for the last two terms. \\n4)\\tin equation(2) in RHS of equation the first term q_ | should be q_ .\", \"7\": \"Writing could be improved.\", \"8\": \"In table 2 what\\u2019s the meaning of evaluation metric Road%\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Decent idea but need more motivating experiments.\", \"review\": \"The paper proposes to reparameterize the discriminator to be an explicit function of two densities so that one could inject domain specific knowledge easily. As the authors say, that one way to inject domain specific information is by learning an energy function. Making use of this intuition, authors proposed to regularize the discriminator in\\n(GANs) framework by leveraging structured Gibbs distributions.\\n\\nI found the introduction a bit hard to read. Otherwise paper is written in a readable way. \\n\\nSomething which I like about this paper, is authors use the proposed method for actual RL problems as compared to just image generation. I think this is important as well as interesting. As a community we should be moving towards evaluating generative models for the problems where we actually want to use generative models for.\", \"some_questions\": [\"I'm not sure if the paper is really novel as the authors themselves point out that it corresponds to adding adversarial component in R2P2.\", \"I also did not find results very convincing. As I said, its important to evaluate on RL problems ONLY if it makes sense on toy problems first. Like in the paper, authors made a big claim about reducing spurious modes, but it has not been demonstrated any where per se. May be authors can construct a toy problem in which they can show that the spurious mode issue, and how the proposed method kills these spurious modes. This also reminds me of the literature in Boltzmann machines and more recently in Variational Walkback [1]. This could also be cited, and could be interesting to authors.\", \"[1] Variational Walkback, https://arxiv.org/abs/1711.02282. The authors in Variational walkback also make the assumption p == q.\", \"What would make the paper stronger ?\", \"Constructing toy problems in order to illustrate the mode coverage and spurious modes issue would be interesting.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper combines a number of ideas to train generative models with (deep) structured constraints. The general idea is similar to Flow-GAN, which learns a normalizing flow-based generator by optimizing the negative loglikelihood with an augmented GAN loss. However, It\\u2019s difficult to impose prior structure information in the GAN framework. To address this problem, the authors proposed to minimize a so-called Gibbs-regularized variational bound of Jeffery divergence, which is the summation of KL and reverse KL divergence. The authors provide some justification that the Jeffery divergence works by yielding good mass-covering and mode-seeing properties.\\n\\nIt appears that the parameterization and adaptation of v throughout optimization is the key contribution of this work --- the technical details are not clear from the paper.\\n\\n1. Typo in the training objective (Eq .1): the second (or the first) \\\"sup\\\" should be removed? \\n\\n2. Section 2.3 is very confusing. Particularly, how is the parameter \\\\phi introduced? What\\u2019s the detailed update of \\\\phi? \\n- \\\"We now observe that our methods can also be interpreted as a way of learning v as a Gibbs distribution approximating p.\\\" If v_\\\\phi(x) is a distribution, what\\u2019s the parametric form of v? \\n- \\\"Generally, this is achieved by structuring the energy function V_\\\\phi:=\\\\log v_\\\\phi.\\\" It seems that V_\\\\phi(x) is a scalar-valued function that represents the negative energy of the distribution v_\\\\phi(x), however, why the distribution is self-normalized? Specifically, why \\\\int \\\\exp(V_\\\\phi) dx = 1? Otherwise, how the authors deal with the partition function \\\\int \\\\exp(V_\\\\phi(x)). \\n- It is unclear to me why the inner loop optimization is connected with Itakura-Saito divergence minimization? The authors may consider including the detailed proofs?\\n\\n3. With the given description, the proposed algorithm is not easy to follow and implement by the reader. The paper would benefit from an Algorithm box with pseudocode.\\n\\nIf the authors can fully address the concerns above, I will consider changing the scores.\", \"other_comments\": \"1. The empirical results are fairly weak. Similar datasets are used, the authors may consider evaluating their approach on various different tasks. \\n\\n2. Duplicate citations \\u2013 R2P2 [35] [36]\\n\\n3. Other related papers:\\n - Belanger et al., End-to-End Learning for Structured Prediction Energy Networks, ICML 17\\n- Tu et al., Learning Approximate Inference Networks for Structured Prediction, ICLR 18\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
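The author responses in the row above lean on two calculations that are easy to state precisely: the Jeffrey divergence being optimized and the "spurious mode" KL argument given in the reply to AnonReviewer1. The LaTeX block below restates both; the weighted Itakura-Saito form is our reconstruction of the authors' prose description (an expectation under q) and may differ from the paper's Eq. 7 by the constant terms the authors mention.

```latex
% Jeffrey divergence: forward KL (mode coverage) plus reverse KL (sample quality).
J(p, q) = \mathrm{KL}(p \,\|\, q) + \mathrm{KL}(q \,\|\, p)
        = \mathbb{E}_{x \sim p}\!\left[\log \tfrac{p(x)}{q(x)}\right]
        + \mathbb{E}_{x \sim q}\!\left[\log \tfrac{q(x)}{p(x)}\right].

% "Spurious mode" example from the author response: let q' = (p + \eta)/2 with
% supp(p) \cap supp(\eta) = \emptyset. On supp(p) we have q'(x) = p(x)/2, so
\mathrm{KL}(p \,\|\, q') = \mathbb{E}_{x \sim p}\!\left[\log \tfrac{p(x)}{p(x)/2}\right] = \log 2,
% and a mixture of N equally likely components, only one of which equals p,
% gives \mathrm{KL}(p \,\|\, q') = \log N.

% Weighted Itakura-Saito divergence, written as an expectation under q so it can
% be estimated by sampling (our reading of the authors' description):
d_{\mathrm{IS}}(p, \nu; q) = \mathbb{E}_{x \sim q}\!\left[\tfrac{p(x)}{\nu(x)}
    - \log \tfrac{p(x)}{\nu(x)} - 1\right].
```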
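The abstract's claim that an invertible generator "enables the analytic evaluation of the generator PDF q" is the change-of-variables identity behind pushforward density estimators such as Glow, cited in the author responses. A minimal sketch, assuming a toy elementwise affine flow rather than the paper's architecture (the class name `AffineFlow` is hypothetical):

```python
import numpy as np

class AffineFlow:
    """Toy invertible generator g(z) = exp(s) * z + t, applied elementwise.

    For any invertible g with a tractable Jacobian,
        log q(x) = log p_z(g^{-1}(x)) + log |det J_{g^{-1}}(x)|,
    so the generator density is available exactly, with no partition
    function to estimate.
    """

    def __init__(self, s, t):
        self.s = np.asarray(s, dtype=float)
        self.t = np.asarray(t, dtype=float)

    def forward(self, z):
        # Sampling: push base noise z ~ N(0, I) through g.
        return np.exp(self.s) * z + self.t

    def log_density(self, x):
        z = (x - self.t) * np.exp(-self.s)                  # g^{-1}(x)
        log_pz = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi))  # standard normal prior
        log_det = -np.sum(self.s)                           # log|det J_{g^{-1}}|
        return log_pz + log_det

flow = AffineFlow(s=[0.5, -0.2], t=[1.0, 0.0])
x = flow.forward(np.random.randn(2))
print(flow.log_density(x))  # analytic log q(x) for a generated sample
```

With log q available in closed form, both KL directions in the Jeffrey divergence above can be estimated from samples of p and q alone.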
BJl6AjC5F7 | Learning to Represent Edits | [
"Pengcheng Yin",
"Graham Neubig",
"Miltiadis Allamanis",
"Marc Brockschmidt",
"Alexander L. Gaunt"
] | We introduce the problem of learning distributed representations of edits. By combining a
"neural editor" with an "edit encoder", our models learn to represent the salient
information of an edit and can be used to apply edits to new inputs.
We experiment on natural language and source code edit data. Our evaluation yields
promising results that suggest that our neural network models learn to capture
the structure and semantics of edits. We hope that this interesting task and
data source will inspire other researchers to work further on this problem. | [
"Representation Learning",
"Source Code",
"Natural Language",
"edit"
] | https://openreview.net/pdf?id=BJl6AjC5F7 | https://openreview.net/forum?id=BJl6AjC5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gYawvxnE",
"S1xCzoWenN",
"rJg4OdAllN",
"B1l0SBlOJE",
"B1xPkXivJV",
"ryguW3YcAQ",
"SkllknFcCQ",
"HkebczdFCm",
"HklnK9E8RX",
"rkeXF8NUCQ",
"HJgMTr4LAm",
"Sye616mI0m",
"Hyg1fVtXaQ",
"rJeUUtum6X",
"r1g6GvdXp7",
"H1gbD8dQ6Q",
"HylSf_0a27",
"H1eeIsRo3Q",
"HkeCSi493X"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1557325760918,
1557302037790,
1544771691763,
1544189254120,
1544168158849,
1543310336326,
1543310296122,
1543238280885,
1543027332311,
1543026299377,
1543026106081,
1543023844597,
1541800966804,
1541798221637,
1541797652619,
1541797464648,
1541429261284,
1541299016211,
1541192518347
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper935/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/Authors"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper935/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Differences to learning to apply edits\", \"comment\": \"Thank you for these pointers. While our paper also describes a component to automatically apply edits, that is only required as part of a system that is able to recognise common edits in a very large set of edits. For example, LASE requires the user to provide edits that can be generalised into a common edit script.\\n\\nOur paper is concerned with being able to find groups of edits that describe the same change in a very large set of changes (e.g., all of GitHub); and indeed, our core motivation is to use the method from this paper to identify similar edits, and then feed them to a tool that can extract interpretable edit scripts (though we were thinking more of the method of Rolim et al).\"}",
"{\"comment\": \"It seems that the aim of learning representation of source code changes is similar with research work about systematic edits, which is to learn edit script from some example changes and automatically apply the edit script to other code. Therefore, some methods like LASE [1] and Rase [2] used to address systematic edits should be compared.\\n\\n[1] Meng, N., Kim, M., & McKinley, K. S. (2013). LASE: Locating and Applying Systematic Edits by Learning from Examples. In Proceedings of the 2013 International Conference on Software Engineering (pp. 502\\u2013511). Retrieved from http://dl.acm.org/citation.cfm?id=2486788.2486855\\n[2] Meng, N., Hua, L., Kim, M., & McKinley, K. S. (2015). Does Automated Refactoring Obviate Systematic Editing? In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (pp. 392\\u2013402). Retrieved from http://dl.acm.org/citation.cfm?id=2818754.2818804\", \"title\": \"Lack of comparison with state-of-art in SE field\"}",
"{\"metareview\": \"This paper investigates learning to represent edit operations for two domains: text and source code. The primary contributions of the paper are in the specific task formulation and the new dataset (for source code edits). The technical novelty is relatively weak.\", \"pros\": \"The paper introduces a new dataset for source code edits.\", \"cons\": \"Reviewers raised various concerns about human evaluation and many other experimental details, most of which the rebuttal have successfully addressed. As a result, R3 updated their score from 4 to 6.\", \"verdict\": \"Possible weak accept. None of the remaining issues after the rebuttal is a serious deal breaker (e.g., task simplification by assuming the knowledge of when and where the edit must be applied, simplifying the real-world application of the automatic edits). However, the overall impact and novelty of the paper is relatively weak.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"rebuttal improved the review scores, no serious issues other than relatively weak novelty .\"}",
"{\"title\": \"Human Annotation by Authors\", \"comment\": \"Thanks for clarifying! We certainly agree that this needs to be stated as prominently as possible and we will make changes to state this more prominently and clearly in the next version of the paper.\"}",
"{\"title\": \"Concern about annotation scheme\", \"comment\": \"Thank you for the updates!\\n\\nIn agreement with R3's concerns, I do think it's important to state (prominently) that the annotation was performed by the authors. It seems fairly clear that there are significant qualitative differences, especially between the output of the BoW and seq encoders, and that it would be difficult to avoid bias here. That being said, I think this /does/ reinforce that the differences between models are consistent and measurable.\"}",
"{\"title\": \"Details of re-evaluation\", \"comment\": \"Re-evaluation details\\n\\nThe authors present the task of learning distributed representations of edits, and confront the concrete questions of i) grouping semantically equivalent edits based on their distributed representations, and ii) automatically applying an edit (based on its distributed representation) to a new, un-edited input. In the first several communications we articulated concerns about the significance of the proposed contributions and the quality of the results. With few exceptions, those concerns have been answered, for me fairly convincingly:\\n\\n(1) \\nWe were unsure of the significance of the proposed contributions, since existing systems (e.g. coding IDEs and word editors) already include features for recommending and applying edits, and the authors do not discuss or compare with these.\\n\\nThe revised manuscript argues more clearly that a) automatically identifying and grouping similar edits could be useful for helping humans to recognize common changes or emerging edit patterns when editing natural language and code, and b) that automatically applying similar edits to new (un-edited) text could assist humans with softer edits e.g. stylistic changes to writing. The paper is written more clearly to point out that the results (e.g. t-SNE, edit prediction) illustrate the feasibility of clustering edits and applying \\u201csimilar\\u201d edits automatically. While their prediction results probably are not \\u201cperformance-level\\u201d (e.g. 49% on average in the transfer learning task, with huge variability between C#fixer classes), in my opinion the paper makes a clear case for the significance of the proposed task; moreover, the results show progress on that task (more on this follows).\\n\\n(2)\", \"previously_it_was_not_clear_to_us_whether_the_authors_demonstrated_any_progress_on_the_task_they_proposed\": \"it was not clear that the edit representations produced by their models were useful.\", \"in_particular\": \"the authors evaluated their edit encoders first by annotating the relevance of nearest-neighbor edits in the encoding space, and it was unclear what annotation procedures or guidelines they used. The revised manuscript includes a substantial description of their annotation guidelines in Appendix E. The annotation guidelines are now clear.\\n\\nMoreover, the prediction results now include a (favorable) comparison to baseline methods. The end-to-end encoder+editor system evaluation for predicting edits using gold-standard representations now includes comparisons with a system using a baseline edit encoder (bag-of-edits model). The encoder-editor system in the transfer learning task now includes comparisons with a system using no encoder (i.e., the edit encoding is skipped). Through these prediction results, the authors demonstrate that their end-to-end system substantially outperforms a system using a baseline encoder, which shows that their edit encoders are producing useful distributed representations of edits.\\n\\nThose prediction results also confirm that the self-annotation utilized in the nearest-neighbor evaluations did not produce bogus results, i.e. the edit encoders do useful work.\\n\\n(3)\\nThe above concerns were the main concerns, but the revised manuscript also addresses some smaller ones.\\n\\nTable 1 was revised to highlight the differences between the neural editing model and bag-of-words baseline, partly in response to our review comments. 
Appendix B contains additional relevant discussion.\\n\\nA short appendix D is included in response to our comments. This addresses the need to remark on sample size and scalability for these model training tasks.\\n\\n(4)\", \"unresolved_concerns\": \"The human annotation task should not have been completed by the authors, who are by nature biased annotators with insight into the kinds of results produced by each model. This concern is substantial, but not enough to reject in light of the overall improvement in this revised draft, especially confirmation of the annotation verdict via prediction results which compare to a baseline encoder.\\n\\nThe huge variability in prediction accuracies in the transfer learning task, combined with comparing to a relatively weak baseline (no edit encoder), suggests far more work should be done to produce significant progress on goal (ii) proposed by the authors. This concern is substantial, but again not enough to reject. (The authors acknowledge the difficulty of the transfer learning task and the need for the development of better algorithms.)\"}",
"{\"title\": \"Authors made good improvements and clarifications\", \"comment\": \"Summary of re-evaluation\\n\\nThe authors made substantial improvements to the paper in the revised version. Our overall opinion is that the motivation, task, and datasets are impressive. At least some of the results represent progress on the tasks they present, and the results which do not represent progress are not disqualifying. The scope of the paper is now more clear, as is the relevance of the results to the motivating applications. The results and discussion have been updated in ways which address our most significant concerns. The paper is likely to be of interest to NLP researchers. There are unresolved problems, listed below, but those problems are not disqualifying. We updated the evaluation\\u2019s score accordingly.\"}",
"{\"title\": \"Thoughts on updates?\", \"comment\": \"Thanks again for your review. We are wondering if our comments have sufficiently addressed your concerns or if there is something that we might have missed.\\n\\nOverall, we would kindly ask that you reconsider your rating given the additional experimental results, evaluations and explanation. Alternatively, could you please provide any further guidance on how to improve the paper?\"}",
"{\"title\": \"Updates to Paper\", \"comment\": \"Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\\n\\n1. Question: \\u201cwhat would be enabled by accurate prediction of atomic edits \\u2026 elaborate on the motivation and significance for this new task\\u201d\\n We have presented a detailed explanation in our previous response to your comment. We also revised Section 2 to illustrate some interesting potential downstream applications facilitated by this task. We will include more discussion in the final version given more pages. \\n\\n2. Question: \\\"human evaluation is not described in detail...\\\"\\n We included our annotation instructions, and the inter-rater agreement score in Appendix E.\\n\\n3. Question: \\u201cwhat it means when they say better prediction performance does not necessarily mean it generalizes better...\\u201d\\n We have presented a detailed explanation in our previous response to your comment. We have also rephrased our discussion in Section 4.4 to make the logical flow clearer.\"}",
"{\"title\": \"Updates to Paper\", \"comment\": \"Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\\n\\n**Details of Data Annotation** We included our annotation instructions, and the inter-rater agreement score in Appendix E.\\n\\n**Comparison with the Guu et al. Bag-of-Edits Encoder** As pointed out by you, Guu et al. (2017) introduced a generative language model of natural language sentences by editing prototypes. We have included a more detailed explanation in Section 5 (first Para.) to distinguish our work from Guu et al. (2017). While we remark that our work and Guu et al. (2017) are not directly comparable, we have implemented the deterministic version of the \\u201cBag-of-Edits\\u201d edit encoding model in Guu et al. (2017) as a baseline editor encoder for our end-to-end experiments in Section 4.4, Table 4. The results confirm the advantage of our edit encoder models proposed in Section 3.2, which go beyond the simple \\u201cBag-of-Edits\\u201d scheme and can capture the context and positional information of edits. \\n\\nAs in our previous response to your comment, We have presented further analysis regarding the contextual and positional sensitivity of edits in Appendix B, illustrating the importance of using more advanced edit encoders than \\\"Bag-of-Edits\\\" encoders to capture such information.\\n\\n**\\u201cLower-bounds\\u201d of the Transfer Learning Task** We included \\u201clower-bounds\\u201d accuracies for the transfer learning experiments in Table 5. To approximate the lower-bounds, we trained Seq2Seq and Graph2Tree transduction models without using edit encoders, and test the model\\u2019s accuracies in directly transducing an original input code $x-$ into the edited one $x+$.\"}",
"{\"title\": \"Updates to Paper\", \"comment\": \"Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\\n\\n1. (Regarding Data Annotation) We included our annotation instructions, and the inter-rater agreement score in Appendix E.\\n\\n2. (Regarding \\\"important task\\\" and \\\"a family of models\\\") We added descriptions in Section 2 describing the difference of our proposed neural approach with existing rule-based editing systems and potential downstream applications facilitated by the task. We leave the problem of identifying which edit representation to apply to an input as interesting future work\\n\\n3. (Regarding \\u201ccapture structure of edits\\u201d and Human Evaluation) We include human evaluation details in Appendix E. We also apologize for the confusion in interpreting Table 1, and have revised its format accordingly. Example 1 in Table 1 shows that the three nearest neighbors returned by the neural editing model are clearly semantically and syntactically relevant to the seed edit (i.e., both the seed edit and the returned neighbors inserted a sentence describing the profession and date of the birth of the topic person), while the nearest neighbors returned by the bag-of-words baseline only rely on surface token overlap, and are not syntactically/semantically similar to the seed edit. We also include discussions about the contextual and positional sensitivity of edits in Appendix B.\\n\\n4. (Regarding results in Table 11) We expanded Appendix C, presenting more analysis and discussions for some challenging C# fixer categories (RCS1077, RCSRCS1197, RCS1207, RCS1032).\\n\\n5. (Regarding the Impact of Training Set Size) We evaluated the precision of our neural editor models with varying amount of training data in Appendix D. The results indicate that our proposed approach is relatively data efficient: our Graph2Tree (on GithubEdits) and Seq2Seq (on WikiAtomicEdits) editors achieve around 90% of the accuracies achieved using the full training set with only 60% of the training data.\"}",
"{\"title\": \"TO ALL REVIEWERS: Updates to Paper and Results of Experiments\", \"comment\": \"We thank again all reviewers for their insightful comments! We have updated our submission reflecting your comments and suggestions. Here is a brief summary of the changes:\\n\\n**Text Updates**\\n\\n**Potential Impact of the Task** We revised Section 2, describing the difference of our proposed neural approach with existing rule-based editing systems. We also illustrate some interesting potential downstream applications facilitated by this task.\\n\\n**Details of Data Annotation** We included our annotation instructions, and the inter-rater agreement score in Appendix E.\\n\\n**Human Evaluation on WikiAtomicEdits** Based on comments from Reviewer-#3, we revised the format of Table 1 and the corresponding discussion in Section 4.2 to highlight the differences of the neural editing model v.s. a simple bag-of-words baseline.\\n\\n**Detailed Analysis of the Transfer Learning Experiments** We expanded Appendix C, presenting more analysis and discussions for some challenging C# fixer categories where our model underperformed.\\n\\n\\n**New Experiments and Analysis**\", \"we_also_included_three_new_experiments_with_analysis\": \"**Comparison with the Guu et al. Bag-of-Edits Encoder** As pointed out by Reviewer-#2, Guu et al. (2017) introduced a generative language model of natural language sentences by editing prototypes. We have included a more detailed explanation in Section 5 (first Para.) to distinguish our work from Guu et al. (2017). While we remark that our work and Guu et al. (2017) are not directly comparable, we have implemented the deterministic version of the \\u201cBag-of-Edits\\u201d edit encoding model in Guu et al. (2017) as a baseline editor encoder for our end-to-end experiments in Section 4.4, Table 4. The results confirm the advantage of our edit encoder models proposed in Section 3.2, which go beyond the simple \\u201cBag-of-Edits\\u201d scheme and can capture the context and positional information of edits. We also present interesting analysis regarding the contextual and positional sensitivity of edits in Appendix B.\\n\\n**\\u201cLower-bounds\\u201d of the Transfer Learning Task** As suggested by Reviewer-#2, we included \\u201clower-bounds\\u201d accuracies for the transfer learning experiments in Table 5. To approximate the lower-bounds, we trained Seq2Seq and Graph2Tree transduction models without using edit encoders, and test the model\\u2019s accuracies in directly transducing an original input code $x-$ into the edited one $x+$.\\n\\n**Performance with Varying Training Data Size** To address the concern raised by Reviewer-#3, we evaluated the precision of our neural editor models with varying amount of training data. We present the results in Appendix D. The results suggest that our proposed approach is relatively data efficient: our Graph2Tree (on GithubEdits) and Seq2Seq (on WikiAtomicEdits) editors achieve around 90% of the accuracies achieved using the full training set with only 60% of the training data.\\n\\nFinally, we would like to thank again the reviewers for their time and insightful comments which have helped make this paper better. We believe that learning to represent edits is an important yet underexplored problem in representation learning for natural language, source code, and other structured data. 
We hope that this work inspires future research and that the provided datasets/evaluation protocols will further facilitate future exploration of this task in the community.\"}",
"{\"title\": \"Thanks for the comments\", \"comment\": \"*robustness of edit encodings*: Thanks for the comment! Directly measuring the robustness of edit encodings is non-trivial, but our one-shot learning experiments (Sec. 4.4) serve as a good proxy by testing the editing accuracy using the edit encoding from a similar example.\\n\\n*applicability to other tasks*: Our proposed method is general and could be applied to other structured transduction tasks. We perform experiment on natural language edits (sequential) and source code commit data (tree-structured), since these are two commonly occurring sources of edits. We leave applying our model to other data sources as interesting future work.\\n\\n*comparison with Guu et al., 2017*: Thanks for pointing out the related work by Guu et al! As discussed in Section 5, we remark that our motivation and research issues are very different, and these two models are not directly comparable --- Guu et al. focus on learning a generative language model by marginalizing over latent edits, while our work focuses on discriminative learning of (1) representing edits given the original (x-) and edited (x+) data, and (2) applying the learned edit to new input data. We therefore directly evaluate the quality of neighboring edit representations via human annotation, and the end-to-end performance of applying edits to both parallel data and in a novel one-shot learning scenario, which are not covered in Guu et al.\\n\\nNevertheless, our model architecture shares a similar spirit with Guu et al. For example, the model in Guu et al. also has an edit encoder based on \\u201cBag-of-Edits\\u201d (i.e., the posterior distribution $q(z|x-, x+)$) and a seq2seq generation (reconstruction) model of x+ given x- and the edit representation z. In some sense, our seq2seq editor with a \\u201cBag-of-Edits\\u201d edit encoder would be similar as the \\u201cdiscriminative\\u201d version of Guu et al. We will make the difference between this research and Guu et al clearer in an updated version of the paper. Please also refer to below for our response to the \\u201cBag-of-Edits\\u201d edit encoder.\", \"response_to_your_specific_questions\": \"*lower-bounding transfer learning results*: Thanks for the comments! Having a lower-bound is helpful in understanding the relative advantage of our proposed method, however it is not clear what a reasonable lower-bounding baseline would be. One baseline would be an editor model (e.g., Graph2Tree with sequential edit encoder) that doesn\\u2019t use edit encodings. \\n\\n*constrained versions of the edit encoder*: First, we remark that our Bag-of-Word edit encoder (Table 1 and 2) is similar to a \\u201cBag-of-Edits\\u201d model, where the representation of an edit is modeled by a vector of added/deleted tokens (we use different vocabularies for added and deleted words). Our neural edit encoders have access to the full sequences x- and x+. \\n\\nWe also tried a distributional bag-of-edits model like the one used in Guu et al., using an LSTM to summarize only changed tokens. This model had worse performance in our end-to-end experiment (Table 4) and we therefore we did not include the results. Through error analysis we found that many edits are **context and positional sensitive**, and encoding context (i.e., full sequences) is important. 
For instance, the WikiAtomicEdits examples we present in Table 9 clearly indicate that semantically similar insertions also share similar editing positions, which cannot be captured by the bag-of-edits encoder as in Guu et al. This might be more obvious for structured data source like code edits (c.f., Table 10). For instance, in the first example in Table 10, `Equal()` can be changed to `Empty()` **only** in the `Assert` namespace (i.e., the context). We apologize for the confusion and will include more results and analysis in the final version, facilitating more direct comparison with the editor encoder in Guu et al. Nevertheless, we remark that as discussed above, our work is not directly comparable with Guu et al.\\n\\n*subsampling WikiAtomicEdits*: At the time of submission the WikiAtomicEdits dataset could not be downloaded in full, due to an error with the zip file provided. We managed to extract the first 1M edits from the dataset. We believe that the full corpus would not present significantly different statistical properties from the 1M samples we used.\\n\\n*human evaluation*: please refer to our response regarding annotation. The idea of separating syntactically and semantically similar edits is also very interesting, which we will explore in our final version.\\n\\n*soft metric*: Thanks for the comment! We can definitely do BLEU evaluation on WikiAtomEdits. For source code data, a sensible \\u201csoft\\u201d metric on source code still remains an open research issue (Yin and Neubig, 2017). We will include more discussion in our final version.\\n\\n*classifying Wikipedia edits*: This is a very great idea, thanks for suggesting this. Given the time constraints, we will examine the feasibility of doing something like this for the final version of the paper.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thank you for the careful reading of the paper (including the lengthy appendices!), and elucidating concerns about validity of the task and method. We believe that several of these were due to a lack of clarity in our exposition, that can be resolved. We have attempted to clarify these below and will revise the paper to make things more clear before the end of the review period.\\n\\n* Regarding \\\"important task\\\"\", \"response\": \"we thank the reviewer for his effort in analyzing the many statistics we present in Table 11! We remark that this task is a transfer learning task is indeed non-trivial. For instance, some fixer categories cover many different types of edits (e.g., RCS1077 (https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1077.md handles 12 differents ways of optimizing LINQ expressions). In these cases, edits are semantically related (\\u201cimproving a LINQ expression\\u201d), but this relationship only exists at a high level and is not directly reflected to the syntactic transformations required by the fixer.\\n\\nOther categories contain complex refactoring rules that require reasoning about a chain of expressions (e.g., RCS1197 (https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1197.md turns sb.Append(s1 + s2 + \\u2026 + sN) into sb.Append(s1).Append(s2).[...]Append(sN)), which our current models are unable to reason about. We believe that further advances in (general) learning from source code are required to correctly handle theses cases.\\n\\nWe will expand Appendix C with a more fine-grained analysis of the results in Table 11, providing more background on categories whose results deviate substantially from the average.\\n\\n[Impact of training set size and scalability]: Thanks for the comments! We will discuss this in our final version.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Question: \\u201cwhat would be enabled by accurate prediction of atomic edits \\u2026 elaborate on the motivation and significance for this new task\\u201d\", \"response\": \"This observation is grounded in the comparison of the results displayed in Tables 4 and 5 in our end-to-end experiment on GitHubEdits data (Section 4.4). Table 4 indicates that given the encoding of an edit (x-, x+), the Seq2Seq editor is most precise in generating x+ from x-, (slightly) outperforming the Graph2Tree editor. We evaluate the generality of edit representations in our \\u201cone-shot\\u201d experiment, where we use the encoding of a related edit (x-, x+) to reconstruct x+\\u2019 from x-\\u2019. There, the Graph2Tree editor performs significantly better than the Seq2Seq editor. The latter experiment serves as a good proxy in evaluating the generalization ability of different system configurations, from whose result we derive the hypothesis that better performance with gold-standard edit encodings might not imply better performance with noisy edit encodings.\\n\\nWe apologize for the confusion and will update the text of the paper to clarify what we mean by generalizable and how we draw that conclusion from our experiments.\", \"question\": \"\\u201cwhat it means when they say better prediction performance does not necessarily mean it generalizes better...\\u201d\"}",
"{\"title\": \"To All Reviewers: Regarding Data Annotation\", \"comment\": \"To all reviewers:\\n\\nWe thank all reviewers for their insightful comments!\\n\\n**Regarding Data Annotation**\\n\\nWe apologize for not detailing the annotation rubric and will make this clearer. We will update the main text to clarify the most important points, and provide the instructions and examples for the rating system in the supplementary material.\\n\\nAs also noted by Reviewer-#1, we realized that it is difficult to come up with a fine-grained rating system (e.g., using a 5-element scale) for characterizing semantic/syntactic similarity between edits, especially for free-form natural language data. We believe this problem alone would be an interesting research issue, reminiscent of studies in categorizing syntactic transformations in natural language (e.g., He et al., 2015). \\n\\nTherefore, we chose to use a simpler 3-element scale (semantically/syntactically equivalent edits, related edits, unrelated). For both natural language and code data, we designed detailed annotation instructions with illustrative examples (will be included in the supplementary material of the next version of our paper). Admittedly, this grading scheme is not perfect, as the category of \\u201crelevant edits\\u201d could be further divided, and it does not distinguish semantically similar edits from syntactically similar ones. However, we found no way to exactly define how to do such finer-grained annotations, and thus used our simple scheme. Note that this simple grading system is already effective in comparing the performance of different models. For example, we observe a clear win of Seq2Seq models over the bag-of-words baseline in both natural language and code datasets (Tables 1 and 2), and Graph2Tree with sequential edit encoder over Seq2Seq (Table 2), especially in Acc@1. \\n\\nThe annotation was carried out by three of the authors, and we anonymized the source of systems that generated the output. Due to time limits we assigned different sampled edits to different annotators. We will provide inter-rater agreement score shortly.\", \"reference\": \"1. H. He, A. G. II, J. Boyd-Graber, and H. D. III. Syntax-based rewriting for simultaneous machine translation. In Empirical Methods in Natural Language Processing (EMNLP 2015)\"}",
"{\"title\": \"This paper looks at learning to represent edits for text revisions and code changes. The main contribution is in defining a new task, providing a new dataset, and building simple neural network models that show good performance.\", \"review\": \"This paper looks at learning to represent edits for text revisions and code changes. The main contributions are as follows:\\n* They define a new task of representing and predicting textual and code changes \\n* They make available a new dataset of code changes (text edit dataset was already available) with labels of the type of change\\n* They try simple neural network models that show good performance in representing and predicting the changes\\n\\nThe NLP community has recently defined the problem of predicting atomic edits for text data (Faraqui, et al. EMNLP 2018, cited in the paper), and that is the source of their Wikipedia revision dataset. Although it is an interesting problem, it is not immediately clear from the Introduction of this paper what would be enabled by accurate prediction of atomic edits (i.e. simple insertions and deletions), and I hope the next version would elaborate on the motivation and significance for this new task. \\n\\nThe \\\"Fixer\\\" dataset that they created is interesting. Those edits supposedly make the code better, so modeling those edits could lead to \\\"better\\\" code. Having that as labeled data enables a clean and convincing evaluation task of predicting similar edits.\\n\\nThe paper focuses on the novelty of the task and the dataset, so the models are simple variations of the existing bidirectional LSTM and the gated graph neural network. Because much of the input text (or code) does not change, the decoder gets to directly copy parts of the input. For code data, the AST is used instead of flat text of the code. These small changes seem reasonable and work well for this problem.\\n\\nEvaluation is not easy for this task. For the task of representing the edits, they show visualizations of the clusters of similar edits and conduct a human evaluation to see how similar these edits actually are. This human evaluation is not described in detail, as they do not say how many people rated the similarity, who they were (how they were recruited), how they were instructed, and what the inter-rater agreement was. The edit prediction evaluation is done well, but it is not clear what it means when they say better prediction performance does not necessarily mean it generalizes better. That may be true, but then without another metric for better generalization, one cannot say that better performance means worse generalization. \\n\\nDespite these minor issues, the paper contributes significantly novel task, dataset, and results. I believe it will lead to interesting future research in representing text and code changes.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This work introduces a new learning task of automated edits for text/code, a learning framework for it, a dataset, and some evaluations but we found mostly the latter lacked, reducing our enthusiasm.\", \"review\": \"The authors state nicely and clearly the main contributions they see in their work (Intro, last paragraph). Specifically the state the paper: 1) present a new and important machine learning task, 2) present a family of models that capture the structure of edits and compute efficient representations, 3) create a new source code edit dataset, 4) perform a set of experiments on the learned edit representations and present promising empirical evidence that the models succeed in capturing the semantics of edits.\", \"we_decided_to_organize_this_review_by_commenting_on_the_above_stated_contributions_one_at_a_time\": \"\\u201cA new and important machine learning task\\u201d\\n\\nRegarding \\u201cnew task\\u201d:\", \"pro\": \"The experiment results show how frequently the end-to-end system successfully predicted the correct edit given a pre-edit input and a known representation of a similar edit. Gold standard accuracies of more than 70%, and averaged transfer learning accuracies of more than 30%, suggest that this system shows promise for capturing the semantics of edits.\", \"con\": \"Due to concerns expressed above about the model design and evaluation of the edit representations, it remains unclear to what degree the models succeed in capturing the semantics of edits. Table 11 shows dramatic variation in success levels across fixer ID in the transfer learning task, yet the authors do not propose ways their end-to-end system might be adjusted to address areas of weak performance. The authors do not discuss the impact of training set size on their evaluation metrics. The authors do not discuss the degree to which their model training task would scale to larger language datasets such as those needed for the motivating applications.\\n\\n##############\\nBased on the authors' response, revisions, and disucssions we have updated the review and the score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"The main contributions of the paper are an edit encoder model similar to (Guu et al. 2017 http://aclweb.org/anthology/Q18-1031), a new dataset of tree-structured source code edits, and thorough and well thought-out analysis of the edit encodings. The paper is clearly written, and provides clear support for each of their main claims.\\n\\nI think this would be of interest to NLP researchers and others working on sequence- and graph-transduction models, but I think the authors could have gone further to demonstrate the robustness of their edit encodings and their applicability to other tasks. This would also benefit greatly from a more direct comparison to Guu et al. 2017, which presents a very similar \\\"neural editor\\\" model.\", \"some_more_specific_points\": [\"I really like the idea of transferring edits from one context to another. The one-shot experiment is well-designed, however it would benefit from also having a lower bound to get a better sense of how good the encodings are.\", \"If I'm reading it correctly, the edit encoder has access to the full sequences x- and x+, in addition to the alignment symbols. I wonder if this hurts the quality of the representations, since it's possible (albeit not efficient) to memorize the output sequence x+ and decode it directly from the 512-dimensional vector. Have you explored more constrained versions of the edit encoder (such as the bag-of-edits from Guu et al. 2017) or alternate learning objectives to control for this?\", \"The WikiAtomicEdits corpus has 13.7 million English insertions - why did you subsample this to only 1M? There is also a human-annotated subset of that you might use as evaluation data, similar to the C#Fixers set.\", \"On the human evaluation: Who were the annotators? The categories \\\"similar edit\\\", and \\\"semantically or syntactically same edit\\\" seem to leave a lot to interpretation; were more specific instructions given? It also might be interesting, if possible, to separately classify syntactically similar and semantically similar edits.\", \"On the automatic evaluation: accuracy seems brittle for evaluating sequence output. Did you consider reporting BLEU, ROUGE, or another \\\"soft\\\" sequence metric?\", \"It would be worth citing existing literature on classification of Wikipedia edits, for example Yang et al. 2017 (https://www.cs.cmu.edu/~diyiy/docs/emnlp17.pdf). An interesting experiment would be to correlate your edit encodings with their taxonomy.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
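
The exchange above promises an inter-rater agreement score for the 3-element edit-similarity scale (equivalent, related, unrelated). A minimal, purely illustrative sketch of one such agreement computation, Cohen's kappa for two raters; the labels and ratings below are hypothetical and do not come from the paper:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b, labels=("equivalent", "related", "unrelated")):
    """Cohen's kappa for two raters over a small categorical scale."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Raw fraction of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1.0 - expected)

# Hypothetical ratings of five sampled edits by two annotators:
a = ["equivalent", "related", "related", "unrelated", "equivalent"]
b = ["equivalent", "related", "unrelated", "unrelated", "equivalent"]
print(cohens_kappa(a, b))  # ~0.71 for this toy example
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which matters on a 3-way scale where chance agreement is substantial.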
|
ryG2Cs09Y7 | Feature prioritization and regularization improve standard accuracy and adversarial robustness | [
"Chihuang Liu",
"Joseph JaJa"
] | Adversarial training has been successfully applied to build robust models at a certain cost. While the robustness of a model increases, the standard classification accuracy declines. This phenomenon is suggested to be an inherent trade-off. We propose a model that employs feature prioritization by a nonlinear attention module and $L_2$ feature regularization to improve the adversarial robustness and the standard accuracy relative to adversarial training. The attention module encourages the model to rely heavily on robust features by assigning larger weights to them while suppressing non-robust features. The regularizer encourages the model to extract similar features for the natural and adversarial images, effectively ignoring the added perturbation. In addition to evaluating the robustness of our model, we provide justification for the attention module and propose a novel experimental strategy that quantitatively demonstrates that our model is almost ideally aligned with salient data characteristics. Additional experimental results illustrate the power of our model relative to state-of-the-art methods. | [
"adversarial robustness",
"feature prioritization",
"regularization"
] | https://openreview.net/pdf?id=ryG2Cs09Y7 | https://openreview.net/forum?id=ryG2Cs09Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyeumnYxgE",
"H1lypnNVyV",
"ryg9zojcRX",
"Skewgioc0m",
"Hkxpu5sq0X",
"Bkx2XmKY2Q",
"HyeSggsdn7",
"SklYe2qunm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752159972,
1543945399153,
1543318290156,
1543318255288,
1543318133452,
1541145380140,
1541087212589,
1541086192590
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper934/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper934/Authors"
],
[
"ICLR.cc/2019/Conference/Paper934/Authors"
],
[
"ICLR.cc/2019/Conference/Paper934/Authors"
],
[
"ICLR.cc/2019/Conference/Paper934/Authors"
],
[
"ICLR.cc/2019/Conference/Paper934/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper934/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper934/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an attention mechanism to focus on robust features in the\\ncontext of adversarial attacks. Reviewers asked for more intuition, more\\nresults, and more experiments with different attack/defense models. Authors\\nhave added experimental results and provided some intuition of their proposed\\napproach. Overall, reviewers still think the novelty is too thin and recommend\\nrejection. I concur with them.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Response to post-rebuttal comment\", \"comment\": \"\\\"Also, I found in table 3 that, the larger-capacity model is less robust than the smaller-capacity model against white-box iterative attacks? This is strange.\\\"\\n\\n- It's due to overfitting. Below is the record of the training and test accuracy relative to training epochs for wide networks against PGD-5. The first table is for the baseline model and the second table is for our model. It shows that while the training accuracy keeps increasing, the test accuracy first increases then decreases, which is a sign of model overfitting. Even that both models overfit, our method provides an improvement over the baseline method.\\n\\nBaseline model \\nepoch 10 20 30 40 50 60 70 80 90 100\\ntraining batch accuracy 52.34% 54.69% 59.38% 59.38% 65.62% 53.91% 56.25% 60.94% 56.25% 68.75%\\ntest accuracy 43.86% 47.54% 49.67% 50.82% 50.63% 51.05% 50.66% 52.33% 51.84% 51.07%\\nepoch 110 120 130 140 150 160 170 180 190 200\\ntraining batch accuracy 78.91% 82.81% 78.12% 83.59% 79.69% 86.72% 88.28% 89.84% 92.19% 94.53%\\ntest accuracy 54.29% 52.90% 51.81% 51.23% 50.68% 50.68% 49.98% 49.87% 49.03% 49.15%\\n\\nOur model\\nepoch 10 20 30 40 50 60 70 80 90 100\\ntraining batch accuracy 36.72% 49.22% 43.75% 51.56% 54.59% 56.01% 61.72% 54.21% 63.28% 70.31%\\ntest accuracy 31.99% 38.22% 44.85% 47.82% 50.51% 51.27% 52.42% 52.16% 53.17% 53.26%\\nepoch 110 120 130 140 150 160 170 180 190 200\\ntraining batch accuracy 70.31% 61.38% 69.19% 66.28% 75.44% 85.59% 86.38% 91.41% 95.31% 93.75%\\ntest accuracy 53.10% 52.68% 56.22% 54.38% 54.27% 54.64% 53.85% 53.71% 54.00% 53.23%\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the kind comments.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We address the comments of this reviewer as follows.\\n\\n1. We have run a significant amount of additional experiments and our proposed method demonstrates a consistent improvement over the baseline method. We agree that comparing with other defense methods would improve the experiment section, but we think that the performance comparison with Madry et al. (2017) is the most important for this work. Our feature prioritization and regularization techniques are used as an improvement over the baseline adversarial training approach. Adversarial training with PGD adversary is the state of art method which is validated in various papers and shows superior performance over other defense methods. Since we show that our methods work better than Madry et al. (2017) in various settings, the advantages transfer to other defense models.\\n\\n2. With the results from extended experiments, we summarize the two contributions as follows: feature regularization significantly improves the white box robustness at the cost of a decline in standard accuracy, and a slight decline in black box accuracy; Attention improves both the white box and black box robustness, and at the same time also increases standard accuracy of a model.\\n\\n3. The gradient maps are direct indicators on how the input features are utilized by a model to produce a final prediction. A large magnitude of the gradient on an input feature signifies a heavy dependence the model has. Human vision is robust against small input perturbations and the perception of an image reflects which input features contribute to the vision robustness. At the same time, the gradient maps of a model also highlight the input features which affect the loss most strongly, therefore more robust models depend on robust features and will be better aligned with human vision. So the alignment of gradient maps with the image can be used to evaluate the robustness a model. Next, the purpose of classifying gradient maps is to provide a comparable quantitative measure of the relative alignment between the two sets of gradient maps and the original images. A standard neural net extracts relevant features from the inputs and makes predictions based on the features. When a gradient map is highly aligned with the original image, the neural net is able to identify more relevant features and thus the classification accuracy will be higher. We agree that the robustness questions arise regarding the meaningfulness and interpretability of the accuracies, but we think this method works for comparison purposes.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We address the concerns raised by the reviewer as follows.\\n\\n1. The intuition behind using attention is to effectively assign weights to features depending on their robustness. Robust features are highly correlated with the class labels and invariant to input perturbations. Since the global features are directly used to produce class label prediction, we can use them as a query to assign attention weights. In this way, robust features that have higher correlations with class labels will be assigned larger weights which in turn contribute to the model's robustness. In order to validate this intuition, we conduct an additional experiment in Section 4.3 to examine the relationship between the robustness of a feature and its assigned attention weight. Figure 2 shows that more robust features are actually assigned larger weights by the attention module while the attention weights of non-robust features are small. In such a way, a model with attention has improved robustness.\\n\\n2. We added the missing citations to our paper. For ALP, we agree that it is similar to our feature regularization approach. However, we argue that feature regularization is more intuitive than ALP. While the logits represent the prediction confidence of a model and pairing the logits prevents a model from being over-confident when making predictions, it's not entirely clear why this would lead to a more robust model. On the other hand, feature regularization motivates a model to learn very similar features for the clean and adversarial inputs. The learned features are invariant to input perturbations thus robust features. From another point of view, a model trained with feature regularization maps clean and adversarial examples from nearby points in the image space to nearby points in the high-dimensional manifold. We updated the related work section. The ImageNet results in ALP paper are still under development. Engstrom et al. (2018) invalidate their claims and the ALP paper was retracted from NIPS 2018 by the authors.\\n\\nEngstrom, Logan, Andrew Ilyas, and Anish Athalye. \\\"Evaluating and understanding the robustness of adversarial logit pairing.\\\" arXiv preprint arXiv:1807.10272 (2018).\\n\\n3. We have run the additional tests against the following adversaries: PGD 20+2, PGD 100+2, PGD 200+2, CW 30+2, CW 100+2 and updated our paper with the corresponding results in Section 4.2, Table 2. The results show that our proposed model is more robust than the baseline model against all adversaries in both white box and black box settings.\\n\\n4. As suggested by the reviewer, we run the additional experiments with a wider ResNet to test our method. Due to time and resources constraints, we choose a 3-times wide ResNet with [16, 48, 96, 192] filters instead of the 10-times wide network in Madry et al. (2017). We believe experiments using a 3-times wide ResNet is able to demonstrate the effectiveness of our method with larger capacity networks and nevertheless, our method is independent of the model size. We present the results for this wide model on CIFAR-10 in Section 4.2, Table 3. From Table 3 we note that our method shows better performances against a wide range of adversaries than the baseline method.\\n\\n5. We have run the experiments on CIFAR-100 and updated our paper with the corresponding results in Section 4.5, Table 5. 
The results exhibit a consistently better performance of our proposed model over the baseline method on a harder dataset like CIFAR-100.\"}",
"{\"title\": \"Lack of valid explanation, and insufficient experiment\", \"review\": \"This paper studies adversarial training of robust classification models. It is based on PGD training in [madry17]. It proposes two points: 1) add attention schemes, 2) add a feature regularization loss. The results on MNIST and CIFAR10 demonstrate the effectiveness. At last, it did some diagnostic study and visualization on the attention maps and gradient maps.\\n\\n1. Can you provide detailed explanations/intuitions why attention will help train a more robust models?\\n\\n2. Two related adversarial training papers are missing \\\"Ensemble Adversarial Training\\\" (ICLR2018) and \\\"Adversarial Logit Pairing\\\" (ICML2018). Also, feature (logit) regularization has been studied in ALP paper on ImageNet.\\n\\n3. For Table 2 on CIFAR10, I would like to see PGD20 (iterations) + 2 (step size in pixels), PGD100 + 2 and PGD200 + 2. Also, I am interested in seeing CW loss which is based on logit margin. \\n\\n4. I would like to see results using the \\\"wide\\\" model in [madry17] paper for ALP and LRM. I think results from large-capacity models are more convincing.\\n\\n5. I would like to see results on CIFAR100, which is a harder dataset, 100 classes and 500 images per class. I think CIFAR10 alone is not sufficient for justification nowadays (maybe enough one year ago). Since ImageNet is, to some extent, computationally impossible for schools, I want to see the justification results on CIFAR100.\\n\\n##### Post-rebuttal\\n\\nI appreciate the additional results in the rebuttal. I raise the score but it is still slightly below the acceptance. The reasons are 1) incremental novelty; 2) insufficient experiments. Also, I found in table 3 that, the larger-capacity model is less robust than the smaller-capacity model against white-box iterative attacks? This is strange.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"FEATURE PRIORITIZATION AND REGULARIZATION IMPROVE STANDARD ACCURACY AND ADVERSARIAL ROBUSTNESS\", \"review\": \"Summary: This paper argues that improved resistance to adversarial\\nattacks can be achieved by an implicit denoising method in which model\\nweights learned during adversarial training are encouraged to stay\\nclose to a set of reference weights using the ell_2\\npenalty. Additionally, the authors claim that by introducing an\\nattention model which focuses the model training on more robust\\nfeatures they can further improve performance. Some experiments are\\nprovided.\", \"feedback\": \"My main concerns with the paper are:\\n\\n* The experimental section is fairly thin. There are at this point a\\n large number of defense methods, of which Madry et al. is only one. In\\n light of these, the experimental section should be expanded. The\\n results should ideally be reported with error bars, which would help\\n in gauging significance of the results.\\n\\n* The differential impact of the two contributions is not entirely\\n clear. The results in Table 1 suggest that implicit denoising can\\n help, yet at the same time, Table 2 suggests that Black-box\\n performance is better if we just use the attention model. Overall,\\n this conflates the contributions unnecessarily and makes it hard to\\n distingish their individual impact.\\n\\n* The section on gradient maps is not clear. The authors argue that if\\n the gradient map aligns with the image the model depends solely on\\n the robust features. While this may be (somewhat more) intuitive in\\n the context of simple GLMs, it's not clear why it should carry over\\n to DNNs. I think it would help to make these intuitions much more\\n precise. Secondly, even if this were the case, the methodology of\\n using a neural net to classify gradient maps and from this derive a\\n robustness metric raises precisely the kinds of robustness questions\\n that the paper tries to answer. I.e.: how robust is the neural net\\n classifying the gradient images, and how meaningful are it's\\n predictions when gradient maps deviate from \\\"clean\\\" images.\\n\\nOverall, I feel this paper has some potentially interesting ideas, but\\nneeds additional work before it is ready for publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"This paper proposes a new architecture for adversarial training that is able to improve both accuracy and robustness performances using an attention-based model for feature prioritization and L2 regularization as implicit denoising. The paper is very clear and well written and the contribution is relevant to ICLR.\", \"pros\": [\"The background, model and experiments are clearly explained. The paper provides fair comparisons with a strong baseline on standard datasets.\", \"Using attention mechanisms to improve the model robustness in an adversarial training setting is a strong and novel contribution\", \"Both quantitative and qualitative results are interesting.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
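
The rebuttal and reviews in the record above discuss the paper's two components: an attention module that reweights features and an $L_2$ feature regularizer that pulls the features of clean and adversarial inputs together. Below is a minimal PyTorch-style sketch of such a combined training loss; the `features`, `attention`, and `classify` methods are illustrative assumptions, not the authors' code:

```python
import torch.nn.functional as F

def robust_training_loss(model, x_clean, x_adv, y, reg_weight=1.0):
    """Cross-entropy on adversarial inputs plus an L2 feature regularizer.

    Sketch only: `model.features` is assumed to return penultimate features,
    `model.attention` per-feature weights, and `model.classify` logits.
    """
    f_clean = model.features(x_clean)
    f_adv = model.features(x_adv)
    # Attention reweights features; robust features should receive larger weights.
    w = model.attention(f_adv)
    logits = model.classify(w * f_adv)
    ce = F.cross_entropy(logits, y)
    # L2 feature regularization: encourage similar features for the clean
    # and adversarial versions of the same image.
    reg = ((f_clean - f_adv) ** 2).mean()
    return ce + reg_weight * reg
```

In PGD-based adversarial training in the style of Madry et al. (2017), `x_adv` would be generated from `x_clean` by a PGD attack at each training step.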
|
S1en0sRqKm | On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent | [
"Noah Golmant",
"Nikita Vemuri",
"Zhewei Yao",
"Vladimir Feinberg",
"Amir Gholami",
"Kai Rothauge",
"Michael Mahoney",
"Joseph Gonzalez"
] | Increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique (Das et al., 2016; Keskar et al., 2016). We investigate these issues, with an emphasis on time to convergence and total computational cost, through an extensive empirical analysis of network training across several architectures and problem domains, including image classification, image segmentation, and language modeling. Although it is common practice to increase the batch size in order to fully exploit available computational resources, we find a substantially more nuanced picture. Our main finding is that across a wide range of network architectures and problem domains, increasing the batch size beyond a certain point yields no decrease in wall-clock time to convergence for either train or test loss. This batch size is usually substantially below the capacity of current systems. We show that popular training strategies for large batch size optimization begin to fail before we can populate all available compute resources, and we show that the point at which these methods break down depends more on attributes like model architecture and data complexity than it does directly on the size of the dataset. | [
"Deep learning",
"large batch training",
"scaling rules",
"stochastic gradient descent"
] | https://openreview.net/pdf?id=S1en0sRqKm | https://openreview.net/forum?id=S1en0sRqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJxTGceZe4",
"rygQP9i_0Q",
"rJg-T_iO0X",
"SygQyusu0X",
"BygHDPou0X",
"BJepmDsdRQ",
"SyeA0WaOT7",
"SylNiW91T7",
"SJxGhB28hX",
"rJgLKxFVhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544780308857,
1543187034668,
1543186617422,
1543186395166,
1543186269074,
1543186212957,
1542144470441,
1541542299895,
1540961706214,
1540817022275
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper932/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper932/Authors"
],
[
"ICLR.cc/2019/Conference/Paper932/Authors"
],
[
"ICLR.cc/2019/Conference/Paper932/Authors"
],
[
"ICLR.cc/2019/Conference/Paper932/Authors"
],
[
"ICLR.cc/2019/Conference/Paper932/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper932/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper932/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper932/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents an interesting empirical analysis showing that increasing the batch size beyond a certain point yields no decrease in time to convergence. This is an interesting finding, since it indicates that parallelisation approaches might have their limits. On the other hand, the study does not allow the practitioners to tune their hyperparamters since the optimal batch size is dependent on the model architecture and the dataset. Furthermore, as also pointed out in an anonymous comment, the batch size is VERY large compared to the size of the benchmark sets. Therefore, it would be nice to see if the observation carries over to large-scale data sets, where the number of samples in the mini-batch is still small compared to the total number of samples.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interestng empirical analysis but insights might be limited\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Thank you for your comments. As you point out, Ma et al. (2017) have already shown that increasing the batch size indefinitely eventually stops yielding any improvement in convergence speed. Missing from this theoretical analysis is a prediction of what exact batch size is too large, rendering their results of limited use for practitioners. Although finding an optimal batch size a priori in the general case has proved elusive, our results demonstrate that this optimal/maximum batch size is heavily problem-dependent (compare the contour plots for DRN and ResNet34 in Figure 1), and they suggest, though do not prove, that current state-of-the-art results on image classification tasks are already nearing the maximum. To our knowledge, our work is the first large-scale empirical study of the effects of batch size on convergence speed and training loss. Current theoretical analyses fail to explore the saturation phenomenon as a function of dataset and model architecture; our results show that the optimal batch size is heavily dependent on these parameters.\\n\\n>>> It is not clear that all the regularization techniques have been tried by the authors, the increase of generalization error is very small, and there is no explanation or insight given by the authors to explain this phenomenon, making this finding of limited interest.\\n\\nThis work primarily studies the effects of batch size on convergence speed and on the minimum achieved training loss; generalization error only compounds these problems. We have included an additional table of test errors, which demonstrates a significant increase in generalization error (e.g. 93.58% for test accuracy at batch size 64 vs. 86.93% at batch size 8k for ResNet34 on CIFAR-10). Other lines of work already explore the effect of batch size on generalization error (theoretical: Jastrz\\u0119bski et al. (arxiv:1711.04623), Zhu et al. (arxiv:1803.00195), and experimental: You et al. (arxiv:1708.03888), Smith and Le (arxiv:1711.0048)), and so we do not make a significant study of it here. \\n\\n>>> \\u201cDataset size is not the only factor determining the computational efficiency of large batch training.\\\" is something obvious to say, as there are plenty of factors that determine the computational efficiency (network connection, map-reduce implementation, etc.)\\n\\nOur claim is poorly worded. Our intention is to claim that, contrary to Smith and Le (arxiv:1711.0048) and Goyal et al. (arxiv:1706.02677), increasing the dataset size does not yield a linear increase in the permissible batch size. In this submission, we support this claim in Figure 4, and we intend to bolster this claim further with future work on larger datasets.\\n\\nOur intention in this work is to understand the ability of large batch sizes to provide significant gains in training efficiency and speed. By measuring convergence speed in terms of training iterations rather than wall-clock time, we demonstrate that, regardless of the particular distributed implementations, these gains quickly become marginal or non-existent. A better wording of this claim would be that model architecture and other problem properties play a more decisive role in determining training efficiency than dataset size alone. 
\\n\\n>>> Suggestions for future work are not very defined/helpful and no alternate forms presented\\n>>> No discussion on the lock-free gradient descent, that is often suggested as an alternative to batching\\n\\nAsynchronous and lock-free methods are exciting approaches that may help in the future to address the speedup saturation behavior we observe. In this work, we focus on synchronous mini-batch training because most recent large-scale training work focuses on this setting (Chen et al. (arxiv:1604.00981)). A natural direction of future work would be to explore variations in model architecture on the same training data to help disentangle the effects of the dataset and model architecture on the amount of exploitable data parallelism in the problem.\\n\\n>>> Skipped past other heuristics on increasing the size of the batch size as the iterations increase\\n\\nJastrz\\u0119bski et al. (arxiv:1711.04623) and Smith and Le (arxiv:1711.00489) show that increasing the batch size as training progresses has the same effect as decaying the learning rate, and Smith et al. (arXiv:1711.00489) support this claim empirically. However, in their paper they also show that there is a maximum batch size that they can scale to, which actually includes the geometric learning rate scaling that we studied. Therefore, the adaptive batch size also shows the same behavior as explained by the theoretical results of Jastrz\\u0119bski et al. (arxiv:1711.04623).\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Thank you for providing thoughtful commentary and feedback on our submission.\\n\\n>>> Based on empirical evaluation, the paper cannot make any claim about the generality of the obtained results.\\n\\nWe do not claim to have a fully general understanding of how these scaling phenomena will vary across arbitrary problems. However, one of our goals is to show that the scaling behavior of large-batch SGD differs significantly across problem domains, which is in contrast to the large body that explores these techniques almost exclusively in image classification ((You et al., (arxiv:1708.03888), (Goyal et al., arxiv:1706.02677), (Jia et al., arxiv:1807.11205)). The scaling behavior that we observed on these new problems quickly deviates from what we would expect to see on a standard image classification task.\\n\\n>>> [I]t is not clear how the definition of different training phases can help the practitioner to tune the training parameters\\n\\nLearning how to use these scaling observations to provide explicit guidance to practitioners given a particular problem configuration is an open problem. The goal of our work is to show that, in the presence of this speedup saturation phenomenon, simply optimizing within the existing SGD hyperparameter configuration space to accommodate a large batch size is not sufficient to enable significant reductions in training time or computational cost.\\n\\n>>> how are the empirical results obtained in the experiment section expected to depend on the specific dataset/benchmark?\\n\\nRight now, it does not appear as though there is a straightforward way to isolate the impact of a specific property like model architecture, because different datasets / problem domains require markedly different architectures. Even without this challenge, it is very difficult to isolate the exact effect of various dataset and model properties on convergence speed within the same problem domain, because these problems have complex relationships with properties of the overarching objective landscape.\\n\\n>>> what is a batch size that does not allow one to 'fully utilize our available compute'?\\n\\nThe goal of using large batch sizes is to ensure that GPU cycles, not communication bandwidth, is the bottleneck for overall throughput. By a batch size that does not allow one to \\u2018fully utilize available compute\\u2019, we are referring to a batch size that is small enough that communication bandwidth becomes the main bottleneck. Our main conclusion in this submission is that even though increasing batch size allows for the GPU cycles to be fully utilized, increasing batch size beyond a certain point no longer leads to proportional (or even any) improvement in overall time to convergence. In other words, the most cost-efficient batch size for a particular problem may still leave many GPUs sitting idle. \\n\\n>>> does the amount of over-parameterization in the model have any effects on the definition of the training phases? \\n\\nYes; theoretical results link particular manifestations of over-parameterization to the presence of these training phases ((Ma et al., arxiv: 1712.06559), (Yin et al., arxiv:1706.05699)). However, in practice, it is difficult to isolate the effects of the over-parameterization itself, since changing the model architecture to increase the degree of over-parameterization changes the objective landscapes in ways that are difficult to characterize. 
Furthermore, different ways to change the amount of over-parameterzation (larger hidden layers, more hidden layers, etc.) may have different effects, and there is no clear way to choose a canonical over-paramerization method.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Thank you very much for your review and comments.\\n\\n>>> It would've been even nicer if the paper covered more variety of popular ML models such as Machine Translation, Speech Recognition, (Conditional) Image Generation, etc which open source implementations are readily available.\\n\\nWe hope to also verify these results across other interesting problem domains such as those you have suggested. For this paper, we selected image segmentation and language modeling tasks because these problems displayed markedly different convergence behavior than what has been hypothesized from existing empirical results, which primarily focus on image classification problems. \\n\\n>>> Since the theory only gives us asymptotic form of the optimal learning rate, empirically you should be tuning the learning rate for each batch size.\\n\\nSince the time of submission, we have performed additional experiments where we try out a variety of learning rates for each batch size, and produce a contour plot of the resulting training error at the end of training. We have included a link to the figure below [1]. We observe that as we increase batch size, it becomes more difficult to find a model with low training loss, regardless of the initial learning rate. We plan to include this figure as supplementary material in the appendix.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Thank you for your comments. We agree that the degradation in performance we observe is due to the increased batch size, though we argue in Figure 5 that the maximum admissible batch size is not sensitive to the dataset size. To provide a bit more context, the motivation behind our investigation is that there is a broad interest in increasing batch size for synchronous SGD in order to make better use of massively parallel hardware. It is well-known that, when using small batches, increasing the batch size yields a commensurate decrease in the number of iterations needed to converge, without adversely affecting the final training loss. However, for larger batch sizes, our findings show that this trend breaks down. More specifically, the point at which increasing the batch size no longer yields a decrease in the time to convergence depends sensitively on the type of data and the model architecture, but less so on the size of the dataset.\", \"regarding_the_svhn_speedup_curves\": \"in Figure 3, we train on only 50k examples to compare datasets of the same size. In Figure 4, we are training on partitions of the full 600k example dataset.\"}",
"{\"title\": \"Authors' response\", \"comment\": [\"We are very grateful to the reviewers for their time and for their thoughtful reading of the manuscript. We would like to make some general comments here:\", \"Existing theoretical results: Unfortunately, there is not yet a full theoretical treatment of the relationships between batch size, convergence speed, and best-achieved training loss. Some interesting theoretical results in this area exist (e.g. Ma et al. 2017, arxiv:1712.06559; Yin et al. 2018, arxiv:1706.05699), but they either rely on too-stringent assumptions, or they make claims that are too general to be of much use to practitioners. With this in mind, we see our more empirical contribution as an important step toward disentangling the effects of various problem parameters (e.g. model architecture, dataset size, learning rate) on the potential for data parallelism in synchronous SGD. In this way, we see our empirical results as complementary to existing theoretical work.\", \"Generality of our results: The nature of an empirical study of this sort is that results do not generalize to arbitrary other settings, and although we investigate our findings over a range of models, datasets, and learning rates, we make no claim that our results enjoy full generality. However, we hope these results will raise awareness among practitioners that the largest admissible batch size depends heavily on these parameters, and that they must be careful not to increase the batch size too much, lest they increase their GPU utilization without reducing the number of iterations to convergence.\", \"Effect on generalization gap: This work primarily studies the effects of batch size on convergence speed and on the minimum achieved training loss; generalization error only compounds these problems, and we do not study it extensively. Our results confirm the existence of the generalization gap across several problem domains (see Figure 5 in the appendix). We also show that even with the LR scaling rule, the generalization gap persists, especially for non-image-classification problems.\"]}",
"{\"comment\": \"My concern about this paper is that most experiments are done in CIFAR10 and CIFAR10 sample size is ~50000. The batch size they mainly discussed is > 8000. In this regime, MB_size is comparable to total training samples. Thus, SGD assumption MB_size << training sample does not hold. The challenge can be directly due to gradient decent itself; not batch size effect in SGD.\", \"the_other_issue\": \"Fig.3 and Fig.4 SVHN are not consistent. Could you please explain?\", \"title\": \"Not in SGD assumption regime.\"}",
"{\"title\": \"Limited insights in the understanding of the batch size effect\", \"review\": \"The work presented relates to the impact batch-size on the learning performances of common neural network architectures.\", \"pro\": \"having comprehensive study of the limit of gradient-based methods is very useful in practice. This work can help practitioner to limit the number of machines used for optimization.\", \"cons\": [\"very little can be deduced from these experiments:\", \"\\\"Increasing the batch size beyond a certain point yields no improvement in wall-clock time to convergence, even for a system with perfect parallelism.\\\" was a know fact (they cite Ma et al (2017) who even proved it theoretically.\", \"\\\"Increasing the batch size leads to a significant increase in generalization error, which cannot be mitigated by existing techniques.\\\". It is not clear that all the regularization techniques have been tried by the authors, the increase of generalization error is very small, and there is no explanation or insight given by the authors to explain this phenomenon, making this finding of limited interest.\", \"\\\"Dataset size is not the only factor determining the computational efficiency of large batch training.\\\" is something obvious to say, as there are plenty of factors that determine the computational efficiency (network connection, map-reduce implementation, etc.)\"], \"even_the_suggestions_for_future_work_of_the_authors_in_the_conclusion_does_not_help_much\": \"they suggest to look at \\\"alternative forms of parallelism\\\", without citing or giving any clue of what could be such alternative forms.\\nAlso, there is no discussion around lock-free\\n\\nThe authors refer to Ma et al. (2017) for a theoretical analysis of the effect of the batch size, but they skip all the past and very relevant literature on the topic of the effect of the batch size on the convergence. For example, it is recommended to increase the size of the batch size as the iterations increase.\\n\\nFinally, there is no discussion on the lock-free gradient descent, that is often suggested as an alternative to batching.\\n\\nIn conclusion, I'm not convinced there is enough material to accept this paper at the next ICLR conference.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Insightful empirical study of the effect of batch size for convergence speed\", \"review\": \"This paper empirically investigates the effect of batch size on the convergence speed of the mini-batch stochastic gradient descent of popular deep learning models. The fact that there is a diminishing return of batch size is not very surprising and there is a well-known theory behind it, but the theory doesn't exactly tell when we will start to suffer from the diminishing return. Therefore, it is quite valuable for the community to have an empirical analysis across popular ML tasks and models. In this regard, however, It would've been even nicer if the paper covered more variety of popular ML models such as Machine Translation, Speech Recognition, (Conditional) Image Generation, etc which open source implementations are readily available. Otherwise, experiments in this paper are pretty comprehensive. The only additional experiment I would be interested in is to tune learning rate for each batch size, rather than using a base learning rate everywhere, or simple rules such as LSR or SRSR. Since the theory only gives us asymptotic form of the optimal learning rate, empirically you should be tuning the learning rate for each batch size. And this is not totally unrealistic, because you can use a fraction of computational time to do cross-validation for searching the learning rate.\", \"pros\": [\"findings provide us useful direction for future research (that data-parallelism centered distributed training is going to hit the limit soon)\", \"extensive experiments across 5 datasets and 6 neural network architectures\"], \"cons\": [\"experiments are a bit too much focused on image classification\", \"error bars in figures could've provided greater confidence in robustness of findings\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some interesting empirical results for a popular problem\", \"review\": \"Summary:\\nThe authors present an empirical analysis of how the size of SGD batches affects neural networks' training time.\", \"strengths\": \"As mini-batches training is highly popular nowadays, the problem emphasized by the authors may have a high impact in the community. Together with recent analysis on the generalization properties of over-parametrized models, the paper may help understand more general open problems of neural networks' training. A nice contribution of the paper is the observation that different phases of scaling behaviour exist across a range of datasets and architectures.\", \"weaknesses\": \"Based on empirical evaluation, the paper cannot make any claim about the generality of the obtained results. Even if the authors' analysis is based on a large set of benchmarks, it is hard to asses whether and how the results extend to cases that are not included in Section 4. In particular, it is not clear how the definition of different training phases can help the practitioner to tune the training parameters, as the size and range of the different regimes depend so strongly on the model's architecture and dataset at hand.\", \"questions\": [\"have the properties of mini-batches training been explored from a formal/theoretical perspective? do those results match and confirm the proposed empirical evaluation?\", \"how are the empirical results obtained in the experiment section expected to depend on the specific dataset/benchmark? For example, given a particular architecture, what are the key features that define the three training phases (shape of the nonlinearity, number of layers, underlying distribution of the dataset)?\", \"what is a batch size that does not allow one to 'fully utilize our available compute'?\", \"does the amount of over-parameterization in the model have any effects on the definition of the training phases? How are the results obtained in the paper linked to the generalization gap phenomenon?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
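
Several exchanges in the record above refer to learning-rate scaling heuristics for large-batch SGD, such as the linear scaling rule (LSR) and square-root scaling (SRSR) mentioned in one review. A small illustrative sketch of the two rules, assuming a base learning rate tuned at a base batch size (the numbers in the example are hypothetical):

```python
def scaled_lr(base_lr, base_batch, batch, rule="linear"):
    """Common learning-rate heuristics for large-batch SGD.

    'linear': lr grows proportionally with the batch-size ratio
              (the rule popularized by Goyal et al., 2017).
    'sqrt':   lr grows with the square root of the batch-size ratio.
    """
    ratio = batch / base_batch
    if rule == "linear":
        return base_lr * ratio
    if rule == "sqrt":
        return base_lr * ratio ** 0.5
    raise ValueError(f"unknown rule: {rule}")

# e.g., base_lr = 0.1 tuned at batch size 256, scaled to batch size 8192:
print(scaled_lr(0.1, 256, 8192, "linear"))  # 3.2
print(scaled_lr(0.1, 256, 8192, "sqrt"))    # ~0.566
```

As the discussion above notes, these heuristics begin to fail beyond a problem-dependent batch size, which is the saturation regime the paper studies.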
|
BklhAj09K7 | Unsupervised Domain Adaptation for Distance Metric Learning | [
"Kihyuk Sohn",
"Wenling Shang",
"Xiang Yu",
"Manmohan Chandraker"
] | Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain. However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space. This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated. Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, into a verification one. To handle both within- and cross-domain verification, we propose a Feature Transfer Network (FTN) that separates the target feature space from the original source space while aligning it with a transformed source space. Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain. In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains, and then demonstrate the effectiveness of FTNs through state-of-the-art performance on a cross-ethnicity face recognition problem. | [
"domain adaptation",
"distance metric learning",
"face recognition"
] | https://openreview.net/pdf?id=BklhAj09K7 | https://openreview.net/forum?id=BklhAj09K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxtDDta1V",
"SJei8ZU81E",
"BklPeLVUyN",
"HJx8mmk7Am",
"Sy86MymAm",
"rylttGymA7",
"BJecfeN52Q",
"HylVtcW53Q",
"ByemonjVnm"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544554337305,
1544081747313,
1544074735043,
1542808350159,
1542808253516,
1542808193361,
1541189649588,
1541180028259,
1540828315403
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper931/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper931/Authors"
],
[
"~Hui-Po_Wang1"
],
[
"ICLR.cc/2019/Conference/Paper931/Authors"
],
[
"ICLR.cc/2019/Conference/Paper931/Authors"
],
[
"ICLR.cc/2019/Conference/Paper931/Authors"
],
[
"ICLR.cc/2019/Conference/Paper931/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper931/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper931/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new solution for tackling domain adaptation across disjoint label spaces. Two of the reviewers agree that the main technical approach is interesting and novel. The final reviewer asked for clarification of the problem setting which the authors have provided in their rebuttal. We encourage the authors to include this in the final version. However, there is also a consensus that more experimental evaluation would improve the manuscript and complete experimental details are needed for reliable reproduction.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting approach for joint domain adaptation and transfer learning\"}",
"{\"title\": \"response\", \"comment\": \"Hi Hui-Po,\\n\\nThanks for your comment.\\n\\nAs you mentioned, the conventional domain adaptation problems assume the same \\\"task\\\" between the source and the target domains and this allows to transfer discriminative knowledge (e.g., classifier) learned from the source domain to the target domain. On the other hand, not all domains with significant domain shift in the input data space share the same output label spaces, such as cross-ethnicity face recognition or other applications in [1].\\n\\nIn this work, we resolve such limitation of conventional domain adaptation methods and provide a framework that is also applicable when label spaces of two domains are disjoint by converting disjoint identification tasks into a shared verification task. Note that, as we clarified in our response to R3, the conversion of identification to verification allows the problem definition fits perfectly into that of domain adaptation as the source and target domains now have the shared verification task. That being said, the knowledge we are transferring from source to the target domain is verification, i.e., binary classification for pair of data being the same class or not. This is also evident from our theoretical analysis presented in Section 3 and Appendix A where we prove that the verification error defined on the pair of data from the target domain can be bounded by the verification error on the source pair and the domain discrepancy.\\n\\nHope this clarifies your concern on \\\"what kind of knowledge is being transferred\\\" between two domains. Please let us know if further clarification is required.\\n\\n[1] Luo et al., Label efficient learning of transferable representations across domains and tasks, NIPS 2017\"}",
"{\"comment\": \"Hi authors,\\n\\nI appreciate you provide thorough and various extension of existing loss functions. However, I would like to know further what's the main problem you want to solve in this work. It seems not to be clear to me.\\n\\nLet me make a guess and maybe explain the main idea in other words. The proposed method is trying to leverage the \\\"semantic\\\" knowledge in the source domain and perform \\\"clustering\\\" on those target samples with unseen labels (because labels are disjoint).\\n\\nAssuming I am correct above, I would like to ask the following questions:\\n\\nIn conventional domain adaptation problem, we usually assume that both domains share some common knowledge so that you can utilize the knowledge (labels and corresponding discriminative power) from the source domain to solve similar problems in the target domain. In your work, however, both input and label spaces are \\\"disjoint\\\". I am curious what kind of knowledge you would like to transfer to the target domain and how you can make sure that the knowledge can be applied to those target samples with unseen labels. If these problems are not clarified, as mentioned by reviewer 3, I would say the major improvement all comes from MCEM, which performs clustering algorithm on target samples, instead of the proposed method.\\n\\nIf I made any mistake above, please correct me directly.\\nThank you for your patient reading.\\n\\nbest,\", \"title\": \"What's the main task you want to address\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for their valuable comments.\\n\\n(In response to 3) We argue that many distance metric adaptation or transfer learning algorithms in deep learning are based on distribution matching. For example, [3,4] uses discriminator-based adversarial loss and [5] uses kernel-based MMD loss to reduce the domain discrepancy. Regardless of the discriminator or the kernel, these methods will push two domains closeby and thus have the same limitation as DANN. The proposed FTN resolves this issue by learning \\u201cdomain-equivariant\\u201d representation and we provide empirical evidence (e.g. Table 2 or Figure 2(b-c)) using DANN as the most representative baseline. While one may try adding more components, such as deep supervision (e.g., applying MMD loss at multiple feature layers) as in [5], we believe that our contribution is orthogonal and complementary to those additional components. \\n\\n(In response to 3) We note that the MCEM is one of our novel contributions, which is only made available through our view on converting the classification task into verification. We agree that it plays a critical role to obtain a highly discriminative representation. For example, [6] considers a similar setting of domain adaptation with disjoint label spaces but they require labeled examples and complete definition of the label space of the target domain to apply classification-based adversarial adaptation learning and entropy regularization. Nonetheless, we provide the within-domain (Table 1) and cross-domain (Table 2) identification accuracy of DANN+MCEM below. We will include this result in the revision:\\n\\nDANN (for within-domain identification, CAU / AA / EA / ALL; for cross-domain, CAU / AA / EA):\", \"within_domain_identification\": \"90.3 / 80.7 / 82.3 / 83.4\", \"cross_domain_identification\": \"94.0 / 93.1 / 92.8\\n\\nSimilarly to the FTN, we observe improvement using MCEM with DANN, as compared to the DANN only model. Comparing between adaptation models with MCEM, we still observe better performance when combined with FTN. Especially, the contrast in performance becomes significant in cross-domain identification task, which confirms the unique capability of FTN in learning to transfer discriminative knowledge by alignment while separating representations across domains.\\n\\n\\n(In response to 1) Our problem setting is adaptation from labeled source to unlabeled target with disjoint label spaces. Following the nomenclature of [1], it contains flavors from both domain adaptation (DA) and transfer learning (TL). The difference in input distribution between source and target domains and the lack of labels in the target domain are similar to that of DA or transductive TL [1], while the difference in label distribution and task definitions between two domains is akin to inductive TL [1,2]. In our work, we formalize this problem in domain adaptation framework using verification as a common task. This is a key contribution that allows theoretical analysis on the generalization bound as presented in Section 3 and Appendix A, while also allowing important novel applications like cross-ethnicity face recognition.\\n\\n\\n(In response to 2) We acknowledged in the second paragraph of Section 2 some existing works on domain adaptation that use the verification loss for problems such as face recognition and person re-identification, while highlighting our novel contribution. 
We will include more discussion and references [5] related to this.\\n\\n\\n[1] Pan and Yang, A survey on Transfer Learning, 2010\\n[2] Daume, https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html\\n[3] Ganin et al., Domain Adversarial Training of Neural Networks, JMLR 2016\\n[4] Sohn et al., Unsupervised domain adaptation for face recognition in unlabeled videos, ICCV 2017\\n[5] Hu et al., Deep Transfer Metric Learning, CVPR 2015\\n[6] Luo et al., Label efficient learning of transferable representations across domains and tasks, NIPS 2017\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for their valuable comments.\\n\\nWe understand the concern in Table 3 that the performance improvement is not as significant as in Table 1. As mentioned in footnote 5, we observe that the ethnicity bias not only exists in the training dataset, but also in public benchmark datasets, such as LFW or IJB-A. While we observe the benefit of FTN over source only model in all evaluation metrics or over DANN in low FAR regime, thus requiring more within as well as cross-domain discriminativeness, we believe that these datasets may not be the best to evaluate the fairness of face recognition algorithms. This indeed is our motivation to collect an ethnicity-balanced test dataset for fair evaluation. We will make the dataset publicly available to the community upon publication.\"}",
"{\"title\": \"clarification on feature reconstruction loss\", \"comment\": \"We thank the reviewer for their valuable comments.\\n\\n1. We clarify that the reference network is pretrained on the labeled source data and fixed over the training of DANN/FTN. In other words, the gradient in Equation(6) is only backpropagated through f, but not through f_{ref}.\\n\\nWe note that the training procedure of reference network resembles the training of teacher network in distillation framework [1], in the sense that both teacher network and our reference network are \\u201cpretrained and fixed\\u201d during the training of student or DANN/FTN, respectively.\\n\\n[1] Hinton et al., Distilling the knowledge in a neural network, NIPS 2014 DL Workshop\\n\\n2. We will add a reference (section 3 and appendix) as suggested.\"}",
"{\"title\": \"A good paper addressing domain adaptation for disjoint labels.\", \"review\": \"The authors studied an interesting problem of unsupervised domain adaptation when the source and the target domains have disjoin labels spaces. The paper proposed a novel feature transfer network, that optimizes domain adversarial loss and domain separation loss.\", \"strengths\": \"1) The proposed approach on Feature Transfer Network was novel and interesting.\\n2) The paper was very well written with a good analysis of various choices.\\n3) Extensive empirical analysis on multi-class settings with a traditional MNIST dataset and a real-world face recognition dataset.\", \"weakness\": \"1) Practical considerations addressing feature reconstruction loss needs more explanation.\", \"comments\": \"The technical contribution of the paper was sound and novel. The paper considered existing work and in a good way generalizes and extends into disjoint label spaces. It was easy to read and follow, most parts of the paper including the Appendix make it a good contribution. However, the reviewer has the following suggestions\\\" \\n\\n1. Under the practical considerations for preventing the mode collapse via feature reconstruction, how is the reference network trained? In the Equation(6) for feature reconstruction, the f_ref term maps the source and target domain examples to new feature space. What do you mean by references network trained on the label data? Please clarify.\\n\\n2. Under the practical considerations for replacing the verification loss, it is said that \\\"Our theoretical analysis suggests to use a verification\\nthe loss that compares the similarity between a pair of images\\\" - Can you please cite the references to make it easier for the reader to follow.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The motivation is clear but the experiments are not sufficient.\", \"review\": \"In this work, authors consider transfer learning problem when labels for the target domain is not available. Unlike the conventional transfer learning, they introduce a new loss that separates examples from different domains. Besides, they apply the multi-class entropy minimization to optimize the performance in the target domain. Here are my concerns.\\n1.\\tThe concept is not clear. For domain adaptation, we usually assume domains share the same label space. When labels are different, it can be a transfer learning problem.\\n2.\\tOptimizing the verification loss is conventional for distance metric learning based transfer learning and authors should discuss more in the related work.\\n3.\\tThe empirical study is not sufficient. There lacks the method of transfer learning with distance metric learning. Moreover, the major improvement seems from the MCEM rather than the proposed network. How about DANN+MCEM?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper addressing a difficult problem. Good formalization and reasonable evaluation\", \"review\": \"I like the idea of the paper and I believe it addressing a very relevant problem. While the authors provide a good formalization of the problem and convincing demonstration of the generalization bound, the evaluation could have been better by including some more challenging experiments to really prove the point of the paper. It is surely good to present the toy example with the MNIST dataset but the ethnicity domain is less difficult than what the authors claim. This is also pretty evident from the results presented (e.g., in Table 3). The proposed approach provides maybe slightly better results than the state of the art but the results do not seem to be statistically significant. This is probable also due to the fact that the problem itself is made simpler by the cropped faces, no background, etc. I would have preferred to see an application domain where the improvement would be more substantial. Nevertheless, I think the theoretical presentation is good and I believe the manuscript has very good potential.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1g30j0qF7 | Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes | [
"Roman Novak",
"Lechao Xiao",
"Yasaman Bahri",
"Jaehoon Lee",
"Greg Yang",
"Jiri Hron",
"Daniel A. Abolafia",
"Jeffrey Pennington",
"Jascha Sohl-dickstein"
] | There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs). This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP. In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels. We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible.
Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical. As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case. We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation. | [
"Deep Convolutional Neural Networks",
"Gaussian Processes",
"Bayesian"
] | https://openreview.net/pdf?id=B1g30j0qF7 | https://openreview.net/forum?id=B1g30j0qF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJxPb2VoWV",
"SkgcPwaGeE",
"Skluw57MeN",
"SklzQ5QfgN",
"SygaK6RU14",
"SJgkvoCrJE",
"BJgxlnn4yV",
"BJejjc3VkV",
"S1lesx2c0X",
"HJgaDeh9R7",
"Hkes7gncAm",
"r1eI1gn5Cm",
"ryeVtyn907",
"Bkg_Xk290X",
"BygZgkn5RQ",
"BJl_qRsqCQ",
"H1xWncjq0Q",
"ryl9t5scRQ",
"rJx7Qcjq07",
"BygYYYicRX",
"BkgW5ui5Cm",
"BkxQiXRYhX",
"SkgZdQ0t37",
"rke_hxhHhX"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1546501119081,
1544898402454,
1544858208178,
1544858137653,
1544117637229,
1544051542737,
1543977959873,
1543977635248,
1543319703786,
1543319652705,
1543319586781,
1543319517712,
1543319419618,
1543319327534,
1543319272806,
1543319184200,
1543318184543,
1543318145642,
1543318043073,
1543317888654,
1543317640692,
1541165979008,
1541165929258,
1540894896029
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/Authors"
],
[
"ICLR.cc/2019/Conference/Paper930/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper930/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper930/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Suggesting authorship\", \"comment\": \"Dear AnonReviewer3,\\n\\nWe would like to recognize your very detailed and useful review by including you as a co-author in the next revision (among other pending changes addressing your and other reviewers' final remarks). If you are interested, please let us know your name, email and affiliation.\\n\\nThank you!\"}",
"{\"metareview\": \"There has been a recent focus on proving the convergence of Bayesian fully connected networks to GPs. This work takes these ideas one step further, by proving the equivalence in the convolutional case.\\n\\nAll reviewers and the AC are in agreement that this is interesting and impactful work. The nature of the topic is such that experimental evaluations and theoretical proofs are difficult to carry out in a convincing manner, however the authors have done a good job at it, especially after carefully taking into account the reviewers\\u2019 comments.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work taking recent advances one step further\"}",
"{\"title\": \"1/2 AnonReviewer4 Reply\", \"comment\": \"Thank you for your very comprehensive and encouraging review! Please find our replies to your specific comments below.\\n\\n------------------------------------------------------------------------------------\\n>>> A discussion on reasons for best CNN to give a performance better than GP-CNN (especially with pooling), and a experimental comparison with finite width Bayesian CNN would have made the paper more concrete. \\n\\nPlease note that we have only observed this difference in performance for CNN-GP without pooling, which we explain in discussion (sections 5.1 and 5.3). We don\\u2019t make any claims regarding comparisons CNN-GP with pooling and the respective CNN. We emphasize that CNN-GP throughout the paper refers to a no-pooling architecture, and in the rare cases where we evaluate it with pooling we say so explicitly (e.g. \\u201cMC-CNN-GP with pooling\\u201d, \\u201cGlobal average pooling\\u201d). We will make this more clear in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> [...]- (Page 5) on convergence of K^l : From Equations (3) and (4), it can be seen that K^l converges to C(K^{l-1}), with C(K^{l-1}) defined slightly different from the paper, in that the expectation over z is taken w.r.t z~ N(0;A(K)) instead of z ~ N(0; K). Is this equivalent to the expressions (7) and (8) described in the paper for a non-linear function \\\\phi ?\\n\\nYou are correct, it is exactly equivalent (\\\\circ represents composition). We will make this step more clear in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - Experimental comparison with Bayesian CNN, demonstrating the effect of increasing the number of channels.\\n\\nThank you for the suggestion, we agree such experiments would be highly relevant. However, the computational requirements of training Bayesian CNN prohibits us from performing experiments in the many channel setting, which is on the other hand tractable with SGD. Additionally, the connection between SGD and Bayesian inference is an area of active and sometimes contradictory research in the ML community currently, and we believe our experimental results comparing the NN-GP to SGD trained network will therefore be of significant interest.\\n\\n------------------------------------------------------------------------------------\\n>>> - (Page 7) GP-CNN with pooling : Paper proposes subsampling one particular pixel to improve computational efficiency. Has some experiments been performed to evaluate the performance of this approach ? How accurate is this approach ?\\n\\nPlease note that this approach (section 3.2.2) is only related to pooling (section 3.2.1) in that both approaches are particular cases of projection (section 3.2). 
Subsampling the center pixel is instead more similar to vectorization (section 3.1) in terms of both performance and compute, and is compared to other methods in Figure 1 (blue curve).\\n\\n------------------------------------------------------------------------------------\\n>>> - Discussion on the positive semi-definiteness of the recursive GP-CNN kernel\\n\\nThank you for the suggestion, we will include explicit derivation of this property in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - More explanations on why the best SGD-trained CNN gives a better performance than GP-CNN, especially with pooling. Does the Monte-Carlo approximation of GP-CNN kernel computation could impact this performance? I suppose hyper-parameters of the GP-CNN kernel are not learnt from the data, could this result in a lower accuracy ?\\n\\nPlease see our comment above - we did not evaluate the CNN-GP with pooling on the whole CIFAR10 dataset, since it was prohibitively expensive. The explanation for the difference in performance between the best CNN and best CNN-GP (both without pooling) is given in sections 5.1 and 5.3. Whenever we evaluate a CNN-GP with pooling, it is stated explicitly.\\n\\n------------------------------------------------------------------------------------\\n>>> - Discussion on learning the hyper-parameters of the GP-CNN kernel and its impact on the performance of the model. \\n\\nThank you for the suggestion. This work was primarily focused on comparing CNNs and their respective CNN-GPs, hence we only considered CNN-GP parameters that follow directly from the respective CNN architecture, and are learned only via a grid search (non-linearity, depth, weight and bias variance, pooling). It would indeed be very interesting in future work to do gradient descent of the GP/NN likelihood w.r.t. weight and bias variance, as well as parameterizing the nonlinearity in a differentiable way, and comparing these models.\\n\\nRelatedly, we would like to also draw your attention to the discussion in Appendix A.2, where we link the hyperparameters of the CNN-GP kernel to previous work in deep information propagation.\"}",
"{\"title\": \"2/2 AnonReviewer4 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> - Demonstrate through some sample figures that GP-CNN with pooling achieves invariance while GP-CNN with out pooling fail to capture it.\\n\\nThank you for the suggestion, we are working on and are planning to include covariance visualizations on toy data in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - Is the best result on CIFAR-10 achieved using the proposed method ? See Deep convolutional Gaussian processes by Kenneth Blomqvist, Samuel Kaski, Markus Heinonen\\n\\nWe cite them in related work and point out that their model (as are other deep GPs) is not a GP but a more complex and expressive probabilistic model. To the best of our knowledge, our result is SOTA on CIFAR10 for GPs without trainable kernels.\\n\\n------------------------------------------------------------------------------------\\n>>> - Include the results with CNN-GP both with pooling and without pooling in Table 1 and Table 2. \\n\\nAs mentioned above, we do not have these results since running a CNN-GP with pooling is prohibitively expensive on such large datasets, especially for a large-scale grid search as was done for Table 2 (see A.7.5)\\n\\n------------------------------------------------------------------------------------\\n>>> - Provide the results of best SGD trained CNN against CNN-GP, both with pooling, as in Figure 3.c. Is the same trend observed in this case also ?\\n\\nPlease see our comments above - we believe evaluating CNN-GP with pooling on complete datasets for such a large grid to lie beyond the scope of this work.\\n\\n------------------------------------------------------------------------------------\\n>>> - Experimental comparison and results on other Image datasets, specifically MNIST. Does the same observations hold on MNIST too ? \\n\\nWe have only run our large-scale grid searches on CIFAR10 since this is the dataset that benefits from the convolutional architecture the most (among considered) and allows to confidently distinguish the performance of different models (see e.g. Table 1). We expect the general trends to generalize to other image datasets.\\n\\n------------------------------------------------------------------------------------\\n>>> [...] ( Axis labels are missing for some figures in Figure 3, \\n\\nSince all plots share common axes and ranges, we only displayed the title of the x-axis (\\u201c#Channels\\u201d) at the bottom once, and the title of the y-axis (\\u201cValidation accuracy\\u201d) in the center to avoid clutter. We can fix it in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> and provide legends wherever possible). \\n\\nPlease note that it is not practical to have a complete legend in Figures 3.a and 3.c due to each point representing one of many different hyper-paramater settings (if you refer to other Figures, please let us know which ones).\\n\\n------------------------------------------------------------------------------------\\n>>> The is also an ambiguity in what CNN-GP refers to, with pooling to without pooling.\\n- The term CNN-GP is overloaded in many places in the experimental section. I guess in Table 1, its CNN-GP without pooling, while in Table 2, its CNN-GP with pooling. 
Kindly make the distinction clear in the nomenclature itself, by calling one of them by a different name. Its also not clear when they mention SGD trained CNN, if it is with pooling or without pooling.\\n\\nThroughout the work, pooling is only used if explicitly mentioned (e.g. \\u201cCNN-GP w/ pooling\\u201d, \\u201cCNN w/ pooling\\u201d etc). Otherwise CNN(-GP) is without pooling. We will make it more explicit in the next revision.\\n\\n------------------------------------------------------------------------------------ \\n>>> - What is the difference between the top and bottom pair of figures in Figure 3 (b). Why is the GP performance different in top and bottom cases.?\\n\\nAs per text labels to the left of the table, top are LCNs (locally-connected networks, CNNs without weight sharing), while bottom are regular CNNs. If pooling is present, LCNs and CNNs result in different respective GPs, hence different performance (see discussion in section 5.1). We will make the text labels more noticeable in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - What does 10, 100, 1000 correspond to in Figure 3 ? Please explain it in caption.\\n\\nThe numbers are depth, as indicated by the label in top-left, and the caption mentions it as well. We will make it more explicit in the next revision.\"}",
"{\"title\": \"REVIEW OF DEEP BAYESIAN CONVOLUTIONAL NETWORKS WITH MANY CHANNELS ARE GAUSSIAN PROCESSES\", \"review\": [\"The paper establishes a connection between infinite channel Bayesian convolutional neural network and Gaussian processes. The authors prove that taking the number of channels in a Bayesian CNN to infinite leads to a GP with a specific Kernel (GP-CNN) and provide a Monte Carlo approach to evaluate the kernels when it is intractable. They show that without pooling the kernel fails to maintain the equivariance property that is achievable with a CNN without pooling. GP-CNN with pooling maintains the invariance property. They make extensive experimental comparison with CNN, demonstrating that as the number of channels become large, CNN achieve performance close to a GP-CNN. A discussion on reasons for best CNN to give a performance better than GP-CNN (especially with pooling), and a experimental comparison with finite width Bayesian CNN would have made the paper more concrete. The paper has both strong theoretical and experimental contribution, and is also very relevant to the ICLR conference.\", \"Quality\", \"The paper provides a theoretical connection between Bayesian CNN with infinite wide channels and Gaussian processes with a recursive kernel (GP-CNN). The derivations and arguments seem correct. The experiments are conducted comparing the performance of SGD trained CNN with GP-CNN, and other models on mainly on CIFAR-10 data set.\", \"However, some discussion and clarity on the following points will be useful to improve the paper.\", \"(Page 5) on convergence of K^l : From Equations (3) and (4), it can be seen that K^l converges to C(K^{l-1}), with C(K^{l-1}) defined slightly different from the paper, in that the expectation over z is taken w.r.t z~ N(0;A(K)) instead of z ~ N(0; K). Is this equivalent to the expressions (7) and (8) described in the paper for a non-linear function \\\\phi ?\", \"Experimental comparison with Bayesian CNN, demonstrating the effect of increasing the number of channels.\", \"(Page 7) GP-CNN with pooling : Paper proposes subsampling one particular pixel to improve computational efficiency. Has some experiments been performed to evaluate the performance of this approach ? How accurate is this approach ?\", \"Discussion on the positive semi-definiteness of the recursive GP-CNN kernel\", \"More explanations on why the best SGD-trained CNN gives a better performance than GP-CNN, especially with pooling. Does the Monte-Carlo approximation of GP-CNN kernel computation could impact this performance? I suppose hyper-parameters of the GP-CNN kernel are not learnt from the data, could this result in a lower accuracy ?\", \"Discussion on learning the hyper-parameters of the GP-CNN kernel and its impact on the performance of the model.\", \"Demonstrate through some sample figures that GP-CNN with pooling achieves invariance while GP-CNN with out pooling fail to capture it.\", \"Is the best result on CIFAR-10 achieved using the proposed method ? See Deep convolutional Gaussian processes by Kenneth Blomqvist, Samuel Kaski, Markus Heinonen\", \"Include the results with CNN-GP both with pooling and without pooling in Table 1 and Table 2.\", \"Provide the results of best SGD trained CNN against CNN-GP, both with pooling, as in Figure 3.c. Is the same trend observed in this case also ?\", \"Experimental comparison and results on other Image datasets, specifically MNIST. 
Does the same observations hold on MNIST too ?\", \"Clarity\", \"The paper is relatively well written and clearly provides main ideas leading to the results. However, notations could have made more succinct, and figures could have been more legible( Axis labels are missing for some figures in Figure 3, and provide legends wherever possible). The is also an ambiguity in what CNN-GP refers to, with pooling to without pooling.\", \"The term CNN-GP is overloaded in many places in the experimental section. I guess in Table 1, its CNN-GP without pooling, while in Table 2, its CNN-GP with pooling. Kindly make the distinction clear in the nomenclature itself, by calling one of them by a different name. Its also not clear when they mention SGD trained CNN, if it is with pooling or without pooling.\", \"What is the difference between the top and bottom pair of figures in Figure 3 (b). Why is the GP performance different in top and bottom cases.?\", \"What does 10, 100, 1000 correspond to in Figure 3 ? Please explain it in caption.\", \"Originality\", \"Previous works of Lee and G. Matthews (2018) had shown the equivalence between Deep Neural Networks and GPs. This paper has extended it to deep convolutional neural network setting, but is interesting in its own way. The have come up with an equivalent kernel corresponding to infinite wide Bayesian convolution neural network and provided a monte-carlo approach to compute it. Along with the theoretical contribution, they have also provided extensive experimental comparison.\", \"Significance\", \"The paper has made significant contributions connecting the Bayesian convolutional neural networks with Gaussian processes, in deriving the equivalent kernel for GPs, and in demonstrating the performance of the proposed approach on Image datasets\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"AnonReviewer2 message\", \"comment\": \"Dear Reviewer,\\n\\nThank you again for your very thorough and insightful review. We believe we have effectively implemented most of your very helpful suggestions, by expanding the discussion in the text and improving the clarity of figures and exposition. Both reviewers 1 and 3 have raised their scores on the strength of our rebuttal and paper improvements. We are wondering if you also feel that we have significantly improved our paper, and if so whether you would be willing to increase your score as a result. \\n\\nThank you for your consideration!\"}",
"{\"title\": \"AnonReviewer1 message\", \"comment\": \"Dear Reviewer,\\n\\nThank you again for your thoughtful reading and review. We believe you lowered your score due to the justified technical concerns raised by Reviewer 3. However, we have now updated the paper to address those specific issues. Reviewer 3 is satisfied with our response, and has raised their score by 4 points. Would you now be willing to restore your original score, since we have addressed all the open technical concerns, and both other reviewers are now voting for acceptance?\\n\\nThank you very much for your consideration!\"}",
"{\"title\": \"AnonReviewer3 update reply\", \"comment\": \"Thank you for promptly reviewing our revision!\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.20, A.5.1) To ensure the random variables are well-defined, please state explicitly which sigma algebra is F (I am assuming the product Borel sigma-algebra + the relevant definitions of the random variables). This is important for the reader to understand what convergence in distribution on this particular space does and does not imply. \\n\\nWill do.\\n\\n------------------------------------------------------------------------------------\\n>>> Some readers might also appreciate if you used the mentioned \\\"infinite width, finite fan-out, networks\\\" (Matthews et al.) construction (or similar) which would ensure that the collection of random variables {z_i^l}_{i \\\\in N*} is well-defined for any network width and l, which currently does not seem to be the case according to Eqs. (28-29). If the full countably infinite vectors of random variables are not defined for all networks in the sequence, it is not possible to prove their convergence in distribution to the relevant GPs.\\n\\nThank you, we agree that currently the construction process in A.5.3 is not explicit enough to define the countably-infinite collection {z_i^{l, \\\\infty}}_{i \\\\in N*} (as you point out below in more detail), and we will make it so in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.21, A.5.3) Thank you for clarifying the definition of elements of the sequential limit. If possible, I would further recommend first fixing the probability space and then defining the random variables (the argument just before Theorem A.2 seems somewhat circular as R.V.s should first be defined on some space, and not put on a probability space post-hoc; perhaps some product space with the product sigma-algebra would work here?!). \\n\\nThank you for the suggestion. One can define {z_i^{l, \\\\infty}}_{i \\\\in N*} in the place-holder (A.5.1.iii) before defining the neural networks. This avoids reconstructing the probability space / apparent circularity. We will make sure to be more explicit about it in the next revision.\\n\\n------------------------------------------------------------------------------------\\n>>> Furthermore, if I understand correctly, there are now L sequences of neural networks (one sequence for networks with 0, ..., L-1 \\\"infinite layers\\\"), rather than a single sequence, and the \\\"infinite layers\\\" are squashed into a single \\\"infinite layer\\\" which is represented by z_i^\\\\infty? In other words, all the infinite layers are replaced by iid samples from a particular GP and only the finite layers have the standard neural network structure? If I am mistaken (or not), perhaps a further explanatory footnote would help the reader.\\n\\nYou are correct, and we will elaborate on this more in the next revision. This is an inconvenience of the sequential limit approach, since the outputs of any hidden layers only converge in distribution and not necessarily almost surely (point-wisely). Thus we have to re-define/construct them. We believe this inconvenience to be present in all prior / concurrent work using the sequential limit. 
It might be possible to circumvent this issue with the help of Skorokhod\\u2019s Representation Theorem.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.21, A.5.3 & p.23, A.5.4) Thank you for improving the discussion of joint convergence. Please clarify that proving convergence for any finite m is sufficient for proving convergence in distribution of the countably infinite vector {z_i}_{i \\\\in N*} for the **product Borel sigma-algebra** (e.g. using an argument like the one on p.19 of Billingsley (1999)).\\n\\nWill do, thank you for pointing this out. \\n\\n------------------------------------------------------------------------------------\\n>>> - (p.21) \\\"Uniformly square-integrable\\\": to me, this phrase suggests that the collection of squares of the functions has to be uniformly integrable but the definition in Eq. (27) only states one of the conditions in definition of uniform integrability. Please clarify that \\\"uniform square-integrability\\\" here is not related to the standard notion of \\\"uniform integrability\\\" in the literature.\\n\\nThanks. Will do.\"}",
"{\"title\": \"1/8 AnonReviewer3 Reply and Summary\", \"comment\": \"Thank you for your _extremely_ detailed and insightful review. Your suggestions have allowed us to significantly improve on the quality of our submission and we are very grateful for your hard work. Please find below a summary of our changes, as well as responses to your specific comments.\\n\\n------------------------------------------------------------------------------------\\n****Summary****\\nWe believe the simultaneous limit proof in section A.4.3 (now A.5.4) to be largely correct, however, as you rightly pointed out, it was lacking in terms of explicit treatment of various aspects and suffered from typos / notational inconsistencies. We believe the current revision addresses all the relevant issues.\\nWe have made sure to have a more explicit, consistent, and rigorous notation throughout the paper. We especially encourage you to review the new section 2.1. \\u201cShapes and indexing\\u201d where we describe our notation in detail.\\nIn response to your valid concerns we have omitted section A.4.2 and have rewritten section 2.2 in the main text to reference results from A.4.3 (now A.5.4). Section A.4.1 (now A.5.3) was revamped to rigorously define a sequential limit NN-GP and show that it results in the same covariance as the simultaneous limit (A.4.3, now A.5.4).\\n\\n------------------------------------------------------------------------------------\\n>>> These experiments and investigations are however based on a theoretical foundation which suffers from several issues. The main problems are an incorrect proof of convergence of the joint distribution of filters, and an improper use of convergence in probability in cases where random variables do not share a common underlying probability space. Unfortunately, either of these by itself invalidates the main theoretical claims which is why I am recommending rejection of the paper.\\n\\nWe now formally define an underlying probability space (see A.5.1). Note that random variables {K^l} have constant dimensionality (|X|d x |X|d, see Equation 4) that does not change with widths. Same convention was implied in the previous revision, however we acknowledge that the notation was not explicit enough and may have been a source of confusion, especially in conjunction with the derivations in A.4.2. Further, we derive the joint convergence (wherever applicable) which can be obtained by coupling the convergence of the covariance in probability to deterministic quantities and an argument using characteristic function. Please see Theorems A.2 and A.5.\"}",
"{\"title\": \"2/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>>However, I believe that the argument in (A.4.3) can potentially be rectified, and, as I detail below, is of greater interest to the community relative to the ones in (A.4.1) and (A.4.2). If this is accomplished and the proofs in (A.4.1) and (A.4.2) are either also fixed or left out (A.4.3 is sufficient to justify the claims in the main body), I am willing to significantly improve my rating of this paper and potentially recommend acceptance. For this reason, a \\\"detailed comments\\\" section is appended at the end of the standard review where the technical issues are described in much greater detail.\\n\\nThank you, we believe to have addressed all of your concerns in the new revision (section A.5). We have left the section A.4.2 out as you advised.\\n\\n------------------------------------------------------------------------------------\\n>>> ****General comments****\\n**Bayesian vs. infinite neural networks**\\n[...] Others may of course disagree and find \\\"sequential\\\" limits more interesting, but if the authors wish to keep the description of (A.4.2) in the main paper (Sections 2.2.1-2.2.3), it would be highly beneficial if readers were given the opportunity to understand the differences between the two types of limits so that they can form their own judgement. The authors should then also make clearer that the approach described in Sections 2.2.1-2.2.3 cannot be used to obtain the final result, Eq. (10). I would rather recommend reworking Sections 2.2.1-2.2.3 based on the \\\"simultaneous\\\" limit argument in (A.4.3) which unlike the current one can justify the result in Eq. (10) stated at the end.\\n\\nThank you, we have both revamped the presentation in section 2.2, and added a discussion about different limit approaches in section A.5.\\n\\n------------------------------------------------------------------------------------\\n>>> **Other comments**\\n- (p.2, top) You say your results are \\\"strengthening and extending the result of Matthews et al. (2018)\\\" which is somewhat confusing. Matthews et al. prove a result for FCNs whereas this paper focuses on CNNs. Extension of (A.4.3) to FCNs may well be possible but is not included in this paper. \\n\\nWe added a clarification on how to apply our results to LCNs and FCNs, see section A.5.\\n\\n------------------------------------------------------------------------------------\\n>>> Results in (A.4.1) and (A.4.2) are for the \\\"sequential\\\" whereas Matthews et al. study the \\\"simultaneous\\\" limit. \\n\\nThe emphasis in our work (and in section 2.2 in particular) is now on the simultaneous limit as well.\\n\\n------------------------------------------------------------------------------------\\n>>> Further differences:\\n\\t- Matthews et al. prove convergence for any countable rather than only finite input sets.\\n\\nThank you for pointing this out, our proof indeed generalizes to the same setting as we now mention in section A.5.1.\\n\\n------------------------------------------------------------------------------------\\n>>>\\t- In Matthews et al.'s work, Gaussianity is obtained through use of a particular version of CLT, whereas this work exploits Gaussianity of the prior over weights and biases. 
Going forward, an extension to more general priors/initialisations (like uniform or any sub-Gaussian) is likely to be easier using the CLT approach.\\n\\nWe have partially relaxed our assumptions on the priors, see section A.5.1 in the new revision. However, please also note that Matthews et al explicitly assume Gaussian priors in their work to the best of our knowledge.\\n\\n------------------------------------------------------------------------------------\\n>>> - Matthews et al.'s assumption on the activation functions is independent of the input set (p.7, Definition 1), whereas this work uses an assumption that is explicitly dependent on input (Eq. (37)) which might be potentially difficult to check.\\n\\nWe no longer have this dependency in the new revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.15, A.2 end) Should also mention Titsias (2009), \\\"Variational Learning of Inducing Variables in Sparse Gaussian Processes\\\", as a classical reference for approximate GP inference.\\n\\nThank you, done.\\n\\n------------------------------------------------------------------------------------\\n>>> ****Questions****\\n- (Section 4) Can you please provide more details on the MC approximation? Specifically, is only the last kernel approximated, or rather all of them, sequentially resampling from the Gaussian with empirical covariance in each layer? In case you tried, is there any qualitative or quantitative difference between the two approaches?\\n\\nWe have only tried to approximate the last kernel, i.e. sampling random networks and averaging their top-level activations.\"}",
"{\"title\": \"3/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> - (Section 4 and Appendix A) Daniely et al. (2016) assume that the inputs to the neural network are l^2 normalised. You mention that the inputs have been normalised in the experiments (A.6). Is this assumption used in any of your proofs? Have you observed that l^2 normalisation improves empirical performance?\\n\\nThe assumption is not used in the new revision. We did not try other (or no) normalization approaches, and normalized inputs mainly as a common preprocessing practice in machine learning.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.8, Figure 6) How was \\\"the best CNN with the same parameters\\\" selected? If training error is zero for all, was it selected by validation accuracy?\\n\\nYes, we state this in experimental details (A.7.5), and now also in that caption (now Figure 3, c).\\n\\n------------------------------------------------------------------------------------\\n>>> I was assuming that what is plotted is an estimate of the **expected** generalisation error, whereas the above selection procedure would be estimating supremum of the support of the generalisation error estimator which does not seem like a fair comparison. Can you please clarify?\\n\\nIf we understand you correctly (please let us know if not), your concern is with us reporting validation and not test accuracy. This is indeed not a fair comparison, and is slightly biased in favor of NNs over GPs. We have replace it with test accuracy (now Figure 3, c), which is extremely similar.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.8 and A.6) Why only neural networks with zero training loss were allowed as benchmarks? \\n\\nPlease note that for practical benchmarking purposes we have presented Table 2 (former 1), where non-zero accuracy (not loss - exactly zero loss was not achieved by our trained NNs) results are presented in parentheses and were emphasized in the caption. Otherwise, we wanted to put the two classes of models in as similar conditions as was practically possible; since the GP without regularization perfectly fits the training set, we filtered for this condition in the networks with SGD training. \\n\\nRelatedly, note that NN-GP correspondence could be obtained by Sample-then-optimize procedure of [1], where one train only the read-out weights to convergence (infinite steps) using gradient descent training. For realizable problems (over-parameterized) the trained networks will obtain zero loss. Therefore, trained networks that would correspond to NN-GP necessarily should have zero loss (or close to zero loss if only finite training steps were taken). \\n\\n In our NN experiments with SGD, we relaxed this requirement but still required models to produce 100% accurate train set predictions, and believe that controlling for perfect accuracy allowed us to make arguably more interesting conclusions. E.g. one of the results of this paper is an observation that SGD-trained CNNs can significantly outperform equivalent CNN-GPs. Without controlling for train accuracy the difference may come from CNNs benefitting from underfitting. 
However the fact that SGD-trained CNNs significantly outperform CNN-GPs even with conditioning for zero error indicates an interesting and more specific mechanism of breakdown of NN-GP correspondence in SGD training.\\n\\n[1] Alexander G. de. G Matthews, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Sample-then-optimize posterior sampling for bayesian linear models. In NIPS Workshop on Advances in Approximate Bayesian Inference, 2017\\n\\n------------------------------------------------------------------------------------\\n>>> How did the ones with non-zero training error fared in comparison?\\n\\nAs can be seen in Table 2 (former 1) and noted in the caption, underfitting tends to improve generalization for CNNs. Further, we have produced the analogous plots without the 100% accuracy requirement (NNs can underfit):\", \"https\": \"//www.dropbox.com/s/vxuhzyfj9we9pj2/underfit.pdf?dl=0\\nAs we can see, on full CIFAR10 (top) now the majority of models perform better in the NN case, suggesting that properly tuned underfitting can be a contributing factor of good generalization. However, on the smaller task (bottom), while the trend is altered, the plots are qualitatively similar, potentially due to underfitting on a small dataset being unlikely and hence not playing a significant role.\\n\\n------------------------------------------------------------------------------------\\n>>> Can you please expand on footnote 3?\\nPlease see our comments above + we have added a sentence emphasizing that underfitting can lead to better generalization in the footnote.\"}",
"{\"title\": \"4/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> - (p.8, last sentence) \\\"an observation specific to CNNs and FCNs or LCNs\\\": Matthews et al. (2018, extended version) observed in Section 5.2 that BNNs and their corresponding GP limits do not always perform the same even in the FCN case (cf. their Figure 8). Their paper unfortunately does not compare to equivalent FCNs trained by SGD. Have you experimented with or have an intuition for whether the cases where SGD trained models prevail coincide with the cases where BNNs+MCMC posterior inference outperform their GP limit?\\n\\nWe have not explored BNNs+MCMC experiments in this work. As mentioned in the Discussion (section 5.3), we attribute the observation (SGD-trained finite CNNs outperforming their GPs) to the loss of pixel-pixel covariances. This happens in infinite Bayesian (contrary to finite SGD-trained) models, and we do not have strong intuitions at the moment on whether to attribute this to Bayesian treatment or infinite width (or both). However, as we have mentioned in the conclusion, we enthusiastically agree that this is a very interesting question to answer in future work!\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.15, Table 3) The description says you were using erf activation (instead of the more standard ReLU): why? Have you observed any significant differences? \\n\\nWe did not have a particular reason, and have produced some preliminary results for ReLU below:\", \"https\": \"//www.dropbox.com/s/d3lmb84o9b06syt/infoprop_relu.pdf?dl=0,\\nwhere we see a qualitatively similar trend that is in agreement with Lee et al. 2018 (Figure 4.b, Figure 9, bottom row; rightmost phase diagram is borrowed from their paper as well in our plot).\\n\\n------------------------------------------------------------------------------------\\n>>>Further, how big a proportion of the values in the image is black due to the numerical issues mentioned in A.6.4?\\n\\nTotal of 13%, 2792 out of total 20000 trials (2500 per plot in the table) failed.\\n% of failures per each plot:\\n\\n----------------------------------------\", \"depth\": \"| 1 | 10 | 100 | 1000 |\\n----------------------------------------\\nCNN-GP | 0 | 0 | 9 | 44 |\\n----------------------------------------\\nFCN-GP | 0 | 0 | 13 | 45 |\\n----------------------------------------\\n\\nPlease note that the line between a numerical failure and poor performance is blurry and depends on the specific experimental setup (see A.7.4). Indeed, not all numerical issues result in failures and sometimes will simply produce poor / random results.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.18, just after Eq. 39) Use of PSD_{|X|d} in (A.4.3) suggests this proof assumes \\\"same\\\" padding is used?! Does the proof generalise to any padding/changing dimensions of filters inside the network?\\n\\nWe now state that we use circular padding and the spatial shape indeed is considered to remain fixed for simplicity (see section 2.1). While we do not consider changing padding / dimensions inside the network, we believe the proof to generalize to such cases easily (by introducing a different A^l operator for each layer, which will still be affine and Lipschitz-continuous). 
\\n\\n------------------------------------------------------------------------------------\\n>>> - (A.6) Can you comment on the pros & cons of \\\"label regression\\\" for classification and how does it compare with approximate inference when softmax is put on top of a GP (perhaps illustrating by a simple experiment on a toy dataset)?\\n\\nIn order to establish and understand correspondence to the GPs we focused on cases where exact inference on GP side was possible (a benefit of label regression) while working on realistic well known dataset for CNNs.\", \"apparent_downsides_of_label_regression_are\": \"(a) the independent prior on different output classes, which discards our prior knowledge about them being mutually-exclusive and (b) complications in interpreting GP predictions and their uncertainty on categorical outputs. However, the practical impact of softmax on best achieved accuracy in classification tasks is, to the best of our knowledge, not clear due to how well our MSE-trained NNs perform in this work (Table 2 (former 1); we believe FCN results to be close to SOTA using cross-entropy loss, and CNN results to be decent yet unfortunately hard to compare to SOTA due to architecture limitations), and due to FCN- and CNN-GPs performing similarly to the best considered FCNs and LCN. Therefore, while we certainly believe there to be a difference between label regression and proper classification, we do not think a simple toy task can fully illustrate it.\\n\\nWe still think it is interesting future work to implement and investigate the effects of softmax output using cross entropy loss.\"}",
"{\"title\": \"5/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>>[detailed comments]\\n****Technical concerns****\\nNotation-wise, I would strongly encourage incorporating the dependence on network width into your notation, at the very least throughout the appendix. It would greatly reduce the amount of mental book-keeping the reader currently has to do, and significantly increase clarity at several places.\\n\\nDone, we now use \\u201c_t\\u201d subscript to show dependence on n^1(t), ..., n^L(t) in the appendix.\\n\\n------------------------------------------------------------------------------------\\n>>> One of my main concerns is that the random variables and their underlying probability space are never formally set-up. This is problematic because convergence in probability is only defined for random variables sharing the same underlying space. At the moment, networks with different widths are not set-up to share a probability space. The practical implication for the approaches relying on convergence in probability of the empirical covariance matrices K is that the convergence in probability is not well-defined exactly because the empirical covariance matrices are not set-up on the same underlying probability space. A possible way to address this issue is to use an approach akin to what Matthews et al. (2018, extended version) call \\\"infinite width, finite fan-out, networks\\\" on page 20. This puts the networks on the same underlying space and because the empirical covariance matrices are measurable functions of thus defined random variables, they will also share the same underlying probability space.\\n\\nDone, we now define the probability space in section A.5.1. Networks of different widths now do share the underlying probability space, and hence {K^l} covariances as well.\\n\\n------------------------------------------------------------------------------------\\n>>> Also regarding convergence in probability, please state explicitly with respect to which metric is the convergence considered when first mentioned (A.4.3 is explicitly using l^\\\\infty; A.4.2 perhaps l^2 or l^\\\\infty?), and make any necessary changes (e.g. show continuity of the mapping C in A.4.2).\\n\\nConvergence is w.r.t. l^\\\\infty and we now state it explicitly in section A.5.6. However, note that due to finite dimensionality all norms are equivalent. While we no longer have section A.4.2, continuity of map C follows from Lemma A.6.2 in the new revision.\\n\\n------------------------------------------------------------------------------------\\n>>> At several places within the paper, you state that the law of large numbers (LLN) or the central limit theorem (CLT) can be applied. Apart from other concerns detailed later, these come with conditions on finiteness of certain expectations (usually the first one or two moments of the relevant random variables). Please provide proofs that these expectations are indeed finite and make any assumptions that you need explicit in the main text.\\n\\nWe now prove finiteness of the necessary moments (see Theorem A.2).\\n------------------------------------------------------------------------------------\\n>>> Another major concern is that none of (A.4.1), (A.4.2) and (A.4.3) successfully proves joint convergence of the filters at the top layer as claimed in the main text (e.g. Eq. (10)), and instead only focuses on marginal convergence of each filter which is not sufficient (cf. 
the comment on joint vs. pairwise Gaussianity below). This is perhaps sufficient if a single filter is the output of the network, but insufficient otherwise, especially when proving convergence with additional layers added on top of the last convolutional layer (as in Section 3) whenever the number filters is taken to infinity.\\n\\nDone, we now explicitly prove joint convergence wherever applicable.\"}",
"{\"title\": \"6/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> It would be nice, but not necessary for acceptance of the paper, to extend the proofs to uncountable index sets. I think you could use the same argument as described towards the end of Section 2.2 in (Matthews et al., 2018, extended version) and references therein.\\n\\nThank you, indeed our proof extends to the case of countably many inputs with the metric referenced in Matthews et al. 2018, and we now mention it in section A.5.1.\\n\\n------------------------------------------------------------------------------------\\n>>> **Other comments**\\n- I would strongly encourage distinguishing more clearly between probability distributions and density functions. For example, I would infer that lower case p refers to the probability distribution from Eq. (6); however, in Eqs. (8) and (9) the same notation is used for density functions (whilst integrating against the Lebesgue measure). This is quite confusing in this context as the two objects are not the same (see next two comments). I would suggest using capital P when referring to distribution, and lower case p when referring to its density.\\n\\nDone, we believe the new revision should not have any confusing notation.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.4, Eq. 6) If p is a density, it cannot be equal to a delta distribution. If it is a probability distribution then I am similarly confused - convergence in probability is a statement about behaviour of random variables, not probability distributions; in that case possibly Eq. (6) is trying to say that the empirical distribution of K^l (which is a random variable) conditioned on K^{l-1} converges weakly to the delta distribution on the RHS in probability? Please clarify.\\n\\nThank you, we no longer use delta-function notation in the main text and are clear about modes of convergence.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.5, Eq. 10) I would recommend stating explicitly the mode of convergence. If p is the density then even assuming A.4.3 can be fixed to prove weak convergence of the **joint** distribution of filters is not enough not justify Eq. (10) - convergence in distribution does not imply pointwise convergence of the density function. If p is the distribution, then I would possibly use the more standard notation '\\\\otimes' instead of '\\\\prod'.\\n\\nThank you for pointing this out, we now always state modes explicitly and do not imply convergence of probability densities.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.17, end of A.4.2) You say \\\"Note that addition of various layers on top (as discussed in Section 3) does not change the proof in a qualitative way\\\". Can you please provide the formal details? At the very least, joint convergence of filters will have to be established if fully connected layers are added on top. 
This is the main reason why joint convergence of filters in the top layer is important.\\n\\nDone, see Theorem A.6.\\n\\n------------------------------------------------------------------------------------\\n>>> ****Specific comments & issues for individual proofs****\\n**Approaches suited infinite networks (\\\"sequential limit\\\")**\\nAs mentioned in the beginning, it is not entirely clear how to formalise infinite networks in a way analogous to Eqs. (1) and (2) in your paper. This is important because you are ultimately proving statements about random variables, like convergence in probability, and this is not possible if those random variables are not formally defined. This section only comments on technical issues with the approaches described in (A.4.1) and (A.4.2). From now on, I assume that the authors' were able to formally define all the mentioned random variables in a way that fits with (A.4.1) and (A.4.2).\\n\\nDone. Specifically, we provide a definition in A.5.3 (former A.4.1, \\u201cSequential limit\\u201d; note that, we don\\u2019t make any convergence in probability statements here, only in distribution). A.4.2 is left out.\\n\\n------------------------------------------------------------------------------------\\n>>> (i) Hazan and Jaakola type approach (A.4.1)\\n[...]\\n- (p.16, A.4.1) The application of the multivariate CLT is slightly more complicated than the text suggests. Except for the necessity of proving finiteness of the relevant moments, multivariate CLT does not out-of-the-box apply to infinite dimensional random variables like {z_j^{l+1}}_{1 \\\\leq j \\\\leq \\\\infty} as claimed. Hence joint convergence is not proved which will be problematic for the reasons explained earlier.\\n\\nWe have significantly revamped this section (now A.5.3, \\u201cSequential limit\\u201d), including proving joint convergence and finiteness of the moments.\"}",
"{\"title\": \"7/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> (ii) Lee et al. type approach (A.4.2)\\n[...]\\n\\nPer your suggestion we have removed the section A.4.2. in this revision.\\n\\n------------------------------------------------------------------------------------\\n>>> A note on convergence in probability: In Eq. (3), the focus is on convergence in probability of individual entries of the K matrices. This in general does not imply convergence of all entries jointly. However, the type of convergence studied here is convergence to a constant random variable which is fortunate because simultaneous convergence of all entries in probability can be obtained for free in this case (thanks to having a **finite** number of entries of K). I think it might be potentially beneficial for the reader if this was explicitly stated as a footnote with an appropriate reference included.\\n\\nWe have added a footnote 4 clarifying this step (we believe the limit being a constant not necessary though as long as their number is finite).\\n\\n------------------------------------------------------------------------------------\\n>> A note on marginal vs joint probability: As you say above Eq. (23), you are only proving convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\\\leq j \\\\leq \\\\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\\n\\nWe now prove joint convergence in the new revision.\\n\\n------------------------------------------------------------------------------------\\n>>> **Approaches for BNNs (\\\"simultaneous limit\\\")**\\n(iii) The proof in (A.4.3)\\nMy biggest concern about this approach is that it only establishes convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\\\leq j \\\\leq \\\\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\\n\\nDone, see Theorem A.6.\\n\\n------------------------------------------------------------------------------------\\n>>> Other comments:\\n- (p.17) You say \\\"Using Theorem A.1 and the arguments in the above section, it is not difficult to see that a sufficient condition is that the empirical covariance converges in probability to the analytic covariance\\\".\\n\\t- Can you please provide more detail as it is unclear what exactly do you have in mind?\\n\\nDone, see Theorem A.6.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.18) Condition on activation function: The class \\\\Omega(R) is dependent on the considered input set X through the constant R. This seems slightly cumbersome as it would be desirable to know whether a particular activation function can be used without any reference to the data. It would be nice (but not necessary) if you can derive a condition on \\\\phi which would not rely on the constant R but allows ReLU.\\n\\nDone, there\\u2019s no more dataset dependency.\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.19, Eq. 48) I see where Eq. (48) is coming from, i.e. from Eq. 
(44) and the assumption of \\bar{\\varepsilon} ball around A(K_\\infty^l) being in PSD(R), but it would be nicer if you could be a bit more verbose here and also write out the bound explicitly (caveat: I did not check if the definition of \\bar{\\varepsilon} matches up but assume a potential modification would not affect the proof in a significant way).\\n\\nDone (now Equations 70-72).\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.19) The second part of the proof is a little confusing, especially after Eq. (49) - please be more verbose here. For example, just after Eq. (49), it is said that because the two random variables have the same distribution, property (3) of \\\\Omega(R)'s definition can be applied. However the two random variables are not identical and importantly are not constructed on the same underlying probability space. Property (3) is a statement about the set of random variables {T_n (Sigma)}_{Sigma \\\\in PSD_2(R)} and not about the different 2x2 submatrices of K^{l+1}, but it needs to be applied to the latter. \\n\\nDone (Equations 76-77; see the new, modified version of property 3 (now Equation 48)).\\n\\n------------------------------------------------------------------------------------\\n>>> When this is clarified, the next point that could be made clearer is in the following sentence where changing t will affect the 2x2 submatrices of K^{l+1,t} as well as the bound through U(t) and V(t); it is not immediately obvious that the proof goes through as claimed so please be a bit more verbose.\\n\\nDone, we have substantially expanded that part of the proof (starting from Equation 75).\"}",
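For readers following the footnote-4 exchange in the 7/8 reply above, a minimal sketch of the finite-entries argument, in illustrative notation ($K^{(t)}$ for the finite-width covariance entries and $K^{\infty}$ for their limits; these symbols are assumptions of this sketch, not the paper's):

```latex
% With finitely many entries (i,j), a union bound upgrades entrywise
% convergence in probability to joint convergence, whether or not the
% limits K^{\infty}_{ij} are constants (provided all variables share a
% common probability space, as discussed in the thread):
\Pr\Big(\max_{i,j}\big|K^{(t)}_{ij}-K^{\infty}_{ij}\big|>\varepsilon\Big)
\;\le\;\sum_{i,j}\Pr\Big(\big|K^{(t)}_{ij}-K^{\infty}_{ij}\big|>\varepsilon\Big)
\;\longrightarrow\;0\quad(t\to\infty).
```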
"{\"title\": \"8/8 AnonReviewer3 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> ****Typos and other minor remarks****\\n- (p.2, top) \\\"hidden layers go to infinity uniformly\\\": The use of word uniformly is non-standard in this context. Please clarify.\\n\\nDone. The \\u201cuniform\\u201d qualifier was used by analogy of uniform function convergence.\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.3, Eq. 2) Using x for both inputs and post-activations is slightly confusing.\\n\\nChanged post-activations (called activations in the text) to \\u201cy\\u201d.\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.4, Eq. 5) Should v_\\\\beta multiply \\\\sigma_\\\\beta^2 ?\\n\\nIt should not, thank you, fixed.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p. 4) The summands in Equation (3) are iid -> \\\"conditionally iid\\\" (please also specify the conditioning variables/sigma-algebra).\\n\\nDone, thank you.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.4, Eq. 4) Eq. (4) is slightly confusing given you mention that K is a 4D object on the previous page.\\n\\t- I only understood K is \\\"flattened\\\" into |X|d x |X|d matrix when I reached (A.4.3) - this should be stated in main text as otherwise the above confusion arises.\\n\\nThank you, fixed and clarified (section 2.1.Shapes and indexing).\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.5, 3 and 3.1) The introduction of \\\"curly\\\" K is slighlty confusing. Please provide more detail when introducing the notation, e.g. state in what space the object lives.\\n\\nDone (see also new section 2.1.Shapes and indexing). \\n\\n------------------------------------------------------------------------------------\\n>>> - (p.5, before Eq. (11)) Is R^{n^(l+1)} the right space for vec(z^L) ? It seems that the meaning of z changes here as compared to the definition in Eq. (2). If z is still defined as in Eq. (2), how exactly is the vec operator defined here? Please clarify.\\n\\nNote that it\\u2019s n^{L+1} times d, yet you are correct that it should\\u2019ve been the dimension of z^L(x), not z^L. We have fixed the error and substantially improved the clarity of this section and clarified the notation in section 2.1.Shapes and indexing.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.16, A.4.2) \\\"law of large number\\\" -> \\\"weak law of large numbers\\\"\\n\\nDone.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.17) T_n is technically not a function from PSD_2 only but also from some underlying probability space into a measurable space (i.e. can be viewed as a random variable from the product space of PSD_2 and some other measurable space).\\n\\nWe no longer use T_n notation in the new revision.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.18, Eq. 38) Missing dot at the end. Also the K matrix either should or shouldn't have the superscript \\\"l\\\" (now mixed); it does have the superscript in Eq. (39) so probably \\\"should\\\".\\n\\nDone.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.18, Eq. 
39) Slightly confusing notation. Please clarify that both K and A(K) should have diagonal within the given range.\\n\\nDone (no such confusing notation in the new revision).\\n\\n------------------------------------------------------------------------------------\\n>>>- (p.18) \\\"squared integrable\\\" -> \\\"square integrable\\\" or \\\"square-integrable\\\"\\n\\nDone.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.18) Last display before Eq. (43): second inequality can be replaced by equality?!\\n\\nThank you, done.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.19, Eq. 47) The absolute value should be sup norm.\\n\\nWe believe the expression is correct.\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.19, Eq. 49) LHS is a scalar, RHS a 2x2 matrix (typo).\\n\\nBoth are scalars (\\\\Tau_\\\\infty is defined as a scalar).\\n\\n------------------------------------------------------------------------------------\\n>>> - (p.19, last sentence of the proof) It does not seem the inequalities need to be strict.\\n\\nThank you, fixed for n^{l+1}.\"}",
"{\"title\": \"1/4 AnonReviewer2 Reply\", \"comment\": \"Thank you for the very thorough and insightful review. We are glad you found our research useful. We have adjusted the text to address your suggestions. Please see below our responses to specific comments:\\n\\n------------------------------------------------------------------------------------\\n>>> Firstly, and rather mundanely: the figures. Fig 1 is not easy to read due to the density of plotting, and as there is no key it isn\\u2019t possible to tell what it shows.\\n>>> Figure 6 is also missing a key\\n\\nThank you for the suggestion. We have added (partial) keys to figures, increased figures axes / ticks / title fonts, and increased the size of Figures 1, 5, 6 (now Figure 3, a, b, c) to make them more legible. Please note that displaying a full key is not practical in Figures 1 and 6 (now Figure 3, a, c) since each line / point respectively corresponds to one of many distinct setting of hyper-parameters like weight and bias variances, non-linearity, and depth. Instead, these plots serve to give a general picture across many different configurations, various styles serving merely to make different entries more visually separate.\\n\\n------------------------------------------------------------------------------------\\n>>> Figure 2 is rather is called a \\u2018graphical model\\u2019 but the variables (weights and biases) are not shown. It should be specified that this is the graphical model of the infinite limit, in which case the K variables should not be random. Also, the caption on this figure refers to variables that aren\\u2019t in the figure, and is grammatically incorrect (perhaps something like \\u2018the limit of an infinitely wide convolutional\\u2019 is missing?).\\n\\nPlease note that the graphical model does represent a finite CNN with random covariance matrices K^l after marginalizing out the weights and biases, and we believe it to be accurate. Otherwise, we agree that in the infinite channel/width limit the model also remains correct, and covariance matrices K^l indeed become deterministic. However, in the revised version we have replaced the justification in section 2.2 in terms of marginalizing over {K^l} with a more rigorous approach, and this figure no longer appears in the text.\\n\\n------------------------------------------------------------------------------------\\n>>> Figure 3 has a caption which seems to be inconsistent with the coloring (for example green is center pixel in the text, but blue in the key). \\n\\nThank you for noticing this. We have fixed it in the updated version (now Figure 1).\\n\\n------------------------------------------------------------------------------------\\n>>> In Figure 5, what does the tick symbol denote?\\n\\nWe added keys to the figure to clarify the symbols. All plots share x and y axes where each denote number of channels and accuracy. Note that the x-axis is in log scale. Crosses are displayed at #channel values for which NN experiments were run.\"}",
"{\"title\": \"2/4 AnonReviewer2 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> Finally, the value some of Table 1 is questionable as so many entries are missing. For example, the Fashion-MNIST column has only two values, which seems to me of little use. \\n\\nSince running huge parameter sweeps for NNs (see Appendix A.7.5) is expensive, we have focused the full suite of experiments on CIFAR10, as the dataset benefiting from convolutional structure the most (among considered), thus allowing to gauge qualitatively the difference between different models (see e.g. Table 1 (former 2)). However, we still ran our (much smaller in the number of hyper-parameters) GP experiments on MNIST and Fashion-MNIST (a very recent dataset, hence no results from other work to report), to position our work among current and future SOTA results.\\n\\n------------------------------------------------------------------------------------\\n>>> There is an important distinction between finite width Bayesian-CNNs and the infinite limit, and this distinction is indeed made in the paper but not clearly enough in my view. I would anticipate that some readers might come away after a cursory reading thinking that Bayesian-CNNs are fundamentally worse than their parametric counterparts, but this is emphatically not the message of the paper. It seems that the infinite limit that is the cause of two problems. The first problem (or perhaps benefit) is that the infinite limit gives Gaussian inner layers, just as in the fully connected case. The second problem (and I\\u2019d say this is definitely a problem this time) is that the infinite limit loses the covariance between the pixels, at least with a fully connected final layer. I would recall [Matthews 2018, long version] section 7, which discusses that point that taking the infinite limit in the fully connected is actually potentially undesirable. To quote Matthews 2018, \\u201cMacKay (2002, p. 547) famously reflected on what is lost when taking the Gaussian process limit of a single hidden layer network, remarking that Gaussian processes will not learn hidden features\\u201d. Some discussion of this would enhance the presented paper, in my view. \\n\\nThank you for your comment and your references. We have added a clear disclaimer at the end of introduction that we make no claims about finite width Bayesian networks and added a footnote 8 to expand the discussion section.\\n\\n------------------------------------------------------------------------------------\\n>>> The discussion of eq (7) could be made more clear. Eq (7) is only defined on K, and not in composition with A. It is important that the alpha dependency is preserved by the A operation, and while I suppose this is obvious I would welcome a bit more detail. It would help to demonstrate the application of the results of [Cho and Saul 2009] to the convolution case explicitly (i.e. for C o A), in my view. \\n\\nThank you for your comment, we have clarified the application of the A operation by a) referencing the specific derivation of Equation (7) in Xiao et al, 2018 (Lemma A.1), b) defining A\\u2019s domain and codomain in section 2.2.1, and c) adding a section 2.1. \\u201cShapes and indexing\\u201d to make our matrix/vector notation more precise.\"}",
"{\"title\": \"3/4 AnonReviewer2 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> Regarding results, effort has clearly gone to keep the comparisons as fair as possible, but with these large datasets it is difficult to disentangle the many factors that might effect performance (as acknowledged on p9). It is a weakness of the paper that there is no toy example. An example demonstrating a situation which can only be solved with hierarchical features (e.g. features that are larger than the receptive field of a single layer) would be particularly interesting, as in this case I think the GP-CNN would fail, even with the average pooling, whereas the finite Bayesian-CNN would succeed (with a sufficiently accurate inference method). \\n\\nThank you for your suggestion. To the best of our knowledge, while (as you have referenced earlier) arguments have been made in the literature against GPs due to lack of hierarchical representation learning present in CNNs (Matthews et al, 2018, section 7), (MacKay 2003, section 45.7), (Neal 1996, Chapter 5) the practical impact of these assertions in a supervised regression setting has not been carefully investigated empirically or theoretically. Moreover, it is unclear if these beliefs hold if we use a sufficiently powerful class of kernels, and we explicitly construct such a class in our work. Further, we believe it is important to decouple hierarchy and finite representations in this discussion. NN-GPs do have a hierarchical kernel, and CNN-GPs have a spatially-local hierarchical kernel with a receptive field (3x3 per layer in our work) smaller than the input images, and they do end up benefiting from hierarchy significantly (see Figure 1 (former 3); further, the best CNN-GP models in Table 2 (former 1) are at least 8 layers deep). Finally, we highlight the similarity in performance between the best finite SGD-trained fully- and locally-connected networks in our work (Tables 1, 2), Lee et al 2018 (Tables 1, 2), as well as the similarities between small Bayesian NNs and NN-GPs in Matthews et al 2018 (section 5.3). Considering all of the above, we believe construction of meaningful datasets that will decisively disentangle performance of finite-feature models from GPs in the context of regression to be a non-trivial research problem and lie beyond the scope of this work.\\n\\n------------------------------------------------------------------------------------\\n>>> It would improve readability to stress the 1D notation in the main text rather than in a footnote. \\n\\nDone, see beginning of section 2.1.\\n\\n------------------------------------------------------------------------------------\\n>>> On first reading I missed this detail and was confused as I was trying to interpret everything as a 2D convolution. On reflection I think notation is used in the paper is good, but I think the generalization to 2D should be elevated to something more than the footnote. Perhaps a paragraph explaining how the 2D case works would be appropriate, especially as all the experiments are in 2D cases. \\n\\nDone, see beginning of section 2.1 referencing section A.3 with an added paragraph \\u201cND convolutions\\u201d at the end.\\n\\n------------------------------------------------------------------------------------\\n>>> 1,2,4 I think \\u2018easily\\u2019 is a bit of an overstatement. 
In this work the kernel is itself defined via a recursive convolutional operation, which doesn\\u2019t seem to me much more interpretable than the parametric convolution. At least the filters can be examined in the parametric case, which isn\\u2019t the case here. I do agree with the sentiment that a function prior is better than an implicit weight prior, however.\\n\\nThank you for this comment. Indeed, at the moment the kernel definition does not seem easily interpretable; in-depth investigation of its consequences is the subject of future work. We nonetheless think having compact expressions for the computation performed by a NN, both for the prior and posterior, can open up a novel route towards theoretical understanding. Note that examining filters in parametric space, to the best of our knowledge, can only be done after training and not analytically, the prior therefore remaining difficult to analyze. We have removed the word \\u2018easy\\u2019 from the text and added a footnote referencing filter visualization.\"}",
"{\"title\": \"4/4 AnonReviewer2 Reply\", \"comment\": \"------------------------------------------------------------------------------------\\n>>> 1,2,-1 This seems too vague to me, as at least to some extent, Matthew 2018 did indeed consider using NN-GPs to gain insight about equivalent NN models (e.g. section 5.3)\\n\\nTo our best knowledge, we are the first to learn the role of architecture on the functions represented with the networks using NN-GP correspondence. Specifically, in our Table 1 (former 2), we disentangle the role of network topology and equivariance in CNNs. Previous works (both Lee et al 2018 and Matthews et al 2018) focused on establishing the correspondence and understanding the properties of the corresponding GPs. It would be helpful if you could clarify specific insights laid out in Matthews et al 2018 (section 5.3). As far as we can tell the section discusses how the Bayesian NNs would match NN-GPs. \\n\\n------------------------------------------------------------------------------------\\n>>> 1.1,:,: I find it very surprising that there are no references to Cho and Saul 2009 in this section (one does appear in 2.2.2, however). \\n\\nWe have updated the related work section to address this point.\\n\\n------------------------------------------------------------------------------------\\n>>> 1.1,3,-2:-1 \\u2018Our work differs from all of these in that our GP corresponds exactly to a fully Bayesian CNN in the many channel limit\\u2019 I do not think this is completely true, as the deep convolution GP does correspond to an infinite limit of a Bayesian CNN, just not the same limit as the one taken in this paper. Similarly a DGP following the Danianou and Lawrence 2013 is an infinite limit of a NN, but one with bottlenecks between layers. It is important that readers appreciate that infinite limits can be taken in different ways, and the resulting models may be very different.\\n\\nThank you for the comment, we have modified the text in this section to emphasize this distinction more strongly. However, this seems mostly to be a question of semantics in using the term \\u201cinfinite limit\\u201d, i.e. whether one means to include or exclude bottleneck layers. We wish to point out, though, that the limit we take is nevertheless interesting and likely relevant to networks in which all layers widths are similarly large, which is arguably a rather large class of models used in the wild.\\n\\n------------------------------------------------------------------------------------\\n>>> This certain limit taken in this work has desirable computational properties, but arguably undesirable modelling implications.\\n\\nGood point, and we now emphasize this in the text as well.\\n\\n------------------------------------------------------------------------------------\\n>>> 1.1,-1,-2 It should be made more clear here that the SGD trained models are non-Bayesian. \\n\\nDone.\\n\\n------------------------------------------------------------------------------------\\n>>> Figure 3 The MC-CNN-GP appears to have performance that is nearly independent of the depth, even including 1 layer. Could this be explained?\\n\\nWe believe there are two factors at play. Firstly, to the best of our knowledge, dependence of model (be it NN-GPs or SGD-trained NNs) performance on depth is poorly understood and is difficult to decouple (if at all possible) from the particular dataset and architectural decisions like pooling or residual connections. 
Therefore, we do not necessarily find the lack of a clear and interpretable dependence surprising. Secondly, performance of MC-GP is subject to approximation noise and bias (we only used 16 filters, see Appendix A.7.3) as well as poor conditioning (see dark bands in Figures 2 and 7 in the new revision). Therefore we conjecture that there could be a similar underlying depth dependence to the one observed in other curves on the plot in Figure 1 (former 3), yet it is mild enough (just as the other curves do not have steep slopes either) to be masked by the MC approximation imperfections.\\n\\n------------------------------------------------------------------------------------\\n>>> 2.2,2,: The z^l variables are zero mean Gaussian with a fixed covariance, not delta functions, as I understand it. They are independent of each other due to the deterministic K^l, certainly, but they are not themselves deterministic. Could this be clarified? \\n\\nYou are correct and we have edited the text to clarify this fact.\"}",
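As a one-line restatement of the clarified claim in the 4/4 reply above, in the thread's notation (a sketch of the structure described in the exchange, not the paper's exact theorem statement):

```latex
% Conditionally on the covariance K^l (deterministic in the infinite
% limit), the pre-activations of distinct channels are iid Gaussian,
% i.e. random variables rather than delta functions:
z^{l}_{j}\,\big|\,K^{l}\;\overset{\text{iid}}{\sim}\;\mathcal{N}\big(0,\;K^{l}\big),
\qquad j=1,2,\dots
```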
"{\"title\": \"AnonReviewer1 Reply\", \"comment\": \"Thank you for your detailed and encouraging review! We are glad you found our research interesting. Please find below our replies to your specific comments:\\n\\n------------------------------------------------------------------------------------\\n>>> -> Put in bold best results of the experiments.\\n\\nThank you for the suggestion. Tables 1 and 2 are updated. \\n\\n------------------------------------------------------------------------------------\\n>>> -> Why not put \\\"deep\\\" in the title?\\n\\nGood suggestion, we have updated our title.\\n\\n------------------------------------------------------------------------------------\\n>>> -> Define the channel concept in introduction.\\n>>> -> In the introduction, introduce formally a CNN. (brief)\\n\\nWe have formally defined convolutional operation with convolution filters in Section 2.1 (preliminaries).\\n\\n------------------------------------------------------------------------------------\\n>>> -> Define the many channel limit.\\n\\nThe revised introduction describes the many channel limit more concretely (point 1 under the contributions), also see the end of the new section 2.1 \\u201cShapes and indexing\\u201d.\\n\\n------------------------------------------------------------------------------------\\n-> Put a figure with the equivalences and with the contents of the paper explaining a bit.\\n\\nThank you for the suggestion. We have added Figure 4 to better explain the notation and different concepts used in the paper.\"}",
"{\"title\": \"BAYESIAN CONVOLUTIONAL NEURAL NETWORKS WITH MANY CHANNELS ARE GAUSSIAN PROCESSES\", \"review\": \"Overall Score: 7/10.\", \"confidence_score\": \"3/10. (This paper includes so many ideas that I have not been able to prove that are right due to\\nmy limited knowledge, but I think that there are correct).\", \"summary_of_the_main_ideas\": \"This paper establishes a theoretical correspondence between BCNN with many channels and GP and\\npropsoes a Monte Carlo method to estimate the GP corresponding to a NN architecture. It is a very strong and complete\\npaper since its gives theoretical contents and experiments content. I think that it is a really good result that should\\nbe read by anyone interested in Neural Network and GP equivalences, and that Machine Learning in general needs these kind\\nof papers that establish this complicated equivalences.\", \"related_to\": \"The work by Lee and G. Matthews (2018) regarding equivalence between Deep Neural Networks and GPs and the\\nConvolutional Neural Network framework.\", \"strengths\": \"Theoretical content, Experiments and methodology content (even a Monte Carlo approach) makes it a very complete paper.\\nHaving been able to establish complicated and necessary equivalences.\", \"weaknesses\": \"Very difficult for newcomers or non expert technical readers.\\n\\nDoes this submission add value to the ICLR community? : Yes, it adds, and a lot.\", \"quality\": \"Is this submission technically sound?: Yes it is, it is a necessary step in GP-NN equivalence research.\\nAre claims well supported by theoretical analysis or experimental results?: Yes, quite sure.\\nIs this a complete piece of work or work in progress?: Complete piece of work.\\nAre the authors careful and honest about evaluating both the strengths and weaknesses of their work?: Yes, they are.\", \"clarity\": \"Is the submission clearly written?: Yes, but I suggest giving formal introductions to some concepts in the introduction\\nand include a figure with the ideas given or the equivalences.\\nIs it well organized?: Yes, although sometimes section feel a little but put one after the another. 
More cohesion would be\nadded if they were introduced earlier.\\nDoes it adequately inform the reader?: Yes.\", \"originality\": \"Are the tasks or methods new?: The Monte Carlo method is new, the other methods are not, but the task of the equivalence is new.\\nIs the work a novel combination of well-known techniques?: It is kind of a combination, but the proposed ideas are new, it is very theoretical.\\nIs it clear how this work differs from previous contributions?: Yes, the authors take care to explain it clearly.\\nIs related work adequately cited?: Yes, this is a huge positive point of the paper.\", \"significance\": \"Are the results important?: From my point of view, yes they are.\\nAre others likely to use the ideas or build on them?: I think so, because the topic is hot right now.\\nDoes the submission address a difficult task in a better way than previous work?: It is a new task.\\nDoes it advance the state of the art in a demonstrable way?: Yes, clearly.\\nDoes it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?: Yes, the theoretical approach is sound.\", \"arguments_for_acceptance\": \"It is a paper that provides theory, methodology and experiments regarding a very difficult and challenging task that adds value to the community and makes progress in the area of the equivalence between NN and GPs.\", \"arguments_against_acceptance\": \"I do not have any.\", \"typos\": \"-> Define the channel concept in introduction.\\n-> Put in bold best results of the experiments.\\n-> Why not put \\\"deep\\\" in the title?\\n-> In the introduction, introduce formally a CNN. (brief)\\n-> Define the many channel limit.\\n-> Put a figure with the equivalences and with the contents of the paper explaining a bit.\", \"after_rebuttal\": \"=============\\n\\nThe authors have addressed many topics that not only I but also Reviewer 3 raised, and hence I score this paper with a 7 and recommend it for publication.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review\", \"review\": \"****Reply to authors' rebuttal****\\n\\nDear Authors,\\n\\nI greatly appreciate the effort you have put into the rebuttal. The changes you have made have addressed most of my concerns and I believe that the few outstanding ones can be fixed without significantly affecting the main message of the paper. I will thus be recommending acceptance of the paper.\\n\\nBest wishes,\\nRev 3\", \"several_remarks_on_the_updated_version\": [\"(p.20, A.5.1) To ensure the random variables are well-defined, please state explicitly which sigma algebra is F (I am assuming the product Borel sigma-algebra + the relevant definitions of the random variables). This is important for the reader to understand what convergence in distribution on this particular space does and does not imply. Some readers might also appreciate if you used the mentioned \\\"infinite width, finite fan-out, networks\\\" (Matthews et al.) construction (or similar) which would ensure that the collection of random variables {z_i^l}_{i \\\\in N*} is well-defined for any network width and l, which currently does not seem to be the case according to Eqs. (28-29). If the full countably infinite vectors of random variables are not defined for all networks in the sequence, it is not possible to prove their convergence in distribution to the relevant GPs.\", \"(p.21, A.5.3) Thank you for clarifying the definition of elements of the sequential limit. If possible, I would further recommend first fixing the probability space and then defining the random variables (the argument just before Theorem A.2 seems somewhat circular as R.V.s should first be defined on some space, and not put on a probability space post-hoc; perhaps some product space with the product sigma-algebra would work here?!). Furthermore, if I understand correctly, there are now L sequences of neural networks (one sequence for networks with 0, ..., L-1 \\\"infinite layers\\\"), rather than a single sequence, and the \\\"infinite layers\\\" are squashed into a single \\\"infinite layer\\\" which is represented by z_i^\\\\infty? In other words, all the infinite layers are replaced by iid samples from a particular GP and only the finite layers have the standard neural network structure? If I am mistaken (or not), perhaps a further explanatory footnote would help the reader.\", \"(p.21, A.5.3 & p.23, A.5.4) Thank you for improving the discussion of joint convergence. Please clarify that proving convergence for any finite m is sufficient for proving convergence in distribution of the countably infinite vector {z_i}_{i \\\\in N*} for the **product Borel sigma-algebra** (e.g. using an argument like the one on p.19 of Billingsley (1999)).\", \"(p.21) \\\"Uniformly square-integrable\\\": to me, this phrase suggests that the collection of squares of the functions has to be uniformly integrable but the definition in Eq. (27) only states one of the conditions in definition of uniform integrability. Please clarify that \\\"uniform square-integrability\\\" here is not related to the standard notion of \\\"uniform integrability\\\" in the literature.\", \"****Summary****\", \"This paper extends recent results on convergence of Bayesian fully connected networks (FCNs) to Gaussian processes (GPs), to the equivalent relationship between convolutional neural networks (CNNs) and GPs. This is currently an area of high interest, with Xiao et al. 
(2018) examining the same relationship from a mean-field perspective, and two other concurrent papers making contributions:\"], \"specific_comments_and_issues_for_individual_proofs\": \"(i) Hazan and Jaakkola type approach (A.4.1)\\n\\nhttps://stats.stackexchange.com/questions/180708/x-i-x-j-independent-when-i%E2%89%A0j-but-x-1-x-2-x-3-dependent/180727#180727\\n\\n- (p.16, A.4.1) The application of the multivariate CLT is slightly more complicated than the text suggests. Except for the necessity of proving finiteness of the relevant moments, multivariate CLT does not out-of-the-box apply to infinite dimensional random variables like {z_j^{l+1}}_{1 \\\\leq j \\\\leq \\\\infty} as claimed. Hence joint convergence is not proved which will be problematic for the reasons explained earlier.\\n\\n\\n(ii) Lee et al. type approach (A.4.2)\\n\\nThis type of approach follows the technique used by Lee et al. (2018), \\\"Deep Neural Networks as Gaussian Processes\\\".\\n\\nApplication of the weak law of large numbers (wLLN): As mentioned before, convergence in probability is only possible between random variables on the same underlying space. This is usually not a problem when wLLN is applied as the random variables converge to a constant random variable. Because every constant random variable generates the trivial sigma-algebra, it is measurable for any underlying probability space and thus convergence in probability is well-defined. The situation here is more complicated because the target is constant only conditionally on the previous layer, i.e. is not constant. As a side note, even the conditioning is only well-defined if all random variables live on the same space (conditioning on a random variable is technically conditioning on the sub-sigma-algebra it generates on the shared space).\\n\\nAssuming the problem with all K^{l, t} (t denotes the dependence on network width), for all l \\\\in {1, ... L} and t \\\\in {1, 2, 3, ...}, being on the same underlying probability space is solved, the next point is application of the wLLN itself. You claim \\\"we can apply the law of large numbers and conclude that [Eq. (6)]\\\" (p.4) which is not entirely correct here. Focusing on the application when the sizes of all the previous layers are held fixed, the two conditions that have to be checked here are: (i) the conditional expectation of the iid summands in Eq. (3) is finite; (ii) the sequence of iid variables is fixed. Please provide an explicit proof of (i). Regarding (ii), I am specifically concerned with the fact that with changing t (and thus network widths), the sequence of random variables changes (because the previous K^{l-1,t} matrix changes) which means that a completely different size of the current layer may be necessary to get sufficiently close to the target (which has itself changed with t). In other words, instead of having a fixed infinite sequence of iid random variables, you currently have a sequence of growing finite sets of random variables which are iid only within the finite sets, but not between members of the sequence (different t). The direct implication is that this type of proof is not applicable to the \\\"simultaneous limit\\\" case as claimed in the main text (Section 2.2 says all proofs are equivalent and lead to Eq. (10) which explicitly takes the simultaneous limit), since the application would require some form of uniform convergence in probability akin to (A.4.3). I think that the approach taken in (A.4.3) is a correct way to address this issue and would thus recommend focusing on (A.4.3) and leaving (A.4.2) out. 
The appendix seems to acknowledge that (A.4.2) does not work for the \"simultaneous limit\" - please adapt the main text accordingly.\", \"a_note_on_convergence_in_probability\": \"In Eq. (3), the focus is on convergence in probability of individual entries of the K matrices. This in general does not imply convergence of all entries jointly. However, the type of convergence studied here is convergence to a constant random variable which is fortunate because simultaneous convergence of all entries in probability can be obtained for free in this case (thanks to having a **finite** number of entries of K). I think it might be potentially beneficial for the reader if this was explicitly stated as a footnote with an appropriate reference included.\", \"a_note_on_marginal_vs_joint_probability\": \"As you say above Eq. (23), you are only proving convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\\\leq j \\\\leq \\\\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\\n\\n\\n**Approaches for BNNs (\\\"simultaneous limit\\\")**\\n\\n(iii) The proof in (A.4.3)\\n\\nMy biggest concern about this approach is that it only establishes convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\\\leq j \\\\leq \\\\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\", \"other_comments\": [\"(p.17) You say \\\"Using Theorem A.1 and the arguments in the above section, it is not difficult to see that a sufficient condition is that the empirical covariance converges in probability to the analytic covariance\\\".\", \"Can you please provide more detail as it is unclear what exactly you have in mind?\", \"I will be assuming from now on that you show that a particular combination of the Portmanteau theorem and convergence of K^L in probability, used to obtain pointwise convergence of the characteristic function, is sufficient.\", \"(p.18) Condition on activation function: The class \\\\Omega(R) is dependent on the considered input set X through the constant R. This seems slightly cumbersome as it would be desirable to know whether a particular activation function can be used without any reference to the data. It would be nice (but not necessary) if you can derive a condition on \\\\phi which would not rely on the constant R but allows ReLU.\", \"(p.19, Eq. 48) I see where Eq. (48) is coming from, i.e. from Eq. (44) and the assumption of \\\\bar{\\\\varepsilon} ball around A(K_\\\\infty^l) being in PSD(R), but it would be nicer if you could be a bit more verbose here and also write out the bound explicitly (caveat: I did not check if the definition of \\\\bar{\\\\varepsilon} matches up but assume a potential modification would not affect the proof in a significant way).\", \"(p.19) The second part of the proof is a little confusing, especially after Eq. (49) - please be more verbose here. For example, just after Eq. (49), it is said that because the two random variables have the same distribution, property (3) of \\\\Omega(R)'s definition can be applied. However the two random variables are not identical and importantly are not constructed on the same underlying probability space. 
Property (3) is a statement about the set of random variables {T_n (Sigma)}_{Sigma \\\\in PSD_2(R)} and not about the different 2x2 submatrices of K^{l+1}, but it needs to be applied to the latter. When this is clarified, the next point that could be made clearer is in the following sentence where changing t will affect the 2x2 submatrices of K^{l+1,t} as well as the bound through U(t) and V(t); it is not immediately obvious that the proof goes through as claimed so please be a bit more verbose.\", \"****Typos and other minor remarks****\", \"(p.2, top) \\\"hidden layers go to infinity uniformly\\\": The use of word uniformly is non-standard in this context. Please clarify.\", \"(p.3, Eq. 2) Using x for both inputs and post-activations is slightly confusing.\", \"(p.4, Eq. 5) Should v_\\\\beta multiply \\\\sigma_\\\\beta^2 ?\", \"(p. 4) The summands in Equation (3) are iid -> \\\"conditionally iid\\\" (please also specify the conditioning variables/sigma-algebra).\", \"(p.4, Eq. 4) Eq. (4) is slightly confusing given you mention that K is a 4D object on the previous page.\", \"I only understood K is \\\"flattened\\\" into |X|d x |X|d matrix when I reached (A.4.3) - this should be stated in main text as otherwise the above confusion arises.\", \"(p.5, 3 and 3.1) The introduction of \\\"curly\\\" K is slightly confusing. Please provide more detail when introducing the notation, e.g. state in what space the object lives.\", \"(p.5, before Eq. (11)) Is R^{n^(l+1)} the right space for vec(z^L) ? It seems that the meaning of z changes here as compared to the definition in Eq. (2). If z is still defined as in Eq. (2), how exactly is the vec operator defined here? Please clarify.\", \"(p.16, A.4.2) \\\"law of large number\\\" -> \\\"weak law of large numbers\\\"\", \"(p.17) T_n is technically not a function from PSD_2 only but also from some underlying probability space into a measurable space (i.e. can be viewed as a random variable from the product space of PSD_2 and some other measurable space).\", \"(p.18, Eq. 38) Missing dot at the end. Also the K matrix either should or shouldn't have the superscript \\\"l\\\" (now mixed); it does have the superscript in Eq. (39) so probably \\\"should\\\".\", \"(p.18, Eq. 39) Slightly confusing notation. Please clarify that both K and A(K) should have diagonal within the given range.\", \"(p.18) \\\"squared integrable\\\" -> \\\"square integrable\\\" or \\\"square-integrable\\\"\", \"(p.18) Last display before Eq. (43): second inequality can be replaced by equality?!\", \"(p.19, Eq. 47) The absolute value should be sup norm.\", \"(p.19, Eq. 49) LHS is a scalar, RHS a 2x2 matrix (typo).\", \"(p.19, last sentence of the proof) It does not seem the inequalities need to be strict.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper extends the recent results concerning GP equivalence of infinitely wide FC nets to the convolutional case. This paper is generally of a high quality (notwithstanding the lack of keys on figures) and provides insights to an important class of model. I recommend that this paper be accepted, but I think it could be improved in a few ways.\\n\\nFirstly, and rather mundanely: the figures. Fig 1 is not easy to read due to the density of plotting, and as there is no key it isn\\u2019t possible to tell what it shows. Figure 2 is rather is called a \\u2018graphical model\\u2019 but the variables (weights and biases) are not shown. It should be specified that this is the graphical model of the infinite limit, in which case the K variables should not be random. Also, the caption on this figure refers to variables that aren\\u2019t in the figure, and is grammatically incorrect (perhaps something like \\u2018the limit of an infinitely wide convolutional\\u2019 is missing?). Figure 3 has a caption which seems to be inconsistent with the coloring (for example green is center pixel in the text, but blue in the key). Figure 6 is also missing a key. In Figure 5, what does the tick symbol denote? Finally, the value some of Table 1 is questionable as so many entries are missing. For example, the Fashion-MNIST column has only two values, which seems to me of little use. [I would have given the paper a rating of 7 were it not for these issues]\\n\\nRegarding the presentation of the content, I found this paper generally easy to follow and the arguments sound. Here are few points:\\n\\nThere is an important distinction between finite width Bayesian-CNNs and the infinite limit, and this distinction is indeed made in the paper but not clearly enough in my view. I would anticipate that some readers might come away after a cursory reading thinking that Bayesian-CNNs are fundamentally worse than their parametric counterparts, but this is emphatically not the message of the paper. It seems that the infinite limit that is the cause of two problems. The first problem (or perhaps benefit) is that the infinite limit gives Gaussian inner layers, just as in the fully connected case. The second problem (and I\\u2019d say this is definitely a problem this time) is that the infinite limit loses the covariance between the pixels, at least with a fully connected final layer. I would recall [Matthews 2018, long version] section 7, which discusses that point that taking the infinite limit in the fully connected is actually potentially undesirable. To quote Matthews 2018, \\u201cMacKay (2002, p. 547) famously reflected on what is lost when taking the Gaussian process limit of a single hidden layer network, remarking that Gaussian processes will not learn hidden features\\u201d. Some discussion of this would enhance the presented paper, in my view. \\n\\nThe discussion of eq (7) could be made more clear. Eq (7) is only defined on K, and not in composition with A. It is important that the alpha dependency is preserved by the A operation, and while I suppose this is obvious I would welcome a bit more detail. It would help to demonstrate the application of the results of [Cho and Saul 2009] to the convolution case explicitly (i.e. for C o A), in my view. \\n\\nRegarding results, effort has clearly gone to keep the comparisons as fair as possible, but with these large datasets it is difficult to disentangle the many factors that might effect performance (as acknowledged on p9). 
It is a weakness of the paper that there is no toy example. An example demonstrating a situation which can only be solved with hierarchical features (e.g. features that are larger than the receptive field of a single layer) would be particularly interesting, as in this case I think the GP-CNN would fail, even with the average pooling, whereas the finite Bayesian-CNN would succeed (with a sufficiently accurate inference method). \\n\\nIt would improve readability to stress the 1D notation in the main text rather than in a footnote. On first reading I missed this detail and was confused as I was trying to interpret everything as a 2D convolution. On reflection I think the notation used in the paper is good, but I think the generalization to 2D should be elevated to something more than the footnote. Perhaps a paragraph explaining how the 2D case works would be appropriate, especially as all the experiments are in 2D cases. \\n\\nSome further smaller points on specific [section, paragraph, line]s\\n\\n1,2,4 I think \\u2018easily\\u2019 is a bit of an overstatement. In this work the kernel is itself defined via a recursive convolutional operation, which doesn\\u2019t seem to me much more interpretable than the parametric convolution. At least the filters can be examined in the parametric case, which isn\\u2019t the case here. I do agree with the sentiment that a function prior is better than an implicit weight prior, however.\\n\\n1,2,-1 This seems too vague to me, as at least to some extent, Matthews 2018 did indeed consider using NN-GPs to gain insight about equivalent NN models (e.g. section 5.3)\\n\\n1.1,:,: I find it very surprising that there are no references to Cho and Saul 2009 in this section (one does appear in 2.2.2, however). \\n\\n1.1,3,-2:-1 \\u2018Our work differs from all of these in that our GP corresponds exactly to a fully Bayesian CNN in the many channel limit\\u2019 I do not think this is completely true, as the deep convolution GP does correspond to an infinite limit of a Bayesian CNN, just not the same limit as the one taken in this paper. Similarly a DGP following Damianou and Lawrence 2013 is an infinite limit of a NN, but one with bottlenecks between layers. It is important that readers appreciate that infinite limits can be taken in different ways, and the resulting models may be very different. The particular limit taken in this work has desirable computational properties, but arguably undesirable modelling implications.\\n\\n1.1,-1,-2 It should be made more clear here that the SGD trained models are non-Bayesian. \\n\\nFigure 3 The MC-CNN-GP appears to have performance that is nearly independent of the depth, even including 1 layer. Could this be explained?\\n\\n2.2,2,: The z^l variables are zero mean Gaussian with a fixed covariance, not delta functions, as I understand it. They are independent of each other due to the deterministic K^l, certainly, but they are not themselves deterministic. Could this be clarified?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Hke20iA9Y7 | Efficient Training on Very Large Corpora via Gramian Estimation | [
"Walid Krichene",
"Nicolas Mayoraz",
"Steffen Rendle",
"Li Zhang",
"Xinyang Yi",
"Lichan Hong",
"Ed Chi",
"John Anderson"
] | We study the problem of learning similarity functions over very large corpora using neural network embedding models. These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale.
We propose new efficient methods to train these models without having to sample unobserved pairs. Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians. We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates. We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods. | [
"similarity learning",
"pairwise learning",
"matrix factorization",
"Gramian estimation",
"variance reduction",
"neural embedding models",
"recommender systems"
] | https://openreview.net/pdf?id=Hke20iA9Y7 | https://openreview.net/forum?id=Hke20iA9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJeDbnSNx4",
"SJxICZhjam",
"rkxc_enipX",
"ryltke3iT7",
"BJgu7y3o67",
"ryx3glNinm",
"rJgsqcLt3m",
"r1gf2j2w27",
"r1gepzTgnm",
"BygKKCEuiX",
"r1xBJUGgo7",
"H1ejQiWgsX",
"SJl1I_Zgi7",
"rJx8ypbkjQ",
"rkeCV3lK5m",
"rke9t6tEcm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment"
],
"note_created": [
1544997887269,
1542336974276,
1542336626203,
1542336480983,
1542336288037,
1541255156178,
1541134995099,
1541028778041,
1540571832495,
1540013696890,
1539479005278,
1539476259272,
1539475527463,
1539411166090,
1539013686384,
1538723202499
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper929/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper929/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper929/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"ICLR.cc/2019/Conference/Paper929/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"~Yu_Bai1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents methods to scale learning of embedding models estimated using neural networks. The main idea is to work with Gram matrices whose sizes depend on the length of the embedding. Building upon existing works like SAG algorithm, the paper proposes two new stochastic methods for learning using stochastic estimates of Gram matrices.\\n\\nReviewers find the paper interesting and useful, although have given many suggestions to improve the presentation and experiments. For this reason, I recommend to accept this paper.\", \"a_small_note\": \"SAG algorithm was originally proposed in 2013. The paper only cites the 2017 version. Please include the 2013 version as well.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper on fast stochastic learning of embedding models.\"}",
"{\"title\": \"Thank you for the comments; revision uploaded\", \"comment\": [\"We would like to thank all reviewers for their careful reading and helpful suggestions. We have uploaded a revision of the paper with the following changes:\", \"We added a new section to the appendix (Appendix C) discussing how to adapt the methods to a non-uniform weight matrix.\", \"We added Appendix E.1 to relate the gradient estimation error to the Gramian estimation error, with a numerical experiment (Figure 6) showing the effect of our methods on gradient estimates.\", \"We added a comment to the conclusion to emphasize that our experiments were focused on problems with very large vocabulary size.\", \"We rearranged the introduction, and improved transitions between sections.\", \"We added comments to the numerical experiments (Section 4 and Appendix E) highlighting the effect of the batch size and of the sampling distribution.\", \"We thank the reviewers again for their time and helpful comments.\"]}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review and your helpful suggestions.\\n\\nWe updated the organization following the reviewer's suggestions, by reorganizing the introduction and improving the transitions between sections. We also added a comment about our choice of hyper-parameters: in the main experiments of Section 4, the hyper-parameters were cross-validated using the baseline. The effect of some of the hyper-parameters is further studied in the appendix: the effect of the batch size and learning rate is studied in Appendix D.2 (now Appendix E.4 in the revision), and the effect of the penalty coefficient \\u03bb is illustrated in Appendix C (now Appendix D in the revision). We did not include these results in the main body of the paper for space constraints, and to keep the message focused, but we added a note to Section 4 pointing to the appendix for further details on the effect of the various hyper-parameters.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review and your helpful suggestions.\\n\\n1) On the effect of sample size: we agree that the sample size directly affects the performance of these methods. We investigated this effect in Appendix D.2 (which is now Appendix E.4 in the revision), where we ran the same experiment on Wikipedia English with batch sizes 128, 512 (Tables 3 and 4), and compared the results to batch size 1024 (Table 2). We simultaneously varied the learning rate to understand its effect as well, but focusing on the effect of batch size only, we can observe that\\n(i) the performance of all methods increases with the batch size (at least in the 128-1024 range). \\n(ii) the relative improvement of our methods (compared to the baseline) is larger for smaller batch sizes: the relative improvement is 19.5% for 1024, 26.7% for 512, and 29.1% for 128.\\nOf course, one cannot increase the batch size indefinitely as there are hard limits on memory size, and the key advantage of our methods is in problems where sampling-based methods give poor estimates even with the largest feasible batch size.\\nThe effect of the batch size can also be seen to some extent in Figure 2.a, where we show the quality of the Gramian estimates for batch size 128 and 1024. The figure suggests that the quality improves, for all methods, with larger batch sizes, and that SOGram with batch size 128 has a comparable estimation quality to the baseline with batch size 1024.\\n\\n2) The reviewer raises an interesting point. We have observed in our experiments that for a fixed sampling distribution, improving the Gramian estimates generally leads to better MAP, but we cannot draw conclusions when the sampling distribution changes. One possible explanation is that the sampling distribution affects both the quality of the Gramian estimates, and the frequency at which the item embeddings are updated. In particular, tail items are sampled more often under uniform sampling than under the other distributions, and updating their embeddings more frequently may contribute to improving the MAP. We added a comment (Appendix E.2 in the revision) to highlight this observation.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your assessment and your helpful suggestions.\", \"regarding_evaluation\": \"since the focus of the paper is on the design of an efficient optimization method, we wanted to choose an experiment where (i) the evaluation metric is aligned with the optimization objective, and (ii) the vocabulary size is very large (on the order of 10^6 or more), making traditional sampling-based methods inefficient, because they would require too many samples to achieve high model quality. This is why we chose the Wikipedia dataset, which is, to our knowledge, one of the few publicly available datasets of this scale. It also offers different subsets of varying scale, which allowed us to illustrate the effect of the problem size, suggesting that the benefit of the Gramian-based methods increases with vocabulary size. We added a note to the revision to comment on our choice.\\nWe also agree that it will be beneficial to evaluate these method on other applications such as more traditional natural language tasks, and this is something we intend to pursue in future work.\"}",
"{\"title\": \"Good paper with clear contribution, could be made stronger with better evaluation\", \"review\": \"The paper is well written with clear contribution to the problem of similarity learning. My only complain is that, I think the evaluation is a bit weak and does not support the claim that is applicable all kinds of problems e.g. nlp and recommender systems. This task in Wikipedia does not seem to be standard (kind of arbitrary) \\u2014 there are some recommendation results in the appendix but I think it should have been in the main paper.\\n\\nOverall interesting but I would recommend evaluating in standard similarity learning for nlp and other tasks (perhaps more than one)\\n\\nThere are specific similarity evaluation sets for word embeddings. It can be found in following papers: https://arxiv.org/pdf/1301.3781.pdf\", \"http\": \"//www.aclweb.org/anthology/D15-1036\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice work\", \"review\": \"This paper proposes an efficient algorithm to learn neural embedding models with a dot-product structure over very large corpora. The main method is to reformulate the objective function in terms of generalized Gramiam matrices, and maintain estimates of those matrices in the training process. The algorithm uses less time and achieves significantly better quality than sampling based methods.\\n\\n1. About the experiments, it seems the sample size for sampling based experiments is not discussed. The number of noise samples have a large influence on the performance of the models. In figure 2, different sampling strategies are discussed. It would be cool if we can also see how the sampling size affects the estimation error. \\n\\n2. If we just look at the sampling based methods, in figure 2a, uniform sampling\\u2019s Gramian estimates is the worst. But the MAP of uniform sampling on validation set for all three datasets are not the worst. Do you have any comments?\\n\\n3. wheter an edge -> whether an edge.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good work overall\", \"review\": \"This paper proposes a method for estimating non-linear similarities between items using Gramian estimation. This is achieved by having two separate neural networks defined for each item to be compared, which are then combined via a dot product. The proposed innovation in this paper is to use Gramian estimation for the penalty parameter of the optimization which allows for the non-linear case. Two algorithms are proposed which allow for estimation in the stochastic / online setting. Experiments are presented which appear to show good performance on some standard benchmark tasks.\\n\\nOverall, I think this is an interesting set of ideas for an important problem. I have two reservations. First, the organization of the paper needs to be addressed in order to aid user readability. The paper often jumps across sections without giving motivation or connecting language. This will limit the audience of the paper and the work. Second (and more importantly), I found the experiments to be slightly underwhelming. The hyperparameters (batch size, learning rate) and architecture don\\u2019t have any rationale attached to them. It is also not entirely clear whether the chosen comparison methods fully constitute the current state of the art. Nonetheless, I think this is an interesting idea and strong work with compelling results.\", \"editorial_comments\": \"The organization of this paper leaves something to be desired. The introductions ends very abruptly, and then appears to begin again after the related work section. From what I can tell the first three sections all constitute the introduction and should be merged with appropriate edits to make the narrative clear.\\n\\n\\u201cwhere x and y are nodes in a graph and the similarity is wheter an edge\\u201d \\u2192 typo and sentence ends prematurely.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"On using more general weight matrices\", \"comment\": \"1) For observed pairs, one can use arbitrary weights \\ud835\\udc64_\\ud835\\udc56\\ud835\\udc57 . For the unobserved data, in our problem setting, the set of all possible pairs (i, j) is too large to specify an arbitrary weight matrix (say if the vocabulary size is 10^7 or more, the full weight matrix would have more than 10^14 entries). In such situations one needs to provide a concise description of this weight matrix. One such representation is the sum of a sparse + low-rank component, and our methods handle this case: the sparse component can be optimized directly, and the low-rank component can be optimized using our Gramian estimation methods. The previous answer describes the rank-1 case where \\ud835\\udc64_\\ud835\\udc56\\ud835\\udc57 = \\ud835\\udc4e_\\ud835\\udc56 \\ud835\\udc4f_\\ud835\\udc57 , and the same argument generalizes to the low-rank case (for a rank-r matrix weight matrix, one needs to maintain 2*r Gramians).\\n\\n2) In retrieval setting with a very large corpus, the dot product structure can be the only viable option, as scoring all candidates in linear time is prohibitively expensive, while maximum inner-product search is approximated in sublinear time. As mentioned above, even in models that don't have the dot product structure, our method applies to the global orthogonal regularizer in any embedding layer.\\nWe believe our methods are applicable to industrial settings. Our experiments suggest that the relative improvement (w.r.t. existing sampling based methods) grows with the corpus size (see Table 2), so we expect to see large improvements in applications with very large corpora. As for comparing different model classes (neural embedding models Vs. factorization machines) this is outside the scope of the paper, our focus is instead on developing efficient optimization methods for the neural embedding model class.\"}",
"{\"comment\": \"thanks for your explanation. no doubt, this is an excellent work. I just read the answer for my first question (will read others later). When talking about the weight setting, I mean a_ij which involves both users and items, not only user-specific or item-specifc weight. a_ij is very common and it seems that it cannot always be rewritten as a_i b_j. Do the algorithm apply in this settting?\\n2 I still think the dot product structure in fig1 is not that popular recently, kinda of a bit popular when deep learning is just in the starting stage. Do you find this structure much better than a basic factorization machines\\uff08 just a digression.\", \"btw\": \"what do you think applying this algorithm in industry :)\", \"title\": \"thanks for your reply!\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your comments. We will discuss each point below.Thank you for your comments. We will discuss each point below.\\n\\n1) We agree that it is often a good idea to use non-uniform weights, (as well as non-uniform sampling distributions), and the proposed methods support these variants. We did not discuss non-uniform weights to avoid overloading the presentation, but we can certainly add a section to the appendix. As discussed in our previous comment, if we define the penalty as 1/\\ud835\\udc5b^2 \\u2211_\\ud835\\udc56 \\u2211_\\ud835\\udc57 \\ud835\\udc4e_\\ud835\\udc56 \\ud835\\udc4f_\\ud835\\udc57 \\u27e8\\ud835\\udc62_\\ud835\\udc56, \\ud835\\udc63_\\ud835\\udc57\\u27e9^2 , (where in a recommendation setting, \\ud835\\udc4e_\\ud835\\udc56 is a user-specific weight and \\ud835\\udc4f_\\ud835\\udc57 is an item-specific weight), then this expression is equivalent to \\u27e8\\ud835\\udc3a^\\ud835\\udc62, \\ud835\\udc3a^\\ud835\\udc63\\u27e9 where \\ud835\\udc3a^\\ud835\\udc62, \\ud835\\udc3a^\\ud835\\udc63 are weighted Gram matrices, defined by \\ud835\\udc3a^\\ud835\\udc62 = 1/\\ud835\\udc5b \\u2211_\\ud835\\udc56 \\ud835\\udc4e_\\ud835\\udc56 \\ud835\\udc62_\\ud835\\udc56\\u2297\\ud835\\udc62_\\ud835\\udc56 and similarly for \\ud835\\udc3a^\\ud835\\udc63. The same methods (SAGram, SOGram) can be applied to the weighted Gramians.\\n\\n2) The dot product structure remains important in recent literature, e.g. [1, 2, 3], especially in retrieval settings where one needs to score a large corpus, as finding the top-k items in a dot product model is efficient (see literature on Maximum Inner Product Search, e.g. [4, 5] and references therein). In addition to such models, our methods can also apply to arbitrary models using the Global Orthogonal regularizer described in [6]. The effect of the regularizer is to spread-out the distribution of embeddings, which can improve generalization. We show in Appendix C that this regularizer can be written using Gramians, thus one can apply SOGram or SAGram to such models.\\n\\n3) On the choice of loss function: the loss on observed pairs (the function \\u2113 in our notation) is not limited to square loss, and could be logistic loss for example. The penalty function on all pairs, (\\ud835\\udc54 in our notation) is a quadratic function. It can be extended to a larger family (the spherical family discussed in [7]), but this is beyond the scope of this paper.\\n\\n4) On the derivation of the Gramian formulation: we gave a concise derivation in Section 2.2 due to space limitations, but we can expand here and give some intuition. The penalty term \\ud835\\udc54 is a double-sum 1/\\ud835\\udc5b^2 \\u2211_\\ud835\\udc56 \\u2211_\\ud835\\udc57 \\u27e8\\ud835\\udc62_\\ud835\\udc56, \\ud835\\udc63_\\ud835\\udc57\\u27e9^2 . If we focus on the contribution of a single left embedding \\ud835\\udc62_\\ud835\\udc56 , we can observe that this is a quadratic function \\ud835\\udc62 \\u21a6 \\u2211_\\ud835\\udc57 \\u27e8\\ud835\\udc62, \\ud835\\udc63_\\ud835\\udc57\\u27e9^2 . Importantly, this is the same quadratic function that applies to all the \\ud835\\udc62_\\ud835\\udc56 (independent of \\ud835\\udc56 ). 
A quadratic function on \u211d^\ud835\udc51 can be represented compactly using a \ud835\udc51\u00d7\ud835\udc51 matrix, and this is exactly the role of the Gramian \ud835\udc3a^\ud835\udc63, and because the same function applies to all \ud835\udc62_\ud835\udc56, we can maintain a single estimate and reuse it across batches (unlike sampling-based methods that recompute the estimate at each step). There is additional discussion in Appendix C on the interpretation of this term.\\n\\n5) On the choice of the weight \ud835\udf06: as mentioned in the experiments, this is a hyper-parameter that we tuned using cross-validation. Intuitively, a larger \ud835\udf06 puts more emphasis on penalizing deviations from the prior, while a lower \ud835\udf06 emphasizes fitting the observations. We have experiments in Appendix C that explore this effect, e.g. the impact on the embedding distribution in Figure 4, and the impact on precision in Figure 5.\\n\\n6) In eq (1), \ud835\udc5b denotes the number of observed pairs (size of the training set). To simplify, we also define the Gramians as a sum over training examples, although in a recommendation setting, this can be rewritten as a sum over distinct users and distinct items. More precisely, if we let S be the set of users, and \ud835\udc53_s the fraction of training examples which involve user s, then \ud835\udc3a^\ud835\udc62=1/\ud835\udc5b \u2211_\ud835\udc56 \ud835\udc62_\ud835\udc56\u2297\ud835\udc62_\ud835\udc56 = \u2211_s\u2208S \ud835\udc53_s \ud835\udc62_s\u2297\ud835\udc62_s.\\n\\n7) We plan to open-source our TensorFlow implementation in the near future.\\n\\n[1] P. Neculoiu, M. Versteegh and M. Rotaru. Learning Text Similarity with Siamese Recurrent Networks. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016.\\n[2] M. Volkovs, G. Yu, T. Poutanen. DropoutNet: Addressing Cold Start in Recommender Systems. NIPS 2017.\\n[3] P. Covington, J. Adams, E. Sargin. Deep Neural Networks for YouTube Recommendations. Proceedings of the 10th ACM Conference on Recommender Systems (RecSys 2016).\\n[4] B. Neyshabur and N. Srebro. On symmetric and asymmetric lshs for inner product search. ICML 2015.\\n[5] A. Shrivastava and P. Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). NIPS 2014.\\n[6] X. Zhang, F. X. Yu, S. Kumar, and S. Chang. Learning spread-out local feature descriptors. In IEEE International Conference on Computer Vision (ICCV 2017).\\n[7] P. Vincent, A. de Brebisson, and X. Bouthillier. Efficient exact gradient update for training deep networks with very large sparse targets. In NIPS 2015.\"}",
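The weighted-Gramian identity discussed in points 1) and 4) above is easy to check numerically. Below is a minimal numpy sketch (not the authors' code; sizes, weights, and variable names are arbitrary illustration choices) contrasting the naive O(n^2 d) evaluation of the penalty with the O(n d^2) Gramian form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 16                       # number of pairs, embedding dimension
U = rng.normal(size=(n, d))          # left embeddings u_i
V = rng.normal(size=(n, d))          # right embeddings v_j
a = rng.uniform(0.5, 2.0, size=n)    # per-left-item weights a_i
b = rng.uniform(0.5, 2.0, size=n)    # per-right-item weights b_j

# Naive O(n^2 d) evaluation: 1/n^2 * sum_ij a_i b_j <u_i, v_j>^2
S = U @ V.T                          # S[i, j] = <u_i, v_j>
naive = (a[:, None] * b[None, :] * S**2).sum() / n**2

# Gramian-based O(n d^2) evaluation: <G_u, G_v> with weighted Gram matrices
G_u = (U * a[:, None]).T @ U / n     # G_u = 1/n * sum_i a_i u_i u_i^T
G_v = (V * b[:, None]).T @ V / n     # G_v = 1/n * sum_j b_j v_j v_j^T
gramian = np.sum(G_u * G_v)          # Frobenius inner product <G_u, G_v>

print(naive, gramian)                # agree up to floating-point error
assert np.isclose(naive, gramian)
```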
"{\"title\": \"Non-uniform weights\", \"comment\": \"Thank you for your comments, we will discuss each point below.\\n\\n1) The dot-product structure is important in many applications, especially in retrieval with very large corpora (since it allows efficient scoring using maximum-inner product search techniques [1, 2]). In addition to dot-product models, our methods can also be useful in more general architectures when used jointly with the Global Orthogonal regularizer proposed in [3], which \\\"spreads-out\\\" the embeddings by pushing the embedding distribution towards the uniform distribution. This was shown to improve generalization performance. In the last paragraph of Appendix C, we show that the Global Orthogonal regularizer can be written in terms of Gramians, thus our methods can be used in such models.\\n\\n2) Using non-uniform weights can be important, and it is supported by the methods we propose. They also support the use of a non-uniform sampling distribution, and non-uniform prior (as discussed in Appendix B). For non-uniform weights, if we define the weight of a left item i to be \\ud835\\udc4e_\\ud835\\udc56 and the weight of a right item \\ud835\\udc57 to be \\ud835\\udc4f_\\ud835\\udc57 , and define the penalty term as 1/\\ud835\\udc5b^2 \\u2211_\\ud835\\udc56 \\u2211_\\ud835\\udc57 \\ud835\\udc4e_\\ud835\\udc56 \\ud835\\udc4f_\\ud835\\udc57 \\u27e8\\ud835\\udc62_\\ud835\\udc56, \\ud835\\udc63_\\ud835\\udc57\\u27e9^2, then one can show, using the same argument as in Section 2.2, that this is equal to the matrix inner-product \\u27e8\\ud835\\udc3a^\\ud835\\udc62, \\ud835\\udc3a^\\ud835\\udc63\\u27e9 where \\ud835\\udc3a^\\ud835\\udc62, \\ud835\\udc3a^\\ud835\\udc63 are now weighted Gram matrices given by \\ud835\\udc3a^\\ud835\\udc62 = 1/\\ud835\\udc5b \\u2211_\\ud835\\udc56 \\ud835\\udc4e_\\ud835\\udc56 \\ud835\\udc62_\\ud835\\udc56\\u2297\\ud835\\udc62_\\ud835\\udc56 and similarly for \\ud835\\udc3a^\\ud835\\udc63 . One can then apply SAGram/SOGram to the weighted Gramians.\\n\\n3) It is our intention to open-source our TensorFlow implementation in the near future.\\n\\n[1] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015).\\n[2] Anshumali Shrivastava and Ping Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014).\\n[3] Xu Zhang, Felix X. Yu, Sanjiv Kumar, and Shih-Fu Chang. Learning spread-out local feature descriptors. In IEEE International Conference on Computer Vision (ICCV 2017).\"}",
"{\"title\": \"Quality of gradient estimates\", \"comment\": \"Thank you for your comments and for the suggestion.\\nFirst, one can make a formal connection between the quality of Gramian estimates and the quality of gradient estimates.\\nThe prior term can be written as 1/\\ud835\\udc5b \\u2211_i \\u27e8\\ud835\\udc62_\\ud835\\udc56, \\ud835\\udc3a^\\ud835\\udc63 \\ud835\\udc62_\\ud835\\udc56\\u27e9 , thus the partial derivative w.r.t. \\ud835\\udc62_\\ud835\\udc56 is \\u2207_\\ud835\\udc62\\ud835\\udc56 \\ud835\\udc54 = 2/\\ud835\\udc5b \\ud835\\udc3a^\\ud835\\udc63 \\ud835\\udc62_\\ud835\\udc56 . If the Gramian \\ud835\\udc3a^\\ud835\\udc63 is approximated by \\u011c^\\ud835\\udc63 , then the gradient estimation error is 2/\\ud835\\udc5b \\u2211_i \\u2016(\\ud835\\udc3a^\\ud835\\udc63\\u2212 \\u011c^\\ud835\\udc63) \\ud835\\udc62_\\ud835\\udc56\\u2016^2 = 2/\\ud835\\udc5b \\u2211_i \\u27e8(\\ud835\\udc3a^\\ud835\\udc63 \\u2212 \\u011c^\\ud835\\udc63)\\ud835\\udc62_\\ud835\\udc56,(\\ud835\\udc3a^\\ud835\\udc63 \\u2212 \\u011c^\\ud835\\udc63)\\ud835\\udc62_\\ud835\\udc56\\u27e9 which is equal to 2\\u27e8(\\ud835\\udc3a^\\ud835\\udc63 \\u2212 \\u011c^\\ud835\\udc63),(\\ud835\\udc3a^\\ud835\\udc63 \\u2212 \\u011c^\\ud835\\udc63)\\ud835\\udc3a^\\ud835\\udc62\\u27e9 , in other words, the estimation error of the right gradient is the \\\"\\ud835\\udc3a^\\ud835\\udc62 -weighted\\\" Frobenius norm of the left Gramian error.\\nWe generated these plots as suggested, on Wikipedia simple, and we observe the same trend for the gradient estimation errors as the Gramian estimation error in Figure 2.a. We will include this experiment in the updated version of the paper during rebuttal. Thanks again for the suggestion.\"}",
"{\"comment\": \".\", \"i_have_several_questions\": \"first, is using whole data or whole unobserved data necessary? Using whole data is better than sampling methods? I think it may depend, for some relative dense data such as in nlp-word embedding task, particularly for large word corpus, using whole data performs much worse than the sampling methods. The performance of whole data based models is largely determined by the weighting of the unobserved or negative examples. for example in [Bayer et al., 2017], they only use a constant weight and compare with very simple baseline. The model is not applicable for models with weights that associate with both users and items in the recommendation scenario. it is unknown whether whole data based method can beat state-of-the-art. do authors agree\\uff1f\\n\\n2 The model structure is limited to the dot product structure. Although it is a very popular structure in previous literature , it is not the case for deep models. A simple dot product structure is limited in modeling complicated relations. The common way is to add a full-connected layer on top of dot product. it seems that the current model does not support this popular structure.\\n\\n3 the current optimization method is limited to least square loss? what about logistic loss for classification\\n\\n4 The mathematical derivation in section2.2 is very hard to follow. Can you give some motivations and a little bit more details. \\n\\n5 what about the negative weighting design in equation 1?\\n6 eq.(1) is not clear ? why the first term is \\\\sum_i^n as the number of observed examples should be much larger than n\\nwhy the second term is \\\\sum_i^n\\\\sum_i^n, e,g, in recommender system, the number of user and items are different.\\n6 will you release the code if it is accepted. The mathematics are kinda very hard to follow for most readers. Do you think the algorithm is good to be used in industry?\", \"title\": \"Good paper\"}",
"{\"comment\": \"Definitely, it's a good paper.\\nSampling-based methods has dominated the main trend for many years, through BPR in recommendation field and negative sampling in word embedding. Some previous research proposed to train from whole data while their methods only focused on shadow linear models like matrix factorization. This paper proposed to extend the framework of learning from whole data to deep learning based embeddings by using Gramian estimates.\", \"several_questions\": \"1. Although the proposed scheme can get rid of sampling, the final layer must be an inner product. Will it limit the performance of the model?\\n2.The hyperparameter lambda is defined as the weight for negative samples. Is it reasonable to assign a uniform weight for all samples?\\n3.Could you please public the code for one of your evaluation tasks?\", \"title\": \"Very good papers.\"}",
"{\"comment\": \"This paper studies the problem of learning embeddings on large corpora, and proposes to replace the commonly used sampling mechanism by an online Gramian estimate. It seems like the proposed Gramian estimate allows a lot of information reuse (which is otherwise lost in baseline sampling methods) and hence improves the training.\\n\\nI liked the idea of maintaining an estimate of an important (and relatively small-sized) quantity to allow information reuse, and I think it has the potential to be generalized into similar types of problems as well.\", \"a_question_about_the_experiment\": \"in Section 4.2 it is shown that the maintained Gramian estimates are indeed better than sampling estimates. Perhaps a similar test can be done on the gradients, and hopefully the stochastic gradients given by the Gramian estimate are indeed closer to the full gradient, compared with the baseline sampling methods?\", \"title\": \"Interesting idea\"}"
]
} |
|
Hkes0iR9KX | DEEP GEOMETRICAL GRAPH CLASSIFICATION | [
"Mostafa Rahmani",
"Ping Li"
] | Most of the existing Graph Neural Networks (GNNs) are mere extensions of the Convolutional Neural Networks (CNNs) to graphs. Generally, they consist of several steps of message passing between the nodes followed by a global indiscriminate feature pooling function. In many data-sets, however, the nodes are unlabeled or their labels provide no information about the similarity between the nodes and the locations of the nodes in the graph. Accordingly, message passing may not propagate helpful information throughout the graph. We show that this conventional approach can fail to learn to perform even simple graph classification tasks. We alleviate this serious shortcoming of the GNNs by making them a two-step method. In the first step of the proposed approach, a graph embedding algorithm is utilized to obtain a continuous feature vector for each node of the graph. The embedding algorithm represents the graph as a point-cloud in the embedding space. In the second step, the GNN is applied to the point-cloud representation of the graph provided by the embedding method. The GNN learns to perform the given task by inferring the topological structure of the graph encoded in the spatial distribution of the embedded vectors. In addition, we extend the proposed approach to the graph clustering problem and a new architecture for graph clustering is proposed. Moreover, the spatial representation of the graph is utilized to design a graph pooling algorithm. We turn the problem of graph down-sampling into a column sampling problem, i.e., the sampling algorithm selects a subset of the nodes whose feature vectors preserve the spatial distribution of all the feature vectors. We apply the proposed approach to several popular benchmark data-sets and it is shown that the proposed geometrical approach strongly improves the state-of-the-art result for several data-sets. For instance, for the PTC data-set, we improve the state-of-the-art result by more than 22 %. | [
"Graph classification",
"Deep Learning",
"Graph pooling",
"Embedding"
] | https://openreview.net/pdf?id=Hkes0iR9KX | https://openreview.net/forum?id=Hkes0iR9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rylcQuoNxN",
"rylNypVmyN",
"ryl_hKczJV",
"B1lJaWuTCX",
"H1x4fS1KC7",
"H1lBLg1FR7",
"SylL8nCdA7",
"Syxu1oN9nX",
"HJg5UASP37",
"HkexO4FN3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545021473647,
1543879899778,
1543838127544,
1543500214700,
1543202059753,
1543200845024,
1543199821920,
1541192416399,
1541000786403,
1540818024301
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper926/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper926/Authors"
],
[
"ICLR.cc/2019/Conference/Paper926/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper926/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper926/Authors"
],
[
"ICLR.cc/2019/Conference/Paper926/Authors"
],
[
"ICLR.cc/2019/Conference/Paper926/Authors"
],
[
"ICLR.cc/2019/Conference/Paper926/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper926/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper926/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The extension of convnets to non-Euclidean data is a major theme of research in computer vision and signal processing. This paper is concerned with Graph structured datasets. The main idea seems to be interesting: to improve graph neural nets by first embedding the graph in a Euclidean space reducing it to a point cloud, and then exploiting the induced topological structure implicit in the point cloud.\\n\\nHowever, all reviewers found this paper hard to read and improperly motivated due to poor writing quality. The experimental results are somewhat promising but not completely convincing, and the proposed framework lacks a solid theoretical footing. Hence, the AC cannot recommend acceptance at ICLR-2019.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Scores low on presentation quality, motivation and experimental results\"}",
"{\"title\": \"Response\", \"comment\": \"1 - Thanks for the comment. The paper has been thoroughly edited.\\n\\n2- The novel contributions are explicitly explained in abstract and in Section 1. The main contribution is to translate the graph analysis task into a point-cloud analysis task. \\n\\n3 - The proposed pooling method is compared with existing pooling method (including DGCNN and Diffpool). \\nIn DGCNN, the nodes of the graph are sorted. It is evident that sorting in a one dimensional space can destroy the topological structure of the graph. We showed in the synthetic experiments that DGCNN fails to learn to solve the simple graph classification tasks.\\n \\nIn DiffPool the node assignment is learned using a separate GNN. If two nodes have similar adjacency vectors, they will be assigned to the same cluster in the pooled network. However, we think that DiffPool may not be applicable to large graphs because the adjacency matrices of the large graphs are mostly very sparse. Thus, even if two nodes lie in the same cluster, their adjacency vectors can be completely orthogonal. Thus, the network can not figure out which nodes belong to the same cluster. In addition, DiffPool does not preserve the sparsity of the adjacency matrix. It can increase the computation complexity for large graphs.\\nIn sharp contrast, the proposed approach do not need to process the adjacency matrix. The proposed pooling method aims to preserve the spatial distribution of the feature vectors of the nodes. In addition, we can leverage randomized column sampling techniques to significantly reduce its computation complexity for large graphs.\"}",
"{\"title\": \"Response\", \"comment\": \"Dear authors,\\nThank you for your response. I will stick to my initial score. The paper needs a lot more work. Especially the description of the novel contributions (as claimed: pooling etc.) need an overhaul and need to be compared to existing graph pooling methods.\"}",
"{\"title\": \"Review updated\", \"comment\": \"Thanks for your reply, this clarified some misunderstandings. I have updated my review accordingly.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"\\u0656First we would like to thank the reviewer for his/her helpful comments.\\n\\n--- The motivational example: The problem is not the mean/max aggregator. Even if we use global add pooling, the network can fail to distinguish the clusters. For instance, suppose each node is connected to a certain number of nodes within its cluster but the size of the clusters can vary. In this case it is not possible for the GNN to learn the number of the clusters using graph convolution followed by global add pooling. We actually tried global adding, it did not change the result. Please note even DGCNN completely failed to learn anything. \\nIn the revision, we have added another synthetic experiment. In this new experiment we show that the GNNs fail to learn to cluster graphs. In contrary, the proposed approach turns the graph into a point-cloud in the embedding space. In the revision, we show how we can use the point-cloud representation to design a graph clustering method. \\n \\nOur main motivation for employing graph embedding is to transform the graph analysis task to a point-cloud analysis task. With the point-cloud representation, the network is aware of the differences/similarities between the nodes and it is also aware of the location of a node in the graph. This extra information can enhance the ability of the network in inferring the structure of the graph. We supported this idea with experiments on synthetic data and real data. \\n \\n--- Contradiction between clustering in feature space with clustering nearby nodes: Basically, a graph embedding method (such as DeepWalk) projects nearby nodes to nearby data points in the embedding space. Thus, there is no contradiction between the proposed approach with the method based on clustering nearby nodes.\\nIn addition, clustering nearby nodes is not a data driven method. There is no rigorous analysis showing that clustering nearby nodes leads to the best performance. In the proposed approach, the pooling is carried out dynamically. Thus, the network learns the pooling operation in a data-driven way. For instance, two nodes can be far from each other on the graph but they could have similar structural role. Thus, the network can learn to merge even the nodes which are far from each other.\\n \\n---The success with synthetic data (\\u201cI contribute this success to the proposed pooling\\u201d): The success of the proposed approach with the synthetic experiment can not be contributed to the pooling layers because we reported the result of the network which does not use the pooling layers.\\nThe reason that the proposed approach works perfectly in this case is that graph embedding transforms the graph analysis task into a point-cloud analysis task. Deep networks has been very successful in analyzing point-cloud data [arXiv:1612.00593].\\n \\n--- Comparing to other pooling methods: We are comparing to two other methods, both published in 2018. They employ new graph pooling operators. One is DiffPool and the other one is DGCNN.\\nIn DGCNN, the nodes of the graph are sorted. It is evident that sorting in a one dimensional space can destroy the topological structure of the graph. We showed in the synthetic experiments that DGCNN fails to learn to solve the simple graph classification tasks.\\n \\nIn DiffPool the node assignment is learned using a separate GNN. If two nodes have similar adjacency vectors, they will be assigned to the same cluster in the pooled network. 
However, we think that DiffPool may not be applicable to large graphs because the adjacency matrices of large graphs are mostly very sparse. Thus, even if two nodes lie in the same cluster, their adjacency vectors can be completely orthogonal, and the network cannot figure out which nodes belong to the same cluster. In addition, DiffPool does not preserve the sparsity of the adjacency matrix, which can increase the computation complexity for large graphs.\\nIn sharp contrast, the proposed approach does not need to process the adjacency matrix. The proposed pooling method aims to preserve the spatial distribution of the feature vectors of the nodes. In addition, we can leverage randomized column sampling techniques to significantly reduce its computation complexity for large graphs.\\n\\n--- \u201cHow does it (pooling method) perform when using only the features generated by a GCN?\u201d: Basically, we are able to use the proposed pooling method because the embedding step provides a spatial representation of the graph. The proposed pooling method without the spatial information is not meaningful (it would be similar to down-sampling a point-cloud without the locations of the points). \\n\\n--- Experiments and variances: Some of the papers we compare with did not report variances. In order to have a consistent table, we did not include the information about variances in the table. We have added the information about the variances to the revision. \\nFor the experiments with real data, we follow the setting used in the previous papers (PSCN, ECC, DGCNN).\"}",
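To make the two-step idea concrete, here is a toy sketch of the first step — turning a graph into a point-cloud. This is an illustration only: DeepWalk is replaced by a spectral embedding of the normalized Laplacian as a stand-in (any node embedding that places nearby/structurally similar nodes close together plays the same role), and `embed_graph` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def embed_graph(A, dim=8):
    # Stand-in for DeepWalk: spectral embedding from the normalized
    # Laplacian L = I - D^{-1/2} A D^{-1/2}; returns one vector per node.
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)           # ascending eigenvalues
    return vecs[:, 1:dim + 1]                # skip the trivial eigenvector

# Toy graph: two 5-node cliques joined by a single edge.
A = np.zeros((10, 10))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0

X = embed_graph(A)                           # (10, 8) point-cloud
# A GNN (or any point-cloud network) now operates on X; nodes of the same
# clique land close together in the embedding space:
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(X[0] - X[9]))
```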
"{\"title\": \"Response to reviewer 2\", \"comment\": \"First we would like to thank the reviewer for his/her constructive comments. We thoroughly edited the paper to address the comments.\\n\\nThe paper has an important message. A GNN without an embedding step can be meaningless. We show this fact with synthetic experiments and the experiments with real data. With synthetic data we show that the GNN basically can not learn to perform the given simple tasks. For some of the real data-sets, the improvement is huge. For instance, for the PTC data-set the improvement is 22 % (62 ==> 76). Moreover, a new architecture based on the proposed idea for graph clustering is proposed. \\n\\n--- Novelty of the presented architecture: In this paper we do not propose a new convolution layer or a new architecture for the GNNs. We employ a simple GNN depicted in Figure 1. \\nThe main contribution of the paper is twofold.\\nFirst, we make an important point that the GNNs can fail to infer the structure of the unlabeled graphs. We show this important point through clear synthetic experiments. The experiments show that convolutional graph neural networks can fail to learn to perform even simple graph classification tasks. We argue that similar to the NLP tasks, a GNN requires an embedding step to make the network aware of the differences/similarities between the nodes of the graph. The embedding step turns the graph analysis task into a point-cloud analysis problem. \\nIn addition, we showed that although we use a simple GNN, the embedding step can significantly improve the results for many of the real datasets. For instance:\", \"ptc\": \"14 % higher accuracy (22 % improvement !)\", \"mutag\": \"6 % higher accuracy (6 % improvement)\\nThese are significant improvements while we are employing a simple GNN.\\nIn addition, we have added a small section to the revision to extend the proposed approach to a graph clustering method. The proposed method leverages the point-cloud representation of the graph. We have added a new experiment which shows that the conventional GNNs cannot be trained to cluster unlabeled graphs. \\n\\nOur second main contribution is the proposed pooling method. The presented idea is simple. We merge the joint closest nodes in the feature space because the embedding step provides a spatial representation of the graph. In contrast to the previous methods, we do not need to run a graph clustering algorithm. In addition, it can down-sample the graph by a fix predefined factor. Moreover, in contrast to the soft pooling method, it is applicable to the sparsely connected graphs and the sparsity of the adjacency is not lost. \\n \\n--- \\\"Graph pooling is also a relatively well-established idea and has been investigated in several papers before. The authors should provide more details on their approach and compare it to existing graph pooling approaches.\\\":\\nYes, pooling is an established idea in processing graphs and point-clouds. In this paper, we propose a novel graph pooling method. In the proposed approach we do not need to run a graph clustering method and the proposed method uses the spatial distribution of the nodes to perform pooling. The proposed method merges the joint nodes which are the closest nodes in the spatial domain. Thus, even if two nodes are far from each other on the graph, they can be merged if they have similar topological roles in the graph. 
In this paper, we are comparing the proposed pooling method with two new deep learning based approaches which employ new graph pooling layers (DGCNN and DiffPool).\\n \\n--- Clarifying the pooling method: In the revised paper, we have edited the pooling method. An example has been added to the revision to explain the proposed method.\\n \\n--- \"The experiments do not provide standard deviation. Graph classification problems usually exhibit a large variance of the means. Hence, it is well possible that the difference in mean is not statistically significant\": Unfortunately, some of the deep learning based methods that we compare with did not report the variance. Thus, even if we report the variance, the reader cannot make a comparison. We have added the information about the variance of the proposed approach on the real data-sets to the revised paper. \\n \\n--- \"The experiments look OK but are not ground-breaking and are not enough to make this paper more than a mere combination of existing methods.\":\\nWe make an important point in this paper that the GNNs can fail to learn to perform even simple graph analysis tasks. We provide clear evidence that message passing without a proper input does not necessarily infer the structure of the graph. In addition, it is shown that with the embedding step, our GNN can significantly advance the state-of-the-art results on the real data-sets. For instance, for the PTC data-set, the improvement is more than 22 %.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"First we would like to thank the reviewer for his/her helpful comments. We thoroughly edited the paper and added new materials.\\n\\nThe paper has an important message. A GNN without an embedding step can be meaningless. We show this fact with synthetic experiments and the experiments with real data. With synthetic data we show that the GNN basically can not learn to perform the given simple tasks.\\u00a0For some of the real data-sets, the improvement is huge. For instance, for the PTC data-set the improvement is 22 % (62 ==> 76).\\u00a0Moreover, a new architecture based on the proposed idea for graph clustering is proposed. \\n\\n--- Typos: We thoroughly edited the paper. We have edited those parts which could be vague for the reader. In addition, we have added an example to clarify how the pooling method works. \\n \\n---Number of pixels: The reviewer is right about the number of pixels in a 3*3 convolution window. But in that paragraph we are saying that a pixel has 8 neighboring pixels in a 3*3 window. These two statements are not in contradiction.\\n \\n--- \\\"The definition of W in Eq(2) is vague. Is this W shared across all nodes? If so, what\\u2019s the difference between this and regular GNN layers except for replacing summation with a max function?\\\": The weight matrix W is shared across all the nodes. In this paper we use the conventional graph convolution. We do not propose a new method for graph convolution.\\nOur contribution is twofold.\\nFirst, we make an important point about the GNNs. We argue that similar to the neural network employed for the NLP tasks, GNNs needs an embedding step to make the GNN aware of the difference/similarity between the nodes of the graph. The embedding step turns the graph analysis task into a point-cloud analysis problem. We support our idea with multiple experiments. We show through clear synthetic experiments that the convolutional graph neural networks can fail to learn to perform even simple graph classification tasks. \\nIn addition, we show that although we employ a simple GNN, the embedding step can significantly improve the results for many of the real data-sets. For instance:\", \"ptc\": \"14 % higher accuracy (22 % improvement !)\", \"mutag\": \"6 % higher accuracy (7 % improvement !)\\nThese are significant improvements while we are employing a simple GNN. \\nMoreover, we have added a small section to the revision which shows that we can use the spatial representation of the graph to design a new graph clustering method (Fig. 3). We have provided a new experiment which shows that the conventional GNN can not be trained to cluster unlabeled graphs. The proposed architecture for graph clustering leverages the point-cloud representation of the graph. It combines the local representation of the nodes with the global representation of the graph to identify the clusters of the graph. \\n\\nOur second main contribution is the proposed pooling method. The presented idea is simple. We merge the joint closest nodes in the feature space because the embedding step provides a spatial representation of the graph. In contrast to the previous methods, we do not need to run a graph clustering algorithm. It can down-sample the graph by a fix predefined factor (by a factor of $2^z$ where z can be determined). 
Moreover, in contrast to the soft pooling method, it is applicable to the sparsely connected graphs and the sparsity of the adjacency matrix is not lost.\\n \\n--- the mlp block: The first block is a simple multi-layer perceptron. The first layer of the mlp block transforms the input to a 64-dimensional vector. The next layer of the mlp block transforms the 64-dimensional vector to a 128-dimensional vector, and so on. We have slightly changed this block in the revision. \\n \\n--- The explanation of the pooling method: We have edited the algorithm and we have added an example to the revised paper. \\n \\n---\"I think the improvement is not that big. For some data-sets, the network without pooling layers even performs better at one dataset. The authors didn\u2019t provide enough analysis on these parts\":\\nWe report results on many real data-sets and advance the state-of-the-art for most of them. For some of the data-sets we advance the state-of-the-art result significantly. For instance, we improve the result for the PTC data-set by more than 22 %.\\nThe network which has the pooling layers yields the state-of-the-art results for some of the data-sets. In the data-sets which we used in our experiments, the graphs are small. Thus, the network which does not have the pooling layers also yields competitive results. \\nA similar phenomenon was observed in previous papers too. For instance, when Pointnet++ [arXiv:1706.02413] was proposed to add local feature aggregation to the Pointnet network [arXiv:1612.00593], the results were slightly improved for only a few data-sets. There is no rigorous analysis of deep networks available in the literature. We do not have a clear analysis of the effect of the pooling layers on the performance of the GNNs.\"}",
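Since the pooling operation is only described in prose in this thread, the following is one possible reading of "merging the joint closest nodes": greedily pair mutually free nodes by smallest feature-space distance, halving the node count per application (a factor of 2^z after z applications). The merge rules — max over the pair's features, union of the pairs' neighborhoods — and the assumption of an even node count are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def pool_closest_pairs(X, A):
    # Greedily merge the closest remaining node pairs in feature space.
    # X: (n, d) node features, A: (n, n) adjacency; assumes n is even.
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    free, pairs = set(range(n)), []
    for i, j in zip(*np.unravel_index(np.argsort(D, axis=None), D.shape)):
        if i < j and i in free and j in free:
            pairs.append((i, j))
            free -= {i, j}
    X_new = np.stack([np.maximum(X[i], X[j]) for i, j in pairs])
    A_new = np.zeros((len(pairs), len(pairs)))
    for p, (i, j) in enumerate(pairs):
        for q, (k, l) in enumerate(pairs):
            if p != q and A[np.ix_([i, j], [k, l])].any():
                A_new[p, q] = 1.0      # union of the pairs' neighborhoods
    return X_new, A_new

rng = np.random.default_rng(0)
X = rng.random((8, 4))
A = (rng.random((8, 8)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T         # symmetric, zero diagonal
for _ in range(2):                     # z = 2 applications
    X, A = pool_closest_pairs(X, A)
print(X.shape)                         # (2, 4): down-sampled by 2^2
```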
"{\"title\": \"A paper addressing an interesting problem, but lacks clarity and hard to understand, tech novelty is unknown\", \"review\": \"This paper proposes a deep GNN network for graph classification problems using their adaptive graph pooling layer. It turns the graph down-sampling problem into a column sampling problem. The approach is applied to several benchmark datasets and achieves good results.\\n\\nWeakness\\n\\n1.\\tThis paper is poorly written and hard to follow. There are lots of typos even in the abstract. It should be at least proofread by an English-proficient person before submitted. For example, in the last paragraph before Section 3. \\u201cIn Ying et al. \\u2026\\u2026. In Ying et al.\\u201d\\n2.\\tIn paragraph 1 of Section 3, there should be 9 pixels around the center pixel including itself in regular 3x3 convolution layers.\\n3.\\tThe definition of W in Eq(2) is vague. Is this W shared across all nodes? If so, what\\u2019s the difference between this and regular GNN layers except for replacing summation with a max function?\\n4.\\tThe network proposed in this paper is just a simple CNN. GNN can adopt such kind of architectures as well. And I didn\\u2019t get numbers of first block in Figure 1. The input d is 64?\\n5.\\tThe algorithm described in Algorithm 1 is hard to follow. There are some latex tools for coding the algorithms.\\n6.\\tThe authors claim that improvements on several datasets are strong. But I think the improvement is not that big. For some datasets, the network without pooling layers even performs better at one dataset. The authors didn\\u2019t provide enough analysis on these parts.\", \"strength\": \"1.\\tThe idea used in this paper for graph nodes sampling is interesting. But it needs more experimental studies to support this idea.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lack of novelty and poorly executed experiments\", \"review\": \"The authors propose a method for learning representations for graphs. The main purpose is the classification of graphs.\\n\\nThe topic is timely and should be of interest to the ICLR community.\", \"the_proposed_approach_consists_of_four_parts\": \"Initial feature transformation\\nLocal features aggregation\\nGraph pooling\\nFinal aggregator\\n\\nUnfortunately, each of the part is poorly explained and/or a method that has already been used before. For instance, the local feature aggregation is more or less identical to a GCN as introduced by Kipf and Welling. There are now numerous flavors of GCNs and the proposed aggregation function in (2) is not novel. \\n\\nGraph pooling is also a relatively well-established idea and has been investigated in several papers before. The authors should provide more details on their approach and compare it to existing graph pooling approaches. \\n\\nNeither (1) nor (4) are novel contributions. \\n\\nThe experiments look OK but are not ground-breaking and are not enough to make this paper more than a mere combination of existing methods. \\n\\nThe experiments do not provide standard deviation. Graph classification problems usually exhibit a large variance of the means. Hence, it is well possible that the difference in mean is not statistically significant. \\n\\nThe paper could also benefit from a clearer explanation of the method. The explanation of the core parts (e.g., the graph pooling) are difficult to understand and could be made much clearer.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Poorly motivated approach with experimental merits\", \"review\": \"The authors argue that graph neural networks based on the message passing frameworks are not able to infer the topological structure of graphs. Therefore, they propose to use the node embedding features from DeepWalk as (additional) input for the graph convolution. Moreover, a graph pooling operator is proposed, which clusters node pairs in a greedy fashion based on the l2-distances between feature vectors. The proposed methods are evaluated on seven common benchmark datasets and achieve better or comparable results to existing methods. Moreover, the method is evaluated using synthetic toy examples, showing that the proposed extensions help to infer topological structures.\\n\\nA main point of criticism is that the authors claim that graph convolution is not able to infer the topological structure of a graph when no labels are present. In fact the graph convolution operator is closely related to the Weisfeiler-Lehman heuristic for graph isomorphism testing and can distinguish most graphs in many practical application. Therefore, it is not clear why DeepWalk features would increase the expressive power of graph convolution. It should be stated clearly which structural properties can be distinguished using DeepWalk features, but no with mere graph convolution.\", \"the_example_on_page_4_provides_only_a_weak_motivation_for_the_approach\": \"The nodes v_1 and v_2 should be indistinguishable since they are generated using the same generator. Thus, the problem is the mean/max pooling, and not the graph convolution. When using the sum-aggregation and global add pooling, graphs with two clusters and graphs with three clusters are distinguishable again. Further insights how DeepWalk helps to learn more \\\"meaningful\\\" topological features are required to justify its use.\\n\\nClustering nodes that are close in feature space for pooling is a reasonable idea. However, this contradicts the intuition of clustering neighboring nodes in the graph. A short discussion of this phenomenon would strengthen the paper in my opinion.\\n\\nThere are several other questions that not been answered adequately in the article.\\n\\n* The 10-fold cross validation is usually performed using an additional validation set. What kind of stopping criteria has bee use? * It would be helpful to provide standard deviations on these small datasets (although a lot of papers sadly dismiss them).\\n* I really like the use of synthetic data to show superior expressive power, but I am unsure whether this can be contributed to DeepWalk or the use of the proposed pooling operator (or both). Please divide the results for these toy experiments in \\\"GEO-DEEP\\\" and \\\"GEO-deep no pooling\\\". As far as I understand, node features in different clusters should be indistinguishable from each other (even when using DeepWalk), so I contribute this success to the proposed pooling operator.\\n* A visualization of the graphs obtained by the proposed pooling operator would be helpful. How do the coarsened graphs look like? Given that any nodes can end up in the same cluster, and the neighborhood is defined to be the union of the neighboring nodes of the node pairs, I guess coarsened graphs are quite dense.\\n* DiffPool (NIPS 2018, Ying et al.) learns assignment matrices based on a simple GCN model (and thus infers topological structure from message passing). How is the proposed pooling approach related to DiffPool (except that its non-differentiable)? 
How does it perform when using only the features generated by a GCN? How does it compare to other pooling approaches commonly used, e.g., Graclus? At the moment, it is quite hard to judge the benefits of the proposed pooling operator in comparison to others.\\n\\n\\nIn summary, the paper presents promising experimental results, but lacks a theoretical justification or convincing intuition for the proposed approach. Therefore, at this point I cannot recommend its acceptance.\", \"minor_remarks\": [\"p2: The definition of \\\"neighbour set\\\" is needless in its current form.\", \"p2: The discussion of graph kernels neglects the fact that many graph kernels compute feature vectors that can be used with linear SVMs.\", \"-----------\"], \"update\": \"The authors' comment clarified some misunderstandings. I now agree that the combination of DeepWalk features and GNNs can encode more/different topological information. I still think that the paper does not make this very clear and does not provide convincing examples. I have updated my score accordingly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BkloRs0qK7 | A comprehensive, application-oriented study of catastrophic forgetting in DNNs | [
"B. Pfülb",
"A. Gepperth"
] | We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning.
A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.
As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment with previous works on CF.
Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models. | [
"incremental learning",
"deep neural networks",
"catatrophic forgetting",
"sequential learning"
] | https://openreview.net/pdf?id=BkloRs0qK7 | https://openreview.net/forum?id=BkloRs0qK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxTgMHJgV",
"ryeHw6IOAQ",
"ryereK-TnQ",
"B1ljgG7inm",
"H1lXXQf5n7"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544667636537,
1543167324661,
1541376236718,
1541251571165,
1541182234730
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper925/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper925/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper925/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper925/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper925/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper has two main contributions. The first is that it proposes a specific framework for measuring catastrophic forgetting in deep neural networks that incorporates three application-oriented constraints: (1) a low memory footprint, which implies that data from prior tasks cannot be retained; (2) causality, meaning that data from future tasks cannot be used in any way, including hyperparameter optimization and model selection; and (3) update complexity for new tasks that is moderate and also independent of the number of previously learned tasks, which precludes replay strategies. The second contribution is an extensive study of catastrophic forgetting, using different sequential learning tasks derived from 9 different datasets and examining 7 different models. The key conclusions from the study are that (1) permutation-based tasks are comparatively easy and should not be relied on to measure catastrophic forgetting; (2) with the application-oriented contraints in effect, all of the examined models suffer from catastrophic forgetting (a result that is contrary to a number of other recent papers); (3) elastic weight consolidation provides some protection against catastrophic forgetting for simple sequential learning tasks, but fails for more complex tasks; and (4) IMM is effective, but only if causality is violated in the selection of the IMM balancing parameter. The reviewer scores place this paper close to the decision boundary. The most negative reviewer (R2) had concerns about the novelty of the framework and its application-oriented constraints. The authors contend that recent papers on catastrophic forgetting fail to apply these quite natural constraints, leading to the deceptive conclusion that catastrophic forgetting may not be as big of a problem as it once was. The AC read a number of the papers mentioned by the authors and agrees with them: these constraints have been, at least at times, ignored in the literature, and they shouldn't be ignored. The other two reviewers appreciated the scope and rigor of the empirical study. On the balance, the AC thinks this is an important contribution and that it should appear at ICLR.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Catastrophic forgetting: not dead yet\"}",
"{\"title\": \"Update after reading author response\", \"comment\": \"Thank you for your comprehensive response and for updating your paper. The story is now much clearer to me, and as a result, the conclusion is more significant. I will increase my score.\\n\\nI do, however, have suggestions for two changes, which I hope you will incorporate in the final version of your paper. \\n\\nFirstly, regarding your point 3) above, I am happy that you added some text to your paper, but I actually like your explanation here more. Specifically, knowing *what* you expect (e.g., D9-1 adds less new information compared to D5-5, so one might expect CF to be less pronounced here) is valuable to prepare the reader for what you may find later in the paper. Right now, you only say that you expect this to \\\"have an impact on CF\\\". Nothing major, but I think you will help the reader by detailing your expectation.\\n\\nSecondly, regarding your point 6), I agree that the colouring helps to underline your conclusion, but I find the chosen colour scheme quite hard on the eyes. Also, colour blind readers will find it difficult to distinguish between the red and green. I suggest you change the colour scheme, e.g. to a sequential, colour blind safe scheme from http://colorbrewer2.org/. This would also make the table more readable if printed in greyscale. :)\\n\\nThank you for your hard work and good luck with your paper!\"}",
"{\"title\": \"Interesting topic of limited novelty\", \"review\": \"The paper presents a study of the application of some well known methods on 9 datasets focusing on the issue of catastrophic forgetting when considering a sequential learning task in them. In general the presentation of concepts and results is a bit problematic and unclear. Comments, such that the paper presents ' a novel training and model selection paradigm for incremental learning in DNNs ' is not justified. A better description of the results, e.g., in Table 3 should be presented, as well a better linking with the findings; a better structure of the latter would also be required to improve consistency of them. Improving these could make the paper candidate for a poster presentation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An empirical study of CF, but more recent methods could have been also added for the study\", \"review\": \"Thanks for the updates and rebuttals from the authors.\\n\\nI now think including the results for HAT may not be essential for the current version of the paper. I now understand better about the main point of the paper - providing a different setting for evaluating algorithms for combatting CF, and it seems the widespread framework may not accurately reflect all aspects of the CF problems. \\n\\nI think showing the results for only 2 tasks are fine for other settings except for DP10-10 setting, since most of them already show CF in the given framework for 2 tasks. Maybe only for DP10-10, the authors can run multiple tasks setting, to confirm their claims about the permuted datasets. (but, I believe the vanilla FC model should show CF for multiple permuted tasks.)\\n\\nI have increased my rating to \\\"6: Marginally above acceptance threshold\\\" - it could have been much better to at least give some hints to overcome the CF for the proposed setting, but I guess giving extensive experimental comparisons could be valuable for a publication. \\n\\n=====================\", \"summary\": \"The paper evaluates several recent methods regarding catastrophic forgetting with some stricter application scenarios taken into account. They argue that most methods, including EWC and IMM, are prone to CF, which is against the argument of the original paper.\", \"pro\": [\"Extensive study on several datasets, scenarios give some intuition and feeling about the CF phenomenon.\"], \"con\": [\"There are some more recent baselines., e.g., Joan Serr\\u00e0, D\\u00eddac Sur\\u00eds, Marius Miron, Alexandros Karatzoglou, \\\"Overcoming catastrophic forgetting with hard attention to the task\\\" ICML2018, and it would be interesting to see the performance of those as well.\", \"The authors say that the permutation based data set may not be useful. But, their experiments are only on two tasks, while many work in literature involves much larger number of tasks, sometimes up to 50. So, I am not sure whether the paper's conclusion that the permutation-based SLT should not be used since it's only based on small number of tasks.\", \"While the empirical findings seem useful, it would have been nicer to propose some new method that can get around the issues presented in the paper.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A large and interesting analysis of CF in DNNs\", \"review\": \"# [Updated after author response]\\nThank you for your response. I am happy to see the updated paper. In particular, the added item in section 1.3 highlights where the novelty of the paper lies, and as a consequence, I think the significance of the paper is increased. Furthermore, the clarity of the paper has increased. \\n\\nIn its current form, I think the paper would be a valuable input to the deep learning community, highlighting an important issue (CF) for neural networks. I have therefore increased my score.\\n\\n------------------------------------------\\n\\n# Summary\\nThe authors present an empirical study of catastrophic forgetting (CF) in deep neural networks. Eight models are tested against nine datasets with 10 classes each but a varying number of samples. The authors construct a number of sequential learning tasks to test the model performances in different scenarios. The main conclusion is that CF is still a problem in all models, despite claims in other papers.\\n\\n# Quality\\nThe paper shows healthy criticism of the methods used to evaluate CF in previous works. I very much like this.\\n\\nWhile I like the different experimental set-ups and the attention to realistic scenarios outlined in section 1.2, I find the analysis of the experiments somewhat superficial. The accuracies of each model for each task and dataset are reported, but there is little insight into what causes CF. For instance, do some choices of hyperparameters consistently cause a higher/lower degree of CF across models? I also think the metrics proposed by Kemker et al. (2018) are more informative than just reporting the last and best accuracy, and that including these metrics would improve the quality of the paper.\\n\\n# Clarity\\nThe paper is generally clearly written and distinct paragraphs are often highlighted, which makes reading and getting an overview much easier. In particular, I like the summary given in sections 1.3 and 1.4.\\n\\nSection 2.4 describing the experimental setup could be clearer. It takes a bit of time to decipher Table 2, and it would have been good with a few short comments on what the different types of tasks (D5-5, D9-1, DP10-10) will tell us about the model performances. E.g. what do you expect to see from the experiments of D5-5 that is not covered by D9-1 and vice versa? And why are the number of tasks in each category so different (8 vs 3 vs 1)?\\n\\nI am not a huge fan of 3D plots, and I don't think they do anything good in section 4. The perspective can make it tricky to compare models, and the different graphs overshadow each other. I would prefer 2D plots in the supplementary, with a few representative ones shown in the main paper. I would also experiment with turning Table 3 into a heat map.\\n\\n# Originality\\nTo my knowledge, the paper presents the largest evaluation of CF in terms of evaluated datasets. Kemker et al. (2018) conduct a somewhat similar experiment using fewer datasets, but a larger number of classes, which makes the CF even clearer. I think it would be good to cite this paper and briefly discuss it in connection with the current work.\\n\\n# Significance\\nThe paper is mostly a report of the outcome of a substantial experiment on CF, showing that all tested models suffer from CF to some extent. While this is interesting and useful to know, there is not much to learn in terms of what can cause or prevent CF in DNNs. 
The paper's significance lies in showing that CF is still a problem, but there is room for improvement in the analysis of the outcome of the experiments.\\n\\n# Other notes\\nThe first sentence of the second paragraph in section 5 seems to be missing something.\\n\\n# References\\nKemker, R., McClure, M., Abitino, A., Hayes, T., & Kanan, C. (2018). In AAAI Conference on Artificial Intelligence. https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16410\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SJgsCjCqt7 | Variational Autoencoders with Jointly Optimized Latent Dependency Structure | [
"Jiawei He",
"Yu Gong",
"Joseph Marino",
"Greg Mori",
"Andreas Lehrmann"
] | We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters as well as the latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model. | [
"deep generative models",
"structure learning"
] | https://openreview.net/pdf?id=SJgsCjCqt7 | https://openreview.net/forum?id=SJgsCjCqt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lBColRJN",
"BJg0X5SnRX",
"ByxGUZqcCQ",
"BkeRKhQ5Rm",
"B1l6cBxDAX",
"BJxeiNEXR7",
"HyeaxR-fAX",
"SklYoT-G0X",
"HyxKzFZfAm",
"HkeyruWzAm",
"BygZQun5nQ",
"rJgSaaLv3m",
"H1eOmiKiom"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544584140701,
1543424549577,
1543311690213,
1543285894053,
1543075221079,
1542829208063,
1542753780592,
1542753696824,
1542752529079,
1542752311164,
1541224472594,
1541004733064,
1540229919682
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper924/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper924/Authors"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper924/Authors"
],
[
"ICLR.cc/2019/Conference/Paper924/Authors"
],
[
"ICLR.cc/2019/Conference/Paper924/Authors"
],
[
"ICLR.cc/2019/Conference/Paper924/Authors"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper924/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths:\\nThis paper develops a method for learning the structure of discrete latent variables in a VAE. The overall approach is well-explained and reasonable.\", \"weaknesses\": \"Ultimately, this is done using the usual style of discrete relaxations, which come with tradeoffs and inconsistencies.\", \"consensus\": \"The reviewers all agreed that the paper is above the bar.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Slightly hacky but still good progress\"}",
"{\"title\": \"Update to author rebuttal\", \"comment\": \"Thank you for the answers to questions regarding the LadderVAE and the choice of prior distribution. The outcome of both the node perturbations and the linear conditional density experiment is in line with I expected. Overall, I the contribution of the work is bit clearer now and I'm raising my score to reflect this.\\n\\nThe primary actionable recommendation I have is to add visualizations of the data under different node perturbations to the supplementary material for the best model learned on MNIST/Omniglot so that readers may have a better intuition for what low-frequency structural changes and high-frequency elements correspond to.\"}",
"{\"title\": \"Further improvement\", \"comment\": \"Thank you for your further updates which I think improve the paper further. I have consequently decided to increase my score to an 8. One final question that would be good to answer in the camera ready if you can would be to try and see if you can establish what dependency structure (if any) for the generator is compatible will the learned encoder (i.e. is there any generative model for which the learned encoder structure is a faithful inverse). This is not a critical point though and not something I would expect to be successfully addressed during the discussion period.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We appreciate the positive feedback and are grateful for the helpful insights; they have certainly improved our presentation.\\n\\n(encoder/decoder relationship) As suggested, we have computed mean and standard deviation of the experiment with fully-connected encoder graph and learned decoder graph (5 runs). We have added a discussion of these results to section 4.3 and have also included them in Appendix C.1. Furthermore, we have expanded our discussion of the relationship between the structures of the generative model and its corresponding posterior (first paragraph on page 5), which we hope provides a clearer impression of (1) their connection; and (2) the effects resulting from our parameter sharing approach.\\n(3/4) Computing mean values and standard deviations for the experiments in Table 1 requires training/evaluation of a total of 60 models (5 training runs of 4 models on 3 datasets) w.r.t. 3 performance metrics (LL/ELBO/KL). The majority of these experiments is still running and, for the sake of consistency and to avoid confusion, we decided not to do a partial update of Table 1, which would have resulted in a mix of averaged results and single-run results. An update with error bars for all entries of Table 1, consistent with the supplemental material, will be included in the camera-ready version.\\n(5) Indeed, we implement the FC-VAE as a VAE with predefined structure $c_{i,j}=1$ for all $i>j$. We have added this information to section 4.3.\\n(6) We have changed the wording in section 5.3, so that the presented arguments are not perceived too strong.\"}",
"{\"title\": \"Most concerns adressed, performance comparison remains weak\", \"comment\": \"Thanks for your answer corrections, and additions to the paper. This mainly resolves the issues I noted. As such I see your contribution as an alternative parametrization compared to the FC-VAE which changes the optimization landscape, favoring more sparse latent graphs. This is worth of interest, and I'm raising my rating to reflect that.\\n\\nThe comparison of performance however remains fragile in my opinion. Indeed, it is completely relevant to compare models changing only a single thing (here the latent structure and way it is trained). However different factors of change do not interact in ways that are easy to predict, and it is not obvious to me that applying the same change to the original LadderVAE would necessarily similarly improve its performance.\"}",
"{\"title\": \"Significant improvement\", \"comment\": \"Thank you for the follow-up and revision. Overall, I was very happy that the authors have a made a very genuine effort to address my concerns and have increased my score appropriately from a 5 to a 7.\\n\\nThe main criticism that I do not think was fully addressed (though there was certainly a large improvement) is that about the encoder dependency structure not matching the decoder. The quoted results here are extremely interesting and are essential to trying to understand exactly where the improvements are really coming from. Consequently, I think this point needs a lot more consideration in the paper including rerunning these experiments the same way as in Figure 5 and including them in the paper itself; both values are within the variability of the Graph-VAE so I don't think you can reach the conclusions from your response just yet. I also think I more careful discussion of this is required. It is probably not feasible to fully answer everything now, but it would be good to at least think about exactly how the encoder structuring influences the behavior of the generative model and highlight some question for future work on this subject which I think is actually right at the crux of the work. I might be willing to increase my score further to an 8 if this can be adequately addressed.\", \"other_specific_comments\": \"1) I particularly enjoyed the edits made to address my concerns about the lower bound. I think this is introduced a lot better now and is no longer misleading or guilty of mathsiness.\\n2) The extra experimentation was much appreciated. I think there is still scope to improve this further for the final paper, but they are no longer a cause for concern.\\n3) The values in Table 1 need updating as the LL for the Graph VAE from the original set of results was clearly overestimated given the average performance in the new figure.\\n4) Though I appreciate this will probably not be feasible to achieve during the revision period, having error bars for all the experiments instead of just MNIST would really improve the impact of the results. The y axis label for new figure should also be changed as it currently makes it look like the ELBO instead of the LL.\\n5) I think the FC-VAE still needs a more careful introduction - presumably this is the Graph-VAE with all c set to one? If so, say this explicitly.\\n6) I still find the arguments at the end of section 5 a little strong.\"}",
"{\"title\": \"Reply to Reviewers\", \"comment\": \"We want to thank all reviewers for their thorough and valuable feedback. Questions and concerns are discussed below and clarified in the updated paper.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"(1) Lower bound:\\n Thank you for this insight. Indeed, the lower bound induced by the distribution over dependency variables (Eq.(8)) has different properties than the original ELBO. In particular, it is not a proper variational objective and is not guaranteed to be tight if the approximate posterior matches the true posterior. However, as correctly pointed out, this lower bound recovers the ELBO when the distribution over dependency variables converges to a fixed estimate. In practice, we observe this convergence (Fig. 3), and by the end of training, we are effectively optimizing the ELBO for a single dependency structure. Placing a distribution over dependency structures and optimizing $\\\\widetilde{\\\\mathcal{L}}$ instead of $\\\\mathcal{L}$ should therefore be seen as an annealing technique, facilitating optimization over discrete dependency structures, rather than a strict extension of the variational framework. We have modified Section 3 and Appendix A to more clearly communicate this perspective. While we agree that ``mathiness'' should be kept to a minimum, the derivation in Appendix A shows how the dependency variables affect the ELBO, explaining why we expect the model to converge to a fixed structure. We feel this is helpful for understanding the approach and its behaviour during training. \\n(2) Encoder Dependency:\\nThank you for pointing this out. We agree that the structure of our encoder does not capture all of the possible dependencies in the true posterior, and we have added this point to Section 3.1 for clarity. However, the same is true for vanilla VAEs, where all latent variables are sampled from the approximate posterior independently, i.e. the distribution is axis-aligned in the latent dimensions. Variational inference does not specify whether an approximate posterior is \\\"correct\\\" or \\\"incorrect\\\"; there are only varying degrees in the quality of approximation. Models are also trained to adapt to their approximate posteriors. Intriguingly, we trained a Graph VAE decoder using a fully-connected encoder and found that the log-likelihood of this model was -83.21 nats, which is closer to the performance of FC VAE. Likewise, the log-likelihood of a fully-connected decoder with a Graph VAE encoder was -83.20 nats. Thus, restricting the approximate posterior dependency structure to mirror the prior seems to improve the final performance. This observation could be explored further in future work.\\n(3) Inference module description:\\nWe have added a detailed description of the inference module in Appendix B.3.\\n(4) Clarification on FC VAE:\\nNormalizing flows attempts to account for dependencies in the approximate posterior. On the other hand, FC VAEs, and hierarchical VAEs more generally, add dependencies to the prior on the latent variables. This has been shown to improve the flexibility of the model over the diagonal standard Gaussian priors in Vanilla VAEs. Top-down inference (S\\u00f8nderby, et al., 2016) is one approach for capturing these dependencies in the approximate posterior as well.\\n(5) Explanation for FC VAE comparison:\\nWe have updated the discussion at the end of Section 5 to reflect the fact that, at this point, it is still somewhat speculative that the performance improvements in Graph VAE are due to difficulties in optimization. We agree with the point that learning the structure should result in equal or better performance than assuming a fixed structure. 
However, FC VAE and Graph VAE are both capable of learning the same set of models, i.e. the same hypothesis space. Graph VAE could conceivably retain all of the dependencies. Likewise, FC VAE could set entire layers of weights to zero, thereby removing dependencies. Thus, it may not be whether structure can be learned, but rather the ease with which it can be learned. Graph VAE can effectively take larger jumps during optimization by modifying a single gating parameter. FC VAE, on the other hand, requires coordinated steps along many parameter dimensions to achieve the same effect. Properly evaluating this perspective will require further follow-up work.\\n(6) Robustness:\\nPlease refer to R2(1) for additional results regarding the robustness of the proposed model, including error bars, a significance test, and stability of the learned structure.\\n(7) Semantics:\\nPlease refer to R1(5) for a perturbation experiment providing additional insight into the encodings of local nodes. Additionally, we have included a visualization of the latent space using a TSNE-embedding in Appendix C.2.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"(1) Robustness & comparison with FC VAE:\\nWe have computed the mean and standard deviation of the test log-likelihood on MNIST across 5 independent runs. Our results are shown in Appendix C, demonstrating a stable learning process. The optimization process is also robust to initialization and other sources of uncertainty and typically converges to the same latent structure. Furthermore, we have addressed the question of significance with respect to FC VAE with a Mann-Whitney U-Test, which rejects the null hypothesis of equal log-likelihood distributions at the 0.01 significance level (p=0.004). We discuss this performance improvement over FC VAE in detail in Section 5.\\n(2) Ladder VAE performance:\\nModel performance is influenced by a multitude of factors. The focus of this paper is on latent structures and our experiments are designed in way that isolates the effects of a model's latent structure as well as possible. To this end, we use the same encoder/decoder structure (Appendix B) and the same number of latent dimensions (80) in all experiments. While following this principle forced us to deviate from the encoder/decoder structure used in (S\\u00f8nderby, et al., 2016), resulting in different test log-likelihoods, our experiments constitute a correct and fair evaluation of all models on equal grounds. Please also refer to R1(3) for a discussion of an additional Ladder VAE experiment with varying node dimension.\\n(3) Gumbel-softmax:\", \"we_thank_the_reviewer_for_pointing_out_two_typos_that_we_have_corrected_in_our_revised_version\": \"(3-1) We multiply the current temperature by 0.99 after each epoch, not 0.999. The temperature after $200$ epochs is thus approximately 0.13, not 0.82. (3-2) Following code from Categorical VAE (Jang et al., 2017), we use only 2 samples to form the Gumbel-softmax distribution, not 3. We want to emphasize that these typos do not carry over to our code and do not affect any of our experimental results. Our implementation will be made publicly available after the review period.\\n(4) Symmetric/sparse structures:\\nWe have included Fig. 3(b) to give a general impression of the learned latent structure, but not all of its properties generalize to different data or a different training setup. In particular, the observed symmetry does not generalize to another M/N-ratio: With N=8 nodes, which is the number used in Table 1, the learned dependency structure does not exhibit any symmetry. Convergence to a sparse structure is discussed in R3(1) and R3(4).\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"(1) Inference module description:\\nWe have added a detailed description of the inference module in Appendix B.3.\\n(2) Prior distribution p(c):\\nThe structure gating variables \\\\textbf{c} follow a Bernoulli distribution with parameters $\\\\mathbf{\\\\mu}$, which are initialized at 0.5. We optimize these parameters during the training process across all data examples.\\n(3) Ladder VAE with varying node dimension:\\nFollowing the architecture in the original Ladder VAE paper (S\\u00f8nderby, et al., 2016), we ran an additional experiment using Ladder VAE with node dimensions (4-8-16-32-64). The test log-likelihood of this model on MNIST is -84.0 nats, which is higher than the version with constant node dimension (-84.8 nats) but lower than FC VAE (-83.0 nats) and the proposed Graph VAE (-82.1 nats). In our experiments, we keep the number of node dimensions constant to guarantee the same total latent dimensionality (80) in all models.\\n(4) References:\\nWe thank the reviewer for pointing out two missing references as well as a recent work that has appeared after our submission. We have added them to our discussion in Section 2.2.\\n(5) Node perturbations:\\nWe did not observe a clear semantic pattern when perturbing a child node while keeping its parents fixed. This is not surprising, because intrinsic structure does not necessarily correlate with semantic meaning and our training objective does not incentivize a semantic disentanglement of latent factors. Instead, we observed that nodes close to the root modulate global, low-frequency structural changes, whereas leaf nodes encode local, high-frequency elements of the data.\\n(6) Linear conditional densities:\", \"learning_linear_conditional_densities_leads_to_an_interesting_effect\": \"while the test log-likelihood on MNIST decreases by 2.0 nats to -84.1 nats, the structure now converges to a fully-connected graph. We interpret this behavior as an attempt of the optimization process to compensate less expressive local distributions with additional dependencies.\"}",
"{\"title\": \"Interesting idea: use a matrix of binary random variables to capture dependencies between latent variables in a hierarchical deep generative model.\", \"review\": \"Often in a deep generative model with multiple latent variables, the structure amongst the latent variables is pre-specified before parameter estimation. This work aims to learn the structure as part of the parameters. To do so, this work represents all possible dependencies amongst the latent random variables via a learned binary adjacency matrix, c, where a 1 denotes each parent child relationship.\\n\\nEach setting of c defines a latent variable as the root and subsequent parent-child relationships amongst the others. To be able to support (up to N-1) parents, the paper proposes a neural architecture where the sample from each parent is multiplied by the corresponding value of c_ij (0'd out if the edge does not exist in c), concatenated and fed into an MLP that predicts a distribution over the child node. The inference network shares parameters with the generative model (as in Sonderby et. al). Given any setting of c, one can define the variational lower-bound of the data. This work performs parameter estimation by sampling c and then performing gradient ascent on the resulting lower-bound.\\n\\nThe model is evaluated on MNIST, Omniglot and CIFAR where it is found to outperform a VAE with a single latent variable (with the same number of latent dimensions as the proposed graphVAE), the LadderVAE and the FCVAE (VAE with a fully connected graph). An ablation study is conducted to study the effect of number of nodes and their dimensionality.\\n\\nOverall, the paper is (a) well written, (b) proposes a new, interesting idea and (c) shows that the choice to parameterize structure via the use of auxillary random variables improves the quality of results on some standard benchmarks.\", \"comments_and_questions_for_the_authors\": \"* Clarity\\nIt might be instructive to describe in detail how the inference network is structured for different settings of c (for example, via a scenario with three latent variables) rather than via reference to Sonderby et. al.\\n\\nWhat prior distribution was used for c?\\n\\nFor the baseline comparing to the LadderVAE, what dimensionalities were used for the latent variables in the LadderVAE (which has a chain structured dependence amongst its latent variables)? The experimental setup keeps fixed the latent dimensionality to 80 -- the original paper recommends a different dimensionality for each latent variables in the chain [https://arxiv.org/pdf/1602.02282.pdf, Table 2] -- was this tried? Did the ladderVAE do better if each latent variable in the chain was allowed to have a dimensionality?\\n\\n* Related work\\nThere is related work which leverages Bayesian non-parametric models to learn hierarchical priors for deep generative models. It is worth discussing for putting this line of work into context. For example:\", \"http\": \"//openaccess.thecvf.com/content_ICCV_2017/papers/Goyal_Nonparametric_Variational_Auto-Encoders_ICCV_2017_paper.pdf\", \"and_more_recently\": \"https://arxiv.org/pdf/1810.06891.pdf\\n\\nIn the context of defining inference networks for generative models where the latent variables have structure, Webb et. 
al [https://arxiv.org/abs/1712.00287] describe how inference networks should be setup in order to invert the generative process.\\n\\n* Qualitative study\\nNotable in its absence is a qualitative analysis of what happens to the data sampled from the model when the various nodes in the learned hierarchy are perturbed holding fixed their parents. Have you attempted this experiment? Are the edge relationships sensible or interesting?\\n\\nIs there a relationship between the complexity of each conditional distribution in the generative model and the learned latent structure? Specifically, have you experimented to see what happens to the learned structure amongst the latent variables if each conditional density is a linear function of its parents?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An idea with potential, but weakly developped and tested\", \"review\": \"The authors propose to augment the latent space of a Variational AutoEncoder [1] with an auto-regressive structure, to improve the expressiveness of both the inference network and the latent prior, making them into a general DAG of latent variables. This works goes further in the same direction as the Ladder VAE [2]. This paper introduces a mechanism for the latent model to directly learn its DAG structure by first considering the fully-connected DAG of latent variables, and adding Bernoulli variables controlling the presence or absence of each edge. The authors derive a new ELBO taking these variables into account, and use it to train the model. The gradients of the parameters of the Bernoulli variables are computed using the Gumbel-Softmax approach [3] and annealing the temperature.\\n\\nThe authors observe with they experiments that the Bernoulli variables converge relatively quickly towards 0 or 1 during the training, fixing the structure of the DAG for the rest of the training. They test their model against a VAE, a Ladder VAE and an alternative to their model were the DAG is fixed to remain fully-connected (FC-VAE), and observe improvements in terms of the ELBO values and log-likelihood estimations.\\n\\nThe main addition of this paper is the introduction of the gating mechanism to reduce the latent DAG from its fully-connected state. It is motivated by the tendency of latent models to fall into local optima.\\n\\nHowever, it is not clear to me what this mechanism as it is now adds to the model:\\n\\n- The reported results shows the improvements of Graph-VAE over FC-VAE to be quite small, making their relevance dubious in the absence of measurement of variance accross different trainings. Additionally, the reported performances for Ladder VAE are inferior to what [2] reports. Actually the performance of Ladder-VAE reported in [2] is better than the one reported for Graph-VAE in this paper, both on the MNIST and Omniglot datasets.\\n\\n- The authors observe that the Bernoulli variables have converged after around ~200 epochs. At this time, according to their reported experimental setup, the Gumbel-Softmax temperature is 0.999^200 ~= 0.82, which is still quite near 1.0, meaning the model is still pretty far from a real Bernoulli-like behavior. And actually, equation 9 is not a proper description of the Gumbel-Softmax as described by [3] : there should be only 2 samples from the Gumbel distribution, not 3. Given these two issues, I can't believe that the c_ij coefficients behave like Bernoulli variables in this experiment. As such, It seems to me that Graph-VAE is nothing more than a special reparametrization of FC-VAE that tends to favor saturating behavior for the c_ij variables.\\n\\n- On figure 3b, the learned structure is very symmetrical (z2, z3, z4 play an identical role in the final DAG). In my opinion, this begs for the introduction of a regulatory mechanism regarding the gating variable to push the model towards sparsity. 
I was honestly surprised to see this gating mechanism introduced without anything guiding the convergence of the c_ij variables.\\n\\nI like the idea of learning a latent structure DAG for VAEs, but this paper introduces a rather weak way to try to achieve this, and the experimental results are not convincing.\\n\\n[1] https://arxiv.org/abs/1312.6114\\n[2] https://arxiv.org/abs/1602.02282\\n[3] https://arxiv.org/abs/1611.01144\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper but with technical issues that need addressing [now addressed in revision]\", \"review\": \"This paper presents a VAE approach in which a dependency structure on the latent variable is learned during training. Specifically, a lower-triangular random binary matrix c is introduced, where c_{i,j} = 1 for i>j, indicates that z_i depends on z_j, where z is the latent vector. Each element of c is separately parametrized by a Bernoulli distribution whose means are optimized for during training, using the target \\\\mathbb{E}_{p(c)}[\\\\mathcal{L}_c] where \\\\mathcal{L}_c indicates the ELBO for a particular instance of c. The resulting \\\"Graph-VAE\\\" scheme is shown to train models with improved marginal likelihood than a number of baselines for MNIST, Omniglot, and CIFAR-10.\\n\\nThe core concept for this paper is good, the results are impressive, and the paper is, for the most part, easy to follow. Though I think a lot of people have been thinking about how to learn dependency structures in VAEs, I think this work is the first to clearly lay out a concrete approach for doing so. I thus think that even though this is not the most novel of papers, it is work which will be of significant interest to the ICLR community. However, the paper has a number of technical issues and I do not believe the paper is suitable for publication unless they are addressed, or at the vest least acknowledged. I further have some misgivings with the experiments and the explanations of some key elements of the method. Because of these issues, I think the paper falls below the acceptance threshold in its current form, but I think they could potentially be correctable during the rebuttal period and I will be very happy to substantially increase my score if they are; I feel this has the potential to be a very good paper that I would ultimately like to see published.\\n\\n%%% Lower bound %%%\\n\\nMy first major concern is in the justification of the final approach (Eq 8), namely using a lower bound argument to move the p(c) term outside of the log. A target being a lower bound on something we care about is never in itself a justification for that target -- it just says that the resulting estimator is provably negatively biased. The arguments behind the use of lower bounds in conventional ELBOs are based on much more subtle arguments in terms of the bound becoming tight if we have good posterior approximations and implicit assumptions that the bound will behave similarly to the true marginal. The bound derived in A.1 of the current paper is instead almost completely useless and serves little purpose other than adding \\\"mathiness\\\" of the type discussed in https://arxiv.org/abs/1807.0334. Eq 8 is not a variational end-to-end target like you claim. It is never tight and will demonstrably behave very differently to the original target.\\n\\nTo see why it will behave very differently, consider how the original and bound would combine two instances of c for the MNIST experiment, one corresponding to the MAP values of c in the final trained system, the other a value of c that has an ELBO which is, say, 10 nats lower. Using Eq 8, these will have similar contributions to the overall expectation and so a good network setup (i.e. theta and phi) is one which produces a decent ELBO for both. 
Under the original expectation, on the other hand, the MAP value of c corresponds to a setup that has many orders of magnitude higher probability and so the best network setup is the one that does well for the MAP value of c, with the other instance being of little importance. We thus see that the original target and the lower bound behave very differently for a given p(c).\\n\\nThankfully, the target in Eq 8 is a potentially reasonable thing to do in its own right (maybe actually more so that the original formulation), because the averaging over c is somewhat spurious given you are optimizing its mean parameters anyway. It is easy to show that the \\\"optimum\\\" p(c) for a given (\\\\theta,\\\\phi) is always a delta function on the value of c which has the highest ELBO_c. As Fig 3 shows, the optimization of the parameters of p(c) practically leads to such a collapse. This is effectively desirable behavior given the overall aims and so averaging over values of c is from a modeling perspective actually a complete red herring anyway. It is very much possible that the training procedure represented by Eq 8 is (almost by chance) a good approach in terms of learning the optimal configuration for c, but if this is the case it needs to be presented as such, instead of using the current argument about putting a prior on c and constructing a second lower bound, which is a best dubious and misleading, and at worst complete rubbish. Ideally, the current explanations would be replaced by a more principled justification, but even just saying you tried Eq 8 and it worked well empirically would be a lot better than what is there at the moment.\\n\\n%%% Encoder dependency structure does not match the generative model %%%\\n\\nMy second major concern is that the dependency structure used for the encoder is incorrect from the point of view of the generative model. Namely, a dependency structure on the prior does not induce the same dependency structure on the posterior. In general, just because z_1 and z_2 are independent, doesn't mean that z_1 and z_2 are independent given x (see e.g. Bishop). Consequently, the encoder in your setup will be incapable of correctly representing the posterior implied by the generative model. This has a number of serious practical and theoretical knock-on effects, such as prohibiting the bound becoming tight, causing the encoder to indirectly impact the expressivity of the generative model etc. Note that this problem is not shared with the Ladder VAE, as there the Markovian dependency structure means produces a special case where the posterior and prior dependency structure is shared.\", \"as_shown_in_https\": \"//arxiv.org/abs/1712.00287 (a critical missing reference more generally), it is actually possible to derive the dependency structure of the posterior from that of the prior. I think in your case their results imply that the encoder needs to be fully connected as the decoder can induce arbitrary dependencies between the latent variables. I am somewhat surprised that this has not had more of an apparent negative impact on the empirical results and I think at the very very least the paper needs to acknowledge this issue. I would recommend the authors run experiments using a fully connected encoder and the Graph-VAE decoder (and potentially also vice verse). Should this approach perform well, it would represent a more principled approach to replace the old on from a generative model perspective. 
Should it not, it would provide an empirical justification for what is, in essence, a different restriction to that of the learned prior structure: it is conceivably actually the case that these encoder restrictions induce the desired decoder behavior, but this is distinct to learning a particular dependency structure in the generative model.\\n\\n%%% Specifics of model and experiments %%%\\n\\nThough the paper is generally very easy to read, there as some key areas where the explanations are overly terse. In particular, the explanation surrounding the encoding was difficult to follow and it took me a while to establish exactly what was going on; I am still unsure how \\\\tilde{\\\\psi} and \\\\hat{\\\\psi} are combined. I think a more careful explanation here and a section giving more detail in the appendices would both help massively.\\n\\nI was not clear on exactly what was meant by the FC-VAE. I do not completely agree with the assertion that a standard VAE has independent latents. Though the typical choice that the prior is N(0,I) obviously causes the prior to have independent latents, as explained earlier, this does not mean the latents are independent in the posterior. Furthermore, the encoder implicitly incorporates these dependencies through its mean vector, even if it uses a diagonal covariance (which is usually rather small anyway). What is actually changed from this by the FC-VAE? Are you doing some kind of normalizing flow approach here? If so this needs proper explanation.\\n\\nRelatedly, I am also far from convinced by the arguments presented about why the FC-VAE does worse at the end of the experiments. VAEs attempt to maximize a marginal likelihood (through a surrogate target) and a model which makes no structural assumptions will generally have a lower marginal likelihood than one which makes the correct structural assumptions. It is thus perfectly reasonable that when you learn dependency structures, you will get a higher marginal likelihood than if you presume none. I thus find your arguments about local optima somewhat speculative and further investigation is required.\\n\\n%%% Experiments %%%\\n\\nThough certainly not terrible, I felt that the experimental evaluation of the work could have been better. The biggest issue I have is that no error bars are given for the results, so it is difficult to assess the robustness of the Graph-VAE. I think it would be good to add convergence plots with error bars to see how the performance varies with time and provide an idea of variability. More generally, the experiment section overall feels more terse and rushed than the rest of the paper, with some details difficult to find or potentially even straight up missing.\\n\\nThough Fig 3 is very nice, it would be nice to have additional plots seeing qualitatively what happens with the latent space. E.g. on average what proportion of the c tend to zero? Is the same dependency structure always learned? What do the dataset encodings look like? Are there noticeable qualitative changes in samples generated from the learned models? I would be perfectly happy for the paper to extend over the 8 pages to allow more results addressing these questions.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BkMq0oRqFQ | Normalization Gradients are Least-squares Residuals | [
"Yi Liu"
] | Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks. Discussions of why this normalization works so well remain unsettled. We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN. We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations. This view, which we term {\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN. To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN. | [
"Deep Learning",
"Normalization",
"Least squares",
"Gradient regression"
] | https://openreview.net/pdf?id=BkMq0oRqFQ | https://openreview.net/forum?id=BkMq0oRqFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BylkZeRLyN",
"BkglmsZY0Q",
"B1eSC5-K0X",
"HyxHscZKA7",
"BJxGyPqla7",
"ByghVRki27",
"SJg5feZqnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544114166997,
1543211800028,
1543211725368,
1543211677139,
1541609177771,
1541238324360,
1541177362457
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper923/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper923/Authors"
],
[
"ICLR.cc/2019/Conference/Paper923/Authors"
],
[
"ICLR.cc/2019/Conference/Paper923/Authors"
],
[
"ICLR.cc/2019/Conference/Paper923/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper923/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper923/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper interprets batch norm in terms of normalizing the backpropagated gradients. All of the reviewers believe this interpretation is novel and potentially interesting, but that the paper doesn't make the case that this helps explain batch norm, or provide useful insights into how to improve it. The authors have responded to the original set of reviews by toning down some of the claims in the original paper, but haven't addressed the reviewers' more substantive concerns. There may potentially be interesting ideas here, but I don't think it's ready for publication at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a new interpretation of batch norm, but not clear what we gain from it\"}",
"{\"title\": \"Thank you for your thoughtful criticism, and especially for your kind comments.\", \"comment\": \"Dear Paper923 AnonReviewer3,\\nThank you for your thoughtful criticism, and especially for your kind comments. We have toned down our language broadly in our revision, and removed all mentions of a \\\"unified view.\\\" We agree deeply with your note on needing more focus in the experiments; we would have liked to followup with a cleaner, more convincing application. Sadly, we could not deliver this under the resource constraints.\"}",
"{\"title\": \"We have dialed back our language in the abstract, the TLDR, and the main body of the text to reflect your perspective on our work.\", \"comment\": \"Dear Paper923 AnonReviewer2\\nThank you for your criticisms. We have dialed back our language in the abstract, the TLDR, and the main body of the text to reflect your perspective on our work. We apologize for the typos in the earlier version, and we have been more diligent in this update. Also, we have clarified some of the language around the downstream affine transformation. Ignoring the affine transform is done without loss of generality, in the sense that they can be absorbed into the rest of the network without impacting our view of the gradients of the gaussian normalization. Our experiments were meant to be toy-examples of how one might better understand what happens to the gradient regression under adjustments to BN; ideally, we would like to design a new normalization that outperforms switch normalization, but we have not been able to do that here.\"}",
"{\"title\": \"We have made notable adjustments to our language in the abstract, the TLDR, and the main body of the text, in light of your review.\", \"comment\": \"Dear Paper923 AnonReviewer3,\\nThank you for your criticisms. We have made notable adjustments to our language in the abstract, the TLDR, and the main body of the text, in light of your review. Regarding concerns related to ignoring the affine transform downstream of the gaussian normalization, we have rephrased the text to emphasize that it is done without loss of generality. This is WLOG in the sense that the affine transform after gaussian normalization can be absorbed into the rest of the network. Also, we would like to emphasize that we think of division by the standard deviation during training as a non-affine transform. One way to make that division affine is to use the BN running-variance instead of the batch variance during training -- but this alternative generally known (but not well stated in literature) to lead to poor performance.\"}",
"{\"title\": \"The discussion and conclusion drawn from the experimental part are quite arguable. In my opinion, the paper would gain much more impact if the view developed in this paper were illustrated by more convincing experiments.\", \"review\": \"The authors propose a new interpretation of the batch normalization step inside a neural network.\\nThe main result shows that the backpropagation of the gradient of some loss function through a batch normalization can be seen as a scaled residual of a least square linear fit. This new interpretation is extended to other normalization technics used in the literature and thus give a \\\"unified\\\" view of such methods. \\n\\nThe idea is simple yet very interesting and well introduced. The theoretical results are good and the proofs are well written and easy to follow.\\n\\nHowever the arguments brought forward by this new vision of batch normalization in applications look light (see sections 3.3, 4.1, 4.2). A more detailed interpretation of this new vision on a single application and its impact would have been preferred than numerous applications as it is done in this paper. \\nNot all the existing normalization methods have been extended with success yet, this makes this unified vision a bit less convincing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"It has not been sufficiently demonstrated that the new perspective regarding batch normalization presented in this work is actually useful for either improving or explaining BN.\", \"review\": \"The primary technical contribution comes from Section 2, where it is demonstrated that the normalized back-propagated gradients obtained from a BN layer can be viewed as the residuals of the gradients obtained without BN regressed via a simple two-parameter model of the activations. In some sense though this result is to be expected, since centering data (i.e., removing the mean as in BN) can be generically viewed as computing the residuals after a least squares fit of a single constant, and similarly for de-trending with respect to a single independent variable, in this case the activations. So I'm not sure that Theorem 1 is really that much of an insightful breakthrough, even if it may be nice to work through the precise details in the specific case of a BN layer and the relationship to gradients.\", \"but_beyond_this_a_larger_issue_is_as_follows\": \"This paper is framed as taking a step in explaining why batch normalization (BN) works so well. For example, even the abstract mentions this as an unsettled issue in motivating the proposed analysis. However, to me the interpretation of BN as introducing a form of least squares fit does not really extend our understanding of why it actually works better in practice, and this is the biggest disconnect of the paper. The new perspective presented might be another way to interpret BN layers, but it unfortunately remains mostly unanswered exactly why this new perspective is relevant in actually explaining BN behavior.\\n\\nThe presented normalization theory is also used to motivate heuristic modifications to standard BN schemes. For example, the paper proposed concatenating BN with a layer normalization layer, demonstrating some modest improvement on CIFAR-10 data. But again, I don't see how viewing these normalization schemes as least-squares residuals motivates such concatenation any more than the merits of the original versions themselves. Moreover, it is not even clear that BN+LN is in fact generally better since only a single data set is considered. There are also no comparisons against competing BN modifications such as switch normalization (Luo et al. 2018) which also involves a hybrid method combining aspects of LN and BN. Why not compare against approaches like this?\\n\\nTo conclude, in Section 6 the paper asks \\\"Why do empirical improvements in neural networks with BN keep the gradient-least-squares residuals and drop the explained portion?\\\" But this question is not at all answered but rather deferred to future work. For me this was a disappointment as this would seem to be an essential ingredient for actually developing a meaningful theory for why BN is helpful in practice.\", \"other_comments\": [\"The analysis from Section 2, including Theorem 1, assume that the BN parameters c and b can be ignored (presumably this means fixing c = 1 and b = 0). I did not carefully check the details, but do all the same derivations and conclusions still seamlessly go through when these parameters have general values that deviate from this standard initialization? If not, then I don't really see what is the practical relevance, since once learning begins, both b and c will typically shift to arbitrary values. Below eq. 
(1) it states that c and b are only ignored for clarity, but then later I did not see any subsequent discussion to handle the general case, which is what would be actually needed for explaining BN behavior in practice.\", \"Please run a speck-checker. Example, \\\"On some leve, the matrix gradient ...\\\"\", \"The paper cites (Lipton and Steinhardt, 2018) in arguing that reasons for the effectiveness of BN are lacking. Indeed (Lipton and Steinhardt, 2018) criticize the original BN paper for conflating speculation with explanation, or more precisely, framing speculation about why BN should be helpful as an actual true explanation without clear evidence. But to me this submission is hovering somewhere in the same category, speculating that regressing away certain portions of the gradient could be useful but never really providing concrete evidence for why this should offer an improvement.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A premature paper proposing a novel interpretation of Batch Normalisation\", \"review\": \"The paper aims at a better understanding of the positive impacts of Batch Normalisation (BN) on network generalisation (mainly) and convergence of learning. First, the authors propose a novel interpretation of the BN re-parametrisation. They show that an affine transform of the variables with their local variance (scale) and mean (shift) can be interpreted as a decomposition of the gradient of the objective function into a regressor assuming that the gradient is parallel to the variables (up to a shift) and the residual part which is the gradient w.r.t. to the new variables. In the second part of the paper, authors review various normalisation proposals (differing mainly in the subset of variables over which the normalisation statistics is computed) as well as the known empirical findings about the dependence of BN on the batch size. The paper presents an experiment that combines two normalisation variants. A further experiment strives at regularising BN for small batch sizes.\\n\\nUnfortunately, it remains unclear what questions precisely the authors answer in the second part of the paper and, what is more important, how they are related to the novel interpretation of BN presented in the first part. This interpretation holds for any function and can be possibly seen as a gradient pre-conditioning. However, the authors do not \\\"extend\\\" it towards the gradients w.r.t. the network parameters and do not consider the specifics of the learning objectives (a sum of functions, each one depending on one training example only). The main presented experiment combines layer normalisation with standard batch normalisation for a convolutional network. The first one normalises using the statistics over channel and spatial dimensions, whereas the second one uses the statics over the batch and spatial dimensions. The improvements are rather marginal, but, what is more important, the authors do not explain how and why this proposal follows from their new interpretation of BN.\\n\\nOverall, in my view, this paper is premature and not appropriate for publishing at ICLR in its present form.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryfcCo0ctQ | Convergent Reinforcement Learning with Function Approximation: A Bilevel Optimization Perspective | [
"Zhuoran Yang",
"Zuyue Fu",
"Kaiqing Zhang",
"Zhaoran Wang"
] | We study reinforcement learning algorithms with nonlinear function approximation in the online setting. By formulating both the problems of value function estimation and policy learning as bilevel optimization problems, we propose online Q-learning and actor-critic algorithms for these two problems respectively. Our algorithms are gradient-based methods and thus are computationally efficient. Moreover, by approximating the iterates using differential equations, we establish convergence guarantees for the proposed algorithms. Thorough numerical experiments are conducted to back up our theory. | [
"reinforcement learning",
"Deep Q-networks",
"actor-critic algorithm",
"ODE approximation"
] | https://openreview.net/pdf?id=ryfcCo0ctQ | https://openreview.net/forum?id=ryfcCo0ctQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HylF_4LbgV",
"SJxcv5Khk4",
"r1eFtj4nkV",
"Syl9zC3r1N",
"rylFIoZ5C7",
"SJlBKqxqRX",
"Hyx_TCHtR7",
"BJeFFCBFRQ",
"HkejBTBFCX",
"SkllJTrKC7",
"B1xSunHYAQ",
"Bkein8QWaQ",
"rJeFeRyWam",
"ryetm_kR3Q",
"rJxpWzYq3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544803440881,
1544489570437,
1544469376757,
1544044049731,
1543277393041,
1543273084542,
1543229119611,
1543229057305,
1543228738721,
1543228632352,
1543228524771,
1541646003183,
1541631472585,
1541433376663,
1541210629182
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper922/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/Authors"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper922/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper gives an bilevel optimization view for several standard RL algorithms, and proves their asymptotic convergence with function approximation under some assumptions. The analysis is a two-time scale one, and some empirical study is included.\\n\\nIt's a difficult decision to make for this paper. It clearly has a few things to be liked: (1) the bilevel view seems new in the RL literature (although the view has been implicitly used throughout the literature); (2) the paper is solid and gives rigorous, nontrivial analyses.\\n\\nOn the other hand, reviewers are not convinced it's ready for publication in its current stage:\\n(1) Technical novelty, in the context of published works: extra challenges needed on top of Borkar; similarity to and differences from Dai et al.; ...\\n(2) The practical significance is somewhat limited. Does the analysis provide additional insight into how to improve existing approaches? How restricted are the assumptions? Are the online-vs-batch distinction from Dai et al. really important in practice?\\n(3) What does the paper want to show in the experiments, since no new algorithms are developed? Some claims are made based on very limited empirical evidence. It'd be much better to run algorithms on more controlled situations to show, say, the significance of two timescale updates. Also, as those algorithms are classic Q-learning and actor-critic (quote the authors in responses), how well do the algorithms solve the well-known divergent examples when function approximation is used?\\n(4) Presentation needs to be improved. Reviewers pointed out some over claims and imprecise statements.\\n\\nWhile the author responses were helpful in clarifying some of the questions, reviewers felt that the remaining questions needed to be addressed and the changes would be large enough that another full review cycle is needed.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Solid paper, but unclear significance\"}",
"{\"title\": \"RE:\", \"comment\": \"1. Ok\\n\\n2. Yes the assumptions are stronger, but the results are much stronger as a result, so it's not surprising.\\n\\n3. The practical implications of this theoretical work are unclear. It's nice that it provides motivation for current practices, but it does not provide additional insight into how to improve existing approaches. The authors could significantly strengthen the paper by expanding in this area.\"}",
"{\"title\": \"Already updated my scores\", \"comment\": \"Hi after going through the rebuttals I have already updated my scores accordingly.\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"After reading the rebuttals, I appreciate the authors addressed most of the my concerns above. Thus I have adjusted my scores to reflect that. I think the work contains some theoretical contributions as opposed to [1], which is used to prove convergence for many actor-critic algorithms, under certain regular assumptions. It would be great to see this result working on more general conditions. Please address the above concerns in the final version, and also stress the theoretical contributions as compared to existing results.\"}",
"{\"title\": \"Compare with related work. Target network\", \"comment\": \"We greatly thank the reviewer for reading our response.\\n\\n1. [Pfau et. al.] We thank the reviewer for listing this related work. This paper draws the connection between GAN and actor-critic and reviews the techniques that stabilizes training for these two methods, respectively. They show that both these two problems can be formulated as bilevel optimization and they formulate GAN as a kind of Actor-Critic without states. \\n\\nCompared with this work, we focus only on the bilevel formulation of reinforcement learning problems. Our goal is to show that such a formulation unifies both the problem of value function estimation and policy optimization, two of the most important approaches in RL. The bilevel formulation of value function estimation is not covered in [Pfau et. al.].\\n\\nMoreover, it is shown in [1] that GAN trained by two-time-scale gradient updates converges to a local Nash equilibrium. Thus, the connection between GAN and AC showed in [Pfau et. al.] in fact corroborates our usage of two-time-scale algorithms for Q-learning and actor-critic. Furthermore, compared with [1] which assumes both the ODEs in the faster and slower time-scales have local asymptotically stable attractors, our assumption is weaker. Our analysis also handles projection of the updates whereas [1] assumes that the gradient updates are always bounded, which we believe might be a strong assumption.\\n\\n2.1. [Dai et. al.] We first note that the algorithm in [Dai et. al.] is a batch RL algorithm whereas our algorithm is online. To see this, please see Line 7 of Algorithm 1 in [Dai et. al] (https://arxiv.org/pdf/1712.10285.pdf, page 6). In Line 7, they assume that the inner maximization problem is replaced by a sample-based optimization where the objective function is computed from the data in the experience replay buffer. Moreover, this maximization problem is non-concave, whose global maximum might be NP-hard to achieve. However, they assume that there exists an optimization oracle that outputs the global maximum, which is denoted by $\\\\omega_{\\\\rho}^j$ in their algorithm. \\n\\nSince the global maximum of a non-concave function can be NP-hard to obtain, we believe that this assumption in [Dai et. al.] is quite strong.\\n\\nIn terms of the assumptions made on the classes of value functions and policy functions, they also require boundedness of these functions. Compared with their assumption, we further require the gradient of the value functions and policies are bounded, which is not significantly stronger and can be satisfied if the parameters are bounded. \\n\\nMoreover, the assumptions 4.1(ii) and 4.3(ii) are technical, which essentially ensures that the gradient updates in the faster time-scale converge. Note that there are divergent examples of TD(0) with nonlinear function approximation. We believe that these assumptions are necessary for convergence analysis. Furthermore, these assumptions are much weaker than the assumption of maximizing non-concave functions to the global maximum in [Dai et. al.]\\n\\n\\n3. Our formulation of bilevel optimization motivates two-time-scale gradient algorithms. When applied to Q-learning, two-time-scale updating rule implies that the target network needs to be updated in a slower timescale. This coincides with the practices in the real world, as shown in the ``baselines'' package ([2]), which is a standard implementation of deep RL algorithms. 
Thus, when focusing on the target network, our results provide theoretical justifications of the updating rule for the target network used in practice.\\n\\nIn specific, in the DQN implementation in ``baselines'' for Atari games ([3]), the target network is updated every 1000 steps. Thus, the target network is fixed for a long time and only the Q-network is updated. This is captured in the lower-level optimization problem of our bilevel framework. Furthermore, in practice, some practitioners use soft updates of target network where they update the target network using a learning rate $\\\\tau$ that is much smaller than the learning rate of the $Q$-network. Our framework also justifies such training technique.\", \"reference\": \"[1] GANs Trained by a Two Time-Scale Update Rule\\nConverge to a Local Nash Equilibrium (https://arxiv.org/pdf/1706.08500.pdf)\\n\\n[2] OpenAI Baselines: high-quality implementations of reinforcement learning algorithms https://github.com/openai/baselines\\n\\n[3] DQN for Atari games in Baselines: https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py\"}",
"{\"title\": \"Response\", \"comment\": \"1. \\\"Connecting Generative Adversarial Networks and Actor-Critic Methods.\\\" (Pfau and Vinyals, 2017) for an example.\\n\\n2. Yes, the saddle point problem is more specific than the bilevel setting. This seems beneficial.\\n\\n2.1. Are the assumptions in your two-time-scale analysis approach more practical than the analysis in Dai et al.? It would be helpful to discuss why the assumptions you make are likely to hold in practice.\\n\\n2.2. Okay, these additional applications are interesting.\\n\\n2.3. Agreed that it plays a similar role.\\n\\n3. Are the authors claiming that the proposal to use a slower update for the target network is a novel practical contribution, or that their theory motivates an existing practice?\"}",
"{\"title\": \"(Rebuttal Continued) Compare with [Dai et. al.] and [Maei et al.]. Why target network in DQN can be explained using bilevel optimization. Our numerical experiments utilize standard implementations in Deep RL.\", \"comment\": \"6. Theorem 4.2 is non-trivial because it concerns the updates of an online algorithm. Moreover, with function approximation, the Brouwer's fixed point theorem no longer holds because we need to consider an operator on the parameter space, which is not contractive. In addition, the boundedness of the value function can be ensured if we restrict the parameters to compact sets in the Euclidean space. This condition is mainly technical. The fact that rewards are bounded implies that the true value functions are bounded. \\n\\n7. In the experiments, we use neural networks for the value functions and policies and the network structures are the same as standard implementations. Our goal is to verify the observations from our bilevel framework. That is, we verify that the target network in DQN and the policy network in AC should be updated in a slower time-scale. To verify this idea, using standard implementations, we have tried various learning rate configurations on a few standard environments. The numerical results corroborate with our theory.\\n\\n8. The results of the value function estimation are off-policy. Specifically, $\\\\rho$ is the stationary distribution induced by the behavior policy. Our actor-critic algorithm is on-policy. It is known that there are divergent examples for off-policy TD(0) with nonlinear function approximation. It is our future research that under what conditions can we have convergence guarantees for nonlinear off-policy TD(0).\\n\\n9. Bilevel optimization for DQN is motivated by the fact that $Q$-network is often updated with the target network fixed. Essentially, this means that we would like to first minimize the TD-error of each fixed target network and then update the weights of the target network by that of the Q-network.\\n\\nThe intuition of bilevel optimization for AC can be seen from the policy gradient theorem, where the value function appears. Thus, to estimate the policy gradient, we first need to estimate the value function of the given policy. Thus, essentially, we would like to first solve the policy evaluation problem and then update the policy parameter. Combining these two steps together, we obtain the bilevel optimization problem.\\n\\n10. We thank the reviewer for the suggestions. We will add detailed discussion on the target network of DQN and also related work. \\nIn addition, we note that the experiments in this work follow the standard implementations of DQN and Actor-Critic algorithms. The hyper-parameters are also standard. We only modify the learning rates in order to corroborate with our claims drawn from the two-timescale algorithms.\\nIn terms of bilevel optimization, our goal is not to motivate new algorithms from this framework. Instead, we aim to bridge the existing algorithms via this perspective and provide unified convergence analysis. Moreover, using this framework, we give a theoretical justification of the common practice in DQN that using a target network which is updated slowly helps the performance.\", \"reference\": \"[1]. Finite-Sample Analysis of Proximal Gradient TD Algorithms by Liu et al.\\n\\n[2]. SBEED: Convergent Reinforcement Learning with\\nNonlinear Function Approximation by Dai et. al.\"}",
"{\"title\": \"Compare with [Dai et. al.] and [Maei et al.]. Why target network in DQN can be explained using bilevel optimization. Our numerical experiments utilize standard implementations in Deep RL.\", \"comment\": \"1. [Dai et al.] We note that [Dai et. al.] is, in reality, a Batch algorithm since their maximization step is assumed to be solved to the global maximum based on data in the experience replay memory. In specific, in Line 7 of their algorithm, they assume that the global maximum of the empirical loss function computed based on the replay memory is achieved. However, since this objective is non-concave, we believe that this condition is strong and can hardly be guaranteed since maximizing a non-concave function is NP-hard. Since this step is solved based on batch data, it is not an online algorithm. Please also refer to Point 2 for Reviewer 1 and Point 2 for Reviewer 2 for detailed discussions on [Dai et. al]\\n\\n2. Nonlinear GTD paper. Although the nonlinear GTD paper [Maei et al.] provides a convergence proof for GTD with nonlinear function approximation, they only focus on the problem of policy evaluation. In contrast, we propose a general framework that can incorporate both value function estimation and policy optimization. Our claim is that we first attempt to understand RL algorithms with nonlinear function approximation from a general perspective.\\n\\nMoreover, using Fenchel duality as in [1] and [2], we could also formulate nonlinear GTD into a saddle point optimization problem, which is a special case of bilevel optimization. Thus, our framework in Section 3.1 also includes nonlinear GTD as a special case.\\n\\n3. For (3.6), we assume that for each $\\\\omega$, there exists a global minimizer $\\\\theta(\\\\omega)$ of the least-squares error. Under mild conditions on the function class $\\\\{Q_{\\\\theta} \\\\}$, such a minimizer exists. Moreover, if we could find such an $\\\\omega$ such that $\\\\theta(\\\\omega) = \\\\omega$, then $Q_{\\\\omega}$ is the fixed point of the Bellman operator, which is the optimal Q-function. \\n\\nMoreover, if the function class $\\\\{Q_{\\\\theta} \\\\}$ is sufficiently large such that it contains the optimal Q-function, then the optimization problem (3.6) has a solution, which is exactly the optimal Q-function.\\n\\n4. Our algorithm is more closely related to the ``soft target'' update of the target network, which updates the weights of the target network using a small learning rate $\\\\tau << 1$. In contrast, the Q-network is updated using a constant learning rate, which is much larger than $\\\\tau$. In fact, this corroborates our claim that the target network should be updated in a slower time-scale compared with the Q-network.\\n\\nMoreover, for the periodical update of the target network, the period is usually set to be a large number such as 5000. This essentially means that we would like to first fix the target network and only update the Q-network, which is exactly solving the lower-level minimization problem in (3.6). Hence, using a bilevel perspective, we provide a theoretical justification of the common practice that the target network is updated at a slower rate compared with Q-network. \\n\\n5. The algorithms in our paper are the same as the classical Q-learning and actor-critic algorithms. Thus we call this ```online Q-learning'' and ``actor-critic algorithm''. The goal of our paper is not proposing new algorithms. 
Instead, our goal is to provide a unified view of the online RL algorithms and provide convergence proofs for these algorithms with nonlinear function approximation. We believe that our work is of interest to deep RL community.\"}",
"{\"title\": \"Justification of Assumption 4.3. Our proof requires weaker assumptions than Bhatnagar et. al and the proof requires handling projection, which is not trivial.\", \"comment\": \"1. Assumption 4.3. The first part of Assumption 4.3 considers the family of value functions and policies. Specifically, we assume that the value functions, as functions of $\\\\theta$, are bounded and have bounded and Lipschitz gradient. This assumption can be satisfied if the parameter $\\\\theta$ lies in a compact set. As a simple example, we could set $V_{\\\\theta}(s) = \\\\sigma( \\\\phi(s)^\\\\top \\\\theta)$, where $\\\\phi(s)$ is a bounded feature mapping, $\\\\sigma$ is the Leaky ReLU activation function. In addition, we also assume that the score function $grad \\\\log \\\\pi_\\\\omega$ is bounded. This assumption is also required in classical convergence proof of actor-critic algorithms. This consider is satisfied if $\\\\pi_{\\\\omega} (s,a) \\\\propto \\\\exp( \\\\phi(s,a)^\\\\top \\\\omega)$ is an energy-based policy with feature mapping $\\\\phi(s,a)$ bounded. \\n\\nFor the second condition, the assumption that the ODE has a local asymptotically stable equilibrium for each $\\\\omega$ essentially means that TD(0) for each policy $\\\\pi_{\\\\omega}$ is convergent. Moreover, the solution is Lipschitz with respect to $\\\\omega$. We admit that this condition is strong since TD(0) with nonlinear function approximation can be divergent. We will clarify this condition in the revised version. \\n\\n\\n2. In terms of the convergence technique, since we need to handle nonlinear function approximation, we apply projection to the updates to ensure stability. In contrast, since the value functions in [1] are linear, TD(0) is known to converge to the global minimizer of the mean-squared projected Bellman error. Moreover, they assume that both the two ODEs in the faster and slower time-scales have globally asymptotically stable equilibria. Thus, they do not need to apply projection to the policy parameter, which makes their analysis much simpler. However, even with linear function approximation, it seems unnatural to assume that the ODE in slower time-scale has a unique globally asymptotically stable equilibrium since $J(\\\\omega)$ is in general nonconvex in $\\\\omega$. In this work, we provide convergence analysis under much weaker conditions and incorporate nonlinear value functions. Our proof is somewhat more involved than that in [1] due to handling projection. \\n\\n3. We have tested the actor-critic algorithm on Atari games, which is large-scale and cannot be solved by algorithms with linear function approximation. Our goal is to verify the idea that the $Q$-network in DQN and the critic in actor-critic algorithm should be updated in the faster timescale. \\n\\n[1]. Natural Actor\\u2013Critic Algorithms by Bhatnagar et. al.\"}",
"{\"title\": \"Our work is different from Dai et. al, which is a batch RL algorithm and involves solving non-concave maximization problems to global maximum.\", \"comment\": \"1. We will add more numerical experiments in the revised version and modify the claim of ``thorough experiments''.\\n\\n2. Although [Dai et. al] formulates soft-Q learning as a minimax optimization problem and propose a convergent algorithm, their algorithm is actually not online. The reason is that in the inner maximization step, they need to solve the non-concave maximization to the global maximum. They achieve this step by solving a batch optimization using experience replay and assume that the global maximum can be found by an optimization oracle. Please see Line 7 of Algorithm 1 in [Dai et. al] (https://arxiv.org/pdf/1712.10285.pdf, page 6).\\n\\nThus, given the fact that [Dai et. al] is not an online algorithm, we believe that our claim that our work is the ``first attempt to study the convergence of online reinforcement learning algorithms with nonlinear function approximation in general'' is correct. \\n\\nMoreover, we note that soft Q-learning, which is considered in [Dai et. al], can also be formulated as bilevel optimization by replacing the Bellman operator in (3.7) by the smoothed Bellman operator. \\n\\nFurthermore, our Algorithm 1 is also an off-policy algorithm, where $\\\\rho$ is the stationary distribution on $(S\\\\times A)$ induced by the behavior policy. Please see the second paragraph on Page 5. We will emphasize that our method is off-policy in the revised version. \\n\\nWe thank the reviewer for listing this related work. We will add detailed discussions of this work in revision. Please also see Point 2 in the response to Reviewer 1 for more discussions of [Dai et. al].\\n\\n3. Our actor-critic algorithm is on-policy. We will clarify this in the revised version. For off-policy actor-critic with function approximation, to the best of our knowledge, [1] is the only paper with convergence analysis. The critic update is either GTD(\\\\lambda) or Emphatic-TD(\\\\lambda) and linear function approximation is applied. Their algorithm can also be easily incorporated by the bilevel optimization framework. We will discuss this work in the revised version.\", \"related_work\": \"[1]. Convergent Actor-Critic Algorithms Under Off-Policy Training and Function\\nApproximation\"}",
"{\"title\": \"Our bilevel framework seems novel. Dai et al is not an online algorithm and only focus soft-Q learning, which can also be formulated under our framework.\", \"comment\": \"1. The bilevel viewpoint. Although the bilevel optimization perspective might be used implicitly in the RL literature, to the best of our knowledge, our work is the first to rigorously bridge the problems of Q-learning and actor-critic under the framework of bilevel optimization. The reviewer seems to believe that there is existing work that has provided such a framework already. If would be great if the reviewer could point us to an example of related work. Moreover, we believe that our formulations of Q-learning and policy learning as constrained bilevel optimization problems are novel.\\n\\n2. We will discuss [Dai et. al.] in the revised version. Specifically, they propose a primal-dual formulation for soft Q-learning, which is a value function estimation problem that aims to find the fixed point of the smoothed Bellman operator. This problem can also be formulated as a bilevel optimization problem similar to (3.6). \\n\\nThe differences between our paper and [Dai et. al.] are as follows.\\n\\n(1). Their problem is formulated as minimax optimization where both the inner maximization and outer minimization problems are neither convex nor concave. They propose a batch algorithm for which, in each iteration, requires solving the inner non-concave maximization problem to its global optima, which can hardly be satisfied in practice. \\n\\nIn contrast, we propose online TD-learning algorithms that are shown to be convergent.\\n\\n(2). They study only a particular example of value function estimation, which falls in our bilevel optimization framework. In addition to value function estimation, our framework also includes actor-critic, generative adversarial imitation learning, and inverse reinforcement learning. \\n\\n(3). [Dai et. al] use the Fenchel duality to attack the double sampling issue in soft Q-learning, where the dual function tracks the TD-error. Thus, their dual function essentially is the same as our $Q_{\\\\theta} - T Q_{\\\\omega}$, which implies that their dual function plays a similar role as the target network. Moreover, their Fenchel duality approach cannot be applied to actor-critic. In contrast, our bilevel formulation is more general than their Fenchel duality view.\\n\\n\\n3. In terms of numerical experiments, our goal is to show that two-time-scale learning rates proposed in Algorithm 1 is essential. The two-time-scale learning rates are motivated by the bilevel formulation. When applied to Q-learning, this implies that the target network should be updated at a slower rate compared with the Q-network. When applied to actor-critic, this implies that the critic should be updated at a faster rate. These observations are further corroborated in the experiments. We believe that these ideas are also useful in the choice of learning rates for practitioners.\"}",
"{\"title\": \"Interesting theoretical work, but missing key previous literature\", \"review\": \"The authors frame value function estimation and policy learning as bilevel optimization problems, then present a two-timescale stochastic optimization algorithm and convergence results with non-linear function approximators. Finally, they relate the use of target networks in DQN to their two-timescale procedure.\\n\\nThe authors claim that their first contribution is to \\\"unify the problems of value function estimation and policy learning using the framework of bilevel optimization.\\\" The bilevel viewpoint has a long history in the RL literature. Are the authors claiming novelty here? If so, can they clarify which parts are novel?\\n\\nThe paper is missing important previous work, SBEED (Dai et al. 2018) which shows (seemingly much stronger) convergence results for a smoothed RL problem. The authors need to compare their approach against SBEED and clearly explain what more they are bringing. Furthermore, the Fenchel trick used in SBEED could also be used to attack the \\\"double sampling\\\" issue here, resulting in a saddle-point problem (which is more specific than the bilevel problem). Does going to the bilevel perspective buy us anything?\\n\\n=====\\n\\nIn response to the author's comments, I have increased my score.\\nThe practical implications of this theoretical work are unclear. It's nice that it relates to DQN, but it does not provide additional insight into how to improve existing approaches. The authors could significantly strengthen the paper by expanding in this area.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting work but some of the claims need to be adjusted.\", \"review\": \"This paper interprets the fitted Q-learning, policy evaluation and actor-critic as a bi-level optimization problem. Then, it uses two-timescale stochastic approximation to prove their convergence under nonlinear function approximation. It provides interesting view of the these existing popular reinforcement learning algorithms that is widely used in DRL. However, there are several points to be addressed in the revision, which are mainly in some of its claims.\\n\\nThis paper is mainly a theoretical paper and experiments are carried out on a few simple tasks (Acrobot, MountarinCar, Pong and Breakout). Therefore, it cannot be claimed as \\u201cthorough numerical experiments are conducted\\u201d as in abstract. This claim should be modified.\\n\\nFurthermore, it cannot be claimed that this paper is a \\u201cfirst attempt to study the convergence of online reinforcement learning algorithms with nonlinear function approximation in general\\u201d. There is a recent work [1], which developed a provably convergent reinforcement learning algorithm with nonlinear function approximation even in the off-policy learning setting.\\n[1] B. Dai, A. Shaw, L. Li, L. Xiao, N. He, Z. Liu, J. Chen, L. Song, \\u201cSBEED Learning: Convergent Control with Nonlinear Function Approximation\\u201d, ICML, 2018.\\n\\nThe actor-critic algorithm in the paper uses TD(0) as its policy evaluation algorithm. It is known that the TD(0) algorithm will diverge in nonlinear function approximation and in off-policy learning case. I think the actor-critic algorithm analyzed in the paper is for on-policy learning setting. The authors need to clarify this. Furthermore, the authors may need to comment on how to extend the results to off-policy learning setting.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Incremental theoretical result on Q-learning/actor-critic algorithms; Experiments are quite small-scale\", \"review\": \"In this paper the authors studied reinforcement learning algorithms with nonlinear function approximation. By formulating the problems of value function estimation and policy learning as a bilevel optimization problems, the authors proposed\\nQ-learning and actor-critic algorithms that also contains convergence properties, even when nonlinear function approximations are used. Similar to the stochastic approximation approach adopted by many previous work such as Borkar (https://core.ac.uk/download/pdf/148488247.pdf), they analyze the convergence properties by drawing connections to stability of a two-timescale ODE. Furthermore they also evaluated the effectiveness of the modified Q-learning/actor-critic algorithms on two toy examples.\\n\\nIn general I find this paper interesting in terms of addressing a long-standing open question of convergence analysis of actor-critic/Q-learning algorithms, when general nonlinear function approximations are used. Through reformulating the problem of value estimation and policy improvement as a bilevel optimization problem, they proposed modifications of Q-learning and actor-critic algorithms, and under certain assumptions they showed that these algorithms converge, which is a non-trivial contribution. \\n\\nWhile I appreciate the effort of extending existing analysis of these RL algorithms to general nonlinear function approximation, I find the result of this paper rather incremental. While convergence results are provided, I am not sure how practical are the assumptions listed in the paper. Correct me if i am wrong, it seems that the assumptions are stated for the sake of proving the theoretical results without much practical justifications (especially Assumption 4.3). Furthermore how can one ensure that these assumptions hold (for example Assumption 4.3 (i) and (ii), especially on the existence of locally stable equilibrium point) ? Unfortunately I haven't had a chance to go over all the proof details, it seems to me the analysis is built upon two-time scale stochastic approximation theory, which is a standard tool in convergence analysis of actor-critic. Since the contribution of this paper is mostly theoretical, can the authors highlight the novel contribution (such as proof techniques used here that are different than that in standard actor-critic analysis from e.g. https://www.semanticscholar.org/paper/Natural-actor-critic-algorithms-Bhatnagar-Sutton/6a40ffc156aea0c9abbd92294d6b729d2e5d5797) in the main paper?\\n\\nMy other concern is on the scale of the experiments. While this paper focused on nonlinear function approximation, the examples chosen to evaluate these algorithms are rather small-scale. For example the domains to test Q-learning are standard in RL, and they were previously used to test algorithms with linear function approximation. Can the author compare their results with other existing baselines?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some imprecisions, but interesting new perspective\", \"review\": \"The paper casts the problems of value learning and policy optimization, which can be problematic the non-linear setting, into the bilevel optimization framework. It proposes two novel algorithms with convergence guarantees. Although other works with similar guarantees exist, these algorithms are very appealing for their simplicity. A limited empirical evaluation is provided for the value-based method in Acrobot and Mountain Car and in the Atari games Pong and Breakout for the proposed bilevel Actor Critic.\\n\\nThere are a few missing references to similar, recent work, including Dai et al\\u2019s saddle-point algorithm (https://arxiv.org/pdf/1712.10285.pdf). Also, the claim that this is \\u201cthe first attempt to study the convergence of online reinforcement learning algorithms with nonlinear function approximation\\u201d can\\u2019t be true (even replacing \\u2018attempt\\u2019 by \\u2018successfully\\u2019, there is e.g. Maei et al.\\u2019s nonlinear GTD paper, see below).\\n\\nAlthough certainly interesting, the claims relating bilevel optimization and the target network are not completely right. E.g. Equation 3.6 as given is a hard constraint on omega. More explicitly, there are no guarantees that either network is the minimizer of the RHS quantity in 3.6.\\n\\nThe two-timescale algorithm is closer in spirit to the use of a target network, but in DQN and variants the target network is periodically reset, as opposed to what the presented theory would suggest. A different breed of \\u201csoft target\\u201d networks, which is more closely related to bilevel optimization has been used to stabilize training in DDPG (https://arxiv.org/abs/1509.02971).\\n\\nThere was some confusion for me on the first pass that you define two algorithms called \\u2018online Q-learning\\u2019 and \\u2018actor-critic\\u2019. Neither algorithm is actually that, and they should be renamed accordingly (perhaps \\u2018bilevel Q-Learning\\u2019 and \\u2018bilevel actor-critic\\u2019?). In particular, standard Q-Learning is online; and the actor-critic method does not minimize the Bellman residual (i.e. I believe the RHS of 3.8 is novel within policy-gradient methods).\\n\\nOnce we\\u2019re operating on a bounded space with continuous operators, Theorem 4.2 is not altogether surprising \\u2013 a case of Brouwer\\u2019s fixed point theorem, short of the result that theta* = omega*, which is explained in the few lines below the theorem. While I do think Theorem 4.2 is important, it would be good to contrast it to existing results from the GTD family of approaches. Also, requiring that |Q_theta(s,a)| <= Qmax is a significant issue -- effectively this test fails for most commonly used value-based algorithms.\\n\\nThe empirical evaluation lacks any comparison to baselines and serves for little more than as a sanity check of the developed theory. This is probably the biggest weakness of the paper, and is unfortunate given the claim of relevance to e.g. deep RL.\\n\\n\\n\\nQuestions\\n\\nThroughout, the assumption of the data being sampled on-policy is made without a clear argument as to why. 
Would the relaxation of this assumption affect the convergence results?\\n\\nCan the authors provide an intuitive explanation if/why bilevel optimization is necessary?\\n\\nCan you contrast your work with Maei et al., \\u201cConvergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation\\u201d?\\n\\n\\nSuggestions\\n\\nThe discussion surrounding the target network should be improved. In particular, claiming that the DQN target network can be viewed \\u201cas the parameter of the upper level optimization subproblem\\u201d is a stretch from what is actually shown.\\n\\nThe paper was sometimes hard to follow, in part because the claims are not crisply made. I strongly encourage the authors to more clearly relate their results to existing work, and ensure that the names they use match common usage.\\n\\nI would have liked to know more about bilevel optimization, what it aims to solve, and the tools used to do it. Instead all I found was very standard two time-scale methods, which was a little disappointing \\u2013 I don\\u2019t think these have been found to work particularly well in practice. This is particularly relevant in the context of e.g. the target network question.\\n\\nA proper empirical comparison to existing algorithms would significantly improve the quality and relevancy of this work. There are tons of open-source baselines out there, in particular good state of the art implementations. Modifying a standard implementation to optimize its target network along the lines of bilevel optimization should be relatively easy.\", \"revision\": [\"I thank the authors for their detailed feedback, but still think the work isn't quite ready for publication. After reading the other reviews, I will decrease my score from 6 to 5. Some sticking points/suggestions:\", \"Some of my concerns remain unanswered. E.g. the actor-critic method 3.8 is driven by the Bellman residual, which is not the same as e.g. the MSPBE used with linear function approximation. There is no harm in proposing variations on existing algorithms, and I'm not sure why the authors are reluctant to do. Also, Brouwer's fixed point theorem, unlike Banach's, does not require a contractive mapping.\", \"The paper over-claims in a number of places. I highly recommend that the authors make their results more concrete by demonstrating the implications of their method on e.g. linear function approximation. This will also help contrast with Dai et al., etc.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkgqCiRqKQ | Inferring Reward Functions from Demonstrators with Unknown Biases | [
"Rohin Shah",
"Noah Gundotra",
"Pieter Abbeel",
"Anca Dragan"
] | Our goal is to infer reward functions from demonstrations. In order to infer the correct reward function, we must account for the systematic ways in which the demonstrator is suboptimal. Prior work in inverse reinforcement learning can account for specific, known biases, but cannot handle demonstrators with unknown biases. In this work, we explore the idea of learning the demonstrator's planning algorithm (including their unknown biases), along with their reward function. What makes this challenging is that any demonstration could be explained either by positing a term in the reward function, or by positing a particular systematic bias. We explore what assumptions are sufficient for avoiding this impossibility result: either access to tasks with known rewards which enable estimating the planner separately, or that the demonstrator is sufficiently close to optimal that this can serve as a regularizer. In our exploration with synthetic models of human biases, we find that it is possible to adapt to different biases and perform better than assuming a fixed model of the demonstrator, such as Boltzmann rationality. | [
"Inverse reinforcement learning",
"differentiable planning"
] | https://openreview.net/pdf?id=rkgqCiRqKQ | https://openreview.net/forum?id=rkgqCiRqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkegL_4egV",
"rkl48xtZ6m",
"ByeDZjQ62m",
"BJxgiuKq2X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544730696420,
1541668939933,
1541384959030,
1541212312197
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper921/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper921/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper921/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper921/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors study an inverse reinforcement learning problem where the goal is to infer an underlying reward function from demonstration with bias. To achieve this, the authors learn the planners and the reward functions from demonstrations. As this is in general impossible, the authors consider two special cases in which either the reward function is observed on a subset of tasks or in which the observations are assumed to be close to optimal. They propose algorithms for both cases and evaluate these in basic experiments. The problem considered is important and challenging. One issue is that in order to make progress the authors need to make strong and restrictive assumptions (e.g., assumption 3, the well-suited inductive bias). It is not clear if the assumptions made are reasonable. Experimentally, it would be important to see how results change if the model for the planner changes and to evaluate what the inferred biases would be. Overall, there is consensus among the reviewers that the paper is interesting but not ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea but paper needs more work\"}",
"{\"title\": \"Interesting topic and approach, needs work and careful evaluation\", \"review\": \"Not all examples in the introduction are necessarily biases but can be modeled with reward functions, where reward is given to specific states other than finishing work by the deadline. It would be helpful for the reader to get examples that correspond to the investigated biases.\\n\\nIt would be good if the authors could at least mention that \\u201cBoltzmann rational\\u201d is a specific model of \\u201csystematic\\u201d bias for which much experimental support eith humans and animals exists. \\n\\nThe authors are strongly encouraged to review the literature on IRL, which includes other examples of modeling explicitly suboptimal agents, e.g.:\\n- Rothkopf, C. A., & Dimitrakakis, C. (2011). Preference elicitation and inverse reinforcement learning. ECML.\\nSimilarly, the idea to learn an agent\\u2019s reward functions across multiple tasks has also appeared in the literature before, e.g.:\\n- Dimitrakakis, C., & Rothkopf, C. A. (2011). Bayesian multitask inverse reinforcement learning. EWRL.\\n- Choi, J., & Kim, K. E. (2012). Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. NIPS\", \"the_authors_state\": \"\\u201cThe key idea behind our algorithms is to learn a model of how the demonstrator plans, and invert the model\\u2019s \\\"understanding\\\" using backpropagation to infer the reward from actions.\\u201d\\nIt would be also important in this case to relate this to prior work, as several authors have proposed a very similar idea, in which a particular parameterization of the agent\\u2019s planning given the rewards and the transition function are learned, including Ziebart et al. and Dimitrakakis et al. This is also related to \\n- Neu, G., & Szepesv\\u00e1ri, C. (2007). Apprenticeship learning using inverse reinforcement learning and gradient methods. UAI.\\n\\nIt would be great if the authors could also discuss how assumption 3 is a necessary for accurately inferring reward functions and biases and how deviations from this assumption interfere with the goal of this inference. This seems to be a central and important point for the viability of the approach the authors take here.\\n\\nCurrently, the evaluation of the proposed method is in terms of the loss incurred by a planner between the inferred reward function and the true reward function, figure 3. It would be important for the evaluation of the current manuscript to know what the inferred biases are. That using a wrong model of how actions are generated given values, e.g. myopic vs. Boltzmann-rational, results in wrong inferences, should not be too surprising. Therefore, the main question is: does the proposed algorithm recover the actual biases?\", \"minor_points\": \"\\u201clike they naive and sophisticated hyperbolic discounters\\u201d\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Paper studies a relevant and interesting problem but needs extended empirical evaluation\", \"review\": \"This paper addresses the interesting and challenging problem of learning the reward function from demonstrators which have unknown biases. As this is in general impossible, the authors consider two special cases in which either the reward function is observed on a subset of tasks or in which the observations are assumed to be close to optimal. They propose algorithms for both cases and evaluate these in basic experiments.\\n\\nThe studied problem is relevant as many/most demonstrators have unknown biases and we still need methods to effectively learn from those.\\n\\nAs far as I am aware of the related literature, the problem has not been studied in that explicit form although there is related work which targets the problem of learning from suboptimal demonstrators or demonstrators that can fail, e.g. [1] (I suggest to discuss this and other relevant papers in a related work section).\", \"the_main_shortcomings_of_the_paper_are_a_lack_of_clarity_at_certain_points_and_a_limited_experimental_validation\": [\"For instance, the formalization of \\u201eAssumption 1\\u201c is unclear. In which sense does this cover similarity in planing? As far as I understand, the function D could still map any combination of world model and reward function to any arbitrary policy. What does it mean that the planning algorithm D is \\u201efixed and independent\\u201c?\", \"A crucial point requiring more investigation in my opinion is Assumption 3 (well-suited inductive bias). Empirically the chosen experimental setup yields expected results. However, to better understand the problem of learning with unknown biases it would be important to see how results change if the model for the planner changes. A small step in that direction would have been to provide results for value iteration networks with different number of iterations and number neurons, etc.\", \"If you use the differentiable planner instead of the VIN, how many iterations do you unroll?\", \"Is there any evidence that the proposed approach can work effectively in larger scale domains with more difficult biases? Also in the case in which the biases are inconsistent among demonstrations?\"], \"further_suggestions\": [\"Test how algorithm 1 performs if first initialized on simulated optimal demonstrations.\", \"Improve notation for the planning algorithm D by using brackets.\", \"[1] Shiarlis, K., Messias, J., & Whiteson, S. (2016, May). Inverse reinforcement learning from failure. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 1060-1068). International Foundation for Autonomous Agents and Multiagent Systems.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Excellent motivation of work but lacks technical merit; results not convincing\", \"review\": \"This paper has proposed algorithms for inferring reward functions from demonstrations with unknown biases. To achieve this, the authors have proposed to learn planners from demonstrations in multiple tasks via value iteration networks to learn the reward functions.\\n\\nThis paper has provided an excellent motivation of their work in Sections 1 & 2 with references being made to human behaviors and heuristics, though the authors can choose a more realistic running example that is less extreme than making orthogonal decisions/actions. The paper is well-written, up till Section 4. \\n\\nOn the flip side, there does not seem to be any significant technical challenges, perhaps due to some of the assumptions that they have made. Like the authors have mentioned, I do find assumption 3 to be overly strong and restrictive, as empirically demonstrated in Section 5.2. Arguably, is it really weaker than that of noisy rationality? At this moment, it is difficult to overlook this, even though the authors have argued that it may not be as restrictive in the future when more sophisticated differentiable planners are developed.\\n\\nThe experimental results are not as convincing as I would have liked. In particular, Algorithm 2 (learning a demonstrator's model) does not seem to outperform that of assuming an optimal demonstrator for the noiseless case and a Boltzmann demonstrator for the noisy case (Fig. 3). This was also highlighted by the authors as well: \\\"The learning methods tend to perform on par with the best of two choices.\\\" It begs the question whether accounting for unknown systematic bias can indeed outperform the assumption of a particular inaccurate bias when we know a priori whether the demonstrations are noisy or not.\", \"other_detailed_comments_are_provided_below\": \"I would have preferred that the authors present their technical formulations in Section 4 using the demonstrator's trajectories instead of policies.\\n\\nThe authors say that \\\"In some cases, like they naive and sophisticated hyperbolic discounters, especially the noisy ones, the learning methods outperform both optimal and Boltzmann assumptions.\\\" But, Fig. 3 shows that Algorithm 2 does not perform better than either that of the optimal or Boltzmann demonstrator.\\n\\nIn Section 5.2, the authors have empirically demonstrated the poor approximate planning performance of VIN, as compared to an exact model the demonstrator. What then would its implications be on the adaptivity of Algorithms 1 and 2 to biases?\", \"the_following_references_on_irl_with_noisy_demonstration_trajectories_would_be_relevant\": \"Benjamin Burchfiel, Carlo Tomasi, and Ronald Parr. Distance Minimization for Reward Learning from Scored Trajectories. In Proc. AAAI, 2016.\\n\\nJ. Zheng, S. Liu, and L. M. Ni. Robust Bayesian inverse reinforcement learning with sparse behavior noise. In Proc. AAAI, 2014.\", \"minor_issues\": \"On page 4, the expression D : W \\u00d7 R -> S -> A -> [0, 1] can be more easily understood with the use of parentheses.\\n\\nFor Assumption 2b, you can italicize \\\"some\\\".\\n\\nIn the first paragraph of section 4.1, what are you summing over?\", \"line_3_of_algorithm_1\": \"PI_W?\", \"page_7\": \"so as long as?\", \"page_8\": \"figure 4 shows?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkeqCoA5tX | LEARNING GENERATIVE MODELS FOR DEMIXING OF STRUCTURED SIGNALS FROM THEIR SUPERPOSITION USING GANS | [
"Mohammadreza Soltani",
"Swayambhoo Jain",
"Abhinav V. Sambasivan"
] | Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that the clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. In this paper, we consider the problem of learning GANs under the observation setting when the samples from target distribution are given by the superposition of two structured components. We propose two novel frameworks: denoising-GAN and demixing-GAN. The denoising-GAN assumes access to clean samples from the second component and try to learn the other distribution, whereas demixing-GAN learns the distribution of the components at the same time. Through comprehensive numerical experiments, we demonstrate that proposed frameworks can generate clean samples from unknown distributions, and provide competitive performance in tasks such as denoising, demixing, and compressive sensing. | [
"Generative Models",
"GANs",
"Denosing",
"Demixing",
"Structured Recovery"
] | https://openreview.net/pdf?id=rkeqCoA5tX | https://openreview.net/forum?id=rkeqCoA5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJe8q8VwlE",
"SylumfyqRX",
"Byx-dJJqCQ",
"rJg9EyCtAQ",
"SJgIjPdhhm",
"B1ew6SL9nX",
"HJlZFxF_3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545189006134,
1543266847565,
1543266153082,
1543262002299,
1541339037923,
1541199294773,
1541079161197
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper919/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper919/Authors"
],
[
"ICLR.cc/2019/Conference/Paper919/Authors"
],
[
"ICLR.cc/2019/Conference/Paper919/Authors"
],
[
"ICLR.cc/2019/Conference/Paper919/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper919/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper919/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes two simple generator architecture variants enabling the use of GAN training for the tasks of denoising (from known noise types) and demixing (of two added sources). While the denoising approach is very similar to AmbientGAN and could thus be considered somewhat incremental, all reviewers and the AC agree that the developed use of GANs for demixing is an interesting novel direction. The paper is well written, and the approach is supported by encouraging experimental results on MNIST and Fashion-MNIST.\", \"reviewers_and_ac_noted_the_following_weaknesses_of_the_paper\": \"a) no theoretical support or analysis is provided for the approach, this makes it primarily an empirical study of a nice idea.\\nb) For an empirical study, the experimental evaluation is very limited, both in terms of dataset/problems it is tested on; and in terms of algorithms for demixing/source-separation that it is compared against. \\nFollowing these reviews, the authors added the experiments on Fashion-MNIST and comparison with ICA which are steps in the right direction. This improvement moved one reviewer to positively update his score, but not the others.\\nTaking everything into account, the AC judges that it is a very promising direction, but that more extensive experiments on additional benchmark tasks for demixing and comparison with other demixing algorithms are needed to make this work a more complete contribution.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting and promising novel approach for demixing, but with no theoretical grounding and limited experimental evaluation\"}",
"{\"title\": \"We thank the reviewer 's encouraging feedback. We believe that the manuscript has been significantly strengthened by the reviewer's suggested changes.\", \"comment\": \"We summarize the reviewer\\u2019s concerns below:\\n\\nThe compressed sensing experiment was intended to test whether the proposed GAN based approaches can learn a generative model from heavily corrupted data. The experiment setup is similar to the one considered in the seminal paper Bora, et al (2016). \\\"Compressed sensing using generative models.\\\" The parameters were chosen using the approach described in this paper. In the updated draft, we have provided more details about the LASSO approach about how the L1 parameter has been chosen and other details in section 5.2. \\n\\n-We tried to fix the typos and other grammar issues in the revised version.\\n\\nAs the reviewer's' concern is similar to that of reviewer 2, we repeat our response here for the sake of completion: \\n\\n(1) We have also compared the quality of recovery of constituent components with the ground-truth through numerical criteria such as MSE and PSNR. Please see section 4.5 and 5.3.1 in the revised version of the paper. In the revised version of the paper, we have shown more experiments for another dataset, F-MNIST, and experiments based on both F-MNIST and MNIST datasets to support our proposed demixing-GAN. Please refer to appendix for the set of new experiments. In addition, we compare the performance of demixing-GAN with ICA method for both MNIST dataset, F-MNIST dataset, and a combination of them. Please see the appendix. As illustrated, ICA fails to demix superposed images from each other. We have to mention that while there are various methods for demixing of structured signals from their superpositions such as ones proposed by Hegde et al., 2012; Soltani and Hegde, 2017; McCoy and Tropp, 2014, these methods assume prior knowledge of the sets on which the constituent components lie. This is different from our setup in which there is prior knowledge is assumed and two generators in the demixing-GAN framework are responsible for providing this knowledge without hard-coding approach. As a result, we have selected only ICA as our benchmark to compare the performance of demixing-GAN.\\n\\n(2) Experiments with MNIST digits placed onto natural images is an interesting suggestion. We think these experiments are possible but given the time we are not able to do this experiment. However, we did an experiment in a similar spirit where we mixed images from F-MNIST and MNIST dataset. Our approaches were to successfully demix these two signals. \\nPlease see section 5.3 to 5.5 in the appendix of the revised version of the paper.\"}",
"{\"title\": \"We thank the reviewer's valuable feedback. We believe that the manuscript has been significantly strengthened by the reviewer's suggested changes.\", \"comment\": \"We summarize the reviewer\\u2019s concerns below:\\n-The limited set of experiments \\n-Lack of theoretical analysis for the proposed optimization problems \\n-Identifiability issue in the demixing problem\\n-Comparison with other methods\\n\\n\\nAnswer to (A): \\nWe acknowledge that the experiment only on MNIST dataset is limited and may not be very satisfactory. In the revised version, we have added the experiments for Fashion-MNIST (F-MNIST) dataset regarding our demixing setup to the appendix of the paper. Please see section 5.3 in the appendix for the details of this new set of experiments. The F-MNIST dataset includes 60000 training 28X28 gray-scale images with 10 labels for the objects. As discussed in the appendix, we have trained the demixing-GAN and used the trained generators for the demixing task for different experiment scenarios. In addition, We have conducted another set of experiments based on the mixing of MNIST digits with the F-MNIST objects. In this case, the proposed demixing-GAN is also able to learn the samples of the two components.\\n\\n\\nAnswer to (B): \\nWe acknowledge that the proposed demixing strategy (regularized optimization problem) has not been supported by theoretical guarantee. This is certainly an interesting future direction for us. In this current work, our goal is mostly to verify the ability of GANs for capturing the prior knowledge about the manifolds on which the constituent component lie through the numerical exploration. Since the optimization problem even for simpler denoising case is non-convex, its convergence analysis even for the stationary point is a very challenging problem. As our best knowledge, most of GANs paper suffer from this regard and most of them explore the properties of GAN framework through some numerical verification.\\n\\n\\nAnswer to (C): \\nAs the reviewer has mentioned the inherent identifiability of the constituent components makes the demixing problem an ill-posed problem. We are well-aware about this issue. Without some notion of so-called incoherent between the constituent components, separating of the components is not possible. This issue has been addressed in the Elad et al. (2005); Hegde et al. (2012). However, in these works, the structure of the components is assumed to be sparse in some known domain and through this assumption, the authors characterize the notion of incoherent. What GAN is generating can be considered as an appropriate prior knowledge about the manifolds on which the components lie. \\n\\nAs an attempt to understand the capability of the demixing-GAN, we empirically observed that the hidden representation (z space) of the generators for characterizing the distribution of the components play an essential role to the success/failure of the demixing-GAN. We investigate this observation through some numerical experiment in section 5.5 in the appendix of the revised version. Through some empirical observation, we conjecture that having (random) independent or close orthogonal vector z's for the input of each generator is a necessary condition for the success of learning of the distribution of the constituent components and consequently demixing of them. \\n\\nWe do not quite follow when the reviewer has mentioned that the \\u201ccolumn spaces of two generators\\u2026\\u201d. 
GAN is a highly nonlinear and nonconvex map from low-dimensional (hidden variable space) to high-dimensional (signal space). We don\\u2019t think so that we can talk about the incoherent of the column space for the output of generators as their action on a vector z cannot be captured by a matrix-vector multiplication. But certainly, we agree with the reviewer about some notion of incoherent should be analyzed for fundamental identifiability issue between the components. This is the subject of our future study.\\n\\n\\nAnswer to (D): \\nTo address the lack of sufficient comparisons, we have provided some experiments through ICA for demixing of two digits from MNIST and F-MNIST datasets. As illustrated in the revised paper, ICA fails to separate the components from each other, while this is not the case for the proposed demixing-GAN. Regarding RPCA comment, if the reviewer means robust principal component analysis, we do not think that that RPCA is related to our setup. In RPCA, we assume that one part is sparse and another is low-rank. Posing the low-rank and sparse constraints in the output of generators is not clear. In this paper, we are mainly demonstrating that the generative models for two data sources can be learned from their additive superposition. Whereas, in RPCA, the structure of the two constituents signals is hard-coded and a fixed apriori. Therefore two approaches operate in fundamentally different settings. We agree that the comparison of the proposed model with RPCA seems reasonable and it is an interesting research question to pose the low-rank and sparse constraints in the output of generators. More investigation about this direction might be an interesting future research.\"}",
"{\"title\": \"We thank the reviewer for his/her valuable feedback. We believe that the manuscript has been significantly strengthened by the reviewer's suggested changes.\", \"comment\": \"We summarize the reviewer\\u2019s concerns below:\\n\\nRegarding (1):\\nWe agree that the demixing problem suffers from a fundamental separability issue which is sometimes referred to as incoherent of constituent components in the literature. This makes the demixing problem as an ill-posed problem. Please refer to the Answer (C) from the first reviewer for further discussion about this issue. In particular, please see section 5.5 in the appendix of the revised version of the paper where we have investigated an interesting empirical observation for the success/failure of the propose demixing-GAN approach based on the hidden variable space (z space) of the generators through some experiments. \\n\\nIn the revised version, we have detailed more information about the experiments setting, such as the methods used to initialize the two generators for our experimental results in the appendix. We have added these details in section 5.1 of the revised version of the pape. \\n\\nRegarding (2):\\nIn the revised version of the paper, we have shown more experiments for another dataset, F-MNIST, and experiments based on both F-MNIST and MNIST datasets to support our proposed demixing-GAN. Please refer to appendix for the set of new experiments. In addition, we compare the performance of demixing-GAN with ICA method for both MNIST dataset, F-MNIST dataset, and a combination of them. Please see the appendix. As illustrated, ICA fails to demix superposed images from each other. We have to mention that while there are various methods for demixing of structured signals from their superpositions such as ones proposed by McCoy and Tropp (2014); Hegde et al. (2012); Soltani and Hegde, (2017), these methods assume prior knowledge of the sets on which the constituent components lie. This is different from our setup in which no prior knowledge is assumed and two generators in the demixing-GAN framework are responsible for providing the knowledge of the low-dimensional manifolds as opposed to the hard-coding approaches. As a result, we have selected only ICA as our benchmark to compare the performance of demixing-GAN.\\n\\nWe have also compared the quality of recovery of constituent components with the ground-truth through numerical criteria such as MSE and PSNR. Please see section 4.5 and 5.3.1 in the revised version of the paper.\"}",
"{\"title\": \"Review\", \"review\": \"In this, paper a GANs-based framework for additive (image) denoising and demixing is proposed. The proposed methodology for denoising largely relies on the Ambient GAN model and hence the technical contribution of the paper in this task appears to be limited. Regarding demixing, as explained in the comments below, the proposed model appears to be superficial in the sense that neither theoretical analysis nor thorough empirical evaluation is provided. The proposed method is evaluated on both tasks (i.e., denoising and demixing) by conducting toy experiments on handwritten digits (MNIST).\\n\\nMore specifically, the authors employ the Ambient GAN to train a generator that generates clean samples when the type of corruption is known (i.e., when corruption is modelled by a known function which interacts with the clean data in an additive way). For denoising, the authors propose to learn the latent variable that generates the clean test image by solving a ridge regularized non-convex inverse problem (Eq. 3). The problem is solved via gradient descent and theoretical analysis on the converge of the algorithm is not provided. Clearly, this approach has limited practical applications since the corruption function needs to be known which rarely happens in practice.\\n\\nNext, considering additive demixing, the authors assume that the corruption/structured signal is unknown but it can be modelled using a convolutional network (using the architecture of DCGAN). They employ the same network architecture for modelling the clean data generation process and learn the parameters of both generators using adversarial training. Demixing is performed by solving a similar by solving a similar ridge regularized non-convex inverse problem as in the case of denoising (i.e., Eq. 4). As authors mention in the paper, it is indeed surprising that the proposed GANs-based model with two generators is able to produce samples from the distribution of each signal component by observing only additive mixtures of these signals. Without any assumptions, the proposed model is not identifiable. This is my main concern regarding this paper and a theoretical investigation is definitely needed. My main questions revolve around under what conditions the column spaces of the two generators are mutually independent and what is the type of components structure that the proposed model can recover. \\n\\nAs mentioned above, the experimental evaluation is limited to the NMIST dataset while comparisons with existing related models such as RPCA and ICA that work efficiently and with guarantees in the additive setting studied in this paper are considered essential in order to prove empirically the merits of the proposed framework.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Extension of AmbientGAN on denoising and demixing problems\\uff0c but experiments are not sufficient\", \"review\": \"This paper proposed two new GAN structures for learning a generative modeling using the superposition of two structured components. These two structures can be viewed as an extension of AmbientGAN. Experiments results on MNIST dataset are presented. Overall, the demixing-GAN structure is relatively novel. However, the potential application seems limited and the experiment result is not sufficient enough to support the idea. Detail comments are as following,\\n\\n\\n1.\\tIt seems there are no independent assumption imposed on the addition of two generators. It is possible that the possible model only will works on simple toy example, where the distributions of two structured components are drastic different. Or the performance will be affected by the initialization. It would be nice if the author test this on more realistic examples, such as the source separation problem in acoustic or the unmixing problem in hyper-spectral images. More detail information about the experiments setting, such as the methods used to initialize the two generators are need. \\n2.\\tIn the experiment part, it would be nice to have Quantitive results presented, for example PSNR for denoising. Simple comparison with several traditional methods could also help understanding the advantage of the model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"contribution, but experiments lacking\", \"review\": \"Quality is good, just a handful of typos.\\nClaritys above average in explaining the problem setting.\", \"originality\": \"scan refs...\", \"significance\": \"medium\", \"pros\": \"the authors develop a novel GAN-based approach to denoising, demixing, and in the process train generators for the various components (not just inference). Further, for inference, the authors propose an explicit procedure. It seems like a noveel approach to demixing which is exciting.\", \"cons\": \"The experiments do not push the limits of their method. It's difficult to judge the demixing 'power' of the method because it's difficult to tell how hard the problem is. Their method seems to easily solve it (super low MSE). The classification measure is clearly improved by denoising, which is totally unsurprising-- There should definitely be comparison with other denoising methods.\\n\\nIn general, they don't compare to any other methods. Actually in the appendix, comparisons are provided for a basic compressive sensing problem, but their only comparator is \\\"LASSO\\\" with a \\\"fixed regularization parameter\\\", and vanilla GAN. Since the authors \\\"main contribution\\\" (their words) is demixing, I'm surprised that they did not compare with other demixing approaches, or try on a harder problem. Could you give some more details about the LASSO approach? How did you choose the L1 parameter?\\n\\nI have another problem with the demixing experimental setting. On one hand, both the sinusoids and MNIST have \\\"similar characteristics\\\" in the sense that they are both pretty sparse, basically simple combinations of primary curves. This actually makes the problem harder for a dictionary learning approach like MCA (referenced in your paper). On the other hand, both signals are very simple to reconstruct. For example, what if you superimposed the grid of digits onto a natural image? Would you be able to train the higher resolution GAN to handle a more difficult setting? The other demixing setting of adding 1's and 2's has a similar problem.\\n\\nThe authors need to provide (R)MSE results that show how well the method can reconstruct mixture components on average over the dataset. The only comparison is visual, and no comparators are provided.\", \"conclusions\": \"I'm actually torn on this paper. On one hand this paper seems novel and clearly contributes to the field. On the other hand, HOW MUCH contribution is not addressed experimentally, i.e. the method is not properly compared with other denoising or demixing methods, and definitely not pushed to its limits. It's hard to assess the difficulty of the denoising problem because their method does so well, and it's hard to assess the difficulty of demixing because of the lack of comparators.\", \"caveats\": \"I am knowledgeable about iterative optimization approaches to denoising and demixing, especially MCA (morphological component analysis), but *not knowledgeable about GAN-based approaches*, though I have familiarity with GANs.\\n\\n*********************\", \"update_after_author_response\": \"I think the Fashion-MNIST experiments and comparisons with ICA are many times more compelling than the original experiments. I think this is an exciting contribution to dually learning component manifolds for demixing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByecAoAqK7 | Zero-shot Dual Machine Translation | [
"Lierni Sestorain",
"Massimiliano Ciaramita",
"Christian Buck",
"Thomas Hofmann"
] | Neural Machine Translation (NMT) systems rely on large amounts of parallel data. This is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we present an approach that combines zero-shot and dual learning. The latter relies on reinforcement learning, to exploit the duality of the machine translation task, and requires only monolingual data for the target language pair. Experiments on the UN corpus show that a zero-shot dual system, trained on English-French and English-Spanish, outperforms by large margins a standard NMT system in zero-shot translation performance on Spanish-French (both directions). We also evaluate on newstest2014. These experiments show that the zero-shot dual method outperforms the LSTM-based unsupervised NMT system proposed in (Lample et al., 2018b), on the en→fr task, while on the fr→en task it outperforms both the LSTM-based and the Transformers-based unsupervised NMT systems. | [
"unsupervised",
"machine translation",
"dual learning",
"zero-shot"
] | https://openreview.net/pdf?id=ByecAoAqK7 | https://openreview.net/forum?id=ByecAoAqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJxpkVPxl4",
"BJx2tQC007",
"SyesPDp307",
"ByeKDfJ5A7",
"SkgUp-J9Rm",
"SyxLW-1cA7",
"HJeDdxy5CQ",
"BklxgQ0T2X",
"rkl6FwWq3Q",
"ryek_zaK2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544741860554,
1543590787898,
1543456611403,
1543266913000,
1543266749982,
1543266557916,
1543266415172,
1541427944010,
1541179269328,
1541161574902
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper918/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper918/Authors"
],
[
"ICLR.cc/2019/Conference/Paper918/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper918/Authors"
],
[
"ICLR.cc/2019/Conference/Paper918/Authors"
],
[
"ICLR.cc/2019/Conference/Paper918/Authors"
],
[
"ICLR.cc/2019/Conference/Paper918/Authors"
],
[
"ICLR.cc/2019/Conference/Paper918/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper918/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper918/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper is essentially an application of dual learning to multilingual NMT. The results are reasonable.\\n\\nHowever, reviewers noted that the methodological novelty is minimal, and there are not a large number of new insights to be gained from the main experiments.\\n\\nThus, I am not recommending the paper for acceptance at this time.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reasonable improvements, but novelty incremental\"}",
"{\"title\": \"Reply to Reviewer3\", \"comment\": \"Thanks for the quick reply!\\n\\n[Reply to #1 and #2] \\nWe believe that pivoting and pseudo-NMTs are a simple, yet competitive baseline. However, we are currently working on adding several additional baselines from the related work.\\nThe numbers you are referring to are not yet included in the paper since the training is still running. We indeed refer to the Dual-0 column in Table 3.\\n\\n[Reply to #3]\\nAgreed. We will add the same sets of results for both experimental settings, making sure that we are using the same setup for both.\\n\\n[Reply to #4 and #5]\\nThe reinforcement learning training hits a peak quite fast and after spending some time at this performance level, it deteriorates. That is why we are analyzing how to stabilize the RL training process so we can leverage more monolingual data and potentially improve peak performance. We are happy to add (in the appendix) a learning curve to show how the RL learning progresses. Is this the quantitative result you're looking for?\\n\\n[Reply to #6]\\nWe apologize for the confusion. We refer to the first bullet point of the reply, as we found that our latest results (26.36 and 26.04 BLEU for WMT14 en -> fr and fr -> en) exceed those obtained with Transformers from Lample et al. (2018b). That being said one would expect that all numbers could improve when using a large Transformer as the base model and we should try this experiment.\"}",
"{\"title\": \"Reply to the rebuttal\", \"comment\": \"Thank the authors for the detailed response.\\n\\n[Reply to response #1 and #2] \\nI agree to the point that simple approach can lead to good results. But, considering no related baseline algorithm is implemented, then how could we say we really need such an approach? Hope the authors can address it in the future. \\n\\nBesides, what does the \\u201c26.36 and 26.04 BLEU for WMT14 en -> fr and fr -> en,\\u201d mean in your response #1? I did not find these two numbers in the paper. Are they the column ``Dual-0\\u2019\\u2019 in Table 3?\\n\\n[Reply to response #3]\\nI think you should add the results of back-translation for MultiUN settings (do I miss anything) because as pointed by Reviewer 1, the WMT experiments in your paper is not an actual unsupervised setting. Therefore, this baseline should be verified on MultiUN.\\n\\n[Reply to response #4 and #5]\\nI do not find the quantitative results. Please attach them. Besides, why more monolingual data seems not helpful in your setting? How could you leverage more monolingual data more efficiently? \\n\\n[Reply to response #6]\\nWhat does the ``(1)\\u2019\\u2019 mean? Can you explain it clearly?\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"1. That is correct. Yet, we see the simplicity of the approach as a merit and the results convincing. We are currently working on stabilizing the RL training in order to be able to leverage bigger amounts of monolingual data and our current results (still with the same 1M monolingual sentences) are at 26.36 and 26.04 BLEU for WMT14 en -> fr and fr -> en, which shows the potential of the approach.\\n\\n2. Thanks for the pointers! Even though we cannot implement them in this limited time, we will keep them in mind to include it in our research.\\n\\n3. We actually did implement them and they were inline with the results in the WMT setting. \\n\\n4. We have experimented with more monolingual data (only on the UN data experiments so far) but it does not seem to help as the biggest gains happen at the beginning of the RL training. We are still looking into how to make better use of it.\\n\\n5. We have thought about these experiments as well. We definitely expect to obtain a better initial baseline model by using more parallel sentences which, we expect, would boost performance overall. The scenario with more languages also needs to be investigated further, but early conclusions in our experiments show that the performance is consistent.\\n\\n6. We don't present Transformer based experiments due to additional implementation effort, which we have yet to tackle. Although Transformer is a state-of-the-art NMT system, RNN-based models are still competitive. Moreover, we have shown in (1) that RNN-based models can beat Transformers and hope to achieve further improvements.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"1. We'd argue that our approach is unsupervised in the sense that we are training a model without data for the task we are training. From a practical perspective, having parallel data for another language pair is not always possible, as is, to a lesser extent, having monolingual data. We do agree that we should stress that our approach is zero-shot to avoid confusion with unsupervised learning.\\n\\n2. We do not pretrain the embeddings on their own; they are trained with the rest of the baseline multilingual model. That is, the zero-shot training (before RL) performs the embedding pre-training as the initialization step. This step initializes not only the embeddings but also the weights of the translation model, which could be considered a stronger variant of the unsupervised lexicon induction via word vectors. Apart from the initialization, our approach also leverages language models and back-translation to learn the zero-shot translation directions in the form of rewards.\\n\\n3. We use the bilingually aligned data for each language pair. They are approximately of size 18M, 25M and 22M for en-es, en-fr and es-fr, respectively. Each of the development and test sets contain 4000 sentences.\\n\\n4. It is indeed high. We did not tune the hyperparameters of the language model (we use the default ones for the big model in the Tensorflow RNN tutorial) since the weight that we currently use for its reward (the one presented in He et al.) is so low (0.005). However, we plan to analyze its role and we will then need to analyze all hyperparameters more in depth to obtain a better performance.\\n\\n5. For a single direction model with the same capacity, es->fr and fr->es achieve 42.50 and 44.86 respectively. These models were trained on the same 1M sentences used for RL (also using their corresponding translations). As for the pivoting experiments on the UN setup, we do not have the exact numbers but they were slightly below the Dual-0 performance.\\n\\n6. All the years that include Spanish: 2007-2013.\\n\\n7. The NewsCrawl is monolingual and thus, we cannot train a supervised NMT system on it. We did not use the News Commentary corpus, which, while more 'newsy', still differs in style. The Pseudo-NMT is the closest scenario to supervised in-domain training for this setting and we do report these numbers.\\n\\n8. We train on NVIDIA Tesla P100 which use the NVIDIA Pascal GPU architecture.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"1. That is correct. We improve a multilingual baseline model using monolingual data for the zero-shot translation. The contribution is, as you point out, incremental. Yet, we believe that the simplicity of the approach makes the results stronger. We have shown in our paper that our approach outperforms various baselines such as the zero-shot multilingual NMT system, pivoting and pseudo-NMTs. Moreover, we show that its performance is in line with similar architectures such as Lample et al. (2018b). We are still working on improving the approach and we have improved our results by 1-2 BLEU so far.\\n\\n2. Thank you for the pointer, we will take that work into account. We are aware that our contribution is not totally comparable to fully unsupervised approaches and clearly, introducing a pivot language helps the learning process. However, the approach is unsupervised in the sense that we have no data for the concrete task we're training for which is a setup that we can find for most low resource languages and we believe we should take advantage of it. We'll stress more that we are doing zero-shot to avoid confusion.\"}",
"{\"title\": \"Thanks to reviewers\", \"comment\": \"We'd like to sincerely thank all reviewers for studying our work in such depth and for their detailed and thoughtful comments, which have proven very helpful.\"}",
"{\"title\": \"Novelty is not enough. More experiments needed.\", \"review\": \"This paper can be considered as a direct application of dual learning (He et al. (2016)) to the multilingual GNMT model. The first step is to pre-train the GNMT model with parallel corpora (X, Z) and (Y, Z). The second step is to fine-tune the model with dual learning.\\n\\n1. I originally thought that the paper can formulate the multilingual translation and zero dual learning together as a joint training algorithm. However, the two steps are totally separated, thus the contribution of this paper is incremental. \\n\\n2. The paper actually used two parallel corpora. In this setting, I suggest that the author should also compare with other NMT algorithm using pivot language to bridge two zero-source languages, such as ``A Teacher-Student Framework for Zero-Resource Neural Machine Translation``. It is actually unfair to compare with the completely unsupervised NMT, because the existence of the pivot language can enrich the information between two zero-resource languages. The general unsupervised NMT is often considered as ill-posed problem. However, with parallel corpus, the uncertainty of two language alignment is greatly reduced, making it less ill-posed. The pivot language also plays the role to reduce the uncertainty.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Some nice ideas, but would benefit from a better comparison to related work\", \"review\": [\"Pros:\", \"The paper address the problem of zero-shot translation. The proposed method is essentially to bootstrap a Dual Learning process using a multilingual translation model that already has some degree of zero-shot translation capabilities. The idea is simple, but the approach improves the zero-shot translation performance of the baseline model, and seems to be better than either pivoting or training on direct but out-of-domain parallel data.\", \"The paper is mostly well written and easy to follow. There are some missing details that I've listed below.\"], \"cons\": [\"There is very little comparison to related work. For example, related work by Chen et al. [1], Gu et al. [2] and Lu et al. [3] are not cited nor compared against.\", \"Misc questions/comments:\", \"In a few places you call your approach unsupervised (e.g., in Section 3: \\\"Our method for unsupervised machine translation works as follows: (...)\\\"; Section 5.2 is named \\\"Unsupervised Performance\\\"). But your method is not unsupervised in the traditional sense, since you require lots of parallel data for the target languages, just not necessarily directly between the pair. This may be unrealistic in low-resource settings if there is not an existing suitable pivot language. It'd be more accurate to simply say \\\"zero-shot\\\" (or maybe \\\"semi-supervised\\\") in Section 3 and Section 5.2.\", \"In Section 3.1 you say that your process implements the three principles outlined in Lample et al. (2018b). However, the Initialization principle in that work refers to initializing the embeddings -- do you pretrain the word embeddings as well?\", \"In Section 4 you say that the \\\"UN corpus is of sufficient size\\\". Please mention what the size is.\", \"In Section 4.2, you mention that you set dropout to p=0.65 when training your language model -- this is very high! Did you tune this? Does your language model overfit very badly with lower dropout values?\", \"In Section 5.2, what is the BLEU of an NMT system trained on the es->fr data (i.e., what is the upper bound)? What is the performance of a pivoting model?\", \"In Section 5.3, you say you use \\\"WMT News Crawl, all years.\\\" Please indicate which years explicitly.\", \"In Table 3, what is the performance of a supervised NMT system trained on 1M en-fr sentences of the NC data? Knowing that would help clarify the impact of the domain mismatch.\", \"minor comment: in Section 4.3 you say that you trained on Tesla-P100, but do you mean Pascal P100 or Tesla V100?\", \"[1] Chen et al.: http://aclweb.org/anthology/P17-1176\", \"[2] Gu et al.: http://aclweb.org/anthology/N18-1032\", \"[3] Lu et al.: http://www.statmt.org/wmt18/pdf/WMT009.pdf\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Limited novelty; Experiments are not enough\", \"review\": \"[Summary]\\nThis paper proposed an algorithm for zero-shot translation by using both dual learning (He et al, 2016) and multi-lingual neural machine translation (Johnson et al 2016). Specially, a multilingual model is first trained following (Johnson et al 2016) and then the dual learning (He et al 2016) is applied to the pre-trained model using monolingual data only. Experiments on MultiUN and WMT are carried out to verify the proposed algorithm. \\n\\n[Details]\\n1.\\tThe idea is incremental and the novelty is limited. It is a simple combination of dual learning and multilingual NMT. \\n\\n2.\\tMany important multilingual baselines are missing. [ref1, ref2]. At least one of the related methods should be implemented for comparison.\\n\\n3.\\tThe Pseudo NMT in Table 3 should also be implemented as a baseline for MultiUN experiments for in-domain verification.\\n\\n4.\\tA recent paper [ref3] proves that using more monolingual data will be helpful for NMT training. What if using more monolingual data in your system? I think using $1M$ monolingual data is far from enough.\\n\\n5.\\tWhat if using more bilingual sentence pairs? Will the results be boosted? What if we use more language pairs?\\n\\n6.\\tTransformer (Vaswani et al. 2017) is the state-of-the-art NMT system. At least one of the tasks should be implemented using the strong baseline.\\n\\n[Pros] (+) A first attempt of dual learning and multiple languages; (+) Easy to follow.\\n[Cons] (-) Limited novelty; (-) Experiments are not enough.\\n\\nReferences\\n[ref1] Firat, Orhan, et al. \\\"Zero-resource translation with multi-lingual neural machine translation.\\\" EMNLP (2016).\\n[ref2] Ren, Shuo, et al. \\\"Triangular Architecture for Rare Language Translation.\\\" ACL (2018).\\n[ref3] Edunov, Sergey, et al. \\\"Understanding back-translation at scale.\\\"EMNLP (2018). \\n\\nI am open to be convinced.\\n\\n==== Post Rebuttal ===\\nThanks the authors for the response. I still have concerns about this work. Please refer to my comments \\\"Reply to the rebuttal\\\". Therefore, I keep my score as 5.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1lYRjC9F7 | Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset | [
"Curtis Hawthorne",
"Andriy Stasyuk",
"Adam Roberts",
"Ian Simon",
"Cheng-Zhi Anna Huang",
"Sander Dieleman",
"Erich Elsen",
"Jesse Engel",
"Douglas Eck"
] | Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales. Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments. Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude (~0.1 ms to ~100 s), a process we call Wave2Midi2Wave. This large advance in the state of the art is enabled by our release of the new MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms. The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music. | [
"music",
"piano transcription",
"transformer",
"wavnet",
"audio synthesis",
"dataset",
"midi"
] | https://openreview.net/pdf?id=r1lYRjC9F7 | https://openreview.net/forum?id=r1lYRjC9F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xoH6bYxV",
"SklV7Ix9aX",
"rJllkIgcTQ",
"H1eFiHgqTQ",
"rklS4Hl5am",
"BJl9uwaQ67",
"B1efz6dgpX",
"S1gnFxZjnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545309507117,
1542223388120,
1542223319846,
1542223264538,
1542223148906,
1541818226100,
1541602569694,
1541243012203
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper917/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper917/Authors"
],
[
"ICLR.cc/2019/Conference/Paper917/Authors"
],
[
"ICLR.cc/2019/Conference/Paper917/Authors"
],
[
"ICLR.cc/2019/Conference/Paper917/Authors"
],
[
"ICLR.cc/2019/Conference/Paper917/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper917/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper917/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree that the presented audio data augmentation is very interesting, well presented, and clearly advancing the state of the art in the field. The authors\\u2019 rebuttal clarified the remaining questions by the reviewers. All reviewers recommend strong acceptance (oral presentation) at ICLR. I would like to recommend this paper for oral presentation due to a number of reasons including the importance of the problem addressed (data augmentation is the only way forward in cases where we do not have enough of training data), the novelty and innovativeness of the model, and the clarity of the paper. The work will be of interest to the widest audience beyond ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"metareview\"}",
"{\"title\": \"Response for AnonReviewer2\", \"comment\": \"Thank you for your review and comments.\\n\\n* Eq (1) this is really the joint distribution between audio and notes, not the marginal of audio\\n\\nThank you for catching the mistake. We have updated the equation to include the marginalizing integral through the expectation over notes: P(audio) = E_{notes} [ P(audio|notes) ]\\n\\n* Table 4: What do precision, recall, and f1 score mean for notes with velocity? How close does the system have to be to the velocity to get it right?\\n\\nWe use the mir_eval library for calculating those metrics, and a full description is available here: https://craffel.github.io/mir_eval/#module-mir_eval.transcription_velocity\\n\\nIt implements the evaluation procedure described in Hawthorne et al. (2018).\\n\\nWe have updated the caption for Table 4 to make this more clear.\\n\\n* Table 6: NLL presumably stands for Negative Log Likelihood, but this should be made explicitly\\n\\nThanks, updated the table caption to make this more clear.\\n\\n* Figure 2: Are the error bars the standard deviation of the mean or the standard error of the mean?\\n\\nWe are calculating the standard deviation of the means (we did not divide by the square root of the sample size).\"}",
"{\"title\": \"Response for AnonReviewer1\", \"comment\": \"Thank you for your review and comments.\\n\\n* MIDI itself is a rich language with ability to drive the generation of music using rich sets of customizable sound fonts. Given this, it is not clear that it is necessary to reproduce this function using neural network generation of sounds.\\n\\nSynthesizing realistic audio from symbolic representations is a complex task. While there are many good sounding piano synthesizers, many of them fall well short of producing audio that would be a convincing substitute for a real piano recording. For example, the SoundFont technology referenced can only play particular samples for particular notes (with some simple effects processing). It is incapable of modeling complex physical interactions between different parts of the piano, such as sympathetic resonance, and is limited by the quality and variety of samples included with a particular font (for example, the ability to play longer notes is often achieved by simply looping over a section of a sample). That said, there are some piano synthesis systems that can do a good job of modeling these types of interactions, though they are not as widely available as SoundFonts and are difficult to create. For a good overview of the difficulties and successes in piano modeling, see the paper we cited by Bank et al. \\n\\nOur WaveNet model is able to learn to generate realistic-sounding music with no information other than audio recordings of piano performances, information which would be insufficient for the creation of a SoundFont or physics-informed model. The \\u201cTranscribed\\u201d WaveNet model clearly demonstrates this because we use only the audio from the dataset and we derive training labels by using our transcription model. By training on the audio directly, we implicitly model the complex physical interactions of the instrument, unlike a SoundFont.\\n\\nIt is also interesting to note that the WaveNet model recreates non-piano subtleties of the recording, including the response of the room, breathing of the player, and shuffling of listeners in their seats. These results are encouraging and indicate that such methods could also capture the sound of more dynamic instruments (such as string and wind instruments) for which convincing synthesis/sampling methods lag behind piano. To clarify this point, we have added a paragraph to the Piano Synthesis section of the paper.\\n\\nWe have also updated the paper to further demonstrate our ability to control the output sound by adding year conditioning. Different competition years within the MAESTRO dataset had different microphone placements (e.g., near the piano or farther back in the room), and by conditioning on year, we can control whether the output sounds like a close mic recording or one with more room noise. We present several audio examples in the online supplement: https://goo.gl/6RzHZM\\n\\n* The further limitation of the proposed approach seems to be the challenge of decoding raw music audio with chords, multiple overlaid notes or multiple tracks. MIDI as a representation can support multiple tracks, so it is not necessarily the bottleneck.\\n\\nWe chose to model the music with full polyphony for a couple reasons. One is that, as described above, there are complex interactions in the physical piano and recording environment that would not be reproducible by rending notes separately and then layering them into a single output. 
Another is that the training data is presented as a single MIDI stream and the audio is not easily separated into multiple tracks.\\n\\n* How much does the data augmentation (audio augmentation) help?\\n\\nWe have added a table showing the differences between training with and without audio augmentation. In the process of analyzing these results, we realized that audio augmentation helps significantly when evaluating on the MAPS dataset (likely because the model is more robust to differences in recording environment and piano qualities), it actually incurs a slight penalty when evaluating on the MAESTRO test set. We have updated the paper with a discussion of these differences.\"}",
"{\"title\": \"Response for AnonReviewer3\", \"comment\": \"Thank you for your review and comments.\\n\\n* Is MAPS actually all produced via sequencer? Having worked with this data I can almost swear that at least a portion of it (in particular, the data used here for test) sounds like live piano performance captured on Disklavier. Possibly I'm mistaken, but this is worth a double check.\\n\\nAccording to the PDF file that accompanies the MAPS dataset (\\u201cMAPS - A piano database for multipitch estimation and automatic transcription of music\\u201d): \\u201cThese high quality files have been carefully hand-written in order to obtain a kind of musical interpretation as a MIDI file.\\u201d We have updated the citation to point to this paper specifically to make things more clear. More information about the process is available on the website that contains the source MIDI files for MAPS: http://www.piano-midi.de/technic.htm\\n\\n* Referring to the triple of models as an auto-encoder makes me slightly uncomfortable given that they are all trained independently, directly from supervised data. \\n\\nThis is a very reasonable point, because there are no learned feature vectors in the latent representation (they come from labels). We have updated the text to instead refer to the model as a \\u201cgenerative model with a discrete latent code of musical notes\\u201d. We have kept the encoder/decoder/prior notation because it still seems appropriate. \\n\\n* The MAESTRO-T results are less interesting than they might appear at first glance given that the transcriptions are from train. The authors do clearly acknowledge this, pointing out that val and test transcription accuracies were near train accuracy. But maybe that same argument could be used to support that the pure MAESTRO results are themselves generalizable, allowing the authors to simplify slightly by removing MAESTRO-T altogether. In short, I'm not sure MAESTRO-T results offer much over MAESTRO results, and could therefore could be omitted. \\n\\nOur goal with the MAESTRO-T dataset was to clearly demonstrate that both the language modeling tasks (Music Transformer) and audio synthesis (WaveNet) can produce compelling results without having access to ground truth labels. We agree that using the train dataset does somewhat diminish this demonstration, but argue that it does more clearly demonstrate the usefulness of the \\u201cWave2Midi2Wave\\u201d process than just using ground truth labels. In future work, we plan to expand our use of these models to datasets that do not have ground truth labels. We have added to the conclusion to clarify this point.\"}",
"{\"title\": \"Update\", \"comment\": \"Thank you to all reviewers for your careful review and comments on the paper. We will address specific questions in responses to particular reviews, but we also wanted to highlight some general updates we have made since the initial submission of the paper:\\n\\nOur transcription results have improved (Note w/ offset F1 score on MAPS configuration 2 test went from 64.03 to 66.33) due to two modifications:\\n* We added an offset detection head to the model, inspired by Kelz et al. (2018).\\n* We trained the transcription for more steps (670k instead of 178k).\\n\\nOur synthesis results have improved because we switched to using a larger receptive field for the Piano Synthesis WaveNet model (6 instead of 3 sequential stacks).\\n\\nIn order to more accurately compare our WaveNet models, we also trained an unconditioned WaveNet model trained only with the audio from the combined MAESTRO training/validation splits with no conditioning signal.\", \"we_improved_our_listening_study_by\": \"* Rerunning it with the improved WaveNet model\\n* Switching to 20-second samples instead of 10-second samples\\n* Clarifying our question to ask the raters which clip they thought sounded more like a recording of somebody playing a musical piece on a real piano.\\n\\nThe study results now show that there is not a statistically significant difference in participant ratings between real recordings and samples from the WaveNet Ground/Test and WaveNet Transcribed/Test models.\\n\\nTo better control the timbre of synthesis output, we implemented year conditioning, which can produce outputs that mimic the microphone placement of the different competition years in the dataset.\\n\\nFinally, we decided to name the process of transcription, MIDI manipulation, and then synthesis Wave2Midi2Wave.\"}",
"{\"title\": \"Large dataset of parallel MIDI/Audio enables better piano music transcription, synthesis, and generation\", \"review\": \"This paper describes a new large scale dataset of aligned MIDI and audio from real piano performances and presents experiments using several existing state-of-the-art models for transcription, synthesis, and generation. As a result of the new dataset being nearly an order of magnitude larger than existing resources, each component model (with some additional tuning to increase capacity) yields impressive results, outperforming the current state-of-the-art on each component task.\\nOverall, while the modeling advances here are small if any, I think this paper represents a solid case study in collecting valuble supervised data to push a set of tasks forward. The engineering is carefully done, well-motivated, and clearly described. The results are impressive on all three tasks. Finally, if the modeling ideas here do not, the dataset itself will go on to influence and support this sub-field for years to come. \\nComments / questions:\\n-Is MAPS actually all produced via sequencer? Having worked with this data I can almost swear that at least a portion of it (in particular, the data used here for test) sounds like live piano performance captured on Disklavier. Possibly I'm mistaken, but this is worth a double check.\\n-Refering to the triple of models as an auto-encoder makes me slightly uncomfortable given that they are all trained independently, directly from supervised data. \\n-The MAESTRO-T results are less interesting than they might appear at first glance given that the transcriptions are from train. The authors do clearly acknowledge this, pointing out that val and test transcription accuracies were near train accuracy. But maybe that same argument could be used to support that the pure MAESTRO results are themselves generalizable, allowing the authors to simplify slightly by removing MAESTRO-T altogether. In short, I'm not sure MAESTRO-T results offer much over MAESTRO results, and could therefore could be omitted.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Learning to generate piano music via MIDI layer\", \"review\": \"The paper addresses the challenge of using neural networks to generate original and expressive piano music. The available techniques today for audio or music generation are not able to sufficient handle the many levels at which music needs to modeled. The result is that while individual music sounds (or notes) can be generated at one level using tools like WaveNet, they don't come together to create a coherent work of music at the higher level. The paper proposes to address this problem by imposing a MIDI representation (piano roll) in the neural modeling of music audio that serves as an intermediate (and interpretable) representation between the analysis (music audio -> MIDI) and synthesis (MIDI -> music audio) in the pipeline of piano music generation. In order to develop and validate the proposed learning architecture, the authors have created a large data set of aligned piano music (raw audio along with MIDI representation). Using this data set for training, validation and test, the paper reports on listening tests that showed slightly less favorable results for the generated music. A few questions and comments are as follows. MIDI itself is a rich language with ability to drive the generation of music using rich sets of customizable sound fonts. Given this, it is not clear that it is necessary to reproduce this function using neural network generation of sounds. The further limitation of the proposed approach seems to be the challenge of decoding raw music audio with chords, multiple overlayed notes or multiple tracks. MIDI as a representation can support multiple tracks, so it is not necessarily the bottleneck. How much does the data augmentation (audio augmentation) help?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Put three state of the art models together and get impressive results for modeling piano music.\", \"review\": \"This paper combines state of the art models for piano transcription, symbolic music synthesis, and waveform generation all using a shared piano-roll representation. It also introduces a new dataset of 172 hours of aligned MIDI and audio from real performances recorded on Yamaha Disklavier pianos in the context of the piano-e-competition.\\n\\nBy using this shared representation and this dataset, it is able to expand the amount of time that it can coherently model music from a few seconds to a minute, necessary for truly modeling entire musical pieces.\\n\\nTraining an existing state of the art transcription model on this data improves performance on a standard benchmark by several percentage points (depending on the specific metric used).\\n\\nListening test results show that people still prefer the real recordings a plurality of the time, but that the syntheses are selected over them a fair amount. One thing that is clear from the audio examples is that the different systems produce output with different equalization levels, which may lead to some of the listening results. If some sort of automatic mastering were done to the outputs this might be avoided.\\n\\nWhile the novelty of the individual algorithms is relatively meager, their combination is very synergistic and makes a significant contribution to the field. Piano music modeling is a long-standing problem that the current paper has made significant progress towards solving.\\n\\nThe paper is very well written, but there are a few minor issues:\\n* Eq (1) this is really the joint distribution between audio and notes, not the marginal of audio\\n* Table 4: What do precision, recall, and f1 score mean for notes with velocity? How close does the system have to be to the velocity to get it right?\\n* Table 6: NLL presumably stands for Negative Log Likelihood, but this should be made explicity\\n* Figure 2: Are the error bars the standard deviation of the mean or the standard error of the mean?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJeKCi0qYX | MILE: A Multi-Level Framework for Scalable Graph Embedding | [
"Jiongqian Liang",
"Saket Gurukar",
"Srinivasan Parthasarathy"
] | Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a novel graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while also often generating embeddings of better quality for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. | [
"Network Embedding",
"Graph Convolutional Networks",
"Deep Learning"
] | https://openreview.net/pdf?id=HJeKCi0qYX | https://openreview.net/forum?id=HJeKCi0qYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkejuntglV",
"rkldvjVFA7",
"HkxLe0BtT7",
"S1xdcdBKam",
"Hkll3Drtp7",
"S1eTTt4tp7",
"H1g_pHZi27",
"BJldyjav2X",
"ByxAk8WMnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752242973,
1543224160153,
1542180334322,
1542178959836,
1542178728201,
1542175172555,
1541244351864,
1541032671610,
1540654565861
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper916/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper916/Authors"
],
[
"ICLR.cc/2019/Conference/Paper916/Authors"
],
[
"ICLR.cc/2019/Conference/Paper916/Authors"
],
[
"ICLR.cc/2019/Conference/Paper916/Authors"
],
[
"ICLR.cc/2019/Conference/Paper916/Authors"
],
[
"ICLR.cc/2019/Conference/Paper916/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper916/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper916/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Significant spread of scores across the reviewers and unfortunately not much discussion despite prompts from the area chair and the authors. The most positive reviewer is the least confident one. Very close to the decision boundary but after careful consideration by the senior PCs just below the acceptance threshold. There is significant literature already on this topic. The \\\"thought delta\\\" created by this paper and the empirical results are also not sufficient for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"Revision Notes\", \"comment\": [\"We thank the reviewers for providing insightful reviews. We present below the main changes to the document. Every change is a response to the detailed reviews we received. Specifically we:\", \"Replaced Table 2 (results on selected coarsening levels, previously in the main body) with Figure 3 (results with all varying coarsening levels; previously in Appendix) in response to multiple reviewer questions. The table is now in Appendix and Figure 3 is now in the main body. Blended \\u201cImpact of varying coarsening levels on MILE\\u201d (previously in the Appendix A.5) in the main body (Sec 5.2).\", \"Fixed typo error in Figure 2 as pointed out by the reviewer.\", \"Updated Figure 4 (added results for m=0 and m=2) and Sec 5.4 in response to reviewer comments and edited associated description.\", \"Added related literature on how to extend our ideas to the directed graph (Sec. 3 and Appendix A.9) in response to reviewer comments.\", \"Added \\\"Discussion on reusing \\\\theta\\\" in A.7 in Appendix in response to reviewer comments.\", \"Added \\\"Discussion on the choice of embedding methods\\\" in A.8 in Appendix in response to reviewer comments.\", \"Added \\\"Discussion on extending MILE to directed graphs\\\" in A.9 in Appendix in response to reviewer comments.\", \"Added \\\"Discussion on the effectiveness of SEM\\\" in A.10 in Appendix in response to reviewer comments.\", \"Fixed minor grammatical errors.\"]}",
"{\"title\": \"Acknowledging the reviews and our responses\", \"comment\": \"We thank the reviewer for the review. Please see our responses in detail below.\\n\\n1) \\u201csome claims in the papers are wrong according to existing literatures\\u201d\\n\\n**Response**: \\nWe assume the reviewer is referring to the LINE comparison, please see the detailed response below. \\n\\n----------------------------------------------------\\n2) \\u201cThe reasons that why the method works need to be better explained, which can significantly (improve) the quality of the paper and its impact in the future.\\u201d\\n\\n**Response**: \\nWe conduct a detailed drilldown study which due to lack of space is reported in the Appendix (see Table 5). This drilldown study offers some empirical reasons why we picked the design choices we used which match the intuition described in the main paper. \\n\\n----------------------------------------------------\\n3) \\\"However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive\\\". This is not TRUE! In the paper of LINE (Tang et al. 2015). It shows the LINE model can easily scale up to networks with one million nodes with a few hours. \\n\\n**Response**: \\nWe use the word \\u201crarely\\u201d in the quote above. We feel this statement is still true (outside of LINE and a couple of other papers very few papers scales to large datasets). Our effort can scale both methods like LINE as well as methods that do not scale particularly well.\\n\\nA few minor notes -- The paper by Tang et al reported results on a 1TB RAM machine -- we used a 128GB RAM machine. We also report results on a much larger dataset Yelp. For all the results, we report the wallclock time of the entire execution.\\n\\nFinally, if the reviewer has a specific suggestion on how to rephrase the above statement we are happy to accommodate.\\n\\n----------------------------------------------------\\n4) \\\"The authors use Equation (7) to learn the parameters of the graph convolutional neural network. I am really surprised that this method works. Especially the learned parameters are shared across different layers. \\\"\\n\\n**Response**:\\n Similar to GCN, \\\\Theta is a matrix of filter parameters and is of size dxd (where d is the embedding dimensionality). Eq. (4) in this paper defines how the embeddings are propagated during embedding refinements, parameterized by \\\\Theta. Intuitively, \\\\Theta defines how different embedding dimensions interact with each other during the embedding propagation. This interaction is dependent on graph structure and base embedding method, which can be learned from the coarsest level. \\n\\nIdeally, we would like to learn this parameter \\\\Theta on every two consecutive levels. But this is not practical since this could be expensive as the graph get more fine-grained (and defeat our purpose of scaling up graph embedding). This trick of \\u201csharing\\u201d parameters across different levels is the trade-off between efficiency and effectiveness. To some extent, it is similar to the original GCN [1], where the authors share the same filter parameters \\\\Theta over the whole graph (as opposed to using different \\\\Theta for different nodes; see Eq (6) and (7) in [1]). -- We did not include these details due to the limit of space but would be happy to add them in the final version. \\n\\nMoreover, we empirically found this works good enough and much more efficient. 
Table 5 shows that if we do not share \\\\Theta values and use random values for \\\\Theta during refinements, the quality of embedding is much worse (see baseline MILE-untr). We thank the reviewer for this question and we will better explain this in the revised version of the article.\\n\\n[1] Kipf, Thomas N., and Max Welling. \\\"Semi-supervised classification with graph convolutional networks.\\\" ICLR (2017).\\n\\n----------------------------------------------------\\n5) \\\"Have you tried and compared different approaches of graph coarsening?\\\"\\n\\n**Response**: \\nYes -- we did try several ideas -- we included some of these results in Table 5. \\n\\n----------------------------------------------------\\n6). \\\"In Figure 2. (a), according to Equation (1), in the second step, the weight of the edge between A and DE should be 2/sqrt(3)*sqrt(4)?\\\"\\n\\n**Response**: \\nThanks for catching this typo -- it should be in fact 2/(sqrt(4)*sqrt(2)).\", \"reasoning\": \"The degree of node A is D(A) = 4. \\nThe degree of node DE is D(DE) = 2.\\nSo this should be 2/(sqrt(4)*sqrt(2)).\"}",
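To make the Eq. (1) normalization in the answer above concrete, here is a minimal numpy sketch of the corrected computation (the degrees and raw weight come from the Figure 2(a) example discussed above; the code itself is our own illustration, not the MILE implementation):

```python
import numpy as np

# Figure 2(a) example: node A (degree 4) is connected to the merged
# super-node DE (degree 2) by an edge of raw weight 2.
raw_weight = 2.0
deg_A, deg_DE = 4.0, 2.0

# Eq. (1)-style symmetric normalization: w_ij / (sqrt(D(i)) * sqrt(D(j)))
normalized = raw_weight / (np.sqrt(deg_A) * np.sqrt(deg_DE))
print(normalized)  # 2 / (sqrt(4) * sqrt(2)) ~= 0.707, the corrected value
```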
"{\"title\": \"Thank the Reviewer and Our Responses (Part-1)\", \"comment\": \"We thank the reviewer for providing detailed comments. I tried our best to answer the questions below.\\n----------------------------------------------------\\n1) \\u201cFirst, in many places, the authors claim that the embedding quality of the proposed method is improved. For example, the last sentence of Section 1, and \\\"MILE improves quality\\\" paragraph on Page 7. However, the experimental results fail to support this. As the proposed method is for the large-scale graph, let's focus on the results of YouTube dataset and Yelp dataset first. For Youtube dataset ((d) of Table 2), when m is set to be 8, for all the cases, the performance drops. For Yelp dataset (Figure 3), the authors do not provide Micro-f1 for the original graph (m = 0) or m = 1, 2, so it is hard or impossible to demonstrate that the quality of the proposed method is still good. \\u201d\\n\\n**Response**: \\nWe do observe MILE improves quality on both YouTube and Yelp.\\n\\nRegarding YouTube results in Table 2(d), with m=6, we can see some nice quality gain and huge speedups (on DeepWalk, Node2Vec, and LINE) -- comparing to m=0 (i.e., w/o MILE). Of course, as m increases, the quality could drop. What we want to show here is that MILE could push even further on the speedup side with little loss of quality. But if the quality is the first consideration, we would suggest using a smaller coarsening level (e.g. m=6) where both quality and efficiency gain can be achieved. Please note again that Figure 4 in the Appendix reports results for varying values of m across all methods on all datasets included in Table 2. Note that some results of original GraRep and NetMF methods are missing. This is because these methods are memory-intense and run out of memory on our machine (128GB RAM).\\n\\nRegarding Yelp results in Figure 3, we did not report the performance of original embedding methods (m=0) since these methods either take a substantial amount of time or requires too much memory. However, we just recently finished running LINE and DeepWalk (m=0, i.e. w/o MILE) on Yelp. Our results show Micro-F1 on Yelp with no coarsening (m=0) of is 0.625 for LINE and 0.640 for DW, and they all take more than 80 hours. At m=4 the micro-F1 improves to 0.642 (LINE) and 0.643 (DW) -- it stays relatively constant at this micro-F1 till about m=8. From m=8 to m=22 they dip slightly below 0.64. Note that even at m=10 they outperform the quality we achieve at m=0 quite significantly (0.639 vs. 0.625 for LINE; 0.643 vs 0.640 for DW). The above result is consistent with the results on other datasets, where for smaller values of m, quality improves but after a point there is a tradeoff between quality and speed (Figure 4 in Appendix in original submission makes this point). We will include these results in a revised version of the paper.\\n\\n\\n----------------------------------------------------\\n2) \\\"Second, the comparison with existing methods is not sufficient.\\\"\\n\\n**Response**:\\nWe compare across 5 methods and across 5 datasets over a range of settings both in the main paper and in the Appendix. Please also note that in the drilldown experiment in the Appendix we also defend various design choices. \\n\\n----------------------------------------------------\\n3) \\\"For the most important Yelp dataset (as this dataset fits the motivation scenario (large-scale graph) of this submission), the authors fail to report any comparison. 
Thus it might not be weak to demonstrate the benefit of the proposed method.\\\"\\n\\n**Response**: \\nWe want to kindly remind the reviewer that we report both Micro-F1 comparison and runtime comparison on all five methods evaluated within the MILE framework (see both parts of Fig 3). We also plan to add a few more updated results of the original LINE and DeepWalk (m=0) as mentioned above.\\n\\n----------------------------------------------------\\n4) \\\"Third, some experiment details are missing. For example, how the authors compute the running time of the proposed method? All the three stages are included? How the authors implement the existing methods? Are these implementations good enough to ensure a fair comparison? \\\"\\n\\n**Response**: \\nAll of these questions are addressed in the paper but we repeat here for the reviewer\\u2019s benefit. We always compare end-to-end wallclock time of all methods (so for all the MILE variants it includes the computation time of all three stages, discussed in Appendix A.1.5). Existing methods are publicly available implementations from the authors\\u2019 GitHub repository when available (pointed out in Appendix A.1.4). Keep in mind in each case we are comparing each method with itself, i.e. with and without MILE (at various coarsening levels). MILE is able to scale all of them individually while in many cases also improving quality. Again, please see Figure 4 in the Appendix as well as the results shared above.\"}",
"{\"title\": \"Thank the Reviewer and Our Responses (Part-2)\", \"comment\": \"5) \\\"On page 2, the authors mention that the proposed method \\\"can be easily extended to directed graph\\\". However, based on my understanding, directly graph will affect both the graph coarsening and embedding refining steps, and it seems not so easy to extend. Do the authors have the solution and experiments for directed graph? It would be interesting to see such results, which enlarges the application scope of the proposed method.\\\"\\n\\n**Response**: \\nNote that as pointed out by Chung et al. [1] one can construct random-walk Laplacians for a directed graph thus incorporating approaches like NetMF to accommodate such solutions. Another simple solution is to symmetrize the graph while accounting for directionality. Once the graph is symmetrized, any of the embedding strategies we discuss can be employed within the MILE framework (including the coarsening technique). There are many ideas for symmetrization of directed graphs (see for example work described by Gleich in 2006 [2] or Satuluri and Parthasarathy in 2011 [3]). \\n\\n[1] Chung, Fan. \\\"Laplacians and the Cheeger inequality for directed graphs.\\\" Annals of Combinatorics 9, no. 1 (2005): 1-19.\\n[2] David Gleich, Hierarchical directed spectral graph partitioning, Information Networks 2006.\\n[3] Venu Satuluri and Srinivasan Parthasarathy, Symmetrizations for clustering directed graphs, EDBT 2011.\\n\\n----------------------------------------------------\\n6) \\\"The toy example on page 3 is very clear. However, for real-world graphs, does the proposed graph coarsening work well? For example, one property the proposed method utilizes is \\\"structurally equivalent\\\". What is the percentage of the nodes that can have such property for real-world graphs?\\\"\\n\\n**Response**: \\nWe included results against strawman coarsening strategies in Table 5 of MILE Drilldown in the Appendix-- see the performance of MILE vs MILE-rm. With regards to how often the structurally equivalent matching (SEM) is effective, this is highly dependent on graph structure but in general 5% ~ 20% of nodes are structurally equivalent (most of which are low-degree nodes). For example, during the first level of coarsening, YouTube has 172,906 nodes (or 86,453 pairs) out of 1,134,890 nodes that are found to be SEM (so ~15%); Yelp has 875,236 nodes (or 437,618 pairs) out of 8,938,630 nodes are SEM (so ~10%). In fact, more nodes are involved in SEM as SEM is run iteratively at each coarsening level. \\n\\n----------------------------------------------------\\n7) \\\"Although the authors claim that the proposed method has great efficiency while the embedding quality is comparable good or even better than the existing methods, I think that there is an efficiency-quality trade-off based on the experimental results in this submission. \\\"\\n\\n**Response**: \\nWe have addressed this comment above. Again, we kindly remind the reviewer on our results in Figure 4 and the analysis around it in the Appendix. To reiterate, we always see an improvement in quality using MILE for smaller values of m as compared to running the embedding method on the original graph (e.g. small m vs. m=0). After a point, there is an efficiency-quality tradeoff as m increases. This is clearly shown in Figure 4 by comparing m=1 (w/ MILE) vs. m=0 (w/o MILE). \\n\\nFor Yelp at m=0 the micro-F1 is 0.625 and it takes over 80 hours to complete (LINE). 
For m=22 we are obtaining a micro-F1 of 0.635 and it takes about 2.6 hours to complete. So this is a speedup of over 30 and an improvement of the micro-F1 score. On the other hand, at m=8 (which is using MILE) the speedup is about 2.5 (lower) but with an even better micro-F1 score of 0.642 (similar story on DeepWalk) -- showing nice trade-off property when using MILE but are all much better than the one without using MILE.\"}",
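As a concrete illustration of the simplest symmetrization mentioned above, here is a small scipy sketch of the naive A + A^T scheme (the degree-discounted and bibliographic-coupling symmetrizations of [2, 3] are more involved; this sketch is our own illustration, not code from MILE):

```python
import numpy as np
import scipy.sparse as sp

def symmetrize(adj: sp.csr_matrix) -> sp.csr_matrix:
    """Naive symmetrization A + A^T: keeps an undirected edge wherever either
    direction exists, summing the weights when both directions are present."""
    return (adj + adj.T).tocsr()

# Toy directed graph on 3 nodes with edges 0 -> 1 and 1 -> 2.
rows, cols = np.array([0, 1]), np.array([1, 2])
A = sp.csr_matrix((np.ones(2), (rows, cols)), shape=(3, 3))
print(symmetrize(A).toarray())  # undirected edges {0,1} and {1,2}
```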
"{\"title\": \"Appreciate the comments and we added some clarifications\", \"comment\": \"We thank the reviewer for the insightful comments. We wish to point out that we chose the base embedding methods as they are either recently proposed (NetMF introduced in 2018, and GraRep) or are widely used (DeepWalk, Node2Vec, LINE etc.). By showing the performance gain of using MILE on top of these methods, we want to ensure the contribution of this work is of broad interest to the community.\", \"we_also_want_to_reiterate_that_these_methods_are_quite_different_in_nature\": \"* DeepWalk (DW) and Node2vec (N2V) rely on the use of random walks for latent representation of features.\\n* LINE learns an embedding that directly optimizes a carefully constructed objective function that preserves both first/second order proximity among nodes in the embedding space.\\n* GraRep constructs multiple objective matrices based on high orders of random walk laplacians, factories each objective matrix to generate embeddings and then concatenates the generated embeddings to form final embedding.\\n* NetMF constructs an objective matrix based on random walk Laplacian and factorizes the objective matrix in order to generate the embeddings.\\n\\nIndeed as the reviewer notes, under a few assumptions [1,2], NetMF with an appropriately constructed objective matrix has been shown to \\u201capproximate\\u201d DW, N2V and LINE allowing such be conducting implicit matrix factorization of **approximated** matrices. There are limitations to such approximations (shown in a related context by Arora et al [3]) - the most important one is the requirement of a sufficiently large embedding dimensionality. Additionally, we note that while unification is possible under such a scenario, the methods based on matrix factorization are quite different from the original methods and do place a much larger premium on space (memory consumption) - in fact this is observed by the fact we are unable to run NetMF and GraRep in many cases without incorporating them within MILE (as noted in the paper) and also in one of the other responses below. \\n\\nIn this paper, the base embedding methods are implemented using the original embedding learning algorithm (e.g. DW, N2V, Line) -- which directly are from the authors\\u2019 code. \\n\\nThat being said, we really appreciate reviewer\\u2019s suggestion of exploring MILE on other types of network embedding. As part of the future work, we will look into how MILE can be used in the case of attributed network embedding. In another response below we also discuss how it can be incorporated in a directed graph setting.\\n\\n[1] Qiu, Jiezhong, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. \\\"Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec.\\\" In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 459-467. ACM, 2018.\\n[2] Levy, Omer, and Yoav Goldberg. \\\"Neural word embedding as implicit matrix factorization.\\\" In Advances in neural information processing systems, pp. 2177-2185. 2014.\\n[3] Arora, Sanjeev, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. \\\"A latent variable model approach to pmi-based word embeddings.\\\" Transactions of the Association for Computational Linguistics 4 (2016): 385-399.\"}",
"{\"title\": \"Overall Interesting work: clear motivation and nice performance gain\", \"review\": \"This paper proposes a multi-level embedding (MILE) framework, which can be applied on top of existing network embedding methods and helps them scale to large scale networks with faster speed. To get the backbone structure of graph, MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique, and GCN is used for the refinement of embeddings.\\n\\n[+] The paper is well-written and the idea is clearly presented.\\n[+] MILE is able to reduce computational cost while achieving comparable, or sometimes even better embedding quality. \\n[+] MILE is general enough to apply to different underlying embedding strategies.\\n[-] Most of the baseline methods are of similar type, since LINE, DeepWalk, node2vec and NetMF can all be unified to matrix factorization framework. There have been many new network embedding methods proposed in the past two years. It would be interesting to see how much MILE can help scale these methods.\\n\\nOverall, though there have already been hundreds of papers on network embedding in the past 2~3 years, I think this paper can be an interesting addition to this fast-growing area. Therefore, I would recommend to accept it.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Practically useful, but experiments are not convincing\", \"review\": \"In this submission, the authors propose a three-stage framework for large-scale graph embedding. The proposed method first constructs a small graph by graph coarsening, then applies any existing graph embedding method, and last refines the learned embeddings. It is useful, however, the experimental results are not convincing and cannot support the authors' claims about the proposed method.\\n\\nFirst, in many places, the authors claim that the embedding quality of the proposed method is improved. For example, the last sentence of Section 1, and \\\"MILE improves quality\\\" paragraph on Page 7. However, the experimental results fail to support this. As the proposed method is for the large-scale graph, let's focus on the results of YouTube dataset and Yelp dataset first. For Youtube dataset ((d) of Table 2), when m is set to be 8, for all the cases, the performance drops. For Yelp dataset (Figure 3), the authors do not provide Micro-f1 for the original graph (m = 0) or m = 1, 2, so it is hard or impossible to demonstrate that the quality of the proposed method is still good. \\n\\nSecond, the comparison with existing methods is not sufficient. For the most important Yelp dataset (as this dataset fits the motivation scenario (large-scale graph) of this submission), the authors fail to report any comparison. Thus it might not be weak to demonstrate the benefit of the proposed method.\\n\\nThird, some experiment details are missing. For example, how the authors compute the running time of the proposed method? All the three stages are included? How the authors implement the existing methods? Are these implementations good enough to ensure a fair comparison? \\n\\n*******\", \"some_other_questions\": \"a) On page 2, the authors mention that the proposed method \\\"can be easily extended to directed graph\\\". However, based on my understanding, directly graph will affect both the graph coarsening and embedding refining steps, and it seems not so easy to extend. Do the authors have the solution and experiments for directed graph? It would be interesting to see such results, which enlarges the application scope of the proposed method.\\n\\nb) The toy example on page 3 is very clear. However, for real-world graphs, does the proposed graph coarsening work well? For example, one property the proposed method utilizes is \\\"structurally equivalent\\\". What is the percentage of the nodes that can have such property for real-world graphs? \\n\\n********\", \"some_other_comments\": \"Generally speaking, this submission studies a very practical task. Although the authors claim that the proposed method has great efficiency while the embedding quality is comparable good or even better than the existing methods, I think that there is an efficiency-quality trade-off based on the experimental results in this submission. When m increases, the graph coarsening step causes more information loss, and thus the quality may decrease. Embedding refining step can be regarded as a procedure to reduce such information loss, but may not improve the embedding quality better than the original graph. So to me, it would be more meaningful to study such efficiency-quality trade-off for large-scale graph embedding.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea and result\", \"review\": \"This paper proposed a multi-Level framework for learning node embeddings for large-scale graphs. The author first coarsens the graphs into different levels of subgraphs. The low-level subgraphs are obtained with the node embeddings of the higher-level graphs with a graph convolutional neural network. By iteratively applying this procedure, the node embeddings of the original graphs can be obtained. Experimental results on several networks (including one network with ~10M node) prove the effective and efficiency of the proposed method over existing state-of-the-art approaches.\", \"strength\": [\"scaling up node embedding methods is a very important and practical problem\", \"experiments show that the proposed methods seems to be very effective.\"], \"weakness\": [\"the proposed method seems to be very heuristic\", \"some claims in the papers are wrong according to existing literatures\", \"Overall, the paper is well written and easy to follow. The proposed method is simple but heuristic. However, the performance seems to be quite effective according to the experiments. The reasons that why the method works need to be better explained, which can significantly the quality of the paper and its impact in the future.\"], \"details\": \"-- In the introduction part, \\\"However, such methods rarely scale to large datasets (e.g., graphs with over 1 million nodes) since they are computationally expensive and often memory intensive\\\". This is not TRUE! In the paper of LINE (Tang et al. 2015). It shows the LINE model can easily scale up to networks with one million nodes with a few hours. \\n-- The authors use Equation (7) to learn the parameters of the graph convolutional neural network. I am really surprised that this method works. Especially the learned parameters are shared across different layers. \\n-- Have you tried and compared different approaches of graph coarsening?\\n-- In Figure 2. (a), according to Equation (1), in the second step, the weight of the edge between A and DE should be 2/sqrt(3)*sqrt(4)?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HyeFAsRctQ | Verification of Non-Linear Specifications for Neural Networks | [
"Chongli Qin",
"Krishnamurthy (Dj) Dvijotham",
"Brendan O'Donoghue",
"Rudy Bunel",
"Robert Stanforth",
"Sven Gowal",
"Jonathan Uesato",
"Grzegorz Swirszcz",
"Pushmeet Kohli"
] | Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to be able to certify richer properties of neural networks. To do this we introduce the class of convex-relaxable specifications, which constitute nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier's output labels under adversarial perturbations and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method is able to effectively verify these specifications. Moreover, our evaluation exposes the failure modes in models which cannot be verified to satisfy these specifications. Thus, emphasizing the importance of training models not just to fit training data but also to be consistent with specifications. | [
"Verification",
"Convex Optimization",
"Adversarial Robustness"
] | https://openreview.net/pdf?id=HyeFAsRctQ | https://openreview.net/forum?id=HyeFAsRctQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lckxsNgE",
"BJe6BOOoRQ",
"rye8eGZ5CX",
"r1l4NRE3p7",
"HJgffAVhpX",
"ryej1AEhT7",
"H1e5Sp42TX",
"BJgDW64npX",
"HygxxT42p7",
"rJeS02Vhpm",
"rJlto34nam",
"HkxsOnE2p7",
"ryeLf5EhaX",
"ryepAig167",
"HklfxJG93Q",
"BJeG6VW537"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545019362303,
1543370821074,
1543274989900,
1542372908194,
1542372874027,
1542372835444,
1542372674192,
1542372606735,
1542372583962,
1542372557257,
1542372513180,
1542372467013,
1542371853957,
1541503956745,
1541181161597,
1541178553964
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper915/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper915/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper915/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/Authors"
],
[
"ICLR.cc/2019/Conference/Paper915/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper915/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper915/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes verification algorithms for a class of convex-relaxable specifications to evaluate the robustness of neural networks under adversarial examples.\\n\\nThe reviewers were unanimous in their vote to accept the paper. Note: the remaining score of 5 belongs to a reviewer who agreed to acceptance in the discussion.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting contribution to understanding NNs\"}",
"{\"title\": \"My concerns are addressed.\", \"comment\": \"Thanks for expanding the explanation on the high level idea of this paper. To me, these high level ideas matter much more than technical derivations or extensive experimental results. I think this paper can be accepted.\"}",
"{\"title\": \"Thanks for the updates. I am okay with this paper.\", \"comment\": \"Dear Paper915 Authors,\\n\\nThanks for clarifying my concerns and adding new materials on the toy example and scalability to the paper. I am okay with this paper now if the AC wants to accept it.\\n\\nBTW please make sure also adding detailed structures of each model evaluated to the appendix, or release source code with model specifications.\\n\\nThanks,\\nPaper915 AnonReviewer2\"}",
"{\"title\": \"(continued)\", \"comment\": \"Comment 5: \\u201cI barely found the experimental results satisfying. To find \\\"reasonable\\\" inputs to the model, authors considered perturbing points in the test set. However, I am not sure if this is a reasonable assumption [...]\\u201d\", \"answer_5\": \"We do make the assumption that for the verification task, we should be given both a pre-trained network and a held-out set to do verification on.\\n\\nIt\\u2019s true that in the ideal case, we would be able to verify for all possible inputs in the true distribution, however, in practice this is infeasible. Therefore, verification on a held-out set is considered a suitable proxy in the same way that accuracy on a validation/test data set is considered a suitable proxy and this has been a way to measure robustness in both verification and adversarial communities (see [Dvijotham et al., 2018; Bunel et al., 2017; Athalye et al., 2018; Carlini & Wagner, 2017b; Uesato et al. (2018); Madry et al., 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017; Huang et al., 2017; Ehlers, 2017; Katz et al., 2017; Weng et al., 2018; Wong & Kolter, 2018]). Both prior work and our experiments in Section 4.3 and 4.4 indicate that robustness on the test points is informative.\", \"comment_6\": \"\\u201cIt is hard to have a sense of how good the results are in Figure 1 due to lack of benchmark results.\\u201d\", \"answer_6\": \"The reason we did not include comparisons to benchmark results from literature is that, to the best of our knowledge, this is the first paper which attempts to verify non-linear specifications as presented in our experiments.\\nTo resolve the lack of existing benchmarks, we have attempted to come up with strong baseline results (blue line in Figure 1) to compare with our verification results (green line in Figure 1). The strong baselines we\\u2019ve chosen are:\\nStronger adversarial attacks by having 20 random seeds as initial states for projected gradient descent. \\nFor the pendulum, we note that we can discretize the entire input space (as it lies on the circle). By discretizing the input space into smaller subspaces to do verification on - this is as close as we can get to the true bound. Thus we can treat the exhaustive verification (blue line) as pseudo ground truth.\\n\\nOne thing we want to emphasize again is that since there are no baselines that can do better than the blue line (adversarial bound), the difference between the green and blue line gives us an accurate measure of how suboptimal our algorithm is. \\n\\nAn example, to see how good the results are, is the pendulum (third picture in Figure 2). Here, we see that at perturbation radius delta=0.01, the exhaustive verification gives exactly the same bound as our verification scheme. This means that for this perturbation radius we have essentially found the true percentage of the test set which satisfies the specification. As we increase this perturbation radius to delta=0.06 we find that the difference between the verification bound and exhaustive verification bound is 22%. We had 27000 points in our test set, this means the number of points where we are unable to prove is 5940, but for the rest (21060 points) we are either able to find an adversarial attack which is successful or a proof that specification is satisfied via verification.\", \"comment_7\": \"[The experimental results are very limited. Suggestion to run more experiments on more data sets and re-running them with more settings. 
N=2 for digit sums looks limited.]\", \"answer_7\": \"We thank the reviewer for this comment. We extended the results for the digit sum problem as suggested. We want to also respectfully note that the number of datasets considered in this paper is in line with other papers in the space of verification and adversarial robustness [Madry et al., 2017; Athalye et al., 2018; Uesato et al. (2018);Carlini & Wagner, 2017b; ;Dvijotham et al., 2018; Bunel et al., 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017; Huang et al., 2017; Ehlers, 2017; Katz et al., 2017; Weng et al., 2018; Wong & Kolter, 2018], and that the compute used to perform all experiments in this paper is already extensive (e.g. the exhaustive verification for the pendulum baseline).\\nWe have now added experimental results for the digit sum problem for N=3 and N=4 in Appendix H.1. In brief: As expected the verification bound becomes looser for larger N, since error accumulates when summing up more digits, however with increasing N performance stabilizes. We have also added Appendix I on what we call entropy specification which we referred to in reply to your first comment about significance (see comment on entropy specification).\"}",
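To make the exhaustive pendulum baseline above concrete, here is a minimal Python sketch of the discretize-and-verify loop (`verify_subregion` is a placeholder we introduce for the paper's convex-relaxation verifier, and the grid resolutions are illustrative assumptions):

```python
import numpy as np

def exhaustive_verify(verify_subregion, n_theta=360, n_vel=40):
    """Tile the circle of pendulum angles and the scaled-velocity interval
    [-1, 1] into small boxes and try to certify each one. `verify_subregion`
    must return True when the specification provably holds on the given box;
    the conjunction over all boxes is the pseudo ground truth described above."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta + 1)
    vels = np.linspace(-1.0, 1.0, n_vel + 1)
    for t_lo, t_hi in zip(thetas[:-1], thetas[1:]):
        for v_lo, v_hi in zip(vels[:-1], vels[1:]):
            if not verify_subregion(t_lo, t_hi, v_lo, v_hi):
                return False  # at least one sub-box could not be certified
    return True
```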
"{\"title\": \"(continued)\", \"comment\": \"Comment 3: The idea of generalizing verifications to a convex-relaxable set is interesting, however, applying it in general is not very clear.\", \"answer_3\": \"The framework outlined in the paper is general, however, for verification to be meaningful the bound is required to be sufficiently tight. We have hence approached this on a case by case basis, as getting the convex hull of an arbitrary set is hard (which is what would ensure the tightest bound). A trivial recipe could be given by a bounding box whose bounds are given by the upper and lower bounds of the sets, but in general this is not sufficiently tight. For sufficiently tight convex-relaxations, we need to make use of functional constraints which are specific to the function itself. There has been a lot of work in approximation algorithms (see http://www.designofapproxalgs.com/book.pdf for a general overview) which try to give provable guarantees by approximating this problem. The cases we have chosen to focus on, namely semantics; physics and downstream specifications, are ones we think are important, thus we have chosen to develop convex-relaxations for these specific specifications. In addition, we have included an extra example in Appendix I regarding entropy specifications please refer to Answer 1 for more details.\", \"comment_4\": \"[One of my main concerns is regarding the relaxation step. There is no discussion on the effects of the tightness of the relaxation [...] especially when there is a trade-off between better approximation to the verification function and tightness of the bound.]\", \"answer_4\": \"In Figure 1, we have attempted to address the tightness of the relaxation. Here, we show two bounds: adversarial bound (blue line) and verification bound (green line). One thing to make clear is that there exists no verification algorithm which can have a bound past the adversarial bound. In other words, the difference between the two bounds is a strong indicator of how tight our verification algorithm is. An example is the first plot of Figure 1. Here, the difference between the adversarial bound and the verification bound is at most only 0.9% for CIFAR 10 test set. Intuitively, this means that we failed to find a provable guarantee for only 90 points out of 10000 in the test set. For the other 19910 we are able to either find an adversarial example which violates the specification or a provable guarantee that the specification is satisfied.\\n\\nIn all cases better approximations of the verification function should give tighter bounds, we don\\u2019t expect a trade-off between the two. However, one trade-off which is important in verification is between the computational costs and the quality of the approximation of the verification function. \\n\\nAdditionally we agree that it is desirable to understand how different relaxations can affect the tightness of the algorithm. To address this we added Appendix H (Comparison of Tightness), where we compare two different relaxation techniques for the physics based specification (conservation of energy). In brief: we consider two different relaxations to the quadratic function, one using semi-definite programming (SDP) techniques and one using linear programming. We find that the SDP relaxation does give tighter bounds, but comes at additional computational costs.\"}",
"{\"title\": \"We thank the reviewer for the detailed feedback\", \"comment\": \"We thank the reviewer for the detailed feedback and criticism. We made adjustments to the paper to address all your concerns and detail the changes below. We hope the changes clarify the concerns regarding the generality of our algorithm and the requested additional experiments.\", \"comment_1\": \"[The method is limited to feed-forward neural networks with ReLU and softmax activation functions and quadratic parts (it would be better to tone down the claims in the abstract and introduction parts.)]\", \"answer_1\": \"We want to clarify that although we have demonstrated most of the results on ReLU feedforward neural networks, it is not limited to such networks. The feedforward nature is indeed required but the ReLU activation function can be replaced with arbitrary activation functions, for example tanh or sigmoid activations (please see https://arxiv.org/abs/1803.06567 for more details). We initially used the ReLU example for clarity of presentation, as a result, maybe the generality of our result is not clear. To address this we have updated Sections 3.1, 3.3 and 3.4. Specifically, we changed the equation: X_{k+1} = ReLU(W_k x_k + b_k) to X_{k+1} = g_k(W_k x_k + b_k). The only change required going from the ReLU equation to the more general equation is the way the bounds ([l_k, u_k]) are propagated through the network and the relaxations applied on the activation functions. For a more general overview of the bound propagation techniques and relaxation of arbitrary activation functions we refer to the following papers https://arxiv.org/abs/1803.06567, https://arxiv.org/pdf/1610.06940.pdf, https://arxiv.org/abs/1805.12514 .\\n\\nWe would also like to clarify that this paper provides a framework for general nonlinear specifications that are convex-relaxable. Although we presented softmax and quadratic specifications this algorithm is not limited to these two cases. To demonstrate this further, we have added Appendix I where we find a convex-relaxation to the entropy of a softmax distribution from a classifier network and use it to verify that a given network is never overly confident. In other words; we would like to verify that a threshold on the entropy of the class probabilities is never violated. The specification at hand is the following:\\nF(x, y) = E + \\\\sum_i exp(y_i)/(\\\\sum_j exp(y_j)) log(exp(y_i)/(\\\\sum_j exp(y_j))) <=0\\nwhich is a non-convex function of the network outputs.\", \"comment_2\": \"Novelty: The idea of accounting for label semantics and quadratic expressions when training a robust neural network is important and very practical. This paper introduces some nice ideas to generalize linear verification functions [...] it seems to be more limited in practice than it claims and falls short in presenting justifying experimental results.\", \"answer_2\": \"We emphasize that our method is not limited to quadratic expressions and label semantics and refer to Answer 1, above, for comments regarding the generality. Regarding your concerns wrt the novelty of the approach: as far as we are aware there is no prior paper considering the problem of verifying nonlinear specifications for neural networks. Regarding the presentation of results: We refer to Answer 6, below, for a detailed justification of our experimental procedure. Additionally we want to highlight that our verification tool was a useful diagnostic in finding the failure modes of pendulum and CIFAR10 models. 
An example is that when we are able to verify that the pendulum model satisfies energy conservation more - the long term dynamics of the model always reaches a stable equilibrium.\"}",
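For concreteness, here is a small numpy sketch that evaluates the Appendix I entropy specification at a single point (verification must bound the maximum of F over a whole input region; this only illustrates what F measures, with an example threshold E of our choosing):

```python
import numpy as np

def entropy_spec(logits, E):
    """F(x, y) = E + sum_i p_i log p_i for p = softmax(y).
    F <= 0 is equivalent to entropy(p) >= E, i.e. the classifier never
    becomes more confident than the threshold E allows."""
    z = logits - logits.max()            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return E + np.sum(p * np.log(p))

print(entropy_spec(np.array([5.0, 0.0, 0.0]), E=0.5))  # > 0: too confident here
```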
"{\"title\": \"Thanks you for the review\", \"comment\": \"From your comments it seems that there was a misunderstanding regarding the general applicability of our method. We have updated the paper and provided extensive additional explanations below (please also consider our reply to all authors). To address the comments you have made:\", \"comment_1\": \"Is it critical that the non-linear verifications need to be convex relaxable. Recently, people have observed that a lot of nonconvex optimization problems also have good local solutions. Is it true that the convex relaxable condition is only required for provable algorithm? As the neural network itself is nonconvex, constraining the specification to be convex is a little awkward to me.\", \"answer_1\": \"For verification purposes it is indeed critical that we have either the global optimum value or an upper bound on the global optimum value. Verification of neural networks tries to find a proof that the specification, F(x, y) <= 0, is satisfied for all x and y within a bounded set (https://arxiv.org/abs/1803.06567). Note that this condition is equivalent to max_{x,y} F(x,y) <= 0, thus if we have the global maximum - the problem is solved. However, to find the global optimum value is often NP-hard even for ReLU networks (https://arxiv.org/abs/1705.01320). We can try to find a lower bound to the global optimum value by doing gradient descent to maximize the value of F(x,y). This is called a falsification procedure (as explained in Section 3.1). However, even if the value found is not greater than zero this is not sufficient to give a guarantee that there exists no x and y which can violate the specification, as the value is always a lower bound to the global optimum. Thus, we are motivated to find provable upper bounds on max F(x, y), ie, a number U such that F(x, y) <= U for all x, y in the input and output domain. If this U <=0 then we have found a guarantee that the specification is never violated. In order to do this, we study convex relaxations of this problem that enable computation of provable upper bounds.\\n\\nWe also do not require the specification to be convex (for example the physics specification isn\\u2019t if Q is not a semi-definite matrix), the specification can be some complicated nonlinear function - we just require that it be convex-relaxable, which is a weaker requirement. We slightly rephrased Section 3 to make this point more obvious.\", \"comment_2\": \"The paper contains the example specification functions derived for three specific purpose, I'm wondering how broad the proposed technique could be. Say if I need my neural network to satisfy other additional properties, is there a general recipe or guideline. If not, what's the difficulty intuitively speaking?\", \"answer_2\": \"This proposed technique is capable handling all specifications which are convex-relaxable, i.e. any specification for which the set of values that (x, y, F(x, y)) can take can be bounded by a convex set. The difficulty here is always getting a tight convex set on the specification you would like to verify for. There is a lot of literature in finding tight convex sets (https://eng.uok.ac.ir/mfathi/Courses/Advanced%20Eng%20Math/Linear%20and%20Nonlinear%20Programming.pdf), we have chosen to demonstrate the generality of our framework with three specifications that we deem to be important. 
In general any convex-relaxable specification can be treated in the same manner as in the paper but, of course, finding a tight convex set should be done on a case-by-case basis. We added an additional example, going beyond quadratic constraints in Appendix I. Here we verify that a given classifier is never overly confident, in other words; we would like to verify that a threshold on the entropy of the class probabilities is never violated.\\n\\nWe would also like to emphasize that this paper is aimed to do post-hoc verification, where we consider a scenario that we are given a pre-trained neural network. Thus this is different to training your neural network to satisfy desirable properties, it is rather a safety measure before the network is put into deployment for real world applications.\", \"comment_3\": \"[The reviewer also commented on a lack of commas]\\nCould you please expand upon this point ?\"}",
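To make the falsification/verification distinction in Answer 1 concrete, here is a minimal numpy sketch of the falsification side (projected gradient ascent on the specification value; `grad_F` is a placeholder we introduce for the gradient of F through the network, and the step sizes are illustrative):

```python
import numpy as np

def falsify(grad_F, x0, radius, steps=100, lr=0.01):
    """Signed gradient ascent on F(x, f(x)) inside the L-inf ball of the given
    radius around x0. Any returned point with F > 0 is a concrete
    counterexample; failing to find one proves nothing, which is exactly why
    the verified upper bound U on max F is needed."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * np.sign(grad_F(x))               # ascent step
        x = np.clip(x, x0 - radius, x0 + radius)      # project onto the ball
    return x

# Toy demo on F(x) = x0^2 - 1: the attack climbs to the ball boundary.
x_adv = falsify(lambda x: np.array([2.0 * x[0], 0.0]), np.array([0.1, 0.0]), 0.5)
print(x_adv)  # first coordinate saturates at 0.1 + 0.5 = 0.6; F stays below 0
```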
"{\"title\": \"(Continued)\", \"comment\": \"Comment 5: \\u201cIs it possible to show how loose the convex relaxation is for a small toy example? For example, the specification involving quadratic function is a good candidate.\\u201d\", \"answer_5\": \"We have now added a section in the Appendix H.2 (Tightness with a Toy Example), here we consider a toy example where the specification is :F(x,y) = x^2 - y^2 - 4<=0. The variables x and y are from two interval sets (-b_x, b_x) and (-b_y, b_y) respectively. Throughout the toy example we keep b_y=9. In Appendix H.2, we have added Figure 8, where the plot on the left shows the true set which satisfies the specification and we also show our convex relaxed set using our relaxation. The convex relaxed set is simply a box around the true set which is bounded hyperbolically (shown in green). In the same figure with the plot on the right, we also show the tightness of our relaxation as the interval set increase in length, specifically as we increase b_x. What we find is that our relaxation becomes looser linearly with respect to the increase in interval length.\", \"minor_comments\": \"[In (4), k is undefined]\\nThanks for spotting this, k was indeed a typo, this is now changed to n. \\n\\n[In (20), I am not sure if it is equivalent to the four inequalities after (22). There are 4 inequalities after (22) but only 3 in (20). ]\\nThere were only three constraints in equation 20 as we enforce X- aa^T to by a symmetric semi-definite matrix. The constraints X_ij - l_j a_i - u_i a_j + l_j u_i >=0, X_ij - l_i a_j - u_ja_i + l_i u_j >=0 becomes the same constraint when X_ij is symmetric. Thanks for spotting this, we have made this clearer in the appendix. In fact, this allowed us to spot that we also missed some constraints which we enforced this is now also added.\"}",
"{\"title\": \"(Continued)\", \"comment\": \"Comment 4: [For the Mujoco experiment, I am not sure how to interpret the delta values in Figure 1. Is the delta trivial?]\", \"answer_4\": [\"Thanks pointing this out, we should have been clearer about this. We have added these details to the appendix. For completeness we also list them here.\", \"The pendulum model takes [cos(theta), sin(theta), v] as input. Here theta is the angle of the pendulum and v is the scaled angular velocity (angular velocity / 10) - the data is generated such that the initial angular velocity lies between (-10, 10), by scaling this with 10 we make sure [cos(theta), sin(theta), v] lies in a box where each side is bounded by [-1, 1].\", \"The pendulum setup is the Pendulum from the DeepMind Control Suite (https://arxiv.org/abs/1801.00690). The pendulum is of length 0.5m and hangs 0.6m above ground.\", \"When the perturbation radius is 0.01. Since the pendulum is of length 0.5m, the perturbation in space is about 0.005 m in the x and y direction. The perturbation of the angular velocity is true_angular_velocity +- 0.1 radians per second (since the input is a scaled angular velocity by a factor of 10). The largest delta value we verified for was 0.06, this means the angular velocity can change upto 0.6 radians per second which is about \\u2155 of a full circle, thus this is not a trivial perturbation.\"]}",
"{\"title\": \"(Continued)\", \"comment\": \"Comment 3: [Detailed network architecture (Model A, Model B). Comment on the scalability of the proposed method]\", \"answer_3\": \"For the CIFAR 10 Semantic Specification, Model A and Model B are identical in terms of network architecture and consist of 4 convolutional and 3 linear layers interleaved with ReLU functions and 860000 parameters. For the MNIST Downstream Task the models consist of two linear layers interleaved with ReLU activation and 15880 parameters - which was enough to get good adversarial accuracy. For the pendulum physics specification we used a two layer neural network with ReLU activations and in toal 120 parameters. Regarding the scalability please see our previous comment.\"}",
"{\"title\": \"(Continued)\", \"comment\": \"Comment 2: \\u201cReport the details on how they solve the relaxed convex problem, and report verification time. What is the largest scale of network that the algorithm can handle within a reasonable time?\\u201d\", \"answer_2\": \"Thanks for the suggestion. We have added Appendix E (Scaling and Implementation) where we explain how we solved the relaxed convex problem. For CIFAR 10 semantic specification and downstream task specification (since all constraints are linear) we have used the open source LP solver GLOP (https://developers.google.com/optimization/lp/glop) and on average this takes 3-10 seconds per data point on a desktop machine (with 1 GPU and 8G of memory) for the largest network we handled. This network consists of 4 convolutional and 3 linear layers comprising of in total 860000 parameters. For the conservation of energy, we used SDP constraints - this relaxation scales quadratically with respect to the input and output dimension of the network. To solve for these set of constraintt we used the CVXOPT solver (https://cvxopt.org/) accessed via the python interface CVXPY (http://www.cvxpy.org/), which is slower than GLOP, and we have only tested this on a small network consisting of two linear layers with a total of 120 parameters. However, we expect that with stronger SDP solvers (like Mosek - https://www.mosek.com/) or by using custom scalable implementations of SDPs (for example, the techniques described in https://people.eecs.berkeley.edu/~stephentu/writeups/first-order-sdp.pdf), we will be able to scale to larger problem instances - we plan to pursue this in future work.\"}",
"{\"title\": \"We thank the reviewer for the detailed feedback and encouraging review. We address individual comments below.\", \"comment\": \"Comment 1: [The authors should distinguish the proposed technique to techniques from [1] and [2] which could be used to convert some non-linear specifications to linear specifications.]\", \"answer_1\": \"We thank the reviewer for highlighting this point, we have now added a paragraph in the section \\u2018Specifications Beyond Robustness\\u2019 to distinguish between existing techniques and convex relaxable specifications. The reviewer is correct in pointing out that some non-linearities can indeed be linearized through the use of different element-wise activation functions. However, in terms of generality as the reviewer mentioned, this mechanism does not work in many cases - an example is the softmax function, which needs every input in the layer to give it\\u2019s output. In this particular case, it is a non-separable nonlinear function and current literature does not support verification with such non-linearities.\"}",
"{\"title\": \"We have updated the paper and thanks for all the reviews.\", \"comment\": \"We thank all reviewers for the detailed reviews and thoughtful remarks. We have addressed all concerns in an updated version of the paper and you can find responses to your questions below.\", \"we_would_like_to_clarify_two_points_that_came_up_in_multiple_reviews\": \"1) The nonlinear specifications that can be verified with our method do not have to be convex. We only require the specification to convex-relaxable - which is a weaker condition. We have rephrased parts of Section 3 to make this more clear.\\n2) The framework outlined in the paper is general, however, for verification to be meaningful the bound is required to be sufficiently tight, which requires a tight convex-relaxation that is dependent on the form of the function and thus has to be problem specific. We also refer to Answer 3 to Reviewer 1 for a more detailed response.\"}",
"{\"title\": \"good paper, with minor issues\", \"review\": \"This paper uses convex relaxation to verify a larger class of specifications\\nfor neural network's properties. Many previous papers use convex relaxations on\\nthe ReLU activation function and solve a relaxed convex problem to give\\nverification bounds. However, most papers consider the verification\\nspecification simply as an affine transformation of neural network's output.\\nThis paper extends the verification specifications to a larger family of\\nfunctions that can be efficiently relaxed.\\n\\nThe author demonstrates three use cases for non-linear specifications,\\nincluding verifying specifications involving label semantics, physic laws and\\ndown-stream tasks, and show some experiments that the proposed verification\\nmethod can find non-vacuous bound for these problems. Additionally, this paper\\nshows some interesting experiments on the value of verification - a more\\nverifiable model seems to provide more interpretable results.\\n\\nOverall, the proposed method seems to be a straightforward extension to\\nexisting works like [2]. However the demonstrated applications of non-linear\\nspecifications are indeed interesting, and the proposed method works well on \\nthese tasks.\", \"i_have_some_minor_questions_regarding_this_paper\": \"1) For some non-linear specifications, we can convert these non-linear elements\\ninto activation functions, and build an equivalent network for verification\\nsuch that the final verification specification becomes linear. For example, for\\nverifying the quadratic specification in physics we can add a \\\"quadratic\\nactivation function\\\" to the network and deal with it using techniques in [1] or\\n[2]. The authors should distinguish the proposed technique with these existing\\ntechniques. My understanding is that the proposed method is more general, but\\nthe authors should better discussing more on the differences in this paper.\\n\\n2) The authors should report the details on how they solve the relaxed convex\\nproblem, and report verification time. Are there any tricks used to improve\\nsolving time? What is the largest scale of network that the algorithm can\\nhandle within a reasonable time?\\n\\n3) The detailed network architecture (Model A, Model B) is not shown. How many\\nlayers and neurons are there in these networks? This is important to show the\\nscalability of the proposed method.\\n\\n4) For the Mujoco experiment, I am not sure how to interpret the delta values\\nin Figure 1. For CIFAR I know it is the delta of pixel values but it is not\\nclear about the delta in Mujoco model. What is the normal range of predicted\\nnumbers in this model? How does the delta compare to it? Is the delta very\\nsmall or trivial?\\n\\n5) Is it possible to show how loose the convex relaxation is for a small toy\\nexample? For example, the specification involving quadratic function is a\\ngood candidate.\", \"there_are_some_small_glitches_in_equations\": \"* In (4), k is undefined\\n* In (20), I am not sure if it is equivalent to the four inequalities after (22).\\nThere are 4 inequalities after (22) but only 3 in (20).\\n\\n\\nMany papers uses convex relaxations for neural network verification. However\\nvery few of them can deal with general non-linear units in neural networks.\\nReLU activation is usually the only non-linear element than we can handle in\\nmost neural network verification works. Currently the only works that can\\nhandle other general non-linear elements are [1][2]. 
This paper uses more\\ngeneral convex relaxations than these previous approaches, and it can handle\\nnon-separable non-linear specifications. This is a unique contribution to this\\nfield. I recommend accepting this paper as long as the minor issues mentioned\\nabove can be fixed.\\n\\n[1] \\\"Efficient Neural Network Robustness Certification with General Activation\\nFunctions\\\" by Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel.\\nNIPS 2018\\n\\n[2] \\\"A dual approach to scalable verification of deep networks.\\\" by\\nKrishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and\\nPushmeet Kohli. UAI 2018.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting proposal to craft non-linear, convex relaxable specifications for more complicated networks\", \"review\": \"This paper considers more general non-linear verifications, which can be convexified, for neural networks, and demonstrate that the proposed methodology is capable of modeling several important properties, including the conversation law, semantic consistency, and bounding errors.\\n\\nA few other comments\\n\\n*) Is it critical that the non-linear verifications need to be convex relaxable. Recently, people have observed that a lot of nonconvex optimization problems also have good local solutions. Is it true that the convex relaxable condition is only required for provable algorithm? As the neural network itself is nonconvex, constraining the specification to be convex is a little awkward to me.\\n\\n*) The paper contains the example specification functions derived for three specific purpose, I'm wondering how broad the proposed technique could be. Say if I need my neural network to satisfy other additional properties, is there a general recipe or guideline. If not, what's the difficulty intuitively speaking?\\n\\nThe paper needs to be carefully proofread, and a lot of commas are missing.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Some new ideas to generalize verifications for adversarial robustness but limited investigation and experimental results.\", \"review\": \"- Summary: This paper proposes verification algorithms for a class of convex-relaxable specifications to evaluate the robustness of the network under adversarial examples. Experimental results are shown for semantic specifications for CIFAR, errors in predicting sum of two digits and conservation of energy in a simple pendulum.\\n\\n- Clarity and correctness: It is a well-written and well-organized paper. Notations and expressions are clear. The math seems to be correct. \\n\\n- Significance: The paper claims to have introduced a class of convex-relaxable specifications which constitute specifications that can be verified using a convex relaxation. However, as described later in the paper, it is limited to feed-forward neural networks with ReLU and softmax activation functions and quadratic parts (it would be better to tone down the claims in the abstract and introduction parts.)\\n\\n- Novelty: The idea of accounting for label semantics and quadratic expressions when training a robust neural network is important and very practical. This paper introduces some nice ideas to generalize linear verification functions to a larger class of convex-relaxable functions, however, it seems to be more limited in practice than it claims and falls short in presenting justifying experimental results.\\n\\n** More detailed comments:\\n\\n** The idea of generalizing verifications to a convex-relaxable set is interesting, however, applying it in general is not very clear -- as the authors worked on a case by case basis in section 3.1. \\n\\n** One of my main concerns is regarding the relaxation step. There is no discussion on the effects of the tightness of the relaxation on the actual results of the models; when in reality, there is an infinite pool of candidates for 'convexifying' the verification functions. It would be nice to see that analysis as well as a discussion on how much are we willing to lose w.r.t. to the tightness of the bounds -- especially when there is a trade-off between better approximation to the verification function and tightness of the bound. \\n\\n** I barely found the experimental results satisfying. To find \\\"reasonable\\\" inputs to the model, authors considered perturbing points in the test set. However, I am not sure if this is a reasonable assumption when there would be no access to test data points when training a neural network with robustness to adversarial examples. And if bounding them is a very hard task, I am wondering if that is a reasonable assumption to begin with.\\n\\n** It is hard to have a sense of how good the results are in Figure 1 due to lack of benchmark results (I could not find them in the Appendix either.)\\n\\n** The experimental results in section 4.4 are very limited. I suggest that the authors consider running more experiments on more data sets and re-running them with more settings (N=2 for digit sums looks very limited, and if increasing N has some effects, it would be nice to see them or discuss those effects.)\\n\\n** Page 2, \\\"if they do a find a proof\\\" should be --> \\\"if they do find a proof\\\" \\n** Page 5, \\\"(as described in Section (Bunel et al., 2017; Dvijotham et al., 2018)\\\", \\\"Section\\\" should be omitted.\\n\\n******************************************************\\nAfter reading authors' responses, I decided to change the score to accept. 
It got clear to me that this paper covers broader models than I originally understood from the paper. Changing the expression to general forms was a useful adjustment in understanding of its framework. Comparing to other relaxation technique was also an interesting argument (added by the authors in section H in the appendix). Adding the experimental results for N=3 and 4 are reassuring.\", \"one_quick_note\": \"I think there should be less referring to papers on arxiv. I understand that this is a rapidly changing area, but it should not become the trend or the norm to refer to unpublished/unverified papers to justify an argument.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1fF0iR9KX | Geometry aware convolutional filters for omnidirectional images representation | [
"Renata Khasanova",
"Pascal Frossard"
] | Due to their wide field of view, omnidirectional cameras are frequently used by autonomous vehicles, drones and robots for navigation and other computer vision tasks. The images captured by such cameras, are often analysed and classified with techniques designed for planar images that unfortunately fail to properly handle the native geometry of such images. That results in suboptimal performance, and lack of truly meaningful visual features. In this paper we aim at improving popular deep convolutional neural networks so that they can properly take into account the specific properties of omnidirectional data. In particular we propose an algorithm that adapts convolutional layers, which often serve as a core building block of a CNN, to the properties of omnidirectional images. Thus, our filters have a shape and size that adapts with the location on the omnidirectional image. We show that our method is not limited to spherical surfaces and is able to incorporate the knowledge about any kind of omnidirectional geometry inside the deep learning network. As depicted by our experiments, our method outperforms the existing deep neural network techniques for omnidirectional image classification and compression tasks. | [
"omnidirectional images",
"classification",
"deep learning",
"graph signal processing"
] | https://openreview.net/pdf?id=H1fF0iR9KX | https://openreview.net/forum?id=H1fF0iR9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xvUsAQlE",
"HkgEYop-kN",
"HyenTpSY0m",
"BJxHDTHK07",
"B1gzxTSFAX",
"SJlurqK4Tm",
"H1goq7Y52m",
"H1eOGZlqiQ",
"H1lMxTcvc7",
"SylGMfZCKm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544969038951,
1543785340144,
1543228868401,
1543228764905,
1543228650396,
1541868095753,
1541211026596,
1540124944402,
1538923753795,
1538294282153
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper914/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper914/Authors"
],
[
"ICLR.cc/2019/Conference/Paper914/Authors"
],
[
"ICLR.cc/2019/Conference/Paper914/Authors"
],
[
"ICLR.cc/2019/Conference/Paper914/Authors"
],
[
"ICLR.cc/2019/Conference/Paper914/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper914/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper914/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper914/Authors"
],
[
"~Michael_Bronstein1"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths:\\n\\nThis paper proposed to use graph-based deep learning methods to apply deep learning techniques to images coming from omnidirectional cameras.\", \"weaknesses\": \"The projected MNIST dataset looks very localized on the sphere and therefore does not seem to leverage that much of the global connectivity of the graph\\nAll reviewers pointed out limitations in the experimental results.\\nThere were significant concerns about the relation of the model to the existing literature. It was pointed out that both the comparison to other methodology, and empirical comparisons were lacking.\\n\\n\\nThe paper received three reject recommendations. There was some discussion with reviewers, which emphasized open issues in the comparison to and references to existing literature as highlighted by contributed comment from Michael Bronstein. Work is clearly not mature enough at this point for ICLR, insufficient comparisons / illustrations\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Area chair recommendation\"}",
"{\"title\": \"Re: Perturbed accuracies - v1 vs v2\", \"comment\": \"Thank you for your question.\\n\\nWe have updated the network architecture by inserting an additional convolutional layer with stride 2 and adding an average pooling operation after the last convolutional layer to reduce the dimensionality of the feature embedding, which permits to achieve better accuracy without increasing number of the parameters. Please note that the structure of the proposed graph-based geometry-aware layer remains unchanged. We described the exact architecture in the appendix. Using this architecture we then rerun the experiments for the CNN baseline and our approach.\"}",
"{\"title\": \"Re: Review\", \"comment\": \"We would like to thank you for the comments. We have updated the description of the approach to make it more clear and self contained. We have also added a better description of the architecture that we use and reorganized the Section 4 to better summarize the results. Please find a detailed answer to the raised questions below. For simplicity we organized our response as a list of answers, one for each paragraph in the reviewer\\u2019s comment.\\n\\n1. Our approach is designed with the goal of incorporating the available prior knowledge inside the optimization procedure. Unlike the existing techniques, such as SphericalCNN, which work very well for spherical surfaces, our method is able to adapt its filters to any type of surface geometry as we have shown in our experiments, which proves the generality of the approach. During our experiments we found the optimization to be stable to different choices of hyperparameters. While it is true that our filters are not equivariant to rotations it is important for our network to have multiple layers as this allows the neurons of the deeper layers to have larger receptive field. This increase happens, because we apply our convolutional filters with stride 2, we have explained this better in the revised version of the paper.\\n\\n2. We have reorganized the results section of the paper to better illustrate the performance of our approach in comparison to the competing techniques. Briefly, regarding the dataset, we apologize for the confusion. We are aware of the dataset of omnidirectional images that is presented in the SphericalCNN paper. However, the dataset that we will make available is slightly different, as it consists of several different datasets that are obtained from MNIST by projecting its images on the surface of different geometric shapes (including simple spheres, modified spheres with different amount of added noise and cubes) and therefore permits developing techniques that are able to adapt to either one of the presented distortions or to all of them. Further, in our experiments we used our dataset (with identical training and test splits) for the evaluation of all the methods, including the SphericalCNN.\\n\\n3. Thank you for this comment, in the original version, we indeed omitted the introduction of the Laplacian matrix and left only the reference to the literature due to the space limitations. We discuss it in more details in the updated version of the paper in Section 3.2. \\n\\n4. Thank you for pointing out that the description of the work of Cohen et al. (2018) is not precise. We have updated Section 2 accordingly in the revised version of the paper. \\n\\n5. Apart from the modified spherical geometries that we introduced to illustrate the generality of the approach, we have also shown the performance of our method for the case of cube-map projection of omnidirectional images. This type of projection has recently become popular and is frequently used for various task including compression. Further, in the revised version of the paper we have added the experiments with the stereographic projection, which is frequently used in fish-eye cameras. Finally, in the revised version we evaluate the performance of our method on a completely different problem of image compression, which shows the generality of our approach, as compared to the competing methods, which were developed solely for the classification task.\\n\\n5. 
Finally, we believe that our approach provides with a generic way of dealing with different surface geometries as it is able to adapt the filter size and shape by only requiring the knowledge of the projection equations. This is a considerable advantage in comparison to other techniques that are designed only for specific type of surface geometry, as it has a wider application area. Further our suggested approach of defining the filter on the tangent plane is highly beneficial as it allows to avoid complex derivation, which will be required in case analytical computation of the changes in the size and shape of the filter. We have clarified these points in the revised version of the paper and modified the results section to better illustrate the advantages of the proposed technique.\"}",
"{\"title\": \"Re: Interesting idea but needs better illustration\", \"comment\": \"Dear reviewer AnonReviewer3, thank you for your feedback. We have update the text of the paper to improve the clarity and also reorganized the Results section of the paper to better show the evaluation of our approach. Regarding your question we have the following clarifications:\\n\\n1. To define an initial circular area of the filter we project a 3 x 3 pixels area from the equator of the sphere to the tangent plane. This gives the minimum radius for a filter such that when this filter is applied at different elevations on the sphere, each filter component corresponds to at least a point on the spherical image.\\n\\n2. This is a good point. In principle as we sample sphere with a relatively high rate, the value of Euclidean and geodesic distances are very close to each other, so the performance of the network will not change much. In this work the weights are inversely proportional to Euclidean distance between the nodes. However, experimenting with the larger filter sizes, geodesic distances is an interesting direction for future research.\\n\\n3. We apologize if is was not very clear from the text but our anisotropic filters are designed in a way that they cover the same area on the sphere independently of the elevation level. This in turn results in the areas of different sizes in the equirectangular representation of the spherical image. Further, even though for a spherical image we can have an analytic way of computing the size of the filter kernel to have the consistent coverage of the sphere, this approach has one drawback,that for any new type of projection this analytic equations should be re-derived. In our case, however, knowing the projection equations is enough to apply the method, which makes the approach readily available for working with different projection types. \\n\\n4. Thank you for the reference, we added the discussion about Anisotropic CNN (ACNN) and mixture model network (moNet) to the Section 2 of the updated version of the paper. Briefly, the main difference between our approach and theirs is that in this work we propose a framework to build graphs to efficiently process equirectangular, cube mapping or another image projection, while the suggested references are designed for graphs (or manifolds) and do not rely on any prior information about geometrical structure of the projection.\"}",
"{\"title\": \"Re: Experiments too limited to judge the merits\", \"comment\": \"We thank reviewer for the comments. Please find the answer on your questions below:\\n\\n1. We have experimented with the MNIST dataset to show the ability of our method adapt to different geometries of projective surface. We have further tried projecting other images on the sphere, but the resulting representations had various unrealistic artifacts on the borders of projected images. We therefore evaluated our method on a different image compression task, for which we could obtain good quality real omnidirectional images. We have updated the results section of our paper to include this experiment. The method further show that our approach is applicable to a wide range of tasks, while the competing method were solely designed for image classification. \\n\\n2. This is a very good point. Indeed, as we discussed above, we have also experimented with the compression problem and show now the evaluation of our approach in Section 4.3 of the revised manuscript. Briefly we show that, due to the knowledge of the projective geometry that is encoded in the graph structure, we can easily avoid artifacts in the compressed images that are present when using conventional image coding methods. \\n\\n3. Indeed for some of the surfaces it is possible to compute the mapping to the spherical one, however even if a reasonable mapping can be found, the necessity of computing one makes it harder to apply techniques developed for spherical images to other surfaces. While our method does not need this additional preprocessing step, which makes it readily applicable to different surfaces, as depicted by our experiments in the Results section. Further the fact that our approach works directly with the given surface allows it to avoid interpolation artifacts, which may be introduced during the interpolation process, when the given surface is mapped to the spherical one. \\nIn Table 2 we can see that our same algorithm adapts to different surface shapes as they are encoded in the graph structure. In these experiments, SphericalCNN does not have the knowledge about the change of the projection, which results in a drop of performance. Regarding the second part of the question, we would like to clarify that we train separate networks for each of the surfaces. \\n\\n4. Thank you for this question, we add the description about this point in Section 4.1 in updated version of the paper. For all the graph-based approaches we use the graph-based convolutions with stride two on each layer. This allows to increase the size of the receptive field for the neurons in the deeper layers of the network, similarly to the classic ConvNets. This in turns allows the network to process the information from the large area on the sphere without increasing the number of parameters. It is important to note here that having a strided convolution requires building a separate graph for every convolutional layer of the network, which is a subsampled version of the graph from the previous convolutional layer. \\n5. Thank you for the comment. In this paper we mostly focus on the introduction of the generic way of defining an anisotropic graph-based convolutional filter. We have experimented with some variations of shape and size of the filters, and did not observe significant changes in performance. 
Nevertheless, we believe that a complete study of all possible modifications of the of shape and size of the areas on the tangent plane that define the filter is a very interesting direction for the future research. \\n\\nWe have modified the text of the paper to better focus on the advantages of the proposed technique. We have further added an evaluation of our approach on a different image compression problem, which shows the generality of the approach with respect to the competing methods. \\n\\nWe would also like to further thank reviewer for the style suggestion and pointing out the typo. We updated the text accordingly.\"}",
"{\"title\": \"Experiments too limited to judge the merits\", \"review\": \"This paper proposed to use graph-based deep learning methods to apply deep learning techniques to images coming from omnidirectional cameras. It solves the problem of distorsions introduced by the projection of such images by replacing convolutions by graph-based convolutions, with in particular a combinaison of directed graphs which makes the network able to distinguish between orientations.\\n\\nThe paper is fairly well written and easy to follow, and the need for treating omnidirectional images differently is well motivated. However, since the novelty is not so much in the graph convolution method, or in the use of graph methods for treating spherical signals, but in the combined application of the particular graph method proposed to the domain of omnidirectional images, I would expect a more thorough experimental study of the merits of the method and architectural choices.\\n\\n1. The projected MNIST dataset looks very localized on the sphere and therefore does not seem to leverage that much of the global connectivity of the graph, although it can integrate deformations. Since the dataset is manually projected, why not cover more of the sphere and allow for a more realistic setting with respect to omnidirectional images?\\nMore generally, why not use a realistic high resolution classification dataset and project it on the sphere? While it wouldn't allow for all the characteristics of omnidirectional images such as the wrapping around at the borders, it would lead to a more challenging classification problem. Papers such as [Khasanova & Frossard, 2017a] have at least used two toy-like datasets to discuss the merits of their classification method (MNIST-012, ETH-80), and a direct comparison with these baselines is not offered in this work.\\n\\n2. The method can be applied for a broad variety of tasks but by evaluating it in a classification setting only, it is difficult to have an estimate of its performance in a detection setting, where I would see more uses for the proposed methods in such settings (in particular with respect to rotationally invariant methods, which do not allow for localization).\\n\\n3. I fail to see the relevance of the experiments in Section 4.2 for a realistic application. Supposing a good model for spherical deformations of a lens is known, what prevents one from computing a reasonable inverse mapping and mapping the images back to a sphere? If the mapping is non-invertible (overlaps), then at least using an approximate inverse mapping would yield a competitive baseline.\\nI am surprised at the loss of accuracy in Table 2 with respect to the spherical baseline. Can you identify the source of this loss? Did you retrain the networks for the different deformations, or did you only change the projection of the network trained on a sphere? \\n\\n4. While the papers describes what happens at the level of the first filters, I did not find a clear explanation of what happens in upper layers, and find this point open to interpretation. Are graph convolutions used again based on the previous polynomial filter responses, sampling a bigger region on the sphere? Could you clarify this?\\n\\n5. I would also like to see a study of the choice of the different scales used (in particular, size of the neighborhood).\\n\\nOverall, I find that the paper introduces some interesting points but is too limited experimentally in its current form to allow for a fair evaluation of the merits of the method. 
Moreover, it leaves some important questions open as to how exactly it is applied (impact of sampling/neighborhood size, design of convolutions in upper layer...) which would need to be clarified and tested.\", \"additional_small_details\": [\"please do not use notation $\\\\mathbb{N}_p$ for the neighborhood, it suggests integers\", \"p. 4 \\\"While effective, these filters ... as according to Eq. (2) filter...\\\" -> article missing for the word \\\"filter\\\"\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but needs better illustration\", \"review\": \"The paper introduces geometry-aware filters based on constructed graphs into the standard CNN for omnidirectional image classification. Overall, the idea is interesting and the authors propose an extrinsic way to respect the underlying geometry by using tangent space projection. Understanding the graph construction and filter definition is not easy from the text description. It would be better to use a figure to illustrate them.\\n\\n1) How to define the size of the circular area on the tangent plane? \\n\\n2) Will the filter change greatly with the definition of the weight function in the neighborhood? Since the point locates on the sphere, why not using the geodesic distance instead of the Euclidean distance? \\n\\n3) It would be better to directly define the filter on the sphere and make it be intrinsic. The same filter on the tangent space may cover different sizes of regions on the sphere; while we prefer the filter has consistent coverage on the sphere. \\n\\n4) The paper misses the discussion and comparison to Anisotropic CNN (ACNN) and mixture model network (moNet).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper proposes a new way of defining CNNs for omnidirectional images. The method is based on graph convolutional networks, and in contrast to previous work, is applicable to other geometries than spherical ones (e.g. fisheye cameras). Since standard graph CNNs are unable to tell left from right (and up from down, etc.), a key question is how to define anisotropic filters. This is achieved by introducing several directed graphs that have orientation built into the graph structure.\\n\\nThe paper is fairly well written, and contains some new ideas. However, the method seems ad-hoc, somewhat difficult to implement, and numerically brittle. Moreover, the method is not equivariant to rotations, and no other justification is given for why it makes sense to stack the proposed layers to form a multi-layer network. \\n\\nThe results are underwhelming. Only experiments with small networks on MNIST variants are presented. A very marginal improvement over SphericalCNNs is demonstrated on spherical MNIST. I'm confused by the dataset used: The authors write that they created their own spherical MNIST dataset, which will be made publicly available as a contribution of the paper. However, although the present paper fails to mention it, Cohen et al. also released such a dataset [1], which raises the question for why a new one is needed and whether this is really a useful contribution or only results in more difficulty comparing results. Also, it is not stated whether the 95.2 result for SphericalCNNs was obtained from the authors' dataset or from [1]. If the latter, the numbers are not comparable.\\n\\nThe first part of section 3.2 is not very clear. For example, L^l is not defined. L is called the Laplacian matrix, but the Laplacian is not defined. It would be better to make this section more self contained.\\n\\nIn the related work section, it is stated that Cohen et al. use isotropic filters, but this is not correct. In the first layer they use general oriented spherical filters, and in later layers they use SO(3) filters, which allows anisotropy in every layer. Estevez et al. [2] do use isotropic spherical filters.\\n\\nIn principle, the method is applicable to different geometries than the spherical one. However, this ability is only demonstrated on artificial distortions of a sphere (fig 3), not practically relevant geometries like those found fisheye lenses.\\n\\nIn summary, since the approach seems a bit un-principled, does not have nice theoretical properties, and the results are not convincing, I recommend against acceptance of this paper in its current form.\\n\\n\\n[1] https://github.com/jonas-koehler/s2cnn/tree/master/examples/mnist\\n[2] Estevez et al. Learning SO(3) Equivariant Representations with Spherical CNNs\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Re: previous works on geometry-aware deep learning\", \"comment\": \"Dear Michael,\\n\\nThank you very much for the detailed comments. We are well aware of most of these works and their general relation to our proposal in terms of using geometry in the deep learning context. We, however, believe that these works focus on a different problem than ours, namely the one of 3D structure processing. In this paper, we rather propose a new approach to process omnidirectional images, where the challenge is to use the knowledge about image distortion in order to create effective representations. As deep networks suffer from interpretability, there is no straightforward way to incorporate this knowledge to the learning process. Therefore, our contribution is in suggesting an effective way of building several directed graphs to make the filters aware of the geometry of omnidirectional images. Of course, the sphere can be considered as a specific 3D structure - we rather argue that, if the geometry of images is known and fixed, it can be incorporated right away in the learning algorithm, instead of using generic learning solutions.\\n\\nNevertheless, as this might lead to confusion, we will extend our related work by including a subsection about the relation between other geometrical graph-based approaches and our convolutional filters for omnidirectional images, We will specifically highlight the differences between these families of methods. Below we briefly summarise these differences. \\n\\nThe method of [1] propose to define a patch around each point on a manifold using polar system of coordinates. Further, [2,3] propose different ways of constructing such patches on point clouds. The works [2,3] are in spirit similar to our approach in that they aim at creating anisotropic filters that operate on graphs. However, from the methodological perspective they are quite different, as they require computing eigendecomposition in order to model position- and direction-dependent filters, which is a time consuming process. The methods of [1,2,3] were generalised in the work [4], where the authors suggest defining a local system of d-dimensional pseudo-coordinates for every point and learn both the filters and patch operators that work on these coordinates, rather than using fixed kernels. Further, the authors of [5] propose edge-based convolutional kernels and dynamic graph updates. While being flexible and effective for general tasks these methods do not directly take the advantage of the knowledge of the projective geometry that we have in the specific context of the omnidirectional images considered in our work. Instead, we propose to model this a priori knowledge using a specifically designed graph representation. We further introduce a new way of creating such a representation and incorporating it inside a neural network for effective representation learning. \\n\\nA different method was suggested by [6], where the authors propose to process the directed graph by exploiting local graph motifs, which represent its connectivity patterns. The main differences of this method with our work are that first, this method assumes that the directed graph is already given. In our problem, building such a graph that is able to fully take advantage of the image projective geometry is actually one of the main contributions. 
Second, the approach in [6| does not use the knowledge of the coordinate system associated with omnidirectional images, which we however use in our architecture, in order to define filter orientations.\\n\\nOverall, they are truly key differences between the cited works and ours, which provides a constructive solution for the specific case of omnidirectional images. We will do our best to clarify it in the final version of the paper however.\"}",
"{\"comment\": \"The authors should be aware of a large amount of geometry-aware deep learning methods that are directly related to their work, especially in the intersection of learning, vision, and graphics. In [1], the first intrinsic CNN-like architecture was proposed for manifolds, further extended in [2-4]. In particular, in [2-3] anisotropic convolution filters on manifolds/meshes were proposed. In [6], anisotropic diffusion was extended to general graphs using graph motifs. These approaches can be considered as particular cases of the MoNet architecture [4], which in turn was extended in [5] using more general learnable local operators and dynamic graph updates.Finally, the authors may refer to a review paper [8] on geometric deep learning methods. I would be appropriate to compare to these methods, or at least discuss the differences from the proposed approach.\\n\\n1. Geodesic convolutional neural networks on Riemannian manifolds, ICCV Workshops 2015. \\n\\n2. Learning shape correspondence with anisotropic convolutional neural networks, NIPS 2016. \\n\\n3. Anisotropic diffusion descriptors, Computer Graphics Forum 35(2):431-441, 2016.\\n\\n4. Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017. \\n\\n5. Dynamic Graph CNN for learning on point clouds, arXiv:1712.00268\\n\\n6. MotifNet: a motif-based Graph Convolutional Network for directed graphs, arXiv:1802.01572\\n\\n7. Geometric deep learning: going beyond Euclidean data, IEEE Signal Processing Magazine, 34(4):18-42, 2017\", \"title\": \"previous works on geometry-aware deep learning\"}"
]
} |
|
S1xtAjR5tX | Improving Sequence-to-Sequence Learning via Optimal Transport | [
"Liqun Chen",
"Yizhe Zhang",
"Ruiyi Zhang",
"Chenyang Tao",
"Zhe Gan",
"Haichao Zhang",
"Bai Li",
"Dinghan Shen",
"Changyou Chen",
"Lawrence Carin"
] | Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning. | [
"NLP",
"optimal transport",
"sequence to sequence",
"natural language processing"
] | https://openreview.net/pdf?id=S1xtAjR5tX | https://openreview.net/forum?id=S1xtAjR5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gsH3rHxN",
"SkxsRsBBeV",
"Hyg25iCglV",
"BJxfQjpFpQ",
"SJl5T5TYaQ",
"Bkg6P5TKaQ",
"ryluNc6Y67",
"B1lXhKaKam",
"HJlcs_vT3X",
"B1x9vC6K27",
"rJekQDxD2m"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545063491304,
1545063378581,
1544772500205,
1542212377736,
1542212289876,
1542212197461,
1542212144310,
1542212011320,
1541400738288,
1541164642195,
1540978454533
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/Authors"
],
[
"ICLR.cc/2019/Conference/Paper913/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper913/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper913/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for the updated review\", \"comment\": \"Dear AnonReviewer1:\\n\\nThanks very much for the updated review and your valuable time.\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Thanks for the updated review\", \"comment\": \"Dear AnonReviewer3:\\n\\nThanks for your updated comments, we will continue revising our draft to make sure our method is well-justified.\\n\\nThanks again for your valuable time.\\n\\nBest,\\nAuthors\"}",
"{\"metareview\": \"The paper proposes the idea of using optimal transport to evaluate the semantic correspondence between two sets of words predicted by the model and ground truth sequences. Strong empirical results are presented which support the use of optimal transport in conjunction with log-likelihood for training sequence models. I appreciate the improvements to the manuscript during the review process, and I encourage the authors to address the rest of the comments in the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Dear AnonReviewer2:\\n\\nThank you very much for your comments!\", \"you_can_find_our_point_to_point_response_to_your_concerns_below\": \"1. OT only considers a set of words not a sequence of word: \\nThis is a valid point, and it is the exact reason why we have also used MLE loss to enforce the order consistency (which referred to as the syntax part in the manuscript). The OT regularization we introduced aims to improve the semantic consistency during model training. \\n\\nWe believe further applying optimal transport can improve the matching of the semantic contents (e.g., key words) between the sequences, so as to help the model match key words and maintain the semantic meaning. Theoretical justification can be found in Section 3.\\n\\nThe optimal transport can also be perceived as a soft-copying mechanism to preserve the key content from source to target. This is related to the CopyNet model [C1], which does not consider the ordering information either. Detailed discussion can be found in Section 2.2, paragraph \\\"Soft-copying mechanism\\\".\\n\\nA more detailed discussion has been added to our paper to clarify this (last paragraph of Section 2 and the first paragraph of Section 4). We also have added an experiment to show that our method can improve both the language model and word embedding matrix, details can be found in Appendix H.\", \"minor_points\": \"1. Some confusions on Figure 1 and OT defined in Eqn (2) \\\\& (3): \\nTo clarify, Eqn (3) is the numerical scheme used to solve the problem defined in Eqn (2). As such, they are equivalent. Sinkhorn differs slightly because it solves a entropy regularized OT problem, which we do not use. Additionally, in Eqn (2) we did not set the \\\"admissible highest number of edges\\\". We think this may be a hard constraint and is not clear how to optimize it in practice. \\n\\n2. Why is OT a natural measure for sequence comparisons, as no ordering information is involved: Admittedly, our OT objective does not explicitly consider ordering information. We basically follow the argument from Word mover's distance (WMD) [C3] to consider a bag-of-words similarity measures, where WMD demonstrates great success in measuring text similarity. We considered this to be ``natural'' because our measure is based on similarity in embedding space, rather than hard-matching. We believe the order information could be helpful. However how to leverage such ordering information in OT objective design in an efficient manner is not trivial and can be an interesting future work. \\n\\n3. What is NMT in Table 1: \\nThanks for pointing out this issue. In Table 1, NMT refers to Google's Neural Machine Translation model. We have made \\\"GNMT\\\" and \\\"NMT\\\" consistent in Table 1 in our revision.\\n\\n4. How do you define \\\"substantial\\\" improvement of the scores: We agree that the phrase \\\"substantial\\\" can be subjective. We rephrase it to ``consistent improvements''.\\n\\n5. Choice of hyper-parameters: \\nWe set $\\\\gamma=0.1$ in our experiments, Figure 3 in the Appendix C, shows how the performance changes with respect to different values of $\\\\gamma$. $\\\\beta$ is the parameter for the proximal point methods, which is fairly robust for different values [C1] and only affects the convergence rate. We choose $\\\\beta=0.5$, since it helps converge faster than other setups. 
We observed that as long as $\\\\beta$ is within a reasonable range, i.e., (0,10], the results are not sensitive to this hyperparameter.\\n\\n[C1] Xie, et.al (2018), A Fast Proximal Point Method for Computing Wasserstein Distance\\n[C2] Gu et al., Incorporating copying mechanism insequence-to-sequence learning. ACL 2016.\\n[C3] Kusner et al., From Word Embeddings To Document Distances. ICML 2015.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Dear AnonReviewer1:\\n\\nThank you very much for your inputs!\", \"below_you_can_find_our_detailed_response_to_your_comments\": \"1. Please clarify the algorithmic efficiency and convergence guarantees of the IPOT algorithm: \\nThe IPOT has the same algorithmic complexity compared with Sinkhorn, but it solves the exact OT problem, not the entropy regularized OT. So it is not necessarily faster, but definitely more accurate. The convergence guarantees of IPOT can be find in Theorem 3.1 from [B1]. \\n\\n2. Is OT loss a regularizer or main loss: \\nThe OT is used as a regularizer rather than the main loss. OT training considered in this paper cannot train a proper language model on its own because it does not explicitly consider word ordering. We have fixed our paper in Section 2 correspondingly to reduce the confusion. \\n\\n3. More clarifications on the theoretical justification: \\nWe show in Section 3 that our training objective can be well approximated by a Wasserstein gradient flow, whose solution is the data distribution. \\nThe key formulation is Equation (6), which is the solution of a WGF. We show that under certain conditions, Equation (6) recovers our training objective. Thus we have built a connection between WGF and our proposed method. We appreciate the reviewer's suggestion and have edited Section 3 to make it easier for readers less familiar with the WGF theory to follow. \\n\\n4. It will be good to see additional results on soft-copying OT with attention weights:\\nThanks, this is a very good idea. We will try to pursue this in our future work.\\n\\n\\n[B1] Xie, et.al (2018), A Fast Proximal Point Method for Computing Wasserstein Distance\"}",
"{\"title\": \"Authors' response (2/2)\", \"comment\": \"Minor points:\\n1. Please be more clear about the Bregman-based algorithm: \\nTo clarify, the IPOT algorithm we adopted solves the exact OT problem, not the entropy regularized variant as in Sinkhorn. It is a case of proximal descent scheme, using Bregman divergence as the proximity metric. \\nSinkhorn regularize the entropy of the solution, while IPOT regularize the Bregman divergence between current and last iterate. These points have been clarified in our revision. \\n\\n2. Inconsistent notation of the OT loss: \\nThanks for pointing this out and we have fixed this issue.\"}",
"{\"title\": \"Authors' response (1/2)\", \"comment\": \"Dear AnonReviewer3:\\n\\nThank you very much for your comments!\", \"your_comments_have_been_carefully_addressed_below\": \"1. How does the OT objective leverage the model's belief: \\nWe believe reviewer's confusion can be attributed to the fact that the notations we employed for the IPOT algorithm is no entirely consistent with the rest of our paper (which the reviewer also kindly pointed out as a minor issue). To clarify, our model does take the model's belief into account. \\nWhile the marginal distribution of the words is not directly fed into the IPOT algorithm, it is used to compute the model's predicted embedding. As such, both the model's belief and word embedding are updated when training with the OT objective. More specifically, we have $z_i = E^T w_i, z'_i = E^T \\\\hat{w}_i$ for feature vectors $\\\\{S_i\\\\}$ and $\\\\{ S'_i \\\\}$ used in Alg 1 as input, where $E$ denotes the word embedding matrix, $\\\\{w_i\\\\}$ is the one-hot encoding vector for the $i$-th word in the sentence and $\\\\hat{w}_i$ is the model's belief for the word at that location. This means the model's belief is encoded in $\\\\{z'_i\\\\}$. Alg 1 Ln 2 is just the parameter initialization step for the IPOT algorithm, which of course has zero information on the model's belief. The T matrix will assimilate the model's belief from $\\\\{z'_i\\\\}$ as the IPOT algorithm starts to iterate. We also remark there are a number of different ways to encode the model's belief. In this study, we have experimented with several different popular choices (Average, SoftArgmax, Gumbel/Concrete, etc.) to identify which works best. It turned out that the Soft-argmax approach yields the best empirical performance for the evaluation metrics we considered. In our original submission we did not make this clear enough due to space limitations. In response to the reviewer's comment, we have added more discussions on this point to the paper and included more experiment details in the Appendix G and H, in Page 15. The notation inconsistency has been fixed as well. \\n\\n2. About the choice of SoftArgmax rather than the recently proposed Concrete estimator (a.k.a. Gumbel Softmax (GS)):\\nWe agree with the reviewer that alternative estimators should be discussed. We did consider applying GS in our experiments and found it renders less-stable training, possibly due to a higher variance compared with temperature-annealed Soft-argmax. In our experiments, using GS often give us sub-optimal solution, and at times even worse than the MLE baseline. Consequently, we only reported results with the Soft-argmax in our original submission. In the revised manuscript, we also report the results of GS estimator in Appendix G, Table 15. \\n\\n3. Clarification on the overall training procedure, especially for the MLE part:\\nThanks for the comment. We confirm that the MLE part of our model is the standard MLE training which the uses one-hot encoded ground-truth sentence as input. During training, our model is forwarded once. The algorithm box for training the entire model can be found as Algorithm 2 in Section 2 (Page 5). The OT uses the (annealed) Softmax (soft-argmax) weighted embedding, rather than the embedding of sampled word, at each location to avoid excessive variance. OT loss complements MLE loss and should not be used alone. We have demonstrated its utility both in the fine-tuning stage (after pre-training, as justified in Sec 3) and also right from the beginning of training. 
Details can be found in Appendix B, Table 7 and 8. We have made these points more clear in the experiment section.\\n\\n4. There seems to be a small gap between the theory from Sec 3 and the practice: \\nWe thank the reviewer for pointing out this confusion, which we hope to resolve as follows. First, Wasserstein distance can be defined for both continuous and discrete distributions [A1], so we believe the reviewer's concern is about the theory regarding Wasserstein gradient flows (WGF). While we have only described the theory for the continuous case (for the sake of simplicity), it actually also holds for the discrete case (see [A2]). These have been clarified in the updated manuscript. \\n\\n5. Some prior work has tried to address the weakness of RL we criticized: \\nThanks for bringing this paper to our attention. In our paper, we narrow down our consideration to a single metric as reward. In response to the reviewer's comment, we will discuss this literature in our manuscript and rephrase our claims accordingly.\\n\\n[A1] G, Luise et.al. (2018), Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance\\n[A2] Li \\\\& Montufar (2018), Natural gradient via optimal transport.\"}",
"{\"title\": \"Thanks for all these insightful comments!\", \"comment\": \"We would like to thank all the reviewers for taking their time to contribute these insightful comments, which helped us to improve the original submission. Our detailed point-to-point response can be found in our individual replies to the reviews, and we have also carefully updated the manuscript following the constructive suggestions from the reviewers.\", \"here_is_a_brief_summary_of_major_updates_made_to_the_manuscript\": \"1. Clarifications on the IPOT algorithm (Sec 2.1). \\n2. Discussions on alternative model belief encoding schemes such as Gumbel-softmax, further experiment results updated to the Appendix. (Sec 2.2)\\n3. Additional experiments showing the proposed OT-regularization can improve both word embedding matrix and language model. \\n4. Section 3 has been edited to make it easier to follow for readers less familiar with the theory of Wasserstein gradient flows.\\n5. A new algorithm block describing our full training procedure. (pp. 5)\\n6. Updated notation system to reduce confusion.\"}",
"{\"title\": \"Updated score; final comments\", \"review\": \"====== Final Comments =======\\nI thank the authors for updating the manuscript with clarifications and for clear replies to my concerns. \\n\\nI agree with R2 to some extent that the empirical performance of the method, as well as the formulation, is interesting. In general, the authors addressed my concerns regarding how optimal transport training model interfaces with MLE training and the choice of using scaled softmax for computing the Wasserstein distances. However, I still find myself agreeing with R3 that the choice of sets to compute Wasserstein distances (as opposed to sequences is somewhat unjustified); and it is not clear how the theory in Sec. 3 justifies using sets instead of words, as the data distribution p_d is over sequences in both the W-2 term as well as the MLE term. This would be good to clarify further, or explain more clearly how Sec. 3 justifies this choice.\\n\\nAlso, I missed this in the original review, but the assumption that KL(\\\\mu| p_d) = KL(\\\\p_d| \\\\mu) since \\\\mu is close to p_d for MLE training does not seem like a sensible assumption under model misspecification (as MLE is mode covering). I would suggest the authors discuss this in a revision/ camera-ready version.\\n\\nIn light of these considerations, I am updating the rating to 6 (to reflect the points that have been addressed), but I still do not believe that the method is super-well justified, despite an interesting formulation and strong empirical results (which are aspects the paper could still improve upon). \\n=====================\\n\\n**Summary**\\n\\nThe paper proposes a regularization / fine-tuning scheme in addition to maximum likelihood sequence training using optimal transport. The basic idea is to match a sampled sentence from a model with a ground truth sentence using optimal transport. Experimental results show that the proposed modification improves over MLE training across machine translation, summarization and image captioning domains.\\n\\n**Strengths**\\n+ The proposed approach shows promising results across multiple domains.\\n+ The idea of using optimal transport to match semantics of words intuitively makes sense.\\n\\n**Weaknesses**\\n1. Generally, one would think that for optimal transport we would use probabilities which come from the model, i.e. given a set of words and probabilities for each of the words, one would move mass around from one word to another word (present in the ground truth) if the words were semantically relevant to each other and vice versa. However, it seems the proposed algorithm / model does not get the marginal distributions for the words from the model, and indeed does not use any beliefs from the model in the optimal transport formulation / algorithm [Alg. 1, Line 2]. This essentially then means, following objective 2, that optimal transport has no effect on the model\\u2019s beliefs on the correct word, but merely needs to ensure that the cosine similarity between words which have a high transport mass is minimized, and has no direct relation to the belief of the model itself. This strikes as a somewhat odd design choice. (*)\\n\\n2. In general, another issue with the implementation as described in the paper could be the choice of directly using the temperature scaled softmax as an estimate of the argmax from the model, instead of sampling utterances from the model. 
It seems worth reporting results, even if as a baseline what sampling from something like a concrete distribution [A] yeilds in terms of results. (*)\\n\\n3. As a confirmation, Is the differentiable formulation for sentences also used for MLE training part of the objective, or does MLE still use ground truth sentences (one-hot encoded) as input. Does this mean that the model is forwarded twice (once with ground truth, and once with softmax outputs)? In general, instead of/ in addition to an Algorithm box that expains the approach of Xie et.al., it would be good to have a clear algorithmic explanation of the training procedure of the proposed model itself. Further, the paper is not clear if the model is always used for finetuning the an MLE model (as Section 3) seems to suggest or if the optimal transport loss is used in conjunction with the model as the bullet ``complementary MLE loss\\u2019\\u2019 seems to suggest. (*)\\n\\n4. Section 3: It is not clear that the results / justifications hold for the proposed model since the distance optimized in the current paper is not a wasserstein-2 distance. Sure, it computes cosine distance, which is L-2 but it appears wasserstein-2 distance is defined for continuous probability measures, while the space on which the current paper computes distances is inherently the (discrete) space of word choices. (*)\\n\\nExperiments\\n5. The SPICE metric for image captioning takes the content of the sentences (and semantics) into account, instead of checking for fluency. Prior work on using RL techniques for sequence predictions have used SPICE + CIDEr [B] to alleviate the problem with RL methods mentioned in page. 5. Would be good to weaken the claim. (*)\\n\\nMinor Points\\n1. It might be good to be more clear about the objective from Xie et.al. by also stating the modified formulation they discuss, and explaining what choice of the function yeilds the bregman divergence of the form discussed in Eqn. 3. \\n2. It would be nice to be consistent with how \\\\mathcal{L}_{ot} is used in the paper. Eqn. 4 lists it with two embedding matrices as inputs, while Alg. 1, Line 11 assigns it to be the inner product of two matrices.\\n\\n**Preliminary Evaluation**\\nIn general, the paper is an interesting take on improving MLE for sequence models. However, while the idea of using optimal transport is interesting and novel for training sequence models, I have questions about the particular way in which it has been implemented, which seems somewhat unjustified. I also have further clarifications about a claimed justification for the model. Given convincing responses for these, and other clarification questions (marked with a *) this would be a good paper.\\n\\nReferences\\n[A]: Maddison, Chris J., Andriy Mnih, and Yee Whye Teh. 2016. \\u201cThe Concrete Distribution: A Continuous Relaxation of Discrete Random Variables.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1611.00712.\\n[B]: Liu, Siqi, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2016. \\u201cImproved Image Captioning via Policy Gradient Optimization of SPIDEr.\\u201d arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1612.00370.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Sequence level regularization based on optimal transport\", \"review\": \"This submission deals with text generation. It proposes a sequence level objective, motivated by optimal transport, which is used as a regularizer to the (more standard) MLE. The goal is to complement MLE by allowing `soft matches` between different tokens with similar meanings. Empirical evaluation shows that the proposed technique improves over baselines on many sequence prediction tasks. I found this paper well motivated and a nice read. I would vote for acceptance.\", \"pros\": [\"A well-motivated and interesting method.\", \"Solid empirical results.\", \"Writing is clear.\"], \"cons\": [\"The split of `syntax--MLE` and `semantics--OT` seems a bit awkward to me. Matching the exact tokens does not appear syntax to me.\", \"Some technical details need to be clarified.\"], \"details\": [\"Could the authors comment on the efficiency by using the IPOT approximate algorithm, e.g., how much speed-up can one get? I'm not familiar with this algorithm, but is there convergence guarantee or one has to set some kind of maximum iterations when applying it in this model?\", \"Bottom of page 4, the `Complementary MLE loss paragraph`. I thought the OT loss is used as a regularizer, from the introduction. If the paper claims MLE is actually used as the complements, evidence showing that the OT loss works reasonably well on its own without including log loss, which I think is not included in the experiments.\", \"I really like the analysis presented in Section 3. But it's a bit hard for me to follow, and additional clarification might be needed.\", \"It would be interesting to see whether the `soft-copying` version of OT-loss can be combined with the copy mechanisms based on attention weights.\", \"================================\", \"Thanks for the clarification and revision! It addressed some of my concerns. I would stick to the current rating, and vote for an acceptance.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An OT-based regularization of the loss of seq2seq models\", \"review\": \"This paper propose to add an OT-based regularization term to seq-2-seq models in order to better take into account the distance between the generated and the reference and/or source sentences, allowing one to capture the semantic meaning of the sequences. Indeed, it allows the computation of a distance between embeddings of a set of words, and this distance is then used to define a penalized objective function.\\nThe main issue with this computation is that it provides a distance between a set of words but not a sequence of words. The ordering is then not taken into account. Authors should discuss this point in the paper.\\nExperiments show an improvement of the method w.r.t. not penalized loss.\", \"minor_comments\": [\"in Figure 1, the OT matching as described in the text is not the solution of eq (2) but rather the solution of eq. (3) or the entropic regularization (the set of \\\"edges\\\" is higher than the admissible highest number of edges).\", \"Introduction \\\"OT [...] providing a natural measure of distance for sequences comparisons\\\": it is not clear why this statement is true. OT allows comparing distributions, with no notion of ordering (see above).\", \"Table 1: what is NMT?\", \"first paragraph, p7: how do you define a \\\"substantial\\\" improvement of the scores?\", \"how do you set parameter $\\\\gamma$ in the experiments? Why did you choose \\\\beta=0.5 for the ipot algorithm?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJlY0jA5F7 | Improving Sample-based Evaluation for Generative Adversarial Networks | [
"Shaohui Liu*",
"Yi Wei*",
"Jiwen Lu",
"Jie Zhou"
] | In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks (GANs) on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing evaluation frameworks, which transfer the representation of the ImageNet Inception model to map images onto the feature space, our framework uses a specialized encoder to acquire a fine-grained domain-specific representation. Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance (CAFD), which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution. Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method. To the best of our knowledge, we are the first to provide counterexamples where FID gives results inconsistent with human judgments. It is shown in the experiments that our framework is able to overcome the shortcomings of FID and improve robustness. Code will be made available. | [
"generative adversarial networks",
"framework",
"fid",
"evaluation",
"images",
"representation",
"feature space",
"experiments",
"gans"
] | https://openreview.net/pdf?id=HJlY0jA5F7 | https://openreview.net/forum?id=HJlY0jA5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkxV9_pqkN",
"BJl1qdsKC7",
"SJgrhvWFhQ",
"ryliKWJBnX",
"BkeEmrOm2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544374412211,
1543252102726,
1541113772555,
1540841859087,
1540748571786
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper912/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper912/Authors"
],
[
"ICLR.cc/2019/Conference/Paper912/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper912/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper912/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a novel sample based evaluation metric which extends the idea of FID by replacing the latent features of the inception network by those of a data-set specific (V)AE and the FID by the mean FID of the class-conditional distributions. Furthermore, the paper presents interesting examples for which FID fails to match the human judgment while the new metric does not. All reviewers agree, that while these ideas are interesting, they are not convinced about the originality and significance of the contribution and believe that the work could be improved by a deeper analysis and experimental investigation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Intersting ideas that need some further investigations\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks much for your constructive comments and suggestions.\\n\\n- For the necessity of addressing the sample-based evaluation\\n\\nFID is a widely used metric for evaluating generative models. However, in our experiments we found that it appeared to be inconsistent with human judgements in some cases. Moreover, we found that there exist potentials to improve the existing FID metric.\\n\\n- For the domain-specific representations\\n\\nOur main focus is to address the drawbacks of the ImageNet model on the specific domain e.g. mnist, celebA. We give two seperate proposals including the classifier and the VAE. The experimental results cannot necessarily infer the preferences over the two proposals. Actually, we have not solved the problem of choosing a perfect encoder. We mainly propose to address the necessity of substituting the Imagnet model with a domain-specific encoder to get relatively more meaningful evaluation.\\n\\n- For the CAFD versus FID\\nCAFD is indeed an intra-class version over FID. Thus, this is a relatively simple incremental contribution on the previous method. However, given that FID is widely accepted in the GAN community and used in much literature for evaluation on mnist, fashion-mnist, celebA, etc, we consider it necessary to point it out and conduct both user studies and qualitative experiments to verify the improved effectiveness. Determining the number of modes is a highly non-trivial ill-posed problem, so we choose to use the number of classes for simplicity. MMD and Wasserstein distance are indeed two parallel methods free of the Gaussian assumption. These methods actually make great sense. In this paper, we aim to improve FID and compare our methods only with the baseline. We will delve more into it and maybe more user studies are needed for further comparison. \\n\\n- Overall\\n\\nThanks very much for your kind advice and discussions. We will study more into this serious problem in the future and hope the experimental results in this work can give some inspirations to your concerns. Thank you for your time to review our paper.\"}",
"{\"title\": \"Interesting paper that shows a failure case of FID\", \"review\": \"The paper proposes a new evaluation metric for generative adversarial networks and shows that it is better aligned with human judgment than FID. The metric is based on a domain-specific encoder to extract features of the image rather than ImageNet inception network and a class-aware Frechet distance which makes a Gaussian mixture assumption for the extracted features rather than a simple Gaussian assumption for FID. The paper shows an advantage for the new metric vs the others by constructing examples where FID fails while the proposed metric doesn't. Although this is an interesting finding, it is not a breakthrough in the sense that a domain-specific representation is expected to be better behaved than the features of the inception classifier and using a Gaussian mixture would be an obvious step after FID. Moreover, other metrics don't even rely on any assumption on the features distributions [1,2], so I would expect them to behave at least as well as the proposed metric.\\n\\n\\n[1] :M. Arjovsky, S. Chintala, L. Bottou, Wasserstein gan\\n[2] :M. Binkowski, D. J. Sutherland, M. Arbel, and A. Gretton. Demystifying MMD GANs.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas but not totally convinced\", \"review\": \"This paper proposes a variant of the popular FID score for evaluating GAN-type generative models. The paper makes two major complaints about the FID as it is currently used:\\n\\n1. The standard Inception network features trained on ImageNet might not be a good representation for whatever different dataset is being modeled, e.g. CelebA or CIFAR-10.\\n\\n2. The globally-Gaussian assumption made by the FID doesn't hold, which can cause some problems with the metric.\\n\\nTo address issue 1, the paper proposes choosing features based on a dataset-specific VAE, which can additionally incorporate labels when they're available. For 2, the authors propose to compute something like the FID between each component of a Gaussian mixture, based on soft assignments of points to a class with the VAE's inference network to estimate p(y|x), when labels y are available.\", \"in_terms_of_the_definition_of_the_cafd\": \"it is worth emphasizing that (9) is *not* the Frechet = Wasserstein-2 distance between Gaussian mixtures (which is fine). Rather, it's essentially the mean FID of the class-conditional distributions. This has previously been considered in the conditional GAN case, e.g. by Miyato and Koyama (ICLR 2018, https://openreview.net/forum?id=ByS1VpgRZ ). The difference here is that soft-assignments are supported, through the VAE's inference network, though using any classifier would be essentially equivalent. As long as you have a classifier, you can compute the CAFD, regardless of using a VAE representation or not; the VAE just conveniently gives you a classifier out too. Thus the two components of your proposal are essentially orthogonal.\", \"on_the_choice_of_dataset_dependent_features\": [\"You say several times through the paper that ImageNet-based features are \\\"ineffective\\\" because the class labels do not match with the target, e.g. \\\"fine-grained features distinguishing 'African hunting dog' from 'Cape hunting dog' (which all belong to the category 'dog' in CIFAR-10) are not needed.\\\" This is, I think, somewhat misguided: imagine I took ImageNet and assigned higher-level labels to it, such that each image is only assigned a label at the level \\\"dog,\\\" and then trained a GAN on it. Then a classifier wouldn't need to distinguish \\\"African hunting dog\\\" from \\\"Cape hunting dog.\\\" But a GAN, which doesn't see the labels at all, is being given *exactly the same problem*, and so the GAN still needs to be able to produce both African hunting dogs and Cape hunting dogs (though it doesn't need to be able to tell the two apart).\", \"Moreover, some people believe that CNNs trained on general-purpose approximate the human visual system reasonably well (for an overview of the arguments, see https://neurdiness.wordpress.com/2018/05/17/deep-convolutional-neural-networks-as-models-of-the-visual-system-qa/ ), and although the overall goals of GANs are somewhat fuzzy, \\\"the distribution appears the same to the human visual system\\\" seems pretty good as a goal.\", \"I think it's obvious that ImageNet-trained Inception features do not model the human visual system very well on, say, MNIST.\", \"They're probably also not amazing on CelebA, because it hasn't been fine-tuned for faces the way the human visual system has. 
(Incidentally, you say that \\\"the ImageNet models can hardly distinguish different faces\\\" -- this needs either a citation or some experimental support, in the appendix, because this is not a well-known fact and seems quite relevant to the common practice of applying ImageNet-trained features to CelebA evaluation.)\", \"But it's not clear to me that they don't model the human visual system reasonably well on CIFAR-10, or at least a theoretical higher-resolution version of it. It's true that ImageNet models will contain some features specific to distinguishing different types of guitars, and there are no guitars in CIFAR-10. But as long as those features aren't strongly activated by actual images from your model, they shouldn't mess up the distributions you're comparing too much.\", \"So if you're going to argue that ImageNet representations are insufficient on vaguely ImageNet-like tasks such as CIFAR-10, I don't think the arguments you have here are quite convincing. Probably, you need some evidence that the scores are made noisier by the irrelevant features and thus harder to estimate, or else maybe strong empirical evidence that using comparable features specific to the dataset distribution performs better.\", \"Anyway, for datasets that are not very much like ImageNet, using dataset-specific features is clearly sensible and perhaps necessary. But:\", \"You only provide pretty limited evidence that the VAE is better than a plain autoencoder, namely Table 2 which shows that the VAE puts less information in the top few principal components. But you only show that up to the top 5 components, and in any case it's not obvious that a more-spread distribution would be better.\", \"An important question that's not really considered here: how much does the FID/CAFD then just measure how well the generative model matches the VAE you get features from? Is it the case that this VAE would give a (nearly-)perfect score under the CAFD, or not?\", \"The results of Figure 1/Table 3 are very interesting. But I wonder how much of this difference in behavior is due to training on CelebA vs ImageNet and how much is due to the architecture or objective of the autoencoder. It might be interesting to compare to features from an ImageNet VAE and/or a CelebA classifier and see what those say. (The discriminator features are something like a CelebA classifier, but there's other things going on there too.)\"], \"on_the_cafd_versus_fid\": \"Your main argument for the CAFD over the FID is that it is based on a richer model of the distribution, which you claim to be closer to true: the FID is based on a multivariate Gaussian assumption with a total of n + n (n-1)/2 parameters, while you use K times as many parameters. Your Table 9 also gives some slight evidence that the Gaussian mixture gives a better fit to the data than a single Gaussian.\\n\\nI'm not entirely convinced by Table 9; comparing p-values is in general not necessarily very meaningful, and in particular it seems quite possible that the Anderson-Darling test simply prefers the mixture because the samples are more closely \\\"clumped together\\\" by the VAE than a random subset of inputs. Moreover, in either case the Gaussian assumption is clearly false a priori: in the Inception case, features are the output of a ReLU activation function and hence zero-inflated, and this or something like it may also be the case in your VAE. 
So comparing the p-value of tests for hypotheses known a priori to be false is probably a misguided endeavor.\\n\\nBut in any case the FID doesn't *really* assume Gaussianity. It coincides with the Frechet / Wasserstein-2 distance between Gaussians, but it's a perfectly plausible semimetric between any pair of distributions that have means and variances. The claim for superiority of CAFD over FID would then need to be something like \\\"the class-conditional means and variances are more representative of the distribution than the global means and variances.\\\"\", \"re\": \"your claim that \\\"As both FID and CAFD aim to model how well domain-specific images are generated, they are not designed to deal with mode dropping\\\" -- this is something of a strange claim, as dropping an entire mode will hopefully affect both the feature mean and especially the variance unless it is done extremely carefully. A related problem, though, is that the CAFD is essentially insensitive to drastically *reweighting* modes, e.g. producing twice as many 1s as 2s on MNIST: if each mode is modeled correctly, the CAFD will not be changed, while the FID would be strongly affected with reasonable features. The Mode Score KL(p(y*) || p(y)) would be sensitive to this, as you suggest, but it feels somewhat hacky.\\n\\nThe type of analysis in Table 1 is interesting, but one issue is that it is sensitive to the scale of each mode in feature space: if your encoder happens to place 1s close together and 2s relatively more spread apart, you'll see a higher conditional FID for 2s than for 1s even if the visual \\\"sample quality\\\" is the same.\\n\\nOne important piece of related work that's missing is Binkowski et al. (ICLR 2018, https://openreview.net/forum?id=r1lUOzWCW ), who demonstrate that the FID estimator is strongly biased in a misleading way. The same problems are inherited by the CAFD, which you should at least mention. Binkowski et al., and independently Xu et al. (https://arxiv.org/abs/1806.07755 ), also proposed using MMD variants on top of Inception features. This has better statistical properties as shown by Binkowski et al., and also explicitly does not make any parametric assumptions about the distribution of features. It would be worth thinking about the relationship of that approach to the FID/CAFD.\\n\\nAnother metric you could compare to is the \\\"Adversarial Divergence\\\" of Yang et al. (ICLR 2017, https://openreview.net/forum?id=HJ1kmv9xx ) which compares the distribution of classifier output, p(y|x), for x from the model to that from test data. It's a pretty different metric from CAFD with different properties, but since you both require a classifier, it would be good to know how the two compare.\", \"minor_points\": \"In the related work, your discussion of the MMD is misleading: Dziugaite et al. and Li et al. proposed using the MMD for *training* generative models, not for evaluating. Evaluating with two-sample tests based on the MMD using simple kernels was done e.g. by Sutherland et al. (ICLR 2017, https://openreview.net/forum?id=HJWHIKqgl ) and Olmos et al. (https://openreview.net/forum?id=HJWHIKqgl ), and used on top of Inception-like representations e.g. by Lopez-Paz and Oquab (2017), as well as Xu et al. and Binkowski et al. 
mentioned above.\\n\\nThe derivation (6) of the CAFD is somewhat sloppy about exactly what p() means -- in particular, it's somewhat confusing to use a lowercase p when every distribution you deal with here is actually discrete (for discrete y or for the empirical distribution S, since you're dealing with that and not actually the true distribution of the model, where p(x_i) would not be constant across samples x). It would probably be clearer to distinguish your notation for the true model distribution from the empirical distribution of the S samples.\\n\\nI don't understand your claim on page 6 that \\\"Unlike Inception Score, because CAFD measures distance on the feature space as FID does, it is able to report overfitting.\\\" CAFD, like FID, probably doesn't allow for distributions to appear better than the target in the way that Inception score does. But I don't see how this corresponds to \\\"reporting overfitting\\\"; a model that simply reproduces exactly the empirical distribution of the training set would get an excellent CAFD/FID score, but that's the usual sense of \\\"overfitting.\\\"\", \"overall_thoughts\": \"Using dataset-specific features for evaluation metrics makes a lot of sense, but I don't feel totally satisfied by this paper's investigation of the specific proposal of a VAE, and am particularly worried about whether the metric just ends up preferring models similar to that VAE. I'd really like to see some theoretical and empirical investigation into that.\\n\\nThe CAFD as opposed to FID doesn't feel as nice to me; it's both something of an obvious extension of the previously-used \\\"intra-class FID,\\\" and I am also unconvinced by the paper's arguments for its preferability over the FID or other metrics based on image representations like those of Xu et al.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
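To make the reviewer's reading concrete — CAFD as "essentially the mean FID of the class-conditional distributions" — here is a minimal sketch with hard class assignments for simplicity; the paper itself uses soft assignments p(y|x) from the VAE's inference network. `frechet_distance` is the helper sketched above, and all other names are hypothetical:

```python
# Sketch of CAFD under the "mean of class-conditional Frechet distances"
# reading, with hard labels for simplicity. Reuses the frechet_distance
# helper sketched earlier; all names are hypothetical.
import numpy as np

def cafd_hard(real_feats, real_labels, fake_feats, fake_labels, n_classes):
    dists = [
        frechet_distance(real_feats[real_labels == k],
                         fake_feats[fake_labels == k])
        for k in range(n_classes)
    ]
    return float(np.mean(dists))
```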
"{\"title\": \"Compute a GMM in a learned feature space (AE, VAE) and make it class aware by making use of class information or a prediction thereof.\", \"review\": \"The authors study the task of sample-based quantitative evaluation applied to GANs. The authors suggest multiple modifications to existing evaluation pipelines: (1) Instead of embedding the samples in the InceptionNet feature space, train a domain-specific encoder. If labeled data is available, add a cross-entropy loss to the encoder training objective so that the class can be predicted. (2) Instead of fitting a single Gaussian in the feature space, fit a GMM instead. This should allow for a more fine-grained \\u201cclass-aware\\u201d distance between the (empirical) distributions.\", \"pro\": \"Attempt to attack a critical issue in generative modeling. Good overview of competing approaches.\\nSeveral ablation studies of evaluation measures and the behavior of FID with respect to the representation space.\\nThe ideas make sense on a conceptual level, albeit suffering from major practical concerns.\", \"con\": [\"Clarity can be improved (e.g. use of double negatives as in the top of page 3), the same arguments repeated multiple (>3) times (i.e. deficiencies of FID and IS, etc.), Many statements which should be empirically tested are stated as folklore (last paragraph on page 3). In general the paper merits another polishing pass (mode != model, last paragraph in section 3, \\u201cunmatch\\u201d, etc.).\", \"Why would a VAE capture a good feature space? It is known that the tradeoff between what is stored in the latent space versus the discriminator *completely* depends on the power of the discriminator -- if the discriminator is flexible enough it can just learn the marginal distribution and ignore the latent code. Hence, this subtle issue will likely undermine the entire model comparison.\", \"Using the predictive distribution as a soft label for CAFD. Interesting idea, but why would one have access to labels in the first place? Why wouldn't one use a conditional GAN if we already have labels? Secondly, why would the modes necessarily correspond to classes?\", \"Stated issues with FID: Why would you expect FID to be resistant to such drastic transformations as blocking out a significant proportion of pixels with \\u201cblocks\\u201d? This is a *major* change in the underlying distribution. The fact that humans can \\u201cfill in\\u201d this gap should have nothing to do with the quality of the underlying model. Arguably, you can also hide one eye, the nose and the mouth and still judge the sample as \\u201cgood\\u201d.\", \"The ideas presented in this paper are conceptually interesting. However, given the drawbacks discussed above I cannot recommend the acceptance of this work.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SyVuRiC5K7 | LEARNING TO PROPAGATE LABELS: TRANSDUCTIVE PROPAGATION NETWORK FOR FEW-SHOT LEARNING | [
"Yanbin Liu",
"Juho Lee",
"Minseop Park",
"Saehoon Kim",
"Eunho Yang",
"Sung Ju Hwang",
"Yi Yang"
] | The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class. The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task. Yet, even with such meta-learning, the low-data problem in the novel classification task still remains. In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem. Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data. TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results. | [
"few-shot learning",
"meta-learning",
"label propagation",
"manifold learning"
] | https://openreview.net/pdf?id=SyVuRiC5K7 | https://openreview.net/forum?id=SyVuRiC5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lqM84kxE",
"ryek8HjfyN",
"HkgdqB_y1V",
"r1lRhOusR7",
"rkg_emdoA7",
"rkxcWJf_Am",
"r1xOHPbuAm",
"Hye0RIZ_RX",
"r1xNOL-uCX",
"S1lqdSZuRX",
"rkxfh9RTTQ",
"rJxOdw3sTX",
"BJgohCDj6X",
"Skx7vDii3X",
"S1x4ca-chQ",
"HJx6UQbfhX",
"H1l4b54ijm",
"SklAweC0qX",
"Bkli_tYd5Q",
"ryekRHiecQ",
"ryeo4LvecQ",
"H1xbcLqkcQ",
"rygAusX19m"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1544664594186,
1543841095508,
1543632271645,
1543370934143,
1543369455795,
1543147265826,
1543145280267,
1543145174079,
1543145067999,
1543144817530,
1542478506410,
1542338415944,
1542319795186,
1541285723021,
1541180812074,
1540653908529,
1540209147991,
1539395686321,
1538984307012,
1538467271493,
1538450995218,
1538397833319,
1538370421994
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper911/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"~anon_ml_reviewer1"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"~anon_ml_reviewer1"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"~anon_ml_reviewer1"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"~anon_ml_reviewer1"
],
[
"ICLR.cc/2019/Conference/Paper911/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper911/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper911/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper911/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"As far as I know, this is the first paper to combine transductive learning with few-shot classification. The proposed algorithm, TPN, combines label propagation with episodic training, as well as learning an adaptive kernel bandwidth in order to determine the label propagation graph. The reviewers liked the idea, however there were concerns of novelty and clarity. I think the contributions of the paper and the strong empirical results are sufficient to merit acceptance, however the paper has not undergone a revision since September. It is therefore recommended that the authors improve the clarity based on the reviewer feedback. In particular, clarifying the details around learning \\\\sigma_i and graph construction. It would also be useful to include the discussion of timing complexity in the final draft.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A new transductive few-shot learning algorithm with strong empirical results\"}",
"{\"title\": \"Thanks for the feedback. New version will be revised.\", \"comment\": \"Thanks for the feedback.\\n\\nWe are going to revise the paper according to the useful suggestions regarding ce-loss and Figure4.\"}",
"{\"comment\": \"Hi\\n\\nIn section 3.2.4, it was written that cross-entropy loss is computed between F* and query labels, however, https://github.com/anonymisedsupplemental/TPN/blob/master/models.py#L145 the loss is computed between F* and the UNION of support labels and query labels. In fact, I changed the loss computation in your code to only use query labels and it resulted in poor accuracy similar to my earlier findings and decreasing alpha would match my previous results.\\n\\nThis is the main issue I've had when my implementation did not work even after some compatible initializations between tensorflow and pytorch.\\n\\nAlso for completeness, please add the relu after FC layer 1 in Figure 4 for graph construction.\", \"title\": \"Issue found: Disparity of CELoss in paper and the code\"}",
"{\"title\": \"Thanks for the reproduction. Pre-processing details.\", \"comment\": \"Thanks for the clarification of reproduction.\\n\\nFor the pre-processing step you mentioned, here we have two pieces of advices:\\n1. We have the dataset flag '--pkl' which controls the usage of pkl data or original image data (our own preprocessing). We tested with our own preprocessing, the performance is similar to pkl data.\\n2. For Mengye Ren's code, I think you can refer to https://github.com/renmengye/few-shot-ssl-public/blob/master/fewshot/data/mini_imagenet.py for more details.\\n\\nFor other code issues, we are glad to offer help in github issue.\"}",
"{\"comment\": \"Hi\\n\\nThank you for the code!\", \"please_see_my_comment_below\": \"https://openreview.net/forum?id=SyVuRiC5K7¬eId=HkgdqB_y1V\\n\\nOne suggestion; it'd be very helpful if you could add explicitly the pre-processing steps that was used to get the pickled data since I could not find it in \\\"Meta-Learning for Semi-Supervised Few-Shot Classification\\\" by Mengye Ren et. al.\", \"title\": \"Issue found\"}",
"{\"title\": \"Anonymous Code Link\", \"comment\": \"Thanks for the feedback.\\n\\nWe got the approval from program chairs to release an anonymous code link, as follow:\", \"https\": \"//github.com/anonymisedsupplemental/TPN\\n\\nWe would like to answer the related questions about our paper and code.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Please refer to our main response in an above comment that addresses the primary and common questions amongst all reviewers. Here we respond to your specific comments.\\n\\n\\\"What can be said about how computationally demanding the procedure is? running label propagation within meta learning might be too costly. \\\"\\n\\n>>> In few-shot learning, episodic paradigm proposed by Matching Networks [1] is widely adopted by current researchers (we follow the same setting to make a fair comparison). In each episode, a small subset of N-way K-shot Q-query examples is sampled from the training set. Typically, for 1-shot experiments, N=5, K=1, Q=15 and for 5-shot experiments, N=5, K=5, Q=15. Thus, the number of training examples are Nx(K+Q) (80 for 1-shot and 100 for 5-shot). Constructing label propagation matrix W involves both support and query examples (80 or 100). So the dimension of W is either 80x80 or 100x100. Running label propagation on such small matrix is quite efficient.\\n\\n\\\"It is not clear how the per-example scalar sigma-i is learned. (for Eq 2)\\\"\\n\\n>>> In Figure 4 of appendix A, we describe the detailed structure of the graph construction module. After we get the per-example feature representation f_{\\\\varphi}(x_i) for x_i, we feed it into the graph construction module g_{\\\\phi}. The output of this module is a one-dimensional scalar. f and g are learned in an end-to-end way in our approach.\\n\\n\\\"solving Eq 3 by matrix inversion does not scale. Would be best to also show results using iterative optimization \\\"\\n\\n>>> We want to answer this question from two aspects. On one hand, few-shot learning assumes that training examples in each class are quite small (only 1 or 5). In this situation, Eq (3) and the closed-form version can be efficiently solved, since the dimension of S is only 80x80 or 100x100. On the other hand, there is plenty of prior work on the scalability and efficiency of label propagation, such as [2], [3], [4], which can extend our work to large-scale data. \\nOn miniImagenet, we performed iterative optimization and got 53.05/68.75 for 1-shot/5-shot experiments with only 10 steps. This is slightly worse than closed-form version (53.75/69.43), because of the inaccurate computation and unstable gradients caused by multiple step iterations.\\n\\n\\n[1] Vinyals, Oriol, et al. \\\"Matching networks for one shot learning.\\\" NIPS. 2016.\\n[2] Liang, De-Ming, and Yu-Feng Li. \\\"Lightweight Label Propagation for Large-Scale Network Data.\\\" IJCAI. 2018.\\n[3] Fujiwara, Yasuhiro, and Go Irie. \\\"Efficient label propagation.\\\" ICML. 2014.\\n[4] Weston, Jason. \\\"Large-Scale Semi-Supervised Learning.\\\"\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Please refer to our main response in an above comment that addresses the primary and common questions amongst all reviewers. Here we respond to your specific comments.\\n\\n\\\"Some technical details are missing. In Section 3.2.2, the authors only explain how they learn example-based \\\\sigma, but details on how to make graph construction end-to-end trainable are missing. Constructing the full weight matrix requires the whole dataset as input and selecting k-nearest neighbor is a non-differentiable operation. Can you give more explanations?\\\"\\n\\n>>> Thanks for pointing out the details. We want to clarify the few-shot setting. We follow the widely-used episodic paradigm proposed by Matching Networks [1]. In each episode (training batch), our algorithm solves a small classification problem which contains N classes each having K support and Q query examples (e.g., N=5, K=1, Q=15, totally 80 examples). The weight matrix is constructed on the support and query examples in each episode rather than the whole dataset. This is very fast and efficient. \\nIn deep neural networks, there is a common trick in computing the gradient of operations non-differentiable at some points, but differentiable elsewhere, such as Max-Pooling (top-1) and top-k. In forward computation pass, the index position of the max (or top-k) values are stored. While in the back propagation pass, the gradient is computed only with respect to these saved positions. This trick is implemented in modern deep learning frameworks such as tensorflow and pytorch. In our paper, we use the tensorflow function tf.nn.top_k() to compute k-nearest neighbor operation.\\n\\n\\\"Does episode training help label propagation? How about the results of label propagation without the episode training? \\\"\\n\\n>>> In our paper, the length scale parameter \\\\sigma is trained in an example-wise and episodic-wise way, as described in section 3.2.2 and Figure 4 of Appendix A. In order to investigate the benefit of episodic training, we combine the heuristic-based label propagation methods [2] with meta-learning to serve as a transductive baseline. Please refer to Table 1 and Table 2 line \\\"Label Propagation\\\". It can be seen that TPN outperforms naive label propagation with a large margin, thus verifying the effectiveness of episode training.\\n\\n\\n[1] Vinyals, Oriol et al. \\\"Matching networks for one shot learning.\\\" NIPS. 2016.\\n[2] Zhou, Denny et al. \\\"Learning with local and global consistency.\\\" NIPS. 2004.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Please refer to our main response in an above comment that addresses the primary and common questions amongst all reviewers. Here we respond to your specific comments.\\n\\n\\\"(1) There is not much technical contribution. It merely just puts the CNN representation learning and the label propagation together to perform end-to-end learning. Considering the optimization problem involved in the learning process, it is hard to judge whether the effect of such a procedure from the optimization perspective.\\\"\\n\\n>>> As mentioned in the main response, the proposed TPN is not a mere combination of CNN representation learning and label propagation. The original label propagation constructs a fixed graph (Eq (1)) to explore the correlation between examples. While in our work, we adaptively construct the graph structure for each episode (training task) with a learnable graph construction module (Figure 4, Appendix A). This leads to better generalization ability for test tasks. \\nIn Table 1 and Table 2, the proposed TPN achieved much higher accuracy than the mere combination model (referred to as \\\"Label Propagation\\\"). \\n\\n\\\"(2) Empirically, it seems TPN achieved very small improvements over the very baseline label propagation. Moreover, the performance reported in this paper seems to be much inferior to the state-of-the-art results reported in the literature. For example, on miniImageNet, TADAM(Oreshkin et al, 2018) reported 58.5 (1-shot) and 76.7(5-shot), which are way better than the results reported in this work. This is a major concern.\\\"\\n\\n>>> At first, we want to clarify the few-shot network architecture setting. Currently, there are two common network architectures: 4-layer ConvNets (e.g., [1][2][3]) and 12-layer ResNet (e.g., [4][5][6][7]). Our method belongs to the first one, which contains much fewer layers than the ResNet setting. Thus, it is more reasonable to compare TADAM with ResNet version of our method. To better relieve the reviewer's concern, we implemented our algorithm with ResNet architecture on miniImagenet dataset and show the results as follow:\\n\\nMethod 1-shot 5-shot\\nSNAIL [4] 55.71 68.88\\nadaResNet [5] 56.88 71.94\\nDiscriminative k-shot [6] 56.30 73.90\\nTADAM [7] 58.50 76.70\\n--------------------------------------------------------\\nOurs 59.46 75.65\\n--------------------------------------------------------\\n\\nIt can be seen that we beat TADAM for 1-shot setting. For 5-shot, we outperform all other recent high-performance methods except for TADAM.\\n\\n>>> We want to clarify that \\\"Label Propagation\\\" in Table 1 and Table 2 is a strong baseline. It combines label propagation method [8] with episodic meta-learning. The usage of transductive inference makes this baseline outperform most published state-of-the-art methods. Moreover, the performance of TPN over label propagation is not very small. For example, in miniImagenet, TPN outperforms label propagation with 1.44% and 1.25% for 1-shot and 5-shot respectively, but this advantage grows to 3.20% and 1.68% with \\\"Higher Shot\\\" training. The improvements are even larger for tieredImagenet with 4.68% and 2.87%. We believe in few-shot learning, this is a large improvement.\\n\\n\\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" ICML. 2017.\\n[2] Snell, Jake, Kevin Swersky, and Richard Zemel. \\\"Prototypical networks for few-shot learning.\\\" NIPS. 
2017.\\n[3] Sung, Flood, Yongxin Yang, et al. \\\"Learning to compare: Relation network for few-shot learning.\\\" CVPR. 2018.\\n[4] Mishra, Nikhil et al. \\\"A simple neural attentive meta-learner.\\\" ICLR. 2018.\\n[5] Munkhdalai, Tsendsuren et al. \\\"Rapid adaptation with conditionally shifted neurons.\\\" ICML. 2018.\\n[6] Bauer, Matthias et al. \\\"Discriminative k-shot learning using probabilistic models.\\\" arXiv. 2017.\\n[7] Oreshkin, B.N., Lacoste, A. and Rodriguez, P. \\\"TADAM: Task dependent adaptive metric for improved few-shot learning.\\\" NIPS. 2018.\\n[8] Zhou, Denny, et al. \\\"Learning with local and global consistency.\\\" NIPS. 2004.\"}",
"{\"title\": \"Main response to reviewers\", \"comment\": \"We wish to thank the reviewers for their enlightening feedback! We would like to highlight the novelty and contribution of the proposed method.\\n\\n(1) The proposed TPN is not a direct combination of the original label propagation and few-shot learning, but a novel transductive meta-learning method when facing unevenly distributed data.\\n\\nThe main contribution of the proposed TPN is to propose a novel transductive meta-learning method when facing an uneven data distribution. Most of previous (label) propagation algorithms usually assume samples are distributed evenly in the data space. Unfortunately, in low-shot learning, the data is limited and unevenly distributed, which makes most of existing label propagation algorithms inapplicable. To clearly model the data distribution in low-shot learning settings, previous transductive methods adopt a fixed scheme to explore the correlations between data, i.e., compute the weights with a fixed \\\\sigma as shown in Eq (1). However, as pointed out in previous work [2][3] and in our experimental results, the performance of a transductive method is quite sensitive to the parameter \\\\sigma, and a fixed \\\\sigma will lead to suboptimal results. Our proposed TPN adaptively learns data correlation by calculating an optimal \\\\sigma on a per data basis. As shown in Figure 4 (Appendix A), the correlation among data pairs is optimized and updated in each episode according to data distribution of the neighborhood. In this way, a different model is specifically learned to uncover the correlation of each data pair, thereby largely ameliorating the uneven data distribution problem. \\nExperimentally, TPN shows a big advantage over the direct combination (referred to as \\\"Label Propagation\\\" in Table 1 and Table 2).\\n\\n(2) To the best of our knowledge, we are the first to model transductive inference explicitly in the few-shot meta-learning. This transductive setting paves a new way to solve the limited data problem in few-shot learning. As shown in this paper, if one has the test data in whole or a batch manner, transductive inference significantly improves the performance without additional human annotations.\\n\\n(3) We advanced the state-of-the-art performance on the two most commonly-used benchmark datasets with large margins using the standard 4-layer ConvNets architecture.\\n\\n\\n\\n[1] Zhou, Denny, et al. \\\"Learning with local and global consistency.\\\" NIPS. 2004.\\n[2] Wang, Fei, and Changshui Zhang. \\\"Label propagation through linear neighborhoods.\\\" TKDE. 2008.\\n[3] Xiaojin Z, Zoubin G. \\\"Learning from labeled and unlabeled data with label propagation.\\\" Technical Report. 2002.\"}",
"{\"comment\": \"Thank you for the clarifications! I have already used Snell's code for the embedding and admit baseline implementation is tricky. I have not used Snell's pre-processing as the github repository contains the pre-processing for omniglot only, instead used the RelationNet data pre-processing and loading for mini-imagenet for few corrections such as the normalization mean, std from imagenet https://github.com/floodsung/LearningToCompare_FSL\\n\\nI have double checked the hyper-parameters and accessed the values in debug mode extensively and wherever division happens, I added an epsilon of 1e-6 or 1e-8 including the element-wise division of f_phi(x_i) / (sigma_i + epsilon) and in the computation of D^{-1/2} where I previously used torch.rsqrt() and replaced it with 1.0 / (w.sum(1) + epsilon).sqrt(), (where w is the graph knn matrix with applied masked k_max=20 for each row and zero everywhere else). However, the issue persisted with alpha=0.99 and model does not learn. As said previously, changing alpha to 0.9 or even lower 0.6 helped learning a lot but the final accuracy for 5-way, 1-shot case remained around the previous result of 47.9%. \\n\\nI hope ICLR authorities take anonymous code release into the considerations as this is a major barrier for assessing the reproducibility.\", \"title\": \"Problem persisted\"}",
"{\"title\": \"Implementation details. Code will be released soon.\", \"comment\": \"Thanks for the comment and interest about our paper.\\nAccording to the blind review policy, we can not release the code at this moment. We will release our code and the trained model as soon as the review process ends. Meanwhile, we have sent an email to the program chairs to check if it is allowed to release the code anonymously. We will share the code upon approval.\\n\\nWe are sure about the reproducibility of the results shown in our paper. And in order to ensure the reproducibility, we ran the test procedure 10 times (each with 600 randomly generated episodes) and reported the average results to avoid accidentally high results. We are not sure if you have reproduced the result as outlined in [1]. If not, we sincerely hope you first try to reproduce the baseline method [1], so you may be closer to the right implementation. It took us quite a while to reproduce [1] even the code has been released. \\nNevertheless, we would like to provide more details below which could be useful for you to reproduce the results of our paper.\\n\\n(1) Our implementation is based on Tensorflow 1.3+, and we also tested on Pytorch 0.4.0. There is only a slight accuracy difference.\\n(2) The reason why your results only achieved 25% could be caused by value issues such as divided by zero. Sincerely hope you could double check your code and please make sure you have added an epsilon whenever you call a divide operation. \\n(3) Our model is learned end-to-end from scratch, and no pretrain is needed. We did not see your code, but we reckon you did not use the validation set to decide the early stopping iteration, which is commonly used in few-shot learning, such as Prototypical networks. Please use this practice if it is the case.\\n(4) The detailed hyperparameters are: alpha=0.99, k=20, query=15, lr=0.001 and halved every 10,000 episodes for at most 100,000 episodes. \\n(5) Network architecture details: feature extraction module is exactly the same as Prototypical networks [1], graph construction module is described in Figure 4 of Appendix A. Note that BatchNorm is applied only in Conv layers. In Figure 4, there is no Relu activation after FC layer2. More training details: we use Tensorflow default initialization, BatchNorm with default parameters: decay=0.999 and epsilon=0.001. \\n(6) As to preprocessing, for miniImagenet, we follow Prototypical networks [1] while for tieredImagnet we follow Ren et al. [2].\\n\\nWe have endeavored our best to 'guess' what mistakes you may have made, but there could be other issues that we are unable to enumerate. \\nWe highly suggest that a basic starting point is to reproduce the results of Prototypical networks. Below we provide a few good implementation codes of some related papers.\", \"prototypical_networks\": \"\", \"https\": \"//github.com/renmengye/few-shot-ssl-public\\n\\n\\n[1] Snell, Jake, Kevin Swersky, and Richard Zemel. \\\"Prototypical networks for few-shot learning.\\\" NIPS. 2017.\\n[2] Ren, Mengye, et al. \\\"Meta-learning for semi-supervised few-shot classification.\\\" ICLR. 2018.\", \"tieredimagenet\": \"\"}",
"{\"comment\": \"Since the paper has not provided a reproducible code, based on my implementation in PyTorch 1.0.0.dev20181105, unfortunately I could not reproduce their results on Mini-Imagenet dataset for 5-way, 1-shot and 5-shot scenarios. Using the exact mentioned hyper-parameters, the model didn't learn much in the end-to-end manner and the test accuracy for 5-way, 1-shot was around 25% trained in 50,000 episodes and learning-rate is halved every 10,000 episodes. Instead I pretrained the emebedding in the train set and decreased alpha (label propagation) to 0.9 then it started learning better.\\n\\nThe best accuracy I could get for the 5-way, 1-shot case (trained with 5-way, 1-shot so no higher-shot) is with alpha=0.6 and it is 47.93% (+/- 1.14% as the 95% confidence interval) which is much lower than the claimed 53.75%. More precisely, I trained with batch-size=15 as described in RelationNet, k in knn=20, Xavier initialization of Conv layers, BatchNorm unit weight initialization and zero bias, zero-mean normal initialization with std 0.01 for Linear layers and unit bias, and then tested with 15 query examples where results were averaged over 600 randomly generated episodes from the test set.\\n\\nThe paper did not mention any pre-processing step, so I only resized images to 84 by 84 and normalized the mini-imagenet data using imagenet mean and std.\", \"title\": \"Reproducibility issues\"}",
"{\"title\": \"interesting empirically\", \"review\": \"This paper proposes to address few-shot learning in a transductive way by learning a label propagation model in an end-to-end manner. Semi-supervised few-shot learning is important considering the limitation of the very few labeled instances. This is an interesting work.\", \"the_merits_of_this_paper_lie_in_the_following_aspects\": \"(1) It is the first to learn label propagation for transductive few-shot learning. (2) The proposed approach produced effective empirical results.\", \"the_drawbacks__of_the_work_include_the_following\": \"(1) There is not much technical contribution. It merely just puts the CNN representation learning and the label propagation together to perform end-to-end learning. Considering the optimization problem involved in the learning process, it is hard to judge whether the effect of such a procedure from the optimization perspective. (2) Empirically, it seems TPN achieved very small improvements over the very baseline label propagation. Moreover, the performance reported in this paper seems to be much inferior to the state-of-the-art results reported in the literature. For example, on miniImageNet, TADAM(Oreshkin et al, 2018) reported 58.5 (1-shot) and 76.7(5-shot), which are way better than the results reported in this work. This is a major concern.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Novel idea, but important details and deeper analysis are missing\", \"review\": \"Summary\\nThis paper proposes a meta-learning framework that leverages unlabeled data by learning the graph-based label propogation in an end-to-end manner. The proposed approaches are evaluated on two few-shot datasets and achieves the state-of-the-art results. \\n\\nPros. \\n-This paper is well-motivated. Studying label propagation in the meta-learning setting is interesting and novel. Intuitively, transductive label propagation should improve supervised learning when the number of labeled instances is low. \\n-The empirical results show improvement over the baselines, which are expected. \\n\\nCons.\\n-Some technical details are missing. In Section 3.2.2, the authors only explain how they learn example-based \\\\sigma, but details on how to make graph construction end-to-end trainable are missing. Constructing the full weight matrix requires the whole dataset as input and selecting k-nearest neighbor is a non-differentiable operation. Can you give more explanations?\\n-Does episode training help label propagation? How about the results of label propagation without the episode training?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Transductive few-shot by meta-learning to propagate labels for . Solid work.\", \"review\": \"The paper studies few-host learning in a transductive setting: using meta learning to learn to propagate labels from training samples to test samples.\\n\\nThere is nothing strikingly novel in this work, using unlabeled test samples in a transductive way seem to help slightly. However, the paper does cover a setup that I am not aware that was studied before. The paper is written clearly, and the experiments seem solid.\", \"comments\": \"-- What can be said about how computationally demanding the procedure is? running label propagation within meta learning might be too costly. \\n-- It is not clear how the per-example scalar sigma-i is learned. (for Eq 2)\\n-- solving Eq 3 by matrix inversion does not scale. Would be best to also show results using iterative optimization\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to experiments and methods\", \"comment\": \"Thanks for the comments.\\n\\nFor each episode, we first utilize both the support set and query set to construct the graph structure. Then, label propagation is performed according to the graph information to get all query set labels. The performance gain comes from the fact that we share information among all query examples and learn to propagate labels. In contrast, inductive methods predict query examples one by one, which does not enjoy this benefit.\\n\\nAs to query set number experiments, please refer to Appendix B.2 for detailed information.\\n\\nFor distractor classes, this is not the main focus of our paper. However, in order to explore the extent of our method, we performed experiments in the presence of distractor classes (same setting as [1]). The results are shown below:\\nModel mini-5way1shot mini-5way5shot tiered-5way1shot tiered-5way5shot\\nSoft k-Mean [1] 48.70+/-0.32 63.55+/-0.28 49.88+/-0.52 68.32+/-0.22\\nSoft k-Mean+Cluster [1] 48.86+/-0.32 61.27+/-0.24 51.36+/-0.31 67.56+/-0.10\\nMasked Soft k-Means [1] 49.04+/-0.31 62.96+/-0.14 51.38+/-0.38 69.08+/-0.25\\nTPN-semi (Ours) 50.43+/-0.84 64.95+/-0.73 53.45+/-0.93 69.93+/-0.80\\n\\nIt can be seen that our TPN-semi algorithm outperforms [1] in all cases, although our method is not specifically designed for the distractor-classes problem.\\nWe believe with care design, the performance of our method will continue to increase. This will be the future work.\"}",
"{\"comment\": \"This paper tried to introduce transductive networks for few-shot learning.\\nI want to know about the experiments here especially about the transductive process that was done for query set. I hope that you can make it clear what was specifically performed in the batch of query set to help you gain the performance?\\nDo you also have any results if you increase the number of query set will affect your performance too? Because I believe this is the contribution that you can have as well from your work.\", \"one_more_thing\": \"I have a question in your results for semisupervised few-shot learning. \\nI read the experiments in semisupervised few-shot learning protocol[1] that there are distractor classes in which I did not see this thing in your paper.\\nI intuitively think that this method might be appropriate for the unlabeled data without many outliers/distractors.\\nDo you also have the experiments about this before? It is fine if you also show the drawback of this method, so the improvements can be proposed in the future to tackle that problem.\\n\\n\\n\\n\\n[1] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo\\nLarochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. International Conference on Learning Representations, 2018.\", \"title\": \"Experiments and Methods\"}",
"{\"title\": \"Response to the experiments\", \"comment\": \"Thanks for the comments.\\nThe state-of-the-art performance on Omniglot is quite high (>99% except for 20way-1shot setting), which means this problem is nearly solved. Also, there is a tendency that recent high-quality papers do not report results on Omniglot, such as TADAM [1] (NIPS2018), Delta-encoder [2] (NIPS2018), LEO [3] (ICLR19 submission). For the 20way-1shot setting, we compare our TPN results with Relation Net and Prototypical Net as follows:\\n\\t\\t\\t\\t\\t20way-1shot\\nPrototypical Net 96.00\\nRelation Net 97.60\\nTPN\\t\\t\\t\\t 98.03\\n\\nAlthough zero-shot learning is not our focus, TPN can be easily adapted to zero-shot setting. The modification is similar to Prototypical network or Relation Network. First, a function g can be used to map class-level semantic feature into the same space of visual feature. Then, we can construct graph structure using both features and perform label propagation as in few-shot setting. \\n\\n[1] Oreshkin, Boris N., Alexandre Lacoste, and Pau Rodriguez. \\\"TADAM: Task dependent adaptive metric for improved few-shot learning.\\\" NIPS2018\\n[2] Schwartz, Eli, et al. \\\"Delta-encoder: an effective sample synthesis method for few-shot object recognition.\\\" NIPS2018\\n[3] Anonymous, \\\"Meta-Learning with Latent Embedding Optimization.\\\" ICLR2019 submission.\"}",
"{\"comment\": \"It is interesting that this paper use a label propagation way to solve the low-data testing problem. However, the state-of-art few-shot(zero-shot) methods: Relation Net and Prototypical Net used both minImageNet, Omniglot for few-shot Testing and CUB-200 for zero-shot. So what's your results on Omniglot since you follow the idea of Prototypical Net. In addition, is it possible that your proposed TPN can deal with zero-shot problems since a general few-shot framework can Easily extend to cope with zero-shot problems?\", \"title\": \"About the experiments\"}",
"{\"title\": \"Thank you for pointing out related work\", \"comment\": \"Thanks for pointing out the related work. We would like to include this reference to our manuscript in the next version.\\n\\nOur paper and the mentioned paper share the same idea of using metric learning and transduction. However, the target tasks are different. We focus on few-shot learning and meta-learning while the mentioned paper deals with unsupervised domain adaptation. This distinction leads to different algorithm designs: we learn to propagate labels while the mentioned paper proposes the transduction and adaptation steps.\"}",
"{\"comment\": \"The paper looks very interesting as transductive approaches are powerful for metric learning and semi-supervised learning. And, learning to transductive learn is an interesting direction. I would like to point a related work which authors probably missed which performs metric learning/transfer learning using transduction: https://papers.nips.cc/paper/6360-learning-transferrable-representations-for-unsupervised-domain-adaptation\", \"title\": \"Pointer to a Related Work\"}",
"{\"comment\": \"This paper proposes a novel meta-learning framework, which aims to propagate labels from labeled instances to unlabeled test instances. This framework learns a graph construction module, exploiting the manifold structure in the data. The idea is reasonable and efficacy, and the experiments are comprehensive.\", \"title\": \"Reasonable and efficacy idea\"}"
]
} |
|
HyzdRiR9Y7 | Universal Transformers | [
"Mostafa Dehghani",
"Stephan Gouws",
"Oriol Vinyals",
"Jakob Uszkoreit",
"Lukasz Kaiser"
] | Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset. | [
"sequence-to-sequence",
"rnn",
"transformer",
"machine translation",
"language understanding",
"learning to execute"
] | https://openreview.net/pdf?id=HyzdRiR9Y7 | https://openreview.net/forum?id=HyzdRiR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkeKq8AggV",
"SylL9Yz1lN",
"rkginvfklN",
"HyxfZDmCk4",
"rklvRIQR1N",
"r1xW6d1jCX",
"ByewREh90m",
"Hyx3t4h5A7",
"SkxrQ435AQ",
"Skl0xm35CX",
"B1luhGh90X",
"SkeCBTIYCQ",
"SklQ8hSt07",
"BkgUZgHKCX",
"Sye8Myd937",
"ByeMxPX9nm",
"rkxMwMsFn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544771216579,
1544657294174,
1544656818843,
1544595194017,
1544595150974,
1543334073509,
1543320782521,
1543320708507,
1543320605269,
1543320310048,
1543320240052,
1543232838477,
1543228490655,
1543225342013,
1541205774063,
1541187306093,
1541153370355
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper910/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper910/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"~Zihao_Ye1"
],
[
"ICLR.cc/2019/Conference/Paper910/Authors"
],
[
"~Zihao_Ye1"
],
[
"ICLR.cc/2019/Conference/Paper910/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper910/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper910/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents Universal Transformers that generalizes Transformers with recurrent connections. The goal of Universal Transformers is to combine the strength of feed-forward convolutional architectures (parallelizability and global receptive fields) with the strength of recurrent neural networks (sequential inductive bias). In addition, the paper investigates a dynamic halting scheme (by adapting Adaptive Computation Time (ACT) of Graves 2016) to allow each individual subsequence to stop recurrent computation dynamically.\", \"pros\": \"The paper presents a new generalized architecture that brings a reasonable novelty over the previous Transformers when combined with the dynamic halting scheme. Empirical results are reasonably comprehensive and the codebase is publicly available.\", \"cons\": \"Unlike RNNs, the network recurs T times over the entire sequence of length M, thus it is not a literal combination of Transformers with RNNs, but only inspired by RNNs. Thus the proposed architecture does not precisely replicate the sequential inductive bias of RNNs. Furthermore, depending on how one views it, the network architecture is not entirely novel in that it is reminiscent of the previous memory network extensions with multi-hop reasoning (--- a point raised by R1 and R2). While several datasets are covered in the empirical study, the selected datasets may be biased toward simpler/easier tasks (--- R1).\", \"verdict\": \"While key ideas might not be entirely novel (R1/R2), the novelty comes from the fact that these ideas have not been combined and experimented in this exact form of Universal Transformers (with optional dynamic halting/ACT), and that the empirical results are reasonably broad and strong, while not entirely impressive (R1). Sufficient novelty and substance overall, and no issues that are dealbreakers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Universal Transformers (with optional dynamic halting/ACT)\"}",
"{\"title\": \"\\\"Transformer with positional encodings and fixed precision is not Turing complete\\\", from [1]\", \"comment\": \"This is incorrect. Please see our response to the same comment with the heading \\\"Potentially wrong claim in this paper\\\".\"}",
"{\"title\": \"Transformer with fixed-precision is not Turing-complete\", \"comment\": \"Thanks for your comment.\\n\\nThe main point here is that in [1] the authors assume arbitrary-precision arithmetic, as clarified in their responses on OpenReview where they noted \\\"Our proofs are based on having unbounded precision for internal representations [...]\\\". Therefore, as mentioned in their section \\\"The need of arbitrary precision\\\", \\\"[...] the Transformer with positional encodings and fixed precision is not Turing complete.\\\" In other words, in practice (i.e. assuming fixed-precision arithmetic), the Transformer is *not* computationally universal.\\n\\nTo see this, note that in fixed-precision arithmetic a single multiply is O(1) (and so are the nonlinearities). Therefore the computation of the fixed number of attention layers in the Transformer is at most O(n^2), which is polynomial time, while there exist computable functions that are not computed in polynomial time. Or stated in another way: If a model only has a specific time-window, like O(n^2), there are problems it cannot solve, hence it cannot be universal (see also [2] for more on this).\\n\\nIn the Universal Transformer, on the other hand, this time-window is *not* fixed (see Appendix B in the revised version of our paper on OpenReview for an intuitive example). As pointed out by AnonReviewer2 below, we further want to emphasize that this is because the recurrence resulting from tying the weights allows one to vary the number of time-steps T arbitrarily at inference time (i.e. you can train with T=4 and test with any T). This potentially unbounded time-window (which is only possible because of its recurrence) is what makes UT computationally universal. \\n\\nWe will clarify these points in the revised version of the paper.\\n\\n----\\n[1] https://openreview.net/forum?id=HyGBdo0qFm¬eId=HyGBdo0qFm\\n[2] https://en.wikipedia.org/wiki/Time_hierarchy_theorem\"}",
"{\"comment\": \"The claim stated in this paper \\\"Transformers are not Turing-complete\\\" is wrong. It's proved in [1] that Transformer is Turing-complete.\\n\\n[1] https://openreview.net/forum?id=HyGBdo0qFm¬eId=HyGBdo0qFm\", \"title\": \"Wrong claim in this paper\"}",
"{\"comment\": \"The claim stated in this paper \\\"Transformers are not Turing-complete\\\" is potentially wrong. It's proved in [1] that Transformer is Turing-complete. It is definitely necessary to address this concern before this paper can be accepted.\\n\\n[1] https://openreview.net/forum?id=HyGBdo0qFm¬eId=HyGBdo0qFm\", \"title\": \"Potentially wrong claim in this paper\"}",
"{\"title\": \"Worth emphasising: difference with Transformer: tying across recurrence, different train/ test depth T\", \"comment\": \"Thanks for your feedback.\", \"regarding_the_following_argument\": \">>> * In UT, parameters are tied across layers (i.e. the same self-attention and the same transition function is applied across recurrent steps); Transformer has different weights for each layer / step. This is important because a UT trained on T=4 steps can be evaluated using any T, whereas a Transformer trained with T layers/steps can only be evaluated for the same T steps.\\nI guess I had understood this but had not realised the implications. To make the paper persuasive, it might be worth emphasising this specific point.\"}",
"{\"title\": \"Rebuttal Part 1\", \"comment\": \"We thank the reviewer for the thorough review and respond below. We have also updated the paper to address these comments.\\n\\n>>extends Transformer by recursively applying a multi-head self-attention block, rather than stack multiple blocks in the vanilla Transformer. An extra transition function is applied between the recursive blocks\\n\\nTo avoid any potential confusion about the architecture, we note that the {multi-head self-attention + transition}-block is applied recursively *as a whole*. The Transition function is not \\u201cextra\\u201d, it also exists in the standard Transformer, but the difference is that we apply the same Transition function at every layer / step (by tying the weights). This makes the model recurrent (in \\u201cdepth\\u201d or in its concurrent processing steps), which then allows us to vary the number of steps and add dynamic halting -- both impossible with the standard Transformer architecture. \\n\\n\\n>>it also uses a dynamic adaptive computation time (ACT) halting mechanism on each position, as suggested by the previous ACT paper\\n\\nACT was introduced and applied in the context of a sequential RNN model where each symbol is processed one after the other, but with a variable number of steps each. However we apply ACT concurrently to all symbols (i.e. in a parallel-in-time model). It has the same effect of allowing a variable number of processing steps per symbol, but we want to emphasize that the way it is used in UT is different from the original ACT paper (in depth vs in sequence length / time).\\n\\n>>1. [...] The idea behind UT is similar to memory networks and multi-hop reasoning. \\n\\nYes, indeed, the idea behind UT is related to memory networks. We mentioned this briefly (last paragraph of Section 4), but have expanded on this in the updated version: In UT, similar to dynamic memory networks, there is an iterative attention process which allows the model to condition its attention over memory on the result of previous iterations. As we also show in the visualization of the attention distributions for the bAbI task (Appendix F in the revised paper), we can see that there is a notion of temporal states in UT, where the model updates the memory (states) in each step based on the output of previous steps, and this chain of updates can indeed be viewed as steps in a multi-hop reasoning process. \\n\\n>>2. The recursive structure is not applied to the input sequence, so UT does not have the advantage of RNN/LSTM on capturing sequential information and high-order features.\", \"we_disagree_with_this_statement\": \"In self-attentive parallel-in-time models (such as Transformer or UT) information is exchanged between symbols (i.e. sequential information) using the self-attention mechanism. Therefore, in the first step each symbol representation is already conditioned on every other symbol (i.e. includes first-order features). However, as this process is continued, with each additional processing step UTs are in fact able to capture higher-order features between symbols.\"}",
"{\"title\": \"Rebuttal Part 1\", \"comment\": \"We thank the reviewer for the thorough review, and respond below. We have also updated the paper to address these comments.\\n\\n>> \\u201cWhat is the contribution of this work [...]\\u201d\\n\\nWe introduce two changes to the Transformer architecture (namely adding recurrence and dynamic computation) which: \\n\\n1) increase the model\\u2019s theoretical capabilities (make it Turing-complete), \\n2) significantly improve results (compared to standard Transformer) on all tasks that it was evaluated on including large-scale MT (UT improves over standard Transformer by 0.9 BLEU on WMT14 En-De), and lastly \\n3) also increase the *types* of tasks Transformer can learn in the first place (eg a standard Transformer fails on bAbI (solves only 50% of tasks; see Table 1), is vastly outperformed by LSTMs on subject-verb agreement (Table 2), and achieves a test perplexity of 7,321 on LAMBADA (Table 3); on the other hand UT solves 100% of bAbI tasks, outperforms LSTMs on SVA prediction, even performing progressively better as the number of attractors increases, and achieves a state-of-the-art test perplexity of 142 on LAMBADA).\\n\\nWhile we agree (and readily point out throughout) that these are two fairly simple architectural changes, we do want to point out that this yields a new type of parallel-in-time recurrent self-attentive model which blends the best of both worlds of RNNs and Transformers, is theoretically superior to standard Transformers, and practically leads to vastly improved results across a much wider range of tasks, as mentioned above. \\n\\n>> Range of algorithmic tasks limited; experimental / training details missing\\nThe main purpose of evaluating our model on algorithmic tasks is to probe its ability for length generalization in a controlled setup, where we train on 40 symbols and test on 400 symbols. We intentionally chose three simple tasks, i.e. copy, reverse, and addition to mainly focus on the length generalization aspect of the problem, and as can be seen, Transformers and LSTMs perform poorly in this setup in terms of sequence accuracy, while UT is doing a much better job (despite the fact that it\\u2019s not trained with a custom curriculum learning like Neural GPU to perform well on these tasks). Furthermore, we also tested our model on Learning-to-Execute tasks which can be considered in the family of algorithmic tasks.\\n\\nWe have added additional experimental and training details to the revised version of the paper.\\n\\n>> I miss a comparison to Neural GPU and Stack RNN in 3.1 and 3.2\\n\\nThis is because for each of the tasks we only reported the state-of-the-art / best performing baselines and Neural GPUs and Stack RNNs have been outperformed by other methods for both bAbI (3.1) and subject-verb agreement prediction (3.2).\\n\\n>> I miss a proof that the UT is computationally equivalent to a Turing machine. It does not have externally addressable, shared memory like a tape, and I\\u2019m not sure how to transpose read/write heads either.\\n\\nThe proof included in the paper goes by reduction from the Neural GPU which in turn goes by reduction from cellular automata. So this line of proof does not operate directly on a tape or read/write heads, it starts from cellular automatas\\u2019 universality (like the game of life). We have also added an Appendix B to elaborate on this with an example.\"}",
"{\"title\": \"Rebuttal Part 2\", \"comment\": \">> why should width of intermediate layers be exactly equal to sequence length?\\n\\nIf we understand correctly, the question is \\u201cWhy only have one vector per input symbol at every intermediate layer/step?\\u201d. With the self-attention mechanism, both in Transformer and the Universal Transformer at each layer/step, we revise the representation of each symbol given the representations of all the other input symbols in the previous layer/step. Thus, we need vectors representing each symbol in the input at each intermediate layer/step (illustrated in Fig. 1 in the paper). \\n\\n>> why should all hidden state vectors be size $d$, the size of the embeddings chosen at the first layer, which might be chosen out of purely practical reasons like the availability of pre-trained word embeddings?\\n\\nIndeed, there is no architectural constraint in UT for having the same size for the hidden state and input/output embeddings (same as with standard Transformer). These are independent hyper-parameters and one can set different values for them, although this has not really been done in any other transformer-based work as far as we are aware. \\n\\n>>The authors may correct me, but I believe that the UT with FC layers is exactly identical to the Transformer described in Vaswani 2017 for T=6. \\n\\nNo, there are several differences (which prove to be important theoretically and in practice):\\n\\n* In UT, parameters are tied across layers (i.e. the same self-attention and the same transition function is applied across recurrent steps); Transformer has different weights for each layer / step. This is important because a UT trained on T=4 steps can be evaluated using any T, whereas a Transformer trained with T layers/steps can only be evaluated for the same T steps.\\n* Besides the position embedding, we also have time-step embeddings, which are combined into (essentially 2-D) \\u201ccoordinate embeddings\\u201d\\n* We introduce the coordinate embedding at the beginning of each step (not just once at t_0)\\n* Lastly, ACT makes T dynamic for each position, whereas with Transformer T is static.\\n\\n>> So this paper introduces the idea of varying T, interprets it as a form of recurrence, and adds dynamic halting with ACT to that. Interestingly, the recurrence is not over sequence positions here.\", \"it_is_in_fact_the_other_way_around\": \"We introduce recurrence over processing steps (by sharing/tying the transition weights), and that allows us to vary T. We then add ACT to that.\\n\\n(As noted above: You cannot vary T / number of layers between training and testing in a standard Transformer as it is trained with a different set of weights for each of the T layers.)\\n\\n>>Typos and writing suggestions\\nThanks, we\\u2019ve updated these in the revised version. We also increased the resolution of the image in the Figure 4.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for the thorough review, and respond below. We have also updated the paper to address these comments.\\n\\n>> Questions around Universality of UT\\n\\nThe main ingredient for the universality of UT comes from the recurrence in depth. Unbounded memory is also important, but it\\u2019s the sharing of weights combined with adaptive computation time that brings universality -- even with unbounded size, the standard Transformer would not be universal. We have added an Appendix B to elaborate on this with an illustrative example. \\n\\n>> More detailed descriptions of the tasks\\n\\nWe\\u2019ve added an appendix D, which provides more detail on the tasks and datasets.\\n\\n>> 3. In the discussion, the crucial difference between UT and RNN is that RNN is stated to be that RNN cannot access memory in the recurrent steps while UT can. This seems to be the case for not just UT but any Transformer-type model by construction.\\n\\nThis is correct in the sense that UT, like transformer, can access memory in each of its processing steps. But the crucial difference is that UT, unlike transformer, is recurrent in its steps (similar to RNNs), where the standard Transformer is like a deep feed-forward model where each step is computed using a separate, learned layer. So, as we stated in the paper, \\u201cUTs combine the parallelizability and global receptive field (access to the memory) of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs\\u201d. As the experiments demonstrate, this *combination* yields very strong results across a wider range of tasks than either on its own.\\n\\n>> 4. The authors stated that the \\u201crecurrent step\\u201d for RNN is through time (as the authors stated) while the \\u201crecurrent step\\u201d in UT is not through time. [...] In this sense, we may argue that the UT cannot access memory across its own t (stacking across t). [...]\\n\\nYes, this is a good point and indeed correct in terms of the model as reported in the paper. We did also implement a variant of UT where in every step (in depth \\u201ct\\u201d) the model attends to the output of all the previous steps (not just the last one; i.e. it has access to memory across t), but it didn\\u2019t improve results in our experiments. We speculate that this may be because being able to access memory in time (i.e. across sequence length), in particular for language tasks, is more important than being able to access all the previous transformations (i.e. access memory in depth). \\n\\nFurthermore, we also note that the maximum number of steps in depth (denoted $T$ in the paper) is typically *much fewer* than the maximum length of the sequences (denoted $m$ in the paper). This makes access to previous transformations less useful across \\\"recurrent steps\\\" for UTs as the recurrence allows the model to memorize its transformations across the shorter paths in depth (due to vanishing gradient playing a smaller role), and so being able to look up memory in each step (\\u201cacross its own t\\u201d as the reviewer mentions) therefore becomes less useful.\"}",
"{\"title\": \"Rebuttal Part 2\", \"comment\": \">>3. Although evaluated on multiple datasets and tasks, they only cover simple QA task and EN-DE translation task. Comparing to other papers applying modifications to Transformer, it is better to include at least one heavy task on large/challenging dataset/task.\\n\\nWe chose an array of 6 different tasks (ranging from smaller and more structured, to large-scale in the case of the WMT machine translation experiments) in order to measure and highlight different capabilities of UT compared to other models:\\n\\n* We chose bAbI-QA since its set of 20 different tasks each tests a unique aspect of language understanding and reasoning. Besides this, the bAbI-1k data set (as opposed to the 10k version) is quite a challenging setup since a model should be very data efficient to be able to get reasonable results on this data, and as we show, the Transformer (and LSTMs for that matter) are *not* able to solve these tasks. Therefore, given that these state-of-the-art sequence models fail here, we believe evaluating on these tasks to be a reasonable first step to benchmark the capabilities of UTs against other models on (admittedly simpler) structured linguistic inference tasks. \\n* Algorithmic tasks and LTE tasks are also considered as a set of controlled experiments that first of all helps us to compare the model with other theoretically-appealing models like Neural GPU, and to test the models in terms of some specific aspects such as length-generalization or ability to model nesting in the input source code (where again, LSTMs and the Transformer perform very poorly).\\n* The subject-verb agreement task is chosen as it has been shown [1] that the lack of recurrence can prevent the Transformer from solving this task, whereas we show that the Universal Transformer easily solves it and in fact improves as the task gets harder, i.e. more attractors are introduced (last paragraph, Sec 3.2).\\n* Lambada is a challenging large-scale dataset which highlights the difficulties of incorporating broader context in the task of language modeling. Achieving SOTA on this dataset is further evidence that the Universal Transformer provides a better inductive bias for language understanding. \\n* And finally, experiments on the large-scale machine translation task, WMT2014-ENDE, show that the Universal Transformer is not only a theoretically-appealing model, but also a model that performs well on practical real-world tasks.\\n\\nWe believe that, together, this set of 6 diverse tasks highlights the different strengths and weaknesses of UT, especially compared to the well established LSTM and Transformer baselines, and we leave more investigation with more datasets/tasks for future studies. \\n---------------------------------------------------------------\\n[1] Tran, Ke, Arianna Bisazza, and Christof Monz. \\\"The Importance of Being Recurrent for Modeling Hierarchical Structure.\\\" arXiv preprint arXiv:1803.03585 (2018).\\n \\n\\n>>4. On machine translation task, why does the model without dynamic halting achieve the SOTA performance? This is in contrast to the claim of the advantage of using dynamic halting.\\n\\nThe advantage of dynamic halting is that it mainly helps in the smaller (bAbI, SVA) and more structured tasks (Lambada). On MT we achieved marginally better results without it. We believe this is because dynamic halting acts as a useful regularizer on the smaller tasks, and is therefore not as useful when more data is available in the large-scale MT task. 
We mention this in the discussion of our results, but we emphasize this even more in the revised version of the Introduction.\\n\\n>> 5. The ablation studies focus only on the dynamic halting, but what if weight sharing is removed from the UT?\\n\\nAs noted above, UT without weight-sharing (across depth) is not recurrent (as separate transition functions are learned for each step/\\u201dlayer\\u201d), so it cannot generate a variable number of revisions / processing steps, and therefore also cannot use dynamic halting. It is only with shared transition blocks that the model becomes recurrent, allowing the use of dynamic halting / ACT.\"}",
"{\"comment\": \"oops, the typo I mentioned exist in your arXiv submission rather than openreview submission, sorry about the mistake.\\nAlso thanks for your notice about eqn 5.\", \"title\": \"Thanks for your reply\"}",
"{\"title\": \"Eq 4 is $H^t=LayerNorm(A^t +Transition(A^t))$ in our submission\", \"comment\": \"Thanks for the comment. If you download and check the pdf of our submission in OpenReview, equation 4 is in fact $H^t=LayerNorm(A^t +Transition(A^t))$, and not $H^t=LayerNorm(A^{t-1}+Transition(A^t))$.\\n\\nThere is, however, a small typo in eqn 5. It should be $A^t =LAYERNORM((H^{t\\u22121}+P^t ))+MULTIHEADSELFATTENTION(H^{t\\u22121}+P^t ))$ instead of $A^t =LAYERNORM(H^{t\\u22121}+MULTIHEADSELFATTENTION(H^{t\\u22121}+P^t ))$, as the residual connection in our model adds up the input \\\"with coordinate embedding\\\" to the state. We already fixed this in the revised version of our submission and will upload it to OpenReview soon.\"}",
"{\"comment\": \"In eq 4, you wrote $H^t=LayerNorm(A^{t-1}+Transition(A^t))$.\\nBut according to your text description and figure 4, I suppose it should be $H^t=LayerNorm(A^t +Transition(A^t))$, otherwise, there would be a cross-step residual connection which is not mentioned in the paper.\", \"title\": \"Probably a typo?\"}",
"{\"title\": \"Recursively applying multihead self-attention block in Transformer, small change leads to effective improvements on multiple tasks.\", \"review\": \"This paper extends Transformer by recursively applying a multi-head self-attention block, rather than stack multiple blocks in the vanilla Transformer. An extra transition function is applied between the recursive blocks. This combines the idea from RNN and attention-based models. But the RNN structure here is not applied to the input sequence, but to the sequence of blocks inside the Transformer encoder/decoder. In addition, it also uses a dynamic adaptive computation time (ACT) halting mechanism on each position, as suggested by the previous ACT paper. In fact, it can be seen as a memory network with a dynamic number of hops at the symbol level.\\n\\nThe paper is well-written and easy to follow. The experimental results demonstrate that the proposed model can achieve state-of-the-art prediction quality in several algorithmic and NLP tasks.\\n\\nPros\\n1. The proposed UT is compatible with both algorithmic and NLP tasks by combining the Transformer with weight sharing of recurrence and dynamic halting. In contrast, previous algorithmic and NLP takes can only be solved by more specific neural architectures (e.g., NTM for algorithmic tasks and the Transformer for NLP tasks).\\n2. The empirical results verify the effectiveness of the UT on several benchmarks. \\n3. The careful experimental analyses not only show the insight of dynamic halting in QA task but demonstrate the ACT is very useful for algorithmic tasks. \\n4. The publicly-released codes could make great contributions to the NLP community. \\n\\nCons\\n1. It proposes an incremental change to the original Transformer by introducing recursive connection between multihead self-attention blocks with ACT. The idea behind UT is similar to memory networks and multi-hop reasoning. \\n2. The recursive structure is not applied to the input sequence, so UT does not have the advantage of RNN/LSTM on capturing sequential information and high-order features. \\n3. Although evaluated on multiple datasets and tasks, they only cover simple QA task and EN-DE translation task. Comparing to other papers applying modifications to Transformer, it is better to include at least one heavy task on large/challenging dataset/task. \\n4. On machine translation task, why does the model without dynamic halting achieve the SOTA performance? This is in contrast to the claim of the advantage of using dynamic halting.\\n5. The ablation studies focus only on the dynamic halting, but what if weight sharing is removed from the UT?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Solid and empirically promising model which merges Transformer and recurrent models but without strong intuitive or theoretical support to back up its claims.\", \"review\": \"This paper describes a transformer with recurrent structure to take advantage of self-attention mechanism. The number of recurrences can be dynamically determined through ACT-like halting depending on the difficulty of the input. A series of experiments on language modeling tasks have been demonstrated to show promising performances.\\n\\nThe overall concerns about this paper is that while the performances are quite promising, the theoretical claims and comparisons in the discussion section are of question. The authors attempt to provide connections to other networks (i.e., Natural GPU, RNN) since UT is an amalgamation of both transformers and RNN, they sound a little \\u201chand-wavy\\u201d (i.e., comments about UT effectively interpolating between the feed-forward, fixed-depth Transformer and a gated recurrent architecture). In short, while empirically completely acceptable, intuitively or theoretically it is hard to grasp why UT is superior other than the dynamic/sharing layers across t (not time). I believe that improving this aspect could make this paper even better. Based on the comments below and the responses with the authors, I am willing to improve my score.\", \"pros\": \"1.\\tThe best of both worlds from parallelizable transformer and recurrent structure for repeated self-attention mechanism. Essentially, the \\u201cdepth\\u201d of the transformer can vary if we \\u201cunroll\\u201d the recurrent stacks.\\n\\n2.\\tExtensive experiments showing the performance of UT.\\n\\n3.\\tAnalysis of the effect of the recurrent aspect of UT and how it can vary depending on the task difficulty.\\n\\nComments/cons:\\n1.\\tI am having trouble understanding the \\u201cuniversal\\u201d aspect of the transformer. Is this because the variability of the depth of UT (since \\u201cgiven sufficient memory\\u201d was mentioned)? If so, such characteristic of \\u201ccomputational universality\\u201d does not seem much unique to UT compared to infinite memory for a transformer or a simple RNN across stack (i.e., input is the while sequence and recurrent step is through the stack analogous to UT stack). Please comment on this.\\n\\n2.\\tIt is nice to see many experiments, but without preexisting knowledge about the datasets and their tasks, I can only make relative judgements based on the provided comparisons against other methods. It would be nice to see slightly more detailed descriptions of each task (particularly LAMBADA LM), not necessarily in the main paper (due to space) but in the appendix if possible for improved self-containedness. \\n\\n3.\\tIn the discussion, the crucial difference between UT and RNN is that RNN is stated to be that RNN cannot access memory in the recurrent steps while UT can. This seems to be the case for not just UT but any Transformer-type model by construction.\\n\\n4.\\tThe authors stated that the \\u201crecurrent step\\u201d for RNN is through time (as the authors stated) while the \\u201crecurrent step\\u201d in UT is not through time. While this claim is completely correct itself, the RNN\\u2019s inability to access memory in its \\u201crecurrent steps\\u201d was compared with how UT could still access memory throughout its \\u201crecurrent steps\\u201d. In this sense, we may argue that the UT cannot access memory across its own t (stacking across t). 
I am not sure if it is fair to make such implications by putting both \\u201crecurrent steps\\u201d to be of same nature and pointing out one\\u2019s weakness. Perhaps the authors could comment on this.\", \"minor\": \"1.\\tTable 2.: Best Stack-RNN for 1 attractor is the highest but not bold-faced.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Good paper, contribution moderate, experiments promising\", \"review\": \"The paper is well written and proofread, concrete and clear. The model is quite clearly explained, especially with the additional space of the supplementary material, appendices A and B (note fig 4 is less good quality than fig 2 for some reason) -- I\\u2019m fine with the use of the Supp Mat for this purpose.\\n \\nThe experiments have been conducted well, and demonstrate a wide range of tasks, which seems to suggest that the UT has pretty general purpose. The range of algorithmic tasks is limited, e.g. compared to the NTM paper.\\nI miss any experimental details at all on training.\\nI miss a comparison to Neural GPU and Stack RNN in 3.1, 3.2.\\n\\nI miss a proof that the UT is computationally equivalent to a Turing machine. It does not have externally addressable, shared memory like a tape, and I\\u2019m not sure how to transpose read/write heads either.\\n\\nThe argument that the UT offers a good balance between inductive bias and expressivity is weak, though it may be the best one can hope for of a statistical model in a way. I note that in 3.1, the Transformer overfits, while it seems to underfit in 3.3 (lower LM and RC accuracy, higher LM perplexity), while the UT fare well, which suggests that the UT hits the balance better than the Transformer, at least.\\n\\nFrom the point of view of network structure, it seems natural to lift further constraints on the model: \\nwhy should width of intermediate layers be exactly equal to sequence length?\\nwhy should all hidden state vectors be size $d$, the size of the embeddings chosen at the first layer, which might be chosen out of purely practical reasons like the availability of pre-trained word embeddings?\\n\\nWhat is the contribution of this work? It starts from the Transformer, the ACT idea for dynamic halting in recurrent nets, the need for models fit for algorithmic tasks. \\nThe UT\\u2019s building blocks are near-identical to the Transformers (and the paper is upfront and does a good job of explaining these similarities, fortunately)\\n- cf eq1-5: residuals, multi-headed self attention, and layer norm around all this. \\n- shared weights among all such units\\n- encoder-decoder architecture\\n- autoregressive decoder with teacher forcing\\n- decoder units like the encoder\\u2019s but with extra layer of attention to final output of encoder\\n- coordinate embeddings\\nThe authors may correct me, but I believe that the UT with FC layers is exactly identical to the Transformer described in Vaswani 2017 for T=6. \\nSo this paper introduces the idea of varying T, interprets it as a form of recurrence, and adds dynamic halting with ACT to that. Interestingly, the recurrence is not over sequence positions here.\\nThis contribution is not major, on the other hand the experimental validation suggests the model is promising.\\n\\nTypos and writing suggestions\", \"above_eq_8\": \"masked such that -> masked so that\", \"eq_8\": \"dimensions of O and H^T are incompatible: d*V, m*d; to evacuate the notation issue for transposition, cf footnote 1, here and elsewhere, you could use either ${^t A}$ or $A^\\\\top$ or $A^\\\\intercal$. 
You could also write $t=T$ instead of just $T$.\\nsec3.3 line -1: designed such that -> designed so that\\nTowards the beginning of the paper, it may be useful to stabilise terminology for $t$: depth (as opposed to width for $m$), time steps, recurrence dimension, revisions, refinements\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1luCsCqFm | Learn From Neighbour: A Curriculum That Train Low Weighted Samples By Imitating | [
"Benyuan Sun",
"Yizhou Wang"
] | Deep neural networks, which gain great success in a wide spectrum of applications, are often time, compute and storage hungry. Curriculum learning was proposed to boost the training of networks via a syllabus from easy to hard. However, the relationship between data complexity and network training is unclear: why do hard examples harm performance at the beginning but help at the end? In this paper, we aim to investigate this problem. Similar to internal covariate shift in the network's forward pass, distribution changes in the weights of the top layers also affect the training of the preceding layers during the backward pass. We call this phenomenon inverse "internal covariate shift". Training on hard examples aggravates the distribution shift and damages training. To address this problem, we introduce a curriculum loss that consists of two parts: a) an adaptive weight that mitigates large early punishment; b) an additional representation loss for low-weighted samples. The intuition of the loss is very simple. We train top layers on "good" samples to reduce large shifts, and encourage "bad" samples to learn from "good" samples. In detail, the adaptive weight assigns small values to hard examples, reducing the influence of noisy gradients. On the other hand, the low-weighted hard samples receive the proposed representation loss. Low-weighted data gets nearly no training signal and can get stuck in the embedding space for a long time. The proposed representation loss aims to encourage their training. This is done by letting them learn a better representation from their superior neighbours while not participating in the learning of the top layers. In this way, the fluctuation of the top layers is reduced and hard samples also receive signals for training. We found in this paper that curriculum learning needs random sampling between tasks for better training. Our curriculum loss is easy to combine with existing stochastic algorithms like SGD. Experimental results show a consistent improvement over several benchmark datasets. | [
"Curriculum Learning",
"Internal Covariate Shift"
] | https://openreview.net/pdf?id=r1luCsCqFm | https://openreview.net/forum?id=r1luCsCqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkg1p7Ngx4",
"B1eXpM0L6m",
"SyxpzqEPhm",
"ryle7pFQ3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544729527465,
1542017722682,
1540995604645,
1540754711917
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper908/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper908/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper908/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper908/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper attempts to address a problem they dub \\\"inverse\\\" covariate shift where an improperly trained output layer can hamper learning. The idea is to use a form of curriculum learning. The reviewers found that the notion of inverse covariate shift was not formally or empirically well defined. Furthermore the baselines used were too weak: the authors should consider comparing against state-of-the-art curriculum learning methods.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"title\": \"Some basic intuition, but very handwavy, unclear paper, with dubious experimental significance.\", \"review\": [\"This paper suggests a source of slowness when training a two-layer neural networks: improperly trained output layer (classifier) may hamper learning of the hidden layer (feature). The authors call this \\u201cinverse\\u201d internal covariate shift (as opposed to the usual one where the feature distribution shifts and trips the classifier). They identify \\u201chard\\u201d samples, those with large loss, as being the impediment. They then propose a curriculum, where such hard samples are identified at early epochs, their loss attenuated and replaced with a requirement that their features be close to neighboring (in feature space) samples that are similarly classified, but with a more comfortable margin (thus \\u201ceasy\\u201d.) The authors claim that this allows those samples to contribute through their features at first, without slowing the training down, then in later epochs fully contribute. Some experiments are offered as evidence that this indeed helps speedup.\", \"The paper is extremely unclear and was hard to read. The narrative is too casual, a lot of handwaving is made. The notation is very informal and inconsistent. I had to second guess multiple times until deciphering what could have possibly been said. Based on this only, I do not deem this work ready for sharing. Furthermore, there are some general issues with the concepts. Here are some specific remarks.\", \"The intuition of the inverse internal covariate shift is perhaps the main merit of the paper, but I\\u2019m not sure if this was not mostly appreciated already.\", \"The paper offers some experimental poking and probing to find the source of the issue. But that part of the paper (section 3) is disconnected from what follows, mainly because hardness there is not a single point\\u2019s notion, but rather that of regions of space with a heterogeneous presence of classes. This is quite intuitive in fact. Later, in section 4, hard simply means high loss. This isn\\u2019t quite the same, since the former notion means rather being near the decision boundary, which is not captured by just having high loss. (Also, the loss is not specified.)\", \"Some issues with Section 3: the notions of \\u201ctask\\u201d needs a more formal definition, and then subtasks, and union of tasks, priors on tasks, etc. it\\u2019s all too vague. The term \\u201cnon-computable\\u201d has very specific meaning, best to avoid. Figure 2 is very badly explained (I believe the green curve is the number of classes represented by one element or more, while the red curve is the number of classes represented by 5 elements or more, but I had to figure it out on my own). The whole paragraph preceding Figure 3 is hard to follow. I sort of can make up what is going, especially with the hindsight of Section 4, since it\\u2019s basically a variant of the proposed schedule (easy to hard making sure all clusters, as proxy to classes, are represented) without the feature loss, but it needs a rewriting.\", \"It is important to emphasize that the notion of \\u201ceasy\\u201d and \\u201chard\\u201d can change along the training, because they are relative to what the weights are at the hidden layer. Features of some samples may be not very separable at some stage, but they may become very separable later. 
The suggested algorithm does this reevaluation, but this is not made clear early on.\", \"In Section 4, the sentence where S_t(x) is mentioned is unclear. I assume \\u201csurpass\\u201d means achieving a better loss. Also later M_t (a margin) is used, when I think what is meant is S_t (a set). The whole notation (e.g. \\u201ctopk\\u201d, indexing that is not subscripted, non-math mode math) is bad.\", \"If L_t is indeed a loss (and not a \\u201cperformance\\u201d like it\\u2019s sometimes referred to, as in minus loss), then I assume larger losses means that the weight on the feature loss in equation (3) should be larger. So I think a minus sign is missing in the exponent of equation (2), and also in the algorithm.\", \"I\\u2019m not sure if the experiments actually show a speedup, in the sense of what the authors started out motivating. A speedup, for me, would look like the training progress curves are basically compressed: everything happens sooner, in terms of epochs. Instead, what we have is basically the same shape curve but with a slight boost in performance (Figure 4.) It\\u2019s totally disingenuous to say \\u201cthis is a great boost in speed\\u201d (end of Section 5.2) by saying it took 30 epochs for the non-curriculum version to get to its performance, when within 4 epochs (just like the curriculum version) it was at its final performance basically.\", \"So the real conclusion here is that this curriculum may not have sped up the training in the way we expect it at all. However, the gradual introduction of badly classified samples in later epochs, while essentially replacing their features with similarly classified samples for earlier epochs, has somehow regularized the training. The authors do not discuss this at all, and I think draw the wrong conclusion from the results.\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea, poor exposition\", \"review\": \"This paper describes an approach for automated curriculum learning in a deep learning classification setup. The main idea is to weigh data points according to the current value of the loss on these data points. A naive approach would prevent learning from data points that are hard to classify given parameters of the current mode, and so the authors propose to use an additional loss term for these hard data points, which encourages the hidden representation of these data points to be closer to representation of points that are close in the hidden space and yet are easier to classify (in the sense that the loss of easy samples is lower by some threshold value then the loss of hard samples). This last part is implemented by caching hidden representations and classification loss values during training and fetching nearest neighbours in the feature space whenever a hard data point is encountered. The final loss takes the form of a linear combination of the classification loss and the representation loss.\\n\\nThe idea is interesting in the sense that it tries to use information about how difficult classification of a given data point is to improve learning. The proposed representation loss can lead to forming tight cluster of similar data point in the feature space and can make classification easier. It is related to student-teacher networks, where a student is trained to imitate the teacher in generated similar feature representations.\\n\\nThe authors justify the method by introducing the notion of \\u201cinverse internal covariate shift\\u201d. However, it is not defined formally, nor is it supported empirically, and is based on the (often criticized [1]) notion of \\u201cinternal covariate shift\\u201d. For this reason, it is hard to accept the presented argumentation in its current state.\\n\\nMoreover, there seems to be a mistake in equation (2) in \\u00a74.2. The equation defines the method of computing loss weighting for a given datapoint. The authors note that it converges to the value of one with increasing training iterations, but for correctness it should be \\\\in [0, 1]. If it is > 1, one of the losses in equation (3) is negated and is therefore maximised (instead of being minimised), which can lead to unexpected behaviour. Current parameterization allows it to be \\\\in [0, + infinity].\\n\\nExperimental evaluation consists of quantitative evaluation of random sampling (usual SGD) and the proposed approach in training a classification model on MNSIT, CIFAR-10 and CIFAR-100. The proposed approach outperforms random sampling. This is encouraging, but the method should be compared to state of the art in curriculum learning in order to gauge how useful this approach is.\\n\\nThe paper is poorly written, with many grammatical (lack of \\u201cs\\u201d at the end of verbs used in singular 3rd person, many places in the paper) and spelling mistakes (e.g. \\u00a73.2\\u00b66 \\u201ctough\\u201d instead of \\u201cthrough\\u201d, I think). Some descriptions are unclear (e.g. \\u00a74.2\\u00b62), while some parts of the paper seem to be irrelevant to the problem at hand (\\u00a73.1 describes training on a single minibatch for multiple iterations as if it were a separate task and motivates random sampling, which is just SGD).\\n\\nTo summarize, the paper presents a very interesting idea. In its current state it is hard to read, however. It also contains a number of unsupported claims and can be misleading. 
It could also benefit from a more extensive evaluation. With this in mind, I suggest rejecting this paper.\\n\\n[1] Rahimi, A (2017). Test of Time Award Talk, NIPS.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but limited experiments and analysis\", \"review\": \"This paper proposes a curriculum that encourages training on easy examples first and postpones training on hard examples. However, contrary to common ideas, they propose to keep hard examples contribute to the loss and only forcing them to have internal representations similar to a nearby easy example. The proposed objective is hence biased at the beginning but they dampen it over time to converge to the true objective at the end.\", \"positives\": [\"There is not much work considering each example as an individual subtask.\", \"The observation that an under-fitted classifier can destroy a good feature extractor is good.\"], \"negatives\": [\"In the intro it says \\u201c[update rule of gradient descent] assumes the top layer, F2, to be the right classifier.\\u201d. This seems like a fundamental misunderstanding of gradient descent and the chain rule. The term d output/d F1 takes into account the error in F2.\", \"The caption of figure 2 says the \\u201c... they cannot separate from its neighbors\\u2026\\u201d. If the loss of all examples in a cluster is high, all are being misclassified. A classifier then might have an easy job fixing them if all their labels are the same or have a difficult job if their labels are random. The second scenario is unlikely if based on the claim of this figure, the entropy has decreased during training. In short, the conclusion made in fig 2 does not necessarily hold given that figure.\", \"This method is supposed to speed up training, not necessarily improve the final generalization performance of the model. The figures show the opposite outcomes. It\\u2019s not clear why. The improvement might be due to not tuning the hyperparameters of the baselines.\", \"Figure 3 does not necessarily support the conclusion. The fluctuations might be caused by any curriculum that forces a fixed ordering across training epochs. Often on MNIST, the ordering of data according to the loss does not change significantly throughout training.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJxdAoCcYX | Characterizing Malicious Edges targeting on Graph Neural Networks | [
"Xiaojun Xu",
"Yue Yu",
"Bo Li",
"Le Song",
"Chengfeng Liu",
"Carl Gunter"
] | Deep neural networks on graph structured data have shown increasing success in various applications. However, due to recent studies about vulnerabilities of machine learning models, researchers are encouraged to explore the robustness of graph neural networks (GNNs). So far there are two works that attack GNNs by adding/deleting edges to fool graph-based classification tasks. Such attacks are challenging to detect since the manipulation is very subtle compared with traditional graph attacks. In this paper we propose the first detection mechanism against these two proposed attacks. Given a perturbed graph, we propose a novel graph generation method together with link prediction as preprocessing to detect potential malicious edges. We also propose novel features which can be leveraged to perform outlier detection when the number of added malicious edges is large. Different detection components are proposed and tested, and we also evaluate the performance of the final detection pipeline. Extensive experiments are conducted to show that the proposed detection mechanism can achieve AUC above 90% against the two attack strategies on both Cora and Citeseer datasets. We also provide in-depth analysis of different attack strategies and corresponding suitable detection methods. Our results shed light on several principles for detecting different types of attacks. | [
"attacks",
"malicious edges",
"graph neural networks",
"graph",
"gnns",
"data",
"success",
"various applications",
"due"
] | https://openreview.net/pdf?id=HJxdAoCcYX | https://openreview.net/forum?id=HJxdAoCcYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJeOmrileV",
"rJgbbWRtC7",
"rye7sxAKC7",
"B1ee9e0KAQ",
"SklFh1AtAQ",
"BkloN1RFAm",
"B1luW-jf6X",
"rygr5DrMpm",
"HJg5k3ZMTQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544758559739,
1543262456715,
1543262363105,
1543262344139,
1543262128574,
1543262002690,
1541742848297,
1541719948561,
1541704673653
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper907/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper907/Authors"
],
[
"ICLR.cc/2019/Conference/Paper907/Authors"
],
[
"ICLR.cc/2019/Conference/Paper907/Authors"
],
[
"ICLR.cc/2019/Conference/Paper907/Authors"
],
[
"ICLR.cc/2019/Conference/Paper907/Authors"
],
[
"ICLR.cc/2019/Conference/Paper907/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper907/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper907/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers recommended rejecting this submission so I will as well. However, I do not believe it is fundamentally misguided or anything of that nature.\\n\\nUnfortunately, reviewers did not participate as much in discussions with the authors as I believe they should. However, this paper concerns a relatively niche problem of modest interest to the ICLR community. I believe a stronger version of this work would be a more application-focused paper that delved into practical details about a specific case study where this work provides a clear benefit.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Clear reviewer consensus to reject\"}",
"{\"title\": \"Response to Paper907 AnonReviewer1\", \"comment\": \"We thank the reviewer for the valuable comments. Here are our responses to the concerns:\", \"q1\": \"[Unclear relationship with GCN attacks] To what degree do these methods address the particulars of GCN attacks? This could possibly be addressed by better recapping the GCN attacks and explaining how these methods directly relate to those attacks.\", \"a1\": \"Thank you for the interesting question. Basically, the motivation of this work is that: given GCN is currently widely applied, and recently two state-of-the-art attacks have been proposed against GCN models, therefore we aim to explore the possibilities of detecting such adversarial behaviors (added/deleted edges) in various scenarios. In addition, indeed there are attacks against other graph models, but there have been some heuristic detection methods to detect those such as sybil detection which has been studied for years. So here we aim to focus on these new types of attacks against GCN and it is a good idea for us to evaluate our detection method for other attacks in our future work.\\nIn addition, the GCN models exploit local patterns in a large graph to give reasonable predictions. In order to attack a GCN model, a malicious attacker can add edge to a node so that the local property of a particular node can be mixed with wrong information, thus leading to confusion. Therefore, our detection method actually leverages such property of GCN by sampling small subgraphs and train the detector which aims to avoid the adversarial impact of local malicious edges. We also exhibit a new experiment in the paper which adds random edges to the graph and use our methods to detect these edges. The result is not as good as the detection of malicious edges. This shows that our methods do capture the sophisticated adversarial behaviors targeting on graph models.\", \"q2\": \"[Lack of robustness analysis] How robust are these methods? While the intuition behind the methods at a high level seems reasonable, it is unclear if they provide any real robustness to an adversary. Could the previous attacks be adapted if these detection mechanisms were known?\", \"a2\": \"It is hard for an adversary to do adaptive attack against our detection approaches since the detections here are based on graph structures which are non-differentiable. In addition, we have updated a new ensemble model which combines several of our proposed methods. This would make our pipeline harder to bypass since the attacker would have to bypass several different types of detection mechanisms. Nevertheless, we will study whether we can propose stronger adaptive attack without requiring gradient information, which is needed by current attacks, in the future work.\", \"q3\": \"[Unsuitble model names] GraphGen is worded weirdly -- you're not generating graphs, you're building a generative model for which you evaluate the probability that you would have generated an observed subgraph.\", \"a3\": \"Thanks for the valuable suggestion. We have modified the name into GraphGenDetect since we are actually using generative model over graphs to detect malicious edges.\", \"q4\": \"[Insufficiency on related works] Robust MF has been studied and should be cited as well:\\n Benjamin Van Roy and Xiang Yan. Manipulation-resistant collaborative filtering systems. In Proceedings of the Third ACM Conference on Recommender Systems, RecSys \\u201909, pages 165\\u2013172, New York, NY, USA, 2009. 
ACM.\\n\\n Bhaskar Mehta and Wolfgang Nejdl. Unsupervised strategies for shilling detection and robust collaborative filtering. User Model. User-Adapt. Interact., 19(1-2):65\\u201397, 2009.\", \"a4\": \"Thanks for the related work. Indeed MF-based collaborative filtering systems can be viewed as a special case of graph data, e.g. a bipartite graph between users and items. We have updated our related work by adding the suggested citations with discussions about studies on robust collaborative filtering systems.\"}",
"{\"title\": \"Response to Paper907 AnonReviewer2 (Part 2)\", \"comment\": \"\", \"q5\": \"[Lack of generalization] The detection algorithm seems to exist to detect malicious edges without supervision. In that case, how can we determine which method we should use given that detection performance differs in different dataset?\", \"a5\": \"In the updated version we have evaluated a uniform pipeline for detecting malicious edges, which combines several of our proposed models in section 3.6. We show that the uniform pipeline can detect adversarial edges with an average of over 80% AUC when victim node degree is very small or is large enough.\", \"q6\": \"[Insufficient comparisons with baselines] It would be useful to compare with some existing malicious node/graph pattern mining algorithms such as Graph-Based Fraud Detection in the Face of Camouflage, Hooi et. al. even if the baseline method does not aim to directly solve the addressed problem. And also that literature needs to be cited.\", \"a6\": \"Thanks for the related work. This paper aims to detect fraud links in collaborative filtering systems instead of graph models. We have updated our related work which discusses the robustness of collaborative filtering systems and have cited the paper in section 5.\"}",
"{\"title\": \"Response to Paper907 AnonReviewer2\", \"comment\": \"We thank the reviewer for the valuable comments. Here are our responses to the concerns:\", \"q1\": \"[The unclear relationship with GCN attacks] Given that the proposed algorithms do not leverage the underlying model structure very much, why the proposed algorithms are special to the graphical neural network is not very clear. It will be great if the authors clearly describe what the proposed methods aim to defend.\", \"a1\": \"Thanks for the interesting question. The goal of our proposed methods is: given that graph neural networks are currently widely applied and recently two state-of-the-art attacks have been proposed against the models such as GCN, we aim to explore the possibilities of detecting such adversarial behaviors (added/deleted edges) in various scenarios. So here we aim to focus on these new types of attacks against graph neural network and it is a good idea for us to evaluate our detection method for other attacks in our future work.\\nIn addition, the graph models exploit local patterns in a large graph to give reasonable predictions. In order to attack a graph neural network, a malicious attacker can add edge to a node so that the local property of a particular node can be mixed with wrong information, thus leading to confusion. Therefore, our detection method actually leverages such property of GCN by sampling small subgraphs and train the detector which aims to avoid the adversarial impact of local malicious edges. We also exhibit a new experiment in the paper which adds random edges to the graph and use our methods to detect these edges. The result is not as good as the detection of malicious edges. This shows that our methods do capture the sophisticated adversarial behaviors targeting on graph models.\", \"q2\": \"[Limitation on the generalization of algorithms] It seems that victim nodes are carefully selected and fixed throughout all the experiments, but it limits the generalization about the performance of the proposed algorithms. More extensive evaluations are required along with the guideline of what detection algorithm we have to choose for the unsupervised setting.\", \"a2\": \"We would like to emphasize that we did not cherry-pick the victim nodes. We are randomly picking the victim nodes according to its node degrees since we want to check how our approaches perform over nodes with various (large/small) degrees to evaluate the generalization of the proposed method. The ideal approach is to enumerate all the victim nodes and try our detection approach, but the computational cost would be too high. Therefore, we pick a subset of them randomly and evaluate our approach, and we think the subset size (node degree: 20/10/6) is large enough to show the performance of our models. In addition, we have proposed a uniform pipeline which combines our detection algorithms and we updated the performance of the uniform pipeline in the revision section 4.2.\", \"q3\": \"[Randomly adding edges] In Section 3.1 and some other subsections, it seems to assume that the links in a given network are very clean but in reality there are a lot of noisy connections. How can we distinguish some random connections from malicious connections? Evaluation along with this question will be also useful.\", \"a3\": \"We thank the reviewer for the interesting question.\\nFirst, in our experiments, we do not explicitly assume the given network is clean. 
For instance, the citation network dataset Cora and Citeseer we use is extracted from the real world, and there is no guarantee that the network is clean.\\nIn addition, based on the suggestion, we added additional experiments in section 4.4 which explicitly randomly add edges to the graphs. In figure 4 we show that our proposed detection method will only detect malicious connections rather than the random ones.\", \"q4\": \"[The effect of SubGraphLinkPred is unclear] In Section 3.2, eventually, the ratio of malicious edges remains the same if the authors use random sampling. In that case, how SubGraphLinkPred helps is not very convincing.\", \"a4\": \"As long as the number of malicious edges is small, many subgraphs will contain no malicious edges at all. We hope that the link prediction model can learn better with these benign subgraphs, instead of simply learning on the large graph with malicious edges. In addition, we hope that the sampled small graphs will exhibit a similar pattern on which we can train a better graph neural network. In contrast, the single original graph is too large and therefore the pattern may be quite difficult to discover.\"}",
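The subgraph-sampling argument in A4 above is easy to prototype. Below is a minimal sketch of the idea, assuming networkx; `LinkPredictor` and its `fit`/`predict_proba` interface are hypothetical placeholders, not the authors' code, and the sampling scheme is only one plausible reading of the description.

```python
import random
import networkx as nx

def sample_subgraphs(G, num_samples=100, radius=2):
    """Sample small ego-network subgraphs; when malicious edges are rare,
    most sampled subgraphs contain none of them, giving cleaner training data."""
    nodes = list(G.nodes)
    return [nx.ego_graph(G, random.choice(nodes), radius=radius)
            for _ in range(num_samples)]

def score_edges(G, predictor):
    """Score every edge with a trained link predictor; edges the predictor
    considers unlikely are flagged as potentially malicious."""
    return {(u, v): predictor.predict_proba(G, u, v) for u, v in G.edges}

# Hypothetical usage: fit on sampled (mostly benign) subgraphs, then rank edges
# by predicted likelihood and inspect the lowest-scoring ones.
# predictor = LinkPredictor().fit(sample_subgraphs(G))
# suspicious = sorted(score_edges(G, predictor).items(), key=lambda kv: kv[1])
```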
"{\"title\": \"Response to Paper907 AnonReviewer3\", \"comment\": \"We thank the reviewer for the valuable comments. Here are our responses to the concerns:\", \"q1\": \"[Significance of detection can be improved] To properly test the detection performance, I recommend that the authors run experiments on various random graph models. Examples of random graph models include Erdos-Renyi, Stochastic Kronecker Graph, Configuration Model with power-law degree distribution, Barabasi-Albert, Watts-Strogatz, Hyperbolic Graphs, Block Two-level Erdos-Renyi, Multiplicative Attribute Graph Model, etc. That way we can learn what types of networks the detection performance is better.\", \"a1\": \"Thanks for the valuable suggestions. We have added additional experiments in section 4.3 for Erdos-Renyi graphs and Barabasi-Albert graphs which represent the scale-free graph family and apply our approach over these random graphs. The result shows that our approach is able to detect adversarial attacks within these random graphs as well.\", \"q2\": \"[Limitation on the generalization of algorithms] In terms of detection models, I recommend that the authors try approaches that look for the goodness of fit and model selection (e.g., see https://arxiv.org/pdf/1806.11220.pdf).\", \"a2\": \"Thanks for the related work. We updated our related work and discuss the relationship with the suggested paper. In the paper, the authors also want to fit a good model given only one observed network. The authors focus on the classification tasks while we would like to detect malicious edges with no supervision.\"}",
"{\"title\": \"General Reply to the Reviewers.\", \"comment\": \"We thank the reviewers for their valuable comments and suggestions. Based on the reviews, we made the following update to our revision:\\n1. To demonstrate the generalization of the proposed method, we generate additional two graphs: Erdos-Renyi graphs and Barabasi-Albert graphs which represent the scale-free graph family and apply our detection approach over these random graphs in section 4.3. We show that our approach is able to detect adversarial attacks within these random graphs with high AUC scores.\\n2. We added the discussion for related work, talking about the robust collaborative learning systems in section 5.\\n3. We added additional experimental results on a unified pipeline of our proposed detection method in section 3.6 and 4.2, and we show that the unified pipeline can achieve usually achieve high AUC on different real-world and random synthetic graph datasets.\\n4. We added additional experiments on adding random edges and detect them in section 4.1 (randomly adding edges) and section 4.4. We show that our pipeline will only detect adversarial edges with high AUC instead of these random edges. This shows that our approach indeed captures the malicious behavior from sophisticated attackers rather than random noise (edges).\"}",
"{\"title\": \"Important topic but significance can be improved\", \"review\": \"The study of detecting malicious edges in graphs is interesting and important. However, the significance of the paper can be improved. To properly test the detection performance, I recommend that the authors run experiments on various random graph models. Examples of random graph models include Erdos-Renyi, Stochastic Kronecker Graph, Configuration Model with power-law degree distribution, Barabasi-Albert, Watts-Strogatz, Hyperbolic Graphs, Block Two-level Erdos-Renyi, Multiplicative Attribute Graph Model, etc. That way we can learn on what types of networks the detection performance is better. Also, in terms of detection models, I recommend that the authors try approaches that look for goodness of fit and model selection (e.g., see https://arxiv.org/pdf/1806.11220.pdf).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This manuscript tackles the interesting problem, but more improvement seems necessary in various aspects.\", \"review\": [\"The authors address the interesting problem about the attack on the graph convolutional network model. The proposed method is developed under the assumptions about the attacking models sound simple but reasonable with proper references.\", \"However, the proposed approaches mostly include the ideas about the detecting mechanisms instead of being formulated in some novel form. Given that the proposed algorithms do not leverage the underlying model structure very much, why the proposed algorithms are special to the graphical neural network is not very clear.\", \"Also, the evaluations need to be further improved. It seems that victim nodes are carefully selected and fixed throughout all the experiments, but it limits the generalization about the performance of the proposed algorithms. Particularly, since the different detection algorithms perform differently on different datasets, more extensive evaluations are required along with the guideline of what detection algorithm we have to choose for the unsupervised setting.\", \"*Details\", \"It will be great if the authors clearly describe what the proposed methods aim to defend. Basically, the values by protecting some victim nodes, regardless of what attacking models are assumed, will help the audience with better understanding. Some of content in Section 2.2 can be brought up in the introduction.\", \"In Section 3.1 and some other subsections, it seems to assume that the links in a given network are very clean but in reality there are a lot of noisy connections. How can we distinguish some random connections from malicious connections? Evaluation along with this question will be also useful.\", \"In Section 3.2, eventually, the ratio of malicious edges remains the same if the authors use random sampling. In that case, how SubGraphLinkPred helps is not very convincing.\", \"The detection algorithm seems to exist to detect malicious edges without supervision. In that case, how can we determine which method we should use given that detection performance differs in different dataset?\", \"It would be useful to compare with some existing malicious node/graph pattern mining algorithms such as Graph-Based Fraud Detection in the Face of Camouflage, Hooi et. al. even if the baseline method does not aim to directly solve the addressed problem. And also that literature needs to be cited.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"First step on an important problem but hard to tell if its generally useful.\", \"review\": \"In this paper the authors present 4 methods for detecting outlier edges and nodes in graphs so as to prevent adversarial attacks on graph convolutional networks. They demonstrate that their methods are accurate (through high AUC) in detecting edges added by two previous adversarial detection methods.\\n\\nI think focusing on not just attacking GCNs but actually preventing them is awesome, and work of this sort should be highly lauded as I believe prevention is more difficult than attacks. That said, it is hard to tell how general this work is. The methods discussed follow fairly standard anomaly detection procedures (albeit with NN based models). However, this leaves a few key open questions: \\n\\n(1) To what degree do these methods address the particulars of GCN attacks? This could possibly be addressed by better recapping the GCN attacks and explaining how these methods directly relate to those attacks.\\n\\n(2) How robust are these methods? While the intuition behind the methods at a high level seems reasonable, it is unclear if they provide any real robustness to an adversary. Could the previous attacks be adapted if these detection mechanisms were known? For example, I expect that adding edges that are high likelihood and maximally change the victim label would be an effective deception technique. I believe a more thorough theoretical understanding of the robustness of the protection would make me more confident that these are broadly useful. As of now, it seems very much data dependent.\", \"details\": \"GraphGen is worded weirdly -- you're not generating graphs, you're building a generative model for which you evaluate the probability that you would have generated an observed subgraph.\", \"robust_mf_has_been_studied_and_should_be_cited_as_well\": \"Benjamin Van Roy and Xiang Yan. Manipulation-resistant collaborative filtering systems. In Proceedings of the Third ACM Conference on Recommender Systems, RecSys \\u201909, pages 165\\u2013172, New York, NY, USA, 2009. ACM.\\n\\nBhaskar Mehta and Wolfgang Nejdl. Unsupervised strategies for shilling detection and robust collaborative filtering. User Model. User-Adapt. Interact., 19(1-2):65\\u201397, 2009.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rkgd0iA9FQ | Convergence Guarantees for RMSProp and ADAM in Non-Convex Optimization and an Empirical Comparison to Nesterov Acceleration | [
"Soham De",
"Anirbit Mukherjee",
"Enayat Ullah"
] | RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical convergence properties have remained unclear. Further, recent work has seemed to suggest that these algorithms have worse generalization properties when compared to carefully tuned stochastic gradient descent or its momentum variants. In this work, we make progress towards a deeper understanding of ADAM and RMSProp in two ways. First, we provide proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives, and we give bounds on the running time.
Next we design experiments to empirically study the convergence and generalization properties of RMSProp and ADAM against Nesterov's Accelerated Gradient method on a variety of common autoencoder setups and on VGG-9 with CIFAR-10. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter \beta_1. We show that at very high values of the momentum parameter (\beta_1 = 0.99) ADAM outperforms a carefully tuned NAG on most of our experiments, in terms of getting lower training and test losses. On the other hand, NAG can sometimes do better when ADAM's \beta_1 is set to the most commonly used value: \beta_1 = 0.9, indicating the importance of tuning the hyperparameters of ADAM to get better generalization performance.
We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms, and it also produces iterates which exhibit an increasing trend for the minimum eigenvalue of the Hessian of the loss function at the iterates. | [
"adaptive gradient descent",
"deeplearning",
"ADAM",
"RMSProp",
"autoencoders"
] | https://openreview.net/pdf?id=rkgd0iA9FQ | https://openreview.net/forum?id=rkgd0iA9FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxVqfGrlE",
"Sklzfh9GyV",
"H1x2CXtlCm",
"SkloKxKgAX",
"HJe00ROlRm",
"r1gImA_gAX",
"H1l4kAugAm",
"SkgBbJ9waQ",
"H1gQqQbc2m",
"SJxNpokU27",
"Hke2uNGZhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545048716367,
1543838730305,
1542652883896,
1542652035515,
1542651605951,
1542651421958,
1542651356124,
1542065916637,
1541178250845,
1540910012484,
1540592756119
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper906/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper906/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper906/Authors"
],
[
"ICLR.cc/2019/Conference/Paper906/Authors"
],
[
"ICLR.cc/2019/Conference/Paper906/Authors"
],
[
"ICLR.cc/2019/Conference/Paper906/Authors"
],
[
"ICLR.cc/2019/Conference/Paper906/Authors"
],
[
"~Jeremy_Bernstein1"
],
[
"ICLR.cc/2019/Conference/Paper906/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper906/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper906/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers and ACs acknowledge that the paper has a solid theoretical contribution because it give a convergence (to critical points) of the ADAM and RMSprop algorithms, and also shows that NAG can be tuned to match or outperform SGD in test errors. However, reviewers and the AC also note that potential improvements for the paper a) the exposition/notations can be improved; b) better comparison to the prior work could be made; c) the theoretical and empirical parts of the paper are somewhat disconnected; d) the proof has an error (that is fixed by the authors with additional assumptions.) Therefore, the paper is not quite ready for publications right now but the AC encourages the authors to submit revisions to other top ML venues.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Comment\", \"comment\": \"I thank the authors for their response.\\nNevertheless, in face of the [Li and Orabona], I think that their contribution is incremental.\\n\\n- Indeed, only when \\\\sigma ->0, then [Li and Orabona] enable fast rate of 1/T. \\n This is relevant to stochastic settings where we use large batch sizes which decrease the variance. I don't see why this contradicts hypothesis (H4') in Page 3 (Just take S=0).\\n\\n- Again, the paper does not tell a coherent story and the two parts of the paper are somewhat unrelated.\\n\\nI therefore keep my score.\"}",
"{\"title\": \"Clarification about the notational issues raised.\", \"comment\": \"Thanks for your detailed review.\\nIn this note let me try to quickly clarify the notation issues that you have raised. \\n\\n1. \\nIn Definition 1(page 3) \\\"g\\\" is a typo! Apologies for this! We have now fixed this typo. \\nIt should be a \\\"f\\\" in terms of which the L-smoothness' defining inequality has been given there. \\n\\n2.\\nIn Definition 2(page 3) \\\"Support(v)\\\" denotes the set of indices of the d-dimensional vector v where the corresponding coordinate of v is non-zero. (...indeed here we are defining the matrix V^(-1/2) using a Penrose inverse and this is somewhat different from say the notation in the ICLR 2018 paper that you mention (https://openreview.net/forum?id=ryQu7f-RZ) because we wanted to have an unifying notation between not just RMSProp and ADAM but also for our Theorem 3.3 where there is no \\\\xi parameter...)\\n\\n3.\\nThere is no conceptual difference between the \\\"diag\\\" in Definition 2 (page 3) and the bold-faced diag that one sees in the paragraph headings on page 18 - this bold-facing in the later instance comes from LaTeX's default settings about how it prints headings in the \\\\paragraph{} environment. \\n\\n4.\\nThe \\\\nabla f(x) which occurs in our Algorithm 1 and 2 pseudocodes (on page 3) is not to be understood as necessarily being a full gradient. It is a notational stand-in for whatever the first-order oracle returns and this oracle could be noisy as we specify in the \\\"Input\\\" lines of these two algorithms. \\n\\nIn the first paragraph (page 1) when we give an informal description of NAG our use the notation \\\\nabla \\\\tilde{f}_{i_t} (distinct from the pseudocodes) is adapted to the specific model of stochasticity that we use in that paragraph which is specified as choosing \\\\tilde{f}_{i_t} randomly from the set of functions {f_1,..,f_k} over which the empirical risk is taking the average. \\n\\n5.\\nBoth in RMSProp as well as ADAM our notation \\\"g_t^2\\\" refers to a vector whose i-th coordinate is the square of the i-th coordinate of the vector g_t i.e if g_t \\\\in R^d then for all i in {1,..,d}, ((g_t)^2)_i = ((g_t)_i)^2 \\n\\nThanks for pointing out the references. We have now incorporated these references into a revised upload. Also we noticed that both these references (arXiv:1808.02941 and arXiv:1808.05671) came out after the first version of our work became public and infact both of them cite us. \\n\\nKindly also see the new VGG-9 with CIFAR-10 experiments in Appendix E (page 29) that we have now added based on suggestions from the Reviewer 1. Hopefully we have now convinced you of the correctness of our results and why they are very interesting.\"}",
"{\"title\": \"Added CIFAR-10 experiments and thanks for telling us about the Li-Orabona paper (and our comparisons to them)\", \"comment\": \"Thanks a lot for your review. Taking your suggestion we have added in Appendix E (page 29) a mini-batch image classification experiment on CIFAR-10 (using VGG-9) and have shown that the basic conclusion we had on MNIST continues to hold: that ADAM has lower test-errors as beta-1 is properly tuned (and in our experiments, as it gets close to 1) and at that point it is among the best performers (in terms of test-loss) when compared to RMSProp and NAG.\\n\\nAlso thanks a lot for bringing this paper ( https://arxiv.org/pdf/1805.08114.pdf) to our attention! We didnt know of this and its indeed a very beautiful paper on a related topic which seems to have appeared a couple of months before our work was completed. We agree that they get the claim to be the first analysis for adaptive gradient methods on non-convex functions and we have now put in a citation for their paper. (Also on the point of similarities, as in Li-Orabona proofs we too do not need projection steps to get convergence.) But we would like to explain that the relative advantages of our results are quite significant. \\n\\nThe algorithms they are analyzing are *not* the realistic RMSProp or ADAM which is what we consider. Unlike Li-Orabona, we demonstrate extensive experimental evidence in our paper to motivate the superiority on neural nets of the particular form of adaptivity that we write proofs about - which infact is much more involved than the modification of AdaGrad that they consider. \\n\\nMost importantly Li-Orabona's analysis is not for \\\"``fully\\\" adaptive algorithms for a very specific reason as we explain in the first point below in the following list of 3 specific reasons as to why we are doing better than them.\\n\\n1.\\nTheir non-asymptotic non-convex proof in Theorem 4 (page 7) can be seen as the closest analogue to our stochastic RMSProp proof. But they use a somewhat artificial form of AdaGrad as specified in their equation 3 (top of page 4).\\nThis particular choice of step-length (used in all but one of their proofs) essentially means that their adaptivity is uniform across all coordinates of the gradient! Hence they are removing a very important feature of adaptive algorithms that these algorithms scale different coordinates of the gradient by varying amounts! (empirically this feature is known to be extremely critical!)\\n\\nIn contrast in our analysis we allow the pre-conditioner to act on the entire gradient vector and thus each coordinate of the gradient gets its own non-trivial scaling. \\n\\nAlso Li-Orabona prevent the t-th preconditioner from depending on the t-th gradient. From experiments on RMSProp we know that this modification can significantly hurt performance. We do not do such modifications! \\n\\n\\n\\n2.\\nAlso their Theorem 4 (page 3) uses an epsilon > 0 setting (their epsilon is defined in their step-size definitions of equations 3 and 4 at the top of page 4). 
Now not only this means that theirs is not the \\\"true\\\" AdaGrad but to the best of our understanding , this also means that their convergence rate in Theorem 4 is *slower* than the standard rates that we can get for stochastic RMSProp (wth some assumptions on the training set) - which is a more complex algorithm!\\n\\n\\n3.\\nYou said \\\"Concretely they show that in the noiseless setting adaptive methods give a faster rate of $O(1/T)$ compared to the standard rate of $O(1/\\\\sqrt{T})$ of SGD\\\" But apologies that we cannot locate any specific theorem in Li-Orabona which shows such a thing. \\n\\nThe closest they come to such a result is their statement at the top of their page 6 where they say that under the assumption of their parameter sigma = 0 their convex stochastic adaptive proof is O(1/T) convergence as opposed to convex SGD's O(1/sqrt{T}) Kindly let us know if this is the statement in their paper that you had in mind. \\n\\nBut this argument on the top of their page 6 is not convincing to us because when sigma =0 their hypothesis (H4') in Page 3 becomes undefined! And Assumption H4' is necessary for their Theorem 1 (bottom of Page 4). And maybe more importantly for *any* sigma > 0, however small, their upperbound in Theorem 1 will eventually be dominated by the second term in the max which is scaling upwards with T as T^{1/2 + epsilon}. Hence to the best of our understanding, their convergence rates are guaranteed to be *slower* than O(1/sqrt{T}). \\n\\nWe hope we have convinced you as to why our results are significantly better than those in the Li-Orabona paper.\"}",
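To make point 1 concrete, the distinction can be written schematically (these are illustrative forms, not Li-Orabona's exact equations). A step size that is adaptive but uniform across coordinates takes the form

$$x_{t+1} = x_t - \frac{\alpha}{\sqrt{\epsilon + \sum_{s \le t} \lVert g_s \rVert^2}} \, g_t,$$

whereas RMSProp/ADAM-style preconditioning rescales each coordinate separately,

$$x_{t+1,i} = x_{t,i} - \frac{\alpha}{\sqrt{\epsilon + v_{t,i}}} \, g_{t,i}, \qquad v_{t,i} = \beta_2 \, v_{t-1,i} + (1 - \beta_2) \, g_{t,i}^2.$$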
"{\"title\": \"Clarifications about our Theorem 3.1\", \"comment\": \"Thanks a lot for your careful reading of our proofs and for bringing to our attention this one ambiguous step we had in the proof of Theorem 3.1 Indeed we messed up at that step. We would like to express our sincerest gratitude to you for bringing this to our attention. Kindly see the edited Theorem 3.1 in the revised file that we have uploaded - whereby we have now imposed a certain technical condition about the gradients of the functions contributing to the empirical loss. We note that this condition in no way makes the proof trivial.\\n\\nNow we have a somewhat detailed new Lemma A.1 starting in the middle of page 13. This lemma is invoked inside the proof (at the location where you raised your doubts) and the rest of the proof and the theorem's final guarantees remain unchanged. \\n\\nTo the best of our knowledge is the the first proof giving any non-trivial sufficient conditions for convergence to criticality for stochastic RMSProp. \\n\\nKindly also see the new VGG-9 with CIFAR-10 experiments in Appendix E (page 29) that we have now added based on suggestions from the Reviewer 1. Hopefully we have now convinced you of the correctness of our results and why they are very interesting.\"}",
"{\"title\": \"Thanks for sharing your thoughts!\", \"comment\": \"Thanks a lot for your detailed comments! We had seen your beautiful paper (https://arxiv.org/abs/1802.04434) and had already referred to that in our introduction. We critically use your paper in motivating our setup. We hadnt seen your second paper when we completed our work.\\n\\nAs Reviewer1 pointed out to us it seems that there has been this paper (https://arxiv.org/abs/1805.08114) a couple of months before our work was done and hence they rightfully get the credit for being the first proofs of convergence of adapative gradient methods in non-convex settings. But as we have explained in our response to Reviewer 1 below, that there are a number of reasons why we feel ours is a more natural setup and our results are much more substantial than the Li-Orabona paper. \\n\\nYour paper was definitely a few months before even the Li-Orabona paper but I think sign-based updates which are doing gradient compression should be thought of as a different category than gradient based updates. To our mind, these are conceptually different things. \\n\\nOfcourse there seems to be some underlying connection between the sign pattern of the gradients and convergence of stochastic adaptive methods as evidenced by the kind of assumptions we need to prove that stochastic RMSProp gets to criticality at standard speeds. To the best of our knowledge ours is the first theorem of its kind which gives some non-trivial sufficient conditions for stochastic RMSProp to get to criticality of non-convex functions. This is definitely a direction to look further into and understand this better. We look forward to more exchange of ideas between the two approaches!\"}",
"{\"title\": \"A summary of the edits in the revised version.\", \"comment\": \"We would like to express our sincere gratitude for the reviewers whose valuable feedback has helped improve the paper as reflected in the revised update. We have responded to their specific queries in the individual responses below.\\n\\nIn the revised version of the paper that we have uploaded we have made 2 main edits as follows,\\n(a) Now there is a new Appendix E showing that VGG-9 running on CIFAR-10 continues to demonstrate the interesting beta_1 sensitivity of ADAM that we had previously explored for autoencoders running on MNIST\\n(b) The statement of the Theorem 3.1 has been refined and its proof in Appendix A.1 has been appropriately updated. Now there is a somewhat elaborate Lemma A.1 which helps clarify some issues.\"}",
"{\"comment\": \"Hi there, I want to clarify the relevance of some prior work.\", \"the_authors_say\": \"\\\"To the best of our knowledge, this work gives the first convergence guarantees for adaptive gradient algorithms in the context of non-convex optimization. We show run-time bounds for (stochastic and deterministic) RMSProp and deterministic ADAM to reach approximate criticality on smooth non-convex functions.\\\"\\n\\nBut (Bernstein et al. 2018 a) have proved non-convex, stochastic convergence bounds for signSGD which is equivalent to Adam with all momentum switched off. Therefore the above statement from the authors seems like it may deserve at least some qualification.\\n\\nWhat's more since signSGD is a special case of Adam, it seems fair to ask the authors to analyse and discuss how their work relates to the signSGD work.\\n\\nAdmittedly attacking Adam is harder than signSGD. But it seems like a good strategy for attacking general stochastic Adam would be to extend the signSGD result. There is a refined analysis of signSGD in this paper: (Bernstein et al. 2018 b) which should be of interest. It seems fair not to expect the authors to know about this second work since it came out so recently.\\n\\n(Bernstein et al. 2018 a) https://arxiv.org/abs/1802.04434\\n(Bernstein et al. 2018 b) https://arxiv.org/abs/1810.05291\", \"title\": \"Clarifying the relevance of some prior work\"}",
"{\"title\": \"Nice idea , not so good presentation\", \"review\": \"Summary:\\nThis paper present a convergence analysis of the popular methods RMSProp and ADAM in the case of smooth non-convex functions. In particular it was shown that the above adaptive gradient algorithms are guaranteed to reach critical points for smooth non-convex objectives and bounds on the running time are provided. An empirical investigation is also presented with main focus on the comparison of the adaptive gradient methods and the Nesterov accelerated gradient algorithm (NAG).\", \"comments\": \"Although the results are promising, I found the reading (mainly because of the not defined notation) of this paper really hard. \\nIn terms of presentation, the motivation in introduction is fine, but the following section named \\\"Notations and Pseudocodes\\\" is confusing and has many undefined notations which makes the paper very hard to read. It gives the impression that the section was added the last minute. For example what is fundtion \\\"g\\\" in the definition 1? What is support(v) and the diag(v) in the definition 2. the diag(v) is more obvious to me but then why at page 18 the diag(v)at the top of the page is bold (are these two things different)?\\nIn the presentation of RMSProp what the $g_t^2$ means? Please have a look to last year's ICLR paper [Reddi, Sashank J., Satyen Kale, and Sanjiv Kumar. \\\"On the convergence of adam and beyond.\\\" (2018).] for a more appropriate introduction of the notation.\\n\\nIn the introduction the authors refer to NAG as a stochastic variant of the Nesterov's acceleration and they informally present the algorithm in the end of the first paragraph. There the update rule includes stochastic gradients \\\\nable f_i(.) while in the formal presentation in the update rule there is \\\\nabla f(x) which is the full gradient of the objective function of the original problem. I expect this difference is somehow justified from the mentioning in the algorithm of the possibly noisy oracle but this is never mention in the main text.\\n\\nIf the above statements in terms of presentation, are ignored the convergence results and numerical experiments are interesting. \\nHowever, the numerical evaluation does not correspond to the theoretical results. It is a comparison of NAG ,ADAM and RMSPROP with interesting conclusions that can be beneficial for practitioners that they use these methods.\", \"some_missing_references\": \"\", \"on_adam_methods\": \"1) Chen, Xiangyi, et al. \\\"On the convergence of a class of adam-type algorithms for non-convex optimization.\\\" arXiv preprint arXiv:1808.02941 (2018).\\n2) Zhou, Dongruo, et al. \\\"On the convergence of adaptive gradient methods for nonconvex optimization.\\\" arXiv preprint arXiv:1808.05671 (2018).\\nOn momentum (heavy ball) methods:\\n3) Loizou, Nicolas, and Peter Richt\\u00e1rik. \\\"Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods.\\\" arXiv preprint arXiv:1712.09677 (2017).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"There may exist an error on the proof of Stochastic RMSProp.\", \"review\": \"There may exist an error on the proof of Theorem 3.1 in appendix. For the first equation in page 13, the authors want to estimate lower-bound of the term $E<\\\\nabla{f}(xt),V_t^{-0/5}*gt>$. The second inequality $>$ may be wrong. Please check it carefully. (Hints: both the index sets { i | \\\\nabla{f}(xt))_{i}*gt_{i} <0 } and { i | \\\\nabla{f}(xt))_{i}*gt_{i} >0 } depend on the random variable $gt$. Hence, the expectation and summation cannot be exchanged in the second inequality.)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Missing a very relevant reference that discusses the exact same issue\", \"review\": \"*Summary:\\nThis paper analyzes the convergence of ADAM and RMSProp to stationary points\\nin the non convex setting.\\nIn the second part the authors experimantally compare the performance of these methods to Nesterov's Accelerated method.\\n\\n\\n\\n*Comments:\\n\\n-The paper does not tell a coherent story and the two parts of the paper are somewhat unrelated.\\n\\n-The authors claim that they are the first to analyze adaptive methods in the non-convex setting, yet this was recently done in \\n[Xiaoyu Li, Francesco Orabona; On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes]\\nThe authors should cite this paper and compare their results to it.\\n\\n-The above paper of [Li and Orabona] demonstrates a nice benefit of AdaGrad in the non-convex setting. Concretely they show that in the noisless setting adaptive methods give a faster rate of $O(1/T)$ compared to the standard rate of $O(1/\\\\sqrt{T})$ of SGD.\\n\\nUnfortunately, the results of the current paper do not illustrate the benefit of adaptive methods over SGD, since the authors provide similar rates to SGD or even worse rates in some situations.\\nI think that in light of [Li and Orabona] one should expect a $O(1/T)$ rate also for ADAM and RMSProp.\\n\\n\\n-The experimental part is not so related to the first part. And the experimental phenomena is only demonstrated for the MNIST dataset, which is not satisfying. \\n\\n\\n*Summary:\\nThe main contribution of this paper is to provide rates for approaching stationary points.\\nThis is done for ADAM and RMSProp, two adaptive training methods.\\nThe authors do not mention a very relevant reference, [Li and Orabona].\\nAlso, the authors do not show if ADAM and RMSProp have any benefit compared to SGD in the non-convex setting, which is a bit disappointing. Especially since [Li and Orabona] do demonstrate the benefit of AdaGrad in their paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJePRoAct7 | Graph U-Net | [
"Hongyang Gao",
"Shuiwang Ji"
] | We consider the problem of representation learning for graph data. Convolutional neural networks can naturally operate on images, but face significant challenges in dealing with graph data. Since images are special cases of graphs with nodes lying on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Net have been successfully applied to many image pixel-wise prediction tasks, similar methods are lacking for graph data. This is due to the fact that pooling and up-sampling operations are not natural on graph data. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer. The gUnpool layer restores the graph to its original structure using the position information of nodes selected in the corresponding gPool layer. Based on our proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Net. Our experimental results on node classification tasks demonstrate that our methods achieve consistently better performance than previous models. | [
"graph",
"pooling",
"unpooling",
"U-Net"
] | https://openreview.net/pdf?id=HJePRoAct7 | https://openreview.net/forum?id=HJePRoAct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1lZjKGGKS",
"Hyl445AfeN",
"r1gLmvhWlV",
"Hyx1gmL61V",
"HJlgrVbtkV",
"S1lr33nVy4",
"S1lM9Y5TAQ",
"SkgC0A0cRm",
"Syx3R8y5Cm",
"BkenHKxQAQ",
"r1enduxQRQ",
"rkgzTzwzCX",
"S1eOKzDfA7",
"ryeNpxwMRX",
"rJxkbxPGAQ",
"rkglzX79nX",
"BJxG0Xg9h7",
"HJxUcVxrhm",
"BJxCB1HUs7",
"HJxRweG0FQ",
"rJesYj2atX",
"SJgK6c3ptX",
"Bygr-fsTYX",
"rklHQEEpYX"
],
"note_type": [
"comment",
"comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1571068313330,
1544903211808,
1544828702241,
1544540903280,
1544258616010,
1543978157358,
1543510409760,
1543331541545,
1543268051596,
1542814020387,
1542813811638,
1542775482144,
1542775423584,
1542774971769,
1542774775181,
1541186311669,
1541174217593,
1540846734310,
1539882822364,
1538297957894,
1538276226801,
1538276032736,
1538269692600,
1538241565353
],
"note_signatures": [
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper905/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"~Jingjia_Huang1"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"ICLR.cc/2019/Conference/Paper905/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper905/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper905/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"~Michael_Bronstein1"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper905/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"if that github is official code of the paper;\", \"https\": \"//github.com/HongyangGao/gunet\\n\\nthe test procedure is not valid. Instead of doing a blind test, they reported the maximum test set accuracy during the epoch. So the results are not test results, but validation result.\\n\\nTo make it more clear, the researcher needs to stop their algorithm regardless of checking test accuracy. In that point, some certain number of predefined epochs can be used. Or some part of train set might be assigned as validation and algorithm needs to stop according to validation set. For instance, I check the valdiation result of PROTEIN dataset yes it is 79% as it reorted, but when I stop the algorithm according to predefined epoch, the test result become %73. \\n\\nSo if I did not miss some point in the code, this method accuracy on PROTEIN is not 79% but 73% which is not good at all.\\nif the researcher clarifies that point, I appreciate it.\", \"title\": \"Test procedure is not valid !\"}",
"{\"comment\": \"I found the proposed idea interesting, but there are a few issues in the experiments that should be addressed.\\n\\n1. Graph augmentation seems to be important to get state of the art results. Without it, this work is better than GAT only on one dataset (Cora) in node classification tasks. It is also not clear which step of graph augmentation is more important: using power 2 of adjacency matrix A or using weighted self connections 2I. To sum up, methods in Table 2 ideally should use the same preprocessing/augmentation, otherwise it is not a fair comparison.\\n2. For the graph classification experiments added as a comment here, it is hard to make any conclusion because of lack of details. For COLLAB the results in a couple of papers are better (80.7% in WL-OA [1] and, as mentioned by the authors, 82.13% in DiffPool [2]). Also, it is not clear how you cope with the fact the nodes are featureless in this dataset and node features are required for your model to learn the projection vector. Do the authors add artificial features such as node degrees or use any node embedding layer to generate strong features before feeding them to Graph U-Net similar to how they do in the node classification tasks? If so, these features/this layer should also be added to the baseline methods for fair comparison. For PROTEINS it is not clear whether authors used continuous node attributes in addition to discrete features. Previous works like WL-OA [1] use only discrete features. The results on D&D look very good and it would be great to report more results on large graphs for which efficient and fast pooling should prove beneficial. Some experiments with random large graphs would be a good start to show that it is much faster (?) than other pooling methods. \\n3. Have you tried adding node reconstruction loss for the graph classification task to improve the model?\\n4. I also agree with other reviewers that the proposed way to solve the problem of isolated nodes after pooling is not the best, yet the problem seems to be critical. Another challenging problem already touched by others is that some groups of nodes can be ignored as a result of pooling. To be convincing, the authors should somehow better show (quantitatively, qualitatively, theoretically, etc.) that they either solve these problems or that these problems are not a big deal. Again, authors could start with some synthetic graphs.\\n\\nI was trying to reproduce the graph classification results as in the authors\\u2019 comments here, but so far results using Graph U-Net are worse than just using baseline GCN (Kipf & Welling, 2017). \\nCan the authors provide all necessary details to reproduce graph classification results?\\nSince hyperparameters for graph classification models are not provided and I guess they are different from hyperparameters of node classification models, I am using hyperparameters of a related recent work [3] that adopted the pooling method from this submission.\", \"my_implementation_is_available_at_https\": \"//github.com/bknyaz/graph_nn.\\n\\nOverall, I understand that the purpose of this paper is to show that U-Net like architecture is also great for graph structured data. But given experiments, I am not completely convinced to apply this model to some graph problem.\\n\\n[1] Nils M. Kriege, Pierre-Louis Giscard, Richard C. Wilson. On Valid Optimal Assignment Kernels and Applications to Graph Classification.\\n[2] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure\\nLeskovec. 
Hierarchical graph representation learning with differentiable pooling. The Thirty-second Annual Conference on Neural Information Processing Systems (NIPS), 2018\\n[3] C\\u0103t\\u0103lina Cangea, Petar Veli\\u010dkovi\\u0107, Nikola Jovanovi\\u0107, Thomas Kipf, Pietro Li\\u00f2, Towards Sparse Hierarchical Graph Classifiers, NIPS Workshop on Relational Representation Learning (R2L), 2018\", \"title\": \"Interesting idea, but needs stronger experiments and some clarification\"}",
"{\"metareview\": \"The authors supplied an updated paper resolving the most important reviewer concerns after the deadline for revisions. In part, this was due to reviewers requesting new experiments that take substantial time to complete.\\n\\nAfter discussion with the reviewers, I believe that if the revised manuscript had arrived earlier, then it should be accepted. Without the new results I would recommend rejecting since I believe the original submission lacked important experiments to justify the approach (inductive setting experiments are very useful).\\n\\nThe community has an interest in uniform application of the rules surrounding the revision process. It is not fair to other authors to consider revisions past the deadline and we do not want to encourage late revisions. Better to submit a finished piece of work initially and not assume it will be possible to use up a lot of reviewer time and fix during the review process.\\n\\nWe also don't want to encourage shoddy, rushed experimental work. However, the way we typically handle requests from reviewers that require a lot of work to complete is by rejecting papers and encouraging them to be resubmitted sometime in the future, typically to another similar conference.\\n\\nThus I am recommending rejecting this paper on policy grounds, not on the merits of the latest draft. I believe that we should base the decision on the state of the paper at the same deadline that applies to all other authors.\\n\\nHowever, I am asking the program chairs to review this case since ultimately they will be the final arbiters of policy questions like this.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"difficult case\"}",
"{\"title\": \"Updated PDF link\", \"comment\": \"Sure, we totally understand. Since the update window has been closed at Nov 26, we created an anonymous link for an updated PDF of our paper with modification labeled by red color. We will update this version once it is enabled in the system. We appreciate if the reviewer can evaluate our work by considering this updated version of our paper.\", \"https\": \"//documentcloud.adobe.com/link/track?uri=urn%3Aaaid%3Ascds%3AUS%3A04840d73-14ec-4978-a4a9-c2c78e8a5dea\"}",
"{\"comment\": \"Intuitively, nodes that are close in the spatial domain and have similar neighbours will have similar node embeddings after the GCN layer, which means they would be likely to have close ranking scores. As a result, the whole graph may be pooled into a certain group of nodes that are close in the graph, which probably destroys the inherent stucture of the original graph.\\nSo, my question is did you do any work to visuaize the gPool progress? Are the selected nodes sparsely distributed in different parts of the graph or stayed very close to each others?\", \"title\": \"What's about the sparsity of the graph after gPool?\"}",
"{\"title\": \"Rebuttal phase 3\", \"comment\": \"Thank you for your comments.\\n\\nSince we spent most of the rebuttal time on adding experiments of graph classification tasks, we have not got enough time to update the PDF of our paper (the paper modification deadline was Nov 26). We are working on a new version of the paper with all the updates that we claimed in comments, such as new experimental results and improved literature review. We will update our paper once PDF update is enabled in the system. We appreciate if the reviewer can evaluate our work by considering the additional information in the rebuttal part of this submission along with the original PDF file.\"}",
"{\"title\": \"Rebuttal phase 2 clarification\", \"comment\": \"Here is just a friendly clarification that we address reviewer's comments (for part 1 and part 2) together in the post below. Thank you.\"}",
"{\"title\": \"Rebuttal phase 2\", \"comment\": \"\\u201cRegarding upsampling, it is worth mentioning that the choice of distributing the features of the tracked indices while keeping the other rows in the feature matrix to zero is a design choice. One could think of copying the features of one of the nearest neighbor instead (i.e. the features of the closest node -- in terms of number of hops -- selected during pooling), following the nearest neighbor upsampling commonly used in image-based architectures.\\u201d\\n\\nThanks for your suggestion. That is an excellent point. Indeed, we already used the method of nearest neighbor in our model. In decoder part, we have a GCN layer after each gUnpool layer. In each gUnpool layer, the feature vectors of previously unselected nodes are initially set to zero. The following GCN layer will first do a feature summation from neighboring nodes, which are the nearest neighbors of one hop in the graph. This means the previously unselected nodes are actually initialized by their nearest neighbors, which actually followed your suggestions. Thanks again.\\n\\n\\u201cRegarding the number of parameters, it is hard to assess whether improvement comes from the contributed architecture or an increased number of parameters without having access to a comparison in terms of number of parameters.\\u201d\\n\\nThanks for your comments. Although no model reports their numbers of parameters, we can provide a comparison between our work and DiffPool based on our calculations. DiffPool achieved state-of-the-art performances on graph classification tasks as shown in the table above.\\n\\nDiffPool claimed that they employed a 12-layer network with 8 GCN layers, 2 DIFFPOOL layers, and 2 MLP layers. On small datasets like ENZYMES, the number of hidden dimensions used in GCN and MLP is set to 64 in GCN layers. For large datasets like D&D, this number is 128. \\nIn each DIFFPOOL layer, they use a GraphSage model to generate an assignment matrix, which contains at least one GCN-like layer. By taking the case for small datasets, the number of parameters can be estimated as: 8* 64*64 (for GCN layers) + 2*64*64 (for DIFFPOOL layers) + 2*64*64 (for MLP) = 49152.\\n\\nFor our g-U-Net with the depth of 4, there are 9 GCN layers, each of which employs the hidden dimension of 48. There are 4 gPool layers, each of which contains 48 parameters. So, the total number of parameters is: 9*48*48 (for GCN layers) + 4*48 (for gPool layers) = 20928. From this calculation, the number of parameters of our model in graph classification tasks is far less than that of DiffPool. Notably, we employ the same model architecture for both small and large datasets. Actually, one advantage of our gPool layer is that it doesn\\u2019t involve many trainable parameters, which reduces the risk of overfitting.\\n\\nAlso, we perform ablation study in Section 4.4 by comparing performances between g-U-Net and g-U-Net without using gPool and gUnpool layers. Without these layers, the performance decreases significantly (-2.3%) with only about 100 fewer parameters. This also demonstrates that the performance improvements are due to our proposed methods not the number of parameters.\\n\\n\\u201cRegarding the literature review, all graph-related papers seem relevant to this work to me. The image-based papers are good literature to showcase existing upsampling methods that could have been taken into account (see previous comment related to nearest neighbor upsampling).\\u201d\\n\\nThanks a lot for pointing out these. 
We will add these citations to corresponding parts in the final version of this work.\"}",
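As a quick sanity check of the parameter estimates quoted in this rebuttal, the arithmetic can be reproduced directly; the layer counts and hidden widths are taken at face value from the comment above, and the per-layer cost is assumed to be a single weight matrix (biases and the internals of the GraphSAGE assignment network are ignored, as in the original back-of-the-envelope estimate):

```python
# DiffPool estimate: 8 GCN layers + 2 DIFFPOOL layers + 2 MLP layers, width 64
diffpool_params = 8 * 64 * 64 + 2 * 64 * 64 + 2 * 64 * 64
# g-U-Net estimate: 9 GCN layers at width 48, plus 4 gPool projection vectors
g_u_net_params = 9 * 48 * 48 + 4 * 48

print(diffpool_params)  # 49152, matching the figure in the rebuttal
print(g_u_net_params)   # 20928, matching the figure in the rebuttal
```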
"{\"title\": \"Re: rebuttal\", \"comment\": \"Thank you for the response. I stand by my original rating.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you very much.\"}",
"{\"title\": \"Thanks for the response!\", \"comment\": \"Thanks for the thorough response! Adding the new graph classification results is great, and I agree that a strength of the proposed approach is that it is simpler and less prone to overfitting (e.g., it does not need the auxiliary link prediction objective for stability).\\n\\nI've revised my score to reflect your response.\"}",
"{\"title\": \"Rebuttal to AnonReviewer1 (Part 1)\", \"comment\": \"Thank you for your comments.\\n\\n\\\"-Given that the main contribution of the paper is the introduction of a pooling operation for graph structured data, it might be a good idea to evaluate the operation in a task that does require some kind of downsampling...\\\"\\n\\nTo evaluate our gPool layer on down-sampling-required tasks, we add more experiments on graph classification tasks under inductive learning settings on three standard datasets; those are D&D, Proteins, and Collab datasets with 1178, 1113, and 5000 graphs, respectively. The results including comparison with a state-of-the-art graph pooling method [1] are summarized in Table below. Our proposed methods outperform baseline models including DiffPool on two out of three datasets and achieve new state-of-the-art performances. Notably, the result reported by DiffPool-DET on Collab is significantly higher than other baselines and the other two DiffPool models. \\n\\nNote that the primary contribution of our work is to develop both graph pooling and unpooling layers that together enable the development of graph U-nets. Evaluation of our methods on graph classification tasks only involve pooling layers, which is not a comprehensive evaluation of our proposed methods.\\n\\n++++++++++++++++++++++++++++++++++++++++++++++\\n__________________|____D&D__|__ PROTEINS__|__COLLAB__\\n________PSCN____|___ 76.27__|____75.00_____|____72.60____\\n_______DGCNN__ |___ 79.37__|____76.26_____|____73.76____\\n___DiffPool-DET_|___ 75.47__|____75.62_____|____82.13____\\n_DiffPool-NOLP_|___ 79.98__|____76.22_____|____75.58____\\n______DiffPool___|___ 80.64__|____76.25_____|____75.48____\\n______g-U-Net___|___ 82.43__|____77.68_____|____77.56____\\n\\nDue to time constraint, we will add these results in the final version of our paper.\\n\\n\\\"-Authors claim that one of the motivations to perform their pooling operation is to increase the receptive field. It would be worth comparing pooling/upsamping to dilated convolutions...\\\"\\n\\nThanks for your suggestion. Dilated convolution is not defined on graphs since it is not clear how to define locality on graph data. Actually, regular convolution operations are not available on graph data. GCN only performs a linear transformation after a simple summation from neighboring nodes. To our knowledge, trainable filters on spatial dimension are not available on graph data. It\\u2019s hard to compare with dilated convolutions on graph data.\\n\\n[1] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure\\nLeskovec. Hierarchical graph representation learning with differentiable pooling. The Thirty-second Annual Conference on Neural Information Processing Systems (NIPS), 2018\"}",
"{\"title\": \"Rebuttal to AnonReviewer1 (Part 2)\", \"comment\": \"Thank you for your comments.\\n\\n\\\"-Some choices in the method seem rather arbitrary, such as the tanh non-linearity in \\\\tilde y. Could the authors elaborate on that? How important is the gating?\\\"\\n\\nThe choices are not arbitrary. In our experiments, the tanh outperforms other commonly employed gated operations. As we stated in Section 3.1 of our paper, the gated operations are very important for training the projection vector p: \\u201cNotably, the gate operation makes the projection vector p trainable by backpropagation\\u2026\\u201d. \\n\\n\\\"-It would be interesting to analyze which nodes where selected by the pooling operators. Are those nodes close together or spread out in the previous graph?\\\"\\n\\nThank you for your suggestion. Due to time limitation, we will add some graph visualization in the final version of our paper.\\n\\n\\\"-...Have the authors tried other upsampling strategies analogous to the ones typically used for images (e.g. upsampling with nearest neighbors)?\\\"\\n\\nWe cannot try other up-sampling strategies since there are still no up-sampling methods for graphs currently. Most up-sampling operations on images need locality information such as deconvolution layer. But it is hard on graph data since numbers of neighing nodes are not fixed and they are not ordered.\\n\\n\\\"-When skipping information from the downsampling path to the upsampling path, is there a concatenation or a summation? How do both operations compare? (note that concatenation introduces many more parameters) How about only skipping only the indices (no summation nor concatenation)? This kind of analysis, as it has been done in the computer vision literature, would be interesting.\\\"\\n\\nWe use summation for skip connection. We compared these two skip connection strategies and found summation worked better. Summation operation can reduce the number of parameters compared to concatenation, which helps to avoid overfitting. Due to page limit, and these are not our main contribution, we did not put such information on the paper. When published, we will release our code, which includes all such implementation details.\\n\\n\\\"-What is the influence of the first embedding layer to reduce the dimensionality of the features?\\\"\\n\\nSince we worked on three citation network datasets, the initial feature vectors are bag-of-words representations, which is really sparse and has high-dimension. We use an embedding layer to reduce them into low-dimensional representations to avoid over-fitting. In practice, this layer is very important since it helps reduce parameters, thereby resulting in better generalization and performance.\\n\\n\\\"-How do the models in Table 2 compare in terms of number of parameters?\\\"\\n\\nWe didn\\u2019t provide such kind of comparisons in our paper since baseline models did not report the number of parameters in their works. But in our models, the numbers of parameters are only around 20 thousand depending on the datasets.\\n\\n\\\"-What's the influence of imposing larger weights on self loop in the graph?\\\"\\n\\nImposing larger weights promotes the performance slightly. Since it is a very popular way in traditional machine learning methods on graph data, we did not provide analysis on this.\\n\\n\\\"-What about experiments in inductive settings?\\\"\\n\\nWe add experiments on graph classification tasks which are under inductive learning settings. The results are summarized in Table above. 
Our approaches achieve new state-of-the-art performance on graph classification tasks under inductive settings.\\n\\n\\\"-Please add references for the following claim \\\"U-Net models with depth 3 or 4 are commonly used...\\\"\\\"\\n\\nSure, we will add some. Thanks.\\n\\n\\\"-Please double check your references, e.g. in the introduction, citations used for CNNs do not always correspond to CNN architectures. \\\"\\n\\nSure, we will. Thanks.\\n\\n\\\"-The literature review could be significantly improved, missing relevant papers to discuss include:...\\\"\\n\\nThank you for providing these helpful references. I will add some related to the paper. But some references listed here are not very related to my work. For example, - Bruna et al. Spectral networks and locally connected networks on graphs, 2014 works on spectral networks. Also, Isola et al. Image-to-image translation with conditional adversarial networks, 2016 is a GAN work on images and is not related to my work. I will add some related works in the final version of our paper. Thanks.\"}",
"{\"title\": \"Rebuttal to AnonReviewer2\", \"comment\": \"Thank you for your comments.\\n\\n\\\"- It is not clear why the evaluation seem to only be done for the transductive learning settings. I understand that some of the previous work might have done that, but this application scenario is quite limited.\\\"\\n\\nThank you for your suggestions. We applied our proposed approaches to graph classification problems under inductive learning settings. We add more experiments on several graph classification datasets including D&D, Proteins, and Collab datasets, which are standard datasets employed in graph classification tasks under inductive learning settings. The results are summarized in Table below. It can be seen that our proposed pooling method outperforms DiffPool [1] by margins of 1.79% and 1.43% on two datasets. Notably, the result reported by DiffPool-DET on Collab is significantly higher than other baselines and the other two DiffPool models. This demonstrates that our proposed methods can be applied to node classification and graph classification tasks under both transductive and inductive learning settings. \\n\\n++++++++++++++++++++++++++++++++++++++++++++++\\n___________________|____D&D__|__ PROTEINS__|__COLLAB__\\n________PSCN_____|___ 76.27__|____75.00_____ |____72.60____\\n_______DGCNN___|___ 79.37__|____76.26_____ |____73.76____\\n___DiffPool-DET_|___ 75.47__|____75.62_____ |____82.13____\\n_DiffPool-NOLP_|___ 79.98__|____76.22_____ |____75.58____\\n______DiffPool___|___ 80.64__|____76.25_____ |____75.48____\\n______g-U-Net___|___ 82.43__|____77.68_____ |____77.56____\\n\\nDue to time limitation, we will add these results in the final version of our paper.\\n\\n\\\"- One concern about the g-pool operation is that it is not local: unlike e.g. max pool on 2D which produces local maxima, here the selection is done globally, which could lead to situations where the entire parts of the graph are completely ignored. \\\"\\n\\nYes, our gPool operation is performed on global scope instead of local. This is due to the fact that defining locality is very hard especially for pooling operations. Unlike grid-like data such as images and texts, there is no obvious rule to group some nodes into a local patch for pooling operations. In DiffPool [1], they also define a global pooling operation. The difference is that their method learns an assignment matrix to softly assign each node to nodes in the new graph. While our approach is more similar to the regular global k-max pooling operation.\\n\\nAlthough some parts of the graph are abandoned in a gPool layer, our proposed gUnpool layer and graph U-Net architecture will restore the graph structure in the decoder part for feature representation learning. Therefore, we don\\u2019t need to worry about the loss of node information by employing gUnpool layer and graph U-Net architecture.\\n\\n\\\"- Another concern, which has been partially addressed in section 3.4 is that the connectivity is not really taken into account when downsampling the adjacency matrix. The solution which introduces previously non-existing edges and thus kind of modifies the original graph is not very satisfying. \\\"\\n\\nWhen performing pooling operation, the non-existing edges are introduced based on the fact that we employ GCN layers before our proposed gPool layers. Each GCN layer will aggregate one-hop neighboring nodes information for each node in the graph. This means that two nodes that are two hops away will have information communication. 
Based on this fact, we employ graph power of 2 to augment the graph connectivity to avoid isolated nodes in the graph.\\n\\nAlso, this method will partially solve the connectivity loss problem when down-sampling the graph. But it\\u2019s hard to maintain original graph connectivity when we need to sample some important nodes out especially on sparsely-connected graphs.\\n\\n[1] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure\\nLeskovec. Hierarchical graph representation learning with differentiable pooling. The Thirty-second Annual Conference on Neural Information Processing Systems (NIPS), 2018\"}",
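To make the connectivity augmentation described in this rebuttal concrete, here is a minimal sketch of pooling an adjacency matrix via the second graph power; NumPy, the function name, and the dense-matrix representation are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def pool_adjacency_with_power(adj, idx):
    """Connect nodes within two hops before restricting to the sampled
    nodes idx, reducing the chance of isolating nodes in a sparse graph."""
    adj2 = ((adj @ adj + adj) > 0).astype(adj.dtype)  # reachable in <= 2 hops
    np.fill_diagonal(adj2, 0)        # drop self-loops introduced by adj @ adj
    return adj2[np.ix_(idx, idx)]    # induced subgraph on the kept nodes
```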
"{\"title\": \"Rebuttal to AnonReviewer3\", \"comment\": \"Thanks for your comments.\\n\\n\\\"-The primary shortcoming of this paper is that it only evaluates the model on three citation network datasets (Cora, Citseer, and Pubmed). While these datasets are now standard in the GCN/GNN community, they are very small, have few labeled examples, and it would greatly strengthen the paper to use a different dataset or two, e.g., the Reddit or PPI datasets from Hamilton et al. 2017 or the BlogCatolog dataset used in Grover et al. 2016 could be used for node classification. Or the authors could apply the proposed technique to graph classification or link prediction. In this reviewers opinion, it is very hard to judge the general utility of a method when results are only provided on these three very-specific datasets, where the performance differences between methods are now very marginal. \\\"\\n\\nTo evaluate our proposed gPool method for graph classification tasks on other datasets, we add more experiments on several graph classification datasets, including D&D, Proteins, and Collab datasets, which are standard datasets employed in such experiment settings. D&D, Proteins, and Collab datasets contain 1178, 1113, and 5000 graphs and 284.32, 39.06, and 74.49 average numbers of nodes on each graph, respectively. The results are summarized in Table below. We can observe from the results that our proposed gPool method outperforms DiffPool [1] by margins of 1.79% and 1.43% on D&D and Proteins. Notably, the result obtained by DiffPool-DET on Collab is significantly higher than all other methods and the other two DiffPool models. On all three datasets, our model outperforms baseline models including DiffPool. In addition, DiffPool claimed that their training utilized auxiliary task of link prediction to stabilize model performance. But in our experiments, we only use graph labels for training without any auxiliary tasks to stabilize training.\\n\\n++++++++++++++++++++++++++++++++++++++++++++++\\n___________________|____D&D__|__ PROTEINS__|__COLLAB__\\n________PSCN_____|___ 76.27__|____75.00_____|____72.60____\\n______DGCNN____|___ 79.37__|____76.26_____|____73.76____\\n__ DiffPool -DET_|___ 75.47__|____75.62_____|____82.13____\\n_DiffPool-NOLP_ |___ 79.98__|____76.22_____|____75.58____\\n______DiffPool____|___ 80.64__|____76.25_____|____75.48____\\n______g-U-Net____|___ 82.43__|____77.68_____|____77.56____\\n\\nDue to time constraint, we will add these results in the final version of our paper.\\n\\n\\\"-In a related point, while this work cites other approaches that apply pooling operations in graph neural networks (e.g., Ying et al. 2018, Simonovsky and Komodakis 2018), no comparisons are made against these approaches... \\\"\\n\\nThe work proposed in DiffPool can be used for constructing un-pool layers. Our proposed approaches are similar to regular pooling and un-pooling layers used on images and texts. We selected some important nodes to form a new graph using original edges. For DiffPool, the graph will become softly connected with every two nodes connected by a probability rate. In addition, our proposed pooling layer only involves a very small number of extra parameters, which are trainable projection vectors. While in DiffPool, a network is employed for each diff-pool layer to learn the assignment matrix. That may increase the risk of overfitting and make the training unstable. 
Actually, to stabilize training, DiffPool employs an auxiliary task of link prediction during training in graph classification tasks.\\n\\n\\\"-As another minor point, whereas unpooling operations can be used to define a generative model in the image setting, this is not the case here, as the unpooling operation relies on knowledge about the input graph (i.e., the model always unpools to the same connectivity structure). This is not necessarily a bad thing, but it could improve the paper to clarify this issue. \\\"\\n\\nSure, our proposed gUnpool layer corresponds to the regular un-pool layer used on images. The regular un-pool layer also needs the pooled position information in corresponding regular pooling layer to restore the original image structure. We will add this clarification in the final version of our paper.\\n\\n[1] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure\\nLeskovec. Hierarchical graph representation learning with differentiable pooling. The Thirty-second Annual Conference on Neural Information Processing Systems (NIPS), 2018\"}",
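For reference, the position-based unpooling discussed in these rebuttals can be sketched in a few lines; PyTorch and all names here are illustrative assumptions, not the authors' code:

```python
import torch

def gunpool(x_pool, idx, num_nodes):
    """Scatter pooled node features back to their recorded positions and
    leave the remaining rows zero; a subsequent GCN layer then fills the
    zero rows from their one-hop neighbours, which is the nearest-neighbour
    initialization the authors describe."""
    x = x_pool.new_zeros((num_nodes, x_pool.size(1)))  # (N, C) of zeros
    x[idx] = x_pool                                    # restore kept nodes
    return x
```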
"{\"title\": \"An interesting paper that could benefit from more empirical comparisons\", \"review\": \"* I have revised my score upwards due to the authors response to my concerns --- particularly the addition of new results on graph classification. The original review remains here, and I respond to the author's response below.\\n\\nThe authors propose a new technique to add \\u201cpooling\\u201d and \\u201cunpooling\\u201d layers to a graph neural network (GNN). To deal with the lack of spatial locality in graphs, the downsampling operation relies on a learned scalar projection vector (which gives the \\u201cscores\\u201d for selecting different nodes). During upsampling, the model simple relies on storing the un-sampled adjacency matrix. Thorough experimental results on Cora, Citeseer, and Pubmed highlight the utility of the approach, with ablation studies isolating the importance of the pool/unpool operations.\\n\\nOverall, this is an interesting paper with the possibility of having a moderate impact within the area of GNNs/GCNs, and the method is clearly described. While there are a number of minor modifications made to the standard GCN model, which could potentially confound the results, the authors do provide a sensible ablation study to isolate the importance of their pool/unpool operations. The overall results on the three node classification datasets are also quite strong. \\n\\nThe primary shortcoming of this paper is that it only evaluates the model on three citation network datasets (Cora, Citseer, and Pubmed). While these datasets are now standard in the GCN/GNN community, they are very small, have few labeled examples, and it would greatly strengthen the paper to use a different dataset or two, e.g., the Reddit or PPI datasets from Hamilton et al. 2017 or the BlogCatolog dataset used in Grover et al. 2016 could be used for node classification. Or the authors could apply the proposed technique to graph classification or link prediction. In this reviewers opinion, it is very hard to judge the general utility of a method when results are only provided on these three very-specific datasets, where the performance differences between methods are now very marginal. \\n\\nIn a related point, while this work cites other approaches that apply pooling operations in graph neural networks (e.g., Ying et al. 2018, Simonovsky and Komodakis 2018), no comparisons are made against these approaches. One would suppose that these comparisons are not made because this paper only tests the graph U-net for node classification, but it would greatly strengthen this paper to add comparisons to these other pooling operations, e.g., for graph classification. Moreover, it is possible to define analogous unpooling operations for Ying et al. 2018 and Simonovsky and Komodakis 2018, similar to the unpooling operation used in this work (e.g., for Ying et al.\\u2019s DiffPool you can just \\u201cunpool\\u201d to the previous graph and assign each node a feature corresponding to the weighted sum of the features of the assigned clusters). Of course, it would require significant work (e.g., experiments on graph classification or some modifications of existing approaches) to actually test whether the pool approach proposed here is actually better than those in Ying et al. 
2018 and Simonovsky and Komodakis 2018, but such comparisons are necessary to demonstrate whether the pooling operation proposed here is an improvement over existing works, or whether the primary novelty is the combined application of pooling and unpooling in a node classification setting. \\n\\nAs another minor point, whereas unpooling operations can be used to define a generative model in the image setting, this is not the case here, as the unpooling operation relies on knowledge about the input graph (i.e., the model always unpools to the same connectivity structure). This is not necessarily a bad thing, but it could improve the paper to clarify this issue.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"interesting problem of pooling/upsampling graphs, experimental validation and literature review could be significantly improved\", \"review\": \"This paper proposes pooling and upsampling operations for graph structured data, to be interleaved with graph convolutions, following the spirit of fully convolutional networks for image pixel-wise prediction. Experiments are performed on node classification benchmarks, showing an improvement w.r.t. architectures that do not perform any downsampling/upsampling operations.\\n\\nGiven that the main contribution of the paper is the introduction of a pooling operation for graph structured data, it might be a good idea to evaluate the operation in a task that does require some kind of downsampling, such as graph classification / regression. Moreover, authors should compare to other graph pooling methods.\\n\\nAuthors claim that one of the motivations to perform their pooling operation is to increase the receptive field. It would be worth comparing pooling/upsamping to dilated convolutions to see if they have the same effect on the performance when dealing with graphs. \\n\\nSome choices in the method seem rather arbitrary, such as the tanh non-linearity in \\\\tilde y. Could the authors elaborate on that? How important is the gating?\\n\\nIt would be interesting to analyze which nodes where selected by the pooling operators. Are those nodes close together or spread out in the previous graph?\\n\\nThe proposed unpooling operation seems to be the same as unpooling performed to upsample images, that is using skip connections to track indices, by recovering the position where the max value comes from and setting the rest to 0. Have the authors tried other upsampling strategies analogous to the ones typically used for images (e.g. upsampling with nearest neighbors)?\\n\\nWhen skipping information from the downsampling path to the upsampling path, is there a concatenation or a summation? How do both operations compare? (note that concatenation introduces many more parameters) How about only skipping only the indices (no summation nor concatenation)? This kind of analysis, as it has been done in the computer vision literature, would be interesting.\\n\\nWhat is the influence of the first embedding layer to reduce the dimensionality of the features?\\n\\nHow do the models in Table 2 compare in terms of number of parameters?\\n\\nWhat's the influence of imposing larger weights on self loop in the graph?\\n\\nWhat about experiments in inductive settings?\\n\\nPlease add references for the following claim \\\"U-Net models with depth 3 or 4 are commonly used...\\\"\\n\\nPlease double check your references, e.g. in the introduction, citations used for CNNs do not always correspond to CNN architectures.\\n\\nThe literature review could be significantly improved, missing relevant papers to discuss include:\\n- Gori et al. A new model for learning in graph domains, 2005.\\n- Scarselli et al. The graph neural network model, 2009.\\n- Bruna et al. Spectral networks and locally connected networks on graphs, 2014.\\n- Henaff et al. Deep convolutional networks on graph-structured data, 2015.\\n- Niepert et al. Learning convolutional neural networks for graphs, 2016.\\n- Atwood and Towsley. Diffusion-convolutional neural networks, 2016.\\n- Bronstein et al. Geometric deep learning: going beyond Euclidean data, 2016.\\n- Monti et al. Geometric deep learning on graphs and manifolds using mixture model cnns, 2017.\\n- Fey et al. 
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels, 2017.\\n- Gama et al. Convolutional Neural Networks Architectures for Signals Supported on Graphs, 2018.\", \"as_well_as_other_pixel_wise_architecture_for_image_based_tasks_such_as\": [\"Long et al. Fully Convolutional Networks for Semantic Segmentation, 2015.\", \"Jegou et al. The one hundred layers tiramisu: fully convolutional densenets for semantic segmentation, 2016.\", \"Isola et al. Image-to-image translation with conditional adversarial networks, 2016.\", \"Zhao et al. Stacked What-Where auto-encoders, 2015.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, clearly written and has some interesting ideas\", \"review\": \"Summary:\\nThis paper introduces an encoder-decoder neural net architecture for arbitrary graphs. The core contribution is pooling and un-pooling operations for respectively graph down and up sampling.\", \"pros\": [\"U-Net like architectures indeed are very successful in vision applications, and having a model that was similar properties on graphs would be very useful.\", \"The paper is clearly written.\", \"I really liked the idea behind the pooling operation: it is simple, seems easy to implement efficiently, and generally makes sense (although see concerns below).\", \"The choice of the baselines is reasonable, and experimental results seem convincing. Ablation studies are also there.\"], \"cons\": [\"It is not clear why the evaluation seem to only be done for the transductive learning settings. I understand that some of the previous work might have done that, but this application scenario is quite limited.\", \"One concern about the g-pool operation is that it is not local: unlike e.g. max pool on 2D which produces local maxima, here the selection is done globally, which could lead to situations where the entire parts of the graph are completely ignored.\", \"Another concern, which has\", \"been partially addressed in section 3.4 is that the connectivity is not really taken into account when downsampling the adjacency matrix. The solution which introduces previously non-existing edges and thus kind of modifies the original graph is not very satisfying.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Baselines\", \"comment\": \"Thank you for the interest in our work and references. We were aware of these work. But our work mainly focus on graph pooling and un-pooling operations, which are orthogonal to methods in these papers. We would like to add these references as needed in our final version.\"}",
"{\"comment\": \"I believe that many important baseline algorithms for deep learning on graphs are missing, in particular CayleyNet [1] (a generalization of ChebNet using rational functions) MoNet [2] (a more general model of which GAT is a subsetting), and the recent Dual/Primal Graph CNN [3]. Please refer to a review paper [8] on geometric deep learning methods.\\n\\n1. CayleyNets: Graph convolutional neural networks with complex rational spectral filters, arXiv:1705.07664,\\n\\n2. Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017. \\n\\n3. Dual-Primal Graph Convolutional Networks, arXiv:1806.00770.\\n\\n4. Geometric deep learning: going beyond Euclidean data, IEEE Signal Processing Magazine, 34(4):18-42, 2017\", \"title\": \"important baselines missing\"}",
"{\"title\": \"possible further studies on gating\", \"comment\": \"Sure, I totally agree with you. We can do more experiments about this part. Very happy to have this great discussion with you.\"}",
"{\"comment\": \"Thank you for the prompt reply!\\n\\nWhile I am surprised that the tanh performs the best, if this is indeed the case then of course you should use it. I would definitely recommend that you clarify (within the paper) how the tanh function was arrived at during the revision period.\\n\\nFurther, assuming your intuition about the benefits of tanh is correct, then the vectors substantially opposite of p could be useful too, right? This motivates another experiment, where you\\u2019d take the top-k from y^2, rather than y (to give the opposite direction equal footing). \\n\\nWhat do you think?\\n\\nOnce again, thanks for promptly responding to my query, and best of luck with the reviews.\", \"title\": \"possible further studies on gating\"}",
"{\"title\": \"Why use tanh gating\", \"comment\": \"Hi, thank you for your appreciation and question. Actually, we have tried sigmoid, tanh and softmax. tanh performs the best. We have thought about reasons. There are some possible explanations. The values in y vector are the scalar projection values. The negative values do not mean they are negligible just because they are in the opposite direction of vector p. So if we do sigmoid, their corresponding node vectors will become trivial. And also tanh is zero centered, which facilitates the training of projection vectors. So we choose to use tanh for gate operation. Also the use of tanh can regularize node vectors such that they are in the same direction of projection vector p. We are not sure if this can help with the feature encoding. We may try to investigate this in the future. Hope these explanations can help you. Happy to have future discussion with you if any question. Thank you.\"}",
"{\"comment\": \"Very interesting work!\\n\\nI was wondering, why the hyperbolic tangent activation was used for the gating mechanism in your architecture? The choice doesn't seem to be motivated anywhere in the paper, and given that its output can be negative (and therefore inadvertently flip the activation), the logistic sigmoid should be more appropriate. \\n\\nCould you please comment on this decision?\\n\\nThanks!\", \"title\": \"tanh gating?\"}"
]
} |
|
S1gDCiCqtQ | Learning Representations in Model-Free Hierarchical Reinforcement Learning | [
"Jacob Rafati",
"David Noelle"
] | Common approaches to Reinforcement Learning (RL) are seriously challenged by large-scale applications involving huge state spaces and sparse delayed reward feedback. Hierarchical Reinforcement Learning (HRL) methods attempt to address this scalability issue by learning action selection policies at multiple levels of temporal abstraction. Abstraction can be had by identifying a relatively small set of states that are likely to be useful as subgoals, in concert with the learning of corresponding skill policies to achieve those subgoals. Many approaches to subgoal discovery in HRL depend on the analysis of a model of the environment, but the need to learn such a model introduces its own problems of scale. Once subgoals are identified, skills may be learned through intrinsic motivation, introducing an internal reward signal marking subgoal attainment. In this paper, we present a novel model-free method for subgoal discovery using incremental unsupervised learning over a small memory of the most recent experiences of the agent. When combined with an intrinsic motivation learning mechanism, this method learns subgoals and skills together, based on experiences in the environment. Thus, we offer an original approach to HRL that does not require the acquisition of a model of the environment, suitable for large-scale applications. We demonstrate the efficiency of our method on two RL problems with sparse delayed feedback: a variant of the rooms environment and the ATARI 2600 game called Montezuma's Revenge.
| [
"Reinforcement Learning",
"Model-Free Hierarchical Reinforcement Learning",
"Subgoal Discovery",
"Unsupervised Learning",
"Temporal Difference",
"Temporal Abstraction",
"Intrinsic Motivation",
"Markov Decision Processes",
"Deep Reinforcement Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1gDCiCqtQ | https://openreview.net/forum?id=S1gDCiCqtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJlpWINWgV",
"BJexixtqhX",
"SkxOFb1937",
"S1eU-a3K3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544795653199,
1541210264072,
1541169536308,
1541160190366
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper904/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper904/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper904/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper904/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Pros:\\n- good results on Montezuma\", \"cons\": [\"moderate novelty\", \"questionable generalization\", \"lack of ablations and analysis\", \"lack of stronger baselines\", \"no rebuttal\", \"The reviewers agree that the paper should be rejected in its current form, and the authors have not bothered revising it to take into account the detailed reviews.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"title\": \"A simple ideas works well in a challenging RL domain. The generalizability and significance can be improved if more domains can be tested\", \"review\": \"This paper proposed a model-free HRL method, which is combined with unsupervised learning methods, including abnormality discovery and clustering for subgoal discovery. In all, this paper studies a very important problem in RL and is easy to follow. The technique is sound. Although the novelty is not that significant (combining existing techniques), it showed good results on Montezuma\\u2019 revenge, which is considered as a very challenging problem for primitive action based RL.\\n\\nAlthough the results are impressive, I still have some doubt about the generalizability of the method. It might be helpful to improve its significance if more diversified domains can be tested.\\n\\nThe paper can be strengthen by providing some ablation test, for example, is performance under different K for Kmeans? \\n\\nAlso some important details seems missing, for example, the data used for kmeans, it is mentioned that the input to the controller is four consecutive frame of size 84x84, so the input data dimension is more than 10k, I guess some further dimensionality reduction technique has to be applied in order to run kmeans effectively.\\n\\nRegarding the comparisons, the proposed method is only compared with one primitive action based method. It might be better to include results from other HRL methods, such as Kulkarni et al.\\n\\nIs the curve based on the mean of different runs? It might be useful to include an errorbar to show the statistical significance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The methods seem somewhat tailored for the tasks and the results on the harder problem are not that convincing.\", \"review\": \"Summary:\\nThe authors propose an HRL system which learns subgoals based on unsupervised analysis of recent trajectories. The subgoals are found via anomaly/outlier detection (in this case states with a very high reward) and the clustering together of states that are very similar. The system is evaluated on the 4-rooms task and on the atari game Montezuma\\u2019s Revenge.\\n\\nThe paper cites relevant work and provides a nice explanation of subgoal-based HRL. The paper is for the most part well-written and easy to follow. \\n\\nThe experiments are unfortunately not making a very convincing case for the general applicability of the the methods. While the system does not employ a model of the environment, k-means clustering based on distances seems to be particularly well-suited for the two environments investigated in the paper. It is known that the 4-rooms experiment is much easier to solve with subgoals that correspond to the rooms themselves. I can only conclude from this experiment that k-means can find those subgoals given the right number (4) of clusters and injecting the knowledge that distances in grid-worlds correlate well with transition probabilities. Similarly, the use of distance-based clustering seems well-suited for games with different rooms like Montezuma\\u2019s Revenge but that might not generalize to many other games. \\n\\nThe anomaly detection subgoal discovery is interesting as a method to speed-up learning but it still requires these (potentially sparse) high reward states to be found first. For tasks with sparse rewards it does make sense to set high reward states as potential subgoals instead of waiting for value to propagate. That said, the reward for the lower level policy is only less sparse in the sense that wasting time gets punished with a negative reward. Subgoal discovery based on rewards should probably also take the ability of the current policy to obtain those rewards into account like some other methods for subgoal discovery do (see for example Florensa et al., 2018). The authors mention that the subgoals were manually chosen by Kulkarni et al. (2016) instead of learned in an unsupervised way but I don\\u2019t think that the visual object detection method employed there is that much more problem specific. \\n\\nLike Kulkarni et al. (2016), the authors compare their method with DQN (Mnih et al. 2015) but it was already known that that baseline cannot solve the task at all and a lot more results on Montezuma\\u2019s Revenge have been published since then. A more insightful baseline would have been to compare with at least some other HRL methods that are able to learn the task to some extend like perhaps Feudal Networks (Vezhnevets et al., 2017). Looking at the graph in the Feudal Networks paper for comparison, the results in this paper seem to be on par with the LSTM baseline there but it is hard to compare this on the basis of the number of episodes. Did the reward go up further after running the experiment longer? \\n\\nSince the results are not that spectacular and a comparison with prior work is lacking, the main contributions of the paper are more conceptual. 
I think that it is interesting to think more carefully about how sparse reward states and state similarities can be used more efficiently but the ideas in the paper are not original or theoretically founded enough to have a lot of impact without the company of stronger empirical results.\", \"extra_reference\": \"Carlos Florensa, David Held, Xinyang Geng, Pieter Abbeel. (2017). Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes an unsupervised method for subgoal discovery and shows how to combine it with a model-free hierarchical reinforcement learning approach. The main idea behind the subgoal discovery approach is to first build up a buffer of \\u201cinteresting\\u201d states using ideas from anomaly detection. The states in the buffer are then clustered and the centroids are taken to be the subgoal states.\", \"clarity\": \"I found the paper somewhat difficult to follow. The main issue is that the details of the algorithm are scattered throughout the paper with Algorithm 1 describing the method only at a very high level. For example, how does the algorithm determine that an agent has reached a goal? It\\u2019s not clear from the algorithm box. Some important details are also left out. The section on Montezuma\\u2019s Revenge mentioned that the goal set was initialized using a \\u201ccustom edge detection algorithm\\u201d. What was the algorithm? Also, what exactly is being clustered (observations or network activations) and using what similarity measure? I can\\u2019t find it anywhere in the paper. Omissions like this make the method completely unreproducible.\", \"novelty\": \"The idea of using clustering to discover goals in reinforcement learning is quite old and the paper does a poor job of citing the most relevant prior work. For example, there is no mention of \\u201cDynamic Abstraction in Reinforcement Learning via Clustering\\u201d by Mannor et al. or of \\u201cLearning Options in Reinforcement Learning\\u201d by Stolle and Precup (which uses bottleneck states as goals). The particular instantiation of clustering interesting states used in this paper does seem to be new but it is important to do a better job of citing relevant prior work and the overall novelty is still somewhat limited.\", \"significance\": \"I was not convinced that there are significant ideas or lessons to be taken away from this paper. The main motivation was to improve scalability of RL and HRL to large state spaces, but the experiments are on the four rooms domain and the first room of Montezuma\\u2019s Revenge, which is not particularly large scale. Existing HRL approaches, e.b. Feudal Networks from Vezhnevets et al. have been shown to work on a much wider range of domains. Further, it\\u2019s not clear how this method could address scalability issues. Repeated clustering could become expensive and it\\u2019s not clear how the number of clusters affects the approach as the complexity of the task increases. I would have liked to see some experiments showing how the performance changes for different numbers of clusters because setting the number of clusters to 4 in the four rooms task is a clear use of prior knowledge about the task.\", \"overall_quality\": \"The proposed approach is based on a number of heuristics and is potentially brittle. Given that there are no ablation experiments looking at how different choices (number of clusters/goals, how outliers are selected, etc) I\\u2019m not sure what to take away from this paper. There are just too many seemingly arbitrary choices and moving parts that are not evaluated separately.\", \"minor_comments\": [\"Can you back up the first sentence of the abstract? AlphaGo/AlphaZero do well on the game of Go which has ~10^170 valid states.\", \"First sentence of introduction. How can the RL problem have a scaling problem? 
Some RL methods might, but I don\\u2019t understand what it means for a problem to have scaling issues.\", \"Please check your usage of \\\\cite and \\\\citep. Some citations are in the wrong format.\", \"The Q-learning loss in section 2 is wrong. The parameters of the target (r+\\\\gamma max Q) are held fixed in Q-learning.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
Hygv0sC5F7 | When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models? | [
"Tengyu Xu",
"Yi Zhou",
"Kaiyi Ji",
"Yingbin Liang"
] | We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset. The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function. We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima. We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context. For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges. We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation. | [
"gradient method",
"max-margin",
"ReLU model"
] | https://openreview.net/pdf?id=Hygv0sC5F7 | https://openreview.net/forum?id=Hygv0sC5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1eotByYy4",
"SJg80NOj2Q",
"HkgRSG1t3X",
"rkx8w0vw3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544250755457,
1541272781898,
1541104198152,
1541008989990
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper903/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper903/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper903/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper903/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers and AC note the following potential weaknesses: 1) the proof techniques largley follow from previous work on linear models 2) it\\u2019s not clear how signficant it is to analyze a one-neuron ReLU model for linearly separable data.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"A theoretical paper with very stringent assumptions.\", \"review\": \"This paper considers the binary classification problem with exponential loss and ReLu activation function (single neuron). The authors characterize the asymptotic loss landscape by three different types of critical points. They prove that gradient descent (GD) will result in four different regions and provide convergence rates for GD to converge to an asymptotic global minimum, asymptotic local minimum and local minimum under certain assumptions. The authors also provide convergence results for stochastic gradient descent (SGD) and provide extensions to leaky ReLu activation and multi-neuron networks. The paper is well written and the results are mostly clearly presented. This paper mostly follows the line of research by Soudry et al. (2017, 2018), while it has its own merit due to the ReLu activation function considered. However, there are many strong assumptions that are not carefully verified and I really have concerns about the contribution of this paper since they simplify their analysis and results merely by imposing stringent conditions. In particular, I have the following major comments about the paper:\\n\\n1.\\tIn the definition of max-margin direction, why you use \\\\argmin_{w} max_{i} (w^{\\\\top}x_i)? It seems to me that the definition should be \\\\argmax_{w} min_{i} (w^{\\\\top}x_i). This definition keeps appearing in multiple places in the main paper. \\n2.\\tIn the proof of Theorem 3.2, I am confused by the argument of the case that \\\\hat w^{+} is not in the linearly separable region. More clarification is needed to make the proof rigorous.\\n3.\\tIn the analysis of Theorem 3.3 and 3.4, the authors make a very stringent assumption that the iterate w_t staying in linear separable region for all t>\\\\mathcal{T}. This assumption seems too strong, which should be verified rather than imposed in analysis of SGD. Note that even the example shown in Proposition 2 is still very restrictive (you require all the positive examples or negative examples are very close to one another).\\n4.\\tFurthermore, in the analysis of SGD, the authors did not specify the assumption that \\\\hat w^{+} lies in the linear separable region, which is also required in this theorem and also very strong. Given such strong assumptions, the analytic results seem to be trivial and it is hard to evaluate the authors\\u2019 contribution. \\n5.\\tFor the convergence results of SGD, the current rate is derived on the distance between \\\\|E[w_t] - \\\\hat{w}\\\\|^2. Can you provide similar results for mean square error (E\\\\| w_t - \\\\hat{w} \\\\|^2)? \\n6.\\tIn multi-neuron case, the authors again make very strong assumptions that all the neurons have unchanging activation status. This is not easily achievable without careful characterization or other rigorous assumptions. Under such strong assumptions, the extension to multi-neuron again seems not very meaningful.\", \"other_minor_comments\": \"1.\\tThe references are not correctly cited. For instance, please correct the use of parenthesis in \\u201c\\u2026 which is different from that in (Soudry et al., 2017, Corollary 8)\\u201d and \\u201c\\u2026 hold for various other types of gradient-based algorithms Gunasekar et al. (2018)\\u201d.\\n2.\\tThe sentence \\u201c\\u2026, which the nature of convergence is different from \\u2026\\u201d does not read well. 
Should it be \\u201cwhere\\u201d or \\u201cof which\\u201d?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Importance of ReLU networks and max-margin used in this paper are unclear.\", \"review\": \"Recently, the implicit bias where gradient descent converges the max-margin classifier was shown for linear models without an explicit regularization.\\nThis paper tries to extend this result to ReLU network, which is more challenging because of the non-convexity.\\nMoreover, a similar property of stochastic gradient descent is also discussed.\\n\\nThe implicit bias is a key property to ensure the superior performance of over-parameterized models, hence this line of research is also important.\\nHowever, I think there are several concerns as summarized below.\\n\\n1. I'm not sure about the significance of the ReLU model (P) considered in the paper.\\nIndeed, the problem (P) is challenging, but an obtained model is linear defined by $w$.\\nTherefore, an advantage of this model over linear models is unclear.\\n\\nMoreover, since the max-margin in this paper is defined by using part of dataset and it is different from the conventional max-margin, the generalization guarantees are not ensured by the margin theory.\\nTherefore, I cannot figure out the importance of an implicit bias in this setting (, which ensures the convergence to this modified max-margin solution).\\nIn addition, the definition of the max-margin seems to be incorrect: argmin max -> argmax min.\\n\\n2. Proposition 1 (variance bound) gives a bound on the sum of norms of stochastic gradients.\\nHowever, I think this bound is obvious because stochastic gradients of the ReLU model (P) are uniformly bounded by the ReLU activation.\\nCombining this boundedness and decreasing learning rates, the bound in Proposition 1 can be obtained immediately.\\nMoreover, the validity of an assumption on $w_t$ made in the proposition should be discussed.\\n\\n3. Lemma F.2 is key to show the main theorem, but I wonder whether this lemma is correct.\\nI think the third equation in the proof seems to be incorrect.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper studies ReLU model, or equivalently, one-layer-one-neuron model, for the classification problem. This paper shows if the data is linearly separable, gradient descent may converge to either a global minimum or a sub-optimal local minimum, or diverges. This paper further studies the implicit bias induced by GD and SGD and shows if they converge, they can have a maximum margin solution.\", \"comments\": \"1. Using ReLU model for linearly separable data doesn't make sense to me. When ReLU is used, I expect some more complicated separable condition. \\n2. This paper only studies one-layer-one-neuron model, which is a very restricted setting. It's hard to see how this result can be generalized to the multiple-neuron case.\\n3. The analysis follows closely with previous work in studying the implicit bias for linear models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJfvAoC9YQ | Feature Transformers: A Unified Representation Learning Framework for Lifelong Learning | [
"Hariharan Ravishankar",
"Rahul Venkataramani",
"Saihareesh Anamandra",
"Prasad Sudhakar"
] | Despite the recent advances in representation learning, lifelong learning continues
to be one of the most challenging and unconquered problems. Catastrophic forgetting
and data privacy constitute two of the important challenges for a successful
lifelong learner. Further, existing techniques are designed to handle only specific
manifestations of lifelong learning, whereas a practical lifelong learner is expected
to switch and adapt seamlessly to different scenarios. In this paper, we present a
single, unified mathematical framework for handling the myriad variants of lifelong
learning, while alleviating these two challenges. We utilize an external memory
to store only the features representing past data and learn richer and newer
representations incrementally through transformation neural networks - feature
transformers. We define, simulate and demonstrate exemplary performance on a
realistic lifelong experimental setting using the MNIST rotations dataset, paving
the way for practical lifelong learners. To illustrate the applicability of our method
in data sensitive domains like healthcare, we study the pneumothorax classification
problem from X-ray images, achieving near gold standard performance.
We also benchmark our approach with a number of state-of-the-art methods on
MNIST rotations and iCIFAR100 datasets demonstrating superior performance. | [
"continual learning",
"deep learning",
"lifelong learning",
"new task learning",
"representation learning"
] | https://openreview.net/pdf?id=BJfvAoC9YQ | https://openreview.net/forum?id=BJfvAoC9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rklG9szleN",
"SJeZEzz20m",
"HJemXlf30Q",
"rkl0J2bhCX",
"HJeejpL5RX",
"rJenDhy5nX",
"rylQtqdYnm",
"HyxmqYlF3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544723338013,
1543410217100,
1543409690860,
1543408614375,
1543298456437,
1541172324108,
1541143162639,
1541110154825
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper902/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper902/Authors"
],
[
"ICLR.cc/2019/Conference/Paper902/Authors"
],
[
"ICLR.cc/2019/Conference/Paper902/Authors"
],
[
"ICLR.cc/2019/Conference/Paper902/Authors"
],
[
"ICLR.cc/2019/Conference/Paper902/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper902/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper902/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a framework for continual/lifelong learning that has potential to overcome the problems of catastrophic forgetting and data privacy.\\nR1, R2 and AC agree that the proposed method is not suitable for lifelong learning in its current state as it linearly increases memory and computational cost over time (for storing features of all points in the past and increasing model capacity with new tasks) without account for budget constraints.\\n\\nThe authors responded in their rebuttal that the data is not stored in the original form, but using feature representation (which is important for privacy issues). The main concern, however, was about the fact that one has to store information about all previous data points which is not feasible in lifelong learning. In the revision the authors have tried to address some of the R1\\u2019s and R2\\u2019s suggestion about taking into account the budget constraints. However more in-depth analysis is required to assess feasibility and advantage of the proposed approach. \\nThe authors motivate some of the key elements in their model as to protect privacy. However no actual study was conducted to show that this has been achieved. \\nThe comments from R3 were too brief and did not have a substantial impact on the decision.\\n\\nIn conclusion, AC suggests that the authors prepare a major revision addressing suitability of the proposed approach for continual learning under budget constraints and for privacy preservation and resubmit for another round of reviews.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"title\": \"State of the art results\", \"comment\": \"We would like to thank the reviewer for their time and patience.\", \"the_reviewer_has_made_2_important_remarks_on_the_paper\": \"1. Non compelling results\\n2. An engineering solution\\n\\n1. Non compelling results\\nTo the best of our knowledge, we have achieved state-of-the-art results on multiple variants of continual learning. In fact, we would like to take this opportunity to point out that our method outperforms all algorithms on the different variants of continual learning (new task learning(single incremental task/multi-task, incremental learning, domain adaptation). We would like the reviewer to point us to other works which can help us benchmark our results better.\\n\\n2. An engineering solution\\nContinual learning is an important problem with a large number of practical use-cases in the industry across various domains. It is important that such a solution is not only of academic interest but also can be deployed practically. We believe that an innovative solution, which does not compromise privacy of historical data while enabling continual learning will be of sufficient interest to the community and will spur further research in this direction.\"}",
"{\"title\": \"Our method uses practically feasible memory and compute\", \"comment\": \"Thank you, reviewer for your detailed comments on the paper. From the insight gained from reviews, we have accordingly modified the paper, along with a note of changes in our comment, \\\"Major points revisited / concerns addressed\\\" on this forum.\\n\\nAs a couple of reviewers had similar concerns, we have addressed these through a general comment about managing increasing memory and compute. We argue that our method's memory and compute requirement, owing to storage of only a fraction of historical lower dimensional features and requiring the use of additional compute only optional, is practically feasible and of value to the community.\", \"some_of_the_specific_concerns_which_are_addressed_are\": \"i. Redundancy of equation 2 by 7.\\nWe have structured our paper to first paper to first present a broad overview without being cluttered with design choices. Our paper's design was motivated to ensure we first convey the broad idea followed by a specific embodiment which was implemented.\\n\\nii. Errors in Equation 5\\nAs the reviewer pointed out, this equation represents the construction of training data for the feature transformer at time t, in the form of example-label pairs. Though not explicitly mentioned, it should be understood from the context that indeed the correspondence between examples (in the form of features) and labels are maintained due to the order of the union operation. If it adds value to make this point explicit, we would be happy to include this clarification in the next version of the paper.\\n\\niii. Increasing memory/compute\\nBoth these concerns are addressed through the general comment, \\\"Major points revisited / concerns addressed\\\".\\n\\niv. Add experimental details to ensure paper is stand-alone\\nWe have incorporated the comments by reviewers to ensure that all definitions and experimental settings are fully contained in the paper and can be understood unambiguously.\\n\\nv. Memory gains by storing features\\nWhile we agree with the reviewers that memory gains by storing features are limited when working with a toy dataset like MNIST(28x28), as pointed out in the general comment, these gains are substantial when working with real-world datasets, and become extremely pronounced in volumetric datasets like medical imaging (3D Datasets).\\n\\nvi. Paper longer than 8 pages\\nWe consciously wanted to ensure verbosity so that the subtle ideas are conveyed meaningfully.\"}",
"{\"title\": \"Response to AnonReviewer2: Additional computational cost is limited\", \"comment\": \"Thank you for your insights on the paper. Taking your comments into consideration, we have accordingly modified the paper, along with a note of changes in our general comment, \\\"Major points revisited / concerns addressed\\\" on this forum.\", \"the_major_concerns_raised_are\": \"i. increasing computational cost (number of layers) with time\\nii. increasing memory requirement with time\\niii. Experiments on different types of datasets (text/graph).\\n\\nPoints (i and ii): The first couple of points are addressed in our general comment, where we argue that these requirements are not limiting and will not constrain the implementation of a lifelong learner.\", \"point_iii\": \"The focus of this work was on developing a secure, privacy-aware continuous learning system for domains involving images (e.g., medical imaging). While we haven't performed any experiments on text/graphs, we believe that the method is generic enough for application in other forms of data. We plan to validate this method on other data types in the future.\"}",
"{\"title\": \"Major points revisited / concerns addressed\", \"comment\": \"Thanks to all the reviewers for their time and patience. Before we make rebuttals to specific concerns, we highlight a few important facets of the work, which may have been overlooked.\\n\\n1) Privacy-preserving lifelong learners : A key constraint that we operate is that we cannot store data in original form. This is a very pertinent point in data-sensitive domains like Healthcare, where images of subjects cannot be shipped/stored for anonymity/regulatory constraints, which is a point often overlooked. The fundamental question that we address is : \\\"Can we protect privacy while also ensuring competent lifelong learning?\\\" To that end, we demonstrate that, by merely storing low-dimensional features, we can achieve a practical lifelong learner without compromising security. \\n\\n2) Single-framework for lifelong learning: The work addresses all variants of continuous learning together, which has not been addressed yet in the current literature to the best our knowledge. E.g. a method that addresses new-task learning cannot handle incremental learning of same task (LwF) or a method that achieves domain adaptation may not handle new-task learning or incremental learning. Our framework is capable of handling a variety of situations without retraining the entire network, and we have shown the ability of our method to handle real-life type situations, where we simulate complex sequences of these situations. \\n\\n3) Significance of results: The reviewers have not appreciated the exemplary results on three different datasets. We would like to point out that even with the luxury of having all the low dimensional features from previous episodes, obtaining state-of-the art results is non-trivial. This is achieved by seeking newer representations that continue to be separable, and cannot be attributed only to increasing capacity. \\n\\nWe proceed to address the two major concerns of the reviewers - Memory and Computation\\n\\n1) Memory: We do not think storing all features is a stumbling block for realizing the potential of our method as discussed here. \\n\\n\\ta) Storing all features is not necessary \\n\\t\\n\\tAs demonstrated in Section 4.4, by storing only one-fifth of past history, we observe a drop in performance of only ~3% from baseline. To our knowledge, such a performance is extremely compelling and difficult to achieve. We have included an additional experiment on Pneumothorax classification, where we achieve significant performance by retaining only 20% of previous data at every episode. We have captured these ablation experiments in graphical format in the revised version along with size-on-disk computation. \\n\\t\\n\\tb) Storing all features is not prohibitive\\n\\t\\n\\tAdditionally, calculations of size-on-disk suggests that storing features of the entire history is not prohibitive. A typical natural/medical image is 256*256*3 integers or more, whereas our representation is only 4096 floats (16kb). Even the largest available medical image repository of 100k X-ray images takes 1.6GB which is not huge. These are conservative estimates. A standard medical image can be of much larger size (1024*1024) and in 3-D (minimum >10 slices). Any exemplar-based method (iCARL) will have severe storage limitations than our method. Additionally, storing ~50 low-dimensional features occupies same memory as storing one exemplar image. 
This directly leads to storing more history compactly while addressing catastrophic forgetting and privacy. \\n\\t\\n2) Computation: We highlight that incremental computational requirements are not unrealistic as suggested. \\n\\n\\ta) Additional compute not prohibitive :\\n\\t\\n\\t\\tAs time progresses, only new layers have to be learnt and it is not entirely true that the training overhead is costly. In our experiments, we have shown that the feature transformer layers are typically two layer deep and hence learning them is not expensive. Further, we have added results where we trained with only one additional fc layer, which did not give any drop in performance. This coupled with storing a fraction of historical data, makes training and inference light. \\n\\t\\t\\n\\tb) Additional compute may not be needed :\\n\\t\\n\\t\\tThe intuition behind adding more capacity is to account for cases where existing representations might be insufficient to achieve separation. There is no compulsion to add capacity at each episode (Sec 3.1). We have added the experiment (Sec 4.5) where we do not add additional capacity after 5th episode. We simply transform the previous episode's feature transformer to achieve separable representation. However, we wanted to present a generic framework that can account for the need to seek richer representations for complex lifelong problems. \\n\\t\\t\\n\\tc) Model Compaction : \\n\\t\\t\\n\\t\\tAnother idea that has already been mentioned in the paper is model compaction. Distillation based approaches have demonstrated \\tshrinking of huge networks without loss of performance, which can be employed in this work.\"}",
"{\"title\": \"Continual learning approach with increasing computational cost over time\", \"review\": \"This paper proposes a continual learning approach which transforms intermediate representations of new data obtained by a previously trained model into new intermediate representations that are suitable for a task of interest.\\nWhen a new task and/or data following a different distribution arrives, the proposed method creates a new transformation layer, which means that the model\\u2019s capacity grows proportional to the number of tasks or data sets being addressed over time. Intermediate data representations are stored in memory and its size also grows.\\nThe authors have demonstrated that the proposed method is robust to catastrophic forgetting and it is attributed to the feature transformation component. However, I\\u2019m not convinced by the experimental results because the proposed method accesses all data in the past stored in memory that keeps increasing infinitely. The authors discuss very briefly in Section 5.2 on the performance degradation when the memory size is restricted. In my opinion, the authors should discuss this limitation more clearly on experimental results with various memory sizes.\\n\\nThe proposed approach would make sense and benefit from storing lower dimensional representations of image data even though it learns from the entire data over and over again.\\nBut it is unsure the authors are able to claim the same argument on a different type of data such as text and graph.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The work present a framework for dealing with life long learning, yet it violates two important constraints which every life long learner has to obey: limited memory and computation.\", \"review\": \"Summary:\\n a method is presented for on-going adaptation to changes in data, task or domain distribution. The method is based on adding, at each timed step, an additional network module transforming the features from the previous to the new representation. Training for the new task/data at time t relies on re-training with all previous data, stored as intermediate features. The method is shown to provide better accuracy than na\\u00efve fine tuning, and slightly inferior to plain re-training with all the data.\\nWhile the method is presented as a solution for life long learning, I think it severely violates at least two demands from a feasible solution: using finite memory and using finite computational capacity (i.e. a life-long learning cannot let memory or computation demands to rise linearly with time). Contrary to this, the method presented induces networks which grow linearly in time (in number of layers, hence computation requirements and inference time), and which use a training set growing indefinitely, keeping (representations of) all the examples ever seen so far. If no restrictions on memory and computation time are given, full retraining can be employed, and indeed it provides better results that the suggested method. In the bottom line, I hence do not see the justification for using this method, either as a life-long learner or in another setting.\", \"pros\": [\"the method shows that for continuous adaptation certain representations can be kept instead of the original examples\"], \"cons\": [\"The method claims to present a life long learning strategy, yet it is not scalable to long time horizon (memory and inference costs rise linearly with time)\", \"Some experiments are not presented well enough to be understood.\"], \"more_detailed_comments\": \"\", \"page_3\": \"-\\tEq. 2 is not clear. It contains a term \\u2018classification loss\\u2019 and \\u2018feature_loss\\u2019 which are not defined or explained. While the former is fairly standard, leaving the latter without definition makes this equation incomprehensible. \\no\\tI later see that eq. 7 includes the details. Therefore eq.2 is redundant.\", \"page_4\": [\"Eq. 5 seems to be flawed, though I think I can understand what it wants to say. Specifically, it states two sets: one of examples (represented by the previous feature extractor) and one of labels (of all the examples seen so far). The two sets are stated without correspondence between examples and labels \\u2013 which is useless for learning (which requires example-label correspondence). I think the intention was for a set of (example, label) pairs, where the examples are represented using feature extractor of time t-1.\", \"Algorithm 1 seems to be a brute force approach in which the features of all examples from all problems encountered so far are kept (with their corresponding labels). This means keeping an ever growing set of examples, and training repeatedly at each iteration on this set. These are not realistic assumptions for a life-long learner with finite capacity of memory and computation.\", \"o\\tFor example, for the experiment reported at page 6, including 25 episodes on MNist, each feature transformer is adding 2 additional FC layers to the network. 
This leads to a network with >50 FC layers at time step 25 \\u2013 not a reasonable and scalable network for life ling learning\"], \"page_6\": [\"The results show that the feature transformer method achieve accuracy close to cumulative re-training, but this is not too surprising, since feature transformer indeed does cumulative re-training: at each time step, it re-trains the classifier (a 2 stage MLP) using all the data at all times steps (i.e. cumulative retraining). The difference from pure cumulative re-training, if I understand correctly, is that the cumulative re-training is done not with the original image representations, but with the intermediate features of time t-1. What do we earn and what do we loose from this? If I understand correctly, we earn that the re-training is faster since only a 2-layer MLP is re-trained instead of the full network. We loose in the respect that the model gorws larger with time, and hence inference becomes prohibitively costly (as the network grows deeper by two layers each time step). Again, I do not think this is a practical or conceptual solution for life long learning.\", \"The experiment reported in figure 3 is not understandable without reading Lopez-Paz et al., 2017 (which I didn\\u2019t). the experiment setting, the task, the performance measurements \\u2013 all these are not explained, leaving this result meaningless for a stand-alone read of this paper.\", \"Page 8: it is stated that \\u201cwe only store low dimensional features\\u201d. However, it is not reported in all experiment exactly what is the dimension of the features stored and if they are of considerably lower dimension than the original images. Specifically for the MNIst experiments it seems that feature stored are of dimension 256, while the original image is of dimension 784 \\u2013 this is lower, but no by an order of magnitude (X10).\", \"The paper is longer than 8 pages.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Mechanical Approach to Augmenting Networks Incrementally for Lifelong Learning\", \"review\": \"The authors provided a training scheme that ensures network retains old performance as new data sets are encountered (e.g. (a) same class no drift, (b) same class with drift, (c) new class added). They do this by incrementally adding FC layers to the network, memory component that stores previous precomputed features, and the objective is a coupling between classification loss on lower level features and a feature-loss on retaining properties of older distributions. The results aren't very compelling and the approach looks like a good engineering solution without strong theoretical support or grounding.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
S1lwRjR9YX | Stability of Stochastic Gradient Method with Momentum for Strongly Convex Loss Functions | [
"Ali Ramezani-Kebrya",
"Ashish Khisti",
"and Ben Liang"
] | While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In practice, the momentum parameter is often chosen in a heuristic fashion with little theoretical guidance. In this work, we use the framework of algorithmic stability to provide an upper-bound on the generalization error for the class of strongly convex loss functions, under mild technical assumptions. Our bound decays to zero inversely with the size of the training set, and increases as the momentum parameter is increased. We also develop an upper-bound on the expected true risk, in terms of the number of training steps, the size of the training set, and the momentum parameter. | [
"Generalization Error",
"Stochastic Gradient Descent",
"Uniform Stability"
] | https://openreview.net/pdf?id=S1lwRjR9YX | https://openreview.net/forum?id=S1lwRjR9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkggqZzYdN",
"r1gJUIiRy4",
"rJxdSh34yV",
"HJlGj7UR0Q",
"Skl6ykATA7",
"SJet7-5TC7",
"HkgEmbtaA7",
"B1e21iAhA7",
"HJlL1YjoCm",
"ryemHZssRm",
"HyeCCsdoRm",
"rJxsFK_iRQ",
"ryeWFpGiCX",
"HkxhFKzs07",
"SkglZ4I90Q",
"SkewoU49R7",
"HJxr0SV90m",
"SyenUHNcA7",
"r1lvWTHc2X",
"Skl1MikYnX",
"SJx-bmHbnm"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1553699208400,
1544627783202,
1543978048398,
1543558041976,
1543524068591,
1543508256698,
1543504155750,
1543461604237,
1543383262164,
1543381307101,
1543371733983,
1543371139316,
1543347576545,
1543346563723,
1543295992115,
1543288478886,
1543288269255,
1543288148187,
1541197054879,
1541106438792,
1540604665033
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper901/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/Authors"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper901/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"Collectible cars have always been and remain an excellent gift for both a child and an adult. For example, if your friend dreams of a particular make or model of car, or perhaps he has one, then why not give him a scale model of his favorite car?\", \"many_collectible_models_https\": \"//www.bestadvisor.com/car-model-kits have a metal case and even prefab models are no exception. Most models open doors, luggage compartment, hood, and even turns the steering wheel in the cabin. All these possibilities depend on the specific model of the souvenir machine and its features. Large-scale collectibles such as 1:18 or 1:24 always have a lot of extra features. And for 1:43 or 1:34 scale models, only the doors are usually opened.\", \"title\": \"carmodel\"}",
"{\"metareview\": \"The paper according to Reviewers needs more work for publication and significantly more clarifications. The Reviewers are not convinced on publishing even after intensive discussion that the AC read in full. The AC recommends further improvements on the paper to address better Reviewer's concerns.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper needs more work\"}",
"{\"title\": \"Response to AnonReviewer3 Comments\", \"comment\": \"Dear reviewer,\\n\\nRegarding the comparison with (Jain et al., 2018), we should clarify that only linear regression problem with quadratic loss function has been studied in (Jain et al., 2018), while we consider a general strongly-convex loss function. In addition, our generalization bound is based on uniform stability, which is not the case in (Jain et al., 2018). Hence, we do not find a strong connection between the results of (Jain et al., 2018) and those of our paper. \\n\\nWe emphasize that the claim that \\\"for problems with larger condition number, the momentum should approach to one\\\" is indeed based on *convergence* analysis of GD with momentum. It does not account for *generalization*. In addition, those recommended parameters are not necessarily optimal for SGD. In our view, what mainly matters is extending the results of (Hardt et al., 2016) to SGMM and showing that there exists some mu for which SGMM satisfies uniform stability, i.e., our machine learning model can be trained for multiple epochs and still generalizes. We have verified the trends predicted by our stability bounds using experimental evaluations. Those evaluations are based on common machine learning models leading to a smooth and strongly convex loss function.\\n\\nBy asymptotic, we mean that the problem structure imposes very large *kappa*. As explained before, the gamma parameter can be tuned by adjusting a weight decay regularization parameter in a typical machine learning model. We used 3.5 merely as an example to show that there exists mathematical problems for which the suggested momentum based on the convergence analysis of GD falls within the interval specified by Theorem 2 in our paper. We do not claim that kappa=3.5 necessarily represents practical problems in machine learning. Please note that even for the original work of (Hardt et al., 2016), which analyzes the stability of SGD without momentum, there are some conditions on the learning rate in the theorem statements to satisfy the uniform stability, i.e., unlike the convergence analysis, the stability bounds typically involve limitations on the range of hyper-parameters. We further emphasize that our theorem for convergence analysis (Theorem 3) does not have any limitation on mu.\"}",
"{\"title\": \"Thank you for your response.\", \"comment\": \"I am convinced that proofs are correct and assumptions are also reasonable.\"}",
"{\"title\": \"Response to AnonReviewer1 Comments\", \"comment\": \"Dear reviewer,\\n\\nThank you for your comments and pointing out the typos.\\n\\nRegarding Proposition 1, please note that in our proof (Page 2 in the \\nsupplementary document of the original submission, which can be accessed \\nby clicking \\\"show revisions\\\"), we have first shown (as stated in \\nLemma 2) that the stability bound holds for the average of ${w_t}$. \\nTherefore, we believe that Proposition 1 is correct.\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your response.\\n\\nFirstly, please note that in Kidambi et al (2018), the paper presents the fact that SGD + momentum (for any value of the learning rate + momentum tuple) does not outperform vanilla SGD (with mu=0) by more than a constant factor (i.e. there is no asymptotic running time improvements in the big-Omega/big-Oh notation). That said, momentum + SGD can never be worse than vanilla SGD since we can always set mu=0 in momentum methods to recreate SGD's behavior.\", \"with_regards_to_values_of_momentum_for_generalization\": \"There are generalization results for acceleration with stochastic gradient methods. In particular, Jain et al. \\\"accelerating stochastic gradient descent for least squares regression\\\" 2017 present (for the strongly convex least squares problem) a result that admits a similar flavor - where, momentum approaches 1 for harder problems (large condition number) than ones for easier problems (low condition number).\\n\\nAt the end of the day, the trend offered by both deterministic and/or stochastic accelerated methods for easy (low condition number) vs. hard (large condition number) problems is what matters: for harder problems with large condition number, the momentum approaches 1, but, for the bounds admitted by the paper, the momentum approaches zero, which is rather worrying. This basically implies that this paper's theory, while beginning to make progress on this problem, does not provide a reasonable guarantee to characterize what happens when SGD is used with momentum.\\n\\nAnd a condition number of 3.5? A typical (easy) machine learning problem has a condition number around O(10^3) or more. Harder (and more typical) ones that have many correlated features have a condition number that is roughly 10^5 or more with greater correlations across features leading to worsening of the condition number. A condition number of 3.5 is trivial (from an optimization standpoint) - gradient descent requires roughly 10-20 steps to converge on this problem. A condition number of 3.5 implies that there is close to little advantage of kappa versus sqrt(kappa), which is the advantage offered by acceleration. Finally, in order to make the condition number 3.5, one would have to regularize the problem too strongly in a way that the solution will very well generalize far worse than solving the erm problem.\\n\\nAgain, I don't understand the claim of why a smaller kappa resembles \\\"non-asymptotics\\\". Precisely put, there is *no* relation of kappa versus asymptotics.\"}",
"{\"title\": \"Additional comments\", \"comment\": \"As for the convergence theorem, I think the proof of Theorem 3 for projected SGMM seems correct, but I found another small bug that I did not notice when I read for the first time.\\nProposition 1 cannot be obtained by Theorem 2 and 3 directly because the stability bound is given for the latest parameter $w_t$ while the convergence is guaranteed for the average of ${w_t}$.\\nThus, it would be nice if the authors could fix it.\\n\\n[Minor typos]\\n- \\\"Since \\\\|.\\\\| is a convex function\\\" -> \\\"Since \\\\|.\\\\|^2 is a convex function\\\".\\n\\n- RHS of Equation after \\\"Note that the LHS of (25) ...\\\" in p. 6:\\n1-\\\\mu -> (1-\\\\mu)^2,\\n\\\\| ... \\\\| -> \\\\| ... \\\\|^2.\"}",
"{\"title\": \"Response to AnonReviewer2 Comment\", \"comment\": \"Our problem setting involves constrained optimization where we seek the optimal solution within a compact set. This constraint is assumed to be given apriori in the problem definition. Such setting have been widely considered in the literature. See for example: (Hardt et al., 2016)[Section 3.4]).\\n\\nPlease note that we do not address the unconstrained optimization problem that you mention in your response. Thus we do not need to design the compact set that increases the chance of the compact set containing the optimal solution.\\n\\nWe hope this clarifies our problem setup and you are convinced by the technical soundness of our work.\"}",
"{\"title\": \"Response\", \"comment\": \"Yes, it should hold for projection steps in (A2) and (A3). But even so, your constant L must depend on the radius of the compact set of the parameters. How could you guarantee the optimal solution of the strongly convex problem is always in that compact set? In order to increase \\\"the chance\\\" of the compact set containing the optimal solution, you need to \\\"increase\\\" the size of the compact set, which implicitly pushes L be arbitrarily large. Therefore, it is only possible if you just find the solution from the small compact set (from which the optimal solution may be very far).\"}",
"{\"title\": \"Response to AnonReviewer2 Comment\", \"comment\": \"Dear reviewer,\\n\\nThe supplementary document was submitted along with the original \\nsubmission. It can be found by clicking the \\\"Show revisions\\\" link below \\nthe paper title.\"}",
"{\"title\": \"response\", \"comment\": \"I am not sure where I could find the supplementary document as you said. The pdf file only contains 9 pages.\"}",
"{\"title\": \"Response to AnonReviewer2 Comment\", \"comment\": \"Dear reviewer,\\n\\nPlease note that inequalities (A.2) and (A.3) (shown in the proof of Lemma 1 in the supplementary document) hold for the projected SGMM update (5) because Euclidean projection does not increase the distance between projected points. We are quite certain that our proof is correct, since our approach to handle projection is a commonly used technique in existing work. For example, it is used in (Hardt et al., 2016)[Section 3.4].\"}",
"{\"title\": \"Response\", \"comment\": \"Dear author(s),\\n\\nYou are using a stochastic algorithm. There is nothing guarantee that all of your updated iterations are in compact set. It is true that you consider projected step as in (5). However, your proof in Lemma 1 from the beginning to (14), you are using without projection. It would damage your bound in (14) as I mentioned before that L could be arbitrarily large. Your explanation in the last paragraph of Lemma 1 is not convincing. In order to fix this issue, you may need to consider your derivation with projected step from the beginning of the proof.\"}",
"{\"title\": \"Response to AnonReviewer2 Comments\", \"comment\": [\"Dear reviewer,\", \"To clarify the correctness of our proof, please note that L in the theorem statement and the proof is bounded due to compactness of the parameter space. We have shown that (19) holds for projected SGMM. Before (19), we have not used L-Lipschitz.\", \"Regarding convergence analysis, please note that our goal is to find a *global* convergence bound not a \\\"local\\\" one. We know that classical results show that heavy ball momentum achieves linear convergence rate locally. However, those results are for batch gradient descent not stochastic gradient descent for a general strongly-convex function.\"]}",
"{\"title\": \"Response to authors\", \"comment\": \"Dear author(s),\\n\\nThank you for your response! \\n\\n1. The set of assumptions (smooth, Lipschitz and strongly convex) is not valid on the whole set R^d, for example quadratic function. Your L may be arbitrarily large and your bound in (14) could be damaged. I do not think you can properly apply the projection step here after deriving (14) in case of L -> \\\\infty. \\n\\n2. I was asking about the linear convergence to \\\"neighborhood\\\", not to the \\\"optimal solution\\\". Your theory seems not able to cover this case.\"}",
"{\"title\": \"Response to AnonReviewer2 Comments\", \"comment\": \"R2C1: We believe that our results are substantial and important. Our analysis involves some subtle but important steps in dealing with the momentum term in the recursion in Section 4. This method was not conceived in prior attempts on this problem. We reproduce the following statement from [Hardt et al., Section 7]: ``One very important technique that we did not discuss is momentum. However, it is not clear that momentum adds stability. It is possible that momentum speeds up training but adversely impacts generalization.'' Our work is the first successful attempt that establishes that SGMM generalizes, for the practically important class of strongly convex loss functions.\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"r2c2\": \"Please note that we first discuss the proofs without projection to keep the notation uncluttered. We then explain how the proofs can be modified to accommodate projection. We believe this approach is technically sound, and it helps the readers to better understand the insights in our proofs. We respectfully request that the reviewer point out any specific issue in our proofs such that we can fix it.\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"r2c3\": \"To the best of our understanding, linear convergence happens under a very stringent condition: $\\\\Pr\\\\{\\\\nabla f_i(x^*)=0\\\\}=1$, \\\\ie $x^*$ is a simultaneous minimizer of (almost) all $f_i(x^*)$ [Needell et al. , 2014] . Such a condition would artificially force that the loss function be simultaneously minimized on each training example. In absence of this condition, SGD appears to exhibit similar convergence rate as our paper, albeit under somewhat different assumptions on the loss function.\\n\\nMoreover, in terms of convergence, we cannot claim that SGMM always outperforms SGM without momentum. For example, in [Kidambi et al. , 2018], the authors show that there exists linear regression problems for which SGM outperforms SGMM in terms of convergence for any learning rate and momentum parameter.\"}",
"{\"title\": \"Response to AnonReviewer1 Comments\", \"comment\": \"R1C1: Our analysis involves some subtle but important steps in dealing with the momentum term in the recursion in Section 4. This method was not conceived in prior attempts on this problem. To convince you, we reproduce the following statement from [Hardt et al., Section 7]: ``One very important technique that we did not discuss is momentum. However, it is not clear that momentum adds stability. It is possible that momentum speeds up training but adversely impacts generalization.'' Our work is the first successful attempt that establishes that SGMM generalizes, for the practically important class of strongly convex loss functions.\"}",
"{\"title\": \"Response to AnonReviewer3 Comments\", \"comment\": \"R3C1: Please note that the theoretically advocated momentum parameters mu = (sqrt(kappa)-1)/(sqrt(kappa)+1) [Nesterov, 1983] or mu = ( (sqrt(kappa)-1)/(sqrt(kappa)+1) )^2 [Polyak,1964] are based on the *convergence* analysis of GD with momentum -- they do not account for *generalization*. Therefore, these values are not necessarily optimal for SGD with momentum (SGMM), in terms of our objective of true risk. In Theorem 2, our focus is to find a bound on stability, i.e., the condition on generalization. Our theorem for convergence analysis (Theorem 3) does not have any limitation on mu.\\n\\nOur goal in Theorem 2 is to find the tightest possible bound that shows why machine learning models can be trained for multiple epochs of SGMM while their generalization errors are limited. In order to satisfy uniform stability for SGMM (with constant momentum), we need to have a recursion with coeff<1. Even ignoring the third term in the RHS of (12), we still have to assume mu<1/kappa in order to have such a recursion. \\n\\nWe agree theoretically suggested momentum parameters for GD approach 1 \\u2013 1/sqrt(kappa), which grow close to one as kappa grows large. On the other hand, as our concern in this work is on gamma-strongly convex loss functions, where the gamma parameter can be tuned by adjusting a weight decay regularization parameter in a typical machine learning model, it is indeed important and interesting to study the generalization bound when kappa is not too large, i.e., the non-asymptotic regime. When kappa is not too large, the range specified in our theorem captures a typical value of the suggested momentum based on convergence analysis of GD. As an example, if we set kappa = 3.5, then ((sqrt(kappa)-1)/(sqrt(kappa)+1))^2\\u22480.1, which is approximately 1/(3*kappa). \\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"r3c2\": \"In the original submission, we specified the condition mu*(beta+gamma)<<alpha*beta*gamma in the supplementary document. In the revision, we have explicitly provided this condition it in the proposition statement. Please note that this condition is used only to make tractable the optimization of the expected true risk over alpha. In practice, we can still use alpha as specified in Proposition 1. However, it will not necessarily optimize the upper-bound on the expected true risk.\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"r3c3\": \"Please note that although [1]-[3] study first-order methods with noisy (imperfect) gradients, none of these works study generalization of SGD for a strongly convex loss function using algorithmic stability. We note that both [1] and [2] are cited in [3]. In the revision, we have added the following sentence to our introduction: \\\"First-order methods with noisy gradient are studied in [Kidambi et al. , 2018] and references therein. In [Kidambi et al. , 2018], the authors show that there exists linear regression problems for which SGM outperforms SGMM in terms of convergence.\\\"\\n\\nRegarding comparison with [Loizou et al. , 2018], please note that [Loizou et al. 
, 2018] considers the special case of a convex *quadratic* loss function of a least-squares type, while we consider the general case of strongly convex loss functions. Furthermore, we emphasize that we do not limit our analysis of SGMM to super large batch sizes. Our analysis indeed works even for a batch size of one.\"}",
"{\"title\": \"Interesting question and direction\", \"review\": \"This paper presents an analysis of generalization error of SGD with multiple passes for strongly convex objectives using the framework of algorithmic stability [Bousquet and Elisseef, 2002] and its recent use to analyze generalization error of SGD based methods [Hardt, Recht and Singer 2016].\\n\\nThe problem considered by this work is interesting and raises the possibility of understanding generalization related questions of SGD style methods when augmented with momentum, which is common practice in Deep Learning [Sutskever et al. 2013]. That said, there are some concerns about the results as presented in this paper, which I will elaborate below:\\n\\n- Consider the stability bound admitted by theorem 2: The special case (similar to theorem 3.9 of Hardt et al 2016) when the learning rate alpha = 1/beta (which is the typical learning rate that theory advocates), and setting kappa = beta/gamma where kappa is the condition number of the problem, leads to the following bound on momentum allowed by theorem 2, which is:\\n\\n(something non-positive) <= mu < 1/(3*kappa). \\n\\nThis is basically the regime where momentum does not make any difference towards accelerating optimization. Referring to the standard value of momentum for strongly convex functions, we see that the momentum is set as mu = (sqrt(kappa)-1)/(sqrt(kappa)+1) [Nesterov, 1983], or, mu = ( (sqrt(kappa)-1)/(sqrt(kappa)+1) )^2 [Polyak,1964]. Upon simplification of this standard momentum values, we see that mu \\\\approx 1 - 1/sqrt(kappa) which grows close to one as kappa grows large. On the other hand, the momentum values admitted by the paper for their bound is super tiny (which gets to zero as the condition number kappa grows large). This essentially implies there is not much about momentum that is captured by the bound of theorem 2 since there is no characterization of the provided bound for theoretically advocated and practically used parameters for momentum.\\n\\n- In proposition 1, there is no quantitative description of what \\\"sufficiently small\\\" mu (momentum parameter) is - this statement is imprecise. As mentioned in the previous point, sufficiently small mu really is not descriptive of momentum parameters employed in practice (mu in practice typically is >= 0.9). For strongly convex objectives, this should be closer to 1- (1/sqrt(kappa)). Sufficiently small mu parameter essentially does not yield quantitatively different behaviors compared to standard SGD. \\n\\n\\nIn summary, while this paper attempts to make progress on an interesting question, but falls short and doesn't really capture the behavior of these methods that is even mildly reflective of practice (even in terms of the parameter regimes admitted by the bounds proven in the theorems).\\n\\n- This paper does not perform a thorough literature survey of published results. Furthermore, this paper does not present a precise treatment of assumptions (and implications) amongst other works cited in the paper (see for e.g. [4] below). \\n\\n[1] Polyak (1987) presents (generalization) behavior of Heavy Ball momentum with noisy (inexact) gradients.\\n[2] Several efforts in Signal Processing literature do consider the similar setting as one considered by this paper, which is that of Heavy Ball (called accelerated LMS) method with noisy gradients: refer to Proakis (1974), Roy and Shynk (1990), Sharma et al. (1998). 
\\n[3] Kidambi et al (2018) estimate the \\\"optimization\\\" power (which is a part of characterization of generalization error [Bach and Moulines 2011], since this dominates at the start of optimization) of HB method with Stochastic Gradients and prove that HB+stochastic gradients does not offer any speedup over vanilla SGD.\\n[4] Loizou and Richtarik provide an analysis of stochastic heavy ball with super large batch sizes (so they end up showing accelerated rates) under similar assumptions as considered by this paper, such as assuming the function is smooth and strongly convex. However, the paper dismisses the work of Loizou and Richtarik to be working with a different set of assumptions - this is not really true.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper studies the algorithmic stability of SGD with momentum and provides an upper-bound on true risk through convergence analysis.\\nThis bound clarifies dependencies of convergence speed on the size of dataset and the momentum parameter.\\n\\nThe presentation is easy to follow and technically sounds good.\\nSGD with momentum is heavily used for learning linear models and deep neural networks, hence to analyze its convergence behavior is quite important.\\nThis paper achieves this goal well by extending a previous result on vanilla SGD in a straightforward manner, although it is not technically difficult.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Stability of Stochastic Gradient Method with Momentum for Strongly Convex Loss Functions\", \"review\": \"Comments:\\n\\nThe author(s) provide stability and generalization bounds for SGD with momentum for strongly convex, smooth, and Lipschitz losses. \\n\\nThis paper basically follows and extends the results from (Hardt, Recht, and Singer, 2016). Section 2 is quite identical but without mentioning the overlap from Section 2 in (Hardt et al, 2016). The analysis closely follows the approach from there. \\n\\nThe proof of Theorem 2 has some issues. The set of assumptions (smooth, Lipschitz and strongly convex) is not valid on the whole set R^d, for example quadratic function. In this case, your Lipschitz constant L would be arbitrarily large and could be damaged your theoretical result. To consider projected step is true, but the proof without projection (and then explaining in the end) should have troubles. \\n\\nFrom the theoretical results, it is not clear that momentum parameter affects positively or negatively. In Theorem 3, what is the advantage of this convergence compared to SGD? It seems that it is not better than SGD. Moreover, if \\\\mu = 0 and \\\\gamma > 0, it seems not able to recover the linear convergence to neighborhood of SGD. Please also notice that, in this situation, L also could be large. \\n\\nThe topic could be interesting but the contributions are very incremental. At the current state, I do not support the publications of this paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJxwAo09KQ | Learned optimizers that outperform on wall-clock and validation loss | [
"Luke Metz",
"Niru Maheswaranathan",
"Jeremy Nixon",
"Daniel Freeman",
"Jascha Sohl-dickstein"
] | Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned update functions may similarly outperform current hand-designed optimizers, especially for specific tasks. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process. The resulting gradients are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance. This allows us to train neural networks to perform optimization faster than well tuned first-order methods. Moreover, by training the optimizer against validation loss, as opposed to training loss, we are able to use it to train models which generalize better than those trained by first order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement in validation loss. | [
"Learned Optimizers",
"Meta-Learning"
] | https://openreview.net/pdf?id=HJxwAo09KQ | https://openreview.net/forum?id=HJxwAo09KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxFS_qaxN",
"H1gOOD7yx4",
"B1gnWF_FkV",
"ryeUzvVKkN",
"B1ljRQC4yN",
"S1eAW5pfCm",
"BJgDdU6GAm",
"S1x__R5z0m",
"r1gjUikY6Q",
"r1ewyjyFpX",
"SJlea51tTQ",
"Byetapx9nQ",
"BJxzDbc1n7",
"rkl4p-n6sm",
"Byl8hoXzom"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545607233354,
1544660848420,
1544288516444,
1544271630203,
1543984083330,
1542801925571,
1542801007130,
1542790767571,
1542155090960,
1542154974846,
1542154936083,
1541176769281,
1540493657848,
1540370876055,
1539615662144
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper900/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper900/Authors"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper900/Authors"
],
[
"ICLR.cc/2019/Conference/Paper900/Authors"
],
[
"ICLR.cc/2019/Conference/Paper900/Authors"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper900/Authors"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper900/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Issue resolved\", \"comment\": \"The authors updated the metalearning workshop paper, resolving the issue I raised.\\nI'm looking forward to seeing this interesting work progress!\"}",
"{\"metareview\": \"The paper conveys interesting idea but need more work in terms of fair empirical study and also improvement of the writing. The AC based her summary only on the technical argumentation presented by reviewers and authors.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"Metalearning workshop\", \"comment\": \"Thank you for your ongoing care as a reviewer.\\n\\nWe have not updated the 4 page metalearning workshop paper since its submission before the review discussion \\u2014 we have simply been busy.\\n\\nWe completely agree that more extensive baselines would improve the paper. Especially, it would make our results stronger to compare against first order methods in conjunction with regularization, rather than first order methods on their own. We in fact have experiments currently running which extend the baselines in this and other fashions. We intend to include these additional baselines in any future submitted version of the paper.\\n\\nHowever, we also believe that the current baselines are accurately and clearly described, and we do not believe them to be in any sense dishonest.\"}",
"{\"title\": \"Still the same issues in the version in the MetaLearning workshop\", \"comment\": \"I just looked into the version of this work accepted at the NeurIPS workshop on MetaLearning (http://metalearning.ml/2018/papers/metalearn2018_paper38.pdf -- warning to other reviewers: clicking this link will reveal the authors' identity), and I am disappointed to see that the issues with the experiments are not mentioned in it, even though the authors have known about them for a month.\\n\\nI am not asking for new experiments, just for explicitly stating that the authors only compare to optimizers with fixed learning rates and without any regularization. Otherwise, people will walk away from this with overblown expectations, thinking we all should use these learned optimizer (and that the only issue left is generalization to new problems, but this is simply not the case)!\\n\\nI hope that this was merely an issue of the authors forgetting to update that paper. I strongly encourage the authors to emphasize the limitations of the work and to update the camera ready copy.\"}",
"{\"title\": \"Good luck with a future submission\", \"comment\": \"Thank you for your response, and for accepting my points of criticism & promising to fix them for future submissions. I'm looking forward to seeing this interesting work progress!\", \"just_a_quick_point_about_your_reply_concerning_reproducibility\": \"while the weights of the learned optimizer would be somewhat interesting, it would be far more useful for the community to have access to the code for training these weights. E.g., only on toy problems, so that it can be run without massive compute resources (but even if it does take a lot of compute that would still be extremely useful).\\n\\nBy the way, since the paper's methodological contribution is about how the gradient signal is computed, and the definition of learned optimizers is \\\"just\\\" an application (and probably weaker when comparing to stronger baselines), you could consider changing the paper title to something along the lines of \\\"Computing high quality gradient signals for unrolled computation graphs\\\". \\n\\nGood luck with the continuation of this work!\"}",
"{\"title\": \"Response to authors feedback\", \"comment\": \"Although I had initially increased a bit my score, I also think that the third reviewer may have a point. Doing what the first reviewer suggest could be the way to go to guarantee fair experiments. Therefore, I have left my score unchanged.\"}",
"{\"title\": \"Weight decay\", \"comment\": \"After reading the other reviews I want to further raise the issue that Reviewer3 raised for weight decay. I do think that it is unfair to pass to the learned optimizer as inputs the \\\"parameter values\\\" as indicated at the begining of the experimental section This allows them to effectively learn a weight decay update (and essentially simple prior functions over the weights) which could be the main (or even only) reason why the the proposed method perfrorms better than the baselines. Hence, I think for this experiment to be convincing you need to either exclude this term from the inputs to the MLP (and anything else that can resemble it) or alternatively include a weight decay parameter in the baseline algorithms and optimize that with respect to the final validation loss.\"}",
"{\"title\": \"Response to Author Feedback\", \"comment\": \"Thanks for the clarifications. Given them, I will slightly increase my score.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review! We will take these under consideration for a future submission. Comments addressed below.\\n\\nj=i+1: You are indeed correct. This was an unfortunate typo we realized just after submitting. Good catch and thank you for giving our work such a thorough review!\\n\\nYour points as to exploding or vanishing gradients are correct. If the optimal learning rate is known ahead of time (requiring knowledge of the hessian at each iteration) it is possible to not have exploding gradients. In practice, however, it is rarely possible to find useful bounds on the eigenvalues of the Hessian in neural network training. This, coupled with the fact that the optimal learning rate is often at the edge of unstable dynamics, can lead to learning rates in the unstable regime and thus exploding gradients.\\n\\nWe appreciate the comment about vanishing gradients. We will update this section discussing that gradients do not vanish -- only explode. (Note that the outer-parameters are used at every training iteration -- so even if the backpropagated outer-gradient shrinks exponentially with respect to unrolled optimization steps, it does not vanish with respect to the outer-parameters.)\\n\\n\\\" However, surprisingly here the authors rather than optimizing the learning rate, which they analyzed in the previous part of the section, they are now optimizing the momentum.\\\"\\n\\nHere, we use momentum as it more clearly maps to a physical phenomenon, in the hope that it provides better intuition. Similar behavior holds for learning rate -- it is possible to take a step that is either just under or just over a local maximum, resulting in diverging trajectories.\\n\\n\\\"in addition to the fact that usually large over-parameterized models behave differently than small models.\\\"\\nWe use these toy problems as a tool to build intuition as understanding the non-convex setting is very complex. That being said, we have done additional work (not included) around exploring these effects on larger problems. In particular, we are able to find multiple saddle points / paths to descend a loss function resulting in discontinuous trajectories, and exploding gradients (as in the toy case). We do this by taking 2d slices through the inner problems, and sweeping meta-parameters. We observe discontinuous jumps in final location and trajectory. We will look into adding, likely in the appendix, some examples of this behavior in large networks. We emphasize that Figure 3e already shows a slice through the outer-loss landscape for a neural network task and a neural network optimizer, and that pathological behavior in the unrolled loss landscape is visible in this figure.\", \"scale\": \"As of now, these methods are quite expensive. As a result, the field mostly explores meta-training on small scale tasks. Despite the expense, this work operates on considerably larger models than almost all prior work. We train conv-nets (as opposed to small MLPs) and train for 10k inner-iterations -- around 2 orders of magnitude longer than most existing work. We are extremely interested in pushing these methods further -- applying to even larger problems, but instability in meta-training has previously been a major obstacle to such scaling.\\n\\n\\\"Hence I think its slightly unfair to compare their \\\"final performance\\\" after this fixed period.\\\": When evaluating on training loss, we show optimization speed. 
With regard to validation loss, you are correct that we do not provide sufficient detail for the \\\"better final performance\\\" claim. In practice, we find overfitting occurs in under 20k inner-iterations (Shown in the dashed line on the figures). We will modify the text / figures to show the training step where existing optimizers begin to overfit.\\n\\n\\\" From the text it is also unclear whether the authors have optimized the parameters of the first-order methods with respect to their training or validation performance\\\"\\nWe do the latter, and will update the text to better emphasize this.\\n\\n\\\"Figure 6 - the results here seem to indicate that the learned optimizer transfers reasonably well, achieving similar performance to first-order methods (slightly faster validation reduction). Given however that these are plots for only 10000 iterations it is still unclear if this is scalable to larger problems.\\\"\\nTransfer to larger problems was not the focus of this work. We targeted stable training of task specific optimizers. In this context, 10k inner iterations is enough to achieve the best performance on these tasks (with learned optimizers). We agree that transfer to larger problems is critical for broad applicability, and are actively working on this.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review. We will take these under consideration for a future submission. We have addressed your comments below.\", \"truncation\": \"Due to space limitations, we were unable to include a more comprehensive introduction to truncated backpropagation through time (TBTT). We would emphasize though that TBTT is a standard approach in training RNNs, and that it has an identical meaning in the case of backpropagation through many timesteps of unrolled optimization as it does in backpropagation through many timesteps of RNN dynamics. We will update the text to further emphasize this correspondence.\\n\\nUnseen / validation data: We agree that the distinction between validation data on train tasks and validation data on *test* tasks was unclear. We have updated the text to clarify this. To answer your question: we never see any test data before test time. This includes both the training and validation/test images. We split the Imagenet dataset by class (700 for train, 300 for test) and outer-train on the training set (using both train and validation data from those 700 classes). When evaluating our model, we use the alternate set (the remaining 300 test classes), and only use the training images for training those models.\", \"combating_biased_gradients\": \"Previous work was unable to train with longer truncations because of exploding gradients. In this work, we can simply make the truncation length longer which reduces the bias in the gradients. By unrolling longer, we drop fewer terms from the true gradient, thus lowering bias. We will make this connection clearer in the text.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review! We will take these under consideration for a future submission. Comments addressed below.\", \"lack_of_good_baseline\": \"You raise a good point. We will update the paper to include a more extensive hyperparameter search for the baselines, including a denser learning rate search, and a search over regularization and learning rate decay parameters. We will also soften the claims surrounding speedups over hand designed optimizers.\\n\\nHowever, we would like to reemphasize that learned optimizers have not previously been shown to beat, or even match, standard first order optimizers on wall clock time. We would also like to reemphasize that previous approaches to meta-training learned optimizers required many tricks, and a lucky choice of random seed. A primary aim of our experiments was to provide proof that the proposed meta-training method works well and is reliable enough to train a simple mlp meta-optimizer without the complex tricks employed in other works. We believe that this work represents a significant step forward in training learned optimizers, and we are gratified to hear that you thought our analysis sections and proposed fixes were convincing.\\n\\n\\\"I don't see why the unnumbered equation necessarily leads to an exponential increase; H^{(j)} can be different for each j, such that there isn't a single term being exponentiated. Or am I mistaken?\\\"\\n\\nYou are correct that H^{(j)} can be different for each training iteration in the general case. In a quadratic setting, H is constant, and the outer-gradient will explode if the learning rate is too large. In the non-quadratic setting (any time the Hessian is changing) it is possible for this equation to be stable depending on the sequence of Hessians encountered. Empirically, we find that the optimal meta-parameters are right on the edge of instability, so it is common to enter the unstable regime, causing outer-gradients to then grow exponentially with the number of steps.\\n\\n\\\"The global minimum of the function is not 0.5 as stated in the caption\\\":\\nThis is a typo and will be fixed. Should be ~3.5 (the actual global min). Thank you for spotting this.\\n\\n\\\"The problem in Figure 3a is not the problem discussed in the text.\\\"\\nThis is discussed in the last paragraph of page 4.\", \"reproducibility\": \"We provide more detailed information about experiments in the appendix (page 11-12). Additionally, we are looking into options as to how to release evaluation code containing demonstrative problems and the weights of the learned optimizer.\"}",
"{\"title\": \"Interesting method, but oversold results\", \"review\": \"This paper tackles the problem of learning an optimizer, like \\\"learning to learn by gradient descent by gradient descent\\\" and its follow-up papers. Specifically, the authors focus on obtaining cleaner gradients from the unrolled training procedure. To do this, they use a variational optimization formulation and two different gradient estimates: one based on the reparameterization trick and one based on evolutionary strategies. The paper then uses a method from the recent RL literature to combine these two gradient estimates to obtain a variance that is upper-bounded by the minimum of the two gradients' variances.\\n\\nWhile the method for obtaining lower-variance gradients is interesting and appears useful, the application to learn optimizers is very much oversold: the paper states that the comparison is to \\\"well tuned hand-designed optimizers\\\", but what that comes down to in the experiments is Adam, SGD+Momentum, and RMSProp with a very coarse grid of 11 learning rates and *no regularization* and *no learning rate schedule*. The authors' proposed optimizer is just a one-layer neural net with 32 hidden units that gets as input basically all the terms that the hand-designed optimizers compute, and it has everything it needs to simply use weight decay and learning rate schedules -- precisely what you need for the authors' contributions (speed and generalization). This is a fundamental flaw in the experimental setup (in particular the choice of baselines) and thus a clear reason for rejection.\", \"some_details\": [\"While the authors' method is optimized by training 5 x 0.5 million, i.e. 2.5 million (!) full inner optimization runs of 10k steps each, the hand-designed optimizers get to try 11 values for the learning rate, which are logarithmically spaced between 10^{-4} and 10 (i.e., very coarsely, with sqrt{10} difference between successive values; even just for this fixed learning rate one would want to space factors by as little as 1.1 or so in the optimal region).\", \"The lack of any learning rate schedule for the baselines is highly problematic; it is common knowledge that learning rate schedules are important. This is precisely why one would want to do research on learning optimizers to set the learning rate! Of course, without learning rate schedules one will not obtain a very efficient optimizer and it is easy to show large speedups over that poor baseline (the authors' first stated contribution in the title).\", \"The authors' second stated contribution is that their learned optimizers generalize better than the baselines. But they pass their optimizers all information required to learn arbitrary weight decay, while the baselines are not allowed to use any weight decay. Thus, the second stated contribution in the title also does not hold up.\", \"There are many details in the experiments that would be hard to reproduce truthfully. Given the reproducibility crisis in machine learning, I would trust the results far more if the authors made their code available in anonymized form during the review period. If the authors did this I could also evaluate it against properly tuned baseline optimizers myself. 
In that case I would lean towards increasing my score since the availability of code for this line of work would be very useful for the community.\", \"Page 4 didn't print for me; both times I tried it came out as a blank page.\", \"Several issues on page 4:\", \"I don't see why the unnumbered equation necessarily leads to an exponential increase; H^{(j)} can be different for each j, such that there isn't a single term being exponentiated. Or am I mistaken?\", \"The problem in Figure 3a is not the problem discussed in the text\", \"The global minimum of the function is not 0.5 as stated in the caption\", \"It is not stated what sort of MLP there is in Figure 3d (again, code availability would fix things like this)\", \"Section 5 is extremely dense. This is the paper's key methodological contribution, and it is less than a page! I would suggest that the authors describe these methods in more detail (about another page) and save space elsewhere in the paper.\", \"The paper is written well and the illustrations of the issues of TBPTT, as well as the authors' fix are convincing. It's a shame, but unfortunately, the stated contributions for the learned optimizers do not hold up.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Overfitting is impossible, since the dataset was not seen during optimizer training\", \"comment\": \"Thank you for your interest and comment! Sorry for the slow response -- we did not receive email notification of your comment, and are only seeing it now.\\n\\nFigure 1 was generated by applying the learned optimizer to a holdout dataset that was not seen during optimizer training. So, the good performance can not be a result of the optimizer overfitting to the training or validation loss of its training tasks, since the optimizer is being evaluated on a task which it was not trained on.\\n\\nThe optimizer is specifically targeted at optimizing three layer CNNs -- the goal in this paper is to design an optimizer that is very good at optimizing a specific architecture. (Though as we explore in Figure 6, and Appendix E, it does nonetheless demonstrate some generalization to new architectures.)\\n\\nWe will modify the Figure 1 caption to better emphasize that the optimizer was not trained on the dataset it is being applied to in the figure, but that it was trained specifically to be very good at optimizing three layer CNNs.\"}",
"{\"title\": \"An interesting paper too condensed and difficult to understand\", \"review\": \"Review:\\n\\n\\tThis paper proposes a method to learn a neural network to perform optimization. The idea is that the neural network will receive as an input several parameters, including the weights of the network to be trained, the gradient, and so on, and will output new updated weights. The neural network that is used to compute new weights can be trained through a complicated process called un-rolled optimization. The authors of the paper show two problems with this approach. Namely, the gradients tend to explode as the number of iterations increases. Truncating the gradient computation introduces some bias. To solve these problems the authors propose a variational objective that smooths the objective surface. The proposed method is evaluated on the image net dataset showing better results than first order methods optimally optimized.\", \"quality\": \"The quality of the paper is high. It addresses an important problem of the community and it seems to give better results than first other methods.\", \"clarity\": \"The clarity of the paper is low. It is difficult to follow and includes many abstract concepts that the reader is not familiar with. I have had problems understanding what the truncation means. Furthermore, it is not clear at all how the validation data is used as a target in the outer-objective. It is also unclear how the bias problem is addressed by the method proposed by the authors. They have said nothing about that, yet in the abstract they say that the proposed method alleviates the two problems detected.\", \"originality\": \"As far as I know the idea proposed is original and very useful to alleviate, at least, one of the problems mentioned of exploding gradients.\", \"significance\": \"It is not clear at all that the method is evaluated on unseen data when using the validation data for outer-training. This may question the significance of the results.\", \"pros\": [\"Interesting idea.\", \"Nice illustrative figures.\", \"Good results.\"], \"cons\": [\"Unclear points in the paper with respect to what truncation means.\", \"The validation data is used for training and there is no left-out data, which may bias the results.\", \"Unclear how the authors address the bias problem in the gradients.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good idea, but needs more work\", \"review\": \"Summary:\\nThe paper presents a method for \\\"learning an optimizer\\\"(also in the literature Learning to Learn and a form of Meta-Learning) by using a Variational Optimization for the \\\"outer\\\" optimizer loss. The mean idea of the paper is to combine both the reparametrized gradient and the score-function estimator for the Variational Objective and weight them using a product of Gaussians formula for the mean. The method is simple and clearly presented. The paper also presents issues with the standard \\\"learning to learn\\\" optimizers, one being the short-horizon bias and as credited by the authors has been observed before in the literature, and the second one is what is termed the \\\"exponential explosion of gradients\\\" which I think lacks enough justification as currently presented (see below for details). The ideas are clearly stated, although the work is not groundbreaking, but more on combining several ideas into a single one.\", \"experiments\": \"I have a few key issues with the experimental setup, which I think need to be addressed:\\n\\n1. The CNN being optimized is quite small - only 3 layers. This allows the authors to train everything on a CPU. The key issue here, as well with previous work on Learning to Learn, is that it is not clear how scalable is this method to very Deep Networks. \\n\\n2. Figure 1 - The setup is to optimize the problem for 10000 iterations, however, I think it is pretty clear even to the naked eye that the standard first-order optimizers (Adam/RMS/Mom) have not fully converged on this problem. Hence I think its slightly unfair to compare their \\\"final performance\\\" after this fixed period. Additionally using the curriculum the \\\"meta\\\"-optimizer is trained explicitly for 10000 iterations. Hence, it is also unclear if it retains its stability after letting it run for longer. From the text it is also unclear whether the authors have optimized the parameters of the first-order methods with respect to their training or validation performance - I hope this is the latter as that is the only way to fairly compare the two approaches. \\n\\n3. Figure 6 - the results here seem to indicate that the learned optimizer transfers reasonably well, achieving similar performance to first-order methods (slightly faster validation reduction). Given however that these are plots for only 10000 iterations it is still unclear if this is scalable to larger problems.\", \"conclusion\": \"As a whole, I think the idea in the paper is a good one and worth investigating further. However, the objections I have on section 2.3 and the experiments seem to indicate that there needs to be more work into this paper to make it ready for publication. \\n\\n\\nOn section 2.3 and the explosion of gradients:\\n\\nThere is a mistake in the equation on page 4 regarding the \\\"gradient with respect to the learning rate\\\". Although the derivation in Appendix A is correct, the inner product in the equation starts wrongly from j=0, where it should in fact start at j = i + 1. To be more clear the actual enrolled equation for dw^T/dt for 3 steps back is:\\n\\ndw^T/dt = (I - tH^{T-1})(I - tH^{T-2})(I - tH^{T-3}) dw^{T-3} - (I - tH^{T-1})(I - tH^{T-2}) g^{T-3} - (I - tH^{T-1}) g^{T-2} - g^{T-1} \\n\\nHence the product must start at j = i + 1. \\nIt is correct that in this setting the equation is a polynomial of degree T of the Hessian, however, there are several important factors that the authors have not discussed. 
Namely, if the learning rate is chosen accordingly such that the spectral radius of the Hessian is less than 1/t then rather than the gradient exploding the higher order term will vanish. However, even if they do vanish for large T since the Hessian plays with smaller and smaller power to more recent gradients (after correcting the mistake in the equation) than the actual T-step gradient will never vanish (in fact even if tH = I then dw^T/dt = g^{T-1}). Hence the claims of exploding gradients made in this section coupled with the very limited theoretical analysis seem to unconvincing that this is nessacarily an issue and under what circumstances they are. \\n\\nThe toy example with l(w) = (w - 4)(w - 3) w^2 is indeed interesting for visualizing a case where the gradient explosion does happen. However, surprisingly here the authors rather than optimizing the learning rate, which they analyzed in the previous part of the section, they are now optimizing the momentum. The observation that at high momentum the training is unstable are not really surprising as there are fundamental reasons why too high momentum leads to instabilities and these have been analyzed in the literature. Additionally, it is not mentioned what learning rate is used, which can also play a major role in the effects observed. \\n\\nAs a whole, although the example in this section is interesting, the claims made by the authors and some of the conclusions seem to lack any significant justifications, in addition to the fact that usually large over-parameterized models behave differently than small models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJfUCoR5KX | An Empirical study of Binary Neural Networks' Optimisation | [
"Milad Alizadeh",
"Javier Fernández-Marqués",
"Nicholas D. Lane",
"Yarin Gal"
] | Binary neural networks using the Straight-Through-Estimator (STE) have been shown to achieve state-of-the-art results, but their training process is not well-founded. This is due to the discrepancy between the evaluated function in the forward path, and the weight updates in the back-propagation, updates which do not correspond to gradients of the forward path. Efficient convergence and accuracy of binary models often rely on careful fine-tuning and various ad-hoc techniques. In this work, we empirically identify and study the effectiveness of the various ad-hoc techniques commonly used in the literature, providing best-practices for efficient training of binary models. We show that adapting learning rates using second moment methods is crucial for the successful use of the STE, and that other optimisers can easily get stuck in local minima. We also find that many of the commonly employed tricks are only effective towards the end of the training, with these methods making early stages of the training considerably slower. Our analysis disambiguates necessary from unnecessary ad-hoc techniques for training of binary neural networks, paving the way for future development of solid theoretical foundations for these. Our newly-found insights further lead to new procedures which make training of existing binary neural networks notably faster. | [
"binary neural networks",
"quantized neural networks",
"straight-through-estimator"
] | https://openreview.net/pdf?id=rJfUCoR5KX | https://openreview.net/forum?id=rJfUCoR5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeMV80egV",
"B1eVJiKyCX",
"Skgim8YkCQ",
"B1xMGsOyR7",
"Bklf3h1anX",
"BJe-izKw3X",
"Bkxxc8rEn7",
"HkxOUX34o7",
"Syx7Ex4c9X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544771114224,
1542589147727,
1542587938994,
1542585097534,
1541369001901,
1541014169123,
1540802183837,
1539781455659,
1539092523364
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper899/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper899/Authors"
],
[
"ICLR.cc/2019/Conference/Paper899/Authors"
],
[
"ICLR.cc/2019/Conference/Paper899/Authors"
],
[
"ICLR.cc/2019/Conference/Paper899/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper899/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper899/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper899/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper summarizes existing work on binary neural network optimization and performs an empirical study across a few datasets and neural network architectures. I agree with the reviewers that this is a valuable study and it can establish a benchmark to help practitioners develop better binary neural network optimization techniques.\", \"ps\": \"How about \\\"An empirical study of binary neural network optimization\\\" as the title?\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Updated the paper to include some results on larger ImageNet dataset\", \"comment\": \"We would like to thank the reviewer for the constructive comments.\\n\\nOur aim in this paper is to provide useful empirical observations and generate possible hypotheses that explain them, rather than to make new claims or theoretical analysis. It is true that we have provided some hypotheses about what might be going on, but at the end of the day, it is difficult to prove such new claims through empirical research. We did not aim to present conclusive observations for \\\"X is necessary for Y\\\" but rather give empirical support that \\\"X seems to be tightly connected to Y\\\", i.e. generating hypothesis to be validated in future theoretical research.\\n\\n> More datasets and architectures\\nWe do agree that in this line of work would benefit from more datasets and model architectures. We intended to repeat our experiments with larger datasets, but a hyperparameter search similar to what we have done for smaller standard datasets is computationally difficult on much larger datasets such as ImageNet dataset. \\n\\nHowever, we have now updated the paper to include results on ImageNet for Section 4 of the paper\\n\\n> Convergence in Figure 4\\nThis touches on the same points raised by another reviewer regarding early stopping. In our experiments, we tried to give the training phase in all experiments more than enough time to converge, but some optimizers (like vanilla SGD) simply fail in many scenarios to converge.\\n\\n> In figure 5, the red curve is \\\"Clipping gradients\\\", which one is correct?\\nThank you for reporting this error. We have updated the paper to correct the order of items in the legend.\\n\\n> Baselines for training binary networks faster\\nWe believe these results are already included in Table 5 where \\\"end-to-end\\\" denotes the original counterpart experiments. Do let us know if you have something different in mind.\"}",
"{\"title\": \"We need to know what works and what doesn't work before we can try to explain the things that work\", \"comment\": \"We would like to thank the reviewer for the careful review and useful comments.\\n\\nWe do agree on the reviewer's point that going forward, a rigorous understanding of these techniques is vital. There have been attempts to provide such theoretical justifications in the literature [1,2] but their scope has been limited. It was not our intention to provide such rigorous theoretical justifications in this paper, but we hope that our empirical work in identifying and understanding the effects of ad-hoc parameters and techniques could clarify things and form a precursor for such theoretical analysis.\", \"regarding_your_specific_comments\": \"> Length of the experiments\\nWe have not used any budget limitation or early stopping in any of the experiments but this is a great point. In fact it makes a lot of sense to frame some our findings in the context of early stopping. In many cases we observed that once validation accuracy stops improving, there is often not a meaningful improvement in remaining training steps. We show how those extra steps allow squeezing a bit more accuracy from the model by training it for a very long time and relying on noise sources. Early stopping could be a good way to think about the actual capability of STE. \\n\\nWe have updated the paper to make it clear where early stopping fits in our study. We have also updated the final \\u201cBest-Practices\\u201d section accordingly.\\n\\n> Deterministic vs. stochastic binary weights\\nGood point. The abstract has now been updated to make it clear that we are studying deterministic binary models and not the stochastic ones.\\n\\n> Statistical significance of Table 3\\nWe have now updated the table in the paper to include the average results over 5 runs. Thank you for pushing us to do this. \\n\\n> Curves and figures for the two-stage clipping\\nWe will update the paper soon to include these figures.\\n\\nFinally, we have updated the paper to include results on ImageNet dataset for Section 4. Hopefully this will make our experiments more comprehensive.\\n\\n[1] Li, Hao, et al. \\\"Training quantized nets: A deeper understanding.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] Anderson, Alexander G., and Cory P. Berg. \\\"The high-dimensional geometry of binary neural networks.\\\" arXiv preprint arXiv:1705.07199 (2017).\"}",
"{\"title\": \"Updated the paper to include the whitepaper\", \"comment\": \"We would like to thank the reviewer for the constructive comments. We are glad the reviewer found our paper useful and well-written. In agreement with the reviewer, we also believe that the characterization of compression techniques will become more and more important for the community as we go forward.\\n\\nAlso, thank you for bringing the paper from the TensorFlow team [1] to our attention. This whitepaper is very relevant to the direction we have pursued in this paper. Interestingly, similar to our findings, it also identifies that Batch Normalisation should be handled differently when training quantized models to obtain better accuracies. It also looks at the consequences of using ReLU6 vs. ReLU. Another recent paper [2] recommended using PReLU to achieve better accuracy in Binary networks. It may be interesting to look more carefully at the choice of non-linearity function and its effects on the performance of quantized models. \\n\\nWe have now uploaded a newer version of our paper in which we have embedded your comments. The paper now points to this whitepaper in the relevant parts (Sec 1 and Sec 3.3), and we think it has made it a lot better. We have also tried to make our empirical study more comprehensive by adding results on ImageNet dataset for Section 4. \\n\\n[1] Krishnamoorthi, Raghuraman. \\\"Quantizing deep convolutional networks for efficient inference: A whitepaper.\\\" arXiv preprint arXiv:1806.08342 (2018).\\n[2] Tang, Wei, Gang Hua, and Liang Wang. \\\"How to train a compact binary neural network with high accuracy?.\\\" AAAI. 2017.\"}",
"{\"title\": \"Good review, relevant recommendations for a valuable research area\", \"review\": [\"This is a good review paper covering techniques proposed across many of the well-known works in this area and doing an in-depth analysis of the value each of the techniques brings. Additionally, based on these studies the paper offers insights into the best algorithms and procedures to combine to achieving good results.\", \"One recent whitepaper that has related work (not fully overlapping though), that may be worth looking by the authors is at https://arxiv.org/abs/1806.08342. It is fairly new and not very well-known so not surprising that the authors missed it.\", \"Pros\", \"Well written paper with lots of in-depth experiments\", \"Does well at teasing out the impact of each of the techniques and gives some intuitive explanations of why they matter.\", \"Provides better insights into how to make training of binary neural networks faster.\", \"As the importance of low precision networks grows, this is a valuable paper in pushing the area of research forward.\", \"Cons\", \"A review paper, which doesn't add much new to the existing suite of techniques. Note: This is true for most review papers.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Useful empirical study of existing methods\", \"review\": \"The paper systematically studies the training of binary neural networks, where binary in this case refers to single bit weight elements in the network. In particular, different existing training methods are tested and compared for training both MLPs and CNNs.\", \"the_main_findings_of_the_paper_are\": \"- Using methods such as AdaGrad, AdaDelta, RMSProp and ADAM yields better performance than simpler momentum-based methods such as vanilla momentum and Nesterov momentum, which in turn are much better than vanilla SGD\\n- When training binary models, it is common to clip weights and/or gradients for the proxy weights in the network. In the paper it is however shown that these methods hinder using a fast learning rate in the beginning of training, while the methods are required in later stages of training in order to achieve good results\\n- Pre-training the model with full-precision training works well in speeding up training\\n\\nFor a practitioner, the paper presents a very useful reference for what methods work well when training binary networks. Although there are some proposals and hypotheses for reasons behind the results, I see the paper as a review paper of existing methods for training binary networks, showing experiments where the methods are tested using the same benchmark and training procedure in order to give a fair comparison.\\n\\nAs a practical guide, the paper therefore has clear value. What is lacking compared to typical ICLR papers is rigorously presenting new findings. The authors present a hypothesis for why different batch sizes are needed in the beginning compared to the end of training, but I found neither the justification nor the results very convincing with respect to the hypothesis. The way I see it, the actual novel proposals that are made in the paper are two two-stage training methods: one in which the tricks of weight and gradient clipping are only used towards the end of training, and one where the first stage of training is done using a full precision model. It is however quite well known that some training schemes with different stages can lead to improved performance: for instance with ADAM, even if it is an adaptive method, lowering the learning rate towards the end of training is often beneficial. It might therefore be fair to compare the methods to other multi-stage training methods. In addition, I could not find the training curves or final performance figures of the method where clipping is only activated towards the end of training.\\n\\nTo put it all together, the paper is clearly useful for the community as it provides a useful summary of the performance of different methods for training binary neural networks. In addition, it presents two two-stage training schemes that seem to make training even faster. What the paper lacks is rigorous theoretical justifications and clearly novel ideas.\", \"small_comments\": [\"How are the training lengths decided for the different methods? If I am not mistaken, in Figure 2, it seems like the SGD and momentum methods have not yet converged when training is halted. Is there a budget for wall clock time or is early stopping used or something similar? Considering the nature of the paper, I would see this kind of decisions as important to report.\", \"In the abstract, you might want to refer to binary weights somewhere. 
Based on the abstract it is easy to mix the binary networks in this paper with stochastic binary networks that can also be trained using the STE estimator\", \"Are the differences in the performances in Table 3 statistically significant?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The authors made several claims and provide suggestions on training binary networks. However, the experiments are somewhat not sufficient to support the proposed hypothesis.\", \"review\": \"The authors made several claims and provide suggestions on training binary networks, however, they are not proved or theoretically analyzed. The empirical verification of the proposed hypothesis was viewed as weak as the only two datasets used are small datasets MNIST and CIFAR-10, and the used network architectures are also limited. Much more rigorous and thorough testing is required for an empirical paper which proposes new claims.\\n\\nTake the first claim \\\"end-to-end training of binary networks crucially relies on the optimiser taking advantage of second moment gradient estimates\\\" as an example. As it is known that choice of optimizer is highly dependent on the specific dataset and network structure, it is not convincing to jump to this conclusion using the observations on two small datasets and limited network architectures. E.g, many binarization papers use momentum for ImageNet dataset with residual networks. Does Adam also outperforms momentum in this case? Similarly, it is also hard for me to judge whether the other conclusions made about weight/gradient clipping, the momentum in batch normalization and learning rate, are correct or not.\", \"some_minor_issues_are\": \"1. In Figure 4, different methods are not run to convergence, and the comparison may not be fair.\\n2. The second paragraph in section 4: \\\"It can be seen that not clipping weights when learning rates are large can completely halt the optimisation (red curve in Figure 5).\\\" However, in figure 5, the red curve is \\\"Clipping gradients\\\", which one is correct?\\n3. The authors propose a recipe for faster training of binary networks, is there experiments supporting that training networks with the proposed recipe is faster than the original counterpart?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you very much for your comment. Both of your points are valid. We will definitely add details of the used architecture once we can make changes to the paper. We were planning to include results on ImageNet but unfortunately it did not make the deadline due to limited time and resources. It's difficult to train hundreds of model configurations on ImageNet dataset (as we've done for cifar-10) given its complexity. However, we will include some (but not comprehensive) results on ImageNet once the system allows us to update the paper.\"}",
"{\"comment\": \"1.you said you use VGG-10 on cifar10 \\uff0ccan you put up the framework\\uff1f\\n2. you know cifar10 is a small dataset, how about your expriments on ImageNet?\", \"title\": \"Can you put up VGG-10 \\uff1f\"}"
]
} |
|
B1e8CsRctX | Generative Ensembles for Robust Anomaly Detection | [
"Hyunsun Choi",
"Eric Jang"
] | Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms ODIN and VIB baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset. | [
"Anomaly Detection",
"Uncertainty",
"Out-of-Distribution",
"Generative Models"
] | https://openreview.net/pdf?id=B1e8CsRctX | https://openreview.net/forum?id=B1e8CsRctX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJeoVE_xlE",
"S1eGNUWDk4",
"S1gUWF9U1N",
"S1gbeoTt0X",
"B1gvpV3tAQ",
"SyefX0ayAX",
"ByxCtppk0m",
"SyxJSTTy0Q",
"r1xAPnTy0X",
"SJxjubiqhm",
"ryl3dTZYhX",
"BJxgXKZdhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544746035470,
1544128042038,
1544100094234,
1543260904532,
1543255230800,
1542606361612,
1542606213979,
1542606135283,
1542605925779,
1541218674718,
1541115252137,
1541048600322
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper898/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/Authors"
],
[
"ICLR.cc/2019/Conference/Paper898/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper898/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper898/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper suggests the use of generative ensembles for detecting out-of-distribution samples.\\n\\nThe reviewers found the paper easy to read, especially after the changes made during the rebuttal. However, further elaboration in the technical descriptions (and assumptions made) could make the work seem more mature, as R2 and R1 point out. \\n\\nThe general feeling by reading the reviews and discussions is that this is promising work that, nevertheless, needs some more novel elements. A possible avenue for increasing the contribution of the paper is to follow R1\\u2019s advice to extract more convincing insights from the results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Promising but more work needed to reach maturity\"}",
"{\"title\": \"Re: Response\", \"comment\": \"Thank you for the detailed feedback. That's really helpful.\", \"re\": \"Histograms. We are a bit confused now as to what you mean by \\\"overlap\\\" based scoring rule. Under our experimental setup, anomaly detection is performed on a per-example basis from the test distribution. Although we eval AUROC on an empirical test set, we don't have access to the a population of test samples in the scoring rule. So it is not possible to compute \\\"overlap\\\" between two histograms of test points because we evaluate each test point independently.\\n\\nReferencing your earlier comment, it is possible to use training histograms to build a scoring rule. As you described, we can construct indicator function that classifies a data point as an anomaly if it has lower likelihood than the least probable training point or higher likelihood than the most probable training point (where training points are from the empirical training distribution). We can update our results with such a baseline (if this is what you intended), though as we've said before, MNIST and Fashion MNIST test distributions (and NotMNIST too) have considerably overlapping histograms, so it is doomed to fail. We don't think that this is a sufficiently strong baseline for the purposes of evaluating our method.\\n\\n- Re: As the number of data points n grows large, the Expectation of WAIC converges to generalization loss (which is a surrogate objective for KL distance between model and true distribution). See Eq 31. http://www.jmlr.org/papers/volume14/watanabe13a/watanabe13a.pdf and Watanabe 2009, 2010b for proofs. Now suppose we have a modified objective WAIC2 = log p(x) + alpha * Var[log p(x)]. For alpha != 1, this would result in a biased asymptotic estimate of generalization error.\\n\\nThat said, calibrating alpha according to a validation set might yield better AUROC. However, we avoid doing this in our experiments because it would presupposing an OoD distribution (validation set), which may lead to poor performance on the test OoD distribution (which may be different than validation set). Also, to making comparison to prior work easier (since AUROC is supposed to be threshold-independent for a single scalar), we didn't modify the WAIC score function.\"}",
"{\"title\": \"Response\", \"comment\": [\"Thanks for the response!\", \"Some of my concerns regarding clarity have been addressed. Must note that clarity can still benefit from some more editing (a self-contained paper on anomaly detection will describe the experimental setup rather than just referring the reader to two other papers, the GAN notation of q_\\\\theta is clear to me now but is frankly unnecessary imo, details on how many ensembles were trained and how did they differ, etc.).\", \"Re: Posterior Collapse. I also appreciate the results on this experiment.\", \"Based on the first two points, I have updated by score. However, I found the response to the other concerns rather dissatisfying.\", \"Re: Histograms. Besides including the training likelihood results for the datasets in the submission, I think the AUROC based on an \\\"overlap\\\" based scoring rule is a very reasonable and important baseline to include before the expensive process of training ensembles.\", \"WAIC. I think my question was orthogonal to the link you provide to. I was more interested in knowing why the mean and variance terms should be weighted equally, rather than having a hyperparameter controlling their strengths which could be decided based on e.g., a validation set. Some intuition/experiments in this regard would have been welcome.\"]}",
"{\"title\": \"Updated Fashion MNIST numbers to fix a bug.\", \"comment\": \"Our improved VAE experiments on Fashion MNIST had a minor evaluation bug in which some OoD test samples from Omniglot got mixed up into other distributions' evaluation. We've updated the paper to fix this error. After the rebuttal deadline, we'll update our related work section to discuss some of the GAN papers R2 mentioned in their recent comment.\"}",
"{\"title\": \"Explaining Adv Defense definition, and thanks for the references!\", \"comment\": \"Our perspective that \\u201cAdversarial Defense is making ML robust to OoD inputs\\u201d has been established in prior work on, see citations from prior work on model-independent interpretations of adversarial examples being \\u201coff the data manifold\\u201d (https://arxiv.org/abs/1801.02774 [8, 9, 10, 11]). In line with the ideas from https://arxiv.org/abs/1807.06732, we also argue that there are a lot of other \\u201coff-manifold\\u201d inputs besides Lp-norm perturbations. The presence of an adversary can also be regarded as \\u201cworst case inputs\\u201d, which is why we don\\u2019t focus on whether OoD inputs originate from a human adversary or not.\", \"as_for_whether_model_information_can_be_merged_into_ood_samples\": \"we agree that the OoD problem *is* model-independent -- constructing an OoD input does not *require* considering a model. But, this does not preclude the use of a model to construct an OoD input (after all, we have implicit models in our heads when we declare what an OoD input is with respect to the population data).\\n\\nOne type of adversarial example, which we explored in the paper, is to take a reference input that is constructed independently of the model (e.g. Gaussian noise) and perturb it according to a likelihood model which happens to be the one model (or ensemble member) we evaluate it on.\\n\\n--\\nThank you for providing these references for our consideration. These papers all use adversarially trained generators to supply a discriminator with OoD inputs. As we already discussed in Section 2.1, the GAN perspective on anomaly detection is complicated by the fact that every GAN discriminator is typically regarded as an anomaly detector, but in practice is just a discriminative model between p(x) and some q(x) produced by randomized training dynamics. A single GAN discriminator is not a proper generative model for general OoD detection, since p(x)/q(x) is not very good at OoD detection on samples that lie in neither p(x) nor q(x). \\n\\nDespite our stated limitations of simply training a single GAN for OoD detection, the Schlegl et al., Deecke et al., and Kliger et al. works demonstrate good results on a OoD task definition setup to ours so we will revise our \\u201cfirst work\\u201d claim in the paper. Thanks for catching this error.\\n\\nThe Lee et al. work trains a GAN to provide OoD inputs a la adversarial augmentation, but is actually a model-dependent OoD detection-via-a predictive uncertainty metric method (e.g. like Deep Ensembles).\"}",
"{\"title\": \"Overall rebuttal comment from authors\", \"comment\": \"We thank the reviewers for helpful feedback and highlighting points of confusion in our paper.\\n\\nIn considering all 3 reviewers\\u2019 comments (R1 \\u201creads more like a summary blog post\\u201d, R2 \\u201cthe ideas as well as reasoning flow smoothly\\u201d, R3 \\u201cwell-written and easy to follow while providing useful insight and connecting previous work to the subject of study\\u201d) we believe that all reviewers consider our presentation to be logically clear, but may be lacking in technical clarity (raised by Reviewer 1) or novelty (raised by Reviewer 2). There is especially some confusion regarding our notation and how it relates to GAN models for anomaly detection (e.g. \\u201cposterior distribution over alternate distributions\\u201d). \\n\\nTo address technical clarity issues raised by R1, we\\u2019ve answered their questions in comments and made edits to our paper to make the problem setup and notation more clear. We\\u2019ve responded directly to R2\\u2019s comment on why we believe our work is novel. \\n\\nFinally, we\\u2019ve updated the paper with improved VAE experiments on Fashion MNIST (confirming our hypothesis of posterior collapse).\"}",
"{\"title\": \"Thanks!\", \"comment\": \"We thank Reviewer 3 for the review and highlighting missing details from our paper. We\\u2019ve added them into the paper.\\n\\n> - How does the size of the ensemble influence the measured performance?\\n\\nFor CIFAR10, we have found 5 ensembles to make a large difference over 3 ensembles (about .7 AUROC). There seem to be diminishing returns for models > 5.\\n\\n> - It is Fast Gradient Sign Method (FGSM), not FSGM. See [1]. Citing [1] for FGSM would also be appropriate.\\n\\nFixed, and already cited. Thanks!\"}",
"{\"title\": \"Addressing concerns about novelty and use of GANs\", \"comment\": \"We thank Reviewer 2 for their praise and raising concerns about novelty. It is an important point worth discussing.\\n\\nIn addition to proposing a superior method for anomaly detection, part of the novel contribution in this work involved synthesizing concepts from multiple fields likelihood estimation techniques from deep generative models, adversarial defense, model uncertainty, challenging discriminative anomaly detection methods and their relationship to GAN discriminators. \\n\\nWe tie these disparate concepts together into a unified perspective on the OoD problem. Therefore, we took great care into making sure the motivation of our work transitions smoothly, perhaps even to the point of stating the obvious to Reviewer 2. We emphasize that to our knowledge, our work is the first to extend our understanding of the OoD problem in context of prior work in generative modeling, Bayesian Deep Learning, and anomaly detection applications for modern generative models. These connections are not well known in the community and we hope that our paper will amend that.\", \"additional_novel_aspects_of_this_work\": \"The observation that density estimators (as implemented by a deep generative model) are NOT robust to OoD inputs themselves is a novel observation, concurrent with another ICLR submission. To our knowledge, we are also the first work to leverage the modern advancements in deep generative models to perform anomaly detection on high-dimensional inputs such as images.\\n\\nTo address R2\\u2019s comments \\u201cThe reasoning lists the problems of GANs\\u201d and \\u201cWhy to choose GANs though in the first place?\\u201d, we emphasize that we are not saying GANs shouldn\\u2019t be used for anomaly detection, only that their lack of exact likelihoods presents some challenges. We make an effort to make them work in our paper in our comparison to other generative model families.\\n\\n> - page 1: \\\"When training and test distributions differ, neural networks may provide ...\\\" \\n\\nThere are varying degrees of \\u201cout-of-distribution-ness\\u201d at test time. One way to carve up the problem specification is to consider inputs that (1) are different than the training set but you want the model to perform well on anyway, e.g. a subtle change in physics parameters a robot encounters when deployed. (2) inputs the model has no business classifying, i.e. showing a picture of a building to a cat/dog classifier. \\n\\nThe first situation is what you are describing, in which methods like sim2real, domain adaptation, meta-learning can address. As we stated in Section 3.1, our paper primarily deals with the second case, in which you don\\u2019t want the model to give bogus outputs for bogus inputs, which also may be adversarial. We appreciate the feedback that this might be confusing if the reader is assuming problem formulation (1); we welcome the other reviewers to chime in here if it would make things more clear to state this.\"}",
"{\"title\": \"Addressed issues of technical clarity, performed follow-up experiments on posterior collapse\", \"comment\": \"Thank you for the detailed review and critique.\\n\\nWe agree that \\u201cDo Deep ... They Don\\u2019t Know?\\u201d shares a concurrent discovery with us in identifying how generative models assign wrong likelihood to OoD inputs, and have updated our paper to cite their contribution. Our contributions differ in that their work performs analysis of why this phenomenon occurs, while we demonstrate that this can be fixed by using uncertainty estimation and WAIC, and then apply these fixed models to the OoD problem.\\n\\nWe agree that our paper could use more technical clarity, i.e. make this work easier to reproduce. The open-sourced code will be linked to the paper after double-blind review process, which we believe to be the highest standard of technical clarity when specifying our method and evaluation metrics. In the meantime, we\\u2019ve also done the following:\\n\\n1. We\\u2019ve clarified Section 4 to re-iterate that our anomaly detection problem specification is identical to that of Liang et al. 2017 and Alemi et al. 2017, and our evaluation metric (AUROC) is the same.\\n\\n2. Clarified the notation of our notation for p, q, p_theta, q_theta in the paper. We think that R1\\u2019s confusion on our GAN ensemble setup can be addressed by clarifying the reasoning behind our terminology, and explaining a bit further what it means to \\u201crandomly sample a discriminator from a posterior distribution over alternate distributions\\u201d\\n\\nThe choice of terminology is motivated by our GAN variant of generative ensembles. If p(x) is the true generative distribution, p_theta(x) is some generative model\\u2019s approximation of it. In Eq (1), theta is a (multivariate) random variable parameterizing an abstract generative model (e.g. weights in a neural network). We\\u2019ve clarified this in the intro. \\n\\nIn the case of GANs, a subset of the variable theta parameterizes the generator and a subset of theta parameterizes the discriminator. Therefore, samples from the generator come from a generative distribution q_\\\\theta(x). We notate a GAN generator\\u2019s distribution as q_\\\\theta(x) and not p_\\\\theta(x) (which we use for referring to normalizing flow and VAE likelihood models) is that in GANs, the discriminator is being optimized to learn a likelihood ratio p(x) / q_\\\\theta(x). That is, separating true data samples from p(x) from OoD samples from q_\\\\theta(x). \\n\\nThus, q(x) and q_theta(x) always refer to OoD distributions. This also makes discussion more clear in the context of discriminative anomaly detection classifiers (which learn p(x)/q(x)) and GAN discriminators (which learn p(x)/q_theta(x)).\\n\\nIn Section 2.1, we mention \\u201crandomly sampled discriminators\\u201d and \\u201cposterior distribution over alternate distributions\\u201d. Models (theta) trained under SGD can be assumed to be drawn randomly from some posterior distribution over p(theta|x). In a GAN, random variable theta specifies the alternate distribution q_\\\\theta(x), or equivalently, the implicit discriminator likelihood ratio p(x) / q_\\\\theta(x) (when the discriminator is trained with sigmoid cross entropy, which we do). Our GAN ensembles samples entire GANs (i.e. generator and discriminator) together, by training 5 GANs independently and then combining discriminator predictions for OoD classification. 
It would be problematic to sample only discriminators in the training process, since that does not change q_\\\\theta(x) (and there is the question of how feedback to the generators should be accomplished in this manner).\", \"technical_assessment_questions\": [\"Re: Histograms. This is a reasonable suggestion, and resembles the interpretation of likelihood predictions as a feature, rather than a scoring function. The scoring function you propose is a min/max function over the distribution of features. Another approach would be a statistical hyppothesis test using the training distribution\\u2019s likelihood predictions as the variable of interest. Unfortunately, the likelihoods of OoD distributions often overlap with the in-distribution test samples (MNIST and Fashion MNIST VAEs). In training a GLOW model, you will also find a gap between train and test likelihoods. So generative models are not good enough yet to reduce the generalization gap of likelihood models zero.\", \"We refer the reviewer to \\\"Understanding predictive information criteria for Bayesian models\\\" (Gelman et al.) for a motivation of the WAIC objective. In short, the variance term is a correction for how much the fitting of k parameters will increase predictive accuracy, by chance alone. K is estimated by the variance.\", \"Re: Posterior Collapse: Good suggestion! We went back to our VAE setup and ran a few follow-up experiments to prove this hypothesis. The short answer is that \\u201cyes, decreasing Beta reduced posterior collapse and made things better\\u201d. We\\u2019ve edited section 4.1 to document our findings.\"], \"minor_typos\": \"They have been fixed in the latest revision. Thank you so much for catching these!\"}",
"{\"title\": \"Needs a lot of work on improving technical rigor and clarity\", \"review\": [\"Note to Area Chair: Another paper submitted to ICLR under the title \\u201cDo Deep Generative Models Know What They Don\\u2019t Know?\\u201d shares several similarities with the current submission.\", \"This paper highlights a deficiency of current generative models in detecting out-of-distribution based samples based on likelihoods assigned by the model (in cases where the likelihoods are well-defined) or the discriminator distribution for GANs (where likelihoods are typically not defined). To remedy this deficiency, the paper proposes to use ensembles of generative models to obtain a robust WAIC criteria for anomaly detection.\", \"My main concern is with the level of technical rigor of this work. Much of this has to do with the presentation, which reads to me more like a summary blog post rather than a technical paper.\", \"I couldn\\u2019t find a formal specification of the anomaly detection setup and how generative models are used for this task anywhere in the paper.\", \"Section 2 seems to be the major contribution of this work. But it was very hard to understand what exactly is going on. What is the notation for the generative distribution? Introduction uses p_theta. Page 2, Paragraph 1 uses q_theta (x). Eq. (1) uses p_theta and then the following paragraphs use q_theta.\", \"In Eq. (1), is theta a random variable?\", \"How are generative ensembles trained? All the paper says is \\u201cindependently trained\\u201d. Is the parameter initialization different? Is the dataset shuffling different? Is the dataset sampled with replacement (as in bootstrapping)?\", \"\\u201cBy training an ensemble of GANs we can estimate the posterior distribution over model deciscion boundaries D_theta(x), or equivalently, the posterior distribution over alternate distributions q_theta. In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator\\u201d Why is the discriminator parameterized by theta? What is an ensemble of GANs? Multiple generators or multiple discriminators or both? What are \\u201crandomly sampled discriminators\\u201d? What do the authors mean by \\\"posterior distribution over alternate distributions\\\"?\", \"With regards to the technical assessment, I have the following questions for the authors:\", \"In Figure 1, how do the histograms look for the training distribution of CIFAR? If the histograms for train and test have an overlap much higher than the overlap between the train of CIFAR and test set of any other distribution, then ensembling seems unnecessary and anomaly detecting can simply be done via setting a maximum and a minimum threshold on the likelihood for a test point. In addition to the histograms, I'd be curious to see results with this baseline mechanism.\", \"Why should the WAIC criteria weigh the mean and variance equally?\", \"Did the authors actually try to fix the posterior collapse issue in Figure 3b using beta-VAEs as recommended? Given the simplicity of implementing beta-VAEs, this should be a rather easy experiment to include.\"], \"minor_typos\": [\"ODIN and VIB are not defined in the abstract\", \"Page 3: \\u201cdeciscion\\u201d\", \"Page 2, para 2: \\u201clog_\\\\theta p(x)\\u201d\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well below the ICLR level\", \"review\": [\"Novelty is minimal and is well below the level required by ICLR.\", \"The reasoning lists the problems of GANs and then the fact that GAN ensembles would target that, based on a toy example in Figure 2.\", \"Why to choose GANs though in the first place? Given the buildup, and given the other well-known training issues about GANs, are they the right choice for the basic modeling units, i.e. the ensemble units, in such case? A GANs adversary bases its comparisons on individual data points, rather than on distribution comparisons or on groups of points like MMD, etc. I understand the reasoning behind the choice of generative models (GMs), but it is choosing GANs out of the set of GMs in this particular case that I am referring to.\", \"The paper is quite well written. The ideas as well as the reasoning flow very smoothly.\", \"Experiments are well prepared.\"], \"rather_minor\": [\"page 1: \\\"When training and test distributions differ, neural networks may provide ...\\\" This is true but may be a clarification here regarding the fact that the neural networks involved with several modeling problems, e.g. the ones trained for domain adaptation or meta-learning tasks, target this shift or difference in domains, and typically provide a way to tackle this problem.\"], \"uodate\": \"Read the rebuttal. My score remains unchanged.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting combination of the previous work with useful results.\", \"review\": \"The authors present an OOD detection scheme with an ensemble of generative models. When the exact likelihood is available from the generative model, the authors approximate the WAIC score. For GAN models, the authors compute the variance over the discriminators for any given input. They show that this method outperforms ODIN and VIB on image datasets and also achieves comparable performance on Kaggle Credit Fraud dataset.\\n\\nThe paper is overall well-written and easy to follow. I only have a few comments about the work.\\n\\nI think the authors should address the following points in the paper.\\n- What is the size of the ensemble for the experiments?\\n- How does the size of the ensemble influence the measured performance?\\n- It is Fast Gradient Sign Method (FGSM), not FSGM. See [1]. Citing [1] for FGSM would also be appropriate.\\n\\nQuality. The submission is technically sound. The empirical results support the claims, and the authors discuss the failure cases. \\nClarity. The paper is well-written and easy to follow while providing useful insight and connecting previous work to the subject of study.\\nOriginality. To the best my knowledge, the proposed approach is a novel combination of well-known techniques.\\nSignificance. The presented idea improves over the state-of-the-art.\\n\\n\\nReferences\\n[1] I. Goodfellow, J. Shlens, and C. Szegedy, \\u201cExplaining and Harnessing Adversarial Examples,\\u201d in ICLR, 2015.\\n-------------------\\nRevision. The rating revised to 6 after the discussion and rebuttal.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BklUAoAcY7 | Unsupervised Learning of Sentence Representations Using Sequence Consistency | [
"Siddhartha Brahma"
] | Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose ConsSent, a simple yet surprisingly powerful unsupervised method to learn such representations by enforcing consistency constraints on sequences of tokens. We consider two classes of such constraints – sequences that form a sentence and between two sequences that form a sentence when merged. We learn sentence encoders by training them to distinguish between consistent and inconsistent examples, the latter being generated by randomly perturbing consistent examples in six different ways. Extensive evaluation on several transfer learning and linguistic probing tasks shows improved performance over strong unsupervised and supervised baselines, substantially surpassing them in several cases. Our best results are achieved by training sentence encoders in a multitask setting and by an ensemble of encoders trained on the individual tasks. | [
"sentence representation",
"unsupervised learning",
"LSTM"
] | https://openreview.net/pdf?id=BklUAoAcY7 | https://openreview.net/forum?id=BklUAoAcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJxg2eSZxN",
"rJg7d-s_R7",
"ByxnKY1IR7",
"SJgqvKJ8CQ",
"S1gZCvJLAQ",
"SkxJVv1UC7",
"BklExIJ8RX",
"BJg5aByUR7",
"SygFn7yIRm",
"ByeA6nrc3Q",
"H1gNUpxc2X",
"rylUx2jt27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544798375980,
1543184746519,
1543006596452,
1543006562188,
1543006153357,
1543005990739,
1543005675845,
1543005633620,
1543005104594,
1541196998181,
1541176652275,
1541155821666
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper897/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/Authors"
],
[
"ICLR.cc/2019/Conference/Paper897/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper897/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper897/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The overall view of the reviewers is that the paper is not quite good enough as it stands. The reviewers also appreciates the contributions so taking the comments into account and resubmit elsewhere is encouraged.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not quite enough for acceptance\"}",
"{\"title\": \"Revised version uploaded\", \"comment\": \"We have uploaded a revised and improved version of the paper, incorporating most of the suggestions made by the reviewers. We have already given detailed comments below, but to recap, the revised version contains the following changes\\n\\n1. Results for a multitask trained encoder, which gives the best average performance over all the tasks in SentEval for a single model, have been added. \\n\\n2. Results for an ensemble model comprising of six encoders trained on individual tasks have been added. These represent the best results in the paper, comparable to QuickThoughts in the transfer tasks and much better in the linguistic probing tasks.\\n\\n3. Baseline results for an encoder trained on a language model objective on our dataset have been added. The performance of the encoder is poor because the dataset lacks long contexts of ordered sentences. In contrast, our methods can be trained with unordered sentences. \\n\\n4. Additional related work has been added in Section 2, especially on the language model based encoders.\\n \\n5. Section 7 has been condensed to include only one figure showing the average performance of our models over all the 20 tasks in SentEval.\\n\\n6. Several typos have also been corrected.\"}",
"{\"title\": \"Re: The trained encoders are used in a variety of tasks with good performance.\", \"comment\": \"continued from above....\\n\\n4. As mentioned in point (1) above, we have conducted further experiments in a multitask setting, thereby producing a single model with better average performance than any one of the single models. We have also evaluated the performance of an ensemble of the single models. We pick the six best models, one from each training task, based on the average validation performance on all the 20 tasks in SentEval. These are ConsSent-R(2), ConsSent-P(3), ConsSent-I(3), ConsSent-D(5), ConsSent-N(3) and ConSent-C(4). We then create an ensemble model by weighting the predicted probabilities from each model by the normalized validation performance on each of the tasks in SentEval. The performance of the ensemble model on the transfer tasks is the following\\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n81.6 85.1 94.4 90.6 85.2 93.8 77.7 86.8 87.2 77.3 86.0\\n(+1.0) (+0.8) (+0.6) (+0.3) (+1.4) (+1.0) (+0.4) (+0.6) (+2.0) (+1.5) (+1.6) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows improvements in all the tasks, with an average improvement of almost +1.6 points over the best single model ConsSent-N(3). The improvements for MR (+1.0), SST (+1.4), TREC (+1.0), SK-R (+2.0) and STSB (+1.5) over the best score obtained by a single model are particularly significant. \\n\\nThe performance of the ensemble model on the linguistic probing tasks is the following\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n92.4 96.1 57.8 87.0 88.3 91.0 91.5 89.7 64.7 74.8 83.3\\n(+2.9) (+4.0) (+2.9) (+2.1) (+0.7) (+0.9) (+1.5) (+1.2) . (+1.1) (+1.6) (+2.8) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows strong improvements across all the tasks, with an average improvement of +2.8 over the best single model ConsSent-D(5). The improvements for SentLen(+2.9), WC(+4.0), TDepth(+2.9) and TConst(+2.1) over the best score obtained by a single model are particularly significant. \\n\\nThe average performance of the ensemble over all the 20 tasks is 84.7, which is an improvement of +2.8 over the best single model ConsSent-R(2). Interestingly, choosing the ensemble by using the best six models (out of the set of 30 models) based on validation performance gave slightly worse results. This empirically shows that the individual training tasks bias the sentence encoders in slightly different ways, and an ensemble of models can benefit from all of them. \\n\\nWe will add these results in the final version of the paper, to be updated shortly.\"}",
"{\"title\": \"Re: The trained encoders are used in a variety of tasks with good performance.\", \"comment\": \"We thank the reviewer for reading the paper in detail and suggesting several improvements.\\n\\n1. Multitask Training\\n\\nWe have conducted further experiments by training models in a multitask setting. We split the six training tasks into two groups - one containing ConsSent-R(3), ConsSent-P(3), ConsSent-I(3), ConsSent-D(3) and the other containing ConsSent-N(3) and ConSent-C(3). We train two separate BiLSTM-Max encoders, one for each group, in a multitask setting by cycling through the tasks in a round-robin manner. Note that we use different classification layers for each of the tasks in the first group, keeping the sentence encoder LSTM same. The representations from the trained encoders are then concatenated to produce the final sentence representation for testing on SentEval. We use a hidden dimension of 1024 for each BiLSTM, thereby producing 4096 dimensional final sentence representations. \\n\\nThe performance of the multitask trained model on the transfer tasks is the following. \\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n80.2 84.3 94.4 90.4 83.1 93.1 76.7 83.4 86.5 72.2 84.4\\n\\nCompared to the best scores obtained by a single model in each task (see Table 2 in paper), the multitask model gains in some of the tasks e.g. SUBJ (+0.6), TREC (+0.3), SK-R (+1.3) but also suffers a drop in performance in some e.g. SST (-0.7), SK-E (-2.8) and STSB (-3.6). However, on a average, it matches the performance of the best single model ConsSent-N(3) with an average score of 84.4.\\n\\nThe scores obtained for the linguistic probing tasks are as follows.\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n86.8 93.2 50.6 84.2 88.7 89.9 89.0 86.6 62.4 71.5 80.3\\n\\nHere too, the multitask model gains in some cases e.g. WC (+1.1), BShift (+1.1) and suffers in some others e.g. SentLen (-2.7) and CInv (-1.7) over the best score by any single model. The average performance is slightly worse than the best single model on the linguistic probing tasks ConsSent-D(5) which achieves a score of 80.5. \\n\\nHowever, when we consider the average score over all the 20 tasks, the multitask model achieves a better score of 82.4 over the best single model ConsSent-R(2), gaining almost +0.5 points. This shows empirically that a model trained on a combination of tasks proposed in our paper can take advantage of the inductive biases of each of the tasks in a combined model. \\n\\n2. The output of the sentence encoder is the output of the BiLSTMs. We will clarify this in the paper.\\n\\n3. Our goal in training the sentence encoders in the unsupervised tasks is to allow them to learn general sentence representations that can be useful in a wide variety of other tasks. Using nonlinear activations in the classification layers while pertaining unduly biases the model towards the pertaining task, hurting its generalization ability. Such a scheme was also used in training InferSent (Conneau et al. 2017).\\n\\nReferences\\nConneau, Alexis et al. \\u201cSupervised Learning of Universal Sentence Representations from Natural Language Inference Data.\\u201d EMNLP (2017).\"}",
"{\"title\": \"Re: Interesting idea to learn sentence representations\", \"comment\": \"Continuing from above...\\n\\n2. MultiTask training\\n\\nMultitask learning has been shown to give strong results for sentence representation learning in the supervised setting (Subramanian et al. 2018). We split the six training tasks into two groups - one containing ConsSent-R(3), ConsSent-P(3), ConsSent-I(3), ConsSent-D(3) and the other containing ConsSent-N(3) and ConSent-C(3). We train two separate BiLSTM-Max encoders, one for each group, in a multitask setting by cycling through the tasks in a round-robin manner. Note that we use different classification layers for each of the tasks in the first group, keeping the sentence encoder LSTM same. The representations from the trained encoders are then concatenated to produce the final sentence representation for testing on SentEval. We use a hidden dimension of 1024 for each BiLSTM, thereby producing 4096 dimensional final sentence representations. \\n\\nThe performance of the multitask trained model on the transfer tasks is the following. \\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n80.2 84.3 94.4 90.4 83.1 93.1 76.7 83.4 86.5 72.2 84.4\\n\\nCompared to the best scores obtained by a single model in each task (see Table 2 in paper), the multitask model gains in some of the tasks e.g. SUBJ (+0.6), TREC (+0.3), SK-R (+1.3) but also suffers a drop in performance in some e.g. SST (-0.7), SK-E (-2.8) and STSB (-3.6). However, on a average, it matches the performance of the best single model ConsSent-N(3) with an average score of 84.4.\\n\\nThe scores obtained for the linguistic probing tasks are as follows.\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n86.8 93.2 50.6 84.2 88.7 89.9 89.0 86.6 62.4 71.5 80.3\\n\\nHere too, the multitask model gains in some cases e.g. WC (+1.1), BShift (+1.1) and suffers in some others e.g. SentLen (-2.7) and CInv (-1.7) over the best score by any single model. The average performance is slightly worse than the best single model on the linguistic probing tasks ConsSent-D(5) which achieves a score of 80.5. \\n\\nHowever, when we consider the average score over all the 20 tasks, the multitask model achieves a better score of 82.4 over the best single model ConsSent-R(2), gaining almost +0.5 points. This shows empirically that a model trained on a combination of tasks proposed in our paper can take advantage of the inductive biases of each of the tasks in a combined model. \\n\\nTo summarize, both an ensemble of models trained on the six different tasks and a single model trained in a multitask setting show improvements over any single model, the former more strongly so. We will add these results in the updated version of the paper.\"}",
"{\"title\": \"Re: Interesting idea to learn sentence representations\", \"comment\": \"We thank the reviewer for reading the paper carefully and providing with constructive suggestions.\\n\\nComparison with Language Modeling based sentence encoders\\n\\nWe agree that language modeling is a natural unsupervised objective for natural language and large pertained language models can be used as effective sentence encoders. One key advantage of the objectives we propose is that they can be learned using corpora of unordered sentences. For language modeling based sentence encoders to work, it is important to train them on large continuous spans of text straddling several sentences, as mentioned in (Radford et al. 2018). In fact, when we train a baseline LSTM language model (single layer with 4096 hidden dimensions) on the same dataset we used, and evaluated the sentence representations on SentEval, the results (shown below) were significantly worse than any of the models proposed in our paper.\\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n72.1 72.0 87.8 88.1 77.4 75.0 75.4 77.7 70.3 . 54.4 75.0\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n64.2 34.7 31.3 51.1 64.6 87.0 75.2 74.1 54.0 61.4 59.8\\n\\nAlternately, it will be worthwhile to train our models on larger spans of text across multiple sentences from BookCorpus and evaluate them. This will be part of our future work. \\n\\nCombining models and training tasks\\n\\nWe have conducted further experiments on combining the different models and tasks. We do so in two different ways.\\n\\n1. Ensemble of the six best models \\n\\nWe pick the six best models, one from each training task, based on the average validation performance on all the 20 tasks in SentEval. These are ConsSent-R(2), ConsSent-P(3), ConsSent-I(3), ConsSent-D(5), ConsSent-N(3) and ConSent-C(4). We then create an ensemble model by weighting the predicted probabilities from each model by the normalized validation performance on each of the tasks in SentEval. The performance of the ensemble model on the transfer tasks is the following\\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n81.6 85.1 94.4 90.6 85.2 93.8 77.7 86.8 87.2 77.3 86.0\\n(+1.0) (+0.8) (+0.6) (+0.3) (+1.4) (+1.0) (+0.4) (+0.6) (+2.0) (+1.5) (+1.6) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows improvements in all the tasks, with an average improvement of almost +1.6 points over the best single model ConsSent-N(3). The improvements for MR (+1.0), SST (+1.4), TREC (+1.0), SK-R (+2.0) and STSB (+1.5) over the best score obtained by a single model are particularly significant. \\n\\nThe performance of the ensemble model on the linguistic probing tasks is the following\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n92.4 96.1 57.8 87.0 88.3 91.0 91.5 89.7 64.7 74.8 83.3\\n(+2.9) (+4.0) (+2.9) (+2.1) (+0.7) (+0.9) (+1.5) (+1.2) . (+1.1) (+1.6) (+2.8) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows strong improvements across all the tasks, with an average improvement of +2.8 over the best single model ConsSent-D(5). The improvements for SentLen(+2.9), WC(+4.0), TDepth(+2.9) and TConst(+2.1) over the best score obtained by a single model are particularly significant. \\n\\nThe average performance of the ensemble over all the 20 tasks is 84.7, which is an improvement of +2.8 over the best single model ConsSent-R(2). 
Interestingly, choosing the ensemble by using the best six models (out of the set of 30 models) based on validation performance gave slightly worse results. This empirically shows that the individual training tasks bias the sentence encoders in slightly different ways, and an ensemble of models can benefit from all of them.\"}",
"{\"title\": \"Re: Simple method for learning sentence representations, with competitive results\", \"comment\": \"Continuing from previous comment...\\n\\n2. MultiTask learning\\n\\nMultitask learning has been shown to give strong results for sentence representation learning in the supervised setting (Subramanian et al. 2018). We split the six training tasks into two groups - one containing ConsSent-R(3), ConsSent-P(3), ConsSent-I(3), ConsSent-D(3) and the other containing ConsSent-N(3) and ConSent-C(3). We train two separate BiLSTM-Max encoders, one for each group, in a multitask setting by cycling through the tasks in a round-robin manner. Note that we use different classification layers for each of the tasks in the first group, keeping the sentence encoder LSTM same. The representations from the trained encoders are then concatenated to produce the final sentence representation for testing on SentEval. We use a hidden dimension of 1024 for each BiLSTM, thereby producing 4096 dimensional final sentence representations. \\n\\nThe performance of the multitask trained model on the transfer tasks is the following. \\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n80.2 84.3 94.4 90.4 83.1 93.1 76.7 83.4 86.5 72.2 84.4\\n\\nCompared to the best scores obtained by a single model in each task (see Table 2 in paper), the multitask model gains in some of the tasks e.g. SUBJ (+0.6), TREC (+0.3), SK-R (+1.3) but also suffers a drop in performance in some e.g. SST (-0.7), SK-E (-2.8) and STSB (-3.6). However, on a average, it matches the performance of the best single model ConsSent-N(3) with an average score of 84.4.\\n\\n\\nThe scores obtained for the linguistic probing tasks are as follows.\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n86.8 93.2 50.6 84.2 88.7 89.9 89.0 86.6 62.4 71.5 80.3\\n\\nHere too, the multitask model gains in some cases e.g. WC (+1.1), BShift (+1.1) and suffers in some others e.g. SentLen (-2.7) and CInv (-1.7) over the best score by any single model. The average performance is slightly worse than the best single model on the linguistic probing tasks ConsSent-D(5) which achieves a score of 80.5. \\n\\nHowever, when we consider the average score over all the 20 tasks, the multitask model achieves a better score of 82.4 over the best single model ConsSent-R(2), gaining almost +0.5 points. This shows empirically that a model trained on a combination of tasks proposed in our paper can take advantage of the inductive biases of each of the tasks in a combined model. \\n\\nTo summarize, both an ensemble of models trained on the six different tasks and a single model trained in a multitask setting show improvements over any single model, the former more strongly so. \\n\\n\\n3. Comparison with QuickThoughts\\n\\nThere are some differences between our approach and QuickThoughts. Our approach only requires a set of unordered sentences, while QuickThoughts requires ordered sentences and this is crucial for its training. Further, our results were obtained by training on about 15M sentences, while QuickThoughts was trained using much larger corpora - 45M sentences in BookCorpus and 126M in UMBC. We use a final sentence representation of 4096 dimensions while QuickThoughts uses 4800 dimensions. It will be worthwhile to train our models on the BookCorpus+UMBC dataset and compare performance, which will be part of our future work. 
\\n\\nAs suggested by the reviewer, we also took the trained models made available by the authors of QuickThoughts (at https://github.com/lajanugen/S2V) and computed its performance on the linguistic probing tasks. We followed the same protocol that was used to evaluate our models. \\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n90.6 90.3 40.2 80.7 56.8 86.2 83.0 79.7 55.3 70.0 73.3\\n(+1.1) (+2.1) (-14.7) (-4.2) (-27.7) (-3.8) (-7.0) (-8.8) (-6.2) (-3.2) (-7.2). (<-- QuickThoughts as compared to ConsSent-D(5))\\n\\nSomewhat surprisingly, the average performance was about 73.3, comparable to SkipThought (73.0), but significantly worse than ConsSent-D(5) which has an average score of 80.5. Thus, although QuickThoughts achieves better performance on the transfer tasks, the sentence representations learned by it captures much less linguistic information. \\n\\nWe will add these results in the updated version of the paper, which will be posted shortly.\\n\\nReferences\\n\\nSubramanian, S., Trischler, A., Bengio, Y., & Pal, C.J. (2018). Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. CoRR, abs/1804.00079.\"}",
"{\"title\": \"Re: Simple method for learning sentence representations, with competitive results\", \"comment\": \"We thank the reviewer for a careful reading of our paper and the detailed comments. We have conducted further experiments on combining the different models and tasks. We do so in two different ways.\\n\\n1. Ensemble of six models, one from each task\\n\\nWe pick the six best models, one from each training task, based on the average validation performance on all the 20 tasks in SentEval. These are ConsSent-R(2), ConsSent-P(3), ConsSent-I(3), ConsSent-D(5), ConsSent-N(3) and ConSent-C(4). We then create an ensemble model by weighting the predicted probabilities from each model by the normalized validation performance on each of the tasks in SentEval. The performance of the ensemble model on the transfer tasks is the following\\n\\nMR CR SUBJ MPQA SST TREC MRPC SK-E SK-R STSB AVG\\n81.6 85.1 94.4 90.6 85.2 93.8 77.7 86.8 87.2 77.3 86.0\\n(+1.0) (+0.8) (+0.6) (+0.3) (+1.4) (+1.0) (+0.4) (+0.6) (+2.0) (+1.5) (+1.6) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows improvements in all the tasks, with an average improvement of almost +1.6 points over the best single model ConsSent-N(3). The improvements for MR (+1.0), SST (+1.4), TREC (+1.0), SK-R (+2.0) and STSB (+1.5) over the best score obtained by a single model are particularly significant. \\n\\nThe performance of the ensemble model on the linguistic probing tasks is the following\\n\\nSentLen WC TDepth TConst BShift Tense SNum ONum SOMO CInv AVG\\n92.4 96.1 57.8 87.0 88.3 91.0 91.5 89.7 64.7 74.8 83.3\\n(+2.9) (+4.0) (+2.9) (+2.1) (+0.7) (+0.9) (+1.5) (+1.2) . (+1.1) (+1.6) (+2.8) (<-- improvement over best score by any single model. See Table 2 in paper.)\\n\\nThe ensemble model shows strong improvements across all the tasks, with an average improvement of +2.8 over the best single model ConsSent-D(5). The improvements for SentLen(+2.9), WC(+4.0), TDepth(+2.9) and TConst(+2.1) over the best score obtained by a single model are particularly significant. \\n\\nThe average performance of the ensemble over all the 20 tasks is 84.7, which is an improvement of +2.8 over the best single model ConsSent-R(2). Interestingly, choosing the ensemble by using the best six models (out of the set of 30 models) based on validation performance gave slightly worse results. This empirically shows that the individual training tasks bias the sentence encoders in slightly different ways, and an ensemble of models can benefit from all of them.\"}",
"{\"title\": \"Re: Authors: Please provide feedback\", \"comment\": \"We will be posting the feedback shortly in the next few minutes.\"}",
"{\"title\": \"Simple method for learning sentence representations, with competitive results\", \"review\": \"== Clarity == \\nThe primary strength of this paper is the simplicity of the approach.\\n\\nMain idea #1: corrupt sentences (via random insertions/deletions/permutations), and train a sentence encoder to determine whether a sentence has been corrupted or not.\\n\\nMain idea #2: split a sentence into two parts (two different ways to do this were proposed). Train a sequence encoder to encode each part such that we can tell whether the two parts came from the same sentence or not.\\n\\nI can see that this would be very easy for others to implement, perhaps encouraging its adoption.\\n\\n== Quality of results ==\\nThe proposed approach is evaluated on the well-known SentEval benchmark.\\n\\nIt generally does not outperform supervised approaches such as InferSent and MultiTask. However, this is fine because the proposed approach uses no supervised data, and can be applied in domains/languages where supervised data is not available.\\n\\nThe approach is competitive with existing state-of-the-art sentence representations such as QuickThoughts. However, it is not definitively better:\\n\\nOut of the 9 tasks with results for QuickThoughts, this approach (ConsSent) performs better on 3 (MPQA +0.1%, TREC +0.4%, MRPC +0.4%). For the other 6 tasks, ConsSent performs worse (MR -1.8%, CR -1.7%, SUBJ -1%, SST -3.8%, SK-R, -2.4%). Taken together, the losses seem to be larger than the gains.\\n\\nFurthermore, the QuickThoughts results were obtained with a single model across all SentEval tasks. In contrast, the ConsSent approach requires a different hyperparameter setting for each task in order to achieve comparable results -- there is no single hyperparameter setting that would give state-of-the-art results across all tasks.\\n\\nThe authors also evaluate on the newly-released linguistic probing tasks in SentEval. They strongly outperform several existing methods on this benchmark. However, it is unclear why they did not compare against QuickThoughts, which was the strongest baseline on the original SentEval tasks.\\n\\n== Originality ==\\nThe proposed approach is simple and straightforward. This is on the whole a great thing, but perhaps not especially surprising from an originality/novelty perspective.\\n\\nTherefore, the significance and impact of this approach really needs to be carried by the quality of the empirical results.\\n\\nThe sentence pair based approaches (ConsSent-N and C) are conceptually interesting, but don't seem to be responsible for the best results on the linguistic probing tasks.\\n\\n== Conclusion ==\", \"pros\": [\"conceptual simplicity\", \"competitive results (better than many previous unsup. sentence representation methods, excluding QuickThoughts)\", \"strong results on SentEval's linguistic probing task\"], \"cons\": [\"no single hyperparameter value (perturbation method and value for k) gets great results across all tasks\", \"some important baselines possibly missing for linguistic probing tasks\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea to learn sentence representations\", \"review\": \"This submission presents a model for self-supervised learning of sentence representations. The core idea is to train a sentence encoder to predict sequence consistency. Sentences from a text corpus are considered consistent (positive examples), while simple editions of these make the negative samples. Six different ways to edit the sequence are proposed. The network is trained to solve this binary classification task, separately for all six possible editions.\\nThe proposed approach is evaluated on SentEval giving encouraging results.\\n\\n+ The proposed approach is interesting. It is similar in some sense to the self-supervised representation learning literature in computer vision, where the network is trained to say- predict the rotation applied to the image.\\n\\n- If one considers that sentence encoders can be trained using a pretext task, this paper lacks a very-simple-yet-hard-to-beat baseline. Unlike for images, natural language has a very natural self-supervised task: language modeling. Results reported for language-modeling-based sentence representations outperform results reported in the tables by a big margin. Here is at least one paper that would be worth mentioning:\\n- Radford, Alec, Rafal Jozefowicz, and Ilya Sutskever. \\\"Learning to generate reviews and discovering sentiment.\\\" arXiv preprint arXiv:1704.01444 (2017). \\nIn order to make things comparable, it would be good to provide reference numbers for an LSTM trained with a LM objective on the same data as the experiments in this paper.\\n\\n- If I understood correctly, all variants are trained separately (for each of the 6 different ways to edit the sequence). This makes the reading of the results very hard. Table 2 should not contain all possible variants, but one single solution that works best according to some criterion. \\nTo this end, why would these models be trained separately? First of all, the main result could be an ensemble of all 6, or the model could be made multi-class, or even multi-label, capable of predicting all variants in a single task.\\n\\nOverall, I think that this paper proposes an interesting alternative for training sentence representations. However, the execution of the paper lacks in several respects outlines above. Therefore, I lean towards rejection, and await the other reviews, comments and answer from the authors to make my final decision.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper presents an unsupervised sentence encoding method trained to classify consistent (original) and inconsistent (corrupted) sentences. The trained encoders are used in a variety of tasks with good performance.\", \"review\": \"The paper presents an unsupervised sentence encoding method based on automatically generating inconsistent sentences by applying various transformations either to a single sentence or a pair and then training a model to classify the original sentences from the transformed ones.\\n\\nOverall, I like the paper as it presents a simple method for training unsupervised sentence models which then can be used as part of further NLP tasks.\", \"a_few_comments_on_the_method_and_results\": [\"The results on Table 2 shows that supervised methods outperform unsupervised methods as well as the consistency based models with MultiTask having the largest margin. It would've been interesting to experiment with training multi-task layers on top of the sentence encoder and see how it would've performed.\", \"The detail of the architecture is slightly missing in a sense that it's not directly clear from the text if the output of the BiLSTMs is the final sentence encoding or the final layer before softmax?\", \"Also I would've thought that the output of LSTMs passed through nonlinear dense layers but the text refers to two linear layers.\", \"When I first read the paper, my eyes were looking for the result when you combine all of the transformations and train a single model :) - any reason why you didn't try this experiment?\", \"The paper is missing comparison and reference to recent works on universal language models (e.g. Radford et al 2018, Peters et al 2018, Howard et al 2018) as they rely on more elaborate model architectures and training compared to this paper but ultimately you can use them as sentence encoders.\", \"One final note, which could be a subsequent paper is to treat these transformations as part of an adversarial setup to further increase the robustness of a language model such as those mentioned previously.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJeUAj05tQ | DADAM: A consensus-based distributed adaptive gradient method for online optimization | [
"Parvin Nazari",
"Davoud Ataee Tarzanagh",
"George Michailidis"
] | Online and stochastic optimization methods such as SGD, ADAGRAD and ADAM are key algorithms in solving large-scale machine learning problems including deep learning. A number of schemes that are based on communications of nodes with a central server have been recently proposed in the literature to parallelize them. A bottleneck of such centralized algorithms lies on the high communication cost incurred by the central node. In this paper, we present a new consensus-based distributed adaptive moment estimation method (DADAM) for online optimization over a decentralized network that enables data parallelization, as well as decentralized computation. Such a framework note only can be extremely useful for learning agents with access to only local data in a communication constrained environment, but as shown in this work also outperform centralized adaptive algorithms such as ADAM for certain realistic classes of loss functions. We analyze the convergence properties of the proposed algorithm and provide a \textit{dynamic regret} bound on the convergence rate of adaptive moment estimation methods in both stochastic and deterministic settings. Empirical results demonstrate that DADAM works well in practice and compares favorably to competing online optimization methods. | [
"dadam",
"distributed adaptive gradient",
"adam",
"online optimization dadam",
"online optimization online",
"stochastic optimization methods",
"sgd",
"adagrad",
"key algorithms",
"machine"
] | https://openreview.net/pdf?id=SJeUAj05tQ | https://openreview.net/forum?id=SJeUAj05tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gtCf2pQV",
"H1x6qqZtx4",
"Ske0nAkFe4",
"BkxHPk-Ig4",
"HylSyjA0A7",
"rkxtNFr90X",
"B1lJXkO8CX",
"ryxfxIFE0X",
"Skl0E4qGCQ",
"H1e-5sxy0Q",
"r1xPATVAam",
"HygPwzZapm",
"HJxViZZa6X",
"BJggSWZaTQ",
"ryeU9QY627",
"HJldj5nM2m",
"HJgQoTS-2m"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548759761500,
1545308820808,
1545301686258,
1545109340922,
1543592669401,
1543293232895,
1543040791190,
1542915562345,
1542788150008,
1542552456898,
1542503887068,
1542423135389,
1542422940509,
1542422840449,
1541407630412,
1540700832108,
1540607387475
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/Authors"
],
[
"ICLR.cc/2019/Conference/Paper896/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper896/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper896/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"We have taken the feedback seriously and improved the paper substantially; see https://arxiv.org/pdf/1901.09109.pdf\", \"the_employed_data_sets_and_software_code_are_available_at\": \"https://github.com/Tarzanagh/DADAM\"}",
"{\"title\": \"Response to Reviewer~3\", \"comment\": \"[Comment] \\nIt's not immediately clear to me why that term can be bounded without the \\\\log(T) term. If this can be done, why not just present the improved result?\\n\\nI don't think adding the statement about the static setting at the end of the remark as is done now is very helpful to the reader. If you want to say that DADAM is better in certain specific settings (e.g. static), then you should restate the entire remark more precisely.\\n\\n [Response]\\n\\nThe main benefit in ADAM-type methods comes in terms of data sparsity as shown in Theorem~4. However, similar to AMSGRAD and ADAM, DADAM also has a regret bounded by $G_{\\\\infty} \\\\sqrt{T}$. \\n\\nLet $\\\\|g_{i,t}\\\\|_{\\\\infty} \\\\leq G_{\\\\infty} $. Then, the term $\\\\sum_{t=1}^T |g_{i,t,d}|/\\\\sqrt{t}$ in the proof of Lemma~13 can be bounded as follows: \\n\\n$$\\\\sum_{t=1}^T |g_{i,t,d}|/\\\\sqrt{t} \\n\\\\leq \\\\sum_{t=1}^T G_{\\\\infty} /\\\\sqrt{t}\\n\\\\leq G_{\\\\infty} \\\\int_{t=1}^{T} 1/\\\\sqrt{t}\\n\\\\leq G_{\\\\infty} \\\\sqrt{T}.$$ \\n\\nThus, the upper-bound on DADAM's regret is a minimum between the one in $O(G_{\\\\infty} \\\\sqrt{T})$ and the one of Theorem~4. \\n\\n-- Please refer to Remark~7 on page 6.\"}",
"{\"title\": \"Response to Reviewer~3\", \"comment\": \"Again, thank you for your valuable feedback.\\n\\n\\nComments 1-1, 1-2 and 1-3) [Design of mixing matrix $W$]\\n\\nThere are several designs for network matrix $W$ in [BPX04], [TLR12] and [SLWY15]. In earlier papers [TLR12,N015], the role of network constraints on the consensus-based distributed optimization has been analyzed. They provided a unified view of how the network affects both the speed of convergence as well as the solution to which the algorithm converge. However, to the best of our knowledge, there is no general rule for determining the best $W$ in decentralized consensus optimization problems (see, Section IV-B in [TLR12]). We consider the Metropolis constant edge weight matrix [BPX04] here since it is easy to implement and has good performance in general [JXM14,SLWY15].\\n\\n\\n-[BPX04] Boyd, S., Diaconis, P., & Xiao, L. (2004). Fastest mixing Markov chain on a graph. SIAM review, 46(4), 667-689.\\n\\n-[TLR12] Tsianos, K. I., Lawlor, S., & Rabbat, M. G. (2012, October). Consensus-based distributed optimization: Practical issues and applications in large-scale machine learning. In Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on (pp. 1543-1550). IEEE.\\n\\n-[NO15] Nedi\\u0107, A., & Olshevsky, A. (2015). Distributed optimization over time-varying directed graphs. IEEE Transactions on Automatic Control, 60(3), 601-615.\\n\\n-[SLWY15] Shi, W., Ling, Q., Wu, G., & Yin, W. (2015). Extra: An exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 25(2), 944-966.\"}",
"{\"metareview\": \"The paper provides a distributed optimization method, applicable to decentralized computation while retaining provable guarantees. This was a borderline paper and a difficult decision.\\n\\nThe proposed algorithm is straightforward (a compliment), showing how adaptive optimization algorithms can still be coordinated in a distributed fashion. The theoretical analysis is interesting, but additional assumptions about the mixing are needed to reach clear conclusions: for example, additional assumptions are required to demonstrate potential advantages over non-distributed adaptive optimization algorithms.\\n\\nThe initial version of the paper was unfortunately sloppy, with numerous typographical errors. More importantly, some key relevant literature was not cited:\\n- Duchi, John C., Alekh Agarwal, and Martin J. Wainwright. \\\"Dual averaging for distributed optimization: Convergence analysis and network scaling.\\\" IEEE Transactions on Automatic control 57.3 (2012): 592-606.\\nIn addition to citing this work, this and the other related works need to be discussed in relation to the proposed approach earlier in the paper, as suggested by Reviewer 3.\\n\\nThere was disagreement between the reviewers in the assessment of this paper. Generally the dissenting reviewer produced the highest quality assessment. This paper is on the borderline, however given the criticisms raised it would benefit from additional theoretical strengthening, improved experimental reporting, and better framing with respect to the existing literature.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper: distributed optimization algorithm with analysis\"}",
"{\"comment\": \"Dear Authors:\\n Thank you for the explanation. I need to check the proof carefully. Anyway, this is an interesting paper and hope you can get accepted.\\n Sincerely yours\", \"title\": \"Thank you for the explanation\"}",
"{\"title\": \"Response to Reviewer~1\", \"comment\": \"We thank the reviewer for the helpful and supportive feedback. A detailed point-by-point response to the reviewer's comments follows.\\n\\n1-1 [Comment]\\n\\n-- Corollary 10 shows better performance of DADAM. Besides the detailed derivations, can the authors intuitively explain the key setup which leads to this better performance?\\n\\n1-1 [Response] \\n\\nThe key setup which leads to this regret bound is that we do not use the boundedness assumption for domain or gradient. These assumptions may simplify the proof but lose some sophisticated structures in the distributed optimization problems. Further, the advantage of DADAM over centralized parallel gradient methods is to avoid the communication traffic jam. More specifically, the communication cost for each node of DADAM is O(the degree of the graph) which could be much smaller than $O(n)$ for centralized gradient-based methods . \\n\\n\\nRefer to Paragraph~2 on page 8. \\n\\n\\n1-2 [Comment]\\n\\n-- The experimental results are mainly based on sigmoid loss with simple constraints. The results will be more convincing if the authors can provide studies on more complex objective, for example, regularized loss with both L2 and L1 bounded constraints. \\n\\n1-2 [Response]\\n\\n-- We have provided a detailed implementation for different choices of regularized loss with both L2 and L1 bounded constraints.\\n\\nRefer to Equation ~18 on page 9.\\nRefer to Figure~1 on page 10. \\n\\n\\n1-3 [Comment]\\n\\n-- Th experimental results in Section 5.1 is based on \\\\beta_1 = \\\\beta_2 = \\\\beta_3 = 0.9. From the expression of \\\\hat v_{i,t} in Section 2, this setting implies the most recent v_{i,t} plays a more important role than the historical maximum, hence ADAM is better than AMSGrad. I am curious what the results will look like if we set \\\\beta_3 as a value smaller than 0.5. \\n\\n1-3 [Response]\\n\\n-- In Appendix, we examine the sensitivity of DADAM on the parameters related to the network connection and update of the moment estimate. We consider a range of hyperparameter choices, i.e. $\\\\beta_3 \\\\in {0,0.9,0.99}$. From Figure 4 it can be easily seen that DADAM performs equal or better than AMSGrad $(\\\\beta_3 = 0)$, regardless of the hyper-parameter setting for $\\\\beta_1$ and $ \\\\beta_2$.\\n\\nRefer to Figures~3 and 4 on page 29.\"}",
"{\"title\": \"Convergence Rate of DADAM for Non-Smooth Objectives\", \"comment\": \"Thanks for the interest in our paper and looking into the analysis carefully.\\n\\n-- Performance of SGD is best judged by its sample complexity which is related to the regularity of objective $F(x)$. For convex objective $F(x)$ the stochastic (sub)-gradient method [C85] attains expected functional accuracy $\\\\epsilon$ with after $O(\\\\epsilon^{-2})$ stochastic sub-gradient evaluations . However, for non-convex non-smooth problems, the situation is less clear. The challenge in establishing a sample complexity for non-smooth non-convex sub-gradient-based methods is that the \\u201c convergence criteria,\\u201d namely the objective error $F(x_t) - \\\\inf F$ and the norm of the subgradient can be completely meaningless . Indeed, one cannot expect $F(x_t) - \\\\inf F$ to tend to zero---even in the smooth setting. Also, simple examples, e.g., $F(x) = |x|$, show that $\\\\dist(0, \\\\partial F(x_t))$ can be strictly bounded below by a fixed constant for all iterations. \\n\\n-- In contrast to subgradient-based methods, the ``\\\"convergence criteria\\\" is meaningful for the \\\\emph{proximal sub-gradient methods}~[R76], which constructs $x_{t+1}$ by approximately minimizing the subproblem\\n$\\n \\\\min_{x \\\\in \\\\mathbb{R}^p} \\\\left\\\\{ F(x) + \\\\frac{1}{2c_t}\\\\|x - x_t\\\\|^2\\\\right\\\\},\\n$\\nwhere $c_t$ is a control parameter. Indeed, it is easy to show that under minimal assumptions on $F$, the subdifferential distance $\\\\dist(0, \\\\partial F(x_{t}))$ tends to zero (please see Theorem~1 in [R76]). \\n\\n-- In this paper, we provide the complexity guarantees for an adaptive distributed gradient-based method for a general class of smooth losses in online and stochastic settings. However, the guarantees in this paper apply to the non-smooth settings by using a proximal point scheme similar to [R76, DD18] that may be summarized as follows\\n$\\nx_{t+1} = \\\\argmin_{x \\\\in \\\\mathbb{R}^p} \\\\left\\\\{ \\\\frac{1}{n}\\\\sum_{t=1}^T\\\\sum_{i=1}^n f_{i,t}(x) + \\\\frac{1}{2c_t}\\\\|x - x_t\\\\|^2\\\\right\\\\} .\\n$\\nwhere $c_t$ is a control parameter. \\n\\n---- [C85] Blair, Charles. \\\"Problem complexity and method efficiency in optimization (as nemirovsky and db yudin).\\\" SIAM Review 27.2 (1985): 264.\\n\\n---- [R76] Rockafellar, R. Tyrrell. \\\"Monotone operators and the proximal point algorithm.\\\" SIAM journal on control and optimization 14.5 (1976): 877-898.\\n\\n---- [ DD18] Davis, Damek, and Dmitriy Drusvyatskiy. \\\"Stochastic model-based minimization of weakly convex functions.\\\" arXiv preprint arXiv:1803.06523 (2018).\"}",
"{\"title\": \"Comments in response to author response\", \"comment\": \"Thank you for taking the time to respond to my comments as well as for the revisions to the paper.\\n\\nHere are some further comments in response to each of your responses.\\n\\n1-1) When I said that the actual method wasn't motivated very well, I was referring mostly to the idea of using the mixing matrix W and how it should be specified in practice.\\n\\nOn a separate note, I wasn't aware of [DAW12], and I'm surprised that it's not referenced in your paper. In general, I think it would be helper to the reader if you made it more clear what the contribution of your paper is with respect to existing work on decentralized optimization algorithms over networks. For instance, why aren't [DAW12], [07XBK], and [04XB] discussed in the introduction or in Section 1.1? You also spend a lot of time trying to compare the bounds to centralized adaptive methods but provide any discussion on your work in relation to decentralized non-adaptive methods.\\n\\n1-2) Restricting the mixing matrix W to be symmetric and doubly stochastic still leaves one with a very large family of choices. It's fine to say that optimizing for W isn't the main focus of this work, but its specification is crucial for the performance of the algorithm and its guarantees, so it is important to specify certain choices (which you do with the Metropolis constant edge weight matrix), motivate them (which you don't do), as well as clearly describe their impact on the algorithm's performance (which you also don't do).\\n\\n1-3) Saying that \\\\sigma_2(W) is strictly less than one is not enough, because it still leaves room for 1- \\\\sigma_2(W) to be arbitrarily small, which can make the bounds arbitrarily large (and therefore meaningless).\\n\\nI inspected the revised bound, and it seems more like a restatement than an improvement. In particular, saying that 1-\\\\sigma_2(W) doesn't appear in the regret bound if T is sufficiently large is misleading, because this can require T to be arbitrarily large.\\n\\n1-4) That's good.\\n\\n1-5) It's good that this is now included in the revised version.\\n\\n1-6) It's not immediately clear to me why that term can be bounded without the \\\\log(T) term. If this can be done, why not just present the improved result?\\n\\nI don't think adding the statement about the static setting at the end of the remark as is done now is very helpful to the reader. If you want to say that DADAM is better in certain specific settings (e.g. static), then you should restate the entire remark more precisely.\\n\\n1-8) I still don't think I see any error bars in Figure 1. I do see the discussion on hyperparameters, which I think will be helpful to the readers.\"}",
"{\"comment\": \"Dear Authors:\\n Thank you for the explanation. You propose a general online optimization method, but can you prove that it converges to a critical point in a deep neural network problem? Notice that the objective function may be non-differentiable(like relu). Thank you.\\n Sincerely yours\", \"title\": \"For a general deep neural network, can DADAM converge to a critical point\"}",
"{\"title\": \"Convergence Rate of DADAM for Non-Convex Objectives\", \"comment\": \"Thank you for your interest in the paper.\\n\\nLet $f$ be real-valued, continuously differentiable (possibly nonconvex) function on a closed, convex set $\\\\mathcal{X}$. The projected gradient $G_{\\\\mathcal{X}}(x,f,\\\\alpha)$ can be used to characterize stationary points because if $\\\\mathcal{X}$ is a convex set, then $ x \\\\in \\\\mathcal{X}$ is a stationary point or critical point of continuously differentiable function $f$ if and only if $G_{\\\\mathcal{X}}(x,f,\\\\alpha) =0 $ [87CM, ABS13]. In general, $G_{\\\\mathcal{X}}(x,f,\\\\alpha)$ is discontinuous, but as proved by Calamai and More [87CM], if $f$ is continuously differentiable on $\\\\mathcal{X}$, then the mapping $x \\\\rightarrow \\\\|G_{\\\\mathcal{X}}(x,f,\\\\alpha)\\\\|$ is lower semicontinuous on $\\\\mathcal{X}$. This property implies that if ${x_t}$ is a sequence in $\\\\mathcal{X}$ that converges to $x^*$, and if $G_{\\\\mathcal{X}}(x_t,f,\\\\alpha)$ converges to zero, then $x^*$ is a stationary point of problem \\\\ref{125}. Motivated by [87CM,ABS13,HSZ17], we monitor convergence to a stationary point using the \\\\textit{local regret} which is an extension of projected gradients to the online distributed settings (please see Definition~1). \\n\\n--In Theorem~7, we analyze the convergence of DADAM for general Lipschitz and smooth (possibly non-convex) loss function using the local regret and show that the online distributed algorithms converge even when the loss is non-convex, i.e., the algorithms find a stationary point to the time-varying loss at a rate of $\\\\tilde{O}(\\\\frac{1}{T})$. \\n\\n--In Theorem 9, we extend this result to the stochastic nonconvex settings when noisy gradients are accessible to the agents. Finally, in Corollary~10, we show the potential advantage of DADAM over adaptive algorithms such as ADAM, ADAGRAD and RMSProp for solving stochastic nonconvex optimization problems. More specifically, our theoretical results show that DADAM can be faster than adaptive algorithms for finding stationary points of stochastic non convex problems when $T$ is sufficiently large.\\n\\n \\n---- [87CM] Calamai, Paul H., and Jorge J. Mor\\u00e9. \\\"Projected gradient methods for linearly constrained problems.\\\" Mathematical programming 39.1 (1987): 93-116. \\n\\n---- [ABS13] Attouch, H., Bolte, J., & Svaiter, B. F. (2013). Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward\\u2013backward splitting, and regularized Gauss\\u2013Seidel methods. Mathematical Programming, 137(1-2), 91-129.\\n\\n---- [HSZ17] Hazan, Elad, Karan Singh, and Cyril Zhang. \\\"Efficient Regret Minimization in Non-Convex Games.\\\" arXiv preprint arXiv:1708.00075 (2017).\"}",
"{\"comment\": \"Dear Authors:\\nI appreciate the interesting work authors present in this paper. One question is about the convergence of DADAM on the nonconvex case. Can DADAM converge to a critical point? Thank you.\", \"title\": \"An Interesting Optimization Problem\"}",
"{\"title\": \"Response to Reviewer~2\", \"comment\": \"We thank the reviewer for very helpful and constructive feedback. A detailed point-by-point response to the reviewer's comments follows. \\n\\n\\n1-1 [Comment] \\n\\n-- Could you please explain the implication to equation (7a)? Does it have absolute value on the LHS? \\n\\n1-1 [Response] \\n\\n-- Modified and Fixed.\\n\\n\\n\\n1-2 [Comment] \\n\\n-- Can you explain more clearly about the section 3.2.1? It is not clear to me why DADAM outperform ADAM here. \\n\\n1-2 [Response] \\n\\n-- In this section, we address the question of whether DADAM be faster than ADAM (which is a centralized adaptive algorithm)? We provide the analysis for the convergence rate of the stochastic DADAM in the non-convex setting and show that the convergence rate of DADAM w.r.t time steps is similar to the mini-batch SGD, mini-batch ADAM, centralized parallel ADAM and parallel stochastic gradient descent, but DADAM avoids the communication traffic jam due to its locally distributed nature. \\n\\n- In the context of stochastic nonconvex optimization, we say a gradient-based method gives an $\\\\epsilon$-approximation solution if $$T^{-1}\\\\e{{\\\\bf Reg}^N_T} \\\\leq \\\\epsilon$$ where ${\\\\bf Reg}^N_T$ is defined in Section~2 (please see Definition~2). Now, assume that $T$ is sufficiently large, it satisfies (16a) and (16b), the $\\\\frac{1}{T}$ term will be dominated by the $\\\\frac{1}{\\\\sqrt{nT}}$ term which leads to a $\\\\frac{1}{\\\\sqrt{nT}}$ convergence rate. More specifically, it shows that the computation complexity of DADAM to achieve $\\\\epsilon$-approximation solution is $O(1/\\\\epsilon^2)$. It is worth to mention that the computational complexity per iteration of DADAM is $O(n)$ since the computation of a single stochastic gradient counts 1. Further, since the total number of nodes does not affect the complexity, each node exhibits complexity of $O\\\\big(1/(n\\\\epsilon^2)\\\\big)$. \\n\\nIn summary, a linear speed up can be achieved by DADAM w.r.t computational complexity if $T$ is sufficiently large. \\n\\n\\nRefer to Subsection~3.2.1 on page 8. \\n\\n1-3 [Comment] \\n\\nDid you perform algorithms on many runs and take the average? Also, did you tune the learning rate for all other algorithms to be the best performance? I am not sure how you choose the parameter $\\\\alpha$ here. What if $\\\\alpha$ changes and do not base on that in Yuan et al. 2016?\\n \\n1-3 [Response] \\n\\n-- The experiment is repeated ten times and the average residuals are considered for comparison purposes. \\n\\n-- In [16KLY] and [18JY], fast ergodic convergence rate of DGD was established assuming $T$ is sufficiently large and the step sizes are $\\\\alpha= \\\\frac{1+ \\\\sigma_n}{\\\\rho}$ and $\\\\alpha= \\\\frac{\\\\sigma_n}{\\\\rho}$ for convex and nonconvex objectives, respectively. Our numerical results show efficiency of adaptive algorithms by choosing these parameters. It is worth to mention that recommended $alpha$ for adaptive gradient methods such as ADAM is equal to $0.001$ but this is not optimal for decentralized gradient methods. \\n\\n[18JY] Zeng Jinshan, and Wotao Yin. \\\"On nonconvex decentralized gradient descent.\\\" IEEE Transactions on Signal Processing 66.11 (2018): 2834-2848. \\n\\n[16KLY] Yuan, Kun, Qing Ling, and Wotao Yin. \\\"On the convergence of decentralized gradient descent.\\\" SIAM Journal on Optimization 26.3 (2016): 1835-1854.\\n\\nRefer to Appendix on page 29.\"}",
"{\"title\": \"Response to Reviewer~3\", \"comment\": \"We appreciate the reviewer's constructive comments and suggestions.\\nWe have carefully addressed them in the revised version of the\\npaper and also focused on improving the presentation of the material. \\nA detailed point-by-point response to the reviewer's comments follows.\\n\\n1-1 [Comment]\\n\\n--I didn't find the actual method presented by the authors to be motivated very well. \\n\\n1-1 [Response]\\n\\n--Existing distributed stochastic and adaptive gradient methods for various learning problems, including deep learning, are mostly designed for a network topology with a central node. The main bottleneck of such a topology lies on the communication overload on the central node, since all nodes need to concurrently communicate with it.\\nHence, performance can be significantly degraded when network bandwidth is limited. These considerations motivate us to study an adaptive algorithm for network topologies, where all nodes can only communicate with their neighbors and none of the nodes is designated as ``central\\\". Therefore, the proposed method is suitable for large scale machine learning problems, since it enables both data parallelization and decentralized computation. \\n\\n-- Further, we show that our proposed adaptive distributed algorithm can be faster than its centralized counterpart such as ADAM, ADAGRAD and RMSProp. \\n \\nRefer to Subsection 1.1 on page 2.\\n\\n\\n1-2 [Comment]\\n\\n-- The main innovation with respect to the standard Adam/AMSGrad algorithm is the use of a mixing matrix $W$, but the authors do not discuss how the choice of this matrix influences the performance of the algorithm or how one should specify this input in practice. This seems like an important issue, especially since all of the bounds depend on the second singular value of this matrix. \\n\\n1-2 [Response] \\n\\n-- We assume that the mixing matrix W is symmetric and doubly stochastic (see, equation~1). As mentioned in the Introduction, we consider Metropolis constant edge weight matrix [04XBK, 07XB] (please see, subsection 1.2). When $\\\\hat{W}$ is chosen according to this scheme, $ W = \\\\frac{I+\\\\hat{W}}{2}$ is found to be very efficient [04XBK]. Also, this doubly stochastic matrix implies uniqueness of $\\\\sigma_1(W) = 1$ and warrants that other singular values of $W$ are strictly less than one in magnitude. \\n\\n-- It is worth mentioning that the optimization of matrix $W$ and in particular $\\\\sigma_2$ is not the main focus of this work. To the best of our knowledge, our theorems are the first to establish a tight connection between the convergence rate of distributed adaptive methods to the spectral properties of the underlying\\nnetwork. In particular, the inverse dependence on the spectral gap $1-\\\\sigma_2(W)$ is quite natural, since it is well-known to determine the rates of mixing in random walks on graphs [DAW12, LP17].\\n\\n---- [07XBK] Xiao, Lin, Stephen Boyd, and Seung-Jean Kim. \\\"Distributed average consensus with least-mean-square deviation.\\\" Journal of parallel and distributed computing 67.1 (2007): 33-46.\\n\\n---- [04XB] Xiao, Lin, and Stephen Boyd. \\\"Fast linear iterations for distributed averaging.\\\" Systems and Control Letters 53.1 (2004): 65-78.\\n\\n--- [DAW12] Duchi, John C., Alekh Agarwal, and Martin J. Wainwright. 
\\\"Dual averaging for distributed optimization: Convergence analysis and network scaling.\\\" IEEE Transactions on Automatic control 57.3 (2012): 592-606.\\n\\n----[LP17]Levin, David A., and Yuval Peres. Markov chains and mixing times. Vol. 107. American Mathematical Soc., 2017.\\n\\nRefer to Subsection~1.2 on page 2.\\n\\n1-3 [Comment]\\n\\nArguments such as Corollary 10 do not actually imply that DADAM outperforms ADAM when this singular value is large, making it difficult to assess the impact of this work. The numerical experiments also do not test for the statistical significance of the results. \\n\\n1-3 [Response]\\n\\n-- First, the doubly stochastic matrix $W$ defined by our strategy in Section~1.2 warrants that $\\\\sigma_2(W)$ is strictly less than one in magnitude and for a fully connected network is actually equal to 0. \\n\\n-- Second, in the revised version, we improve the previous result and the spectral gap $1- \\\\sigma_2(W)$ does not appear in the regret bound of DADAM, if $T$ is sufficiently large. \\n\\n-- Finally, in the context of stochastic nonconvex optimization, we say a gradient-based method gives an $\\\\epsilon$-approximation solution if $T^{-1}\\\\e{{\\\\bf Reg}^N_T} \\\\leq \\\\epsilon,$ where ${\\\\bf Reg}^N_T$ is defined in Section~2.\\nNow, assume that $T$ is sufficiently large, i.e. it satisfies (16a) and (16b), the $\\\\frac{1}{T}$ term will be dominated by the $\\\\frac{1}{\\\\sqrt{nT}}$ term which leads to a $\\\\frac{1}{\\\\sqrt{nT}}$ convergence rate where $n$ is the number of agents. More specifically, it shows that the computation complexity of DADAM to achieve $\\\\epsilon$-approximation solution is $O(1/\\\\epsilon^2)$. This shows that DADAM can be faster than ADAM for nonconvex stochastic optimization problems for $T$ sufficiently large. \\n\\n Refer to Subsection 3.2.1.\"}",
"{\"title\": \"Response to Reviewer~3\", \"comment\": \"1-4 [Comment]\\n\\n--1. page 1: \\\"note only\\\". Typo.\\n\\n--2. page 2: \\\"decentalized\\\". Typo.\\n.\\n.\\n.\\n--21. page 10: Acknowledgements. This shouldn't be included in the submission.\\n\\n1-4 [Response] \\n\\nModified and fixed.\\n\\n1-5 [Comment]\\n\\n9. page 4: \\\"$\\\\hat{v}_{i,t} = v_3 ...$\\\" You should reference how this assignment in the algorithm relates to the AMSGrad algorithm. Moreover, you should explain why you chose to use a convex combination in the assignment instead of just the max.\\n\\n1-5 [Response] \\n\\n-- In some cases, the numerical performance of our experiments is dependent on the choice of parameter $\\\\hat{v}_{i,t}$ and the results provided establish the efficiency of ADAM in comparison to AMSGrad. Indeed, a good step size value generated in any iterate of ADAM is essentially discarded due to the max in AMSGrad. $\\\\hat{v}_{i,t}$ provides a combination of these two approaches and enables us to develop a convergent adaptive method similar to AMSGrad, while maintaining the efficiency of ADAM. \\n\\nRefer to Subsection 2 on page 4. \\n\\n1-6 [Comment]\\n\\n12. page 5: Theorem 4. $D_T$ can be very large in the bound, which would make the upper bound meaningless. Can you set hyperparameters in such a way to minimize it? Also, what is the typical size of $\\\\sigma_2(W)$ that one would incur?\\n\\n13. page 6: Remark 6. This remark seems misleading. It ignores the $\\\\log(T)$ and $D_T$ terms, both of which may dominate the data dependent arguments.\\n\\n1-6 [Response] \\n\\n--It is easy to show that the regret of DADAM is bounded by $O(G_\\\\infty D_T \\\\sqrt{T})$ where $D_T =\\\\max_{d \\\\in \\\\{1,...,p\\\\} } D_{T,d}$. Indeed, the term $\\\\sum_{t=1}^T |g_{i,t,d}|/\\\\sqrt{t}$ in the proof of Lemma~13 can be bounded by $O(G_\\\\infty \\\\sqrt{T})$ instead of $O(G_\\\\infty \\\\sqrt{T\\\\log{T}})$. Hence, the regret of DADAM is upper bounded by minimum of $O(G_\\\\infty D_T \\\\sqrt{T})$ and the bound presented in Theorems~4 and 5, and thus the worst case dependence on $T$ is $\\\\sqrt{T}$ rather than $\\\\sqrt{T \\\\log{T}}$. It is worth mentioning that in the static setting, i.e. $D_T=0$, the regret of DADAM is upper bounded by $O(G_\\\\infty \\\\sqrt{T})$. \\n\\nRefer to Remark 6 on page 6. \\n\\n1-7 [Comment]\\n\\n16. page 7: Equation (14). Doesn't the presence of $\\\\sigma_2(W)$ imply that the $O(1/T)$ term may not be negligible? It would also be helpful to give some examples of how large T needs to be in (15a) and (15b) in order for this statement to take effect.\\n\\n1-7 [Response] \\n\\nPlease see Response 1-3.\\n\\n\\n1-8 [Comment]\\n\\n18. page 9: Figure 1. Without error bars, it is impossible to tell the statistical significance of these results. Moreover, how sensitive are these results to different choices of hyperparameters?\\n\\n\\n1-8 [Response] \\n\\n-- The numerical results shown in Figure~1 are based on the deterministic variants of DADAM, DGD and EXTRA algorithms with only local computation and neighbor communication. Indeed, our goal is to show the exact convergence to the reference logistic classifier $\\\\theta^*$. Hence, error bars are provided based on the residual $\\\\|\\\\frac{\\\\theta_T- \\\\theta^*}{\\\\theta - \\\\theta^*}\\\\|$. We have provided a detailed implementation for different choices of hyperparameters in Appendix.\\n\\nRefer to Figure~1 on page 10.\\n\\nRefer to Figures~3 and 4 on page 29.\"}",
"{\"title\": \"This paper proposes a consensus-based distributed method, namely DADAM, for online optimization. The technical details are well presented and the empirical results are convincing.\", \"review\": \"The proposed DADAM is a sophisticated combination of decentralized optimization and the adaptive moment estimation. DADAM enables data parallelization as well as decentralized computation, hence suitable for large scale machine learning problems.\\n\\nCorollary 10 shows better performance of DADAM. Besides the detailed derivations, can the authors intuitively explain the key setup which leads to this better performance?\\n\\nThe experimental results are mainly based on sigmoid loss with simple constraints. The results will be more convincing if the authors can provide studies on more complex objective, for example, regularized loss with both L2 and L1 bounded constraints. \\n\\nTh experimental results in Section 5.1 is based on \\\\beta_1 = \\\\beta_2 = \\\\beta_3 = 0.9. From the expression of \\\\hat v_{i,t} in Section 2, this setting implies the most recent v_{i,t} plays a more important role than the historical maximum, hence ADAM is better than AMSGrad. I am curious what the results will look like if we set \\\\beta_3 as a value smaller than 0.5.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Novel algorithm for an important problem but not sufficiently justified theoretically or empirically.\", \"review\": \"This paper presents a consensus-based decentralized version of the Adam algorithm for online optimization. The authors consider an empirical risk minimization objective, which they split into different components, and propose running a separate online optimization algorithm for each component, with a consensus synchronization step that involves taking a linear combination of the parameters from each component before applying each component's individual parameter update. The final output is a simple average of the parameters from each component.\\n\\nThe authors study the important problem of distributed optimization and focus on adapting existing state-of-the-art methods to this setting. The algorithm is clearly presented, and to the best of my knowledge, original. The fact that this work includes both theoretical guarantees for the convex and non-convex settings as well as numerical experiments strengthens the contribution.\\n\\nOn the other hand, I didn't find the actual method presented by the authors to be motivated very well. The main innovation with respect to the standard Adam/AMSGrad algorithm is the use of a mixing matrix W, but the authors do not discuss how the choice of this matrix influences the performance of the algorithm or how one should specify this input in practice. This seems like an important issue, especially since all of the bounds depend on the second singular value of this matrix. Moreover, arguments such as Corollary 10 do not actually imply that DADAM outperforms ADAM when this singular value is large, making it difficult to assess the impact of this work. The numerical experiments also do not test for the statistical significance of the results. \\n\\nThere are also many typos that make the submission seem relatively unpolished.\", \"specific_comments\": \"1. page 1: \\\"note only\\\". Typo.\\n2. page 2: \\\"decentalized\\\". Typo.\\n3. page 2: \\\"\\\\Pi_X[x]. If \\\\Pi_X(x)....\\\" Inconsistent notation.\\n4. page 3: \\\"largest singular of matrix\\\". Typo.\\n5. page 3: \\\"x_t* = arg min_{x \\\\in X} f_t(x)\\\". f_t isn't defined in up to this point.\\n6. page 4: \\\"network cost is then given by f_t(x) = \\\\frac{1}{n} \\\\sum_{i=1}^n f_{i,t}(x)\\\" Should the cost be \\\\frac{1}{n} \\\\sum_{i=1}^n f_{i,t}(x_{i,t})? That would be more consistent with the definition of regret presented in Reg_T^C. \\n7. page 4: \\\"assdessed\\\". Typo.\\n8. page 4: \\\" Reg_T^C := \\\\frac{1}{n} \\\\sum_{i=1}^n \\\\sum)_{t=1}^T f_t(x_{i,t})...\\\" Why is this f_t and not f_{i,t}?\\n9. page 4: \\\"\\\\hat{v}_{i,t} = v_3 ...\\\" You should reference how this assignment in the algorithm relates to the AMSGrad algorithm. Moreover, you should explain why you chose to use a convex combination in the assignment instead of just the max.\\n10. page 5: Definition 1. This calculation should be derived and presented somewhere (e.g. in the appendix).\\n11. page 5: Assumption 3. The notation for the stochastic gradient is not very clear and easily distinguishable from the notation for the deterministic gradient.\\n12. page 5: Theorem 4. D_T can be very large in the bound, which would make the upper bound meaningless. Can you set hyperparameters in such a way to minimize it? Also, what is the typical size of \\\\sigma_2(W) that one would incur?\\n13. page 6: Remark 6. This remark seems misleading. 
It ignores the log(T) and D_T terms, both of which may dominate the data dependent arguments.\\n14. page 6: \\\"The update rules \\\\tilde{v}_{i,t}...\\\". \\\\tilde{v}_{i,t} is introduced but never defined.\\n15. page 6: Last display equation. The first inequality seems like it can be an equality.\\n16. page 7: Equation (14). Doesn't the presence of \\\\sigma_2(W) imply that the O(1/T) term may not be negligible? It would also be helpful to give some examples of how large T needs to be in (15a) and (15b) in order for this statement to take effect.\\n17. page8: \\\"distributed federated averaging SGD (FedAvg)\\\". What is the reference for this? It should be included here. It should probably also be mentioned in the introduction as related work.\\n18. page 9: Figure 1. Without error bars, it is impossible to tell the statistical significance of these results. Moreover, how sensitive are these results to different choices of hyperparameters?\\n19. page 9: \\\"obtain p coefficients\\\". What is p in these experiments?\\n20. page 9: Metropolis constant edge weight matrix W\\\". What is \\\\sigma_2(W) in this case?\\n21. page 10: Acknowledgements. This shouldn't be included in the submission.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A consensus-based distributed adaptive gradient method for online optimization\", \"review\": \"Title: DADAM: A consensus-based distributed adaptive gradient method for online optimization\", \"summary\": \"The paper presented DADAM, a new consensus-based distributed adaptive moment estimation method, for online optimization. The author(s) also provide the convergence analysis and dynamic regret bound. The experiments show good performance of DADAM comparing to other methods.\", \"comments\": \"1) The theoretical results are nice and indeed non-trivial. However, could you please explain the implication to equation (7a)? Does it have absolute value on the LHS? \\n\\n2) Can you explain more clearly about the section 3.2.1? It is not clear to me why DADAM outperform ADAM here. \\n\\n3) Did you perform algorithms on many runs and take the average? Also, did you tune the learning rate for all other algorithms to be the best performance? I am not sure how you choose the parameter \\\\alpha here. What if \\\\alpha changes and do not base on that in Yuan et al. 2016? \\n\\n4) The deep learning experiments are quite simple. In order to validate the performance of the algorithm, it needs to be run on more datasets and networks architectures. MNIST and CIFAR-10 and these simple network architectures are quite standard. I would suggest to provide more if the author(s) have time. \\n\\nIn general, I like this paper. I would love to have discussions with the author(s) during the rebuttal period.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rylIAsCqYm | A2BCD: Asynchronous Acceleration with Optimal Complexity | [
"Robert Hannah",
"Fei Feng",
"Wotao Yin"
] | In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup. Moreover, we then prove that these algorithms both have optimal complexity. Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity. Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4-5x faster than NU_ACDM on some data sets in terms of wall-clock time. To motivate our theory and proof techniques, we also derive and analyze a continuous-time analog of our algorithm and prove it converges at the same rate. | [
"asynchronous",
"optimization",
"parallel",
"accelerated",
"complexity"
] | https://openreview.net/pdf?id=rylIAsCqYm | https://openreview.net/forum?id=rylIAsCqYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkekXqTWeE",
"rye8Cm2_67",
"BJeJU3oOpm",
"B1xLs73Da7",
"BJlBRGhDpm",
"SJxmpDwPaX",
"SylE3VwwTm",
"HJeBfaW5n7",
"HJlQLyfShQ",
"r1lDBW9Q3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544833559182,
1542140878095,
1542138951129,
1542075294094,
1542075084807,
1542055867302,
1542055084146,
1541180684787,
1540853579475,
1540755775187
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper895/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper895/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper895/Authors"
],
[
"ICLR.cc/2019/Conference/Paper895/Authors"
],
[
"ICLR.cc/2019/Conference/Paper895/Authors"
],
[
"ICLR.cc/2019/Conference/Paper895/Authors"
],
[
"ICLR.cc/2019/Conference/Paper895/Authors"
],
[
"ICLR.cc/2019/Conference/Paper895/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper895/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper895/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers all agreed that this paper makes a strong contribution to ICLR by providing the first asynchronous analysis of a Nesterov-accelerated coordinate descent method.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a clear accept\"}",
"{\"title\": \"Thanks for the explanation\", \"comment\": \"Having read your explanation, I am convinced that the issue was with my intuition. I see how $\\\\underbar{L}$ changes the algorithm's step sizes, it might be good to include a mention of how you can be slightly less aggressive at the cost of a factor of 2, but perhaps that is obvious to most readers.\\n\\nThanks for the comments, I have revised my score for the paper.\"}",
"{\"title\": \"Revision\", \"comment\": \"Dear reviewers. We have revised our paper to include relevant references that we overlooked and minor changes that were suggested. I apologize for the lateness. I (main author) have had the flu and fever for almost a week, which makes this more challenging.\\n\\nWe hope that we have convinced Reviewer 1 that our results are plausible. It has been generally observed experimentally and theoretically, that at least for smooth convex optimization problems, asynchronicity does not cause significant slowdown (for delays not too large). Our results extend this to the case of an accelerated algorithm. We hope that we have helped to reconile Reviewers 3's intuition on acceleration with our results. Our monotonicity is an extension of similar results by Nesterov. We thank Reviewer 2 for drawing our attention to the rich body of work on asynchronous SGD.\\n\\nIn light of our responses to the reviewers' concerns, we hope that you will reconsider our current scores. We believe our work has solved a longstanding and challenging problem in asynchronous optimization. We invite the reviewers' comments on anything else that would help improve our paper and its impact on the ML community. It would be a great honor to present our work at ICML 2019. Sincerely,\\n\\nThe Authors\"}",
"{\"title\": \"L Lower Bar\", \"comment\": \"\\u201cI am also a little bit surprised...\\u201d Indeed we were also surprised :-). The reviewer brings up a very interesting point of discussion. You are correct that all things being equal, having a lower $\\\\underbar{L}$ would make optimization easier. However, changing $\\\\underbar{L}$ actually changes the base synchronous algorithm, and hence you can\\u2019t make a direct comparison.\\n\\nThough we are not completely sure, we actually believe that this is not an artifact. See (***) ahead for why we are not completely sure. A lower $\\\\underbar{L}$ means that there are smaller coordinate Lipschitz constants. The step sizes for these small-Lipschitz coordinates are therefore more aggressive. More aggressive steps increase the error due to asynchronicity. The increase in these errors leads to a more restrictive condition on the delay. The effect of having a lower $\\\\underbar{L}$ on the asynchronous error can be seen in the middle of page 18. This effects coordinates with larger Lipschitz constants less, because they are already taking more conservative step sizes. \\n\\nHowever, it is possible to temper this aggression to obtain a weaker condition on the maximum delay, at the cost of a constant factor in terms of the complexity. We can do this by overestimating the Lipschitz constants of the smaller-value coordinates. We did not include a discussion of this because of space limitations, and our choice to focus on the scenarios where the complexities for asynchronous and synchronous are nearly matched. Consider the $L_{\\u00bd}$ average of the Lipschitz constants $L_{\\u00bd} = ((1/n)sum_{i=1}^nL_i^{\\u00bd})^2$. We replace $L_i$ with $max(L_i,L_{\\u00bd})$. This will at most double $S$, which means the complexity may double. However because we are taking less aggressive steps on small-Lipschitz coordinates, we obtain a weaker condition on the maximum delay. Overestimating the Lipschitz constant in this way amounts to replacing $\\\\underbar{L}$ with $L_{\\u00bd}$ in the convergence condition (and replacing $S$ with $2S$). This also leads to a less counterintuitive condition on the delay.\\n\\n(***) It might be possible to tighten our analysis, and weaken the condition on the delay further, and perhaps change the dependence on $\\\\underbar{L}$. This proved difficult because coordinate-smoothness is actually not that well understood. For instance, the sharp complexity of serial (non-accelerated!) coordinate descent for coordinate smooth functions is actually unknown (to our surprise) because of the absence of nontrivial lower performance bounds for RBCD. For the case where we do not assume coordinate smoothness, simply smoothness, there exist sharp worst-case analyses of coordinate descent. These were important roadmaps that we used to analyze convergence. However, no such roadmap exists in the coordinate-smooth case. There has been some interesting work in this direction by Richtarik and Takac using the so-called \\u201cexpected separable overapproximation\\u201d assumption. However, the meaning and implications of this assumption are unclear. A better understanding of coordinate smoothness in the serial case may enable us to improve our analysis.\\n\\nWe can discuss the difficulty and open questions in this setting if the reviewer is interested. We hope that the reviewer will reconsider their score of our paper.\"}",
"{\"title\": \"Thank you. Intuition.\", \"comment\": \"We thank the reviewer for their time and consideration of our work. We hope that we can convince the reviewer to increase their score. We believe that our work solves a difficult outstanding problem in the field of asynchronous optimization, and provides a valuable contribution to ICLR.\\n\\n\\u201cMy main confusion\\u2026\\u201d We admit this may sound somewhat surprising. However, this result is consistent with many previous theoretical and experimental results in the coordinate descent setting. It is also consistent with the ODE analysis in our paper. For example, in \\u201cAn asynchronous parallel stochastic coordinate descent algorithm\\u201d Liu & Wright (2015), authors obtain linear convergence results for asynchronous RBCD. They obtain a complexity that is within a factor of 4 of the sharp iteration complexity for the synchronous case. Experimentally though, they observe that the complexity for the asynchronous RBCD is essentially the same as serial RBCD -- even for up to 40 cores. So essentially, they observed that this penalty factor is approximately 1 -- not 4. In \\u201cMore Iterations per Second, Same Quality\\u201d, Hannah & Yin manage to prove that this factor is ~1. Their condition on the maximum delay is $\\\\tau = o(m^(\\u00bd))$. \\n\\nOur work extends these theoretical and experimental results to the accelerated case. As you observed, for $\\\\kappa\\\\approx n $, our condition is slightly more restrictive at $\\\\tau = o(m^(\\u00bc))$. In our experiments, we observed that the error vs. number of iterations for A2BCD was essentially the same for synchronous accelerated RBCD. However, we did not include graphs in the interest of space (though we can add these to the appendix if you think they would add to the paper).\\n\\nLet us offer some insight on why this is possible. Consider the non-accelerated case. Coordinate descent methods only modify a block of the solution vector. If one does full-gradient updates, it is known that it is impossible to obtain a speedup. However, since we are only modifying a single block at a time, it is plausible that the delayed gradient would be a good surrogate for the true gradient -- at least on average. Most of the solution vector is actually up to date, but a fraction $O(1/\\\\sqrt{n})$ is outdated. The outdated fraction has a uniformly random distribution, which prevents blocks that have a large influence on the value of the gradient from being outdated most of the time. Given this, it makes sense that some delay is tolerable from a complexity standpoint, and that it is only a question of how much. \\n\\nIn the accelerated case, without modification, we are applying dense updates to the solution vectors. This is because of the averaging steps. However, notice that the quantities that we are averaging are up-to-date, since they are centrally maintained (this turns out to be essential experimentally). It is only the gradient part that can be outdated. Hence, it still remains plausible that our delayed updates are a good surrogate for non-delayed updates.\"}",
"{\"title\": \"Thanks you. Monotonicity. Implementation.\", \"comment\": \"We thank the reviewer for their time and their kind words. The \\u201cminors\\u201d will be addressed later today in an edit.\\n\\n\\u201cTheorem 1 essentially shows...\\u201d You are correct in stating that the objective function value and the distance to the solution are not guaranteed to improve in expectation at each iteration. However, in \\u201cEfficiency of coordinate descent methods on huge-scale optimization problems\\u201d Nesterov (2012), Nesterov discovered that a certain linear combination of both is guaranteed to decrease linearly and monotonically in expectation at every step (see Theorem 6 of that paper). The most familiar way to prove convergence is with estimating sequence techniques. However, we found Nesterov\\u2019s Lyapunov function approach to be a better starting point in light of the existing Lyapunov function techniques for asynchronous algorithms. \\n\\nAs mentioned in Remark 1, our proof, Lyapunov function, and results essentially reduce to Nesterov\\u2019s proof, Lyapunov function, and results in the synchronous case where $\\\\tau=0$. So, a guaranteed improvement of the Lyapunov function at every step should not be that surprising.\\n\\n\\u201cThe actual implemented algorithm...\\u201d This is correct. Depending on the problem at hand and the computational architecture, a different implementation may be the most efficient. These different implementations may have slightly different convergence proofs & properties. It is unclear if there is a general way to prove convergence results for all possible implementations. So, we were forced to simply chose a base case/setup that was as similar as possible to other literature on asynchronous optimization algorithms (i.e., similar to Liu & Wright). We chose ridge regression for our experiments, even though it doesn\\u2019t exactly fit into our base case, because it was of general interest. It also demonstrates that even though this is a coordinate method, it can be used on finite-sum problems via duality. Our proof of convergence for this base case can be seen as a roadmap to prove convergence for other asynchronous accelerated algorithm implementations, which we expect to be fairly similar.\\n\\nWe also believe that our sparse implementation in itself is a useful contribution to the field. We observed that the linear transformation of Lee & Sidford (2013) leads to \\u201ccoordinate friendliness\\u201d, and hence efficient updates. This realization was essential to obtaining a state-of-the-art coordinate descent algorithm, and led to a massive speedup for both the synchronous and asynchronous case.\"}",
"{\"title\": \"Thank you & ESO Assumption\", \"comment\": \"We thank the reviewer for their kind words, and for drawing our attention to some important papers on asynchronous algorithms. We will add a discussion of these references later today. Apart from the odd spacing of the items at the end of the bibliography, are there any other formatting issues you were referring to?\\n\\n\\u201cWould it be possible to extend the present work in that direction?\\u201d Yes, we believe it is extremely likely that it is possible to extend our work in this direction. From a technical viewpoint: The ESO assumption leads to very similar inequalities in the partially separable case as in our base case. The only difference is that the inequalities will rely on ESO parameters instead of coordinate Lipschitz parameters. Given that, it should be possible to map our proof with proper modifications onto this setting. Moreover, we believe the ESO assumption may allow for a tighter analysis, leading to larger allowable delays. ESO has already been used to obtain tighter bounds in the mini-batch setting, which is intimately related to the asynchronous setting. Exploring the interaction between ESO and asynchronicity will surely yield interesting results.\"}",
"{\"title\": \"Review: A2BCD\", \"review\": \"The authors design an accelerated, asynchronous block coordinate descent algorithm, which, for sufficiently small delays attains the iteration complexity of the current state of the art algorithm (which is not parallel/asynchronous). The authors prove a lower bound on the iteration complexity in order to show that their algorithm is near optimal. They also analyze an ODE which is the continuous time limit of A2BCD, which they use to motivate their approach.\\n\\nI am a little bit confused about the guarantee of the algorithm, as it does not agree with my intuition. Perhaps I am simply mistaken in my intuition, but I am concerned that there may need to be additional premises to the Theorem.\\n\\nMy main confusion is with Theorem 1, which says that for $\\\\psi < 3/7$ the iteration complexity is approximately the iteration complexity of NU_ACDM times a factor of $(1 + o(1))/(1-\\\\psi)$, i.e. within that factor of the optimal *non-asynchronous/parallel* algorithm. In particular, since $\\\\psi < 3/7$ this means that the algorithm is within a $7/4 + o(1)$ factor. As mentioned in Corollary 3, this applies for instance when $L_i = L$ for all i and $\\\\tau = \\\\Theta( n^{1/2}\\\\kappa^{-1/4} )$. Therefore, in a regime where $n \\\\approx \\\\kappa$, and $n$ very large, this would indicate that the algorithm would be almost as good as the best synchronous algorithm even for delays $\\\\tau \\\\approx n^{1/4}$. Perhaps I am missing something, but this seems very surprising to me, in particular, I would expect more significant slowdown due to $\\\\tau$. \\n\\nI am also a little bit surprised that the maximum tolerable delay is proportional to the *minimum* smoothness parameter $\\\\underbar{L}$. It seems like decreasing $\\\\underbar{L}$ should make optimization easier and therefore more delay should be tolerated. Perhaps this is simply an artifact of the analysis.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"questions about theories\", \"review\": [\"This paper studies the combination of the asynchronous parallelization and the accelerated stochastic coordinate descent method. The proved convergence rate is claimed to be consistent with the non parallel counterpart. The linear speedup is achievable when the maximal staleness is bounded by n^{1/2} roughly, that sounds very interesting result to me. However, I have a few questions about the correctness of the results:\", \"Theorem 1 essentially shows that every single step is guaranteed to improve the last step in the expectation sense. However, this violates my my experiences to study Nesterov's accelerated methods. To my knowledge, Nesterov's accelerated methods generally do not guarantee improvement over each single step, because accelerate methods essentially constructs a sequence z_{t+1} = A z_t where A is a nonsymmetric matrix with spectral norm greater than 1.\", \"The actual implemented algorithm is using the sparse update other than the analyzed version, since the analyzed version is not efficient or suitable for parallelization. However, the sparse updating rule is equivalent to the original version only for the non asynchronous version. Therefore, the analysis does not apply the actual implementation.\"], \"minors\": [\"pp2 line 8, K(epsilon) is not defined\", \"Eq. (1.4), the index is missing.\", \"missing reference: An Asynchronous Parallel Stochastic Coordinate Descent Algorithm, ICML 2014.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An elegant solution to a long-standing open question as to the speed-up of asynchronous distributed coordinate descent\", \"review\": \"In distributed optimisation, it is well known that asynchronous methods outperform synchronous methods in many cases. However, the questions as to whether (and when) asynchronous methods can be shown to have any speed-up, as the number of nodes increases, has been open. The paper under review answers the question in the affirmative and does so very elegantly.\\n\\nI have only a few minor quibbles and a question. There are some recent papers that could be cited:\", \"http\": \"//proceedings.mlr.press/v80/lian18a.html\", \"https\": \"//arxiv.org/abs/1406.0238\\nand citations thereof. Would it be possible to extend the present work in that direction?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
S1grRoR9tQ | Bayesian Deep Learning via Stochastic Gradient MCMC with a Stochastic Approximation Adaptation | [
"Wei Deng",
"Xiao Zhang",
"Faming Liang",
"Guang Lin"
] | We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables. Inspired by dropout, a popular tool for regularization and model ensemble, we assign sparse priors to the weights in deep neural networks (DNN) in order to achieve automatic “dropout” and avoid over-fitting. By alternatively sampling from posterior distribution through stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and optimizing latent variables via stochastic approximation (SA), the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables. This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables. Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables. Additionally, its application on the convolutional neural networks (CNN) leads to state-of-the-art performance on MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks. | [
"generalized stochastic approximation",
"stochastic gradient Markov chain Monte Carlo",
"adaptive algorithm",
"EM algorithm",
"convolutional neural networks",
"Bayesian inference",
"sparse prior",
"spike and slab prior",
"local trap"
] | https://openreview.net/pdf?id=S1grRoR9tQ | https://openreview.net/forum?id=S1grRoR9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Sygu7OqSxE",
"H1xyINe1gV",
"BJeOIPB9nQ",
"ByguuVqt3X",
"HyxiMQrDh7"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545082911749,
1544647751439,
1541195600292,
1541149808126,
1540997906886
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper894/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper894/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper894/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper894/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper894/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Reply\", \"comment\": \"I\\u2019m not fully convinced that replacing MAP by SG-MCMC in EMVS can solve the local trap problem. It is known that SG-MCMC could be locally trapped and there are methods to alleviate this problem such as adjusting the temperature you mentioned. But the paper did not use any of these techniques. I think only using SG-MCMC itself may not be able to solve the local trap problem. Besides, the empirical results could not demonstrate that the improvement over EMSV is because SG-MCMC-SA solves the local trap. The regression experiment only has one mode, so there is no local trap problem, why is SG-MCMC-SA better than EMSV here? I wonder if ESM will perform similarly to SG-MCMC-SA on this task. It would be better to compare to ESM in all the experiments since it is one of the most related methods.\"}",
"{\"metareview\": \"This paper proposes a Bayesian alternative to dropout for deep networks by extending the EM-based variable selection method with SG-MCMC for sampling weights and stochastic approximation for tuning hyper-parameters. The method is well presented with a clear motivation. The combination of SMVS, SG-MCMC, and SA as a mixed optimization-sampling approach is technically sound.\\n\\nThe main concern raised by the readers is the limited originality. SG-MCMC has been studied extensively for Bayesian deep networks and applying the spike-and-slab prior as an alternative to dropout is a straightforward idea. The main contribution of the paper appears to be extending EMVS to deep net with commonly used sampling techniques for Bayesian networks.\\n\\nAnother concern is the lack of experimental justification for the advantage of the proposed method. While the authors promise to include more experiment results in the camera-ready version, it requires a considerable amount of effort and the decision unfortunately has to be made based on the current revision.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Good Bayesian approach to deep networks with spike-and-slab prior but with limited originality and lack of experiment support\"}",
"{\"title\": \"Unclear benefits of SG-MCMC with SA and the experiments are not sufficiently convincing\", \"review\": \"The authors describe a new method of posterior sampling with latent variables based on SG-MCMC and stochastic approximation (SA). The new method uses a spike and slab prior on the weights of the deep neural networks to encourage sparsity. Experiments on toy regressions, classification and adversarial attacks demonstrate the superiority over SG-MCMC and EMSV.\\n\\nCompared to the previous work EMSV (ESM), the novelty of SG-MCMC-SA is replacing the MAP in EMSV by SG-MCMC with stochastic approximation to alleviate the local trap problem in DNNs. However, I did not see why SG-MCMC with SA can achieve this goal. It is known that SG-MCMC methods tend to get trapped in a local optimal [1]. How did SA solve this problem? Besides, it is unclear to me where Eq. 17 uses stochastic approximation. The authors need to explain more about stochastic approximation for the readers who are not familiar with this method. \\n\\nEmpirical results on a synthetic example, MNIST and FMNIST show that SG-MCMC-SA outperforms the previous methods. However, the improvements of the proposed method are marginal. MNIST and FMNIST are small and easy datasets and it is very hard to tell the effectiveness of SG-MCMC-SA. It would be more convincing to show the empirical results on other datasets, e.g. CIFAR, using some larger architectures. The comparison would be more significant in that case. \\n\\n[1]. Zhang, Yizhe, et al. \\\"Stochastic Gradient Monomial Gamma Sampler.\\\" arXiv preprint arXiv:1706.01498 (2017).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work, but in my view not substantial novelty and significance\", \"review\": \"TITLE\\nBayesian deep learning via stochastic gradient mcmc with a stochastic approximation adaptation\\n\\nREVIEW SUMMARY\\nFairly well written paper on SG-MCMC type inference in neural networks with slab and spike priors. In my view, the originality and significance is limited.\\n\\nPAPER SUMMARY\\nThe paper develops a method for sampling/optimization of a Bayesian neural network with slab and spike priors on the weights.\\n\\nQUALITY\\nI belive the contribution is technically sound (but I have not checked all equations or the proof of Theorem 1). The empirical evaluation is not unreasonable, but also not strongly convincing.\\n\\nCLARITY\\nThe paper is fairly well written, but grammar and use of English could be slightly improved (not so important). \\n\\nORIGINALITY\\nThe paper builds on existing work on EM-type algorithms for slab and spike models and SG-MCMC for Bayesian inference in neural networks. The novelty of the contribution is limited: The main contribution is the combination of the two methods and some theoretical results. I am not able to judge if there is significant originality in the theoretical results (Theorem 1 + Corr 1+2) but if I am not mistaken it is more or less an application of a known result to this particular setting?\\n\\nSIGNIFICANCE\\nWhile I think the proposed algorithm is reasonable and most likely useful in practice, I am not sure the contribution is substantial enough to gain large interest in the community. \\n\\nFURTHER COMMENTS\\nFigure 2 (d+e) are in my view not so useful for assessing the training/test performance, but I am not even completely sure what the figures shows, as there are no axis labels. I would prefer some results on the loss, perhaps averaged over multiple data sets.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"The proposed SGLD-SA algorithm with its convergence properties is interesting\", \"review\": [\"The proposed SGLD-SA algorithm, together with its convergence properties, is very interesting. The introduction of step size $w^{k}$ is very similar to the \\\"convex combination rule\\\" in (Zhang & Brand 2017) to guarantee convergence.\", \"It seems that this paper only introduced Bayesian inference in the output layers. It would be more interesting to have a complete Bayesian model for the full network including the inner and activation layers.\", \"This paper imposed spike-and-slab prior on the weight vector which can yield sparse connectivity. Similar ideas have been explored to compress the model size of deep networks (Lobacheva, Chirkova and Vetrov 2017; Louizos, Ullrich and Welling 2017 ). It would make this paper stronger to compare the sparsification and compression properties with the above work.\", \"In equation (11) there is a summation from $\\\\beta_{p+1}$ to $\\\\beta_{p+u}$. I wonder where this term comes from, as I thought $\\\\beta$ is a vector of dimension $p$.\"], \"reference\": \"Zhang, Ziming, and Matthew Brand. \\\"Convergent block coordinate descent for training tikhonov regularized deep neural networks.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\nLobacheva, Ekaterina, Nadezhda Chirkova, and Dmitry Vetrov. \\\"Bayesian Sparsification of Recurrent Neural Networks.\\\" arXiv preprint arXiv:1708.00077 (2017).\\n\\nLouizos, Christos, Karen Ullrich, and Max Welling. \\\"Bayesian compression for deep learning.\\\" Advances in Neural Information Processing Systems. 2017.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HJeB0sC9Fm | Detecting Memorization in ReLU Networks | [
"Edo Collins",
"Siavash Arjomand Bigdeli",
"Sabine Süsstrunk"
] | We propose a new notion of 'non-linearity' of a network layer with respect to an input batch that is based on its proximity to a linear system, which is reflected in the non-negative rank of the activation matrix.
We measure this non-linearity by applying non-negative factorization to the activation matrix.
Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization. Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases. We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets. Our results demonstrate that as an indicator for memorization, our technique can be used to perform early stopping. | [
"Memorization",
"Generalization",
"ReLU",
"Non-negative matrix factorization"
] | https://openreview.net/pdf?id=HJeB0sC9Fm | https://openreview.net/forum?id=HJeB0sC9Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xoDv41gV",
"rJeP-hYZyN",
"rJgi5LI6AQ",
"Hkxm8rUa0Q",
"ByxMYy1j0X",
"ByxVZCnKRX",
"S1eP3MSQR7",
"SkgQvGrQA7",
"rkxAEfHXAQ",
"HJlzuZH7C7",
"Byg4UbSXRQ",
"ByxNZRV7AQ",
"ByePaSxs2X",
"SygfhEWq3m",
"SkgNroyY2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544664931167,
1543769086977,
1543493266534,
1543492939442,
1543331705770,
1543257596510,
1542832815349,
1542832730631,
1542832694495,
1542832490056,
1542832460481,
1542831611803,
1541240255297,
1541178538073,
1541106491831
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper893/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/Authors"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper893/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new measure to detect memorization based on how well the activations of the network are approximated by a low-rank decomposition. They compare decompositions and find that non-negative matrix factorization provides the best results. They evaluate of several datasets and show that the measure is well correlated with generalization and can be used for early stopping. All reviewers found the work novel, but there were concerns about the usefulness of the method, the experimental setup and the assumptions made. Some of these concerns were addressed by the revisions but concerns about usefulness and insights remained. These issues need to be properly addressed before acceptance.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"title\": \"no title\", \"comment\": \"I comment using the numbering we have in the thread above:\\n\\n1) I find the explanation and the additional experiment convincing.\\n\\n2) I find the new experiment interesting and *partially* supporting the conjecture about the phases.\\n\\n3) Now I understand and this explains the behavior. However, the text in the method section calls $A$ a \\\"layer activation matrix\\\". That is each row represent a sample and the columns the units in that layer. Next, $A$ is what that gets factorized. This does not imply the interpretation in the previous comment about each spatial location becoming a separate row in the $A$ matrix and, thus, is an incorrect representation of the method (for convolutional layers). So, the method section needs to be rewritten to make this clear.\\n\\n4) This point is important since the usefulness of the proposed early stopping procedure is the main empirical contribution of the submission.\", \"final_take\": \"I deeply appreciate the authors effort in addressing these issues. Including all these comments and experiments in the main manuscript will make the work more clear and convincing. However, unfortunately, I still think the work's usefulness from both conceptual and empirical aspects is lacking. From the conceptual point of view, it does not put forward a systematic *and* novel perspective, the phase study of layers is novel/interesting but not systematic, the linearity analysis using NMF is systematic in general but does not put forward novel findings. The empirical usefulness of the proposed method is also at question (see point 4 above). Also, the manuscript had many unclear points, many (or all) of which are cleared through the discussion but requires a reorganization for them to be properly integrated in the text and potentially find other unclear parts.\\n\\nSo, all in all, given my final view above, I would prefer to see the paper get accepted when all these points are properly addressed (which should be possible for the next ML conference) but would not be too disappointed if it gets accepted to this ICLR conference track.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you again for the helpful comments.\\n\\n1) \\\"1.a) how sensitive is the new training set accuracy to the choice of the batches 1.b) how many batches do you consider per experiment? 1.c) how sensitive is the study to the number of batches?\\\"\\n\\nIn our experiments, we used one batch per class, per randomization probability p, and per network instance.\\nIn datasets with 10 outputs (Fashion-MNIST, CIFAR10, SVHN, UrbanSounds) we set the number of batches per network to 10, i.e. 10 multi-class batches or 10 single-class batches (1 batch per class). This was to equate the amount of data seen by our method.\\nAdditionally, the batches were allowed to vary across the 10 network instances tested and also per label randomization probability p.\\nThis means that to evaluate each accuracy tradeoff in out experiments (Figures 2, 3, 4, 7 and 10) we randomly sampled 100 batches.\\nWe note that each batch contained 50 samples throughout our experiments.\\n\\nThis choice had no significant impact on our results because of the very small variance across batches.\\nFor instance, we measured the AuC of the curves in Figure 3(a) over 6 different runs, i.e., each time using a different per-class batch. Below we report, per label randomization probability p, the mean and the standard deviation:\\n\\n \\t mean\\t std\\np=0.0\\t0.9756\\t0.0008\\np=0.2\\t0.8779\\t0.0013\\np=0.4\\t0.7704\\t0.0039\\np=0.6\\t0.6847\\t0.0066\\np=0.8\\t0.6381\\t0.0038\\n\\nWe understand the importance of clarity in our experiments, and we will discuss the batch sensitivity and accuracy variance in the paper.\\n\\n2) \\u201cThis is not a satisfactory explanation unless additional experiments are provided. Figure 2.c shows a rapid drop from conv_3_1 to conv_3_2 and then a sharp increase from 4_1 to 4_3. It sounds ad-hoc to me to assume the first drop is a sudden turn of spatial class-information to non-spatial (channel-wise) class-information and the increase is due to just being close to the final classification layer. While this conjecture can be true, proper experiments should be conducted to confirm this.\\\"\\n\\nAs suggested by the reviewer, to support our hypothesis, we performed an extensive layer-by-layer evaluation on the fully-connected network we have trained on Fashion-MNIST.\\n\\nExcept for the network architecture and dataset, the rest of the experiment is identical to the experiment in Figure 2. This is to validate the effect of spatial arrangement in the early layers of our convolutional network.\\n\\nThe results (seen here: https://ibb.co/K6GQbRP ) show that in this case the early layers are less robust to compression, as per the reviewer's initial intuition, which supports our hypothesis as to the phenomenon observed in Figure 2.\\n\\n\\n3) \\\"what is the difference between a \\u201cwhole feature map\\u201d and \\u201cactivations\\u201d? Do you mean that for the convolutional layers the approximation is done separately per channel and thus there are different scaling factors per channel for each sample?\\\"\\n\\nWe apologize for the ambiguity in our terminology. By 'activations' we were referring to individual C-dimensional vectors comprising the NxCxHxW activation tensor.\\nFor NMF and PCA we consider every C-dimensional vector as a single datapoint (of which there are N*H*W), while for classification the network views every 1xCxHxW block as a single datapoint. 
So while every C-dimensional vector is pointing in the same direction, there is variability due to scale and spatial arrangement within each 1xCxHxW block.\\n\\n\\n4) \\u201cI appreciate the additional experiments provided in Ap. 6.4 However, for this to be true, it is important to show that the hyperparameters used (smoothing factor for the approximated training set accuracy plot, batchsize, k, and maybe others) are independent of the dataset. Otherwise, it would diminish the benefit of not needing a validation set since one needs to find the best values on a heldout set.\\\"\\n\\nWe agree with the reviewer that more investigation and experiments are required for the practical implementation of the early stopping application of our approach. We will clarify this in our paper.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you again for the helpful comments.\\n\\nWe agree that the task of detecting memorization in general is not conclusively resolved by our paper.\\nThe measurements based on parameter-norms, mentioned in the paper, have very limited usefulness in practical applications. We also note that the PCA method was proposed in our work. This was a means to provide a baseline for comparison. However, we can change the title to \\\"On detecting...\\\" if there is consensus among the reviewers.\"}",
"{\"title\": \"Thanks for the additional experiments and comments\", \"comment\": \"I appreciate the authors effort in addressing the raised issues by revising the paper, doing additional experiments and replying to comments. I believe the paper has improved, however I see the following main issues are still outstanding:\\n\\n1) \\u201cBatches are not exhaustive over the dataset.\\u201d: 1.a) how sensitive is the new training set accuracy to the choice of the batches 1.b) how many batches do you consider per experiment? 1.c) how sensitive is the study to the number of batches?\\n\\n2) \\u201cThis is because we perform factorization across the channel dimension of the CNN. In early layers most of the information is spread across the spatial dimensions, but as the effective receptive field grows and the spatial resolution of the feature maps decreases with depth, the channel dimension holds more and more information.\\u201d This is not a satisfactory explanation unless additional experiments are provided. Figure 2.c shows a rapid drop from conv_3_1 to conv_3_2 and then a sharp increase from 4_1 to 4_3. It sounds ad-hoc to me to assume the first drop is a sudden turn of spatial class-information to non-spatial (channel-wise) class-information and the increase is due to just being close to the final classification layer. While this conjecture can be true, proper experiments should be conducted to confirm this.\\n\\n3) \\u201cWhile with k=1 all activations do point in the same direction in feature space, classification is ultimately performed over a whole feature map, where the spatial arrangement and different scales of activations leads to different predictions.\\u201d what is the difference between a \\u201cwhole feature map\\u201d and \\u201cactivations\\u201d? Do you mean that for the convolutional layers the approximation is done separately per channel and thus there are different scaling factors per channel for each sample?\\n\\n4) \\u201cour method is useful where a validation set is not available. [...] few-shot learning on MNIST with only 20 images \\u201d I appreciate the additional experiments provided in Ap. 6.4 However, for this to be true, it is important to show that the hyperparameters used (smoothing factor for the approximated training set accuracy plot, batchsize, k, and maybe others) are independent of the dataset. Otherwise, it would diminish the benefit of not needing a validation set since one needs to find the best values on a heldout set.\"}",
"{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks the author for the rebuttal. It clarifies most of my concerns. I have updated my score. Though I still think this paper is around borderline as the method can only be used to comparatively study the memorization effects on the same dataset and for the same network architecture, which could already be achieved via simpler previous methods like PCA, ablation studies or even measuring the norms of the layer weights.\\n\\nIf the paper is accepted, I would like to request the author to modify the paper title to be more specific. The current title sounds like the grand challenge of detecting memorization has been solved in this paper, which is a bit misleading as the general case is still quite open. Maybe the title could be made more specific by mentioning this is an approach based on NMF, or maybe indicating that it only works as a relative comparison on identical tasks and network architectures.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for the helpful review!\\n\\n\\\"The early stopping section could benefit from more experiments. In particular, it would be helpful to see a scatter plot of the time of peak test loss as a function of NMF/Ablation AuC local maxima and to measure the correlation between these rather than simply showing 3 examples.\\\"\\nAs suggested, we have performed more experiments for early stopping, and summarize the results in Figure 6c. We have also added early stopping experiments in the setting of few-shot learning in section 6.4 of the appendix (specifically Figure 9 and Table 2).\\n\\n\\\"it is worth noting that the variance on random ablations appears to be lower than that of NMF and PCA.\\\"\\nIndeed, we added this comment to the text, as well as a note on computation time.\\n\\n\\\"The error bars on the plots are often hard to see\\\"\\nThank you for pointing this out. We have improved the visibility of error bars in all figures.\"}",
"{\"title\": \"Response to reviewer 2 (2/2)\", \"comment\": \"\\\"The p=1 and p<1 curves seem to be very different. If one is to sample more densely between p=0.8 and p=1, would there still be a clear phase transition?\\\"\\nThis is indeed interesting. We have added plots regarding the phase shift between the cases p=0.8 and p=1 in section 6.5 of the appendix. In the case of CIFAR-10 and our network architecture, this change happens around p=0.9.\\n\\n\\\"Can you add discussions to the computation requirements for the proposed analysis?\\\"\\nAs suggested, we have added a section to the appendix (6.6) discussing the computational complexity of our algorithm. While NMF naturally incurs certain overhead, our implementation runs in reasonable time thanks to GPU acceleration. \\n\\n\\\"For the early stopping experiment, the main text says \\\"These include the test error (in blue)\\\" while in the figure the label axis is \\\"Test loss\\\"\\\"\\nGood catch, we fixed the text.\\n\\n\\\"...can you show in parallel the same plots in error rate, as the test error is more important than the test loss\\\"\\nAs suggested, we show the usefulness of our method to early stopping by comparing accuracy curves in newly added section 6.4 of the appendix.\\n\\n\\\"The convention with subplots seem to be putting sub-captions under the figures, not above\\\"\\nIt is now fixed.\\n\\n[1] Bengio et al. \\\"Representation learning: A review and new perspectives.\\\" 2013\\n[2] Tishby et al. \\\"Deep learning and the information bottleneck principle.\\\" 2015\\n[3] Amjad, et al. \\\"How (Not) To Train Your Neural Network Using the Information Bottleneck Principle.\\\" 2018\"}",
"{\"title\": \"Response to reviewer 2 (1/2)\", \"comment\": \"We thank the reviewer for the insightful comments!\\n\\n\\\"the paper lacks a clear notion of \\\"memorization\\\"... The paper seem to be defining it as how well is the network clustering points from the same class\\\"\\nWe have clarified in the text what we (Introduction, second-to-last paragraph). In particular, our definition of memorization is the network implicitly learning a specific mapping from the sample with index i to the class with index j - a \\\"rule\\\" which does not benefit the network in terms of improving its generalization.\\nWe then suggest that good clustering within approximately linear regions of deep feature space correlates with absence of memorization.\\n\\n\\\"...the paper argues that a good representation cluster all the samples in a class together...\\\"\\n\\\"Are (well generalized) networks really clustering samples of the same class to a centroid?\\\"\\nNMF basis vectors are not centroids in the k-means sense. In particular, a datapoint can be associated with multiple NMF basis vectors and at various scales. Furthermore, we do not claim the existence of a single class-specific cluster, but rather a distribution into a small number of clusters (approximated by k).\\n\\n\\\"Because the networks are using linear classifier in the last layer to classify the samples, it seems only linearly separability would be suffice for the work, which does not necessarily imply clustering.\\\"\\nLinear separability of last-layer activations is indeed all that is required for correct classification. While all the networks we studied achieve perfect linear separability on their *training* data, this is clearly not the case for their validation and test data. Training set linear-separability is therefore not a sufficient condition for test set linear-separability. Our aim is to find additional properties of neural network feature space that are indicative of last-layer linear-separability of test data.\\nThere are many proposals for what makes a \\\"good\\\" feature space in that regard [1,2,3]. In this paper we propose, and give supporting empirical evidence, that feature spaces characterized by a low non-negative rank of single-class activation matrices memorize less and generalize more.\\n\\n\\\"Given two networks (of the same architecture), assume somehow network-1 decides to use the first 9 layers to compute a well clustered representation, while network-2 decides to use the first 5 layers to do the same thing. Do we say network-1 is (more) memorizing in this case?\\\"\\nIn such a situation we would prefer the network that more quickly reduces the non-negative rank, because it is a simpler model of the data. This view is based on the general principle of Occam's razor, and is made concrete with our method.\\n\\n\\\"the notion seems to be more about the underlying task... if a task is more complicated, meaning the input samples in the class have higher variance and requiring more efforts to cluster, then it seems the network will be doing more memorization... when comparing learning on imagenet to learning on MNIST / CIFAR\\\"\\nIt is absolutely true that some datasets are more complicated than others. 
We therefore do not propose a global measure for memorization, but rather a comparative measure to evaluate competing networks of the same architecture on the same dataset.\\n\\n\\\"Why for all cases, the lower layers all have higher AUC than the higher layers (except the last one)?\\\"\\nIn Figure 2.c, note that for the case p=0, most lower layers actually have lower AuC than higher layers. We interpret this as meaning that without artificially inducing memorization, the network better clusters activations from layer to layer.\\n\\n\\\"The argument given in the paper is that the lower layers are the feature extraction phase while the upper layers are memorization phase.\\\"\\nOur hypothesis is based on the statistical similarity of early layers in our compression studies. We altered the text to emphasize it is indeed currently a hypothesis.\"}",
"{\"title\": \"Response to Reviewer 3 (2/2)\", \"comment\": \"\\\"Apart from the last layer, this form of simplicity of the support for the intermediate layers of a good classifier does not seem to be *necessarily*. ...as long as the activation matrix of each class is linearly separable ... there is no need for it to become simpler\\\"\\nLinear separability of last-layer activations is indeed all that is required for correct classification. While all the networks we studied achieve perfect linear separability on their *training* data, this is clearly not the case for their validation and test data. Training set linear-separability is therefore not a sufficient condition for test set linear-separability. Our aim is to find additional properties of neural network feature space that are indicative of last-layer linear-separability of test data.\\nThere are many proposals for what makes a \\\"good\\\" feature space in that regard [1,2,3]. In this paper we propose, and give supporting empirical evidence, that feature spaces characterized by a low non-negative rank of single-class activation matrices memorize less and generalize more.\\n\\n\\\"different linear regions (polytopes in the input space) should be considered for the linearization of the activation matrix... how can this empirical measurement be translated into a more formal linearity of the global function?\\\"\\nUnder this geometric lens, our observation is that for a particular point cloud, i.e., one associated with a single-class, there is a small number of deep local polytopes where most of the data \\\"fits\\\" without being non-linearly projected into the polytope by ReLU. However, associating these deep polytopes with polytopes in input space is a non-trivial problem which is concerned with \\\"network interpretability\\\", an active area of research.\\n\\n\\\"How can one obtain a measure that is independent of the number of samples in the batch?\\\"\\nThe approach presented in the paper is based on the properties of activation matrices, which inherently depends on a batch and therefore its size. However as mentioned, we have found our measurements to be robust across wide range of batch sizes . \\n\\n\\\"there are questions about ... the usefulness of the observation for training better models and/or giving additional insights to what we know\\\"\\nWe show that memorization and generalization are correlated with non-negative rank of activation matrices. A next step would be to regularize this or a dependent quantity. While, the non-negative rank and rectangle cover number are NP-hard to compute, we believe practical regularizers could be derived from this connection. This is a direction of future work that we intend to pursue, and we see value in sharing these results with the wider community to spark further interest in this direction.\\n\\n[1] Bengio et al. \\\"Representation learning: A review and new perspectives.\\\" 2013\\n[2] Tishby et al. \\\"Deep learning and the information bottleneck principle.\\\" 2015\\n[3] Amjad, et al. \\\"How (Not) To Train Your Neural Network Using the Information Bottleneck Principle.\\\" 2018\\n[4] Zhang et al. \\\"Understanding deep learning requires rethinking generalization.\\\" 2016\\n[5] Collins et al. \\\"Deep feature factorization for concept discovery.\\\" 2018.\"}",
"{\"title\": \"Response to Reviewer 3 (1/2)\", \"comment\": \"We thank the reviewer for the constructive feedback!\\n\\n\\\"The experimental setup is unclear\\\"\\nWe have cleared up the ambiguities regarding our experimental setup (section 4.1, paragraph 3), noting that:\\n- We use training-set batches\\n- Single-class batches are sampled w.r.t. to the training label, i.e., the random/noisy labels for p>0\\n- For every value of p we produced a fixed set of random labels.\\n- We do not use a fixed batch. We randomly sample batches (up to the class label) for each net.\\n- Batches are not exhaustive over the dataset.\\n- In our experiments with various batch sizes (20-100) we did not notice significant difference. We used a batch size of 50 through out the paper.\\n\\n\\\"How come all networks with different label noise levels end up with the same (100%?) accuracy?\\\"\\nThe y-axis in Figure 2 is *training* set accuracy. All network manage to (over)fit their training data regardless of the level of label randomization [4], and therefore show perfect accuracy under weak/no compression.\\n\\n\\\"Why should the performance drop more when linearizing the middle layers (3_2:4_2) than the earlier layers.\\\"\\nThis is because we perform factorization across the channel dimension of the CNN. In early layers most of the information is spread across the spatial dimensions, but as the effective receptive field grows and the spatial resolution of the feature maps decreases with depth, the channel dimension holds more and more information.\\n\\n\\\"When k=1 for NMF and PCA, the difference of the activations for different samples becomes a matter of scale. ... shouldn\\u2019t all classifications become the same for all samples?\\\"\\nWhile with k=1 all activations do point in the same direction in feature space, classification is ultimately performed over a whole feature map, where the spatial arrangement and different scales of activations leads to different predictions.\\n\\n\\\"It would be interesting to study the property of the basis obtained in this border case. The same questions can be studied as one gradually increases k.\\\"\\nQualitative examination of the NMF basis with small values of k is has been undertaken in [5] where in deep layers basis directions are shown to correspond with semantic concepts.\\n\\n\\\"it does not seem to provide a better criterion for early stopping\\\"\\nValidation error is the gold standard for stopping criteria, and so when a validation set is available, we do not propose a better alternative. By showing good correlation with validation error, our method is useful where a validation set is not available. To demonstrate this point we added section 6.4 to the appendix, where we perform few-shot learning on MNIST with only 20 images for training. Early stopping with NMF consistently improves the final accuracy of the network.\\n\\n\\\"An experiment describing how well are the approximations and how that correlate with memorization is missing\\\"\\nWe show and discuss NMF reconstruction error plots in the appendix (section 6.5). The main difficulty in interpreting the NMF error is scale. The error depends on the magnitude of the activations, which varies across networks, layers and even channels. Network accuracy, on the other hand, is more interpretable and is ultimately the quantity of interest w.r.t. the effect on the neural network.\"}",
"{\"title\": \"General comment\", \"comment\": \"We would like to thank all reviewers for their thorough and insightful feedback. We are glad that the reviewers found our approach \\\"novel\\\" and \\\"very interesting\\\", and the paper \\\"clear and focused\\\".\\n\\nWe have made revisions and additions to the paper guided by the reviews. In particular, we have added more experiments for early stopping (also in the few-shot learning setting), added plots for the NMF approximation error w.r.t. memorization, and discuss the computational cost involved with our method.\\n\\nBelow we discuss each of the reviewers' comments in detail.\"}",
"{\"title\": \"Very interesting but not yet a complete work\", \"review\": \"The contribution of the paper is in proposing a quantitative measure of memorization based on the assumption that the activations at the deeper layers of a *generalizing* deep network should be invariant to intra-class variations. The measure corresponds to how well can the activation matrix of a batch be approximated by a low-rank decomposition. The paper proposes to use approximate non-negative matrix factorization and compares it to PCA. As for \\u201cwellness\\u201d it uses the final accuracy of the network after the activation is approximated in some layer(s).\\n\\nThe composition of the paper and its writing makes it an easy read. The work is novel in the way it proposes to measure memorization to the best of the reviewer\\u2019s knowledge. However, the novel insights and/or the practical usefulness of the proposed method seem very limited. Also, there are many questions that comes to my mind that I would appreciate the authors to address:\", \"specific_questions\": \"\", \"the_experimental_setup_is_unclear\": \"Is the linearization-batch taken from the training set or the test set?\\nIf it is taken from the training set, for the case that p>0 (noisy labels), is the batch of a single class obtained from noisy labels or non-noisy labels? \\nFor the experiments, is there only one fixed batch used? How is this batch selected? How sensitive the evaluation is to the selection of the batch members and its size?\\nDo the batches cover the whole set?\\n\\n- Figure 2.a and 2.b: How come all networks with different label noise levels end up with the same (100%?) accuracy? Are the fixed samples different for each p? (class labels change for each p).\\n\\n- Figure 2.c: Why should the performance drop more when linearizing the middle layers (3_2:4_2) than the earlier layers. This seems to be in violation of the assumption about class invariance in deeper layers.\\n\\n- When k=1 for NMF and PCA, the difference of the activations for different samples becomes a matter of scale. In this case, shouldn\\u2019t all classifications become the same for all samples? How does this affect the accuracy? Does it make the evaluation very sensitive to the sampling of the batch? It would be interesting to study the property of the basis obtained in this border case. The same questions can be studied as one gradually increases k.\\nSection 4.2: It starts with the sentence \\u201cIn this section we show our technique is useful for predicting good generalization in a more realistic setting\\u201c. Indeed, the high correlation of the test performance and the proposed memorization measure in this section is very interesting. However, as for usefulness, it does not seem to provide a better criterion for early stopping or other practicalities of ReLU networks. \\n\\n- An experiment describing how well are the approximations (i.e. activation matrix reconstruction error) and how that correlate with memorization is missing.\", \"some_general_questions_that_come_to_my_mind\": [\"the paper assumes (e.g. in page 4) that \\u201cWhen single-class batches are not approximately linear, even in deep layers, we take this as evidence of memorization\\u201d. I have a concern here. Apart from the last layer, this form of simplicity of the support for the intermediate layers of a good classifier does not seem to be *necessarily*. 
That is, it seems to me that as long as the activation matrix of each class is linearly separable from the activation matrices of the others, there is no need for it to become simpler (by reducing the intra-class variations at the deep layers) for the classification loss to be minimized. Does this mean the paper\\u2019s assumption for memorization is not necessarily valid?\", \"The paper relates the memorization to the extent of local linearity of a deep ReLU network. ReLU networks represent piece-wise linear functions. Thus, in order for this relation to be drawn, probably different linear regions (polytopes in the input space) should be considered for the linearization of the activation matrix. In that regard, how can this empirical measurement be translated into a more formal linearity of the global function?\", \"The rc number as well as the rank k of a good approximation directly depend on the number of samples in the batch. How can one obtain a measure that is independent of the number of samples in the batch?\"], \"summary_judgment\": \"The paper puts forward an interesting observation using a novel approach. However there are questions about the experiments, discussions around the experiments and the usefulness of the observation for training better models and/or giving additional insights to what we know. Considering that, I think the paper would make a very good workshop paper but needs more work to address the bar of an ICLR conference paper. But I am open to discussion with the authors and other reviewers.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"bad clustering == memorization?\", \"review\": \"This paper propose a new way of analyzing the robustness of neural network layers by measuring the level of \\\"non-linearity\\\" in the activation patterns on samples belonging to the same class, and correlate that to the level of \\\"memorization\\\" and generalization.\\n\\nMore specifically, the paper argues that a good representation cluster all the samples in a class together, therefore, in higher layers, the activation pattern of samples from the same class will be almost identical. In this case, the activation matrix will have a small non-negative rank. An approximation algorithm (via non-negative matrix factorization) is then used to compute the robustness and evaluate the robustness (by replacing the activation matrix with its low rank non-negative activation) is measured in a number of experiments with different amount of random label corruptions. The experiments show that networks trained on random labels are less robust than networks trained on true labels.\\n\\nWhile the concept is interesting, I find the arguments in the paper a bit vague, and the usefulness of the algorithm might be hampered by its computation complexity, which is not discussed in the paper.\\n\\nFirst of all, the paper lacks a clear notion of \\\"memorization\\\". While it is generally accepted that learns on random labels can be called \\\"memorization\\\", the paper seem to be defining it as how well is the network clustering points from the same class. Several questions need to be addressed in order for this notion to be justified:\\n\\n1. Are (well generalized) networks really clustering samples of the same class to a centroid? It would be great if some empirical verifications are shown. Because the networks are using linear classifier in the last layer to classify the samples, it seems only linearly separability would be suffice for the work, which does not necessarily imply clustering.\\n\\n2. Given two networks (of the same architecture), assume somehow network-1 decides to use the first 9 layers to compute a well clustered representation, while network-2 decides to use the first 5 layers to do the same thing. Do we say network-1 is (more) memorizing in this case?\\n\\n3. The notion seems to be more about the underlying task than about the networks. Given the measurement, if a task is more complicated, meaning the input samples in the class have higher variance and requiring more efforts to cluster, then it seems the network will be doing more memorization. In other words, while networks will be doing more memorization when comparing a random label task to a true label task, it might also be \\\"doing more memorization\\\" when comparing learning on imagenet to learning on MNIST / CIFAR. One the one hand, this does not seem to fit our \\\"intuition\\\" about memorization; on the other hand, the heavy dependency on the underlying data distribution makes it difficult to compare results learned on different data -- especially since the measurements are based on per-class samples, \\\"random labels\\\" and \\\"true labels\\\" have very different class-conditional distributions.\\n\\nI also have some questions about Figure 2(c). I will continue numbering the question for easier discussion.\\n\\n4. Why for all cases, the lower layers all have higher AUC than the higher layers (except the last one)? The argument given in the paper is that the lower layers are the feature extraction phase while the upper layers are memorization phase. 
I think if clearly verified, this is a very interesting observation. But the paper currently do not have experiments to verify the hypothesis. Also more studies on this with different networks would be good. For example, with deeper networks, does the feature extraction phase include more layers?\\n\\n5. The p=1 and p<1 curves seem to be very different. If one is to sample more densely between p=0.8 and p=1, would there still be a clear phase transition?\", \"some_other_questions\": \"6. Can you add discussions to the computation requirements for the proposed analysis? This is especially important for the cases where the analysis is used during training as tools to help deciding early stopping.\\n\\n7. For the early stopping experiment, the main text says \\\"These include the test error (in blue)\\\" while in the figure the label axis is \\\"Test loss\\\". I'm assuming it is the cross entropy loss given the values are greater than 1. In this case, can you show in parallel the same plots in error rate, as the test error is more important than the test loss and the test loss could sometimes be artificially huge due to high confident mistakes on ambiguous test examples.\", \"some_minor_issues\": [\"Please proof read the paper for typos. E.g. on the 3rd paragraph of the 1st page: \\\"that networks that networks that\\\".\", \"The convention with subplots seem to be putting sub-captions under the figures, not above.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for \\\"Detecting Memorization in ReLU Networks\\\"\", \"review\": \"This paper aims to distinguish between networks which memorize and those with generalize by introducing a new detection method based on NMF. They evaluate this method across a number of datasets and provide comparisons to both PCA and random ablations (as in Morcos et al., 2018), finding that NMF outperforms both. Finally, they show that NMF is well-correlated with generalization error and can be used for early stopping.\\n\\nThis is an overall excellent paper. The writing is clear and and focused, and the experiments are careful and rigorous. The discussion of prior work is thorough. The question of how to detect memorization in DNNs is one of great interest, and this makes nice steps towards this goal. As such, it will likely have significant impact.\", \"major_comments\": \"1) The early stopping section could benefit from more experiments. In particular, it would be helpful to see a scatter plot of the time of peak test loss as a function of NMF/Ablation AuC local maxima and to measure the correlation between these rather than simply showing 3 examples.\", \"minor_comments\": \"1) While the comparisons to random ablations are mostly fair, it is worth noting that the variance on random ablations appears to be lower than that of NMF and PCA. \\n\\n2) The error bars on the plots are often hard to see. Increasing the transparency somewhat would be helpful.\", \"typos\": \"1) Section 1, third paragraph: \\u201cWe show that networks that networks that generalize\\u2026\\u201d should be \\u201cWe show that networks that generalize...\\u201d\\n\\n2) Section 3.1, third paragraph: \\u201cBecause threshold is the\\u2026\\u201d should be \\u201cBecause thresholding is the\\u2026\\u201d\\n\\n3) Section 3.2, third paragraph: \\u201cIn the most non-linear case we would\\u2026\\u201d should be \\u201cIn the most non-linear case, we would\\u2026\\u201d\\n\\n4) Figure 2 caption: \\u201c...with increasing level of\\u2026\\u201d should be \\u201c...with increasing levels of\\u2026\\u201d\\n\\n5) Section 4.1.1, second to last line of last paragraph: missing space before final sentence\\n\\n6) Figure 4a label: \\u201cFahsion-MNIST\\u201d should be \\u201cFashion-MNIST\\u201d\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
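The NMF probe debated in the record above reduces to a short procedure: take the activation matrix of a single-class batch, replace it with its rank-k non-negative approximation, re-measure training accuracy, and sweep k to obtain the accuracy curve whose AuC the authors report per label-randomization probability p. The Python sketch below is a minimal illustration under stated assumptions, not the authors' released code: `nmf_compressed_accuracy` and `classifier_head` (a stand-in for the remainder of the network from this layer onward) are hypothetical names, while the (N*H*W)xC channel-wise reshaping follows the authors' clarification for convolutional layers.

import numpy as np
from sklearn.decomposition import NMF

def nmf_compressed_accuracy(acts, labels, classifier_head, k):
    # acts: (N, C, H, W) post-ReLU activations of one single-class batch;
    # they are non-negative, as NMF requires.
    n, c, h, w = acts.shape
    # One data point per spatial location: reshape NxCxHxW to (N*H*W)xC.
    a = acts.transpose(0, 2, 3, 1).reshape(-1, c)
    nmf = NMF(n_components=k, init="nndsvda", max_iter=300)
    coeffs = nmf.fit_transform(a)            # (N*H*W, k) coefficients
    a_hat = coeffs @ nmf.components_         # rank-k non-negative approximation
    acts_hat = a_hat.reshape(n, h, w, c).transpose(0, 3, 1, 2)
    preds = classifier_head(acts_hat)        # re-run the network from this layer
    return float((preds == labels).mean())

A small k that preserves accuracy indicates an approximately linear (low non-negative rank) single-class batch; per the discussion above, batches from label-randomized networks need a larger k to recover accuracy, which is what depresses their AuC.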
|
r1erRoCqtX | LSH Microbatches for Stochastic Gradients: Value in Rearrangement | [
"Eliav Buchnik",
"Edith Cohen",
"Avinatan Hassidim",
"Yossi Matias"
] | Metric embeddings are immensely useful representations of associations between entities (images, users, search queries, words, and more). Embeddings are learned by optimizing a loss objective of the general form of a sum over example associations. Typically, the optimization uses stochastic gradient updates over minibatches of examples that are arranged independently at random. In this work, we propose the use of {\em structured arrangements} through randomized {\em microbatches} of examples that are more likely to include similar ones. We make a principled argument for the properties of our arrangements that accelerate the training and present efficient algorithms to generate microbatches that respect the marginal distribution of training examples. Finally, we observe experimentally that our structured arrangements accelerate training by 3-20\%. Structured arrangements emerge as a powerful and novel performance knob for SGD that is independent and complementary to other SGD hyperparameters and thus is a candidate for wide deployment. | [
"Stochastic Gradient Descent",
"Metric Embeddings",
"Locality Sensitive Hashing",
"Microbatches",
"Sample coordination"
] | https://openreview.net/pdf?id=r1erRoCqtX | https://openreview.net/forum?id=r1erRoCqtX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeLkrczeE",
"S1eXFlzUAX",
"SkgO0n7TTX",
"rkxDxeQpTQ",
"BJgqBX_jaX",
"SJlFHGEiaQ",
"rkeplrXopm",
"HygR2Mnca7",
"r1liSr-9Tm",
"BylvZPuqn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544885469638,
1543016571054,
1542433999621,
1542430703057,
1542320961846,
1542304320930,
1542300917046,
1542271669987,
1542227266965,
1541207806756
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper891/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper891/Authors"
],
[
"ICLR.cc/2019/Conference/Paper891/Authors"
],
[
"ICLR.cc/2019/Conference/Paper891/Authors"
],
[
"ICLR.cc/2019/Conference/Paper891/Authors"
],
[
"ICLR.cc/2019/Conference/Paper891/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper891/Authors"
],
[
"ICLR.cc/2019/Conference/Paper891/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper891/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper891/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Following the unanimous vote of the four submitted reviews, this paper is not ready for publication at ICLR. Among other concerns raised, the experiments need significant work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not ready for publication ICLR\"}",
"{\"title\": \"See revised version\", \"comment\": \"We are thankful for all constructive comments. We uploaded a revised version with overall improvements to the presentation and addressing the reviewers' points and suggestions. We also (i) reorganized the paper where the section that provides principled discussion of the benefit of our methods is now provided after the experiments (ii) included additional results in the appendix (iii) included further explanations to address misunderstandings.\\nWe hope to continue the discussion.\\n\\n Some reviewers were concerned with comparison to other approaches. Our paper proposes and demonstrated the potential of a novel approach to SGD optimization. We are referencing some approaches that are orthogonal, but there is no other baseline for experimental comparison that we are aware of other than independent arrangements (see individual responses).\\n\\n Some reviewers were concerned with the overhead of our method. This is a relevant point that we did not convey well enough in the submission, our revision includes a more thorough analysis. Overall, the overhead is small, and certainly justifies the improvements.\\n\\n Reviewer4 expressed concerns on novelty and correctness of Algorithm 4. The algorithm is correct, we have references to the works we are relying on, we included some intuitive discussion in our response that might help the reviewer understanding, and will be happy to provide more. As for novelty, we ask the reviewer to point out references supporting their claim. We are not aware of any.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We submitted a revised manuscript which we hope clarifies issues. It seems that the reviewer missed on some critical aspects of our contribution. Please see below.\", \"r\": \"\\\"Are the hyper-parameters chosen in a principled way for these experiments?\\\"\", \"a\": \"Hyperparameters such as learning rate were tuned to work well on the baseline independent arrangements and then we used the same values with all methods. Our \\\"mix\\\" method had additional hyperparameters which is the switch points (start with coo, then LSH map, then IND). Those were determined on one split (or generated model) and applied to others.\", \"detailed_comments\": \"\"}",
"{\"title\": \"Response to reviewer concerns\", \"comment\": \"R: \\\"This paper proposes using structured mini-batches to speed up learning of embeddings. However, it lacks sufficient context with previous work and bench-marking to prove the speed-up.\\\"\", \"a\": \"mostly addressed in the revision. Thank you!\", \"r\": \"Formatting issues\"}",
"{\"title\": \"Response addressing the reviewer concerns (see revision also)\", \"comment\": \"Thank you so much for the constructive comments. We revised accordingly and hope the concerns are addressed. This is a novel and potentially powerful optimization to SGD that is orthogonal to other optimizations. Here is our response.\", \"r\": \"\\\"Finally, the experimental results presented do not seem to entirely support the authors\\u2019 conclusions. Figures 2, 3, and 4, as well as several of the figures in the appendix, show some parameter settings for which the gains over the baseline are quite limited. This makes me suspect that perhaps the coordinated minibatches aren\\u2019t the only variable affecting performance\\\"\\n\\n First, Figure 2 shows a different experiment intended to demonstrates the information content in fraction of epochs. So we discuss Figures 3 and 4 that show performance of our main methods and the baseline \\\"ind\\\" for two different metrics. Please note that we did not select what to show. We generated stochastic block matrices with a range of block sizes / number of blocks. Any hyperparameters were either fixed (minibatch sizes, embedding dimension) or tuned to achieve best performance on the *baseline* IND method (learning rate). Then we simply run our methods and show and discuss the results. The only additional hyperparameters were with our methods that used \\\"mixed\\\" arrangements. But the pure (Jaccard*) method was the best performer. We made this much clearer in the revision and significantly improved the discussion of the experimental results.\", \"a\": \"Yes, the x-axis shows the total number (with multiplicities) of processed positive examples. This is proportional to epochs. The gain is with respect to that. Thank you for pointing out this was not clear, we added this clarification.\", \"minor_concerns\": \"All (sections 1-4) were addressed in the revision (will do the figures in Section 6).\", \"as_for_questions\": \"Figure 1 in Section 5 simply demonstrated what we refer to as a \\\"micro level\\\" benefit of coordination, which might be obvious: That even with random assignment, gradient updates involving two entities towards the same (random) entity brings them closer. (In contrast, if the two updates are half an epoch apart then the context embedding is meanwhile modified, so this weakens this effects). The revision made this clearer.\\n\\nSection 6.2: The computation of kappa for the recommendations datasets is explained now.\"}",
"{\"title\": \"Requires further clarification and empirical justification\", \"review\": \"The paper presents a method for improving the convergence rate of Stochastic Gradient Descent for learning embeddings by grouping similar training samples together. The basic idea is that gradients computed on a batch of highly associated samples encode related information in a single update that independent samples might take multiple updates to capture. These structured minibatches are constructed by independently combining subsets of positive examples called \\u201cmicrobatches\\u201d. Two methods are presented for constructing these microbaches; first by grouping positive examples by shared context (called \\u201cbasic\\u201d microbatches), second by applying Locality Sensitive Hashing to further partition the microbatches into groups that are more likely to contain similar examples.\", \"three_datasets_are_used_for_experimental_analysis\": \"a synthetic dataset generated using the stochastic block model, and two large scale recommendation datasets. The presented algorithms are compared to a baseline of independently sampled minibatches using the cosine gap and precision for the top k predictions. The authors show the measured cosine gaps over the course of training as well as the gains in training performance for several sets of hyperparameters.\\n\\nThe motivation and basic intuition behind the work is clearly presented in the introductory section. The theoretical justification for the structured minibatches is reasonably convincing and invites empirical verification.\", \"general_concerns\": \"Any method for improving the performance of an optimization process via additional preprocessing must show that the additional overhead incurred from preprocessing the data (in this case, organizing the minibatches) does not negate the achieved improvement in convergence time. This work presents no evidence that this is the case. I expected to see 1) time complexity analysis of each new algorithm proposed for preprocessing and 2) experimental results showing that the overall computation time, including the proposed preprocessing steps, was reduced by this method. Neither of these things are present in this work.\\n\\nFurthermore, the measured \\u201ctraining gains\\u201d are, to my knowledge, not clearly defined. I assume that the authors are using the number of epochs or iterations before convergence as their measure of training performance, but this should be stated explicitly rather than implicitly.\\n\\nFinally, the experimental results presented do not seem to entirely support the authors\\u2019 conclusions. Figures 2, 3, and 4, as well as several of the figures in the appendix, show some parameter settings for which the gains over the baseline are quite limited. This makes me suspect that perhaps the coordinated minibatches aren\\u2019t the only variable affecting performance.\\n\\nI have organized my remaining minor concerns and requests for clarification by section, detailed below.\\n\\nSection 1\\n- In the last paragraph, the acronym SGNS is mentioned before being defined. You should either state the full name of the method (with citation) or omit the mention altogether.\\n\\nSection 2\\n- I would like a few sentences of additional clarification on what \\u201cfocus\\u201d entities vs. \\u201ccontext\\u201d entities are in the more general case. I am familiar with what they mean in the context of Skip Gram, but I think more discussion on how this generalizes is necessary here. 
Same goes for what kappa (\\u201cassociation strength\\u201d) means, especially considering that this concept isn\\u2019t really present (to my understanding) in Skip Gram.\\n- Grammar correction:\\n\\u201cThe negative examples provide an antigravity effect that prevents all embeddings to collapse into the same vector\\u201d\\n\\u201cto collapse\\u201d -> \\u201cfrom collapsing\\u201d\\n\\nSection 3\\n- Maybe this is just me, but I find the mu-beta notation for the microbatch distributions rather odd. Why not just use a single symbol?\\n- I would like a bit more clarification on the proof for lemma 3.1, specifically on the last sentence, \\u201cthe product of these events \\u2026\\u201d; that statement did not follow obviously to me.\\n\\nSection 3.1\\n- Remove the period and colon near kappas at the end of paragraph 3. It\\u2019s visually confusing with the dot index notation right next to them.\\n\\nSection 4\\n- Typo: \\u201cWe selects a row vector \\u2026\\u201d -> \\u201cWe select a row vector \\u2026\\u201d\\n\\nSection 5\\n- I don\\u2019t understand what Figure 1 is trying to demonstrate. It doesn\\u2019t do anything (as far as I can tell) to defend the authors\\u2019 claim that COO provides a higher expected increase in cosine similarity than IND.\\n\\nSection 6\\n- All figures in this section should have axis labels. The captions don\\u2019t sufficiently explain what they are.\\n\\nSection 6.2\\n- How is kappa computed for the recommendations datasets? This isn\\u2019t obvious at all.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to reviewer concerns\", \"comment\": \"We disagree with the reviewer determination of \\\"limited novelty\\\". This is the first time that the arrangement of examples is considered for SGD in a principled way. It is orthogonal to and very different than other optimizations. We also believe that our experimental results, because of the simplicity and properties of the synthetic data, are particularly meaningful and demonstrate well the strength and potential of the approach.\\n\\n1.\", \"r\": \"\\\"Structure of the paper could be improved\\\"\\n We uploaded a modified version and will be happy to address any further concerns.\", \"a\": \"Could you please elaborate what you are looking for? (We do establish correctness and properties of our arrangement methods. We now also include a more elaborate analysis of the computation involved).\\n\\n 5.\", \"e\": \"We disagree. You can follow the references and proofs if you wish. To obtain intuition why, note that for focus updates (symmetrically for context) the iid exponentials are drawn per context entity to essentially select a sample of contexts for each focus. The coordination is achieved by using the same assignment of randomization to contexts across focus entities. At the extreme, it is easy to verify that when two focus entities have exactly the same set of weighted associations (Jaccard similarity 1), they would have the same sampled context (the one with minimum ratio of iid exponential and weight). If you would like we can provide further details.\\n\\n 3.\"}",
"{\"title\": \"limited novelty and experimental results\", \"review\": \"This paper discussed a non-uniform sampling strategy to construct minibatches in SGD for the task of learning embeddings for object associations. An example throughout the paper is learning embeddings for a set F of focus entities and set C of context entities. In general, for focus update, the algorithm draws for each minibatch certain amount of positive samples (i,j), i \\\\in F and j \\\\in C. Then for each positive pair, we select certain amount of negative samples (i,j\\u2019) for j\\u2019 \\\\in some uniformly randomly selected subset C\\u2019. The same algorithm is implemented for context update, and the training alternates between the two. The authors choose similar positive object in one minibatch since it\\u2019s more efficient. Therefore, LSH hashing is used to point similar items to similar keys. Two similarity measures are used here, Jaccard similarity and cosine similarity. Some experiments are demonstrated on synthetic data and two real datasets to show the effectiveness of their method.\", \"concerns\": \"1.\\tEvery piece of the method has been well studied, and the combination of them proposed in this paper does not seem very novel.\\n2.\\tAlgorithm 4, which is the hashing for Jaccard similarity, seems wrong. Only using iid exponentials cannot make collision probability equal Jaccard similarity.\\n3.\\tLittle experiments on real datasets. No comparison with other non-uniform minibatch construction methods (there should be some).\\n4.\\tNo quantitative analysis. \\n5.\\tStructure of the paper could be improved. For example, it\\u2019s better to put section 4 and 6 together.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lacks comprehensive results on real-world data sets, writing does not seem revised\", \"review\": \"This paper proposes using structured mini-batches to speed up learning of embeddings. However, it lacks sufficient context with previous work and bench-marking to prove the speed-up. Furthermore, it is difficult to read due to its lack of revision. Sentences are wordy and do not always have sufficient detail.\", \"argument_issues_and_questions\": [\"Since the main claim is a speed-up in training, the authors should support with robust experimentation. Only a synthetic and small test are conducted.\", \"Not being an expert in this subject, it was difficult to follow some of the ideas of the paper. They were presented without clear explanation of why they supported the conclusion. For example, I do not understand Figure 4. It seems COO and IND change places. It is not always clear how the figures support the argument.\", \"What impact does the size of the micro-batch have on the speed-up?\", \"How does this approach compare to other embedding approaches in terms of speed? There are no benchmarks other than IND.\"], \"formatting_issues\": [\"On page one, the sentence \\\"We make a novel case here for the antithesis of coordinated arrangements,\", \"where corresponding associations are much more likely to be included in the same minibatch\\\" seems contradictory. It reads that you are arguing for \\\"the antithesis of coordinated arrangements, namely independent arrangements\\\" when you mean \\\"the antithesis of independent arrangements, namely coordinated arrangements.\\\"\", \"The figures in this paper are all very small with minuscule text and legends. Only after zooming in 200% were they legible. Figure 3, 4, 5, 6, and 7 have no axis labels. It is sometimes clear from the caption what the axes are, but it is hard to follow.\", \"Often references are cited in the text without being set off with parentheses or grammatical support. For example at the top of page three: \\\"One-sided updates were introduced with alternating minimization Csiszar & Tusn\\u00e1dy (1984) and for our purposes they facilitate coordinated arrangements and allow more precise matching of corresponding sets of negative examples to positive ones.\\\" This interrupts the sentence making it hard to read.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Experimentally weak with the results not justifying the increased computation. No comparison to other methods doing non-uniform sampling and mini-batch selection.\", \"review\": [\"###### Post-Revision ########################\", \"Thank you for revising the paper and addressing the reviewers' concerns. The updated version reads much better and I have updated my score.\", \"Unfortunately, I still think that the experimental analysis is not enough to warrant acceptance. I would encourage the authors to have a more detailed set of experiments to showcase the effectiveness of their method and have ablation studies to disentangle the effects of the different moving parts.\", \"###### Post-Revision ########################\", \"This paper considers arranging the examples into mini-batches so as to accelerate the training of metric embeddings. The\", \"The paper doesn't have sufficient experimental evidence to convince me that the proposed method is useful. There is no comparison against baselines. The paper is not clearly written or well organized. Detailed comments below:\", \"For example, when introducing focus and context entities, it would be helpful to give examples of this to make it clearer.\", \"In section 3, please clarify that after drawing both positive and negative examples, what is the size of the minibatch for which the gradient is calculated?\", \"How do you choose the size of the microbatches? If the microbatch size is too small, then the effect of associating examples is small.\", \"In the line, \\\"Instead, we use LSH modules that are available at the start of training and are only a coarse proxy of the target similarity\\\" Why are you not iteratively refining the LSH modules as the training progresses? Won't this lead to an improvement in the performance?\", \"In the line \\\"The coarse embedding can come from a weaker (and cheaper to train) model or from a partially-trained model. In our experiments we use a lower dimension SGNS model.\\\" Could you please clarify what is the additioanal computational complexity of the method? This involves additional computational cost? It doesn't seem to me that the results justify this increased computation. Please justify this.\", \"In Lemma 3.2, the term s_i is undefined\", \"\\\"In early training, basic coordinated microbatches with few or no LSH refinements may be most effective. As training progresses we may need to apply more LSH refinements. Eventually, we may hit a regime where IND arrangements dominate.\\\" This explanation is vague and has no theoretical or empirical evidence supporting it. Please clarify this.\", \"Please fix the size of the axes and the legend in all the figures.\", \"For figure 1, how is the step-size chosen? What is the dimensionality of the examples?\", \"From figure 3, it is not clear that the proposed methods lead to significant gains over the independently sampling the examples? Are there any savings in the wall clock time for the proposed methods? Why is there no comparison against other methods that have proposed non-uniform sampling of examples for SGD (like Zhang, 2017)? Are the hyper-parameters chosen in a principled way for these experiments?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1lrAiA5Ym | Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity | [
"Thomas Miconi",
"Aditya Rawal",
"Jeff Clune",
"Kenneth O. Stanley"
] | The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning. Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent. Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity. We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks. In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task (controlling for the number of parameters). We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks. | [
"meta-learning",
"reinforcement learning",
"plasticity",
"neuromodulation",
"Hebbian learning",
"recurrent neural networks"
] | https://openreview.net/pdf?id=r1lrAiA5Ym | https://openreview.net/forum?id=r1lrAiA5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxBLVaMeE",
"rkg5CK6S1E",
"ryxDQhsSJV",
"Bkgp3oiBk4",
"S1g8Nj6qRX",
"HJeXzvo5CX",
"HJlVxZucAm",
"rJx0WYw9R7",
"S1gcKOwqR7",
"HyxE2PPcRm",
"rkg2K06S2m",
"ryeSkAjN37",
"ryxWDI_Gsm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544897613304,
1544047058119,
1544039454660,
1544039348605,
1543326510075,
1543317258553,
1543303403736,
1543301381640,
1543301250198,
1543301035715,
1540902531570,
1540828637159,
1539634776627
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper890/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/Authors"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper890/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors consider the problem of active plasticity in the mammalian brain, seen as being a means to enable lifelong learning. Building on the recent paper on differentiable plasticity, the authors propose a learnt, neuro-modulated differentiable plasticity that can be trained with gradient descent but is more flexible than fixed plasticity. The paper is clearly motivated and written, and the tasks are constructed to validate the method by demonstrating clear cases where non-modulated plasticity fails completely but where the proposed approach succeeds. On a large, general language modeling task (PTB) there is a small but consistent improvement over LSTMS. The reviewers were very split on this submission, with two reviewers focusing on the lack of large improvements on large benchmarks, and the other reviewer focusing on the novelty and success of the method on simple tasks. The AC tends to side with the positive review because of the following observations: the method is novel and potentially will have long term impact on the field, the language modeling task seems like a poor fit to demonstrate the advantages of the dynamic plasticity, so focusing on that benchmark overly much is misleading, and the paper is high-quality and interesting to the community.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}",
"{\"title\": \"RE\", \"comment\": \"I appreciate the careful evaluation on PTB but unfortunately this doesn't address my main concerns, which are that this is still a toy dataset imo, and that there is no visible advantage of using neuromodulation. What are the error bars on these results? Are the changes significant? The modest improvements can have many different reasons, like the intrinsic regularization from scaling weights, or pure chance given that it was only 1 run. I am not willing to trust improvements on a dataset like PTB if they are not quite large. Ultimately, I strongly believe that LM is not a good ask for the proposed approach and I strongly advise to find better applications and harder datasets for this kind of research. For instance, I believe that neuro-modulation could be beneficial for domain adaptation or few shot learning. So I am sticking with my evaluation of this work and strongly encourage to conduct further and more rigorous experimental work.\"}",
"{\"title\": \"Response and additional experiments.\", \"comment\": \"Thank you for your response. In response to reviewer's suggestions, we have performed additional experiment on more complex architectures. In all cases, modulated plasticity provided a moderate but consistent improvement for the same or lower number of parameters. Please see response to Reviewer 1 (who had largely similar comments) for details.\"}",
"{\"title\": \"Response and additional experiments\", \"comment\": \"Thank you for your clarification. We appreciate the careful, highly constructive response.\", \"regarding_language_modeling_experiments\": \"Following the reviewer\\u2019s suggestion, in addition to the smaller model described in the paper, we have successfully implemented plasticity in two larger architectures, including one that produces near state-of-the-art (SOTA) results. The main result is that in all cases, neuromodulation provided a consistent improvement over non-plastic and non-modulated plastic models. Though the level of improvement is modest, these new models are large, highly optimized models, and it is interesting and supportive of our case that nevertheless simply adding neuromodulation gives a boost in all cases. In more detail:\\n\\n- Model 1 (medium-size architecture similar to Gal and Gharamani 2015)\\nThis model has 20 Million parameter and the authors report a test perplexity of 79.7 on test data. We ran hyperparameter search (over learning-rate, learning-rate decay and dropout rates) on this model to bring down the test perplexity to 73.96. This was the baseline model performance. \\nSubsequently, three variants of this baseline model were evaluated: 1) with only plasticity (Miconi et al., 2018), 2) with simple neuromodulation (Equation 3 in the paper), 3) with retroactive neuromodulation (equation 4 and 5). The number of LSTM hidden units were reduced to ensure that the total number of trainable parameters remained the same as the baseline model (20 Million). In each of these variants, the same hyperparameter search was conducted as the one done on the baseline model described above. The results from a single run of each model are the following: \\n\\nBaseline (same model as Gal and Ghahramani 2015) with hyperparameter optimization: 73.96\\nBaseline with plasticity, no neuromodulation (Miconi et al., 2018): 73.81\", \"baseline_with_plasticity_and_simple_neuromodulation\": \"60.88\\n\\nWhile the 0.8 improvement of neuromodulation is modest, it still manages to produce some improvement even in a near-SOTA model, with parameters that were explicitly and carefully tuned for the non-plastic model (indeed, the importance of tuning and optimization was one of the main arguments of the paper). We believe these results confirm the results already included in the paper, showing that neuromodulation enhances the benefits of plasticity by allowing the network to control its own plasticity in real time.\", \"retroactive_neuromodulation_with_eligibility_trace\": \"73.24\\n\\nAdding simple plasticity to the LSTM network provides a 0.15 improvement in perplexity score. However, adding either simple neuromodulation or retroactive neuromodulation to the LSTM network yields a 0.7 perplexity point improvement. \\n\\n\\n- Model 2 (large-scale architecture with near-SOTA results from Merity et al. ICLR 2018)\\nThe complexity of the model (24M parameters with switching optimization algorithms) and the length of training (several days, as explained in our previous response) precluded hyperparameter tuning, so we used the same parameters as advertised on the authors\\u2019 GitHub page for all versions (the only difference being that, for all versions, we did not implement dropout in the recurrent connections because we could not integrate plasticity with the authors\\u2019 specialized code to implement it) and reduced batch size to 7 due to compute limitations. 
In addition, just like in model 1 above, we reduced the number of LSTM neurons in plastic models only, to ensure equal or lower total number of parameters to the baseline.\\n\\nWith this architecture, we report the following test perplexities on PTB (single run for each model):\\n\\nBaseline (same as Merity et al. ICLR 2018 except for changes described above): 61.68\\nBaseline with plasticity, no neuromodulation (Miconi et al., 2018): 61.81\", \"re\": \"Font issue. We have fixed the font issue and have updated our paper accordingly (it now looks similar to other ICML papers). We\\u2019ll upload the corrected version as soon as the ICLR website accepts final versions.\"}",
"{\"title\": \"Re: Rebuttal\", \"comment\": \"I appreciate the addition of the qualitative analysis. However, much like reviewer 1 my main problem with this work lie in the poor evaluation setup. Unfortunately, still all of the tasks are toy tasks and for PTB the results are particularly low. It is hard to trust any improvements in the perplexity regimes reported here, since 10-20 perplexity point gains are easily achievable with simple LSTMs. Given the relatively simple architectural additions made over previous work I would expect a more rigorous evaluation with more experiments and models that can really show the benefit of this idea in more realistic settings. In its current state I see this work as an interesting workshop addition.\"}",
"{\"title\": \"quick clarification\", \"comment\": \"Thank you for your response. I'd like to just quickly address one of your concerns regarding \\\"SOTA\\\".\\n\\nI agree with your philosophy here, and I don't mean to set SOTA as the required bar in any way whatsoever, and I don't believe I implied such in the review. In turn, I hope that you can appreciate that your results on PTB are not at all close to what an LSTM can achieve, nor what an LSTM could achieve 4 years ago. Indeed, there's still a 30 perplexity difference between these results and that of Zaremba 2014. Nonetheless, the number in and of itself still doesn't necessarily matter, for the reasons you state. \\n\\nHowever, what *does* matter is that the increase you report is on the order of ~2 perplexity, while the baseline is ~40 perplexity away from what we know LSTMs can achieve. Given that we know LSTMs can achieve 40 perplexity better, how can we be certain that the small bump you observe is indeed due to your additions, rather than some potentially unintended errors in, say, optimization or implementation of the baseline? How can we be certain that your results hold if the baseline was even a small amount closer to what we know it is capable of? With these results alone we don't necessarily know whether the effect of backpropamine will hold with better LSTMs, or whether it only works on this particular configuration of an LSTM. These points are *especially* pertinent since the baseline was performed \\\"internally\\\". The desire to compare to SOTA is often to alleviate these worries more than it is to \\\"beat a number\\\". \\n\\nMoreover, I am not entirely convinced of the argument of requiring massive compute applies here given that a single model trained with the compute resources of 4 years ago achieves a much higher score than what is reported here. I believe there's even work done in 2012 that shows better scores using RNNs. \\n\\nRegarding the font -- please download any other ICLR paper and you will see the font is Times New Roman. Pay particular attention to the title to see the difference clearly.\"}",
"{\"title\": \"General response and comments\", \"comment\": \"We thank the reviewers for their insightful comments and suggestions. We appreciate the reviewer\\u2019s agreement that the direction taken in our work is of great interest.\\n\\nIn response to the reviewer\\u2019s comments, the main modifications to the paper are as follows:\\n\\n- We have added a figure (Figure 3) and an Appendix section (A.4) to show the dynamics of neuromodulation during performance of Task 1. This figure reveals that neuromodulation is highly dynamic and reacts to reward in complex, time-varying ways.\\n\\n- We have added a schematic description of Task 1 to facilitate understanding (Figure 1, left).\\n\\n- We have altered the description of Task 3 (language modeling task), and also toned down some of the description of our results.\\n\\nWe agree with the reviewers that, ideally, extending these results to much larger architectures capable of state-of-the-art (SOTA) results would be desirable. As explained in the response to Reviewer 1, we made every effort to implement these large architectures and augment them with plasticity and neuromodulation, given our limited resources. We regret to report that we were unable to fulfill this task in the allotted time (see response to Reviewer 1 for a description of the directions we took, and are still taking). However, we believe that our existing results (showing that plasticity and modulation improve the performance of LSTMs, **all other things being equal**, in their \\u201csignature\\u201d task of language modeling, and using non-trivial, published architectures involving millions of parameters) is in itself of great potential interest. Furthermore, we are uncomfortable with the idea that obtaining SOTA results should be a minimum bar to clear for publication of novel techniques, which might restrict innovation to a few large entities. See response to reviewer 1 for more discussion of this point.\\n\\nSpecific responses to individual reviewers follow.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you to Reviewer 3 for your thoughtful critique and we are happy that you share our enthusiasm for the motivation behind our approach. We share your curiosity on the qualitative behavior of such systems, and as documented in this response we have augmented the paper to address that and other of your suggestions.\", \"re\": \"\\\"- perplexity improvements of less than 1.3 points over plasticity alone (which is the actual baseline for this paper) can hardy be called \\\"significant\\\". Even though they might be statistically significant (meaning nothing more than the two models being statistically different), minor architectural changes can lead to such improvements. Furthermore PTB is not a \\\"challenging\\\" LM benchmark.\\\"\\n\\nWe agree that, while the differences are statistically significant, they are minor. We were using that word technically, but do not want to give the wrong impression. We have thus modified the text to make it clear that we mean \\u201cstatistically significant\\u201d only. We also removed the adjective \\u201cchallenging\\u201d as regards PTB. \\n\\nWe agree that, ideally, a comparison with SOTA architectures would be desirable. As explained in the response to Reviewer 1, despite all our efforts, we found the technical challenges insurmountable given our computational and engineering resources. We will keep trying to investigate such massive architectures in the future.\\n\\n\\nImportantly, our purpose in this task is to show that, **all other things being equal**, a neuromodulated plastic LSTM can outperform a standard LSTM in realistic settings. We believe that outperforming standard LSTMs (again, all else being equal) on their \\u201cworkhorse\\u201d task domain (language processing) is worthy of notice, especially given the ease of implementation of our method which requires only adding a few lines of codes (<10) to a standard LSTM implementation and can then be used as a drop-in replacement to standard LSTM.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you Reviewer 2 for your positive appraisal of our results and presentation. As documented below, we do our best to address your questions, which have helped us improve the paper.\", \"re\": \"\\\"Why is non-plastic rnn left out of Figure 2b?\\\"\\n\\nAs documented in Miconi et al 2018, non-plastic networks are terrible at this task. We are happy to run this experiment and include it if the reviewer finds it useful.\", \"typos\": \"\\\"However, in Nature,\\\" -- no caps\", \"in_appendix\": \"\\\"(see Figure A.4)\\\" -- the figure is labeled \\\"Figure 3\\\"\\\"\\n\\nWe thank the reviewer for noticing these typos and have fixed them in the text.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you to Reviewer 1 for noting the clarity of our presentation and reproducibility. We also appreciate the constructive criticism and thought that went into your review.\\n\\nWe spent a considerable amount of time trying to fulfill the reviewer\\u2019s request to match state of the art (SOTA) on PTB. To get SOTA on PTB, we need massive architectures, which considerable computing power and experimentation at the extreme limit of what is achievable for our team. Still, we pursued two directions. First, we tried to reimplement an architecture similar to Melis et al. 2017. However, they did not publish their code, hyperparameters, or weights, requiring re-implementing and re-training from scratch. We tried this path, but soon realized we would not be done in time (especially with a hyperparameter search). \\n\\nWe then tried to weave neuromodulation and differentiable plasticity into the architecture and code base of Merity et al., ICLR 2018 (also tied for SOTA). However, while they could simply leverage existing PyTorch implementations of LSTMs (written in extremely fast C++), we had to re-implement LSTMs \\u201cby hand\\u201d (i.e. as a series of connected layers) in PyTorch to introduce plasticity and neuromodulation. As a result, our networks thus ran considerably slower, by more than 10x (not because our method is intrinsically slower, but just for lack of engineering optimizations on our bespoke Python implementations; we confirmed this by observing that a similar \\u201chand-built\\u201d reimplementation of simple, non-plastic LSTMs ran similarly slower, while producing results identical to Merity et al.). These experiments are thus unfortunately still running. For these reasons (and more provided below), we thus think it more fair (and necessary) to make such experiments the subject of a future paper. \\n\\nThat said, we still believe the results in the current paper demonstrate the benefits of our techniques on a sizable model, and thus it would benefit the community to allow people to know about, and build upon, these new methods and results. The purpose of the present paper is to introduce a novel technique and show that it can produce an advantage in realistic settings, which we believe our PTB task confirms. Our claim is that, all other things being equal (especially the number of parameters), a neuromodulated plastic LSTM outperformed a standard LSTM on this particular benchmark task. We do **not** want to claim that our results are anywhere near SOTA. We have modified our text to avoid possible misunderstandings (see end of next-to-last paragraph in Section 4).\\n\\nAdditionally, philosophically, If SOTA results are the bar for all papers to be accepted into conferences like ICLR, then those venues will be the exclusive domain of those with either the computation or time (i.e. large-scale resources) to dedicate to such results. In that case, many cutting edge ideas will by necessity be excluded from the discussion, as will many research groups. Moreover, insisting on papers to be SOTA to be accepted also likely encourages p-hacking and shoddy science to game the results (even if unintentionally), reducing the quality of science our community tries to build on.\", \"re\": \"\\\"Style (font)\\\": We used the template and do not see the discrepancy. Can you clarify? We are happy to fix it.\"}",
"{\"title\": \"Interesting ideas and clearly presented, but the results do not support the claims\", \"review\": \"This work presents Backpropamine, a neuromodulated plastic LSTM training regime. It extends previous research on differentiable Hebbian plasticity by introducing a neuromodulatory term to help gate information into the Hebbian synapse. The neuromodulatory term is placed under network control, allowing it to be time varying (and hence to be sensitive to the input, for example). Another variant proposes updating the Hebbian synapse with modulated exponential average of the Hebbian product. This average is linked to the notion of an eligibility trace, and ties into some recent biological work that shows the role of dopamine in retroactively modulating synaptic plasticity.\\n\\nOverall the work is nicely motivated and clearly presented. There are some interesting ties to biological work -- in particular, to retroactive plasticity phenomena. There should be sufficient details for a reader to implement this model, thought there are some minor details missing regarding the experimental setup, which will be addressed below.\", \"the_authors_test_their_model_on_three_tasks\": \"cue-award association, maze learning, and Penn Treebank (PTB). In the cue-award association task the retroactive and simple modulation networks perform well, while the non-modulated and non-plastics fail. For the maze navigation task the modulated networks perform better than the non-modulated networks, though the effect is less pronounced. Finally, on PTB the authors report improvements over baseline LSTMs.\\n\\nOne of the main claims of this paper is that neuromodulated plastic LSTMs...outperform standard LSTMs on a benchmark language modeling task\\u201d, and that therefore \\u201cdifferentiable neuromodulation of plasticity offers a powerful new framework for training neural networks\\u201d. This claim is unfortunately unfounded for a very important reason: the LSTM performance is not at all close to that which can be achieved by LSTMs in general. The authors cite such models in the appendix (Melor et al), but claim that \\u201cmuch larger models\\u201d are needed, potentially with other mechanisms, such as dropout. Though this may be true, these models still undermine the claim that \\u201cneuromodulated plastic LSTMs...outperform standard LSTMs on a benchmark language modeling task\\u201d. This claim is simply not true, and more care is needed in reporting the results here in the wider context of the literature. Also, I am left wondering what are considered the parameters of the models -- are only the neuromodulatory terms considered as the additional trainable parameters compared to baseline LSTMs? How are the Hebbian synapses themselves considered in this calculation? If the Hebbian synapses are not considered, then the authors need a control with matched memory-capacities to account for the extra capacity afforded by the Hebbian synapses. Given the ties between Hebbian synapses and attention (see Ba et al), an important control here could be an LSTM with Bahdanau (2014) style attention. \\n\\nFinally, the style (font) of the paper does not adhere to the ICLR style template, and must be changed.\\n\\nOverall, the ideas presented in the paper are intriguing, and further research down this line is encouraged. 
However, in its current state the work lacks sufficiently strong baselines to support the paper\\u2019s claims; thus, the merits of this approach cannot yet be properly assessed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Meta-learning to dynamically set the plasticity learning rate\", \"review\": \"Paper summary - This paper extends the differentiable plasticity framework of Miconi et al. (2018) by dynamically modulating the plasticity learning rate. This is accomplished via an output unit of the network which defines the plasticity learning rate for the next timestep. A variation on this dynamic learning rate related to eligibility traces is also proposed.\\n\\nBoth dynamic modulation variations strikingly outperform non-plastic and plastic non-modulated recurrent networks on a cue-reward association task with high-dimensional cues. The methods marginally outperform plastic non-modulated recurrent networks on a 9x9 water maze task. Finally, the authors show that adding dynamic plasticity to a small LSTM without dropout improves performance on Penn Treebank.\\n\\nThe paper motivates dynamic plasticity by analogy to the hypothesized role of dopamine in reward-driven learning in humans and animals.\\n\\nClarity - The paper is very clear and well written. The introduction provides useful insights, motivates the work convincingly, and provides interesting connections to past work.\\n\\nOriginality - I don't know of any other work that models the role of dopamine in quite this way, or that applies dynamic plasticity modulation in settings like these.\\n\\nQuality - The experiments are well chosen and seem technically sound.\\n\\nSignificance - The results show that meta-learning by gradient descent to modulate the plasticity learning rate is a promising direction -- a significant contribution in my view.\\n\\nOther Comments - The citation to Zaremba et al. in Table 1 made it seem like the perplexity result on that line of the table was directly from Zaremba et al's paper. I'd recommend removing the citation from that line to avoid confusion.\\n\\nOne thing I would have loved to see from this paper is a comparison of modulated-plasticity LSTMs with the sota from Melis et al., 2017. I gather that Experiment 3 presents small LSTMs without recurrent dropout instead because combining plasticity and dropout proved challenging (or at least the authors haven't tried it yet). I think the paper is solid as-is; positive results in this comparison would take it to the next level.\", \"questions\": \"Why were zero-sequences necessary in Experiment 1? This aspect of the task seems somewhat contrived, and it makes me wonder whether the striking failure of the non-modulated RNNs depends on this detail. Perhaps the authors could clarify on what a confounding \\\"time-locked scheduling strategy\\\" would look like in this task?\\nWhy does Experiment 1 present pairs of stimuli, rather than high-dimensional individual stimuli?\\nWhy is non-plastic rnn left out of Figure 2b?\\n\\nTypos\\n\\\"However, in Nature,\\\" -- no caps\", \"in_appendix\": \"\\\"(see Figure A.4)\\\" -- the figure is labeled \\\"Figure 3\\\"\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting extension of differentiable plasticity with evaluation that falls too short\", \"review\": \"The paper extends previous work on differentiable placticity to include neuro modulation by parameterizing the learning rate of Hebbs update rule. In addition, the authors introduce retroactive modulation that basically allows the system to delay incorporation of plasticity updates via so eligibility traces. Experiments are performaed on 2 simple toy datasets and a simple language modeling task. A newly developed cue-reward association task shows the clear limitations of basic plasticity and how modulation can resolve this. Slight improvements can also be seen on a simple maze navigation task as well as on a basic language modeling dataset.\\n\\nOverall I like the motivation, provided background information and simplicity of the approach. Furthermore, the cue-reward experiment seems to be a well designed show case for neuro-modulation. However, as the authors acknowledge the overall simplicity of the tasks being evaluated with mostly marginal improvements makes the overall evaluation fall short. Unfortunately the paper doesn't provide any qualitative analysis on how modulation is employed by the models after training. Therefore, although I would like to see an extended version of this paper at the conference, without further experiments and analysis I see the current version rather as an interesting workshop contribution.\", \"strengths\": [\"motivation: the natural extension of previous work on differentiable plasticity based on existing knowledge from neuro science is an important next step\", \"cue reward experiment exemplifies limitations of current plasticity approaches and clearly shows the potential benefits of neuro modulation\", \"maze navigation shows incremental benefits over non-modulated plasticity\", \"thorough experimentation\", \"clipping-trick is a neat observation\"], \"weaknesses\": [\"evaluation: only on toy tasks (which includes PTB), no real world tasks\", \"very incremental improvements on PTB over a very simple baseline (far from SotA)\", \"evaluated models (feed-forward NNs and LSTMs) are very basic and far from current SotA architectures\", \"no qualitative analysis on how modulation is actually use by the systems. E.g., when is modulation strong and when is it not used\"], \"comments\": [\"perplexity improvements of less than 1.3 points over plasticity alone (which is the actual baseline for this paper) can hardy be called \\\"significant\\\". Even though they might be statistically significant (meaning nothing more than the two models being statistically different), minor architectural changes can lead to such improvements. Furthermore PTB is not a \\\"challenging\\\" LM benchmark.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SyerAiCqt7 | Hierarchical Bayesian Modeling for Clustering Sparse Sequences in the Context of Group Profiling | [
"Ishani Chakraborty"
] | This paper proposes a hierarchical Bayesian model for clustering sparse sequences.This is a mixture model and does not need the data to be represented by a Gaussian mixture and that gives significant modelling freedom.It also generates a very interpretable profile for the discovered latent groups.The data that was used for the work have been contributed by a restaurant loyalty program company. The data is a collection of sparse sequences where each entry of each sequence is the number of user visits of one week to some restaurant. This algorithm successfully clustered the data and calculated the expected user affiliation in each cluster. | [
"Hierarchical Bayesian Modeling",
"Sparse sequence clustering",
"Group profiling",
"User group modeling"
] | https://openreview.net/pdf?id=SyerAiCqt7 | https://openreview.net/forum?id=SyerAiCqt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skg2l7ngeV",
"ByeQzhQpnm",
"S1e3Sz2jhX",
"B1xrAbZ9n7",
"HJeUU5LIhQ",
"rylBeSoS3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544762099604,
1541385227056,
1541288516123,
1541177804808,
1540938317844,
1540891884936
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper889/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper889/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper889/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper889/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper889/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper889/AnonReviewer4"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review for Group Profiling paper\"}",
"{\"title\": \"poorly written paper, not ready for publication\", \"review\": \"Pros:\\n-- Clustering sequence vectors is a practical and useful problem. Some of the business use-cases described in the paper are indeed useful and relevant for analytics in healthcare and retail.\", \"cons\": \"-- The paper is poorly written. There are numerous typos and grammatical errors throughout the paper. \\n-- The ideas are not presented coherently. The writing needs to improve quite a bit to get accepted at a conference like ICLR.\\n-- Description of related literature is done very poorly. \\n-- The generative model described clearly lacks justification. The model is not described concretely either. There is no clear description of the inference techniques used.\\n-- Empirical results are weak.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Zero novelty\", \"review\": \"The problem formulation at the bottom of page 3 correspond to what a bag of words preprocessing of a document would provide and in this the clustering would be a much simpler solution that just doing LDA.\\n\\nThe paper has zero interest.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Both the writing and experiments should be improved\", \"review\": \"This paper propose a hierarchical Bayesian model to cluster sparse sequences data. The observations are modeled as Poisson distributions, whose rate parameter \\\\lambda_i is written as the summation of \\\\lambda_{ik}, a Gamma distribution with rate equal to the mixture proportion \\\\alpha_{ik}. The model is implemented in Pystan. Experimental results on a real-world user visit dataset were presented.\\n\\nThe format of this paper, including the listing in the introduction section, the long url in section 2.3, and the model specification in section 3.2, can be improved. In particular, the presentation of the model would be more clear if the graphical model can be specified. \\n\\nThe motivation of choosing the observation model and priors is not clear. In section 3, the author described the details of model specification without explaining why those design choices were appropriate for modeling sparse sequence data.\\n\\nExperimental results on a real-world dataset is presented. However, to demonstrate how the model works, it would be best to add synthetic experiments as sanity check. Results using common baseline approaches should also be presented. The results should also be properly quantified in order to compare the relative advantage of different approaches.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The authors discuss a hierarchical Bayesian framework for clustering sparse sequences. They use data from a restaurant loyalty program to identify users (rows) and weeks of visits (columns) under the assumption that user visits to a restaurant will be sparse across weeks.\", \"review\": \"The paper is very poorly written. It is hard to understand what the real contribution is in this paper.\\nThe connection of the model with HMM is not clear. The literature review has to be rewritten.\\n\\nTo the reader, it sounds that the authors are confused with the fundamentals itself: mixture model, Bayesian models, inference. \\n\\n> Mixture models can be based on any of the exponential family distributions - Gaussian just happens to be the most commonly used.\\n> Again if this is a Bayesian model, why are #clusters not inferred? The authors further mention that in their Pystan implementation K clusters were spun too quick. What was the K used here? Was it set to a very large value or just 3? Did the authors eventually use the truncated infinite mixture model in Pystan?\\n> The authors mention their model is conceptually similar to EM but then end up using NUTS. \\n> Why is a url given in Section 2.3 instead of being given in the references? \\n> Provide a plate model describing Section 3.2.\", \"rating\": \"1: Trivial or wrong\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"HIERARCHICAL BAYESIAN MODELING FOR CLUSTERING SPARSE SEQUENCES IN THE CONTEXT OF GROUP PROFILING\", \"review\": \"The paper discusses clustering sparse sequences using some mixture model. It discusses results about clustering data obtained from a restaurant loyalty program.\\n\\nIt is not clear to me what the research contribution of the paper is. What I see is that some known techniques were used to cluster the loyalty program data and some properties of the experiments conducted noted down. No comparisons are made. I am not sure what to evaluate in this paper.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1EERs09YQ | Discovery of Natural Language Concepts in Individual Units of CNNs | [
"Seil Na",
"Yo Joong Choe",
"Dong-Hyun Lee",
"Gunhee Kim"
] | Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret. Especially, little is known about how they represent language in their intermediate layers. In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns. In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text. We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language. | [
"interpretability of deep neural networks",
"natural language representation"
] | https://openreview.net/pdf?id=S1EERs09YQ | https://openreview.net/forum?id=S1EERs09YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xprSvWl4",
"rklw7IW-xN",
"rkeFc0bDkV",
"HkxBOdwqAm",
"Bkx4UOPqRQ",
"SJlEFDw9AQ",
"Ske8LPPq0m",
"SyxDYjcq2m",
"BJeMqyfch7",
"rke4auot2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544807749325,
1544783390673,
1544130193211,
1543301229228,
1543301195989,
1543300988116,
1543300942458,
1541217151492,
1541181322335,
1541155003530
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper888/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper888/Authors"
],
[
"ICLR.cc/2019/Conference/Paper888/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper888/Authors"
],
[
"ICLR.cc/2019/Conference/Paper888/Authors"
],
[
"ICLR.cc/2019/Conference/Paper888/Authors"
],
[
"ICLR.cc/2019/Conference/Paper888/Authors"
],
[
"ICLR.cc/2019/Conference/Paper888/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper888/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper888/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Important problem (making NN more transparent); reasonable approach for identifying which linguistic concepts different neurons are sensitive to; rigorous experiments. Paper was reviewed by three experts. Initially there were some concerns but after the author response and reviewer discussion, all three unanimously recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}",
"{\"title\": \"Response to Reviewer 3 (for post-rebuttal comments)\", \"comment\": \"We are deeply grateful to reviewer3 for thoughtful post-rebuttal suggestions. We will clarify terminology, add more analyses and modify the figures accordingly. For example, we will match the detected concepts with those in WordNet (ConceptNet) tree and update Fig 7 and Fig 14 to show which concepts are detected at each bin.\"}",
"{\"title\": \"Follow-up to author response\", \"comment\": \"Thank you to the authors for your comprehensive replies and revisions. The added analyses help to clarify and solidify the overall picture, and I remain of the opinion that this paper offers some interesting insights into the internal workings of these networks.\"}",
"{\"title\": \"Response to Reviewer 3 (part 2)\", \"comment\": \"5. Concept replication\\n ===================================\\nThe main reason that we replicate each concept into a fixed-length sentence is to normalize the degree of the input signal to the unit activation. We clarify this point in Section 3.3. Without such normalization (e.g. a single instance of a candidate concept as input, as Reviewer 2 suggested), the DoA metric has a bias to prefer a lengthy concept. Please refer to Appendix A.4 for comparison with 'one instance' method.\\n\\n\\n6. Section 4.4\\n ===================================\\nWe thank Reviewer 3 for acknowledging the significance of results in section 4.4.\\n\\n\\n7. Sensitivity of replicate setting\\n ===================================\\nWe add a \\u2018one instance\\u2019 option to the comparison of selectivity (Fig. 2). The results show that the mean selectivity of the \\u2018replicate\\u2019 set is higher than that of the \\u2018one instance\\u2019 set, which implies that a unit's activation increases as its concepts appear more often in the input text. One of our main contributions is the discovery of the units that are selectively responsive to specific natural language concepts and \\u201cit is quantitatively verified\\u201d in Fig. 2.\\n\\n\\n8. Factors that affect concept alignment\\n ===================================\\nIt is an interesting question why certain concepts emerge more than others. We experiment some factors that may affect concept alignment, and add results to Section 4.5 and Appendix F. We investigate the following two hypotheses: (i) The concepts with higher frequency in training data are aligned to more units (as Reviewer 3 suggested). (ii) Concepts that have more influence on the objective function (expected loss) are aligned to more units. For the concepts in the final layer of translation model, we measure the Pearson correlation coefficient between [# of aligned units per concept] and the factor (i) and (ii), and obtain 0.482 / 0.531, respectively. These results make a lot of sense in that the learned representation focuses more on identifying both frequent concepts and important concepts for solving the target task. Yet, we are not sure that we should directly \\u201ccontrol\\u201d the effect of frequency, because it is quite unnatural and non-trivial to manipulate the training data to change the frequency of a specific concept.\\n\\n\\n9. Minor comments from Reviewer 3\\n===================================\\n(1) We update Section 2.2, related work, Section 4.1 and Section 4.3 as Reviewer 3 suggested. Please see the blue fonts.\\n(2) Fig. 5: We thank Reviewer 3 for correcting the typo. The y-axis of Fig. 5 is \\u201cthe number of aligned concepts\\u201d in each layer. For example, the plot on the top left dbpedia shows that more than 100 morpheme concepts are aligned across all units of the 0-th layer. We also update the caption of Fig. 5 for clarification. \\n(3) Appendix A: We add reference to Appendix A in footnote of Section 3.3 of the revised paper.\\n(4) Notation of set of \\u2018random\\u2019 sentences: we will modify notation of random set for less confusing in the camera-ready version. \\n\\n10. Writing and grammar\\n===================================\\nWe sincerely thank Reviewer 3 for thorough proofreading. 
We correct all the typos.\\n\\nReference\\n===================================\\n[1] Bolei Zhou et al., Revisiting the Importance of Individual Units in CNNs via Ablation (arXiv:1806.02891, 2018)\\n[2] David Bau et al., Network Dissection: Quantifying Interpretability of Deep Visual Representations (CVPR 2017)\\n[3] Ruth Fong et al., Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks (CVPR 2018)\\n[4] Bolei Zhou et al., Object Detectors Emerge In Deep Scene CNNs (ICLR 2015)\"}",
"{\"title\": \"Response to Reviewer 3 (part 1)\", \"comment\": \"We thank Reviewer 3 for positive and constructive review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Concepts\\n===================================\\n(1) We agree that the term \\u2018concept\\u2019 could be ambiguous. Nonetheless, we use the term \\u2018concept\\u2019, following the related work for interpretability [1-4], where the \\u2018units\\u2019 and \\u2018concepts\\u2019 are typically used to refer to the channels of hidden layers and the detected semantic parts of the input information (eg, wheels, cars, legs as visual concepts), respectively. In our work on natural language, the \\u2018concepts\\u2019 in previous work should correspond to morphemes, words, and phrases, which form the fundamental building blocks of natural language. Please also note that we define \\u2018natural language concept\\u2019 in Section 1 instead of \\u2018concept\\u2019 alone for less confusion. \\n\\n\\n(2) We define a \\u201cconcept cluster\\u201d as a set of concepts that are aligned to the same unit and have similar semantics or grammatical roles. We add what concept clusters emerge per task to Appendix E.1. We observe that such concept clusters appear more strongly in classification tasks rather than translation tasks. Also, we investigate how concept clusters vary with layer depth and discuss the detailed results in Appendix E.2, where we discover that units in deeper layers tend to form clusters more strongly than units in earlier layers. Please refer to Appendix E for more results.\\n\\n\\n2. Analyses are qualitative and in a small scale\\n ===================================\\nGiven that we use two state-of-the-art models on seven benchmark datasets, our experiments are large-scale, although some analyses are done qualitatively in small-scale as Reviewer pointed out. \\nTherefore, we add more quantitative and thorough analyses as follows.\\n(1) Ratios of interpretable/non-interpretable units across layers for multiple tasks and datasets (Appendix D).\\n(2) Quantitative measures of concept clusters across layers for multiple tasks and datasets (Appendix E).\\n(3) Correlation coefficients of possible hypotheses on why certain units emerge (i.e. document frequency and delta of expected loss) for multiple tasks and datasets (Section 4.5 and Appendix F). \\n(4) Selectivity variation for different M values = [1,3,5,10] (Appendix C).\\n(5) The number of unique concepts aligned to each layer for multiple tasks and datasets. (Figure 13)\\n\\n3. Paper structure\\n ===================================\\nPer Reviewer 3\\u2019s suggestion, we will move [The Model and the Task] Section to 4.1 in the camera-ready version. \\n\\n\\n4. Sentence representation\\n ===================================\\n(1) We clarify Section 3.2 as Reviewer 3 suggested. Please refer to blue fonts in Section 3.2\\n(2) The idea of mean-pooling over all spatial locations is motivated by Zhou et al. [4]. The only difference is that [4] uses the addition pooling because the input set is fixed-length images, whereas we use the mean pooling because the input is variable-length sentences.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for positive and constructive review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Replicating concepts\\n===================================\\nThe main reason that we replicate each concept into a fixed-length sentence is to normalize the degree of the input signal to the unit activation. Without such normalization (e.g. a single instance of a candidate concept as input, as Reviewer 2 suggested), the DoA metric has a bias to prefer a lengthy concept. We clarify this point at Section 3.3, and present detailed discussion in Appendix A.4.\\n\\n\\n2. M values\\n===================================\\nThe M value is used as a threshold to set how many concepts per unit are considered for later analyses. We observe that the overall trend in our quantitative results does not change much with M. As an example, we add Fig.8 to Appendix C, which shows the trend of selectivity values is stable across different M= [1,3,5,10]. \\n\\n\\n3. Non-interpretable units\\n===================================\\nIt is a highly interesting suggestion to investigate non-interpretable units as well as interpretable ones. We add one approximate method to quantify the non-interpretability of unit to Appendix D in the revised paper.\\nWe define a unit as non-interpretable, if the activation value of its top-activated sentence is higher than the DoA values of all aligned concepts. The intuition is that if a replicated sentence that is composed of only one concept has a less activation value than the top-activated sentences, the unit is not sensitive to the concept compared to a sequence of different words. Using this definition of non-interpretable units, we report the layer-wise ratios of interpretable units in Fig. 9 and some examples of non-interpretable units in Fig.10 in Appendix D. Please refer to Appendix D for the detailed results.\\n\\n\\n4. Figure 5\\n===================================\\nWe thank Reviewer 2 for correcting the typo. The y-axis of Fig. 5 is \\u201cthe number of aligned concepts\\u201d in each layer. For each layer, we collect all concepts, and then count category of each concept. For example, the plot on the top left dbpedia shows that more than 100 morpheme concepts are aligned to the units of the 0-th layer. We also update the caption of Fig. 5 for clarification. \\n\\n\\n5. Concept clusters\\n===================================\\n(1) What concept clusters emerge?\\nAs Reviewer 2 suggested, we add experiments of concept clusters to Fig. 11 and Appendix E.1. The top and left dendrograms of Fig. 11 show the hierarchical cluster of concepts based on the vector space distance between the concepts in the last layer. For clustering ([4]), we use the Euclidean distance as the distance measure, and pretrained Glove ([1]), fastText ([2]), ConceptNet ([3]) embedding for projecting concepts into the vector space. Each element of the heat map represents the number of times two concepts are aligned in the same unit. We observe that several diagonal blocks (clusters) appear more strongly in classification than in translation, particularly in the AG News and the DBpedia dataset. Please refer to Appendix E.1 for more details.\\n\\n(2) Why certain clusters emerge more than others?\\nIt is an interesting question why certain concepts or clusters emerge more than others. We add some results to this inquiry to Section 4.5 and Appendix F. 
We deal with individual concepts rather than clusters of concepts. We investigate the following two hypotheses: (i) The concepts with higher frequency in training data are aligned to more units. (ii) Concepts that have more influence on the objective function (expectation of the loss) are aligned to more units. For the concepts in the final layer, we measure the Pearson correlation coefficient between [# of aligned units per concept] and the factor (i) and (ii), and obtain 0.482 / 0.531, respectively. These results make a lot of sense in that the learned representation focuses more on identifying both frequent concepts and important concepts for solving the target task.\\n\\n6. Typos\\n===================================\\nWe corrected the typos. Thanks for pointing out.\\n\\nReferences\\n===================================\\n[1] Jeffrey Pennington et al., GloVe: Global Vectors for Word Representation (EMNLP 2014)\\n[2] Piotr Bojanowski et al., Enriching Word Vectors with Subword Information (TACL 2017)\\n[3] Speer Robert et al., ConceptNet 5.5: An Open Multilingual Graph of General Knowledge (AAAI. 2017)\\n[4] Daniel Mullner. Modern hierarchical, agglomerative clustering algorithms. arXiv:1109.2378v1. (arXiv 2011)\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for positive and constructive review. Please see our revisions in blue font to check how our paper is updated.\\n\\n1. Concepts coverage over multiple layers\\n===================================\\nWe plot the number of unique concepts per layer in Figure 13. In all datasets, the number of unique concepts increases with the layer depth, which implies that the units in a deeper layer represent more diverse concepts.\\n\\n\\n2. Multiple occurrences of each concept at different layers\\n===================================\\nWe add Figure 16 to Appendix H to show how many layers each concept appears. Although task and data specific concepts emerge at different layers, there is no strong pattern between the concepts and their occurrences at multiple layers.\\n\\n\\n3. The layers\\u2019 activation dynamics towards noisy elements\\n===================================\\nIt is an interesting suggestion to investigate how unit activations vary with noisy elements of natural language such as synthetic adversarial examples or natural noise (Belinkov et al.[1]) that could attack the model. Since we discover some units that capture the abstract semantics rather than low-level text patterns in Section 4.2, we expect that those units will be not sensitive to such noisy transformation of the concepts. More thorough analysis for this topic will be one of our emergent future works.\\n\\nReferences\\n===================================\\n[1] Yonatan Belinkov et al., Synthetic and Natural Noise Both Break Neural Machine Translation (ICLR 2018)\"}",
"{\"title\": \"Solid paper with interesting insights - left with some questions\", \"review\": \"This paper describes a method for identifying linguistic components (\\\"concepts\\\") to which individual units of convolutional networks are sensitive, by selecting the sentences that most activate the given unit and then quantifying the activation of those units in response to subparts of those sentences that have been isolated and repeated. The paper reports analyses of the sensitivities of different units as well as the evolution of sensitivity across network layers, finding interesting patterns of sensitivity to specific words as well as higher-level categories.\\n\\nI think this paper provides some useful insights into the specialization of hidden layer units in these networks. There are some places where I think the analysis could go deeper / some questions that I'm left with (see comments below), but on the whole I think that the paper sheds useful light on the finer-grained picture of what these models learn internally. I like the fact that the analysis is able to identify a lack of substantial change between middle and deeper layers of the translation model, which inspires a prediction - subsequently borne out - that decreasing the number of layers will not substantially reduce task performance.\\n\\nThe paper is overall written pretty clearly (though some of the questions below could likely be attributed to sub-optimal clarity), and to my knowledge the analyses and insights that it contributes are original. Overall, I think this is a solid paper with some interesting contributions to neural network interpretability.\\n\\nComments/questions:\\n\\n-I'm wondering about the importance of repeating the \\u201cconcepts\\u201d to reach the average sentence length. Do the units not respond adequately with just one instance of the concept (eg \\\"the ball\\\" rather than \\\"the ball the ball the ball\\\")? What is the contribution of repetition alone?\\n\\n-Did you experiment with any other values for M (number of aligned candidate concepts per unit)? It seems that this is a non-trivial modeling decision, as it has bearing on the interesting question of how broadly selective a unit is.\\n\\n-You give examples of units that have interpretable sensitivity patterns - can you give a sense of what proportion of units do *not* respond in an interpretable way, based on your analysis?\\n\\n-What exactly is plotted on the y-axis of Figure 5? Is it number of units, or number of concepts? How does it pool over different instances of a category (different morphemes, different words, etc)? What is the relationship between that measure and the number of distinct words/morphemes etc that produce sensitivity?\\n\\n-I'm interested in the units that cluster members of certain syntactic and semantic categories, and it would be nice to be able to get a broader sense of the scope of these sensitivities. What examples of these categories are captured? Is it clear why certain categories are selected over others? Are they obviously the most optimal categories for task performance?\\n\\n-p7 typo: \\\"morhpeme\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting results on an important problem, but insufficient analysis and evaluation\", \"review\": \"========== Edit following authors' response ==========\\n\\nThank you for your detailed response and updated version. I think the new revision is significantly improved, mainly in more quantitative analyses and details in several places. I have updated my evaluation accordingly. \\n\\nSee a few more points below.\\n\\n1. Thank you for clarifying your definition of concepts. I still think that the word \\\"concept\\\" has a strong semantic connotation, while the linguistic elements your analyses capture may do other things. The results in appendix E do show that some semantic clusters arise. It's especially interesting to see the blocks in some of the heat maps, where similar \\\"concepts\\\" are clustered together (like the sports terms in AG); consider commenting on this. \\n\\n2. The new quantitative analyses are helpful. One other suggestion that I mentioned before is to connect detected concepts to external resources like WordNet or ConceptNet. That would help show that \\\"concepts\\\" are indeed semantic objects. \\n \\n3. The motivation for replicating as normalizing for length does make sense, although the input would still be unnatural. The comparison to \\\"one instance\\\" is helpful, but it's interesting that the differences between it and replication in figure 2 are not large. It would be good to show results that substantiate your assumption that without replication there will be a bias towards lengthy concepts. Does \\\"one instance\\\" detect more lengthy concepts than replication? \\n\\n4. The results on frequency and loss difference in 4.5 are very interesting. There is another angle to consider frequency: words that appear frequently often carry less semantic content (e.g. function words), so one might conjecture that they would require less units. It may be interesting to look at which concepts are detected at each frequency bin.\\n\\n5. Minor points: section 2.2 still mentions \\\"regression\\\" where it should be \\\"classification\\\". \\n\\n6. A few remaining grammar issues:\\n- \\\"one concept has a less activation value..\\\" - rephrase \\n- end of section 3.3: \\\"this experiments\\\" -> \\\"these experiments\\\"\\n\\n\\n========== Original review follows ==========\", \"summary\": \"=======\\nThis paper analyzes individual units in CNN models for text classification and translation tasks. It defines a measure of sensitivity for a unit and evaluates how sensitive each unit is to \\\"concepts\\\" in the input text, where concepts are morphemes, words, and phrases. The analysis shows that some units seem to learn semantic concepts, while others capture linguistic elements that are frequent or relevant for the end task. Layer-wise results show some correspondence between layer depth and linguistic element size. \\n\\nThe paper studies an important question that is relatively under-studies in NLP compared to the computer vision community. The motivation for the work is quite convincing. \\nI found some of the results and analysis interesting, but overall felt that the work can be made much stronger by more quantitative evaluations. I am also worried that the notion of \\\"concept\\\" is misleading here. See below for this and other comments. I am willing to reconsider my evaluation pending response to the below issues.\", \"main_comments\": \"=============\\n1. Concepts: \\n- morphemes, words, and phrases - are these \\\"concepts\\\"? 
They are indeed \\\"fundamental building blocks of natural language\\\" (2.2), but \\\"concepts\\\" has a more semantic connotation that I'm not sure these units target at. \\n- Some of the results do suggest that units learn concepts, as the analysis in 4.2 shows a \\\"unit detecting the meaning of certainty in knowledge\\\" and later units that have similar sentiments. It would be informative to quantify this in some way, for example by matching detected concepts to WordNet synsets, sentiment lexicons, etc., or else tagging and classifying them with various NLP tools. This could also reveal if units learn more syntactic or semantic concepts, and so on. \\n2. Generally, many of the analyses in the paper are qualitative and on a small scale. The results will be more convincing with more automatic aggregate measures. \\n3. The structure of the paper is confusing. Section3 starts with the approach but then mentions datasets and tasks (3.1). Section 4 is titled experiments, but section 4.1 starts with defining the concept selectivity. I would suggest reorganizing sections 3 and 4, such that section 3 describes all the methods and metrics, while dataset-specific parts are moved to section 4. \\n4. section 3.2 should provide more details on the sentence representation and how its obtained in the CNN models. A mathematical derivation and/or figure could be helpful. It is also not clear to me what's the motivation for mean-pooling over the l entries of the vector. \\n5. section 3.3: the use of replicated text for \\\"concept alignment\\\" is puzzling. This is not a natural input to the model, and I think more justification and motivation \\u00e5re needed for this issue, as well as perhaps comparison with other approaches. \\n6. I found section 4.4 very interesting. It shows some intuitive results of larger linguistic elements learned at higher layers, but then some results that do not show such a trend. Then, hypothesizing that the middle layers are sufficient AND validating the hypothesis by retraining the model is excellent. It's a very nice demonstration that the analysis can lead to model improvements. \\n7. Figure 2 seems to be almost caused by construction of the different options for S_+. Is it surprising that the replicate set has the highest sensitivity? Is there a better control setup than comparing with a random set? \\n8. One concern that I have is the effect of confounding factors like frequency on the results. The papers occasionally attributes importance to concepts (e.g. in 4.2), but I wonder if instead we may be seeing more frequent words. 
Controlling for the effect of frequency would be useful.\", \"minor_comments\": [\"==============\", \"Section 2.2, first paragraph: regression should be changed to classification\", \"The related work is generally relevant, although one could mention a few other papers that analyzed individual neurons in NLP tasks [1, 2]\", \"section 4.1: the random set may perhaps be denoted by something more neutral, not S_+ as the replicate and inclusion sets.\", \"section 4.3, last paragraph: listing examples showing that units in Europarl focus on key words would be good.\", \"Figure 5, y axis label: should this be number of units instead of concepts?\", \"Appendix A has several interesting points but there is no reference to them from the main paper.\", \"Writing, grammar, etc.:\", \"======================\", \"Introduction: among them - who is them?\", \"2.1: motivated from -> motivated by; In computer vision community -> In the computer vision community\", \"2.1: quantifying characteristics of representations in layer-wise -> rephrase\", \"3.2: dimension of sentence -> dimension of the/a sentence\", \"4.1: to which -> remove \\\"which\\\"\", \"4.2: in the several encoding layer -> in several encoding layers\", \"4.3: aliged -> aligned\", \"Capitalize titles in references\", \"A.2: with following -> with the following; how much candidate -> how much a candidate; consider following -> consider the following\", \"A.3: induces similar bias -> induces a bias; such phrase -> such a phrase; on very -> on a very\", \"C: where model -> where the model; In consistent -> Consistent; where model -> where the model\", \"References\", \"==========\", \"[1] Qian et al., Analyzing linguistic knowledge in sequential model of sentence\", \"[2] Shi et al., Why Neural Translations are the Right Length\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper proposes an interpretation to the activation values of hidden layer units of convolutional neural networks trained on language tasks, aligning those units with natural language concepts. The work is novel and interesting to the NLP community.\", \"review\": \"The paper is well written and structured, presenting the problem clearly and accurately. It contains considerable relevant references and enough background knowledge. It nicely motivates the proposed approach, locates the contributions in the state-of-the-art and reviews related work. It is also very honest in terms of how it differs on the technical level from existing approaches.\\nThe paper presents interesting and novel findings to further state-of-the-art\\u2019s understanding on how language concepts are represented in the intermediate layers of deep convolutional neural networks, showing that channels in convolutional representations are selectively sensitive to specific natural language concepts. It also nicely discusses how concepts granularity evolves with layers\\u2019 deepness in the case of natural language tasks.\\nWhat I am missing, however, is an empirical study of concepts coverage over multiple layers, studying the multiple occurrences of single concepts at different layers, and a deeper dive on the rather noisy elements of natural language and the layers\\u2019 activation dynamics towards such elements.\\nOverall, however, the ideas presented in the paper are interesting and original, and the experimental section is convincing. My recommendation is to accept this submission.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SyG4RiR5Ym | Neural Distribution Learning for generalized time-to-event prediction | [
"Egil Martinsson",
"Adrian Kim",
"Jaesung Huh",
"Jaegul Choo",
"Jung-Woo Ha"
] | Predicting the time to the next event is an important task in various domains.
However, due to censoring and irregularly sampled sequences, time-to-event prediction has seen only limited success, restricted to particular tasks, architectures, and data. Using recent advances in probabilistic programming and density networks, we make the case for a generalized parametric survival approach, sequentially predicting a distribution over the time to the next event.
Unlike previous work, the proposed method can use asynchronously sampled features for censored, discrete, and multivariate data.
Furthermore, it achieves good performance and near-perfect calibration for probabilistic predictions without using rigid network architectures, multitask approaches, complex learning schemes, or non-trivial adaptations of Cox models.
We firmly establish that this can be achieved in the standard neural network framework by simply switching out the output layer and loss function. | [
"Deep Learning",
"Survival Analysis",
"Event prediction",
"Time Series",
"Probabilistic Programming",
"Density Networks"
] | https://openreview.net/pdf?id=SyG4RiR5Ym | https://openreview.net/forum?id=SyG4RiR5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xg0J3lxN",
"H1eEWxXcCm",
"SkeMu1XcAX",
"r1la-afcAm",
"HyxXPizcRX",
"HkgIpdGqC7",
"HygO5BG9CX",
"rJgnHrMqAQ",
"SJePeMzp2X",
"HklarYERom",
"r1xplpiajm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544761288040,
1543282684394,
1543282537806,
1543281924667,
1543281499467,
1543280829945,
1543280016463,
1543279939566,
1541378543178,
1540405572902,
1540369652701
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper887/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/Authors"
],
[
"ICLR.cc/2019/Conference/Paper887/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper887/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper887/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review for Neural Distribution Learning\"}",
"{\"title\": \"Why we had only one explicit baseline.\", \"comment\": \"> Also, while the HazardNet framework looks convenient, by using hazard and survival functions as discusses by the authors, it is not clear to me what are the benefits from recent works in neural temporal point processes which also define a general framework for temporal predictions of events. Approaches such at least like \\\"Modeling the intensity function of point process via recurrent neural networks\\\" should be considered in the experiments, though they do not explicitely model censoring but with slight adapations should be able to work well of experimental data.\\n\\nIn short, the main differences are not quantitative, and the metrics that can be used for comparing models differ between modeling strategies as the data used and type of predictions made by other methods are limited. \\n\\nAlso, the fact that many models would work with censored data using only slight adaptations makes it even more surprising why they seem not to take into account the particular problems that arise when having censored data.\\n\\nFurthermore in the cases when they would work they are are built for particular distributions, neural network architectures, probabilistic queries and types of data. The argument we make is therefore that;\\n\\n1) Direct comparison is uninteresting.\\nWe think that there are enough qualitative differences between models and enough questions about the PSA-Approach that the typical question; \\\"Which model performs best on some metric?\\\" may not be the most interesting one to ask just yet.\\n\\nOur proposition is more that there's many soft qualitative aspects with our approach (simplicity of comparing multitude of neural network architectures to name one) that does not fit into a table. \\n\\nWe reasoned that in order to make an interesting and fair comparison we need to factor out maximum amount of confounding factors such as model architecture, which we hard or impossible as most papers are usually about coupling model architecture, data modelling technique and predicted output (Sec. 1-2, 3.4, 4, 6)\\n\\nThe question we focused on was more \\\"Is the model unbiased and calibrated?\\\", which we did by comparing it to the binary case. We have not found any convincing work doing this before. We neither find the answers to those questions obvious. Even for the most similar looking work such as https://arxiv.org/abs/1809.02403 they have even come to completely different conclusions, such as the need to weight the censored and the uncensored loss terms. \\nWe found that Clipping log-likelihood and properly initializing output layer and making sure the assumptions of PSA holds is sufficient for numerically stable training. See discussion with reviewer #1.\\n\\n\\n2) Comparison is unfair.\\nReimplementing our generalized model for most papers using their architecture, predicted distribution, data and queries would often make that implementation mathematically identical to their model (Since they knowingly/unknowingly built upon the fundamental ideas of PSA). \\nConversely; reimplementing their models for new neural network architectures and making them work for censored discrete data or the type of queries our model supports would often transform them beyond recognition and it would be hard to argue that it still is their model.\\n\\nIf we take DeepHit as an example. 
It proposes an interesting (but arguably complex) method of joining different RNNs together at different time frequencies, each specialized for its own feature type (evenly spaced, asynchronous, event sequences, etc.). It predicts multiple types of output in different future time windows and adds different weighting schemes for the varying loss terms. They evaluate performance using MAE (which is not defined for censored data), following the paper Recurrent Marked Temporal Point Process (RMTP) (see comments to reviewer #1 about this). There are tens of examples like this.\\n\\nComparing, for example, DeepHit's RNN (tuned for their data reshaping and evaluation method, which we critique) on our data with some neural network architecture of our choice, on some metric not explicitly optimized for, would be misleading at best, and we wonder which question it would answer in the first place.\\n\\nOther methods that have been the subject of much recent debate include temporal point processes such as Hawkes Processes, which have a hand-crafted way of mediating past history to the parameters of the (particular) current distribution of time to event.\\nWe wanted to give examples of how to build new processes and distributions, not to verify this on all existing distributions manually. \\nFor our model, past events are just one of many possible feature inputs, and as an example we try to learn how to map them to the predicted parameters as well as possible using multilayered RNNs or CNNs. \\nOur only claim is that if the current predicted distribution has a Cumulative Hazard Function satisfying certain basic requirements (which a Hawkes process clearly does), it works fine.\"}",
"{\"title\": \"The task is generalized, with two concrete realizations (in experiment).\", \"comment\": \"> First of all, this paper should be more clear from the begining of the kind of problems it aim to tackle.\\n\\nWe highly value the feedback. An overarching theme was trying to be as general as possible, as we found much work being too specific when there's a much wider set of data, problems, and network architectures that can be utilized once we understand the fundamentals of predicting a parametric survival distribution.\\n\\nThere is however a very common problem domain that we are explicitly working on (Section 4) we also try to generalize the problem to the general task of predicting a probability distribution with possibly censored or discrete target data, and show how it can even improve on common sparse classification tasks (Section 5).\\n\\nCurrently we can\\u2019t release implementation and all experimental data without breaking double blind. Once this is done, with corresponding visualizations and data-manipulation should be a bit easier. We tried to make some amends to make it clearer.\\n\\n> It the current form, it is very hard to know what are the inputs, are they sequences of various kinds of events or only one type of event per sequence. It is either not clear to me wether the censoring time is constant or not and wether it is given as input (censoring time looks to be known from section 3.4.\\n\\nCensoring time is used to calculate censoring indicators, itself used for training. It typically varies with time. Input to the neural network (features) can be anything, output are parameters of a distribution. \\nSpecifically, for the experiment in section 4 it's sequences of features. The target (supplied during training) is a sequence of time to event (which will look like a countdown/sawtooth wave as in figure 2) and censoring indicators used for the loss function.\\nCensoring time for a sequence would be the time to the end of the sequence (so it's a countdown). Censoring indicators will thus vary.\\n\\nIn section 5, input is a 50 ms time-window of a spectrogram. Target is bivariate; the time to event and time since event (each with their respective censoring indicators). This was supposed to exemplify that with just a change in the output dimension and feature transformation, the same model may be used for something seemingly different like making multivariate predictions.\\n\\n> but in that case I do not really understand the contribution : does it not correspond to a very classical problem where events from outside of the observation window should be considered during training ? classical EM approaches can be developped for this).\\n\\nIt is true that this corresponds closely to the classical problem of, during training, considering whether events were *not* in the observation/data window (i.e censored). While we heard of no relevant EM-methods but consider our method as exactly developing on a classical approach Parametric Survival Analysis (PSA) which we found other work not sufficiently recognizing. 
\\nThe purpose is to say that, while there seem to be many shiny and complex solutions out there, let's first do an in-depth discussion of the classical approach.\\nIt's easy to see that most other papers can be considered derivative work of PSA, but we couldn't find anyone going into depth on the idea itself.\\n\\nOur general reasoning is that by thinking from the classical idea of PSA, many variations (our contributions) immediately follow and are easy to implement, such as predicting all parameters of the distribution like other Density Networks do, being able to discretize TTE, composing distributions to make other distributions, multivariate predictions, and architecture agnosticism, all the while making it fit well with popular probabilistic programming paradigms (Edward, PyTorch Distributions, Pyro).\\n\\n> The problem of unevenly spaced sequences should also be more formally defined.\\n\\nWe thought this was clear in the context of event-generated time series, no? Asynchronous vs. evenly spaced measurements for temporal models is a classic problem of this kind, which we discuss in Section 3.4. In the context of TTE, an additional confounding factor is that the lag between observations may be connected to what we want to predict.\"}",
"{\"title\": \"[3] Example: Seemingly similar PSA-work with provable misconception.\", \"comment\": \"# 3.\\nWe added this to Section 2. We thought it had many qualities, especially that it seems to be the first work comparing something very close to a (discrete) parametric survival approach with other approaches using what we understand as comparable NN-architectures for each loss function.\\n\\nIt was overlooked at first because this is one of many papers using weighting schemes for the loss function without evaluating the unbiasedness (read; calibration) of the approach. \\nIn short, the paper (as many others) suggests weighting the loss with '0<a<1' as:\\n\\n'loss = a*uncensored_part+(1-a)*censored_part'\\n\\nOne can show analytically that the weighting scheme will lead to biased predictions and we didn't want to comment on it prematurely as it was just published, but let\\u2019s make this point here.\\n\\nAs a proof, consider the case of the exponential distribution, with 'L' being the scale-parameter, 'Y~exp(L_real)', and 'a' being the proposed loss-weighting scheme, then the expected value of the gradient of the log-likelihood with censoring;\\n\\n'''\\nd/dL E[-a*<Y<c>* log[L]-u(1-a)min(Y,c)/L] =\\n F(c)[(1-a)-a*(L_real/L)]/L = *\\n'''\\nWhere '<Y<c>' is the iverson bracket, being '1' for an uncensored sample.\\n\\nIn other terms, with 'L_real=L' then '*=(1-exp(-c/L))[1-2a]/L' which needs to be '0', so it will converge to 'L_real=L' iff 'a=0.5'. Otherwise it converges to 'L = L_real*(1-a)/a'\\n\\nIf the math is not convincing, let's apply that weighting scheme just for parametric learning in a simple experiment:\\n\\n'''\\nimport torch\\nimport torch.nn as nn\\n\\ndef train_censored(a=0.5,c=1):\\n torch.manual_seed(1)\\n # True distribution is ~Exp(1)\\n L_real = torch.ones(1)\\n dist = torch.distributions.Exponential(rate=L_real)\\n L_log = nn.Parameter(torch.randn(1))\\n optimizer = torch.optim.SGD([L_log,],lr = 1)\\n for step in range(3000):\\n y = dist.sample((1000,))\\n\\n # Censored (Truncated) as [0,c)\\n y = y = torch.min(y,y*0+c)\\n u = (y<c).float() # \\\"non-censoring indicator\\\"\\n optimizer.zero_grad()\\n\\n L = L_log.exp() #\\n # Exponential Log-likelihood with \\\"weighted loss terms\\\"\\n loglik = -u*a*L.log()-(1-a)*y/L\\n loss = -loglik.mean()\\n\\n loss.backward()\\n optimizer.step()\\n print('Sought=',L_real.item(),'\\\\tResult:\\\\t',L.item(),'\\\\tExpected=',(L_real*((1-a)/a)).item())\\n \\ntrain_censored(a=0.5,c=3)\\n#>> Sought= 1.0 Result: 1.0122132301330566 Expected= 1.0\\ntrain_censored(a=0.9,c=3)\\n#>> Sought= 1.0 Result: 0.11424004286527634 Expected= 0.1111111119389534\\ntrain_censored(a=0.1,c=3)\\n#>> Sought= 1.0 Result: 8.986083030700684 Expected= 9.0\\n'''\\n\\nI.e only with equal (no) weighting will it converge to a correct estimate lambda = 1.\\n\\nWe think this example illustrates that there still seems to be some confusion about whether the basic approach we argue for (not weighting or manipulating the loss function) works in the first place, and whether it\\u2019s in fact the correct way of doing it. Considering that many of the apparent applications of these types of models are clinical, this is insufficient.\\n\\nThe point here is that maybe the fundamentals of survival analysis may not be as widely understood as we would like to think it is, and that there's still space to ask questions beyond comparing full implementations of models and rank them on some metric. 
Asking such questions is what we wanted to achieve with our paper.\\n\\n# 5.\\nThank you, we corrected this.\"}",
"{\"title\": \"[4] It's easy to use censored data - but few do it simply or correctly.\", \"comment\": [\"# 4.\", \"We agree fully with this and think its a strong reason motivating this work. We think that it's completely clear that most loss functions are special cases of parametric survival approach, and modifying the loss in this manner has been known at least since Moivre 1731. Yet we find no work treating it as such or answering basic questions underpinning it or follow what we find to be its implications.\", \"On the contrary, there seem to be a widespread belief for the need for applying different weighting schemes, adding extra loss-terms, carefully designing the target value, the network architecture, and similar. If our approach was obvious, then\", \"Why doesn't all work comment on the problem of censored data when it's inherent in the domain?\", \"Why doesn't everybody predict *all* parameters of some distribution (instead predicting only scale-parameter or similar)?\", \"Why are much work distribution-centric and network architectures designed specifically for particular distributions?\", \"Why is discrete data often treated as a special case?\", \"Why does the presentation/notation of the loss functions so widely differ?\", \"Why is it that there are many papers who uses Neural Networks (NNs) to explicitly model the hazard function but doesn't compare to parametric density network baselines (as the one we propose) with similar NN-architectures/number of parameters?\", \"Why are stochastic process (Temporal Point Process)-based models with restricted ways of taking account of the nonlinearities of feature data still considered relevant baselines to advanced NNs?\", \"Why are evaluations mainly limited to pointwise predictions?\", \"Why are pointwise-predictions evaluated with metrics that does not work for censored data (Ex MAE as for DeepHit)?\", \"Why is there so little discussion regarding the calibration of predicted distributions?\", \"Finally, why is a model with strictly increasing hazard function (RMTP [1]; \\u03bb(x) ~= exp(x/L) still considered among SOTA? No hazard function that we trained which can take that form (ex Weibull) seem to want take it for similar data. Hint: It's about evaluation. If one removes censored data one cuts the right tail of the empirical distribution so hazard is *by design* increasing in the training data since each sequence now ends with an event, so the RMTP hazard function fits by design.)\", \"The point is not to critique prior work, clearly the concepts of parametric survival analysis is widely known but maybe its general implications for how it effortlessly fits with density networks or discrete data is less known? It might have to do with Time To Event modelling being inherently complex and confusing.\", \"In relation to whether we should have added comparisons to other work we refer to the answer we gave to reviewer #3. In short, we would have wanted to compare explicitly to other work, but this was found hard or irrelevant to fit in. Instead we limited ourselves to digging deep into whether this general (Parametric Survival Approach) is correct by considering if it's performant (vs Binary classification) and calibrated.\", \"[1] https://www.kdd.org/kdd2016/papers/files/rpp1081-duA.pdf\"]}",
"{\"title\": \"[1-2] Clarifications on Mixtures, note on discretization.\", \"comment\": \"Thank you for your very relevant and insightful comments. We appreciate the comments and tried small changes to tie named sections together better. Some things were originally kept orthogonal by design, as distributions (3.1-3.3) and loss functions is quite independent from what kind of feature/target engineering takes places (3.4, 4) which we found previous work being unclear about.\\n \\nA small comment on the summary;\\n\\n> The framework allows different popular architectures to learn the representation of the past events with different explicit features.\\n\\nWhile this is true, we mainly focus on using features to learn representations of *future* events.\\n\\n# 1.\\nThis was also pointed out by other reviewers. We tried to fix this as our presentation was not clear. We did in fact run experiments for ParetoMixedHazards, LogisticMixedhazards, WeibullMixedHazards but did not have space to present results broken down by distributions other than in appendix. \\nThe was more to show that the CHF-perspective makes it easy to compose positive distributions that effortlessly interfaces with density networks, than to look into details of individual distributions. We tried to make this clearer with some of the edits. See comment to AnonReviewer1 and AnonReviewer2. \\n\\nIn the accompanying github repository (awaiting publication), we will release all logs from each individual experiment. With code released, it should also be clear how this machinery looks like in practice. We did not find a good method to release it yet without breaking a double-blind policy.\\n\\n# 2.\\nWe hope not to make a case for any optimal method of choosing the bin width or how to aggregate **features** while discretizing. We think this is an application & data-specific question and want to leave it as such, this is why we wanted to make it easy to make different choices. \\n\\nIn the experiments in Section 4, one of the *features* was 'log(1+count of events in prior timestep)'. While experience shows this is usually a good feature, it should be seen as an arbitrary feature generation step that we chose only because it could be done consistently for all datasets under consideration.\\n\\nTo aggregate/discretize **events**, we consider a timestep with *many events* as a time step with at least one event in. This also naturally fits into the discretization strategy.\\n\\nIf a future timestep (say in 'y' steps) has many events (i.e., is highly skewed), we hope that the current time to event distribution is predicted such that the hazard around that future timestep (equivalently;'\\u039b(y+1)-\\u039b(y)') is high. \\nIn the presented framework, this is left to the modeller as the problem of choosing a reasonable distribution for the task, good feature engineering and the choice of neural networks. We hope our framework makes this easy. Through experiments, we can say that using our approach they should get a calibrated prediction when querying 'Pr(Y<=y)' and that this approach is better or at least as good as modelling it as the binary task of whether events will happen in 'y' timesteps.\\n\\nThere are many design choices that have been found through hard gotten experience that are more style than science. It was hard to motivate them in the paper due to the space limit. We put much effort into harmonizing the notation and the perspective on time to event problems. (Too)Much can be said about this.\"}",
"{\"title\": \"final note on non-parametrics\", \"comment\": \"> A major baseline for mixture modeling is always non-parametric modeling.\\n\\nThis is true, which we agree with (see Section 6 and answer to reviewer #3). \\nWe do comment on the qualitative differences (Sec. 2,6) but could not find a reliable method of quantitative comparison. \\nIf we would have made a neural-architecture-independent, scalable semi-parametric model working for discrete (read *heavily tied*) sequential data (we found no such work), that would have been the main topic of the paper rather than treating it as a baseline. Instead we propose this as future work and would be very interested to see it. \\nFor particular model architectures and data, there's already plenty of such comparisons as you note. \\n\\nIn summary, we think that our paper may be an interesting and subtly controversial addition to the discussion on temporal point processes, survival analysis and neural networks and we hope that we've answered your questions.\"}",
"{\"title\": \"Binary predictions is a relevant baseline. Mixtures is just a part of our results.\", \"comment\": \"Thank you for your feedback.\\n\\n> If we are measuring the classification accuracy, there is a little justification for using survival analysis; we could use just a classification algorithm instead.\\n\\nA theme in the paper is to point out that predicting a distribution CDF is the same thing as making *all* classification predictions 'Pr(Y<y)' for every timestep 'y>0' ahead.\\n\\nWe think that our experimental results shows (surprisingly) that our survival model outperforms the classification algorithm on the classification task. Both for the arguably contrieved task of predicting specific timesteps ahead (Section 4) or whether a certain timeframe contains an event (Predicting zero steps ahead, section 5). The latter is a well known binary task in its domain which we solve with an arguably novel multivariate survival-formulation.\\n\\n> The evaluation is quite sub-par. Instead of reporting the standard ranking/concordance metrics, the authors report the accuracy of binary classification in certain future timestamps ahead.\\n\\nWe make a point of our evaluation approach to be non-standard, but we hope that our arguments for it and our critique against the standard evaluation methods for censored sequential problems (Section 3.4, Section 4 and results Section 6) makes sense. \\n\\nWe understand the standard Concordance Index (CI) to estimate how well two predictions are expected to be ordered. A good metric for the dominant paradigm of pointwise-predicting TTE (regression/ranking). \\nIn contrast, our model predicts a distribution so to answer questions of performance and calibration its arguably not a relevant/helpful metric in its commonly known form. \\nTo this goal we found the Binary model a good choice of baseline (Section 4), and since CI is not defined for binary predictions, we omitted it. \\nAs a sidenote, evaluating AUC on different predicted time-windows ahead as we do is tightly related to the time-specific AUC [1][2][3], in turn related to CI. \\n\\nWhile possible (but non-standard), we could have generalized CI to compare two predicted distributions directly as 'Pr(Y_i < Y_j)' with ground truth 'y_i<y_j' when available (\\\"concordant\\\"). This however restrics the parametric form of the distributions and is hence less general.\\n\\n> The authors also don\\u2019t report the result for non-mixture versions, so we cannot see the true advantages of the proposed mixture modeling.\\n\\nThis critique was pointed out by other reviewers, and we tried to edit to make this clearer.\\n\\nThe main questions we wanted to answer was not which network architecture or distributions where the best. It was more;\\n\\n- Does our Parametric Survival model produce calibrated & good predictions independent of choice of architecture and distribution?\\n- Is the this approach better or at least as good as explicitly modeling its binary subqueries? (classification approach)\\n\\nThe conclusions was a resounding *yes*.\\n\\nWe tested (but didn't report) all of the following:\\n\\n(3 datasets) x\\n(4 evaluation thresholds ) x\\n(3 network architectures) x\\n[ Binary x (Use last timesteps or not), \\nHazardNet x (4 distributions) x (MixedHazards or not)]\\n\\nSome per-distribution results can be found in Appendix (see Figure 9) but we could report all tabular statistics broken up by distributions too if interesting. 
\\n\\nWhile not the main question, one conclusion was that there was no significant improvement (see Appendix Fig. 9) from the more complex multimodal MixedHazards distributions. \\nThe reasons can only be speculated about, but it may give hints about the need for predicting fine-grained/expressive target distributions, which seems to be quite a concern of current research (consider DeepHit, Luck et al. [0], etc.).\\n\\nWhile we found this interesting and surprising in its own right, the main benefit we wanted to show was the ease of testing this in the first place using our framework.\\n\\n> the authors do not compare to the many existing deep hazard models such as Deep Survival [1], DeepSurv [2], DeepHit [3], or many variations based on deep point process modeling.\\n\\nWe do not explicitly test against these; see our answer to reviewer #3. \\nIt should be noted, on the other hand, that one of our findings (that binary models will be biased unless the last timesteps are removed) has consequences for all methods employing what we call \\\"classification\\\" or \\\"multitask\\\" approaches (e.g., DeepHit, [0], and more). We tried to clarify this in the revised version.\\nOur results imply that unless these methods preprocess data as we suggest, their results risk being heavily biased and uncalibrated. If not evaluated as we suggest, they won't see this. We can't find any paper commenting on this issue. \\n \\n[0] https://arxiv.org/abs/1705.10245\\n[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5384160/\\n[2] https://academic.oup.com/bib/article/16/1/153/238328\\n[3] https://www.mayo.edu/research/documents/biostat-80pdf/doc-10027891\"}",
"{\"title\": \"Re: Neural Distribution Learning for generalized time-to-event prediction\", \"review\": \"The authors propose a parametric framework (HazardNet) for survival analysis with deep learning where they mainly focus on the discrete-time case. The framework allows different popular architectures to learn the representation of the past events with different explicit features. Then, it considers a bunch of parametric families and their mixtures for the distribution of inter-event time. Experiments include a comprehensive comparison between HazardNet and different binary classifiers trained separately at each target time duration.\\n\\nOverall, the paper is well-written and easy to follow. It seeks to build a strong baseline for the deep survival analysis, which is an hot ongoing research topic recently in literature. However, there are a few weaknesses that should be addressed. \\n\\n1. In the beginning, the paper motivates the mixtures of distributions from MDN. Because most existing work focuses on the formulation of the intensity function, it is very interesting to approach the problem from the cumulative intensity function instead. Originally, it looks like the paper seeks to formulate a general parametric form based on MDN. However, it is disappointing that in the experiments, it still only considers a few classic parametric distributions. There is lack of solid technical connection between Sec 3.1, 3.2 and Sec 4.\\n\\n2. The discretization discussion of Sec 3.4 is not clear. Normally, the major motivation for discretization is application-driven, say, in hospital, the doctor regularly triggers the inspection event. However, how to optimally choose a bin-size and how to aggregate the multiple events within each bin is still not clear, which is not sufficiently discussed in the paper. Why is taking the summation of the events in a bin a proper way of aggregation? What if we have highly skewed bins?\\n\\n3. Although the comparison and experimental setting in Figure 4 is comprehensive, the paper misses a very related work \\\"Deep Recurrent Survival Analysis, https://arxiv.org/abs/1809.02403\\\", which also considers the discrete-time version of survival analysis. Only comparing with the binary classifiers is not quite convincing without referring to other survival analysis work.\\n\\n4. Finally, the authors state that existing temporal point process work \\\"have little meaning without taking into account censored data\\\". However, if inspecting the loss function of these work closely, we can see there is a survival term exactly the same as the log-cumulative hazard in Equation 3 that handles the censored case.\\n\\n5. A typo on the bottom of page 3, should be p(t) = F(t + 1) - F(t)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Deep Hazard Modeling with Mixture of Distributions\", \"review\": \"This paper proposes to use a mixture of distributions for hazard modeling. They use the standard censored loss and binning-based discretization for handling irregularities in the time series.\\n\\nThe evaluation is quite sub-par. Instead of reporting the standard ranking/concordance metrics, the authors report the accuracy of binary classification in certain future timestamps ahead. If we are measuring the classification accuracy, there is a little justification for using survival analysis; we could use just a classification algorithm instead. Moreover, the authors do not compare to the many existing deep hazard model such as Deep Survival [1], DeepSurv [2], DeepHit [3], or many variations based on deep point process modeling. The authors also don\\u2019t report the result for non-mixture versions, so we cannot see the true advantages of the proposed mixture modeling.\\n\\nA major baseline for mixture modeling is always non-parametric modeling. In this case, given that there are existing works on deep Cox hazard modeling, the authors need to show the advantages of their proposed mixture modeling against deep Cox models.\\n\\nOverall, the methodology in this paper is quite limited and the evaluation is non-standard. Thus, I vote for rejection of the paper.\\n\\n\\n[1] Ranganath, Rajesh, et al. \\\"Deep Survival Analysis.\\\" Machine Learning for Healthcare Conference. 2016.\\n\\n[2] Katzman, Jared L., et al. \\\"DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.\\\" BMC medical research methodology 18.1 (2018): 24.\\n\\n[3] Lee, Changhee, et al. \\\"Deephit: A deep learning approach to survival analysis with competing risks.\\\" AAAI, 2018.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Important clarifications should be given about the task and the model\", \"review\": \"The paper \\\"Neural Distribution Learning for generalized time-to-event prediction\\\" proposes HazardNet, a neural network framework for time-to-event prediction with right-censored data.\\n \\nFirst of all, this paper should be more clear from the begining of the kind of problems it aim to tackle. The tasks the proposal is able to consider is not easy to realize, at least before the experiments part. The problem should be clearly formalized in the begining of the paper (for instance in the introduction of section 3). It the current form, it is very hard to know what are the inputs, are they sequences of various kinds of events or only one type of event per sequence. It is either not clear to me wether the censoring time is constant or not and wether it is given as input (censoring time looks to be known from section 3.4 but in that case I do not really understand the contribution : does it not correspond to a very classical problem where events from outside of the observation window should be considered during training ? classical EM approaches can be developped for this). The problem of unevenly spaced sequences should also be more formally defined. \\n\\nAlso, while the HazardNet framework looks convenient, by using hazard and survival functions as discusses by the authors, it is not clear to me what are the benefits from recent works in neural temporal point processes which also define a general framework for temporal predictions of events. Approaches such at least like \\\"Modeling the intensity function of point process via recurrent neural networks\\\" should be considered in the experiments, though they do not explicitely model censoring but with slight adapations should be able to work well of experimental data.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryzECoAcY7 | Learning Multi-Level Hierarchies with Hindsight | [
"Andrew Levy",
"George Konidaris",
"Robert Platt",
"Kate Saenko"
] | Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions. In order to realize this potential of faster learning, hierarchical agents need to be able to learn their multiple levels of policies in parallel so these simpler subproblems can be solved simultaneously. Yet, learning multiple levels of policies in parallel is hard because it is inherently unstable: changes in a policy at one level of the hierarchy may cause changes in the transition and reward functions at higher levels in the hierarchy, making it difficult to jointly learn multiple levels of policies. In this paper, we introduce a new Hierarchical Reinforcement Learning (HRL) framework, Hierarchical Actor-Critic (HAC), that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies. The main idea behind HAC is to train each level of the hierarchy independently of the lower levels by training each level as if the lower level policies are already optimal. We demonstrate experimentally in both grid world and simulated robotics domains that our approach can significantly accelerate learning relative to other non-hierarchical and hierarchical methods. Indeed, our framework is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces. | [
"Hierarchical Reinforcement Learning",
"Reinforcement Learning",
"Deep Reinforcement Learning"
] | https://openreview.net/pdf?id=ryzECoAcY7 | https://openreview.net/forum?id=ryzECoAcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJl4ztWXeN",
"r1eBZZkM1V",
"SJgUq8V20Q",
"SkepZSziR7",
"B1es8hEXA7",
"S1eu7XbQ07",
"ryeYJ7Z7Rm",
"HklGNKpgAX",
"BJxw72d92Q",
"rylY0Tmc37",
"Byg24c5_37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544915212443,
1543790845049,
1543419534375,
1543345413351,
1542831187332,
1542816543566,
1542816480551,
1542670634236,
1541209119113,
1541189073183,
1541085747943
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper886/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper886/Authors"
],
[
"ICLR.cc/2019/Conference/Paper886/Authors"
],
[
"ICLR.cc/2019/Conference/Paper886/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper886/Authors"
],
[
"ICLR.cc/2019/Conference/Paper886/Authors"
],
[
"ICLR.cc/2019/Conference/Paper886/Authors"
],
[
"ICLR.cc/2019/Conference/Paper886/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper886/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper886/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper886/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"As per R3: This paper presents a novel approach for doing hierarchical deep RL (HRL) on UMDPs by:\\n(a) use of hindsight experience replay at multiple levels; combined with (b) max T timesteps\\nat each level. By effectively learning from missed goals at multiple levels, it allows for fairly \\ndata-efficient learning and can (in principle) work for an arbitrary number of levels.\\nHRL is an important open problem.\\n\\nThe weaknesses described reviewers include limited comparisons to other HRL methods; its applicaiton to fairly simple domain;\\nits still unclear what the benefit of >=4 levels is, and what the diminishing returns are wrt to the claim of working\\nfor an arbitrary number of levels. R1(5) and R3(7) stand by their scores. R1(5) still has some remaining concerns\\nregarding some experiments not being done across all tasks, an older version of the HAL algo baseline being used, and \\nlack of insight regarding >= 4 levels.\\n\\nBased on the balance of the reviewers comments and the AC's own reading of the paper and results, \\nand the importance of the problem, the AC leans towards accept. Using Hindsight Exp Replay across multiple levels\\nis a simple-but-interesting idea, and the terminate-after-T steps is an interesting heuristic to make this effective.\\nWhile the paper does not give insight for large (>=4) levels, it does make for an interesting framework that\\nwill inspire further work. The AC recommends that the claims regarding an \\\"arbitrary number of levels\\\" be significantly toned down.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"lean towards accept\"}",
"{\"title\": \"Response to Remaining Reviewer Concerns\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nIn the post below, we address many of the Reviewers\\u2019 remaining concerns.\\n\\nR1/R3 - How does Hierarchical Actor-Critic (HAC) perform with 4-level agents?\\n\\nWe have not run any experiments with 4-level agents using the latest iteration of HAC. However, we did previously run an experiment with 4-level agents in the inverted pendulum task using a slightly different version of HAC. In this older version of HAC, the subgoal space was the concatenation of the desired Cartesian (x,y,z) end-effector position and the desired angular velocity of the pendulum (i.e. a 4-dim subgoal space). Whereas in the latest version of HAC, the subgoal space in the inverted pendulum task is just the state space, which is the 2-dim vector including the angle of pendulum and the angular velocity of the pendulum. A video of the 4-level agents in the inverted pendulum task is available at https://www.youtube.com/watch?v=Q_NGMkQ29oU . With the older version of HAC, we did find that the 4-level agents outperformed agents using 1, 2, and 3 levels of hierarchy in the inverted pendulum task. \\n\\n\\nR2/R3 - Can you provide ablation tests examining subgoal testing? (R3) How does HAC perform without subgoal testing? (R2) Does exploration noise need to be turned off when subgoal testing?\\n\\nWe have not implemented any ablation tests that examine subgoal testing using the latest version of HAC. However, we did perform limited ablation tests using previous versions of the algorithm, and the results do support our implementation of subgoal testing. With these previous versions of the algorithm, agents that did not use subgoal testing were not able to solve the UR5 Reacher task. Similarly, we did previously implement an agent in the inverted pendulum task that penalized all missed subgoals. In the single trial that was implemented, performance was significantly worse than when noise was turned off during subgoal testing. This latter result may have occurred because penalizing all missed subgoals even when exploration noise is added may disincentivize a level from setting distant subgoals as these are more likely to be missed when a level uses a noisy policy. When a level needs to set closer subgoals, the level needs to learn a longer sequence of subgoal actions, which can slow learning. \\n\\n\\nR3 - Is it necessary to use the same value for the policy limit parameter H (\\u201cT\\u201d in initial draft) for each level?\\n\\nNo, the policy limit parameter for each level can be set to whichever value the user prefers. We designed HAC to use a single value for H for two reasons. First, using different values of H for each level may hurt the ability of agents to equitably divide the task amongst its levels. Different values for H will mean that one level needs to learn a longer sequence than the others, and learning may slow if this difference is large. Second, a single value for all levels limits the number of hyperparameters that need to be set by the user. \\n\\n\\nR1 - What if the goals are unknown?\\n\\nWe assume the scenario in which the agent is given a set of goal states to learn to achieve. However, this set of goal states can include all possible goal states so the designer is not required to specify a particular set of goals.\"}",
"{\"title\": \"Response to Anonymous Reviewer\", \"comment\": \"Dear Anonymous Reviewer,\\n\\nThank you for your feedback and patience in awaiting our response. \\n\\nI have read through the papers you suggested and agree that these should be cited. In the revised paper, I have highlighted reference 2 in http://people.idsia.ch/~juergen/subgoals.html as another Hierarchical Reinforcement Learning (HRL) algorithm that can learn in tasks that use continuous state and action spaces. I have also listed the papers introducing HQ-Learning (reference 6) and HASSLE (reference 10) in the related work section as other HRL methods. In the final draft of the paper, I will add the other versions of the above papers, references 1, 3, 5, and 11.\\n\\nRegarding our claim, we have adjusted our statement to be more precise. In the revised paper, we claim that \\u201cOur framework is the first HRL approach to show results in which 3-level agents outperform both 2-level and 1-level agents in tasks with continuous state and action spaces.\\u201d The purpose of this statement is to highlight that our framework has not only shown the first results of working 3-level agents in continuous domains, but that the framework can actually use the extra levels of hierarchy to improve performance as a result of its ability to learn policies in parallel.\"}",
"{\"title\": \"Revisions and related work\", \"comment\": \"The added comparison to the HIRO in the related work and experiment sections is nice, and addresses my main concern with the paper. I'm maintaining my score of 7 - Accept.\"}",
"{\"title\": \"Comparison results added to revised paper\", \"comment\": \"We have added the graphs containing our comparison results to the revised paper (see page 9). Please note that we will be making further revisions to the paper.\"}",
"{\"title\": \"Hierarchical RL Algorithm Comparison Implemented (2 of 2)\", \"comment\": \"The second key difference is how each algorithm handles the non-stationarity of the higher-level state transition functions. Consider the situation in which a two-level agent is in state A with a task goal of state C. The higher-level proposes state B as a subgoal, but in less than T low-level actions, the agent does end up achieving the goal state C. Due to its strategy of replacing the original subgoal action with the subgoal state that was achieved, the higher level within the HAC agent would receive the transition [state = A, subgoal action = C, reward = 0, next state = C, goal = C]. In other words, the upper level within HAC would replay the episode as if it had originally proposed state C as the subgoal. On the other hand, HIRO\\u2019s off-policy correction strategy chooses the action component in the higher level transition to be the subgoal state that would most likely cause the sequence of (state, action) tuples that occurred at the lower level when the low-level policy was trying to achieve the original subgoal state B. In the example above, this will most likely be state B or some other state that is not state C. Thus, the higher level HIRO agent will likely receive the transition [state = A, subgoal action = B, reward = cumulative low level reward, next state = C, goal = C]. We believe that using state B or some other state that is not state C as the subgoal action component is a critical error because this transition likely will not be valid in the future, while the transition passed to HAC likely will. As the lower level continues to improve, at some point proposing subgoal state B will not result in the agent ending in state C. This renders the update HIRO made obsolete. At the same time, the lower level policy should be able to learn to move the agent from state A to at least close to state C. When this occurs and HIRO again needs to decide on which subgoal state action to choose for the above high-level transition, it is likely that it will choose state C as the subgoal that would most likely cause the low-level (state, action) tuples that previously occurred given the current low level policy. The higher level in HIRO would then make a similar update to the one that the higher level within HAC made possibly much earlier in training.\\n\\nThe main result here is that because HAC agents replace the original subgoal with the actual subgoal state achieved, HAC agents can learn all policies within the hierarchy in parallel. On the other hand, HIRO may need to wait for the policy at one level to converge before it can accurately train the policy at the next higher level, which should cause HIRO to learn more slowly than HAC. This also has important consequences for adding more levels to the hierarchy. Because HAC learns all policies in parallel, adding more levels can be helpful as it can shorten the sequence of actions that each level needs to learn. However, when one policy is learned at a time as in HIRO, there is less of a benefit to inserting additional levels into the hierarchy. \\n\\n\\n[1] O. Nachum, S. Gu, H. Lee, and S. Levine, \\u201cData-efficient hierarchical reinforcement learning,\\u201d CoRR , vol. abs/1805.08296, 2018.\\n\\n[2] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. CoRR, abs/1703.01161, 2017. 
URL http://arxiv.org/abs/1703.01161.\\n\\n[3] Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. CoRR ,\\nabs/1609.05140, 2016. URL http://arxiv.org/abs/1609.05140 .\\n\\n\\n[4] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba, \\u201cHindsight experience replay,\\u201d in NIPS , 2017.\\n\\n[5] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning , volume 37 of Proceedings of Machine Learning Research , pp. 1312\\u20131320, Lille, France, 07\\u201309 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/\\nschaul15.html .\"}",
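To make the hindsight action replacement described above concrete, here is a minimal illustrative sketch (function and variable names are our own, not HAC's actual implementation; the sparse 0/-1 reward convention is an assumption):

```python
# Hypothetical sketch of HAC-style hindsight action replacement.
def hac_high_level_transition(state, proposed_subgoal, achieved_state, goal):
    """Build the higher-level transition stored after the lower level
    finishes (or exhausts its T actions). The proposed subgoal action is
    replaced by the subgoal state actually achieved, so the transition
    remains valid even as the lower-level policy changes during training."""
    reward = 0.0 if achieved_state == goal else -1.0  # assumed sparse reward
    return {
        "state": state,
        "subgoal_action": achieved_state,  # hindsight action replacement
        "reward": reward,
        "next_state": achieved_state,
        "goal": goal,
    }

# The example from the discussion: start in A, higher level proposes B,
# but the agent reaches the task goal C within T low-level actions.
t = hac_high_level_transition("A", proposed_subgoal="B",
                              achieved_state="C", goal="C")
assert t["subgoal_action"] == "C" and t["reward"] == 0.0
```

This mirrors the transition [state = A, subgoal action = C, reward = 0, next state = C, goal = C] described above, whereas HIRO's relabeling would instead keep a subgoal action near B.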
"{\"title\": \"Hierarchical RL Algorithm Comparison Implemented (1 of 2)\", \"comment\": \"Dear Reviewers, Anonymous Reviewer, and Area Chair,\\n\\nThank you for your helpful feedback. In this note, we would like to provide an update on the hierarchical RL comparison we have implemented. We will post a separate comment shortly addressing the other questions and concerns the reviewers and anonymous reviewer had.\\n\\nWe have completed a comparison to the algorithm HIRO (HIerarchical Reinforcement learning with Off-policy correction)[1]. We chose to compare to HIRO because (i) the algorithm has been peer-reviewed as it was recently accepted to the NeurIPS (NIPS) 2018 conference and (ii) the method has proven that it can outperform the other leading hierarchical RL algorithms that work in continuous state and action spaces: FuN (FeUdal Networks)[2] and Option-Critic[3]. The paper introducing HIRO is available at https://arxiv.org/abs/1805.08296.\\n\\nWe compared the two-level version of our approach, Hierarchical Actor-Critic (HAC), to HIRO, which by design has two levels, on both the relatively easy inverted pendulum task and the relatively more difficult UR5 reacher task. In both tasks, the two-level version of HAC outperforms HIRO. The outperformance is particularly substantial in the UR5 reacher task as HIRO is unable to maintain a goal achievement success rate > 0%, while the two-level version of HAC can achieve a 90+% success rate in around 900 episodes. We will provide these performance comparison charts in the revised paper.\\n\\nBelow we discuss two key differences between the algorithms that we believe enable HAC to outperform HIRO. We will explain the remaining key differences in the revised paper. \\n\\nThe first significant difference between the algorithms is that HIRO does not use Hindsight Experience Replay (HER)[4] at the upper level of the hierarchy. HIRO also does not use HER at the lower level and instead uses a distance-based reward function. The data augmentation technique HER is critical because it helps the upper-level Universal Value Function Approximator (UVFA)[5] learn about the (state, action, goal) tuples that should have high Q-values. In other words, HER helps the higher-level UVFA learn about the helpful subgoal actions that can move the agent from the current state to goals throughout the goal space. This is important because the UVFA can then attempt to transfer these high Q-values to the subgoal actions that are relevant for achieving the higher level\\u2019s current set of task goals. HER is particularly important for relatively difficult tasks, such as UR5 Reacher, in which it is unlikely that the goal can be achieved with a random policy. Without HER in these situations, because the sparse reward is rarely achieved and because there are no high Q-values to transfer, the UVFA will likely be unable to assign high Q-values to the subgoal actions that can solve the task, which should slow down learning.\"}",
"{\"title\": \"mixed reviews; author response? further input from reviewers?\", \"comment\": \"Thank you to the assigned reviewers and the anonymous reviewer for your feedback.\\nIt would be good to hear a response from the authors. Similarly, the reviewers can also provide additional comments after having read the other reviews. Now is the time for further discussion.\", \"open_issues_include\": \"(i) comparison to other HRL methods and claims of originality; (ii) what is the impact of having >2 levels?\\n\\nArea chair\"}",
"{\"title\": \"The technique is sound and demonstrated good performance on a range of RL tasks, however its significance is not fully demonstrated.\", \"review\": \"Pros:\\n1. A nice idea combining universal MDP formulation and Hindsight experience replay for HRL that can deal with hierarchies with more than two levels of policies in continuous tasks.\\n2. Good empirical results\", \"cons\": \"1. One limitation of this work is that the goal set is known. What if the goals are unknown?\\n\\n2. The current domains seems relative simple comparing other existing papers on HRL, hence it is hard to tell the significance of the method.\\n\\n3. It Lacks thorough experimental analysis. Some comments are suggestions are provided here.\\n---Since the proposed framework can deal with arbitrary level of hierarchies, it might be better to include include an experiment comparing the more than 2 subgoal layers. This will help understand whether there is any diminishing return by increasing the number of layers.\\n\\n---What kind of policy representations and hyperparameters of the training algorithm are used? Are they the same for different domains? Some critical details and some ablation test should be provided.\\n\\n---The paper can also be strengthened if some comparisons to other HRL methods can be included.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very nice approach for hierarchical RL\", \"review\": \"This paper presents a novel approach for doing hierarchical deep RL. Each level of the hierarchy is rewarded for reaching a goal state. The top level's goal state is the environment goal, lower level goals states are the actions of the higher levels. The lowest level's actions are primitive actions. Each level can act until it reaches it goal or a maximum of T steps. Then HER is used to still learn from missed subgoals. For example, if the lowest level is given a subgoal and fails to achieve it, it is trained with a new experience where the goal was the achieved state. In addition, the level above is trained with an experience where the action it chose (the subgoal that was not achieved) is replaced with the subgoal that was achieved. So HER is replacing goals on one level and replacing actions on the higher level. The paper shows nice empirical results across 6 domains.\", \"the_two_main_differences_from_prior_work_are\": \"1. Explicit constraint on how long the policies at each level can be.\\n2. Use of HER in a novel way (on goals and actions) to learn from failed attempts at reaching subgoals from lower levels.\\n\\nThe use of HER in this work is really powerful and everything fits together nicely to make it work.\\n\\nThe only un-satisfying part of the algorithm is the need for a subgoal testing phase. Some actions are randomly decided to be testing phases, where all exploration is turned off at lower levels and the agent at the level selecting that subgoal is given negative reward if the subgoal is not achieved. This feels a bit unnatural to me. Does it not work if you punish a level for selecting a failed subgoal even if exploration is on? Does this phase unnecessarily punish levels for selecting subgoals that aren't reached early in learning, where even with no exploration a lower level may not have learned to reach the subgoal yet?\\n\\nThe main drawback of the paper is that there is no empirical comparison to related work. Instead the approach is only compared to doing learning with no hierarchy. Still, in all 6 domains, there is a clear improvement to using the hierarchy vs a flat hierarchy.\", \"pros\": [\"Nice approach for hierarchical deep RL\", \"Great use of HER to improve subgoal learning\", \"Good empirical results showing benefit of approach over flat learning\"], \"cons\": [\"No empirical comparison to related work\", \"Subgoal testing phase seems a bit hacky.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An HRL framework with an arbitrary number of levels\", \"review\": \"This paper proposed a framework that can improve the performances of reinforcement learning algorithms in tasks that involve long time horizons and sparse rewards. The proposed method is a hierarchical reinforcement learning framework that can use policy hierarchies with an arbitrary number of levels. To improve the sample efficiency in the learning process, the authors proposed to apply the hindsight experience replay mechanism at each level. Also, in order to avoid the actor function to output an unrealistic subgoal, the authors proposed the subgoal testing technique.\\n\\nThe proposed framework is interesting. And the example in Section 3.5 clearly demonstrate how this framework works. The authors proposed to solve a UMDP by solving a hierarchy of k UMDPs, where k is a hyperparameter. Each level (except for the bottom most level) will output subgoal states for the next level to achieve. This hierarchy is reasonable and easy to understand. However, from the definition on Page 3, it seems that all of the intermediate levels i (the case where 0 < i < k - 1) has the same state and action spaces. They are all equal to the state set of the original UMDP. Under this setting, will adding more intermediate levels help improve the performance a lot? We only see results with at most one intermediate level in the experiment. It will be better if the authors can show results on more levels (i.e. at least 4 levels in total). \\n\\nMoreover, the proposed framework has a policy limit parameter T, meaning that we only consider if a goal can be achieved within T steps or not, at each level. Is this parameter necessary to be the same for all levels? Also, it will be better if the author can show some results on the performances of the proposed method according to different values for T. The authors also proposed the subgoal testing technique. It is also better if the authors can show some performance comparisons on the cases with and without this technique.\\n\\nThe authors claimed that their method has the advantage over some existing HRL methods (e.g. the Option-Critic Architecture [1]) that their method can use policy hierarchies with an arbitrary number of levels while these methods can only use policy hierarchies with two levels. In the experiments, the authors also showed that, in some of their experiments, the 3-layer agent (with 2 subgoal layers) outperforms the 2-layer agent (with 1 subgoal layer), under their framework. However, the authors did not compare their 2-layer agent's performance with these existing HRL methods, which means that we do not know if their 3-layer agent's performance is better than that of some of the existing 2-layer agent methods. In addition to that, as I mentioned before, it is better if the authors can show experiment results on more levels (e.g. 4 levels and more) to show that their method can perform well in practice for policy hierarchies with many levels.\", \"references\": \"[1] Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. CoRR, abs/1609.05140, 2016.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJxNAjC5F7 | Learning Hash Codes via Hamming Distance Targets | [
"Martin Loncaric",
"Ryan Weber",
"Bowei Liu"
] | We present a powerful new loss function and training scheme for learning binary hash codes with any differentiable model and similarity function.
Our loss function improves over prior methods by using log likelihood loss on top of an accurate approximation for the probability that two inputs fall within a Hamming distance target.
Our novel training scheme obtains a good estimate of the true gradient by better sampling inputs and evaluating loss terms between all pairs of inputs in each minibatch.
To fully leverage the resulting hashes, we use multi-indexing.
We demonstrate that these techniques provide large improvements on similarity search tasks.
We report the best results to date on competitive information retrieval tasks for Imagenet and SIFT 1M, improving recall from 73% to 85% and reducing query cost by a factor of 2-8, respectively. | [
"information retrieval",
"learning to hash",
"cbir"
] | https://openreview.net/pdf?id=rJxNAjC5F7 | https://openreview.net/forum?id=rJxNAjC5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkxzmi6lx4",
"SJeZZiNtAm",
"Hyl1p4zYRX",
"S1e4CbbYR7",
"r1gyNgoyTX",
"BJlMlHIJ67",
"H1ljGMwA3Q",
"S1l19KUA27",
"rkxP4RvF3X",
"ByxcPXfOim",
"rJxr2gvHom",
"HkxzzDtL9Q",
"Sye0dUFAKX",
"rJgytH0atQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544768281603,
1543224056686,
1543214263043,
1543209419973,
1541546023136,
1541526762102,
1541464595146,
1541462406703,
1541140014534,
1540002658416,
1539825836981,
1538852617939,
1538328182067,
1538282871303
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper885/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"ICLR.cc/2019/Conference/Paper885/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper885/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"ICLR.cc/2019/Conference/Paper885/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"ICLR.cc/2019/Conference/Paper885/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper885/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper885/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes learning a hash function that maps high dimensional data to binary codes, and uses multi-index hashing for efficient retrieval. The paper discusses similar results to \\\"Similarity estimation techniques from rounding algorithms, M Charikar, 2002\\\" without citing this paper. The proposed learning idea is also similar to \\\"Binary Reconstructive Embedding, B. Kulis, T. Darrell, NIPS'09\\\" without citation. Please study the learning to hash literature and discuss the similarities and differences with your approach.\\n\\nDue to missing citations and lack of novelty, I believe the paper does not pass the bar for acceptance at ICLR.\", \"ps\": \"PQ and its better variants (optimized PQ and cartesian k-means) are from a different family of quantization techniques as pointed out by R3 and multi-index hashing is not directly applicable to such techniques. Regardless, I am also surprised that your technique just using hamming distance is able to outperform PQ using lookup table distance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs improvement\"}",
"{\"title\": \"Supervised clarification and practical precision/recall/speed\", \"comment\": \"You are absolutely correct that PQ is typically used as an unsupervised method. In this context of learning to hash, supervised is often used to mean that the model learned from the same dataset that queries are made against, whereas unsupervised is used to mean that the model learned from a different dataset. In our SIFT 1M example, both PQ and HDT-E are trained in an unsupervised way.\\n\\nWe do indeed use similarity labels, but we make them by discretizing the distances between SIFT vectors into 0's and 1's, which is strictly less information than the exact distances used in training PQ. We assert that discretizing these distances should not be considered applying additional labeled data. We will clarify this in the paper.\\n\\nWe are currently using HDT on a growing dataset of over 1 million hours of video (>1 petabyte), hashing it at 6 to 16 FPS for over 20 billion hashes with 32-bit hash substrings. Our use case is reverse video lookup. Every hour, we query over 10 million frames of incoming video against this multi-indexed dataset, using a small database instance (6 nodes) on commodity hardware.\\n\\nOn a per-frame level, we achieved 81% recall and a false positive rate of 10^-10. By considering nearby frames, our recall is somewhat higher, and precision is unmeasurably close to 1.\\n\\nSince this is a near-neighbor task, high performance is synonymous with low frame-wise false positive rate. This product has seen substantial iteration. Our original implementation using a wavelet embedding had a per-frame recall of 70% and a false positive rate of 9x10^-8. While we could usually filter out false positives by considering nearby frames, returning hundreds of them slowed performance by a factor of roughly 10. When we switched to a hash trained with HDT, this problem dissolved.\"}",
"{\"title\": \"Response to Author's Questions\", \"comment\": \"I appreciate the authors' response. I've increased my score to 6, as most of my smaller questions have been resolved.\"}",
"{\"title\": \"thanks\", \"comment\": \"I still feel the proposed method is a supervised algorithm (with similarity/dissimilarity labels) while PQ is unsupervised. Seems the other reviewers have similar opinions so the future version could clarify this.\\n\\nI would like to raise my rating to marginally above acceptance if the paper can describe convincingly how the algorithm is used in products, especially of both precision/recall and speed.\"}",
"{\"title\": \"Addressing your concerns\", \"comment\": \"Thank you for your thoughtful review. We have used HDT in production systems with datasets of over 10 Billion rows and achieved average retrieval times of <2ms, so it is quite practical. We can clear up most of your concerns easily:\\n\\n1. The comparison against PQ is fair because both are provided only the \\\"train\\\" set from the SIFT1M dataset; neither receives the query samples. As described in section 3.2, we defined similarity within the train set by each elements 10 nearest neighbors in the train set, but this is not additional information.\\n\\nPQ also \\\"learns\\\" a codebook, relying on the assumption that training and testing datasets will be similar. There may be cases where it better handles novel data points, but HDT performed much better on a common benchmark nonetheless.\\n\\n2. As mentioned, we have used HDT with datasets of over 10 Billion rows and achieved average retrieval times of <2ms. Expected memory usage for kNN is simply O(k) by streaming through query results. Expected space to store the indexed dataset is O((r+1)Nn), where r is the Hamming radius and n is the number of bits per hash. Since r is practically no more than ~5, and hash bit size is small, this is not much of a concern.\\n\\n3. We plot different values of \\\\lambda in Figure 4 to demonstrate how the recall/performance tradeoff varies, so that should address part of your question. p_0 is simply chosen such to extrapolate values of likelihood less that 10^{-100}. As \\\\lambda_w is simply a regularization term, we did think it warranted an entire table or plot to compare.\\n\\nHow would you recommend comparing against \\u201cBillion-scale similarity search with GPUs\\u201d? We are familiar with this work, but its main feature and results are for exact kNN with powerful computing resources, so there is no interesting recall/performance comparison to draw.\\n\\n4. We have just included this in the latest draft.\"}",
"{\"title\": \"Interesting intuition but still far from a real-world solution\", \"review\": \"This paper is about learning to hash. The basic idea is motivated by the intuition: given points z_i and z_j on the hypersphere, the angle between the two points is arccos(z_i \\\\dot z_j), while the probability that a random bit differs between them is arccos(z_i \\\\dot z_j)/\\\\pi. This leads to a nice formulation of learning Hamming Distance Target (HDT), although the optimization procedure requires every input has a similar neighbor in the batch.\\n\\nThe minor issue of this paper is that the writing should be polished. There are numerous typos in paper citing (e.g., Norouzi et al in the 3rd page is missing the reference; Figure 3.2 in the 7th page should be Figure 3; and a number of small typos). But I believe these issues could be fixed easily.\\n\\nThe major issue is how we should evaluate a learning to hash paper with nice intuition but not convincing results. Below are my concerns of the proposed approach.\\n\\n1. Learning to hash (including the HDT in this paper) and product quantization (PQ) are not based on the same scenario, so it is unfair to claim hashing method outperforms PQ.\", \"most_learning_to_hash_methods_requires_two_things_in_the_following\": \"a) the query samples\\nb) similar/dissimilar samples (or we can call them neighbors and non-neighbor) to the query\\n\\nPQ does not require a) and b). As a result, in PQ based systems, a query can be compared with codewords using Euclidean distance, without mapping to a hash code. This is important especially for novel queries, because if the system does not see similar samples during training, it will probably fail to map such samples to good hash codes. \\n\\nSuch advantage of PQ (or other related quantization methods) is important for real-world systems, however, not obvious in a controlled experiment setting. As shown in the paper, HDT assumes the queries will be similar in the training and testing stages, and benefits from this restricted setting. But I believe such assumption may not hold in real systems. \\n\\n2. It is not clear to me that how scalable the proposed method is.\\n\\nI hope section 1.2 can give analysis on both **space** and time complexity of Algorithm 2. It will be more intuitive to show how many ms it will take to search a billion scale dataset. Currently I am not convinced how scalable the proposed algorithm is. \\n\\n3. Implementation details\\nIn page 5, it is not clear how the hyper parameters \\\\lamda, \\\\lamda_w and p_0 are selected and how sensitive the performance is. I am also interested in the comparison with [Johnson Dooze Jegou 2017] \\u201cBillion-scale similarity search with GPUs\\u201d.\\n\\n4. Missing literature\\nI think one important recent paper is \\u201cMultiscale quantization for fast similarity search\\u201d NIP 2017\\n\\n\\nTo summarize, I like the idea of this paper but I feel there are still gap between the current draft and real working system. I wish the submission could be improved in the future.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Addressing your concerns\", \"comment\": [\"Thank you for your thoughtful review. We can clear up most of your concerns easily:\", \"Unfortunately different benchmarks in this field use different optimization criteria, so out of input/output/goals, only output can be clearly defined (and we define it to be a binary hash code). A given problem may choose any specific input, such an image, audio clip, or vector. The goals vary wildly, and our benchmarks used MAP and recall vs. query cost. We will work to make this clearer in the final draft.\", \"The datasets we compare against are common choices, and we believe they are sufficient (and our % improvements large enough) for the results to be clear.\", \"Good question. The advantage of comparing the angle between points (rather than Euclidean distance) is that we are able to build a good statistical approximation of the conditional distribution of Hamming distances. Any such model must assume some distribution for the embedding. We believe a uniform distribution on the hypersphere is most natural, since the binarized hash codes depend only on the direction of the embedding point, not its magnitude.\", \"While Gong et al. also work with a hypersphere, their approach is very different, choosing a projection that optimizes binarization loss, rather than log likelihood of falling within a desired Hamming distance, which we argue better represents the true optimization goal. We will mention them in our final version.\", \"A public commenter already mentioned this, so please refer to that chain. Cao et al. updated those figures later, after Lu et al. published their paper. We will include the updated figures in our final version. Lu et al.'s results simply compare against Cao et al.'s reported results. Both papers use the same subsets of ImageNet and are directly comparable. We believe using the same dataset splits as previous authors makes our results stronger, showing that we did not cherry-pick the dataset.\", \"This is a great suggestion.\"]}",
"{\"title\": \"Yes\", \"comment\": \"\\\"There is one key issue: in general hashing is not good at multi-indexing search for vector-based search in the Euclidean distance or Cosine similarity.\\\": All multi-index search relies on hash codes, so we are not quite sure what you mean. You may argue that multi-indexing has underperformed on Euclidean or Cosine similarity tasks in the past, but it should be clear from our abstract that our approach (HDT) refutes that.\\n\\n\\\"The advantage of hashing is reducing the code size and thus memory cost, but it is still not as good as quantization=based approach.\\\" - Quantization approaches also create hash codes; to quote Jegou et. al.'s Product Quantization paper, \\\"Formally, a quantizer is a function q mapping a D- dimensional vector x \\u2208 RD to a vector q(x) \\u2208 C = {ci;i \\u2208 I}, where the index set I is from now on assumed to be finite: I = 0...k \\u2212 1.\\\" Again, as stated in our abstract abstract, our method outperforms Product Quantization on its own benchmark. \\n\\n(1). The MAP@1000 criterion is defined based on the top 1000 results by Hamming distance (section 3.1). This is achieved for all models by a linear scan.\\n(2). The product quantization search is as defined in Jegou et. al.; it does not use multi-indexing. Our HDT-E does use multi-indexing. Recall is not the number of distance comparisons; to quote section 3.2, \\\"Metrics used are SIFT 1M recall@100 vs. number of distance comparisons, a measure of query cost\\\".\"}",
"{\"title\": \"Solid idea in the learning to hash area, needs further development\", \"review\": \"Summary: This paper contributes to the area of learning to hash. The goal is to take high-dimensional vectors in R^n resulting from an embedding and map them to binary codewords with the goal of similar vectors being mapped to close codewords (in Hamming distance). The authors introduce a loss function for this problem that's based on angles between points on the hypersphere, relying on the intuition that angles corresponds to the number of times needed to cross to the other side of the hypersphere in each coordinate. This is approximately the Hamming distance under a simple quantization scheme. The loss function itself forces similar points together and dissimilar points apart by matching the Hamming distance to the binomial CDF of the angle quantization. They also suggest a batching scheme that enforces the presence of both similar and dissimilar matches. To confirm the utility of this loss function, the authors empirically verify similarity on ImageNet and SIFT.\", \"strengths\": \"The main idea, to match up angles between points on the hypersphere and Hamming distance is pretty clever. The loss function itself seems generally useful.\", \"weaknesses\": \"First, I thought the paper was pretty difficult to understand without a lot of background from previous papers. For the most part the authors don't actually state what the input/output/goals are, leaving it implied from the context, which is tough for the reader. The overall organization isn't great. The paper doesn't contain any theory even for simplified or toy cases (which actually seems potentially tractable here); there is only simple intuition. I think that is fine, but then the empirical results should be extensive, and unfortunately they are not.\", \"verdict\": \"I think this work contains a great main idea and could become quite a good paper in the future, but the work required to illustrate and demonstrate the idea is not fully there yet.\", \"comments_and_questions\": [\"Why do you actually need the embedded points y to be on the unit hypersphere? You could compute distances between points at different radii. The results probably shouldn't change much.\", \"There's at least a few other papers that use a similar idea, for example\", \"Gong et al \\\"Angular Quantization-based Binary Codes for Fast Similarity Search\\\" at NIPS 2012. Would be good to discuss the differences.\", \"The experimental section seems very limited for an empirical paper. There's at least a few confusing details, noted below:\", \"The experimental results for ImageNet comparing against other models are directly taken from those reported by Lu et al. That's fine, but it does mean that it's hard to make comparisons against *any* other paper than the Lu paper. For example, if the selected ImageNet classes are different, then the results of the comparison may well be different. I checked the HashNet paper (Cao et al. 2017), and it papers that their own reported numbers for ImageNet are better than those of the Lu et al paper. That is, I see 0.5059 0.6306 0.6835 for 16/32/64 bit codewords vs Lu's result of 0.442 0.606 0.684, which is quoted in this paper. What's causing this difference? 
It would probably be a bit less convenient but ultimately better if the results for comparison were reproduced by the authors, and possibly on a different class split compared to the single Lu paper.\", \"The comparison against PQ should also consider more recent works of the same flavor as PQ, which themselves outperform PQ. For example, \\\"Cartesian k-means\\\" by Norouzi and Fleet, or \\\"Approximate search with quantized sparse representations\\\" by Jain et al. These papers also use the SIFT dataset for their experimental result, so it would be great to compare against them.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
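One concrete reading of the "binomial CDF" description in this review: if each of n hash bits independently differs with probability arccos(z_i . z_j)/pi, then the probability of landing within a Hamming distance target is a binomial CDF, and a log likelihood loss can be placed on it. A hedged sketch (our own notation and simplification, not the paper's exact loss, which also involves extrapolation constants such as p_0 and regularization weights):

```python
import numpy as np
from scipy.stats import binom

def p_within_target(z_i, z_j, n_bits, target):
    """P(Hamming distance <= target), approximating each of the n_bits hash
    bits as independently differing with probability arccos(z_i . z_j)/pi."""
    p_flip = np.arccos(np.clip(np.dot(z_i, z_j), -1.0, 1.0)) / np.pi
    return binom.cdf(target, n_bits, p_flip)

def pairwise_loss(z_i, z_j, similar, n_bits=64, target=3, eps=1e-12):
    """Log likelihood loss: pull similar pairs inside the Hamming distance
    target and push dissimilar pairs outside it."""
    p = p_within_target(z_i, z_j, n_bits, target)
    return -np.log(p + eps) if similar else -np.log(1.0 - p + eps)
```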
"{\"title\": \"Is it fair to compare the proposed algorithm to existing hashing algorithms and PQ\", \"review\": \"This paper proposed a new hashing algorithm with a new loss function. A multi-indexing scheme is adopted for search. There is one key issue: in general hashing is not good at multi-indexing search for vector-based search in the Euclidean distance or Cosine similarity. The advantage of hashing is reducing the code size and thus memory cost, but it is still not as good as quantization=based approach.\\n\\nHere are comments about the experiments.\\n(1) Table 1: do other algorithms also use multi-indexing or simply linear scan?\\n(2) Figure 4: HDT-E is better than PQ. It is not understandable. Something important is missing. How is the search conducted for PQ? Is multi-indexing used? It is also strange to compare the recall in terms of #(distance comparisons).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Inception V3 results\", \"comment\": \"I have run the Inception V3 benchmarks:\", \"16_bits\": \"85.3% MAP\", \"32_bits\": \"86.1% MAP\", \"64_bits\": \"85.1% MAP\\n\\nAs expected, this increased our MAP slightly, making our main result slightly more impressive.\\nWe are still working on adding Alexnet benchmarks.\"}",
"{\"comment\": \"Thank you for your response. I think you should at least include results trained on Inception V3 since this is the one you are mainly comparing to. AlexNet may not be necessary if results on Inception V3 is even better. The choice of ResNet V2 50 just seems odd. No other baseline papers currently use it. Add why not the original ResNet or ResNet-101? It makes it look like a hand picked model.\", \"title\": \"Inception V3 will be fine\"}",
"{\"title\": \"Addressing your ImageNet-100 concerns\", \"comment\": \"We tried to be as consistent as possible with the literature, and are currently working on adding more comparison datasets (i.e., MS COCO and NUS-Wide).\\n\\n(1) As we mentioned, our comparison numbers are drawn from the Lu et. al.'s DBR paper. We will switch to use the better of Cao et. al.'s results and Lu et. al.'s.\\n\\n(2) We chose ResNet 50 since there were already a few models being used in the literature, like DBR-v3 using InceptionV3 (a larger model with higher ImageNet accuracy than ResNet 50). We understand your concern, though, and will update with AlexNet results as soon as possible.\\n\\n(3) We address this thoroughly at the end of section 3.1.\\n\\n(4) Our method works for mutli-label datasets as well, since we can use any similarity matrix. As mentioned, we are working on adding more datasets.\\n\\nThank you\"}",
"{\"comment\": \"Dear authors, I have some concerns for the ImageNet-100 results in your paper:\\n\\n(1) Results from other methods such as the popular HashNet are not the same as reported in the original paper (for example 16 bits mAP@1000)\\n\\n(2) HashNet model use pretrained AlexNet whereas your model use pretrained ResNet 50. While I agree that ResNet is a much better choice, the choice of the model is very important for the final performance. For fair comparison, you should include results from AlexNet.\\n\\n(3) Resutls for ImageNet mAP@1000 goes down as number of bits increases, this does not seem right to me.\\n\\n(4) What about multi-label datasets such as MS COCO or NUS-Wide? \\n\\nThank you.\", \"title\": \"Some concerns regarding ImageNet-100 results\"}"
]
} |
|
SkgVRiC9Km | Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations | [
"Alex Lamb",
"Jonathan Binas",
"Anirudh Goyal",
"Dmitriy Serdyuk",
"Sandeep Subramanian",
"Ioannis Mitliagkas",
"Yoshua Bengio"
] | Deep networks have achieved impressive results across a variety of important tasks. However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. We propose \emph{Fortified Networks}, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well. Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks, and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to deceptively good results arising from degraded quality in the gradient signal (the gradient masking problem); and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space. We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, both white-box and black-box settings, and the most widely studied attacks (FGSM, PGD, Carlini-Wagner). We show that these improvements are achieved across a wide variety of hyperparameters. | [
"adversarial examples",
"adversarial training",
"autoencoders",
"hidden state"
] | https://openreview.net/pdf?id=SkgVRiC9Km | https://openreview.net/forum?id=SkgVRiC9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BklKhP9zxE",
"ryxxQlaWy4",
"r1gedxOiCX",
"S1gcR8XcAX",
"BkeHBHy50Q",
"Bkgo0MaF0m",
"HJe8N-s_CQ",
"HyxDFKXdC7",
"SkenZJ6URQ",
"SklESyJGCm",
"SklG26PgAm",
"Syell2DxRQ",
"SyeyA7BlA7",
"ByxB567yRQ",
"B1ekccmyAQ",
"r1gVS6NaaX",
"rJgZu_ba6X",
"BJxi0d9q6Q",
"H1e2sO9567",
"Bye09P9caQ",
"S1eCNv5c6X",
"SyeaoM55aX",
"rJeUD3u5a7",
"HkeWZ2_qT7",
"B1li57iX6Q",
"r1eUwHib6Q",
"BylDnFpep7",
"SJeYd58kpQ",
"rkxYnt8JpQ",
"ByxZaNAChm",
"Skew_xxphQ",
"rygsxS39nm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"comment",
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1544886192643,
1543782423590,
1543368807765,
1543284434330,
1543267645480,
1543258834717,
1543184686281,
1543154046830,
1543061251772,
1542741819585,
1542647210085,
1542646759851,
1542636486611,
1542565261244,
1542564487086,
1542438204110,
1542424680789,
1542265043262,
1542264995910,
1542264725594,
1542264629771,
1542263461389,
1542257758138,
1542257657155,
1541809043257,
1541678430012,
1541622190847,
1541528177161,
1541527985273,
1541493944824,
1541369967460,
1541223666831
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper884/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/Authors"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer4"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer2"
],
[
"~Ian_Goodfellow1"
],
[
"~Ian_Goodfellow1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper884/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper suggests a method for defending against adversarial examples and out-of-distribution samples via projection onto the data manifold. The paper suggests a new method for detecting when hidden layers are off of the manifold, and uses auto encoders to map them back onto the manifold.\\n\\nThe paper is well-written and the method is novel and interesting. However, most of the reviewers agree that the original robustness evaluations were not sufficient due to restricting the evaluation to using FGSM baseline and comparison with thermometer encoding (which both are known to not be fully effective baselines). \\n\\nAfter rebuttal, Reviewer 4 points out that the method offers very little robustness over adversarial training alone, even though it is combined with adversarial training, which suggests that the method itself provides very little robustness.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The ideas are quite novel and promising, but there is no sufficient justification of claims made\"}",
"{\"title\": \"New Results Summary\", \"comment\": \"Hello,\\n\\nWe've updated the paper with new results on the PGD attack with many more iterations, architectures (including large architectures like wideresnet), and setups (especially see Tables 2 and 3). This directly addresses the over-reliance on FGSM as an attack, which was the focus of Ian Goodfellow's comment.\"}",
"{\"title\": \"New experiments show significantly lower gains from fortification\", \"comment\": \"In the new experiments conducted by the authors on the two ResNet models, the additional benefits of fortification are even less significant (about 1%, close to error margins).\\n\\nIf this degree of robustness was attained by a technique which completely *replaces* adversarial training, I think it would indeed be valuable. But in this paper, the proposed method *augments* adversarial training with additional loss terms, and so one can argue that most of the robustness comes from adversarial training itself and the benefits of the fortified layers are marginal. \\n\\nThus, based on the empirical results, I do not think that the contribution of the proposed approach as a defense against adversarial attacks is sufficient.\"}",
"{\"title\": \"Rebuttal Summary and Highlights\", \"comment\": \"We thank all of the reviewers and commenters for their feedback, which has done a great deal to improve the quality of the paper. The main points raised by reviewers and commenters were related to the experimental results, the motivation, gradient obfuscation tests, and related work. All points have been addressed in the revised manuscript, and are summarized in the following.\\n\\n1. Stronger Attacks: We strongly agree that the FGSM attack is not a strong attack and to that end we have conducted new experiments against the PGD attack with up to 200 steps as well as a range of epsilons from 0.03 to 0.3 (Table 2). The improvements from Fortified Networks with 200 steps are similar to the improvements over baseline with 7 steps, and also Fortified Networks improve results over the baseline when using larger epsilons. \\n\\n2. Motivation for Fortified Networks: we have clarified that our motivation for fortified networks is that the autoencoders map some points from off of the manifold back onto the manifold. This in turn reduces the potential space of adversarial examples (because most of the space is off-manifold), which then makes adversarial training more efficient. These off-manifold points are not necessarily adversarial examples and not all adversarial examples are off the manifold (Gilmer 2018). However, our main claim is that some of the adversarial examples are off of the manifold, and thus when we use adversarial training, it is more effective and efficient when we have the autoencoders in the network. \\n\\n3. Gradient Obfuscation and Masking: We strongly agree that it is important to show that the improvements are due to actual improvements in robustness and not merely a degradation in the quality of the gradient signal. To address this, we have run PGD with a greater number of steps (up to 200). We have also run some variants of the attack which address issues related to gradient obfuscation (Table 2). For example we have run with larger epsilons and found that the model is still able to find adversarial attacks. Additionally we have run the network without noise and with attacks where gradient skips the autoencoder (BPDA), and found that Fortified Networks still improve robustness in both cases. \\n\\n4. Baselines and Related Work: We have added new results with PreActResNet18 and WideResNet28-10 on CIFAR-10 (Table 3), which are relatively competitive architectures. In both cases we found significant improvements using Fortified Networks and intriguingly we saw almost no change in the clean test accuracy. This is strong evidence that the resulting improvement does not trivially come from added capacity, as was suggested as a possibility by R4. Additionally we conducted an experiment where we simply added a square loss on the hidden layers (similar to ALP except on all layers) and found that this did not improve results.\"}",
"{\"title\": \"Clarification\", \"comment\": [\"\\\"- Table 2 CIFAR-10 argues PGD eps=0.03 error of the baseline network is 38.1%.\", \"Table 8 CIFAR-10 argues PGD eps=0.03 error of the baseline network is 33.0% or 31.4% for 7 or 200 (respectively) iterations of gradient descent. Why is this different? How many iterations did you use in Table 2?\", \"Table 7 argues 100 iterations of PGD at eps=0.03 has an error rate of 35.3% on \\\"basline with extra layers\\\"\", \"Table 8 argues 50/200 iterations of PGD at eps=0.03 has an error rate of 32.5/32.2 (respectively for the same model. Because 50<100<200 I would expect that the 35.3 should be something smaller. Why is this?\\u201d\", \"Because Fortified Networks adds capacity to the model, using the network without the fortified layers is a weak baseline. The discrepancy that you point to results from two different ways of adding activations to the baseline model. Essentially, the lower result uses the same number of layers with activations as fortified networks, but the higher number has more activations, and in some sense this makes it a higher capacity model. Nonetheless the paper has been updated with the discrepancy explained (Table 2).\"]}",
"{\"title\": \"Thanks for the Feedback - Response\", \"comment\": \"\\u201c1. The proposed method is not an alternative to adversarial training, but instead augments it with an additional objective from the denoising autoencoder. The authors are also claiming only ~5% improvement over the baseline. One might argue that the benefits of the proposed approach over adversarial training are marginal. Even if we assume that the 5% is significant, it is not clear how accurate the baseline evaluation is. I agree with one of the anonymous comments in this regard. The authors use a non-standard model, and their PGD baseline is quite a bit lower than the state-of-the-art. I would really like to see the results on a state-of-the-art model to be convinced that the benefit is not just an artifact of a weak baseline.\\u201d\", \"we_conducted_experiments_using_two_much_stronger_models\": \"PreActResNet18 and WideResNet28-10. All experiments ran for 200 epochs.\\n\\nPreActResNet18\", \"baseline\": \"43.28% (20 step PGD), 87.42% (clean test accuracy)\", \"fortified_networks\": \"44.06% (20 step PGD), (87.40% clean test accuracy)\\n\\n\\u201c2. If I correctly understand the new results posted by the authors, their model obtains ~10-13% accuracy against an Linf adversary of eps>0.1 on CIFAR-10. It has been shown that an eps~0.125 is already too large - one can perturb the image to actually be from another class (also shown in the ICLR submission that the authors linked - \\u201cRobustness may be at odds with accuracy\\u201d https://openreview.net/forum?id=SyxAb30cY7). I do not understand how the fortified model can get an accuracy > 0% for such large epsilons, which are probably impossible to be robust to. Have the authors checked what the adversarial examples look like for these large eps? What about trying a nearest neighbor attack from the test set? Seeing a non-zero robust accuracy to such large epsilons makes me doubt the correctness of the attack setup within the experimental evaluation.\\u201d\\n\\nOur robustness with an epsilon of 0.3 is very similar to what\\u2019s reported in (Madry 2018), especially Figure 6c:\", \"wideresnet28_10\": \"\", \"https\": \"//openreview.net/pdf?id=SyxZJn05YX\\n\\n\\nOne possibility is that for some examples, it is possible to find a real example with a different class within an epsilon ball of size 0.3 - but there is a small fraction of examples where this isn\\u2019t possible. \\n\\n\\u201c3. The proposed defense seems to use random noise (as part of the denoising stage). Have the authors tried multiple gradient queries per PGD step? \\u201c\\n\\nWe conducted a very similar experiment to this where we ran both the forward and the backward pass without any injected noise and we showed that Fortified Networks retained a significant improvement over the baseline.\"}",
"{\"title\": \"Difference Between the Papers\", \"comment\": \"Hello,\\n\\nOur method and the \\\"High-Level Representation Guided Denoiser\\\" are very different. We ran our attacks and evaluate on the full model, end-to-end, including the autoencoders. This is a major difference from [1], and that change is what broke the paper you referenced. The paper that you referenced did not perform adversarial training on the main part of the network, and only trained the autoencoder, keeping the classifier network itself fixed. \\n\\nWe also conducted an experiment with BPDA (Athalye 2018), where we consider skipping the autoencoders in the backward pass (i.e. using the identity function to compute the gradients) as well as running the forward and backward pass of the network with no noise injected and we produced, and the advantage of fortified networks was preserved.\"}",
"{\"comment\": \"Hi,\\nI found the idea is very similar to \\\"Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser\\\"\", \"http\": \"//openaccess.thecvf.com/content_cvpr_2018/papers_backup/Liao_Defense_Against_Adversarial_CVPR_2018_paper.pdf\\n\\nCould you please clarify the difference between your work and this paper? Thanks.\", \"title\": \"A closed related paper\"}",
"{\"comment\": \"https://arxiv.org/pdf/1706.06083.pdf\\n\\nThe original paper linked above claims an accuracy of ~10% at eps=30/255 (eps~0.118). I do not think the argument that \\\"adversarial examples exist => PGD will find it\\\" is accurate. In fact, https://arxiv.org/pdf/1706.06083.pdf has a disclaimer that suggests such points may well be present and PGD may well be unable to find them.\", \"title\": \"Comment 2. does not seem fully accurate\"}",
"{\"comment\": \"https://arxiv.org/pdf/1810.12042.pdf\\n\\nHowever, they may be complementary and can be combined?\", \"title\": \"This paper seems to indicate ALP results in robustness, comparable to your approach\"}",
"{\"comment\": [\"I'm having a hard time interpreting the new results.\", \"Table 2 CIFAR-10 argues PGD eps=0.03 error of the baseline network is 38.1%.\", \"Table 8 CIFAR-10 argues PGD eps=0.03 error of the baseline network is 33.0% or 31.4% for 7 or 200 (respectively) iterations of gradient descent. Why is this different? How many iterations did you use in Table 2?\", \"Table 7 argues 100 iterations of PGD at eps=0.03 has an error rate of 35.3% on \\\"basline with extra layers\\\"\", \"Table 8 argues 50/200 iterations of PGD at eps=0.03 has an error rate of 32.5/32.2 (respectively for the same model. Because 50<100<200 I would expect that the 35.3 should be something smaller. Why is this?\", \"Because the improvement gain for fortified networks is relatively small, these ~5% differences add up. Are they due to random initializations? In that case, could we get some margin-of-error results for these tables?\"], \"title\": \"Confusion over new results\"}",
"{\"comment\": \"Thank you, that makes sense.\\n\\nI agree running a resnet would be very important. Madry et al. state that one of the reasons their defense works is that you have to have large network capacity.\", \"title\": \"Makes sense\"}",
"{\"title\": \"Response to Experiments\", \"comment\": \"I am still not convinced by the empirical evaluation performed by the authors. My concerns are:\\n\\n1. The proposed method is not an alternative to adversarial training, but instead augments it with an additional objective from the denoising autoencoder. The authors are also claiming only ~5% improvement over the baseline. One might argue that the benefits of the proposed approach over adversarial training are marginal. Even if we assume that the 5% is significant, it is not clear how accurate the baseline evaluation is. I agree with one of the anonymous comments in this regard. The authors use a non-standard model, and their PGD baseline is quite a bit lower than the state-of-the-art. I would really like to see the results on a state-of-the-art model to be convinced that the benefit is not just an artifact of a weak baseline.\\n\\n2. If I correctly understand the new results posted by the authors, their model obtains ~10-13% accuracy against an Linf adversary of eps>0.1 on CIFAR-10. It has been shown that an eps~0.125 is already too large - one can perturb the image to actually be from another class (also shown in the ICLR submission that the authors linked - \\u201cRobustness may be at odds with accuracy\\u201d https://openreview.net/forum?id=SyxAb30cY7). I do not understand how the fortified model can get an accuracy > 0% for such large epsilons, which are probably impossible to be robust to. Have the authors checked what the adversarial examples look like for these large eps? What about trying a nearest neighbor attack from the test set? Seeing a non-zero robust accuracy to such large epsilons makes me doubt the correctness of the attack setup within the experimental evaluation.\\n\\n3. The proposed defense seems to use random noise (as part of the denoising stage). Have the authors tried multiple gradient queries per PGD step? \\n\\n4. I would also like to see the standard (non-robust) accuracies of the models (specifically the baseline model, baseline with extra layers and fortified networks) to make sure that the 5% gain in robustness is not an artifact of larger expressivity of the proposed model.\"}",
"{\"title\": \"Reason\", \"comment\": \"Thanks, the reason is that the baseline is a 4-layer CNN and not a resnet. When we run with the resnet our results are about the same as Madry, but our goal in the rebuttal has been to get the results with as many types of attacks/setups as possible to ensure that the improvements are a not result of gradient masking.\\n\\nWe can add more experiments with ResNets as well.\"}",
"{\"comment\": \"You claim that PGD adversarial training as a baseline gives a robustness of 35% at eps=0.03. However, to the best of my knowledge, no prior paper has reduced the accuracy below 44%.\\n\\nCan you account for this difference? Are you able to lower the accuracy the Madry et al. defense to 38%? \\n\\nWhile a gap of ~6% might not typically be important, you are only claiming a gain of about 5%. So in this case, it's absolutely critical that we can be sure it's not just that you have a weak baseline you're comparing against.\", \"title\": \"Why is your baseline weaker than Madry et al.?\"}",
"{\"title\": \"Thanks for your feedback\", \"comment\": \"We thank the commenter for their valuable feedback and suggestions for more thorough experimentation. We have run many of the suggested tests to address the question of gradient obfuscation, which was also raised by others.\\n\\n\\u201cWhy is CIFAR only evaluated against FGSM? Shouldn't you at least try PGD on CIFAR-10? Why not try out PGD/CW on CIFAR-10? It is not obvious that the method will scale to complex datasets such as CIFAR-10 (leave alone Imagenet).\\u201d\\n\\nWe have added new results with PGD on CIFAR-10 with many more iterations at evaluation time.\\n\\n# steps | Baseline | Baseline w/ extra layers | Fortified Networks\\n 7 steps | 33.0 | 34.2 | 45.0\\n 50 steps | 31.6 | 32.5 | 42.1\\n200 steps | 31.4 | 32.2 | 41.5\\n\\n\\u201cFor how many iterations was PGD run? I think this information is critical. How many random restarts? There is some recent work (https://arxiv.org/abs/1810.12042) that indicates large number of restarts/iteration steps might be necessary for a meaningful evaluation\\u201d\\n\\nWe have added new results with PGD run for many iterations (up to 200), and with several restarts (up to 50), as well as for different epsilon values (0.03 to 0.3). Our model outperforms baseline models in all cases, demonstrating effectiveness of the method even under these more difficult conditions.\\n\\n\\u201cWhy not baseline against adversarial logit pairing? (investigations by third parties have shown that while ALP does not help as much as claimed with Imagenet, it does help with CIFAR and MNIST).\\u201d\\n\\nWe ran ALP-like experiments, wherein we added an adversarial loss on the hidden layers instead of adding fortified layers . We could not achieve competitive performance with this system. In fact, it did not perform better than just an adversarially trained baseline system, however, we have not exhaustively explored this approach.\"}",
"{\"comment\": \"Can you try out SPSA and confirm your results? The public discussion on the link below shows that PGD with large iterations cannot actually detect masked gradients fully, but SPSA sort of pretty much cancels all their gains from their method!\\n\\nSee discussion on this page.\", \"https\": \"//openreview.net/forum?id=Bylj6oC5K7¬eId=H1leI9Iah7\", \"title\": \"SPSA?\"}",
"{\"title\": \"Motivation for Fortified Networks\", \"comment\": \"\\u201c I do not really understand the motivation behind using an autoencoder here. Firstly, it is not clear that adversarial examples lie off the data manifold - they could form a very small set on the data manifold and thereby not affect standard generalization.\\u201d\\n\\nOur main claim is that using a model which can perform reconstruction can map points off of the data manifold back onto the manifold. For example, we can imagine that unusual noise patterns would not appear in the reconstructions. These off-manifold points are not necessarily adversarial examples and not all adversarial examples are from off of the manifold (Gilmer 2018). However, our claim is only that *some* of the adversarial examples are off of the manifold, and thus when we use adversarial training, it is more effective and efficient when we have the autoencoders in the network, as it reduces the space that we need to search over. \\n\\nEvidence that some adversarial examples are off of the manifold (at least for an undefended network) is in our paper in figure 1. Some additional qualitative evidence supporting this claim is provided by another submission. Figure 2 and Figure 3 of the \\u201cRobustness May be at Odds with Accuracy\\u201d paper (https://openreview.net/pdf?id=SyxAb30cY7), show the perturbations for a defended model appear to be somewhat unrealistic (although much less so then for an undefended model).\"}",
"{\"title\": \"Thanks - Response to Comments on Experiments and Gradient Obfuscation\", \"comment\": \"Thank you for your feedback. We strongly agree that it is absolutely essential to show that the improvements are not a result of gradient obfuscation.\\n\\n\\u201cThe authors mostly evaluate their defense using FGSM (particularly on CIFAR). To truly establish the merit of a new defense, the authors must benchmark against state-of-the-art defenses such as PGD. \\u201d\\n\\nWe added new results with PGD on CIFAR-10 with many more PGD-steps for evaluation. We evaluated a convolutional network on CIFAR-10 with 4 convolutional layers followed by a single fully-connected layer. We trained fortified networks, where we added an autoencoder following each hidden layer. We also added a baseline \\u201cExtra Layers\\u201d where we trained with the layers added to match the capacity of Fortified Networks (same number of parameters). \\n\\n# steps | Baseline | Baseline w/ extra layers | Fortified Networks\\n 7 steps | 33.0 | 34.2 | 45.0\\n 50 steps | 31.6 | 32.5 | 42.1\\n200 steps | 31.4 | 32.2 | 41.5\\n\\nEven when running PGD for 200 steps, we found large and consistent advantages for fortified networks, which are not primarily attributable to adding additional layers. \\n\\n\\u201cIt also seems like the epsilon values used for the PGD attacks are fairly small. The authors should report accuracies to a range of epsilon values for the PGD attack, as is standard.\\u201d\\n\\nThis is a great point and we performed an additional experiment using the same convolutional neural network discussed above. Using 7 and 100 steps of PGD, we attacked our fortified nets model with varying epsilons: \\n\\nPGD, 100 steps\\nEpsilon | Baseline with extra layers | Fortified Networks\\n0.03 | 35.3 | 39.2\\n0.04 | 24.8 | 28.0\\n0.06 | 14.3 | 15.6\\n0.08 | 12.0 | 13.0\\n 0.1 | 11.7 | 12.9\\n 0.2 | 10.2 | 11.3\\n 0.3 | 8.4 | 9.6\\n\\n\\u201cWhen the authors attack their models using PGD/FGSM, is this only on the classification loss or does this also include the denoising terms? Similar defenses which use denoisers have been broken once you run PGD on the full model [1].\\u201d\\n\\nWe run our attacks on the full model, end-to-end, including the autoencoders. This is a major difference from [1], and that change is what broke the paper you referenced. The paper that you referenced did not perform adversarial training on the main part of the network, and only trained the autoencoder, keeping the classifier network itself fixed. \\n\\nWe also conducted a new experiment with BPDA (Athalye 2018), where we consider skipping the autoencoders in the backward pass (i.e. using the identity function to compute the gradients) as well as running the forward and backward pass of the network with no noise injected. \\n\\nWe also ran some new experiments for this using eps=0.03 and 100 steps of PGD using the same CNN architecture discussed earlier. \\n\\n33.4 (baseline, normal attack)\\n40.1 (Fortified Networks, normal attack)\\n38.2 (Fortified Networks, no noise during attack)\\n67.1 (Fortified Networks, skip DAE during attack, BPDA)\\n\\nThis is strong evidence that skipping the autoencoders while generating the attacks significantly weakens them, but turning the noise off slightly strengthens the attack, but it is still much stronger as a defense than the baseline adversarially trained model with the same number of parameters and capacity. 
\n\n\u201cSecondly, have the authors tried a simple regularization loss based on the error between hidden layer representations of a natural example and the corresponding adversarial example? I think the authors must motivate the use of denoising autoencoders here by comparing to such a simple baseline\u201d\n\nYes, we conducted new experiments to directly address this issue (Adversarial Logit Pairing is a special case of the regularizer that you describe), but we also note that our method provides improvements even when we don\u2019t use the L_adv loss comparing the adversarial input\u2019s hidden states to the clean input\u2019s hidden states. This is with the same CNN architecture discussed earlier. \n\nPGD, 7 iterations:\n43.3 (Fortified Networks)\n38.1 (Adv. training baseline)\n34.2 (Penalty between layers)\n\nPGD, 100 iterations:\n39.2 (Fortified Networks)\n35.3 (Adv. training baseline)\n32.2 (Penalty between layers)\n\nWe found that this penalty between the hidden states, where we attracted the hidden states of the network on adversarial inputs to the hidden states of the network on clean inputs (applied at every layer), hurts robustness somewhat, though it is possible that the merit of such an approach depends on exactly how it\u2019s used. We also add that unlike adversarial logit pairing, our improvements hold up after running PGD for a large number of iterations, whereas the benefits from adversarial logit pairing almost entirely disappear. \n\n\u201cAlso, which approach are the authors denoting as \u2018baseline adv. Train\u2019 in the tables?\u201d\n\nThis refers to the PGD training of Madry 2017.\"}",
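For context, here is a minimal hedged sketch of the end-to-end PGD evaluation described in the response above, written in PyTorch. All names are illustrative and not the authors' code; `model` is assumed to be the full fortified network with the DAE layers inside it, and the BPDA-identity trick used to probe gradient masking is shown separately.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.03, alpha=0.007, steps=100):
    # model: hypothetical callable mapping images in [0, 1] to logits; the
    # whole fortified network (DAE layers included) sits inside it, so the
    # attack is end-to-end as the response above describes.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()       # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the Linf ball
        x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()

def bpda_identity_dae(dae, h):
    # BPDA-identity variant: the forward pass uses dae(h), while the backward
    # pass treats the layer as the identity (the DAE is skipped for gradients).
    return h + (dae(h) - h).detach()
```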
"{\"title\": \"Thanks for the feedback\", \"comment\": \"\\u201cThe major issue, which was left as an open question in the end of Section 3, is that when and where to use fortified layers. The authors discussed this issue, but did not solve this issue. Nevertheless, I do believe solving this issue requires a sequence of papers. Overall the paper reads very well, but there are a number of minor places to be improved.\\u201c\\n\\nWe thank the reviewer for the positive and constructive feedback.\\n\\nWe also like to point out that we\\u2019ve conducted new experiments to help to demonstrate that our method isn\\u2019t benefiting from obfuscated gradients and additionally we ran PGD attacks with many more iterations (200) on CIFAR-10 (see the response to reviewer 4).\"}",
"{\"title\": \"Thank you for your feedback.\", \"comment\": \"\\u201cThis paper presents an approach of fortifying the neural networks to defend attacks. The major component should be a denoising autoencoder with noise in the hidden layer.\\nHowever, from the paper, I am still not convinced why this defends the FGSM attack. From my perspective, a more specifically designed algorithm could attack the network described in the paper as the old way\\u201d\\n\\nIn our experiments (except where we explicitly test against a special BPDA) we backpropagate errors through the autoencoders, such that the the autoencoders are not hidden from the attacker. Indeed we found that skipping the autoencoders when running the attacks makes them significantly weaker, but in our main experiments we backpropagate through the autoencoders and allow the attacker to use this information. \\n\\n\\u201cand what is the insight of defending the attacks, whether this objective function is harder to find to adversarial examples, or have to use more adversarial examples?\\u201d\\n\\nOur main claim is that using a model which can perform reconstruction can map points off of the data manifold back onto the manifold. For example, we can imagine that unusual noise patterns would not appear in the reconstructions. These off-manifold points are not necessarily adversarial examples and not all adversarial examples are from off of the manifold (Gilmer 2018). However, our claim is only that *some* of the adversarial examples are off of the manifold, and thus when we use adversarial training, it is more effective and efficient when we have the autoencoders in the network, as it reduces the space that we need to search over. \\n\\nEvidence that some adversarial examples are off of the manifold (at least for an undefended network) is in our paper in figure 1. Some additional qualitative evidence supporting this claim is provided by another submission. Figure 2 and Figure 3 of the \\u201cRobustness May be at Odds with Accuracy\\u201d paper (https://openreview.net/pdf?id=SyxAb30cY7), show the perturbations for a defended model appear to be somewhat unrealistic (although much less so then for an undefended model). \\n\\n\\u201cAnother problem rise from Ian Goodfellow's comment. I am trying not to be biased. So if the author could address his comments properly, I am willing to change the rating.\\u201d\\n\\nWe strongly believe that it is essential to show that the improvements do not result from gradient obfuscation as well as to demonstrate improvements against strong attacks (such as PGD) on CIFAR-10. We have thus run additional experiments demonstrating effectiveness of the method on CIFAR-10, on a CNN as well as a ResNet architecture. We ran validation experiments to confirm that our method does not simply operate by obfuscating gradients.\"}",
"{\"title\": \"Thanks for the feedback\", \"comment\": \"\\u201cThe method works by substituting a hidden layer with a denoised version.\\nNot only it enable to provide more robust classification results, but also to sense and suggest to the analyst or system when the original example is either adversarial or from a significantly different distribution.\\nImprovements in adversarial robustness on three datasets are significant.\\nBibliography is good, the text is clear, with interesting and complete experimentations.\\u201d\\n\\nThank you for your feedback. We have obtained several new results to address concerns related to gradient obfuscation raised by other reviewers and the public comment.\"}",
"{\"title\": \"Thanks\", \"comment\": \"We have removed the thermometer coding reference from the results table and we have also run new experiments to attack fortified networks using BPDA.\"}",
"{\"title\": \"Main Claim Clarification\", \"comment\": \"\\u201cIs there any proof that using an autoencoder maps the data back in the manifold? Especially against adversarial perturbations?\\u201d\", \"to_clarify\": \"our motivation is that the autoencoders map some points from off of the manifold back onto the manifold. This in turn reduces the potential space of adversarial examples (because most of the space is off-manifold), which then makes adversarial training more efficient. These off-manifold points are not necessarily adversarial examples and not all adversarial examples are off the manifold (Gilmer 2018). However, our main claim is that some of the adversarial examples are off of the manifold, and thus when we use adversarial training, it is more effective and efficient when we have the autoencoders in the network.\\n\\nEvidence that some adversarial examples are off of the manifold (at least for an undefended network) is in our paper in figure 1. Some qualitative evidence supporting this claim is provided by another submission. Figure 2 and Figure 3 of the \\u201cRobustness May be at Odds with Accuracy\\u201d paper (https://openreview.net/pdf?id=SyxAb30cY7), show the perturbations for a defended model appear to be somewhat unrealistic (although much less so then for an undefended model).\"}",
"{\"title\": \"Empirical results are not sufficient to demonstrate the strength of the proposed defense\", \"review\": [\"This paper proposes a new defense to adversarial examples based on the 'fortification' of hidden layers using a denoising autoencoder. While building models that are robust to adversarial examples is an important and relevant research problem, I am not convinced by the evaluation of the defense. Specific comments:\", \"The authors mostly evaluate their defense using FGSM (particularly on CIFAR). To truly establish the merit of a new defense, the authors must benchmark against state-of-the-art defenses such as PGD. It also seems like the epsilon values used for the PGD attacks are fairly small. The authors should report accuracies to a range of epsilon values for the PGD attack, as is standard.\", \"When the authors attack their models using PGD/FGSM, is this only on the classification loss or does this also include the denoising terms? Similar defenses which use denoisers have been broken once you run PGD on the full model [1].\", \"I do not really understand the motivation behind using an autoencoder here. Firstly, it is not clear that adversarial examples lie off the data manifold - they could form a very small set on the data manifold and thereby not affect standard generalization. Secondly, have the authors tried a simple regularization loss based on the error between hidden layer representations to a natural examples and the corresponding adversarial example? I think the authors must motivate the use of denoising autoencoders here by comparing to such a simple baseline.\"], \"general_comment\": \"The results hard to parse given the arrangement of figures and tables. Also, which approach are the authors denoting as \\u2018baseline adv. Train\\u2019 in the tables?\\n\\nOverall I feel like building defenses to adversarial examples is a challenging problem and the empirical investigation in this paper is not sufficient to illustrate any real progress on this front.\\n\\n[1] Athalye, A., & Carlini, N. (2018). On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. arXiv preprint arXiv:1804.03286.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"comment\": \"Is there any proof that using an autoencoder maps the data back in the manifold? Especially against adversarial perturbations?\\n\\nHave the authors tried their method with networks that include residual connections? It will be interesting to verify that mapping back to the manifold indeed works with such connections that can amplify perturbations through the skip connections.\", \"title\": \"Is there proof for the main claim?\"}",
"{\"title\": \"A sensible approach, but needs to justify the experiments more strongly\", \"review\": \"This paper presents an approach of fortifying the neural networks to defend attacks. The major component should be a denoising autoencoder with noise in the hidden layer.\\n\\nHowever, from the paper, I am still not convinced why this defends the FGSM attack. From my perspective, a more specifically designed algorithm could attack the network described in the paper as the old way, and what is the insight of defending the attacks, whether this objective function is harder to find to adversarial examples, or have to use more adversarial examples?\\n\\nAnother problem rise from Ian Goodfellow's comment. I am trying not to be biased. So if the author could address his comments properly, I am willing to change the rating.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"I haven't read this paper but a colleague told me that it quotes the accuracy numbers from the original Buckman et al paper and uses them as a point of comparison. I'm a co-author of thermometer coding, and I'm here to say it's important to understand that a new attack, BPDA, was able to break the model from our paper: https://arxiv.org/abs/1802.00420\\n\\nIn our own follow-up experiments, we found that if we retrain using BPDA for adversarial training, models that use thermometer coding perform about the same as models that use real numbers for input. Thus it's probably best to just use adversarial training as the baseline.\", \"title\": \"Thermometer coding does not improve adversarial robustness\"}",
"{\"comment\": \"I'm not trying to weigh in on whether or not the paper should be accepted and I haven't read the paper; I'm just trying to provide the reviewers with good information on how to interpret FGSM experiments. I'm commenting because a colleague told me that this information would be relevant to reviewing this paper.\\n\\nI developed the FGSM attack, and I'd like to comment that it's not intended to be a strong attack.\\n\\nThe FGSM was mostly intended to be used for a scientific experiment to show that linear information is sufficient to break undefended neural nets. It's not meant to be a strong attack.\\n\\nUntil a few years ago, FGSM was also a good \\\"unit test\\\" to see if a defense was strong. By now, I personally don't even use FGSM as a unit test anymore. Performance on FGSM does not correlate well with performance on the strongest attacks.\\n\\nIt's fine if you want to use FGSM as a unit test but success on FGSM shouldn't be regarded as strong evidence that a defense works in a particular threat model. The reviewers should check what specific claims are made in the paper and if there are claims of a strong defense these claims should be supported by something other than FGSM.\", \"title\": \"FGSM is not a strong attack\"}",
"{\"comment\": \"I like the writing, but I have some core problems with the experimental evaluation.\", \"some_questions\": \"1. Why is CIFAR only evaluated against FGSM? Shouldn't you at least try PGD on CIFAR-10? Why not try out PGD/CW on CIFAR-10? It is not obvious that the method will scale to complex datasets such as CIFAR-10 (leave alone Imagenet). \\n\\n2. Why not try out NES/SPSA/ElasticNet attacks as evidence against gradient-masking?\\n\\n3. Blackbox accuracy seems to be slightly worse than white-box. Is this a sign that there is some gradient masking going on? \\n\\n4. For how many iterations was PGD run? I think this information is critical. How many random restarts? There is some recent work (https://arxiv.org/abs/1810.12042) that indicates large number of restarts/iteration steps might be necessary for a meaningful evaluation\\n\\n5. Why not baseline against adversarial logit pairing? (investigations by third parties have shown that while ALP does not help as much as claimed with Imagenet, it does help with CIFAR and MNIST). \\n\\nThe evidence against gradient masking given in the paper is also presented by ALP (https://arxiv.org/abs/1803.06373). But, this paper (https://arxiv.org/abs/1807.10272) shows that these signs may very well be present in defenses that rely on gradient obfuscation. \\n\\nOverall, there are no theoretical guarantees and I am not convinced that there are actually any gains compared to ALP/other SOTA defenses... especially with the fact that the possibility of gradient obfuscation has not been fully explored, and that most experiments are limited to MNIST!\", \"title\": \"Experimental Evaluation Not Convincing\"}",
"{\"title\": \"Improving the robustness of deep Networks by modeling the manifold of hidden representations is original, efficient and well motivated\", \"review\": \"The method works by substituting a hidden layer with a denoised version.\\nNot only it enable to provide more robust classification results, but also to sense and suggest to the analyst or system when the original example is either adversarial or from a significantly different distribution.\\nImprovements in adversarial robustness on three datasets are significant.\\n\\nBibliography is good, the text is clear, with interesting and complete experimentations.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"good work\", \"review\": \"In this paper, the authors proposed a fortified network model, which is an extension to denoising autoencoder. The extension is to perform the denoising module in the hidden layers instead of input layer. The motivation of this extension is that the denoising part is more effective in the hidden layers. Overall, this extension is quite sensible, and empirical results justify the utility of this extension. The major issue, which was left as an open question in the end of Section 3, is that when and where to use fortified layers. The authors discussed this issue, but did not solve this issue. Nevertheless, I do believe solving this issue requires a sequence of papers. Overall the paper reads very well, but there are a number of minor places to be improved.\\n\\n \\n(1) a grammar error at \\\"provide a reliable signal of the existence of input data that do not lie on the manifold on which it the network trained.\\\"\\n\\n(2) a grammar error at \\\"This expectation cannot be computed, therefore a common approach is to to minimize the empirical risk\\\"\\n\\n(3) The sentence \\\"For a mini-batch of N clean examples, x(1), ..., x(N), each hidden layer h(1)_k, ..., h(N)_k is fed into a DAE loss\\\" is a little confusing to me. \\\"h(1)_k, ..., h(N)_k\\\" is only for one hidden layer, rather than \\\"each hidden layer\\\". Right?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SkxXCi0qFX | ProMP: Proximal Meta-Policy Search | [
"Jonas Rothfuss",
"Dennis Lee",
"Ignasi Clavera",
"Tamim Asfour",
"Pieter Abbeel"
] | Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies.
This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights, we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both the pre-adaptation and adapted policies during meta-policy search, the proposed algorithm enables efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance. | [
"Meta-Reinforcement Learning",
"Meta-Learning",
"Reinforcement-Learning"
] | https://openreview.net/pdf?id=SkxXCi0qFX | https://openreview.net/forum?id=SkxXCi0qFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r3NxQ3BiQkq",
"S1eNM6U9x4",
"BylH5tG_gE",
"H1g9ZwLWeE",
"H1eJ4OsokE",
"r1lq0oIiyV",
"HyginiQ60m",
"ByxeiCr3C7",
"r1e_2SCDAQ",
"SyeydrCDAX",
"HJlP-mCDAm",
"B1lMrTR7Am",
"SyewFM2lRX",
"B1xzeWBWp7",
"SJgxb2j16m",
"HklUqmw637",
"r1gzs8Eth7",
"SkxFayudnX",
"BylRj7uw3Q",
"HyxFKoWI2Q",
"Ske9gRGSn7",
"B1gEq3VN3Q",
"BkgQ8duX3m",
"H1lnbxMbn7",
"rklCmE7ai7",
"HylBEICnom",
"BJgVCjFGo7",
"SkeoPJlncX"
],
"note_type": [
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1644570026949,
1545395468420,
1545247117368,
1544804098396,
1544431654564,
1544412113938,
1543482291121,
1543425687952,
1543132591970,
1543132519510,
1543131902903,
1542872377866,
1542664830919,
1541652713722,
1541549048434,
1541399437619,
1541125786088,
1541074880968,
1541010342383,
1540918145124,
1540857330293,
1540799628071,
1540749386967,
1540591619588,
1540334630428,
1540314669278,
1539640268390,
1539207011465
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"~Ankesh_Anand1"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/AnonReviewer2"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper882/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper882/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"A typo: Equation(2) should log\\\\pi\\\\theta instead of log\\\\pi\\\\theta'\", \"q1\": \"Figure 4, why are the verticle axis of the top three plots the average return but the title is gradient variance?\", \"q2\": \"In Figure 5,\\n In every method, is the pre-update policy just the random policy or the policy after meta-training?\\n Are all trajectories of post-update taken after three inner adaptation steps?\", \"q3\": \"I am wondering about the advantage of multiple outer updates. As described in Algorithm 1, a full update contains one inner update + multiple outer updates (meta-update) without re-sampling. However, I think multiple outer updates with the same sample = one outer update with a larger learning rate, right?\", \"title\": \"Questions about Figure 4-5\"}",
"{\"title\": \"LVC-TPRO\", \"comment\": \"Thanks a lot for implementing the proposed method in PyTorch and sharing your insights. For a subset of the environments we have results have results for LCV-TRPO. While LVC-TRPO seems to be better in in the FwdBack environments, it performs slightly worse than MAML-TRPO in the AntRandDir environment.\\nFrom a mathematical standpoint, MAML is more biased than LVC, because the H_1 RL-hessian term is missing in MAML when compared to the hessian estimates of LCV. We believe that one the following may explain the findings.\", \"hypothesis_1\": \"Since MAML doesn\\u2019t do any pre-adaptation credit assignment, i.e. the \\\\nabla J_pre term is 0, the MAML meta-gradients are easier to estimate and might exhibit less variance. In meta-environments where task identification is trivial, pre-adaptation credit assignment plays a minor role and the higher variance of LVC might make it less sample-efficient compared to MAML. However, this hypothesis can hardly explain fact that LCV consistently outperforms MAML with VPG as outer optimizer.\", \"hypothesis_2\": \"As the monotonic policy improvement theory theory suggest, we must 1) account for changes in the pre-update action distribution and 2) bound changes in the pre-update state visitation distribution. Based on this, TRPO makes the maximally large step while fulfilling these conditions / constraints. However, as we point out in section 6, in Meta-RL, we have to fulfill the conditions for both the pre-adaptation and post-adaptation policy. If we just use TRPO as the outer optimizer, it will just fulfill 1) and 2) for the post-adaptation policy. As a result, the step and direction may be too large in order to suffice the conditions for the pre-adaptation policy. Since MAML ignores the pre-adaptation sampling distribution anyway, this would not be too problematic for MAML. But in case of LVC, this might hurt performance and outwage the benefits of LVC.\\n\\nIn environments where task-identification is non-trivial (i.e. when we have non-dense rewards) such as in Fig. 5, pre-adaptation credit assignment plays a major role. In such environments, the average performance of MAML is far behind LVC. In general, the Meta-RL benchmarks we have right now in Fig. 3 are still pretty naive. It would be nice to see some more results in this direction with harder Meta-RL tasks etc. Please let us know if you have any experiment results in this direction and what you think about the hypotheses. As we have a proper way of interpreting the LCV-TRPO results, we are happy to include corresponding results in the paper. Thanks a lot for your support.\"}",
"{\"comment\": \"Hi authors,\\n\\nI used PyTorch to implement LVC on TRPO, and compare it with MAML+TRPO. It turns out that LVC has a lower variance and a worse average reward. When implement LVC on PPO, which has a name ProMP in your paper, it has a lower variance than MAML+TRPO and a higher average reward. \\n\\nThis indicates that LVC can reduce variance, but the bias of LVC has a significant bad effect on performance. The reason ProMP has a better performance is probably because of the advantage of PPO over TRPO.\\n\\nI know you reported VPG based comparison, but VPG is well know for its instability, thus not a good testbed here.\\nEven if so, it's hard to see LVC+VPG is better than MAML+VPG.\\n\\nI suggest you to report TRPO based results if you did experiments on it.\", \"title\": \"weak support to the claim\"}",
"{\"metareview\": \"The paper studies the credit assignment problem in meta-RL, proposes a new algorithm that computes the right gradient, and demonstrates its superior empirical performance over others. The paper is well written, and all reviewers agree the work is a solid contribution to an important problem.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work, novel contribution\"}",
"{\"title\": \"Algorithm 1\", \"comment\": \"Thank you for your comment.\\n\\nOne of the main advantages of ProMP is that it can perform multiple meta-gradient steps without re-sampling trajectories. The mechanisms in the ProMP objective (likelihood ratio + clipping + KL penalty) stabilize the meta-optimization and ensure that no policy collapse happens when doing multiple gradient steps with the same data. This makes the algorithm more sample-efficient and faster in compute time.\\n\\nHence, the indentation of step 7-10 in algorithm 1 is intended since we only want to sample trajectories once and then perform N meta-gradient steps with it. In practice, we usually set N=3 or N=5. Due to page constraints in the paper, we may have explained this only insufficiently. We aim to better clarify this in the camera-ready version of the paper.\"}",
"{\"comment\": \"Steps 7-10 should occur for all the values of n right, not just n=0? Thus steps 7-10 might be need to unindented by one level.\", \"title\": \"Typo / Indentation issue in Algorithm 1?\"}",
"{\"title\": \"Updates in the paper\", \"comment\": \"Addressing the reviewers concerns and suggestions, we added further experimental results and explanations to the paper. In summary, the following changes have been made:\\n\\n1) We extended the gradient variance experiments to more iterations and three environments. In accordance, we updated the respective experiment section.\\n\\n2) We included DiCE into our performance benchmarks. Since there were already many curves in the benchmark figure, we split it into two figures. One figure with the full algorithms and the other with focus on the underlying gradient estimators\\nFig. 2: ProMP, MAML-TRPO, E-MAML-TRPO, MAML-VPG\\nFig. 3: LVC-VPG, DiCE-VPG, MAML-VPG, E-MAML-VPG\\n\\n3) In section 5, we extended the explanation why the original RL-MAML implementation does not perform any pre-adaptation credit assignment\\n\\n4) We fixed minor typos and notational inconsistencies throughout the paper\"}",
"{\"title\": \"Re: Questions about experiments\", \"comment\": \"Thanks a lot for the excellent comment!\\n\\nThe returns in Fig. 2 are estimated by sampling a batch of tasks from the task distribution and rolling out a number of trajectories with the adapted policy in each of the tasks. Then we average over all sampled tasks, trajectories and seeds.\\n\\nRegarding meta-testing, gradient-based meta-learning methods just provide guarantees of adaptation after one gradient step (or a few if you meta-trained for it). This is exactly what we measure our our benchmarks on throughout the meta-training process. Nevertheless, we agree that it is important and interesting to evaluate how the method performs after more than one adaptation step is performed, and how it behaves in out-of-distribution tasks. These results will be added in the camera ready, and we can clarify any questions regarding these experiments.\\n\\nThe FwdBackw environments have been used as benchmark for meta-learning papers [1, 2]. To ensure that the reader is familiar with at least some of the meta-environments, we included the in our benchmarks. The task distribution in the FwdBackw environments just has two tasks in its support, the tasks drawn during meta-training and meta-testing are identical. We fully agree, this is far away from optimal for evaluating the meta-generalization capabilities of the algorithms. Hence, we can only draw conclusions w.r.t. the meta-training performance in case of the FwdBackw environments. Finally, we emphasize the importance of better meta-RL benchmark environments and would highly welcome any work in this direction.\\n\\n[1] Chelsea Finn, Pieter Abbeel, Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML 2017.\\n[2] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A Simple Neural Attentive Meta-Learner. In ICLR 2018.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for the valuable feedback. Indeed, the LVC-VPG learning curves exhibit high noise. We believe that the bias introduced by LVC makes the learning less stable when VPG is used as outer optimizer. However, the mechanisms in ProMP that ensure proximity w.r.t. to the policy\\u2019s KL-divergence may counteract these instabilities, explaining why ProMP works so well in practice.\\n\\nFollowing the suggestion of the reviewer we have extended our comparison and the analysis of the variance. In particular, we added learning curves for DiCE-VPG to the benchmarks in section 7.1. Furthermore, we have extended the analysis of the variance to more environments and and more training iterations. The result show that LVC has a substantially higher data-efficiency and its meta-gradients consistently exhibit a lower variance than DiCE. \\n\\nWe hope that this results further underpin the soundness of our claims and show the importance of our method.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for the valuable feedback provided. As suggested by the reviewer, we have added more experiments in order to underpin the advantage of the LVC over the DiCE estimator.\\n\\nIn particular, we have included DiCE-VPG in the benchmark section 7.1. Results in all environments demonstrate that the learning performance of DiCE is inferior to LVC. In many of the environments, DiCE learns very slowly when compared to the other methods. We ascribe the poor learning performance of DiCE to the high variance of its meta-gradient estimates. \\nTo further strengthen this hypothesis, we have extended the meta-gradient variance experiments to more environments and more training iterations.\\n\\nAll in all, the bias introduced by LVC seems to make the learning a little bit more unstable when VPG is used as outer optimizer. However, the gains in data-efficiency substantially outwage this disadvantage. Ultimately, the mechanisms in ProMP that ensure proximity w.r.t. to the policy\\u2019s KL-divergence may counteract these instabilities during training, giving us a stable and efficient meta-learning algorithm.\\n\\nWe hope that the experiments and discussions, added to the paper, further substantiate the soundness of our claims.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for the valuable feedback. The main concern of the reviewer is that the difference in performance between using equation (4) and (3) is not as significant as we claim.\\n\\nFirst, we want to clarify what might be a misunderstanding. The results labeled as MAML in Fig. 2 are obtained using the original MAML implementation, which, due to the use of a normal score function estimator, computes the wrong meta-gradient instead of the one given by Eq. 3 (you can find a discussion on this in section 5, further elaborated in the appendix). We have added some further explanations in section 5 to further clarify this. \\nOverall, here is a legend for what each name refers to: :\", \"maml\": \"no pre-adaptation credit assignment, i.e. \\\\nabla J = \\\\nabla J_post, i.e. how MAML was implemented for the original MAML paper (but this actually doesn\\u2019t follow the math correctly)\", \"e_maml\": \"naive pre-adaptation credit assignment as in Eq. 3\", \"dice\": \"(unbiased but high variance) credit assignment as in Eq. 4\", \"lvc\": \"(slightly biased but low variance) credit assignment as in Eq. 4\", \"promp\": \"our final method (described in section 6)\\n\\nWith the nomenclature clarified, let us highlight how our experiments showcase the difference w.r.t the credit assignment.\\n\\nFirst, we have added a plot showing the effect of each formulation when the same optimizer is used (see Figure 3). We performed this experiment as an ablation study in order eliminate possible influences of the outer optimizer, i.e. PPO and TRPO. These results consistently show the superior performance of the low variance version of Eq. 4 (LVC-VPG) when compared with Eq. 3 (E-MAML-VPG). Due to the high variance nature of Eq. 4 (DiCE-VPG) its performance saturates below the other formulations. This effect is discussed in section 7.2.\\n\\nSecond, the experiment in Fig. 5 illustrates the differences w.r.t. the meta-learned pre-adaptation policy behavior. Since MAML does not assign any credit to the pre-adaptation policy, it fails so solve the task. Though, E-MAML (Eq. 3) is able to solve the task, it does not learn an effective task identification policy since it can only assign credit to batches of pre-adaptation trajectories. In contrast, LVC (Eq. 4) can assign credit to individual pre-adaptation trajectories which is reflected by its superior task identification behavior.\\n\\nThird, the fact that the results of MAML and E-MAML in Fig. 2 and Fig. 3 are comparable underpins the ineffectiveness of the naive credit assignment: there is little difference between zero pre-adaptation credit assignment and the credit assignment of E-MAML (discussed in section 4).\\n\\nFinally, the experiment in Fig. 6 depicts computed gradients and convergence properties corresponding to Eq. 3 and Eq. 4 in a simple toy environment. Once more, this experiment shows the advantage of formulation I over formulation II. \\n\\nWe have clarified this in the experiment sections of the paper, and will be happy to add further \\nclarifications if the reviewer requests it.\"}",
"{\"title\": \"Computation of meta-gradient variance\", \"comment\": \"Thanks a lot for your effort of reviewing and understanding our code! For experimentation purposes the code you refer to computes the gradients / gradient-variance of both the inner gradients and the meta-gradients. The respective statistics are logged in the lines 121-123 and 135-138 of meta_trainer_gradient_variance.py. What we report in Fig. 3 is the \\u201cMeta-GradientRStd\\u201d and corresponds to the meta-gradients which are used to update the original policy parameters \\\\theta (as you suggested). So, everything should comply with the experiment description in the paper.\"}",
"{\"comment\": \"Given Hessian decomposition \\\\nabla_\\\\theta^2 J^{inner}(\\\\theta) = H_1 + H_2 + H_{12} + H_{12}^T and your proposed approximation \\\\E[\\\\nabla_\\\\theta^2 J^{LVC}(\\\\theta)] = H_1 + H_2, what's the gradient should be evaluated? I believe that since you are arguing that you have a lower variance estimation of \\\\nabla_\\\\theta^2 J^{inner}(\\\\theta) which is \\\\E[\\\\nabla_\\\\theta^2 J^{LVC}(\\\\theta)], the gradient to be evaluated should be the gradient resulting from Hessian-vector product, namely, the gradient used to update the original parameter of policy \\\\theta.\\nWhat I see in your codes is you are actually evaluating variance of \\\\nabla_\\\\theta J^{LVC}(\\\\theta), which is a mid-product of Hessian. It has no relation to the gradient of interest.\", \"title\": \"Gradient computation seems wrong in your codes for the variance of gradient experiments\"}",
"{\"title\": \"an interesting trial to correct the current algorithm, but weak support to the claim\", \"review\": \"In this paper, the authors investigate the gradient calculation in the original MAML (Finn et al. 2017) and E-MAML (Al-Shedivat et al. 2018). By comparing the differences in the gradients of these two algorithms, the authors demonstrate the advantages of the original MAML in taking the casual dependence into account. To obtain the correct estimation of the gradient through auto-differentiation, the authors exploit the DiCE formulation. Considering the variance in the DiCE objective formulation, the authors finally propose an objective which leads to low-variance but biased gradient. The authors verify the proposed methods in meta-RL tasks and achieves comparable performances to MAML and E-MAML. \\n\\n\\nAlthough the ultimate algorithm proposed by this paper is not far away from MAML and E-MAML, they did a quite good job in clarify the differences in the existing variants of MAML from the gradient computation perspective and reveal the potential error due to the auto-differentiation. The proposed new objective and the surrogate is well-motivated from such observation and the trade-off between variance and bias. \\n\\n\\nMy major concern is how big the effect is if we use (3) comparing to (4) in calculate the gradient. As the authors showed, the only difference between (3) and (4) is the weights in front of the term \\\\nabla_\\\\theta\\\\log\\\\pi_\\\\theta: the E-MAML is a fixed weight and the MAML is using a adaptive through the inner product. Whether the final difference in Figure 4 between MAML and E-MAML is all caused by such difference in gradient estimation is not clearly. In fact, based on the other large-scale high-dimension empirical experiments in Figure 2, it seems the difference in gradient estimator (3) and (4) does not induced too much difference in final performances between MAML and E-MAML. Based on such observation, I was wondering the consistent better performance of the proposed algorithm might not because the corrected gradient computation from the proposed objective. It might because the clip operation or other components in the algorithm. To make a more convincing argument, it will be better if the authors can evaluate different gradient within the same updates.\\n\\nI am willing to raise my score if the author can address the question.\", \"minor\": \"The gradients calculation in Eq (2) and (3) are not consistent with the Algorithm and the appendix.\", \"the_notation_is_not_consistent_with_common_usage\": \"\\\\nabla^2 is actually used for denoting the Laplace operator, i.e., \\\\nabla^2 = \\\\nabla \\\\cdot \\\\nable, which is a scalar.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"Hello, I have some questions about experiments in the paper.\\n\\nI'd like to clarify my understanding of the experimental setup for results in Fig. 2.\\nAre the returns plotted on the y-axis averaged over the the tasks that were sampled *for meta-training* during the most recent update? If so, then how/where is the performance of meta-testing measured?\", \"for_fwdback_environments\": \"If appears that the task distribution is rather simple, containing only two tasks. Is this correct?\\nIf yes, then during meta-training the method quickly sees all possible tasks from the training distribution and so no task at any meta-test time would be novel at all. How can one interpret the utility of these environments for evaluating Meta-RL methods?\\n\\nThanks.\", \"title\": \"Questions about experiments\"}",
"{\"title\": \"Review\", \"review\": \"In this paper, the author proposed an efficient surrogate loss for estimating Hessian in the setting of Meta-reinforcement learning (Finn.et al, 2017), which significantly reduce the variance while introducing small bias. The author verified their proposed method with other meta-learning algorithms on the Mujoco benchmarks. The author also compared with unbiased higher order gradient estimation method-DiCE in terms of gradient variance and average return.\\n\\nThe work is essentially important due to the need for second-order gradient estimation for meta-learning (Finn et al., 2017) and other related work such as multi-agent RL. The results look promising and the method is easy to implement. I have two detail questions about the experiment:\\n\\n1) As the author states, the new proposed method introduces bias while reducing variance significantly. It is necessary to examine the MSE, Bias, Variance of the gradient estimatorsquantitatively for the proposed and related baseline methods (including MAML, E-MAML-TRPO, LVC-VPG, etc). If the bias is not a big issue empirically, the proposed method is good to use in practice.\\n\\n2) The author should add DiCE in the benchmark in section 7.1, which will verify its advantage over DiCE thoroughly.\\n\\nOverall this is a good paper and I vote for acceptance.\\n\\n\\nFinn, Chelsea, et al. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" ICML 2017.\\n\\nFoerster, Jakob, et al. \\\"DiCE: The Infinitely Differentiable Monte-Carlo Estimator.\\\" ICML 2018.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Strong paper, Strong accept\", \"review\": \"The paper first examines the objective function optimized in MAML and E-MAML and interprets the terms as different credit assignment criteria. MAML takes into account the dependences between pre-update trajectory and pre-update policy, post-update trajectory and post-update policy by forcing the gradient of the two policies to be aligned, which results in better learning properties.\\nThought better, the paper points out MAML has incorrect estimation for the hessian in the objective. To address that, the paper propose a low variance curvature estimator (LVC). However, naively solving the new objective with LVC with TRPO is computationally prohibitive. The paper addresses this problem by proposing an objective function that combines PPO and a slightly modified version of LVC.\", \"quality\": \"strong, clarity:strong, originality:strong, significance: strong,\", \"pros\": [\"The paper provides strong theoretical results. Though mathematically intense, the paper is written quite well and is easy to follow.\", \"The proposed method is able to improve in sample complexity, speed and convergence over past methods.\", \"The paper provides strong empirical results over MAML, E-MAML. They also show the effective of the LVC objective by comparing LVC over E-MAML using vanilla gradient update.\", \"Figure 4 is particularly interesting. The results show different exploration patterns used by different method and is quite aligned with the theory.\"], \"cons\": [\"It would be nice to add more comparison and analysis on the variance. Since LVC is claimed to reduce variance of the gradient, it would be nice to show more empirical evidences that supports this. (By looking at Figure 2, although not directly related, LVC-VPG seems to have pretty noisy behaviour)\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Thanks for the suggestions\", \"comment\": \"Thanks for your suggestions on how to improve the paper! Since the performance gap between DiCE as LVC was so substantial, we had little doubts about the validity of the argument and thus cut down the number of experiments w.r.t. to DiCE. We agree that providing a LVC vs. DiCE comparison just in one environment is suboptimal. Hence, we will consider running more experiments with DiCE-MAML and adding the respective results to the paper.\"}",
"{\"comment\": \"Dear authors, well, I'm not denying your contribution but instead trying to convey a feeling of a reader in this field(might be wrong), that is I feel like it would be more convincing if you could revise the following points:\\n\\n1. I understand that explicitly evaluate variance can take more time and computations than policy optimization, but it still needs to be conducted on at least two environments thoroughly, instead of evaluating some iterations on one environment.\\n\\n2. It is desired and scientifical to compare reward learning curves of MAML-DICE with original MAML and MAML-LVC in different environments, instead of ignoring MAML-DICE. Because both DICE and LVC are also ways of reducing variance.\\n\\nThanks!\", \"title\": \"suggestions on comparisons\"}",
"{\"title\": \"LVC vs DiCE gradient estimator\", \"comment\": \"Unfortunately, neither computational resources nor space in the paper are unlimited. In order to estimate the variance of the meta-policy gradients, we must compute the meta-policy gradients across many batches of tasks and trajectories. Computing the data for the plot takes 10 - 20 time (and compute expenses) than running DiCE MAML or LVC normally.\\nSince the performance gap between DiCE and LVC is so substantial we were confident that one plot suffices to convince the readers of this arguments, which is also backed by the theoretical elaborations.\\n\\nWe suspect that the increasing variance for LVC is mainly due to the fact that the meta-policy search steers the policy faster towards a parameter configuration where the trace of the Fisher Information Matrix is large, i.e. a small change in parameters causes a large change in the policy distribution which allows good adaptation. However, in regions where the policy distribution is more sensible the parameters, monte-carlo gradient estimates naturally exhibit a larger variance. Throughout the meta-training we observe that the KL-divergence between pre- and pos-update policy grows, indicating increasing adaptability of the policy. For tasks such as HalfCheetahRandomDirection a high adaptability of the policy is necessary to be able to shift the policy distribution with one gradient step between running forward and backward. In conclusion, our intuition is that the faster increase in meta-gradient variance in case of LVC is mainly due to the fact that LVC learns faster. When you look at the plots, there seems to be a positive correlation between the slope of learning curve and the meta-gradient variance increase. \\n\\nWe also got in contact with the authors of the DiCE paper which told us that they\\u2019d have the same problems with the high variance of the DiCE gradients. \\nIn our experiments with DiCE-MAML we observed that it performs substantially worse than LVC across different environments. All in all, we a highly confident that the LVC leads to superior performance over DiCE. We encourage you to run your own experiments and share your insights with us.\"}",
"{\"comment\": \"It is not convincing to compare LVC and MAML+DICE only on one environment, with only a little portion of timesteps, far from converging. Especially, considering at first DICE+MAML is better despite higher variance, after some iterations, the variance of DICE+MAML and LVC are becoming closer and closer.\", \"title\": \"comparisons are not convincing\"}",
"{\"title\": \"re: answer regarding experiment settings\", \"comment\": \"In Figure 2, MAML-TRPO corresponds to the original implementation that Finn et al (2017) used in their paper. So, it does not have the correct hessian of the expected return (it just has H_1).\\n\\nA proper comparison of gradient variance is made in section 7.2 (Figure 3), where we compare the \\\"correct\\\" hessian (DICE) vs our low variance hessian (LVC).\"}",
"{\"comment\": \"then is MAML-TRPO used in your experiments in Fig.2 implemented with or without hessian, I mean is it the original implementation as Finn et al (2017)? Considering you compare gradient variance of DICE and LVC, we expect it is implemented with correct hessian so as to validate your point?\", \"title\": \"re: answer regarding experiment settings\"}",
"{\"comment\": \"Thanks for your responding!\\n\\nA question about your experiment settings, namely, what does the 'MAML' in Figure 2. means? MAML+TRPO or MAML+TRPO+DICE?\", \"title\": \"thanks for reply and question about experiment settings\"}",
"{\"title\": \"why not just implementing the meta-gradients directly\", \"comment\": \"Indeed, it would be possible to code up the analytical expression of the LVC gradients (i.e. the terms H_1 and H_2 as derived in Equation 99 and 100). However, to do so is tedious and error prone. Researchers / engineers usually prefer to implement a \\u201csurrogate loss\\u201d such as the LVC objective which is more elegant and clean. Furthermore, it is not straightforward to code up the LVC gradients with tensorflow since tf.gradients and tf.hessians does not return a batch of gradients / hessians but instead the sum of gradients over the batch (see https://www.tensorflow.org/api_docs/python/tf/gradients). Thus we are not aware of any efficient way of computing H_2 (outer product of grad log_probs) in batch.\"}",
"{\"comment\": \"Hey, thanks for your response.\\n\\nIn your submission, you argue that DICE need calculation of the outer product of sum, which leads to high variance, then you guys propose a method to somehow get rid of it, what I don't understand is why not use something like `'tf.hessians' to analytical compute the gradient.\", \"title\": \"thanks and question about hessian\"}",
"{\"title\": \"typo in equation 43\", \"comment\": \"Thanks for pointing out the typo. Indeed, there should be no expectation around J_inner, since J_inner is already an expectation itself. This has no mathematical implications but is unnecessary. We have fixed the denoted typo as well as further minor typos we have found in the appendix. The changes will appear in the pdf as soon as we are able to update it during the rebuttal.\"}",
"{\"comment\": \"Hey, is there a typo in equation 43? I mean on the left side there should be a Hessian instead of an expectation of a Hessian.\", \"title\": \"typo in equation 43\"}"
]
} |
|
HyNmRiCqtm | CDeepEx: Contrastive Deep Explanations | [
"Amir Feghahati",
"Christian R. Shelton",
"Michael J. Pazzani",
"Kevin Tang"
] | We propose a method which can visually explain the classification decision of deep neural networks (DNNs). There are many proposed methods in machine learning and computer vision seeking to clarify the decision of machine learning black boxes, specifically DNNs. All of these methods try to gain insight into why the network "chose class A" as an answer. Humans, when searching for explanations, ask two types of questions. The first question is, "Why did you choose this answer?" The second question asks, "Why did you not choose answer B over A?" The previously proposed methods are not able to provide the latter either directly or efficiently.
We introduce a method capable of answering the second question both directly and efficiently. In this work, we limit the inputs to be images. In general, the proposed method generates explanations in the input space of any model capable of efficient evaluation and gradient evaluation. We provide results showing the superiority of this approach for gaining insight into the inner representation of machine learning models. | [
"Deep learning",
"Explanation",
"Network interpretation",
"Contrastive explanation"
] | https://openreview.net/pdf?id=HyNmRiCqtm | https://openreview.net/forum?id=HyNmRiCqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SygmEhKNgV",
"BygO3b47kE",
"ryxftKl0RX",
"rJgSW0AK0Q",
"H1x6HWaF07",
"H1eiGWpt0m",
"HyxoFlTF0X",
"BJlWvUII0m",
"BJg3248ICQ",
"HklmqfFmAX",
"B1g1tMt7AX",
"HyxLHfF7RQ",
"B1xsXfY7RQ",
"S1exWztQR7",
"rkg0nroYn7",
"ryxtBh5_h7",
"rJx5ghGShX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545014315038,
1543877039849,
1543534969544,
1543265789448,
1543258436660,
1543258387146,
1543258243032,
1543034457434,
1543034036506,
1542849163005,
1542849142827,
1542849086034,
1542849059086,
1542849016444,
1541154229604,
1541086273124,
1540856818325
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper881/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/Authors"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper881/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Paper studies an important problem -- producing contrastive explanations (why did the network predict class B not A?). Two major concerns raised by reviewers -- the use of one learned \\\"black-box\\\" method to explain another and lack of human-studies to quantify results -- make it very difficult to accept this manuscript in its current state. We encourage the authors to incorporate reviewer feedback to make this manuscript stronger for a future submission; this is an important research topic.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Post-response comments\", \"comment\": \"I thank the authors for addressing my comments and my follow-up question. While no extra experiments on birds/cancer-related datasets are required, a good motivation should be provided to highlight how relevant the analyzed contrastive setting is. Especially given its divergence towards traditional methods for visual explanations which focus on explaining \\\"why classA?\\\"\\n\\nHaving gone through the manuscript, and seen the other reviews, I agree on that the point raised by the other reviewers, regarding the used of one black-box model to predict another black-box model.\\n It is unclear to me how much of the appealing results come from the GAN model and how much come from truly interpreting the network.\\n\\n\\nWhile I think the manuscript has improved w.r.t. its initial state, the issue raised by other reviewers and the lack of quantitative are too critical as warrant a publication at ICLR'19. \\nI consider addressing this two issues will strengthen significantively the manuscript.\"}",
"{\"title\": \"Comments on Author Response\", \"comment\": \"I agree the metrics in place for evaluating likelihood-based or point-sample based generative models may not be truly reflective of what is being evaluated but having a sense of where these metrics lie for the generator being used here would have made the paper stronger.\\n\\nIn addition, as mentioned in my review above -- with respect to contrastive explanations, experiments on datasets where distractor classes (y_probe) are present in addition to the class interest (y_true) seem important to me.\"}",
"{\"title\": \"Examples of why this research is important\", \"comment\": \"Could you indicate some concrete examples of scenarios where answering the \\\"why A and not B?\\\" question is critical?\\n\\n\\nLet me elaborate a little bit.\\nIn medical diagnosis, what we are doing is called differential diagnosis. \\nOne question is \\\"what are the symptoms of a flu.\\\" In differential diagnosis, one asks how can i tell a flu from a cold? or why is this image basal cell carcinoma instead of melanomia?\\n\\nIn the bird watching scenario, one could ask how can you tell a grebe from a duck (Both are waterbirds, but the grebe has a long neck). One could also ask why is the a Western Grebe vs. A Clark's Grebe (because the western grebe has a dark patch below its eye).\\n\\nWe concentrate on MNIST in this paper because the average computer scientist reading it has knowledge of the digits but not necessarily bird or skin cancer.\"}",
"{\"title\": \"Explanations do not come from Generative models\", \"comment\": \"You argue, gradient descent on the input image is not providing a satisfactory explanation because of adversarial effects, and I agree.\\nHowever even with a VAE or GAN, it is unclear how much of the explanation comes from what the network thinks vs what the generative model thinks.\", \"response\": \"We added another section to our supplementary materials, showing the effect of generative models on the explanations.\"}",
"{\"title\": \"Other generative models\", \"comment\": \"Finally, if the L2 distance is small this does not mean that we are close by, especially as we\\nincrease the number of dimensions. Consider https://openreview.net/pdf?id=S1xoy3CcYX figure 5. \\nThey have visualised two images with similar L2 distance, however \\\"small\\\" L2 does not mean are similar.\\nTherefore I believe it is crucial how you traverse space and that this path is short (for an appropriate\\nmeasure of length) and does not traverse multiple decision boundaries.\", \"response\": \"We agree. Yet, the L2-norm gives us an smooth transition. Traversing multiple decision boundaries is\\nnot necessarily a disadvantage. Also experiments on xGEMs shows that minimizing the length of this optimization\\nmay not lead to something sensible in the image space. One may conclude that the network has not learned \\nrelated concepts, while using our method shed more light on what concepts network has learned.\"}",
"{\"title\": \"Examples of why this research is important\", \"comment\": \"comment: Could you indicate some concrete examples of scenarios where answering the \\\"why A and not B?\\\" question is critical?\\n\\t\\nFor instance, in medical diagnosis contrasting the results helps the doctor to not miss details or explains the\\n\\toutcome to patient. \\n\\tOther use can be in training humans. Suppose you have a network trained on classifying birds. It can be used to \\n\\ttrain amateur bird watchers. For instance, different types of Warblers are very similar to each other for amateur watcher. \\n\\tThey can provide a picture of a bird to network and ask what needs to be changed in the image to change the bird's label.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks you for your clarifications.\\n\\nYou argue, gradient descent on the input image is not providing a satisfactory explanation because of adversarial effects, and I agree. \\nHowever even with a VAE or GAN, it is unclear how much of the explanation comes from what the network thinks vs what the generative model thinks. \\n\\nI would highly recommend not abusing other methods in a comparison. It is fine to discuss their limitations and argue for the need for a new approach. However, it would be more useful to use the space now devoted to this skewed comparison to highlight the effectiveness of the proposed approach.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Thank you for your response, I should have been more clear in my original review.\\n\\nThe reason I suggested the additional baseline is that I expect it to provide results of similar (visual) quality to the proposed approach. \\n\\nTherefore, the important issue is:\\n\\\"Does the proposed approach truly visualize what the network thinks?\\\"\\n\\nThe approach consists of two parts. The first part is gradient descent through the network, \\\"i.e. what the network thinks\\\", the second part is continuing this all the way through the GAN to generate the right image. \\n\\nIf we would not use the GAN, but use standard gradients on the images, we can find an image that has even smaller L2 distance. However, I think that most of us would agree that these adversarial approaches do not visualise what the network thinks.\", \"now_the_question_becomes\": \"why does the GAN visualise what the network thinks, where gradient descent does not?\\nAssume the following generative network is used (fully connected network):\\n- 100 dimensional code\\n- The weights are the top 100 PCA directions.\\nWe know that we can implement PCA using an auto-encoder. So this is a neural network generative model. \\n\\nWhile the auto-encoder is not as good as a GAN as a generative model, it is unclear to me why would this approach would be worse at visualizing what the network thinks. It is crucial that this aspect can be experimentally validated. \\n\\n(There is a similar problem with deep-dream style approaches: https://distill.pub/2017/feature-visualization/ the latest iterations are generating amazing pictures, but it is unclear how much of this comes from the network and how much comes from the other tricks).\\n\\nFinally, if the L2 distance is small this does not mean that we are close by, especially as we increase the number of dimensions. Consider https://openreview.net/pdf?id=S1xoy3CcYX figure 5. They have visualised two images with similar L2 distance, however \\\"small\\\" L2 does not mean are similar. Therefore I believe it is crucial how you traverse space and that this path is short (for an appropriate measure of length) and does not traverse multiple decision boundaries.\"}",
"{\"title\": \"Answers part 2\", \"comment\": \"Figure 5 shows that multiple descision boundaries are crossed. Is\\nthis behaviour desired? It seems very likely to me that it should be\\npossible to move from 9 to 8 while staying on the manifold without\\npassing through 5? Since the method takes a detour through 5s is this\\ncommon behaviour?\", \"response\": \"It is not about desirability. This figure shows that without\\nadding constraints, the method may go through other parts of manifold,\\nresulting in a wrong speculation image. That is, xGEMs will not find a\\npoint on the boundary between the two desired classes, but somewhere else\\n(where the two classes have equal probability, but lower than some third\\nclass). The classification space is complicated enough that \\\"staying near\\\"\\nthe input is not sufficient.\\n\\nFigure 5a, top shows that the network knows about the importance of the\\ncurves, if we impose our constraints. If we remove the constraints (like\\nin xGEMs), we lose this explanation and revert to something meaningless,\\nas the optimization path explores other classes.\\n\\nNote that these are just paths of the optimizer (only the point at the end\\nof the optimization path is the \\\"answer\\\"). However, they demonstrate the\\ndifficulty with optimization in this complex decision space. Figure 5b\\nshows the problem, where xGEMs (with the constraints) fails to keep the\\nanswer in the area of high likelihood for either of the classes.\"}",
"{\"title\": \"Answers part 1\", \"comment\": \"I do not understand the second paragraph of section 4.1. As\\nmentioned in the paper, these other methods were not designed to generate\\nthis type of application. Therefore the comparison could be considered\\nunfair.\", \"response\": \"The only method we found before submitting the paper which was\\nable to answer the contrastive explanation was xGems. However, other\\nmethods could be shoe-horned into trying to answer the question of \\\"why A\\nand not B?\\\" and so we figured we should demonstrate that they were not\\nsufficient and that a new method (like ours) was necessary.\"}",
"{\"title\": \"Answers\", \"comment\": \"The reported results are mostly qualitative. I find the set of\\nprovided qualitative examples quite reduced. In this regard, I encourage\\nthe authors to update the supplementary material in order to show extended\\nqualitative results of the explanations produced by their method.\", \"response\": \"We have added a supplementary section, adding more qualitative\\nresults. Thank you for your suggestion.\", \"problem\": \"for a given example (perhaps not even from the training set),\\nwhy is it not class B?\", \"arxiv\": \"1712.06302 seem to display similar properties in their explanations\\nwithout the need of explicit constractive pair-wise training/testing. The\\nmanuscript would benefit from positioning the proposed method w.r.t. these\\nworks.\"}",
"{\"title\": \"Answers\", \"comment\": \"Section 7 in Gradcam (https://arxiv.org/pdf/1610.02391.pdf)\\nprovides a procedure to generate counter-factual explanations using\\nGradcam. Is there a particular reason the authors did not choose to adopt\\nthe above technique as a baseline?\", \"response\": \"The the proposed counter-factual experiment for GCAM produces\\n*any* counter-factual explanation, not a targeted explanation. It answers\\n\\\"why A?\\\" and not \\\"why A and not B?\\\" as we do in this paper.\"}",
"{\"title\": \"General Comment\", \"comment\": \"We thank the reviewers for their comments and reviews. We address many\\nof the comments below, including by adding additional experiments, as\\nsuggested. \\n\\nWe would like to stress again that the purpose of the method is\\nto determine how the neural network is making decisions, *not* necessarily\\nto find general distinctions in the data, although the two\\nare certainly related.\\n\\nWe have submitted a revised version of our paper. This revision contains\", \"the_following\": \"1 - Experiments with VAE instead of GAN, showing the robustness of our approach.\\n2 - Adding comparisions with Layerwise Relevance Propagation (LRP)\\n3 - Adding experiments in supplementary section using VAE instead of GAN\\nusing xGem method, showing the importance of the constraints regardless\\nof the model being used.\\n4 - adding more qualitative samples in the supplementary section.\\n5 - The structure of the networks we have used.\"}",
"{\"title\": \"Interesting idea but lacks experimental justification\", \"review\": [\"The paper proposes an approach to provide contrastive visual explanations for deep neural networks -- why the network assigned more confidence to some class A as opposed to some other class B. As opposed to the applicability of previous approaches to this problem -- the approach is designed to directly answer the contrastive explanations question rather adapting other visual saliency techniques for the same. Overall, while I find the proposed approach simple -- the paper needs to address some issues regarding the claims made and should provide more quantitative experimental results justifying the same.\", \"Apart from some flaws in the claims made in the paper, the paper is easy to follow and understand.\", \"Assuming the availability of a latent model over the images of the input distribution, the proposed approach is directly applicable and faster.\", \"The authors clearly highlight the problems associated with existing explanation modalities and approaches; ranging from ones applicable to only specific deep architectures to ones using backpropagation based heuristics.\", \"The proposed approach to generate contrastive explanations is simple and is structured along the lines of methods utilizing probe images to explain decisions -- except for the added advantage that the provided explanations are instance-agnostic due to the assumption of a latent model over the input distribution.\"], \"comments\": [\"One of the problems highlighted in the paper regarding existing explanation modalities is the use of another black-box to explain the decisions of an existing deep network (also somewhat of a black-box) which the authors claim their model does not suffer from. The proposed approach provides explanations by operating in the latent space of a learned generative model of the input distribution. The learned generator in itself is somewhat of a black-box itself -- there has been prior work indicating how much of the input distribution are GANs able to capture. As such, conditioning on a generative model to propose such contrastive explanations is to some extent using another black-box (generator) to explain the decisions of an existing one. Thus, the above claim made in the paper does not seem well-founded. Furthermore, in experiments, the paper does not provide any quantitatively convincing results to suggest the generator in use is a good one.\", \"While the authors suggest that a latent model over the input distribution needs to be trained only once and is applicable off-the-shelf for any further contrastive explanations regarding any network operating on the same dataset -- learning such a model of the input space is an overhead in itself. In this light, experiments demonstrating comparisons between GANs and VAEs as the reference generative model for explanations would have made the paper stronger (as the proposed approach relies explicitly on how good the generative model is).\", \"The paper proposes an interesting experiment to show that the proposed approach is somewhat capable of capturing slightly adversarial biases in the input domain (adding square to the top-left of images of class \\u20188\\u2019). 
While I like this experiment, I feel this has not been explored to completion in the sense of experimenting with robustness with respect to structured as well as unstructured perturbations.\", \"Typographical Errors: Section 3.1 repeats the use of D for a discriminator as well as the input distribution. Procedure 1 and Procedure 2 share the same titles -- which is slightly misleading. In addition, Procedure 1 is not referenced in the text which makes is hard to understand the utility of the same. In Section 4.1, the use of Gradcam and Lime to generate counterfactual explanations is not very clear and makes it slightly hard to follow. Citations used for Gradcam are wrong -- Sundarajan et al., 2016 should be changed to Selvaraju et al., 2017.\"], \"experimental_issues\": \"- Experimental results are provided only on MNIST and Fashion-MNIST. Since the paper focuses explicitly on providing contrastive explanations for choosing a class A over another class B -- experiments on datasets which do not have real-images seem insufficient. Additional experiments on at least ImageNet would have made the paper stronger.\\nRegarding contrastive explanations, experiments on datasets where distractor classes (y_probe) are present in addition to the class interest (y_true) seem important -- PASCAL VOC, COCO, etc. Specifically, since the explanations provided are visual saliency maps the paper would have been stronger if there were experiments suggesting -- what needs to change in a region of an image classified as a \\u2018cat\\u2019 to be classified as a \\u2018dog\\u2019 while there is an instance of the class - \\u2018dog\\u2019 present in the image itself. Also, section 7 in Gradcam (https://arxiv.org/pdf/1610.02391.pdf) provides a procedure to generate counter-factual explanations using Gradcam. Is there a particular reason the authors did not choose to adopt the above technique as a baseline?\\n- Experimental results provided in the paper are only qualitative -- as such, I do not find the comparisons (and improvements) over the existing approaches convincing enough. Since, there is no clear metric to evaluate contrastive explanations -- human studies to judge the class-discriminativeness (or trust) of the proposed approach would have made the paper stronger.\\n\\nThe authors adressed the issues raised/comments made in the review. In light of my comments below to the author responses -- I am not inclined towards increasing my rating and will stick to my original rating for the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting idea with potential\", \"review\": \"The paper addresses the problem of providing saliency-based visual explanations of deep models tasked at image classification. More specifically, instead of generating visualizations directly highlighting the image pixels that support the the decision of an image belonging to class A, it generates \\\"contrastive\\\" visualizations indicating the pixels that should be added or suppressed in order to support the decision of a image belonging to class A and not to class B.\\n\\nThe method formulates the generation of these contrastive explanations through a generative adversarial network (GAN), where the discriminator D is the image classification model to be explained and the generator G is a generative model trained to produce images from the dataset used to train D.\\n\\nExperiments on the MNIST and fashion-MNIST datasets compares the performance of the proposed method w.r.t. some methods from the literature.\\n\\n\\nOverall the manuscript is well written and its content is relatively easy to follow. The idea of generating contrastive explanations through a GAN-based formulation is well motivated and seems novel to me.\", \"my_main_concern_with_the_manuscript_are_the_following\": \"i) The proposed method seems to be specifically designed for the generation of contrastive explanations, i.e. why the model predicted class A and not class B. While the generation of this type of explanations is somewhat novel, from the text it seems that the proposed method may not be able to indicate what part of the image content drove the model to predict class A. Is this indeed the case?\\n\\nii) Although the idea of generating contrastive explanations is quite interesting, it is not that novel. See Kim et al., NIPS'16, Dhurandhar et al., arXiv:1802.07623. Moreover, regarding the presented results on the MNIST dataset (Sec 4.1) where some of the generated explanations highlight gaps to point differences between digit classes. The work from Samek et al., TNNLS'17 and Oramas et al., arXiv:1712.06302 seem to display similar properties in their explanations without the need of explicit constractive pair-wise training/testing. The manuscript would benefit from positioning the proposed method w.r.t. these works.\\n\\niii) Very related to the first point, in the evaluation section (Sec.4.1) the proposed method is compared against other methods in the literature. Three of these methods, i.e. Lime, GradCam, PDA, are not designed for producing contrastive explanations, so I am not sure to what extend this comparison is appropriate.\\n\\niv) Finally, the reported results are mostly qualitative. I find the set of provided qualitative examples quite reduced. In this regard, I encourage the authors to update the supplementary material in order to show extended qualitative results of the explanations produced by their method.\\nIn addition, I recommend complementing the presented qualitative comparisons with quantitative evaluations following protocols proposed in existing work, e.g. a) occlusion analysis (Zeiler et al., ECCV 2014, Samek et al.,2017), a pointing experiment (Zhang et al., ECCV 2016), or c) a measurement of explanation accuracy by feature coverage (Oramas et al. arXiv:1712.06302).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"The idea proposed in this paper is to aid in understanding networks by showing why a network chose class A over class B. To do so, the goal is to find an example that is close to the original sample, but belongs to the other class. As is mentioned in the paper, it is crucial to stay on the data manifold for this to be meaningful. In the paper, an approach using a GAN to traverse the manifold is proposed and the experimental evaluation is done on MNIST.\", \"if_my_understanding_is_correct_the_proposed_approach_requires\": \"Finding a noise code z_0 such that the GAN generates an image G(z_0) close to the original input x. As a metric L2 distance is proposed.\\nFind a point close to z_b that is close z_0 s.t. Class B is the most likely class and class A is the second most likely prediction. Specifically it is required that\\nThe log likelihood of but classified as class B with the same log likelihood of class B for G(z_b) is the same as the log likelihood of class A for the input x.\\nSuch that all other classes have a log likelihood that is at least epsilon lower than both the one of class A and class B.\\n\\nThe proposed approach is compared to a set of other interpretability methods, which were \\nGrad-Cam, lime, PDA, xGEM on MNIST AND Fashion MNIST data. The proposed evaluation is all qualitative, i.e. subjective. It must also be noted that in the methods used for comparison are not used as originally intended.\\n\\n\\nCurrently, I do not recommend this paper to be accepted for the following reasons.\\nThe idea of using a GAN is to generate images in input space is not novel by itself. Although the application for interpretability by counterfactuals is. It is unclear to me how much of the appealing results come from the GAN model and how much come from truly interpreting the network. I have detailed this below by proposing a very simplistic baseline which could get similar results.\\nThe experimental approach is subjective and I am not convinced by the experimental setup.\\nOn the other hand, I do really appreciate the ideas of traversing the manifold. \\n\\nRemarks \\nRelated work and limitations of existing interpretability methods are discussed properly. Of course, the list of discussed methods is not exhaustive. The work on the PPGAN and the \\u201cSynthesizing the preferred inputs for neurons in neural networks via deep generator networks\\u201d is not mentioned although it seems very related to the proposed approach to traverse the manifold. What that work sets apart from the proposed approach is that is could be applied to imaganet and not just MNIST. \\n\\nTraversing the manifold to generate explanations is certainly a good idea and one that I completely support. One limitation of the proposed approach is that it is unclear to me whether a point on the decision boundary is desirable or that a point that is equally likely is desirable. My reasoning is that the point on the decision boundary is the minimal change and therefore the best explanation. In such a setup, the GAN is still crucial to make sure the sample remains on the data manifold and is not caused by adverarial effects.\\n\\nThe exact GAN structure and training approach should be detailed in this paper. Now only a reference is provided. \\n\\nCan you clarify how the constraints are encoded in the optimization problem?\\n\\nThe grad cam reference has the wrong citation\\n\\nI do not understand the second paragraph of section 4.1. 
As mentioned in the paper, these other methods were not designed to generate this type of application. Therefore the comparison could be considered unfair. \\n\\nI would propose the following baseline. For image x from class A, find image y from class B such that x-y has minimal L2 norm and is correctly classified. Use y instead of the GAN generated image. Is the result much less compelling? Is it actually less efficient that the entire GAN optimization procedure on these relatively small datasets? \\n\\n\\nI do have to say that I like the experiment with the square in the upper corner. It does show that the procedure does not necesarrily exploits adversarial effects. However, the baseline proposed above would also highlight that specific square?\\n\\n\\nFigure 5 shows that multiple descision boundaries are crossed. Is this behaviour desired? It seems very likely to me that it should be possible to move from 9 to 8 while staying on the manifold without passing through 5? Since the method takes a detour through 5\\u2019s is this common behaviour?\\n\\n\\nFINAL UPDATE\\n--------------------\\nUnfortunately, I am not entirely convinced by the additional experiments that we are truly looking into the classifier instead of analyzing the generative model. \\nI believe this to be currently the key issue that, even after the revision, needs to be addressed more thoroughly before it can be accepted for publication.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
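As a reading aid, the two-stage latent optimization summarized in the review above could be sketched in PyTorch roughly as below. All names here (G, f, latent_dim, the soft penalty weight lam) are hypothetical, and the soft L2 penalty merely stands in for the paper's explicit log-likelihood constraints:

```python
import torch

def contrastive_explanation(G, f, x, class_b, steps=500, lr=1e-2, lam=1.0):
    # Stage 1: invert the generator -- find z0 so that G(z0) is close to x in L2.
    z = torch.zeros(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((G(z) - x) ** 2).mean().backward()
        opt.step()
    z0 = z.detach()

    # Stage 2: move toward class B while staying near z0 (a soft version of
    # the constraints; the paper enforces them explicitly).
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logp = f(G(z))                     # f returns per-class log-probabilities
        loss = -logp[0, class_b] + lam * ((z - z0) ** 2).mean()
        loss.backward()
        opt.step()
    return G(z).detach()                   # the "what would make it class B" image
```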
]
} |
|
HyM7AiA5YX | Complement Objective Training | [
"Hao-Yun Chen",
"Pei-Hsin Wang",
"Chun-Hao Liu",
"Shih-Chieh Chang",
"Jia-Yu Pan",
"Yu-Ting Chen",
"Wei Wei",
"Da-Cheng Juan"
] | Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although being a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
| [
"optimization",
"entropy",
"image recognition",
"natural language understanding",
"adversarial attacks",
"deep learning"
] | https://openreview.net/pdf?id=HyM7AiA5YX | https://openreview.net/forum?id=HyM7AiA5YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1x-RbBJgE",
"rJe3xlA2yN",
"r1eubJAnJN",
"BkxPea7hyV",
"HygCehX3yN",
"r1lArwq9Am",
"Syx-XwqcCQ",
"H1lJy06u07",
"rkg-IxqSAQ",
"BkxI3CtBCQ",
"HJlWte4WR7",
"SyeTbV6eA7",
"BkeB3kugCQ",
"S1ebZ7sRam",
"rJeeh8J5pm",
"S1eR7XqFaQ",
"S1eWWmcKaQ",
"rJg8OW9FT7",
"r1ejg15YaQ",
"HJlOtAKta7",
"B1lv2Bdph7",
"Syx8_6U63X",
"r1gc8uIw27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544667593446,
1544507379606,
1544507135909,
1544465646571,
1544465397995,
1543313221533,
1543313177177,
1543196119040,
1542983752544,
1542983341920,
1542697081198,
1542669317211,
1542647724558,
1542529784813,
1542219432211,
1542198053799,
1542198009104,
1542197613921,
1542196978963,
1542196863534,
1541404079218,
1541397870497,
1541003345865
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper880/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper880/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper880/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/Authors"
],
[
"ICLR.cc/2019/Conference/Paper880/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper880/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper880/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes adding a second objective to the training of neural network classifiers that aims to make the distribution over incorrect labels as flat as possible for each training sample. The authors describe this as \\\"maximizing the complement entropy.\\\" Rather than adding the cross-entropy objective and the (negative) complement entropy term (since the complement entropy should be maximized while the cross-entropy is minimized), this paper proposes an alternating optimization framework in which first a step is taken to reduce the cross-entropy, then a step is taken to maximize the complement entropy. Extensive experiments on image classification (CIFAR-10, CIFAR-100, SVHN, Tiny Imagenet, and Imagenet), neural machine translation (IWSLT 2015 English-Vietnamese task), and small-vocabulary isolated-word recognition (Google Commands), show that the proposed two-objective approach outperforms training only to minimize cross-entropy. Experiments on CIFAR-10 also show that models trained in this framework have somewhat better resistance to single-step adversarial attacks. Concerns about the presentation of the adversarial attack experiments were raised by anonymous commenters and one of the reviewers, but these concerns were addressed in the revision and discussion. The primary remaining concern is a lack of any theoretical guarantees that the alternating optimization converges, but the strong empirical results compensate for this problem.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel training objective for deep learning with strong empirical results\"}",
"{\"title\": \"Response to Area Chair1\", \"comment\": \"You are totally right. We did negate the complement entropy term (and added it to the primary objective) for maximizing complement entropy. We are sorry about the confusion and we will update the final manuscript to make this more clear: minimizing cross-entropy and maximizing complement entropy (e.g., in Algorithm 1).\"}",
"{\"title\": \"Response to Area Chair1\", \"comment\": \"Thanks for the comment. We summed the cross-entropy with the normalized complement entropy (Eq.3), and the corresponding advantages were discussed in Section 3.1.\"}",
"{\"title\": \"And the complement entropy was negated, right?\", \"comment\": \"Also, since the objective is to minimize cross-entropy and maximize complement entropy, I assume that when you tested the unified objective you actually negated the complement entropy term.\"}",
"{\"title\": \"Which objectives were added together?\", \"comment\": \"Did you sum the cross-entropy with the complement entropy (Eq. 2) or the normalized complement entropy (Eq. 3)?\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Thank you for the ideas. Yes, we indeed directly added the two objectives together in our experiments. We agree that introducing two additional weights to merge the primary and complement objectives is a good idea, and with proper tuning, this approach may further improve the model's performance and reduce the training time. We aimed to design a methodology with fewer hyper-parameters, so we didn't explore this direction, and our current proposed method works in many scenarios, as shown in our experiments. With these promising results, we will continue to explore the approach of merging the two objectives, and build connections between these two approaches, in our immediate future work.\\n\\nRegarding reporting the increase in training time, we have added the information of training time in section 2.2 (on the top of page 4).\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Thanks for your clarification. Based on all of the experiment results we have so far, such as loss gap values, we are only able to claim that models trained by COT generalize better (i.e., better performance on separate test sets). While achieving better performance on separate test sets is a good indicator that COT does not produce models that overfit, further experiments and theoretical investigations on whether COT can be a rigorous option to guard against overfitting is left as a future work.\"}",
"{\"title\": \"Manuscript Updated\", \"comment\": \"We thank all reviewers and the anonymous for the constructive comments. We have updated the manuscript in Abstract, Section 2, Section 3.4, Conclusion and Appendix A to address your feedback and concerns. Here we provide a summary of these updates:\\n\\n(1) For AnonReviewer3\\u2019s main suggestion of forming adversarial attacks using \\u201cboth\\u201d gradients from both primary and complement objectives, we have designed and conducted the additional FGSM (single-step) white-box experiments. The experiments set adversarial perturbations to be generated based on the sum of the primary gradient and the complement gradient (i.e., the gradient calculated from complement objective), while the results indicate that COT is more robust to single-step adversarial attacks under standard settings [1].\\n\\n(2) To provide more precise claim, we update the original claim \\u201crobustness to adversarial attacks\\u201d into \\u201crobustness to single-step adversarial attacks\\u201d according to (1). Additionally, more details of the original transfer attack experiments are provided in the manuscript.\\n\\n(3) We have added a description about the increase of training time and corrected typos pointed out by the reviewers in the manuscript.\\n\\n[1] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. \\u201cExplaining and harnessing adversarial examples\\u201d. In ICLR\\u201915.\"}",
"{\"title\": \"Merging the objectives\", \"comment\": \"This is interesting. From the response of the authors, I presume that the authors have simply added the two objectives together. However, it is more common to merge multiple objectives by premultiplying them with some weights. Since there are only two objectives, these two weights could be set with some kind of grid search (maybe along with cross-validation). I believe the tables given in the response would then change and the training times would decrease.\\n\\nPlease report the increase in training time in the manuscript.\"}",
"{\"title\": \"Comparison against regularization\", \"comment\": \"I understand. Unfortunately, the loss gap values in the table do not say much. I apologize for my typo \\\"complement from overfitting.\\\" It should be \\\"complement overfitting.\\\" To clarify my question, I wonder whether COT can be considered as a complementary or an alternative option against overfitting?\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Thank you for the clarifications on the recent research trending in adversarial attacks as well as your great suggestions on making the claim precise. We will adopt your suggestion and make it clear in the paper that the proposed training objectives make the models more robust to single-step adversarial attacks instead of claiming general robustness. We will use this new statement consistently across our updated version of the paper.\"}",
"{\"comment\": \"I agree with the Nov/19 anonymous comment, and one thing that I'll add is that I think it's worth discussing robustness to the FGSM attack, because it means that the decision boundary is being moved away from the data points, in a certain subset of directions. I think this is different from adversarial robustness in general, which considers perturbations which give maximum error.\\n\\nIt would be interesting to think about something like \\\"the volume of the subset of the epsilon-ball around the data points which increases error by k%\\\" - and then we could claim that some methods reduce that volume without claiming that every single point in the epsilon-ball has low error.\", \"title\": \"How to talk about FGSM Results\"}",
"{\"comment\": \"I understand this is not the main purpose of your paper, but again, you claim \\\"we also show that models trained with both primary and complement objectives are more robust to adversarial attacks.\\\" At present, you simply have not shown that fact.\\n\\nThank you for running some white-box numbers, but FGSM is unfortunately not sufficient. I hate to appeal to authority, to argue this, but see ( https://openreview.net/forum?id=SkgVRiC9Km¬eId=rkxYnt8JpQ¬eId=rkxYnt8JpQ ).\\n\\nPrior work, and papers under submission this year, make very careful claims with respect to adversarial examples. See for example the Manifold Mixup paper under submission this year that instead writes the correct and honest statement \\\"Manifold Mixup achieves ... robustness to single-step adversarial attacks\\\". You should claim only what you can demonstrate.\\n\\nIt is perfectly fine that you want to only show adversarial robustness as a side-effect of your main work, but you should be accurate in how you phrase what you have shown. There is a big difference between being robust to single-step attacks and transfer attacks, and actually being robust. Hundreds of papers claim the former, very few claim the latter.\", \"title\": \"FGSM results are not strong attacks\"}",
"{\"title\": \"Thank you for the comments\", \"comment\": \"Thank you for your comments. We understand that the adversarial attack techniques used here may not be state-of-the-art methods; however, we want to emphasize that the primary goal of this paper is to improve model's accuracy, although experimental results do show that robustness is also one of the benefits of the models trained by COT.\\n\\nWe agree with the reviewer that transfer adversarial attack is different from the classic settings of adversarial attacks. To verify our method under standard adversarial attacks, we have conducted additional experiments on white-box attack, and provided the results below; the experimental results confirmed that COT is indeed more robust to this type of attacks, and therefore we believe the main conclusion that COT is more robust (compared to baselines) to adversarial attack still holds. We will add these results of the white-box attack into the final version of the paper. Additionally, we will rename the current experiments to \\u201ctransfer attacks\\u201d to avoid confusions. The definition of the transfer attacks can be found in several recent publications [1, 2, 3].\\n\\nFor the white-box attacks, we conducted the experiments as also suggested by AnonReviewer3. The update is to set adversarial perturbations to be Epsilon * Sign (Primary gradient + Complement gradient). Results indicate that COT is more robust to this type of white-box attacks under standard settings.\\n\\nTest errors on Cifar10 under FGSM white-box adversarial attacks\\n===========================================================\\n\\t\\t\\t\\t Baseline\\t COT\\nResNet-110 \\t\\t\\t 62.23% \\t\\t52.72%\\nPreAct ResNet-18 65.60% \\t\\t56.17%\\nResNeXt-29 (2\\u00d764d) 70.24% \\t\\t61.55%\\nWideResNet-28-10\\t 59.39% \\t\\t55.53%\\nDenseNet-BC-121 65.97% \\t\\t55.99%\\n===========================================================\\n \\nThe reviewer also suggested to try out several recent methods on white-box and black-box attacks. We do agree with the reviewer that it's a great idea. However, since the main focus of the current paper is to improve accuracy, and the manuscript is already close to the page limit, we feel it's better to study this problem in a separate paper. As a matter of fact, we are planning on a follow-up work with the focus on the robustness of the models trained with COT. \\n \\n[1] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow. \\u201cTransferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.\\u201d Arxiv, 2016\\n\\n[2] Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song. \\u201cDelving into Transferable Adversarial Examples and Black-box Attacks.\\u201d In International Conference on Learning Representation, 2017.\\n\\n[3] Wieland Brendel, Jonas Rauber, Matthias Bethge. \\u201cDecision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models.\\u201d In International Conference on Learning Representation, 2018.\"}",
"{\"comment\": \"This paper argues in the abstract that \\\"we also show that models trained with both primary and complement objectives are more robust to adversarial attacks.\\\"\\n\\nHowever, in the evaluation section, the authors only attempt a very simple transferability attack: generate adversarial examples on one model, and transfer them to another. This does not imply adversarial robustness, neither in the white-box nor black-box setting.\\n\\nTo argue black-box robustness, the authors should evaluate against more recent black-box attacks such as the Boundary Attack (ICLR'18) or SPSA (ICML'18). Both of these attacks have effectively broken many black-box defenses in the past.\\n\\nIf the authors wish wish to argue full white-box adversarial robustness, they should further try optimization based attacks (Madry et al. 2018, Carlini & Wagner 2017).\\n\\nAs is, this paper should not claim robustness to adversarial examples: at best, it can claim a 10% improvement in accuracy to transfer attacks.\", \"title\": \"Adversarial robustness claim is highly misleading\"}",
"{\"title\": \"Response to AnonReviewer3 [1/2]\", \"comment\": \"We sincerely thank the reviewer for the useful and detailed comments. Below we provide explanations for each of your comments or questions. \\n\\n\\n(Q1) End of page 1: \\\"the model behavior for classes other than the ground truth stays unharnessed and not well-defined\\\". The probabilities should still sum up to 1, so if the ground truth one is maximized, the others are actually implicitly minimized. No?\\n\\n(A1) Your understanding is totally correct. We have changed the original text to a more clear statement:\\n\\n\\u201cTherefore, for classes other than the ground truth, the model behavior is not explicitly optimized --- their predicted probabilities are indirectly minimized when \\u0177_ig is maximized since the probabilities sum up to 1.\\u201d\\n\\nWe want to thank the reviewer again for crystalizing the manuscript.\\n\\n\\n(Q2) Page 3, sec 2.1: \\\"optimizing on the complement entropy drives \\u0177_ij to 1/(K \\u2212 1)\\\". I believe that it drives each term \\u0177_ij /(1 \\u2212 \\u0177_ig ) to be equal to 1/(K-1). Therefore, it drives \\u0177_ij to (1 \\u2212 \\u0177_ig)/(K-1) for j!=g.\\n\\nThis indeed flattens the \\u0177_ij for j!=g, but the effect on \\u0177_ig is not controlled. In particular this latter can decrease. Then in the next step of the algorithm, \\u0177_ig will be maximized, but with no explicit control over the complementary probabilities. There are two objectives that are optimized over the same variable theta. So the question is, are we sure that this procedure will converge? What prevents situations where the probabilities will alternate between two values? \\n\\nFor example, with 4 classes, we look at the predicted probabilities of a given sample of class 1:\\nSuppose after step 1 of Algo 1, the predicted probabilities are: 0.5 0.3 0.1 0.1\", \"after_step_2\": \"0.1 0.3 0.3 0.3\", \"then_step_1\": \"0.5 0.3 0.1 0.1\", \"then_step_2\": \"0.1 0.3 0.3 0.3\\nAnd so on... Can this happen? Or why not? Did the algorithm have trouble converging in any of the experiments?\\n\\n(A2) Thanks for the detailed comment. As the reviewer pointed out, \\u201cdrives \\u0177_ij to 1/(K \\u2212 1)\\u201d was indeed a typo and should be corrected to \\u201cdrive \\u0177_ij /(1 \\u2212 \\u0177_ig) to 1/(K-1)\\u201d. We have modified the manuscript correspondingly. Indeed, maximizing complement entropy in Eq(2) only drives \\u201c\\u0177_ij /(1 \\u2212 \\u0177_ig) to 1/(K-1)\\u201d, and therefore in the example provided above, the predicted probabilities after step 2 can be \\u201c0.1 0.3 0.3 0.3\\u201d or \\u201c0.5, (1 - 0.5)/3, (1 - 0.5)/3, (1 - 0.5)/3\\u201d, or other values so long as the incorrect classes (\\u0177_ij's) receive similar predicted probabilities. According to our observations from the experiments, the probabilities tend to converge to \\u201c0.5, (1 - 0.5)/3, (1 - 0.5)/3, (1 - 0.5)/3\\u201d. Experiments show that the algorithm does not have trouble converging; the algorithm converges smoothly in all the experiments we have conducted. Again, we thank the reviewer for the insightful comment; studying the theory of COT convergence is an intriguing topic and we leave it as a future work.\\n\\n\\n(Q3) Sec 3.1: \\\"additional efforts for tuning hyper-parameters might be required for optimizers to achieve the best performance\\\": Which hyper-parameters are considered here? 
If it is the learning rate, why not use a different one, tuned for each objective?\\n\\n(A3) Hyper-parameters in this statement indeed refer to the learning rate, and we have modified the statement in the manuscript to avoid confusion; the modified statement is provided below:\\n\\n\\u201ctherefore, additional efforts for tuning learning rates might be required for optimizers to achieve the best performance.\\u201d\\n\\nRegarding the second question about tuning learning rates, we have conducted several experiments with different learning rates specifically tuned for each objective. The experimental results show that using the same learning rate for both primary and complement objectives leads to the best performance when Eq(3) is used as the complement objective.\\n\\n\\n(Q4) Sec 3.2: The additional optimization makes each training iteration more costly. How much more? How do the total running times of COT compare to the ones of the baselines? I think this should be mentioned in the paper.\\n\\n(A4) Yes, one additional backpropagation is required in each iteration when applying COT. On average, the total training time is about 1.6 times longer compared to the baselines. Thanks for the suggestion, and we have included this in the latest manuscript (section 2.2).\"}",
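To make (A2) and (A4) concrete, here is a minimal PyTorch sketch of Eq. (2) and the alternating update of Algorithm 1 (the normalization of Eq. (3) is omitted, and model, optimizer, and shapes are placeholders — an illustrative sketch, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def complement_entropy(logits, y, eps=1e-7):
    # Eq. (2): entropy of y_hat_ij / (1 - y_hat_ig) over the complement
    # classes (same helper as in the attack sketch above, repeated so this
    # block is self-contained).
    p = F.softmax(logits, dim=1)
    q = p / (1.0 - p.gather(1, y.unsqueeze(1)) + eps)
    h = -(q * torch.log(q + eps))
    mask = torch.ones_like(h).scatter_(1, y.unsqueeze(1), 0.0)
    return (h * mask).sum(dim=1).mean()

def cot_step(model, optimizer, x, y):
    # Step 1 of Algorithm 1: minimize the primary objective (cross entropy).
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    optimizer.step()
    # Step 2: maximize the complement entropy (minimize its negative) via a
    # second forward/backward pass -- the extra backprop behind the roughly
    # 1.6x training-time overhead mentioned in (A4).
    optimizer.zero_grad()
    (-complement_entropy(model(x), y)).backward()
    optimizer.step()
```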
"{\"title\": \"Response to AnonReviewer3 [2/2]\", \"comment\": \"(Q5) Sec 3.4: As the authors mention, the results are biased and so the comparison is not fair here. Therefore I wonder about the relevance of this section. Isn't there an easy way to adapt the attacks to the two objectives to be able to illustrate the conjectured robustness of COT? For example, naively having a two steps perturbation of the input: one based on the gradient of the primary objective and then perturb the result using the gradient of the complementary objective?\\n\\n(A5) Thanks for the comment. We should have made clear that \\u201cblack box\\u201d [1] (rather than \\u201cwhite box\\u201d) adversarial attacks are considered in the manuscript. Specifically, we follow the common practice of generating adversarial examples using both FGSM and I-FGSM methods with the gradients from a baseline model; this way, the model trained by COT is actually a \\u201cblack box\\u201d to these attacks. We have modified the manuscript to clarify this part. Also, thanks for the great suggestion of forming adversarial attacks using \\u201cboth\\u201d gradients (from both primary & complement objectives). We are designing and conducting experiments at the moment and will share results when ready.\\n\\n\\nFor the part of secondary comments and typos, we appreciate your thorough reading again and have corrected all these typos according to your suggestions. Meanwhile, in the following, we also provided explanations to your secondary comments.\\n\\n\\n(Q1) Page 3, sec 2.1: \\\"...the proposed COT also optimizes the complement objective for neutralizing the predicted probabilities...\\\", using maximizes instead of optimizes would be clearer.\\n\\n(A1) Thanks for the suggestion. We have reworded the manuscript to \\u201cmaximizes.\\u201d\\n\\n\\n(Q2) In the definition of the complement entropy, equation (2), C takes as parameter only y^hat_Cbar but then in the formula, \\u0177_ig appears. Shouldn't C take all \\\\hat_y as an argument in this case?\\n\\n(A2) Since the probabilities sum up to one, \\u0177_ig can be inferred from y^hat_Cbar. Also, for us, it seems more direct and clear to show that complement entropy is calculated from y^hat_Cbar when C takes y^hat_Cbar as the only argument. Therefore, we incline to keep the orignal formulation. If the reviewer has strong preference, please kindly let us know and we are happy to make changes accordingly.\\n\\n\\n(Q3) Algorithm 1 page 4: I find it confusing that the (artificial) variable that appears in the argmin (resp. argmax) is theta_{t-1}\\n(resp. theta'_t) which is the previous parameter. Is there a reason for this choice?\\n\\n(A3) Thanks for the comment. Originally, we want to notify readers that there are two backprops within one iteration. We agree that those symbols are confusing and therefore we have modified the manuscript with those symbols removed.\\n\\n\\n(Q4) Sec 3.2 Figure 4: why is the median reported and not the mean (as in Figure 3, Tables 2 and 3)?\\n\\n(A4) Thanks for pointing this out. This is a typo and we have already corrected it in the manuscript: median -> mean.\\n\\n\\n(Q5) Sec 3.2, Table 3 and 4: why is it the validation error that is reported and not the test error?\\n\\n(A5) Thanks for the detailed comment. 
For a fair comparison, we report the error in the exact same way as the open-sourced repo from the ResNet authors:\", \"https\": \"//github.com/KaimingHe/deep-residual-networks.\\n\\n\\n(Q6) Sec 3.3: \\\"Neural machine translation (NMT) has populated the use of neural sequence models\\\": populated has not the intended meaning.\\n\\n(A6) We thank the reviewer for pointing out this typo. We have already corrected it in our manuscript: populated -> popularized\\n\\n\\n(Q7) \\\"Studying on COT and adversarial attacks..\\\" --> could be better formulated\\n\\n(A7) Thanks for the comment again. We have modified the manuscript as follows: \\\"Studying on the relationship between COT and adversarial attacks\\u2026\\u201d\\n\\n\\n[1] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. \\u201cMixup: Beyond Empirical Risk Minimization.\\u201d In International Conference on Learning Representation, 2018.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"(Q1) One small suggestion is that the authors can also make some comments on the connection between the two-step update algorithm (Algorithm 1) with multi-objective optimization. In particular, I would suggest the authors also try some multi-objective optimization techniques apart from the simple but effective heuristics, and see if some Pareto-optimality can be guaranteed and better practical improvement can be achieved.\\n\\n(A1) We sincerely thank the reviewer for the helpful and constructive suggestion about associating COT with multi-objective optimization. This is really a brilliant idea. As a straight-line future work, we will survey multi-objective optimization techniques, and explore the direction of formulating COT into a multi-objective optimization problem.\"}",
"{\"title\": \"Response to AnonReviewer1 [1/2]\", \"comment\": \"We would like to thank the reviewer for all the insightful feedbacks. Below we provide the explanations for each question or comment raised by the reviewer:\\n\\n\\n(Q1) How is this idea related to regularization? If we increase the regularization parameter, we can attain sparse parameter vectors. \\n\\n(A1) Conventionally, regularization techniques (e.g., Ridge or Lasso) are applied on the parameter space. We want to point out that all the results reported in the manuscript, for both baselines and models trained by COT, have already used L2-norm regularization on the parameter space, exactly as specified in the original papers (e.g., ResNet [1], WideResNet [2], and DenseNet [3]). In other words, COT is applied on top of the existent of those regularization techniques.\\n\\nIf your questions haven\\u2019t been addressed satisfactorily, please kindly let us know and we will be happy to discuss further.\\n\\n\\n(Q2) Would this method also complement from overfitting?\\n\\n(A2) Thank you for the comment. We would like to further clarify what you meant by saying \\u201ccomplement from overfitting.\\u201d Our interpretation of the question is: whether COT could be used to fight against overfitting. Overfitting means a model fails to generalize, and in our paper we have reported the generalized performance of models trained by COT on the test data, which confirms models trained by COT generalize better. In addition, we also calculate the loss gap \\\"(testing loss - training loss)\\\" and report the results in the following table, where a smaller gap indicates that a model generalizes better. Experimental results confirm that models trained by COT seem to generalize better due to the smaller gap between training and testing loss.\\n\\n \\\"(Testing loss - training loss)\\u201d from the state-of-the-art architectures on Cifar10 \\n==================================================\\n\\t\\t\\t\\t Baseline\\t COT\\nResNet-110 \\t\\t\\t 0.36 0.33\\nPreAct ResNet-18 0.28 0.26\\nResNeXt-29 (2\\u00d764d) 0.20 0.19\\nWideResNet-28-10\\t\\t0.23 0.21\\nDenseNet-BC-121 \\t0.22 0.22\\n=================================================\\n\\n\\n(Q3) In the numerical experiments, the comparison is carried out against a \\\"baseline\\\" method. Do the authors use regularization with these baseline methods? I believe the comparison will be fair if the regularization option is turned on for the baseline methods.\\n\\n(A3) Yes, the regularization (e.g., L2 Norm) techniques are used in all of the baseline methods, as specified in their original papers (e.g., ResNet [1], WideResNet [2], and DenseNet [3]). We agree with the reviewer that \\u201cthe comparison will be fair if the regularization option is turned on for the baseline methods,\\u201d and that is exactly we did in our paper: all the hyper-parameters, regularization and other training techniques are configured in the same way as in the original papers. For the details of experimental setup, please refer to the Section 3.2 in our manuscript.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/2]\", \"comment\": \"(Q4) Why combining the two objectives in a single optimization problem and then solving the resulting problem is not an option instead of the alternating method given in Algorithm 1?\\n\\n(A4) We are very grateful for this novel idea, and we have conducted several preliminary experiments to explore this idea. Below are the comparisons between (a) the original COT method, and (b) the approach of combining the two objectives into one single objective. The experimental results show that the original COT method works better in almost all cases, and we conjecture that these two methods converge to different local minima. This idea is worth exploring, and we leave it as a straight-line future work. \\n\\nTest error of the state-of-the-art architectures on Cifar10 \\n===========================================================\\n\\t\\t\\t\\t Combining into one objective\\t COT\\nResNet-110 \\t\\t\\t 7.42% \\t\\t 6.84%\\nPreAct ResNet-18 4.92% \\t\\t 4.86%\\nResNeXt-29 (2\\u00d764d) 4.79% \\t\\t 4.55%\\nWideResNet-28-10\\t\\t4.00% \\t\\t 4.30%\\nDenseNet-BC-121 \\t4.64% \\t\\t 4.62%\\n===========================================================\\n\\nTest error of the state-of-the-art architectures on Cifar100\\n===========================================================\\n\\t\\t\\t\\t Combining into one objective\\t COT\\nResNet-110 \\t\\t\\t 28.80% \\t\\t 27.90%\\nPreAct ResNet-18 25.30% \\t\\t 24.73%\\nResNeXt-29 (2\\u00d764d) 23.20% \\t\\t 21.90%\\nWideResNet-28-10\\t\\t 21.96% \\t\\t 20.99%\\nDenseNet-BC-121 \\t 22.17% \\t\\t 20.54%\\n===========================================================\\n\\n\\n(Q5) How does alternating between two objectives change the training time? Do the authors use backpropagation?\\n\\n(A5) Yes, we do use backpropagation. One additional backpropagation is required in each iteration when applying COT, and therefore the overall training time is about 1.6 times longer according to our experiments.\\n\\n\\n[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. \\u201cDeep Residual Learning for Image Recognition.\\u201d In IEEE Conference on Computer Vision and Pattern Recognition, 2016.\\n[2] Sergey Zagoruyko, Nikos Komodakis. \\u201cWide Residual Networks\\n.\\u201d In British Machine Vision Conference, 2016.\\n[3] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger, David Lopez-Paz. \\u201cDensely Connected Convolutional Networks\\n.\\u201d In IEEE Conference on Computer Vision and Pattern Recognition, 2017.\"}",
"{\"title\": \"Nice idea but leaves several questions not answered\", \"review\": \"In this manuscript, the authors propose a secondary objective for softmax minimization. This complementary objective is based on evaluating the information gathered from the incorrect classes. Considering these two objectives leads to a new training approach. The manuscript ends with a collection of tests on a variety of problems.\", \"this_is_an_interesting_point_of_view_but_the_manuscript_lacks_discussion_on_several_important_questions\": \"1) How is this idea related to regularization? If we increase the regularization parameter, we can attain sparse parameter vectors. \\n2) Would this method also complement from overfitting?\\n3) In the numerical experiments, the comparison is carried out against a \\\"baseline\\\" method. Do the authors use regularization with these baseline methods? I believe the comparison will be fair if the regularization option is turned on for the baseline methods.\\n4) Why combining the two objectives in a single optimization problem and then solving the resulting problem is not an option instead of the alternating method given in Algorithm 1?\\n5) How does alternating between two objectives change the training time? Do the authors use backpropagation?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple and sensible heuristic with impressive improvement\", \"review\": \"This paper considers augmenting the cross-entropy objective with \\\"complement\\\" objective maximization, which aims at neutralizing the predicted probabilities of classes other than the ground truth one. The main idea is to help the ground truth label stands out more easily by smoothing out potential peaks in non-ground-truth labels. The wide application of the cross-entropy objective makes this approach applicable to many different machine/deep learning applications varying from computer vision to NLP.\\n\\nThe paper is well-written, with a clear explanation for the motivation of introducing the complement entropy objective and several good visualization of its empirical effects (e.g., Figures 1 and 2). The numerical experiments also incorporate a wide spectrum of applications and network structures as well as dataset sizes, and the performance improvement is quite impressive and consistent. In particular, the adversarial attacks example looks very interesting.\\n\\nOne small suggestion is that the authors can also make some comments on the connection between the two-step update algorithm (Algorithm 1) with multi-objective optimization. In particular, I would suggest the authors also try some multi-objective optimization techniques apart from the simple but effective heuristics, and see if some Pareto-optimality can be guaranteed and better practical improvement can be achieved.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting new idea, good experimental results, some points to clarify.\", \"review\": \"========\\nSummary\\n========\\n\\nThe paper deals with the training of neural networks for classification or sequence generation tasks, using a cross-entropy loss. Minimizing the cross-entropy means maximizing the predicted probabilities of the ground-truth classes (averaged over the samples). The authors introduce a \\\"complementary entropy\\\" loss with the goal of minimizing the predicted probabilities of the complementary (incorrect) classes. To do that, they use the average of sample-wise entropy over the complement classes. By maximizing this entropy, the predicted complementary probabilities are encouraged to be equal and therefore, the authors claim that it neutralizes them as the number of classes grows large. The proposed training procedure, named COT, consists of alternating between the optimization of the two losses.\\n\\nThe procedure is tested on image classification tasks with different datasets (CIFAR-10, CIFAR-100, Street View House Numbers, Tiny ImageNet and ImageNet), machine translation (training using IWSLT dataset, validation and test using TED tst2012/2013 datasets), and speech recognition (Gooogle Commands dataset). In the experiments, COT outperforms state-of-the-art models for each task/dataset.\", \"adversarial_attacks_are_also_considered_for_the_classification_of_images_of_cifar_10\": \"using the Fast Gradient Sign and Basic Iterative Fast Gradient Sign methods on different models, adversarial examples specifically designed for each model, are generated. Then results of these models are compared to COT on these examples. The authors admit\\nthat the results are biased since the adversarial attacks only target part of the COT objective, hence more accurate comparisons should be done in future work.\\n\\n===========================\\n Main comments and questions\\n===========================\", \"end_of_page_1\": \"\\\"the model behavior for classes other than the ground truth stays unharnessed and not well-defined\\\". The probabilities should still sum up to 1, so if the ground truth one is maximized, the others are actually implicitly minimized. No?\\n\\nPage 3, sec 2.1: \\\"optimizing on the complement entropy drives \\u0177_ij to 1/(K \\u2212 1)\\\". I believe that it drives each term \\u0177_ij /(1 \\u2212 \\u0177_ig ) to be equal to 1/(K-1). Therefore, it drives \\u0177_ij to (1 \\u2212 \\u0177_ig)/(K-1) for j!=g.\\n\\nThis indeed flattens the \\u0177_ij for j!=g, but the effect on \\u0177_ig is not controlled. In particular this latter can decrease. Then in the next step of the algorithm, \\u0177_ig will be maximized, but with no explicit control over the complementary probabilities. There are two objectives that are optimized over the same variable theta. So the question is, are we sure that this procedure will converge? What prevents situations where the probabilities will alternate between two values? \\n\\nFor example, with 4 classes, we look at the predicted probabilities of a given sample of class 1:\\nSuppose after step 1 of Algo 1, the predicted probabilities are: 0.5 0.3 0.1 0.1\", \"after_step_2\": \"0.1 0.3 0.3 0.3\", \"then_step_1\": \"0.5 0.3 0.1 0.1\", \"then_step_2\": \"0.1 0.3 0.3 0.3\\nAnd so on... Can this happen? Or why not? 
Did the algorithm have trouble converging in any of the experiments?\\n\\nSec 3.1:\\n\\\"additional efforts for tuning hyper-parameters might be required for optimizers to achieve the best performance\\\": Which hyper-parameters are considered here? If it is the learning rate, why not use a different one, tuned for each objective?\\n\\nSec 3.2:\\nThe additional optimization makes each training iteration more costly. How much more? How do the total running times of COT compare to the ones of the baselines? I think this should be mentioned in the paper.\\n\\nSec 3.4:\\nAs the authors mention, the results are biased and so the comparison is not fair here. Therefore I wonder about the relevance of this section. Isn't there an easy way to adapt the attacks to the two objectives to be able to illustrate the conjectured robustness of COT? For example, naively having a two steps perturbation of the input: one based on the gradient of the primary objective and then perturb the result using the gradient of the complementary objective?\\n\\n===========================\\nSecondary comments and typos\\n===========================\\n\\nPage 3, sec 2.1: \\\"...the proposed COT also optimizes the complement objective for neutralizing the predicted probabilities...\\\", using maximizes instead of optimizes would be clearer.\\n\\nIn the definition of the complement entropy, equation (2), C takes as parameter only y^hat_Cbar but then in the formula, \\u0177_ig appears. Shouldn't C take all \\\\hat_y as an argument in this case?\", \"algorithm_1_page_4\": \"I find it confusing that the (artificial) variable that appears in the argmin (resp. argmax) is theta_{t-1}\\n(resp. theta'_t) which is the previous parameter. Is there a reason for this choice?\", \"sec_3\": \"\\\"We perform extensive experiments to evaluate COT on the tasks\\\" --> COT on tasks\\n\\n\\\"compare it with the baseline algorithms that achieve state-of-the-art in the respective domain.\\\" --> domainS\\n\\n\\\"to evaluate the model\\u2019s robustness trained by COT when attacked\\\" needs reformulation.\\n\\n\\\"we select a state- of-the-art model that has the open-source implementation\\\" --> an open-source implementation\\n\\nSec 3.2:\", \"figure_4\": \"why is the median reported and not the mean (as in Figure 3, Tables 2 and 3)?\", \"table_3_and_4\": \"why is it the validation error that is reported and not the test error?\\n\\nSec 3.3:\\n\\\"Neural machine translation (NMT) has populated the use of neural sequence models\\\": populated has not the intended meaning.\\n\\n\\\"We apply the same pre-processing steps as shown in the model\\\" --> in the paper?\\n\\nSec 3.4:\\n\\\"We believe that the models trained using COT are generalized better\\\" --> \\\"..using COT generalize better\\\"\\n\\n\\\"using both FGSM and I-FGSM method\\\" --> methodS\\n\\n\\\"The baseline models are the same as Section 3.2.\\\" --> as in Section 3.2.\\n\\n\\\"the number of iteration is set at 10.\\\" --> to 10\\n\\n\\\"using complement objective may help defend adversarial attacks.\\\" --> defend against\\n\\n\\\"Studying on COT and adversarial attacks..\\\" --> could be better formulated\", \"references\": [\"there are some inconsistencies (e.g.: initials versus first name)\", \"Pros\", \"====\", \"Paper is clear and well-written\", \"It seems to me that it is a new original idea\", \"Wide applicability\", \"Extensive convincing experimental results\", \"Cons\", \"====\", \"No theoretical guarantee that the procedure should converge\", \"The training time may 
be twice longer (to clarify)\", \"The adversarial section, as it is, does not seem relevant for me\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BygmRoA9YQ | Mixture of Pre-processing Experts Model for Noise Robust Deep Learning on Resource Constrained Platforms | [
"Taesik Na",
"Minah Lee",
"Burhan A. Mudassar",
"Priyabrata Saha",
"Jong Hwan Ko",
"Saibal Mukhopadhyay"
] | Deep learning on an edge device requires energy efficient operation due to ever diminishing power budget. Intentional low quality data during the data acquisition for longer battery life, and natural noise from the low cost sensor degrade the quality of target output which hinders adoption of deep learning on an edge device. To overcome these problems, we propose simple yet efficient mixture of pre-processing experts (MoPE) model to handle various image distortions including low resolution and noisy images. We also propose to use adversarially trained auto encoder as a pre-processing expert for the noisy images. We evaluate our proposed method for various machine learning tasks including object detection on MS-COCO 2014 dataset, multiple object tracking problem on MOT-Challenge dataset, and human activity recognition on UCF 101 dataset. Experimental results show that the proposed method achieves better detection, tracking and activity recognition accuracies under noise without sacrificing accuracies for the clean images. The overheads of our proposed MoPE are 0.67% and 0.17% in terms of memory and computation compared to the baseline object detection network. | [
"noise robust",
"object detection"
] | https://openreview.net/pdf?id=BygmRoA9YQ | https://openreview.net/forum?id=BygmRoA9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgIZ6vVk4",
"BygKuqmYAm",
"Bye-NqXt0X",
"HyxYstmF0Q",
"SklUqujvp7",
"S1e467GThm",
"B1x6GVBxhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543957757555,
1543219825114,
1543219753030,
1543219617448,
1542072462232,
1541379003630,
1540539412741
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper878/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper878/Authors"
],
[
"ICLR.cc/2019/Conference/Paper878/Authors"
],
[
"ICLR.cc/2019/Conference/Paper878/Authors"
],
[
"ICLR.cc/2019/Conference/Paper878/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper878/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper878/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"As the reviewers point out, the paper seems to be below the ICLR publication bar due to low novelty and limited significance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"decision\"}",
"{\"title\": \"Thanks for the feedback\", \"comment\": \"Thank you for the valuable reviews.\\n\\nQ1 \\u2013 How good is the gate performance?\\n(Ans) The performance of the gating network was above 99% which haven\\u2019t included in the paper. Instead, we have shown sample images in figure 5 to show the effect of MoPE.\\n\\nQ2 - what happen if you use only one of the trained experts for all the clean/noisy test data?\\n(Ans) Please check the performance of the model 4 in table 2. Model 4 uses denoise network as a preprocessing for all the clean/noisy data without having gating network. The denoise net is trained on the noisy images with various noise levels including the clean images. When sigma is 0, the input image is actually the clean image (Please see the section 5 for implementation details). \\n\\nQ3 - It is not clear how you combined the results of the two experts.\\n(Ans) Please see the figure 1. The shape of the preprocessing output is the same with that of the input image. Once we obtain the pre-processed images, the coefficient of the gating network is multiplied per each pre-processing and summed. The sum of the coefficients of the gating network is 1 as the output of the gating network is softmax function (See the section 4.2 for details).\\n\\nQ4 - Did you try to use a hard decision gating at test time?\\n(Ans) No, we have used the same configuration as for the training time.\"}",
"{\"title\": \"Thanks for the feedback\", \"comment\": \"Thank you for the valuable reviews.\\n\\nQ1 \\u2013 Results should be compared to other image analysis methodologies\\n(Ans) The purpose of the paper is to enhance the performance of object detection and its related other tasks (multiple object tracking and activity recognition) under \\u201cboth\\u201d noisy/clean condition with \\u201climited overhead\\u201d in terms of memory/computation (table 5). Existing denoising techniques like BM3D [1] require significant computation per image, thus, are not practical to be used in real time embedded system. And use of gating network for avoiding smoothing when not required is not necessarily obvious. Gaussian noise is random, thus, one might think it is hard to learn pattern of gaussian noise using neural network which is considered to be good for the data having pattern. And we have showed that we can distinguish the clean and noisy images with gating network.\\nInstead, we have added comparison between data augmentation techniques, with/without gating network and with/without fine-tuning to show the effect of the proposed MoPE in table 2/3/4.\\n[1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080\\u20132095, 2007.\\n\\nQ2 - reason of including Table 1\\n(Ans) We wanted to address that adding additional loss without changing network architecture doesn\\u2019t work for object detection which is not true for the image recognition. And this was the motivation of this work. We eventually proposed MoPE for object detection under clean/noisy/resolution variant conditions with small overhead (table 5).\"}",
"{\"title\": \"Thanks for the feedback\", \"comment\": \"Thank you for the valuable reviews.\\n\\nQ1 \\u2013 Originality and significance:\\n\\n(Ans) In contrast to many other DL works focused on denoising or image classification on noisy images [1-2], the main contribution of this paper is to enhance the performance of object detection and its related other tasks (multiple object tracking and activity recognition) under \\u201cboth\\u201d noisy/clean condition with \\u201climited overhead\\u201d in terms of memory/computation (table 5). Also, we have discovered that adding average filter and U-net [3] like skip connection are beneficial for denoising.\\nFor the practical usefulness, it would be great if we can incorporate those actual noises as a future work. Thanks for your feedback.\\n\\n[1] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, \\\"Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion\\\", J. Mach. Learn. Res., vol. 11, no. 11, pp. 3371-3408, 2010.\\n[2] S. Diamond, V. Sitzmann, S. Boyd, G. Wetzstein, and F. Heide. Dirty pixels: Optimizing image classification architectures for raw sensor data. arXiv preprint arXiv:1701.06487, 2017.\\n[3] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI (3), volume 9351 of Lecture Notes in Computer Science, pp. 234\\u2013241. Springer, 2015.\\n\\nQ2 \\u2013 Clarity:\\n(Ans) We think that the title of the paper is valid. It is true, in this paper, we have used identity mapping for the clean and low-resolution images as a preprocessing. This was the result from our experiments, not necessarily obvious one. There could be better preprocessing for low-resolution images that we haven\\u2019t explored. \\n\\nQ3 - lightweight preprocessing\\n(Ans) Please see the table 5. Also, it is obvious that average filter requires less computation than the denoise net since denoise net includes average filter as a part (Please see the section 3.2 Pre-processing for the noisy images).\", \"q4___figure_5\": \"(Ans) We have changed the figure.\"}",
"{\"title\": \"Weak novelty and significance\", \"review\": \"Summary\\nThis paper introduced a parameterized image processing technique to improve a robustness of visual recognition systems against noisy input data. The proposed method is composed of two components; a denoising network that suppresses the noise signals in an image, and gating network that predicts whether to use the original input image or the one produced by the denoising network. The proposed idea is evaluated on three tasks of object detection, tracking and action recognition.\", \"originality_and_significance\": \"The originality of the paper is very limited since the paper simply combines the existing image denoising technique with the idea of gating. The practical significance of the work is also limited since the model is trained and evaluated with only synthetically generated noise patterns; it is not surprising that the proposed method (both denoising and gating networks) works under this setting, as the noise is created synthetically under the same setting in both training and testing. To demonstrate the practical usefulness, it would be great if the model is evaluated with the actual source of noises (e.g. noises from input sensors, distortion by image compression, etc).\", \"clarity\": \"I think the title of the paper is misleading; the proposed model is actually not a mixture of preprocessing units, as it combines *a* denoising unit together with identity mapping. The gating network is also not designed to incorporate a mixture of more than two preprocessing units, as it outputs only \\u201con/off switches\\u201d instead of weights for K mixture components (K>2).\", \"minor_comments\": \"1) the paper argued the importance of lightweight preprocessing but have not provided analysis on computation costs. From the current results, I don\\u2019t see the clear benefit of the proposed method (denoising network) over the average filtering considering the tradeoff between computation vs. performance. \\n2) In Figure 5, I suggest highlighting the differences among the examples for clarity.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Synthetic naive approach to handling distorted images by deep neural networks\", \"review\": \"The paper presents a synthetic naive approach to analyzing distorted, especially noisy, images through deep neural networks. It uses an existing gating network to discriminate between clean and noisy images, averaging and denoising the latter, so as to somewhat improve the results obtained if no such separation was used. It deals with a well known problem using the deep neural network formulation. Results should be compared to other image analysis methodologies, avoiding smoothing when not required, that can be used for the same purpose. This should also be reflected in related work in section 2; the reason of including Table 1 in it seems unclear.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A reasonable approach for noise robust training but there is a lack of novelty\", \"review\": \"The paper addresses the problem of training an object detection network that can achieve good performance on both clean and noisy images.\\nThe proposed approach is based on a gating network that decides whether\\nthe image is clean or noisy. in case of noisy image a denoising method is applied. The network components form a mixture of experts architecture and are jointly trained after a component-level pretraining.\\nHow good is the gate performance? what happen if you use only one of the trained experts for all the clean/noisy test data? It is not clear how you combined the results of the two experts. Are you computing a weighted average of the original and the enhanced images? Did you try to use a hard decision gating at test time?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJeXCo0cYX | BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning | [
"Maxime Chevalier-Boisvert",
"Dzmitry Bahdanau",
"Salem Lahlou",
"Lucas Willems",
"Chitwan Saharia",
"Thien Huu Nguyen",
"Yoshua Bengio"
] | Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons. Though, given the lack of sample efficiency in current learning methods, reaching this goal may require substantial research efforts. We introduce the BabyAI research platform, with the goal of supporting investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. Each level gradually leads the agent towards acquiring a combinatorially rich synthetic language, which is a proper subset of English. The platform also provides a hand-crafted bot agent, which simulates a human teacher. We report estimated amount of supervision required for training neural reinforcement and behavioral-cloning agents on some BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufficiently sample-efficient in the context of learning a language with compositional properties. | [
"language",
"learning",
"efficiency",
"imitation learning",
"reinforcement learning"
] | https://openreview.net/pdf?id=rJeXCo0cYX | https://openreview.net/forum?id=rJeXCo0cYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlVmS0llV",
"H1xW8wbllN",
"BkxNrTwayE",
"SJe8Euj9RX",
"rylOdOTB0Q",
"SJl9AvKsam",
"rJgn5DFo6X",
"rylPWDFjpQ",
"S1gl2Itop7",
"ByeCy_PDp7",
"BJlsYhEgp7",
"SkebaWdinm",
"HyliNl09h7",
"r1eeyarz3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770844164,
1544718153507,
1544547643899,
1543317550386,
1542998128138,
1542326226410,
1542326163935,
1542326014583,
1542325927961,
1542055909613,
1541586051137,
1541271992917,
1541230642979,
1540672727587
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper877/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper877/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"ICLR.cc/2019/Conference/Paper877/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper877/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper877/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper877/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents \\\"BabyAI\\\", a research platform to support grounded language learning. The platform supports a suite of 19 levels, based on *synthetic* natural language of increasing difficulties. The platform uniquely supports simulated \\\"human-in-the-loop\\\" learning, where a human teacher is simulated as a heuristic expert agent speaking in synthetic language.\", \"pros\": \"A new platform to support grounded natural language learning with 19 levels of increasing difficulties. The platform also supports a heuristic expert agent to simulate a human teacher, which aims to mimic \\\"human-in-the-loop\\\" learning. The platform seems to be the result of a substantial amount of engineering, thus nontrivial to develop. While not representing the real communication or true natural language, the platform is likely to be useful for DL/RL researchers to perform prototype research on interactive and grounded language learning.\", \"cons\": \"Everything in the presented platform is based on synthetic natural language. While the use of synthetic language is not entirely satisfactory, such limit is relatively common among the simulation environments available today, and lifting that limitation is not straightforward. The primary contribution of the paper is a new platform (resource). There are no insights or methods.\", \"verdict\": \"Potential weak accept. The potential impact of this work is that the platform will likely be useful for DL/RL research on interactive and grounded language learning.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a new platform that supports interactive and grounded language learning\"}",
"{\"title\": \"Paper Title Change\", \"comment\": \"Hello,\\n\\nFollowing internal discussions, we have decided, in agreement with two reviewers, to change the title of the paper. The new title will be: \\\"BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning\\\".\\n\\nKindest regards,\\n\\n- The BabyAI team\"}",
"{\"title\": \"Post rebuttal\", \"comment\": \"I would like to thank the authors for their rebuttal, particularly the discussion putting similar environments in perspective with the BabyAI environment, and positive modifications to the paper. I will leave my rating unchanged, which means I would continue to argue for acceptance of this paper. This is on the basis that the proposed environment platform is unique and useful given the synthetic language and verifier.\\n\\nHowever, as with R3, I would still encourage the authors to change the title. More experienced researchers than me have argued that the title \\\"should capture what is special about the paper\\\", and \\\"from the title you should have a guess of the content of the paper, and recalling the title should help recall the paper\\\". Since there are no human-in-the-loop experiments in the paper, including this phrase in the title creates a disconnect between the title and the content, which might lead to disappointment on the part of the reader.\"}",
"{\"title\": \"revised my score.\", \"comment\": \"Thanks for your responses.\\nYes, I was referring to use the gated attention approach to use as another baseline and use different metric (rather than data efficiency ) to evaluate this framework.\\n\\nMost of my concerns were addressed. As a result, I've updated my score. That said, I am hoping that the authors will add more tasks and metrics to evaluate this platform. Data efficiency metric is good but not enough.\"}",
"{\"title\": \"Paper updated based on reviewer feedback\", \"comment\": \"Dear reviewers, thank you for your useful feedback. Following your recommendations, we have made the following changes to improve our paper:\\n\\n1. We now use the term \\u201cbot\\u201d as often as possible, instead of heuristic expert.\\n\\n2. We have highlighted the connection between gated attention and FiLM when describing our model in Section 4.1.\\n\\n3. A paragraph that discusses other existing simulation package was added to the related work, with a citation of the AI2-Thor and Matterport paper. We have also added a better explanation of why we have built MiniGrid.\\n\\n4. In the experiments section, we highlight the counter-intuitive finding Table 4, where data requirements for GoToObjMaze \\\"With Pretraining\\\" are greater than \\\"Without Pretraining\\\".\\n\\n5. A detailed explanation of how instructions are translated into an internal system of subgoals by the bot was added to Appendix B.\\n\\nWe hope that these changes effectively address your concerns,\\n\\nKindest regards,\\n\\n- The BabyAI team\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank Reviewer 3 (R3) for their review that nicely summarizes our paper. In our response we discuss the two main questions that R3 asked, namely why we have not yet performed experiments with an actual human and how other learning approaches can be studied using the BabyAI platform.\\n\\nWe completely agree with R3 that \\u201cit is still unclear how to effectively learn with a human in the loop.\\u201d Our baselines studies on the BabyAI platform have indeed shown that standard approaches for grounded language understanding would need from thousands to hundreds of thousands of demonstrations by the human to learn even the simplest tasks. In order to proceed to studies with actual humans in the loop, the human should be able to see in real time how the agent is progressing, and with the approaches that we evaluated such progress would be unbearably slow. We have postponed studies with actual humans until a sufficient progress is made on BabyAI, that is until we have levels that (at least with pretraining) can be mastered with hundreds of demonstrations. For now, we believe the BabyAI platform is already useful as it supports rigorous studies on data efficiency of grounded language understanding, so that we can measure progress towards the goal of learning with a human in the loop . \\n\\nThe reviewer correctly pointed out that there are other ways in which a human could teach an agent for which we do not provide baseline results, in particular learning with preferences. To the best of our understanding studies of learning with preferences could be done using the BabyAI platform, since human preferences could be simulated by using the instruction verifier. We would however expect the data efficiency of learning with preferences to be closer to that of RL (i.e. millions of episodes) than that of imitation learning (hundreds of thousand demonstrations).\\n\\nWe hope that R3 finds the clarifications that we present in this response informative and useful.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are grateful for the detailed review provided by Reviewer 2 (R2), their careful reading of the paper, and their suggestions. We will do our best to incorporate these.\\n\\nR2 has asked when the platform will be available to the public, and we are happy to announce that that the platform is already available on Github. We will add a link to the repository when the review process is complete in order to preserve anonymity. We have also published a Docker image with the baseline models on Docker Hub so that our results can be easily replicated.\", \"we_respond_below_to_the_5_specific_comments_that_r2_made_in_their_review_in_same_order_as_they_were_originally_presented\": \"1) We will cite the two highly relevant papers that R2 suggested in our Related Work section.\\n2) We will improve the description of the stack-based bot in the text of Section 3.4, and we will furthermore add the complete pseudocode for the bot in an Appendix. \\n3) We will follow R2\\u2019s suggestion and be more consistent the choice of terms \\u201csimulated human\\u201d, \\u201cheuristic expert\\u201d and \\u201cbot\\u201d, in particular we will try to use \\u201cbot\\u201d as often as possible. We would still like to retain occasional references to human in the loop training, since this it is this kind of training that our research aspires to eventually enable. The bot\\u2019s role in this context is simulating a hypothetical human teacher, which is why occasionally the paper uses the words \\u201csimulated human\\u201d.\\n4) We fully share R2\\u2019s concerns with regard to a counterintuitive decrease of data efficiency on GoTo that is observed when the model is first pretrained on GoToObjMaze. We are however confident that this result is correct, and moreover we find it rather interesting. An agent that is pretrained on \\u201cGoToObjMaze\\u201d level knows how to efficiently navigate a 3x3 maze of 6x6 rooms (Table 1), but somehow this pretraining does not help at all when the same agent is then trained to perform a range of similar tasks (defined by instructions like \\u201cgo to red ball\\u201d) and in the presence of distractor objects. We treat this as an evidence that the current deep learning architectures for language understanding do not lend themselves to curriculum learning, and we hope that BabyAI will support studies on how to improve them in this respect. We believe that such much needed improvements in curriculum learning will play a key role in enabling actual human in the loop training, which is the main aspiration of the paper. \\n5) We thank R2 for the pointer to the \\u201cGated-Attention Architectures for Task-Oriented Language Grounding\\u201d paper. We have already cited this paper in Introduction and Related Work. We were not sure which of the two ways of understanding the author\\u2019s suggestion that \\u201cThis task will be a better measure than current baselines, especially for RL case\\u201d is right, and below we comment on both. \\n\\nIf R2\\u2019s suggestion was to consider using the VizDoom environment from the aforementioned paper, then we would like to note that we did consider the option of using this environment along with other ones mentioned in our Related Work section. 
We concluded that in order to have the specific combination of features that we wanted (high speed, interacting with objects in the environment, systematically designed language), we could not use existing environments, such as VizDoom, and had to build a new MiniGrid environment and implement the Baby language in it. \\n\\nIf instead R2 suggested to use the gated attention approach to combine representations of images and instructions, then we would like to note that the FiLM [1] layers that we use in our model perform a very similar computation (and in fact FiLM and the \\u201cTask-Oriented Language Grounding\\u201d were both presented at the same AAAI 2018 conference). We will make this connection more clear in the text of the paper where we describe the model that we use in our experiments. \\n\\nWe hope that R2 finds our clarifications and comments helpful.\\n\\n[1] FiLM: Visual Reasoning with a General Conditioning Layer (https://arxiv.org/abs/1709.07871)\"}",
"{\"title\": \"Response to Reviewer 1 (part 2 of 2)\", \"comment\": \"(see part 1 for the beginning)\\n\\nWe fully agree with Reviewer 1 that there are a number of existing options in the space of environments for instruction following. We have examined these options before embarking on this project, and determined that none of them provided the specific combination of features that we wanted, along with a systematically designed language. To the best of our knowledge, the environment we have created is unique in a number of important ways.\\n\\nWe chose a gridworld rather than a 3D environment such as in [1, 2] because we wanted an experimental setup that was fast, lightweight, and easy to modify. Using a gridworld means we can run simulations at several thousands of frames per second on a single computer, and train with larger batch sizes. Even so, on some of the more difficult BabyAI levels, training time can take up to a week on a modern GPU. Had we used a 3D environment which was more computationally expensive, the training time requirements would have put this line of research out of our reach.\\n\\nThere are already existing options in terms of gridworld packages, such as MazeBase [3] and PyCoLab [4]. However, we wanted a environments that are partially observable, and feature language. Had we used these packages, we would have had to extensively modify them, thereby making any results incomparable to the existing literature. The closest thing to our setup, that we are aware of, is the Crafting 2D env used in the Policy Sketches paper [5]. This environment is interesting, but a quick inspection of the repository will reveal that it is a bare source code dump, with no documentation whatsoever, no installation script, and no maintenance commits in the last two years. This environment is also not compatible with OpenAI Gym. \\n\\nIn designing our environment, we wanted a principled approach towards language design, rather than something ad-hoc based on patterns (hence the BNF grammar). We also attempted a principled segmentation of levels in terms of competencies required to solve them. We also believe that ability to scale up/down the difficulty of levels by adjusting various parameters in a fine-grained manner is important to enable curriculum learning, and for research in general, because it can help us establish precisely which aspects of the environment make learning more difficult. Our environment was designed with this in mind.\\n\\n[1] Grounded Language Learning in a Simulated 3D World (https://arxiv.org/abs/1706.06551)\\n[2] Project Malmo (https://github.com/Microsoft/malmo)\\n[3] MazeBase (https://github.com/facebook/MazeBase)\\n[4] PyCoLab (https://github.com/deepmind/pycolab)\\n[5] Policy Sketches implementation (https://github.com/jacobandreas/psketch)\"}",
"{\"title\": \"Response to Reviewer 1 (part 1 of 2)\", \"comment\": \"We thank Reviewer 1 (R1) for their careful and detailed review of the paper. In our response we will try to justify the choice of the title, explain why we believe that building a new environment was warranted, and also discuss the difference between imitation learning results obtained with the heuristic expert and an RL-trained agent.\\n\\nReviewer 1 has suggested that our aspiration to make progress towards human in the loop training may be insufficient to use the phrase \\u201chuman in the loop\\u201d in the title. With all due respect we would like to argue that the title of a research paper should inform the reader of the high-level goals that are being pursued in the presented research effort. Since the goal of BabyAI platform is to support tangible steps towards human in the loop training, we think that the title \\u201cFirst Steps Towards Grounded Language Learning With a Human In the Loop\\u201d is sufficiently accurate. We are open to a continuation of this discussion, but so far our understanding is having \\u201cFirst Steps Towards \\u2026\\u201d should make it sufficiently clear that training with a human in the loop is something that we aspire to, and not necessarily something that we can already demonstrate.\\n\\nWe thank Reviewer 1 for pointing out that a further discussion of why the synthetic teacher is useful even though an RL-trained agent is easier to imitate may be necessary. The main reason why we chose to build the heuristic expert is that, although RL training did well on some of the simpler levels, it struggled to reach a high success rate on the harder levels. A secondary reason is that, in order to allow further investigations to DAGGER and other more advanced interactive teaching methods, we wanted to have a teacher which could give advice to a learner on which action to take from any state. Unfortunately, RL agents struggle to do this in practice. They generalize poorly to states which they do not normally visit.\\n\\nAs for why RL agents are easier to imitate than the heuristic expert, this is likely because the policy implemented by the RL expert is easier for a neural network to implement. RL is an optimization technique. By design, it attempts to adjust the weights of a neural network so as to maximize the reward obtained on a given problem. In other words, RL will try to find a policy which is the best (in terms of both performance and learnability) for the expert\\u2019s neural network. Thus, it may be more natural for a learner that has the same neural network as the expert to imitate such a policy rather than imitating a computer program, such as our heuristic expert. Informally, we found that the RL-trained policy is more reactive (i.e. based on the current/recent observations), whereas the heuristic expert takes advantage of its perfect memory. In the view of the fact that training RL agents for harder levels is extremely hard, we believe that having a heuristic expert that can solve all levels is highly useful. \\n\\n(see part 2 for continuation)\"}",
"{\"title\": \"Clarification on Observation Format\", \"comment\": \"Thank you for your question. The agent indeed receives 3 integers for each tile. In our preliminary investigations we tried converting these to one-hot embeddings first, but we did not observe a big difference in the results.\\n\\nPlease let us know if you have any further questions.\"}",
"{\"comment\": \"Very interesting work!\\n\\nIn Appendix A.4, you describe how the observations are encoded: 'Each tile is encoded using 3 integer values: one describing\\nthe type of object contained in the cell, one describing its color, and a flag indicating whether doors\\nare open or closed.'. I was wondering whether these are given to the agent as actual integers, or they are one-hot encoded first. Could you please comment on this?\", \"title\": \"Format of the observation\"}",
"{\"title\": \"Studies grounded language learning with a human in the loop by removing the human (and natural language)\", \"review\": [\"This paper focuses on grounded language learning with a human in the loop, in the sense where the language is synthetic, the environment is a 2D grid world, and the human is a simulated human teacher implemented using heuristics. This setup is dubbed the BabyAI platform, and includes curriculum learning over 19 levels of increasing difficulty.\", \"Overall, the BabyAI platform is conceptually similar to numerous previous works that seek to learn grounded language in simulation environments. These efforts differ along various axes, for example visual realism, 2D vs 3D, partially vs. fully observed, different tasks, world-state manipulation or not, etc. The main original aspect of the BabyAI platform is the simulated human-teacher.\", \"Strengths\", \"Learning with a human in the loop is an extremely important problem to study, although currently efforts are hampered by cost, lack or reproducibility, and the sample inefficiency of existing learning methods. This paper addresses all three of these issues, albeit by removing the human and natural language. This is simultaneously the greatest weakness of this approach. The contribution of this paper therefore rests on the quality/interestingness/utility of the provided synthetic language and the synthetic teacher.\", \"Fortunately, the synthetic language does exhibit interesting compositional properties, it is readily extensible, it has the appealing property that it can be readily interpreted as a subset of english, and it is accompanied by a verifier to check if the specified actions were completed.\", \"Weaknesses\", \"If the ultimate goal is learning with a human in the loop, the usefulness of the synthetic teacher is not clear, particularly as it is apparently easier to imitate from an RL trained agent than the teacher. The explanation 'This can explained by the fact that the RL expert has the same neural network architecture as the learner' does no seem obvious to me.\", \"Regarding the human in the loop, since this is aspirational and not an aspect of the paper, the title of the paper does not seem reflective of its content (even with the 'First steps' qualifier).\", \"If the main unique aspect is the simulated human-teacher, it is not clear why it is necessary to create a new environment, rather than re-using an existing environment. The effect of this is to limit comparisons with recent work and an increasing fragmentation of research across tasks that are related but can\\u2019t be compared.\"], \"summary\": \"This paper represents an important direction, in that it provides a testbed for studying the sample efficiency of grounded language learning in a simplified (yet still challenging and compositional) environment. I believe the environment and the provided synthetic language and verifier will prove useful to the community, and despite some reservations about the title and the simulated human-teacher, I recommend acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New platfrom for research\", \"review\": \"Summary:\\nThis paper presents a research platform with a simulated human (a.k.a bot) in the loop for learning to execute language instructions in which language has compositional structures. The language introduced in this paper can be used to instruct an agent to go to objects, pick up objects, open doors, and put objects next to other objects. MiniGrid is used to build the environments used for this platform. In addition to introducing the platform, they evaluate the difficulty of each level by training an imitation learning baseline using one million demonstration episodes for each level and report results. Moreover, the reported results contain data efficiencies for imitation learning and reinforcement learning based approaches to solving BabyAI levels. \\n\\nA platform like this can be very useful to expedite research in language learning, machine learning, etc. In my view, work like this should be highly encouraged by this conference and alike.\", \"comments\": \"1. There are following papers should be cited as they are very related to this paper:\\n a) Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments\", \"https\": \"//arxiv.org/abs/1706.07230\\nThe code is available for this paper.\", \"question\": \"When this platform will be available for public?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting direction and open-source platform, but paper falls short of human evaluation\", \"review\": \"Summary\\n\\nThe authors introduce BabyAI, a platform with the aim to study grounded language learning with a human in the loop. The platform includes a *simulated* human expert (bot) that teaches a neural learner. The current domain used in a 2D gridworld and the synthetic instructions require the agent to navigate the world (including unlocking doors) and move objects to specified locations. They also introduce \\\"Baby Language\\\" to give instructions to the agent as well as to automatically verify their execution.\\n\\nThe paper includes a detailed description of the minigrid env with the included tasks and instruction language set. \\n\\nAuthors trained small and large LSTM models on the tasks on a variety of standard learning approaches, using pure exploration (RL) and imitation from a synthetic bot (IL). They show IL is much more data efficient than RL in this domain as well. Also, a curriculum approach is evaluated (pre-train on task 1-N, then train on task N+1). \\n\\nPro\\n- Human-in-the-loop research is an exciting direction.\\n- The language instruction set is a starting point for high-level human instructions. \\n\\nCon\\n- It is still unclear how to effectively learn with human-in-the-loop. The authors don't actually evaluate \\n1) how well the bot imitates a human, or \\n2) how an actual human would interact and speed up learning. \\nAll experiments are done with standard learning approaches with a synthetic bot. \\n- The authors assume that human feedback comes as instructions or demonstrations. These are not the only forms of feedback possible (e.g., preferences). (Does the platform easily support those?)\\n\\nReproducibility\\n- Open-sourcing the platform is a good contribution to the community.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJGfCjA5FX | PAIRWISE AUGMENTED GANS WITH ADVERSARIAL RECONSTRUCTION LOSS | [
"Aibek Alanov",
"Max Kochurov",
"Daniil Yashkov",
"Dmitry Vetrov"
] | We propose a novel autoencoding model called Pairwise Augmented GANs. We train a generator and an encoder jointly and in an adversarial manner. The generator network learns to sample realistic objects. In turn, the encoder network at the same time is trained to map the true data distribution to the prior in latent space. To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss. Here we train a discriminator to distinguish two types of pairs: an object with its augmentation and the one with its reconstruction. We show that such adversarial loss compares objects based on the content rather than on the exact match. We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with state-of-the-art on datasets MNIST, CIFAR10, CelebA and achieves good quantitative results on CIFAR10. | [
"Computer vision",
"Deep learning",
"Unsupervised Learning",
"Generative Adversarial Networks"
] | https://openreview.net/pdf?id=BJGfCjA5FX | https://openreview.net/forum?id=BJGfCjA5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkgmeCK9JE",
"B1l5iE4nRm",
"r1eSfCljRX",
"ryl6XhpFRQ",
"HkllV5Tt0X",
"SyeeLRnYRQ",
"BklqJdQs3m",
"ryeeGRn_hX",
"BJgC-o2xnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544359403300,
1543419041595,
1543339533385,
1543261220702,
1543260711941,
1543257672053,
1541253090096,
1541094919784,
1540569861784
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper876/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper876/Authors"
],
[
"ICLR.cc/2019/Conference/Paper876/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper876/Authors"
],
[
"ICLR.cc/2019/Conference/Paper876/Authors"
],
[
"ICLR.cc/2019/Conference/Paper876/Authors"
],
[
"ICLR.cc/2019/Conference/Paper876/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper876/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper876/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an augmented adversarial reconstruction loss for training a stochastic encoder-decoder architecture. It corresponds to a discriminator loss distinguishing between a pair of a sample from the data distribution and its augmentation and pair containing the sample and its reconstruction. The introduction of the augmentation function is an interesting idea, intensively tested in a set of experiments, but, as two of the reviewers pointed out, the paper could be improved by deeper investigation of the augmentation function and the way of choosing it, which would increase significance of the contribution.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Intersting idea that needs a bit more investigation\"}",
"{\"title\": \"Thanks for the fair remark\", \"comment\": \"Thank you for your clarification on this theoretical issue. Indeed, the support of p(y|x) can be low-dimensional if dim(z) < dim(y). As you noticed it can be addressed by techniques used in [1, 2].\\n\\n[1] Sonderby et al. Amortised MAP Inference for Image Super-resolution. ICLR 2017.\\n[2] Roth et al. Stabilizing Training of Generative Adversarial Networks through Regularization. NIPS 2017.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for clarification. I appreciate your efforts on including more details about the design of a(x).\\n\\nOn the theoretical part, this problem cannot be easily solved.\\n1. Even when the encoder q(z|x) is stochastic, the support of p(y|x) can still be low-dimensional if dim(z) < dim(y) and the decoder is deterministic.\\n2. The augmentation distribution r(y|x) is not guaranteed to have full support. E.g. conditioned on x, the random-crop operator a(x) cannot generate augmentations that look like Gaussian blur. \\n3. So now given that p(y|x) and r(y|x) do not have full support, with probability 1 they will have mismatched supports.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Dear reviewer,\\nThank you very much for the encouraging and constructive comments. We will address each your question below:\\n\\n> There is one theoretical issue for the defined \\\"reconstruction\\\" loss (for JS and f-divergences). Because decoder(encoder(x)) is a deterministic function of x, this means p(y|x) is a delta function. With r(y|x) another delta function (even that is not delta(y=x)), with probability 1 we will have mismatched supports between p(y|x) and r(y|x).\\n\\nIt appears that there is a misunderstanding. In Section 5 we notice that encoder is stochastic and has a fully factorized Gaussian distribution. Therefore, decoder(encoder(x)) is a stochastic function, too. The conditional distribution r(y|x) is not a delta function because we use the stochastic augment mapping a(x) (the combination of reflecting pad and the random crop). As a result, supports of distributions p(y|x) and r(y|x) have non-empty intersection and Jensen-Shanon divergence is defined correctly for them. \\n\\n> I think another big issue for the paper is the lack of discussion on how to choose r(y|x), or equivalently, a(x).\\n\\nWe agree with you that there should be more discussion on how to choose the augmentation function a(x) and more experiments with different type of augmentations.\\n\\nIndeed, the choice of the augment mapping a(x) can significantly impact PAGAN performance. We analyzed another augmentations such as a Gaussian blur and a random contrast normalization before we chose the augment function a(x) mentioned in the paper. As a result, we selected a combination of reflecting pad and the random crop based on the visual judgment of generated samples and reconstructions. We have added a new subsection \\\"Choice of Augmentation\\\" to Section 5 where we provide experiments with other types of augmentations as well as with different padding width for the pad-crop augmentation type. We also add some discussion on intuition behind selecting the most efficient augmentation.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Dear reviewer,\\nWe would like to thank you for your thoughtful review and valuable suggestions. We will address each your question below:\\n\\n1. > Therefore, the motivation in the introduction may be some modification.\\n\\nIt appears that there is a misunderstanding. It is true that in ALI and ALICE, they use one discriminator to classify pairs (z, x) and (z, G(z)). However, this paragraph was devoted to the ALICE paper where authors introduced an additional cycle consistency term in the eq. (8) in the form of adversarially learned discriminator on pairs (x, x) and (x, G(E(x))). Later in the Proposition 1 they showed the degeneracy of the straightforward approach: the discriminator tends to learn delta(x-G(E(x))) classification rule. One of the motivation of using the augmentation is to avoid this issue and to allow the discriminator to distinguish pairs based not on the raw pixels but on the high level features which capture the perceptual similarity of images.\\n\\n2. > The authors failed to compare their model with SVAE [1] and MINE [2], which are improved versions of ALICE. And we also have other ways to match the distribution such as Triple-GAN [3] and Triangle-GAN [4], I think the authors need to run some comparison experiments.\\n\\nThank you for pointing out these recent papers we missed to cite and compare to. We discuss each paper below. \\n\\nTriple-GAN and Triangle-GAN are semi-supervised generative adversarial models. Triple-GAN allows a class conditional generation, Triangle-GAN is applied for cross-domain joint distribution matching. However, these models do not have an encoder part and are not fully unsupervised. In our paper, we compare the proposed method only with other unsupervised bidirectional GANs. Therefore, we do not consider Triple-GAN and Triangle-GAN as our baselines. \\n\\nMINE is an improved version of ALICE model. Unfortunately, in the original paper, authors do not provide results for CIFAR10 dataset. It is hard to reproduce them because the source code of the method is not provided. \\n\\nSVAE is a generative model which improves the variational auto-encoder (VAE). It is closely related to our work. We have added quantitative comparisons with SVAE to a new version of the paper (see Table 1, Table 2). \\n\\n3. > The authors should discuss more about the augment mapping a(x), i.e., how to choose a(x). I think this is quite important for this paper. At least some empirical results and analysis, for example, how inception score / FID score changes when using different choices of a(x).\\n\\nIndeed, the choice of the augment mapping a(x) can significantly impact PAGAN performance. We analyzed another augmentations such as a Gaussian blur and a random contrast normalization before we chose the augment function a(x) mentioned in the paper. As a result, we selected a combination of reflecting pad and the random crop based on the visual judgment of generated samples and reconstructions. We have added a new subsection \\\"Choice of Augmentation\\\" to Section 5 where we provide experiments with other types of augmentations as well as with different padding width for the pad-crop augmentation type. We also add some discussion on intuition behind selecting the most efficient augmentation. \\n\\n4. 
> This paper claims that the proposed method can make the training more robust, but there is no such experiment results to support the argument.\\n\\nAs we understand, this question relates to the following quote from the paper: \\\"To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss ... This enforces the discriminator to take into account content invariant to the augmentation, thus making training more robust.\\\" So, we claim that the augmented pairs (x, a(x)) in contrast to pairs (x, x) allow the discriminator not to degrade to a delta function, thus making training more robust. We empirically support this argument by the experiment provided in subsection \\\"Importance of augmentation\\\" in Section 5 and in Table 3 where we compare the model with augmented pairs (x, a(x)) versus the model with pairs (x, x). \\n\\n[1] chen et al. Symmetric variational autoencoder and connections to adversarial learning, AISTATS 2018.\\n[2] Belghazi et al, Mutual Information Neural Estimation, ICML 2018.\\n[3] Li et al. Triple Generative Adversarial Nets, NIPS 2017.\\n[4] Gan et al. Triangle generative adversarial networks, NIPS 2017.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Dear reviewer,\\nWe would like to thank you for the thoughtful review. The main concern you raised is about the novelty of the paper. We will address each point of the review below:\\n\\n> Comparing to the existing works, the main contribution is the introducing of an augmented reconstruction loss by training a discriminator to distinguish the augmentation data from the reconstructed data.\\n\\nThe summary of our contribution is accurate at a high-level. Just to be sure that there is no misunderstanding we add the key detail: the discriminator distinguishes pairs (x, a(x)) from the pairs (x, G(E(x))) where x is a real object, a(x) is its augmentation and G(E(x)) is its reconstruction. Adding x as the first element in each pair is crucial because it ensures that reconstructions G(E(x)) will correspond to the source object x. Otherwise, if we classify just the augmentation data a(x) from the reconstructions G(E(x)) instead of pairs the auto-encoding model will not be penalized for incorrect reconstructions. \\n\\n> The techniques used to train a bidirectional GAN are very standard. The only new stuff may be is the proposed reconstruction loss defined on augmented samples and reconstructed ones. But this is also not a big contribution, seems just using a slightly different way to guarantee reconstruction.\\n\\nIt is true that the key distinction of our method from other algorithms of training a bidirectional GAN is the proposed adversarial reconstruction loss defined on pairs. Despite the simplicity of the concept, to the best of our knowledge it is the first successful attempt of applying the content-aware trainable distance between two images. \\n\\nOther approaches [1, 2, 3, 4, 5, 6] mainly utilize standard L1 and L2 distances which lead to undesirable artifacts and blurriness in reconstructions. In ALICE [6] paper authors consider a discriminator on pairs (x, x) and (x, G(E(x)) without augmentation and mention that it degrades to a delta function which is even worse than L1 and L2. Our proposed augmentation allows for the discriminator to classify pairs based not on the pixels but on the content of the image which is invariant to the augmentation. \\n\\nIn the paper, the main motivation of the adversarial reconstruction loss over the standard pixel-wise losses is as follows: the latter match images in the space of pixels which is highly noisy and does not capture perceptual similarity while the proposed loss matches images in the space of high level features learned by the discriminator on pairs. In experiments, we show that introducing the adversarial reconstruction loss instead of L1 distance significantly improves both the visual quality of generated images and reconstructions and standard metrics such as Inception Score and Frechet Inception Distance. Therefore, we argue that the proposed loss is conceptually very different from the standard pixel-wise losses. \\n\\nAdditionally, we want to notice that in the paper we introduce a novel metric Reconstruction Inception Dissimilarity (RID) as alternative to the standard RMSE. We empirically show that RID is more robust to content-preserving transformations and captures perceptual similarity between source image and its reconstruction rather than a pixel-wise coincidence. 
\\n\\n[1] Variational Approaches for Auto-Encoding Generative Adversarial Networks, \\\\\\\\ https://arxiv.org/abs/1706.04987\\n[2] It Takes (Only) Two: Adversarial Generator-Encoder Networks, https://arxiv.org/abs/1704.02304\\n[3] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, https://arxiv.org/abs/1703.10593\\n[4] Neural Photo Editing with Introspective Adversarial Networks, https://arxiv.org/abs/1609.07093\\n[5] Wasserstein Auto-Encoders, https://arxiv.org/abs/1711.01558\\n[6] ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching, https://arxiv.org/abs/1709.01215\"}",
"{\"title\": \"Adding and encoder for the GANs is studied. But the distinctions from existing models are not obvious.\", \"review\": \"The paper propose a adversary method to train a bidirectional GAN with both an encoder and decoder. Comparing to the existing works, the main contribution is the introducing of an augmented reconstruction loss by training a discriminator to distinguish the augmentation data from the reconstructed data. Experimental results are demonstrated to show the generating and reconstruction performance.\\n\\nThe problem studied in this paper is very important, and has drawn a lot of researchers' attentions in recent years. However, the novelties of this paper is very limited. The techniques used to train a bidirectional GAN are very standard. The only new stuff may be is the proposed reconstruction loss defined on augmented samples and reconstructed ones. But this is also not a big contribution, seems just using a slightly different way to guarantee reconstruction.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Adversarial reconstruction loss is an interesting idea, but the paper need more polishing\", \"review\": \"==============Updated=====================\\nThe authors addressed some of my concern, and I appreciated that they added more experiments to support their argument.\\nAlthough I still have the some consideration as R3, I will raise the rating to 6.\\n\\n===========================================\\n\\nThis paper is easy to follow. Here are some questions:\\n\\n1. The argument about ALI and ALICE in the second paragraph of the introduction, \\u201c\\u2026 by introducing a reconstruction loss in the form of a discriminator which classifies pairs (x, x) and (x, G(E(x)))\\u201d, however, in ALI and ALICE, they use one discriminator to classify pairs (z, x) and (z, G(z)). Therefore, \\u201c\\u2026 the discriminator tends to detect the fake pair (x, G(E(x))) just by checking the identity of x and G(E(x)) which leads to vanishing gradients\\u201d is problematic. Therefore, the motivation in the introduction may be some modification.\\n\\n2. The authors failed to compare their model with SVAE [1] and MINE [2], which are improved versions of ALICE. And we also have other ways to match the distribution such as Triple-GAN [3] and Triangle-GAN [4], I think the authors need to run some comparison experiments.\\n\\n3. The authors should discuss more about the augment mapping a(x), i.e., how to choose a(x). I think this is quite important for this paper. At least some empirical results and analysis, for example, how inception score / FID score changes when using different choices of a(x).\\n\\n4. This paper claims that the proposed method can make the training more robust, but there is no such experiment results to support the argument.\\n\\n[1] chen et al. Symmetric variational autoencoder and connections to adversarial learning, AISTATS 2018.\\n[2] Belghazi et al, Mutual Information Neural Estimation, ICML 2018.\\n[3] Li et al. Triple Generative Adversarial Nets, NIPS 2017.\\n[4] Gan et al. Triangle generative adversarial networks, NIPS 2017.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Quite a lot of experiments, but the choice of r(y|x) is not well justified, and some theoretical issues\", \"review\": \"Thank you for an interesting read.\\n\\nThe paper proposes adding an adversarial loss to improve the reconstruction quality of an auto-encoder. To do so, the authors define an auxiliary variable y, and then derive a GAN loss to discriminate between (x, y) and (x, decoder(encoder(x))). The algorithm is completed by combining this adversarial \\\"reconstruction\\\" loss with adversarial loss functions that encourages the matching of marginal distributions for both the observed variable x and the latent variable z. \\n\\nExperiments present quite a lot of comparisons to existing methods as well as an ablation study on the proposed \\\"reconstruction\\\" loss. Improvements has been shown on reconstructing input images with significant numbers.\\n\\nOverall I think the idea is new and useful, but is quite straight-forward and has some theoretical issues (see below). The propositions presented in the paper are quite standard results derived from the original GAN paper, so for that part the contribution is incremental and less interesting. The paper is overall well written, although the description of the augmented distribution r(y|x) is very rush and unclear to me.\\n\\nThere is one theoretical issue for the defined \\\"reconstruction\\\" loss (for JS and f-divergences). Because decoder(encoder(x)) is a deterministic function of x, this means p(y|x) is a delta function. With r(y|x) another delta function (even that is not delta(y=x)), with probability 1 we will have mismatched supports between p(y|x) and r(y|x). \\n\\nThis is also the problem of the original GAN, although in practice the original GAN with very careful tuning seem to be OK... Also it can be addressed by say instance noise or convolving the two distributions with a Gaussian, see [1][2].\\n\\nI think another big issue for the paper is the lack of discussion on how to choose r(y|x), or equivalently, a(x). \\n\\n1. Indeed matching p_{\\\\theta}(x) to p^*(x) and q(z) to p(z) does not necessarily returns a good auto-encoder that makes x \\\\approx decoder(encoder(x)). Therefore the augmented distribution r(y|x) also guides the learning of p(y|x) and with appropriately chosen r(y|x) the auto-encoder can be further improved.\\n\\n2. The authors mentioned that picking r(y|x) = \\\\delta(y = x) will result in unstable training. But there's no discussion on how to choose r(y|x), apart from a short sentence in experimental section \\\"...we used a combination of reflecting 10% pad and the random crop to the same image size...\\\". Why this specific choice? Since I would imagine the distribution r(y|x) has significant impact on the results of PAGAN, I would actually prefer to see an in-depth study of the choice of this distribution, either theoretically or empirically. \\n\\nIn summary, the proposed idea is new but straight-forward. The experimental section contains lots of results, but the ablation study by just removing the augmentation cannot fully justify the optimality of the chosen a(x). I would encourage the authors to consider the questions I raised and conduct extra study on them. I believe it will be a significant contribution to the community (e.g. in the sense of connecting GAN literature and denoising methods literature).\\n\\n[1] Sonderby et al. Amortised MAP Inference for Image Super-resolution. ICLR 2017.\\n[2] Roth et al. 
Stabilizing Training of Generative Adversarial Networks through Regularization. NIPS 2017.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
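The PAGAN record above centers on one concrete mechanism: a pair discriminator that separates real pairs (x, a(x)) from fake pairs (x, G(E(x))), where a(x) is a reflect-pad-then-random-crop augmentation. Below is a minimal, hypothetical PyTorch sketch of that loss. The channel-wise pairing, the 10% pad fraction, and the `enc`/`dec`/`pair_disc` modules are illustrative assumptions, not the authors' released code; the encoder is assumed stochastic, as stated in the responses.

```python
import torch
import torch.nn.functional as F

def augment(x, pad_frac=0.10):
    """Reflect-pad by ~10% of each side, then randomly crop back to the input size."""
    _, _, h, w = x.shape
    ph, pw = max(1, int(h * pad_frac)), max(1, int(w * pad_frac))
    xp = F.pad(x, (pw, pw, ph, ph), mode="reflect")
    top = torch.randint(0, 2 * ph + 1, (1,)).item()
    left = torch.randint(0, 2 * pw + 1, (1,)).item()
    return xp[:, :, top:top + h, left:left + w]

def pagan_reconstruction_losses(x, enc, dec, pair_disc):
    """Pair discriminator: real pairs (x, a(x)) vs. fake pairs (x, G(E(x)))."""
    recon = dec(enc(x))                              # enc is assumed to sample a latent
    real_pair = torch.cat([x, augment(x)], dim=1)    # pair = channel concat (an assumption)
    d_real = pair_disc(real_pair)
    d_fake = pair_disc(torch.cat([x, recon.detach()], dim=1))
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    g_out = pair_disc(torch.cat([x, recon], dim=1))  # generator/encoder try to fool the pair disc
    g_loss = F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```

Because the second element of a real pair is a(x) rather than x itself, the discriminator cannot win by checking exact pixel identity, which is the degeneracy the responses attribute to the plain (x, x) pairing.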
|
HJlfAo09KX | Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy | [
"Haoyu Fu",
"Yuejie Chi",
"Yingbin Liang"
] | We study model recovery for data classification, where the training labels are generated from a one-hidden-layer fully-connected neural network with sigmoid activations, and the goal is to recover the weight vectors of the neural network. We prove that under Gaussian inputs, the empirical risk function using cross entropy exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, as soon as the sample complexity is sufficiently large. This implies that if initialized in this neighborhood, which can be achieved via the tensor method, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration. To the best of our knowledge, this is the first global convergence guarantee established for the empirical risk minimization using cross entropy via gradient descent for learning one-hidden-layer neural networks, at the near-optimal sample and computational complexity with respect to the network input dimension. | [
"cross entropy",
"neural networks",
"parameter recovery"
] | https://openreview.net/pdf?id=HJlfAo09KX | https://openreview.net/forum?id=HJlfAo09KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skg0kCeTyE",
"rygK6cLo3m",
"rklsJtp93m",
"HylHS_0L37"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544519141576,
1541266112670,
1541228770593,
1540970557414
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper875/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper875/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper875/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper875/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper shows local convergence results for gradient descent on one hidden layer network with Gaussian inputs and sigmoid activations. Later it shows global convergence by using spectral initialization. All the reviewers agree that the results are similar to existing work in the literature with little novelty. There are also some concerns about the correctness of the statements expressed by some reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Lack of practicality and theoretical depth.\", \"review\": \"The paper presents theoretical analysis for recovering one-hidden-layer neural networks using logistic loss function. I have the following major concerns:\\n\\n(1.a) The paper does not mention identifiability at all. As has been known, neural networks with even only one hidden layer are not identifiable. The authors need to either prove the identifiability or cite existing references on the identifiability. Otherwise, the parameter recovery does not make sense.\", \"example\": \"The linear network takes f(x) = 1'Wx/k, where 1 is a vector with every entry equal to one. Then two models with parameters W and V are identical as long 1'W = 1'V.\\n\\n(1.b) If the equivalent parameters are not isolated, the local strong convexity is impossible to hold. The authors need to carefully justify their claim.\\n\\n(2) When using Sigmoid or Tanh activation functions, the output is bounded between [0,1] or [-1,+1]. This is unrealistic for logistic regression: The output of [0,1] means that the posterior probability has to be bounded between 1/2 and e/(1+e); The output of [-1,1] means that the posterior probability has to be bounded between 1/(1+e) and e/(1+e).\\n\\n(3) The most challenging part of the logistic loss is the lack of curvature, when neural networks have large magnitude outputs. Since this paper assumes that the neural networks takes very small magnitude outputs, the extension from Zhong et al. 2017b to the logistic loss is very straightforward. \\n\\n(4) Spectral initialization is very impractical. Nobody is using it in practice. The spectral initialization avoids the challenging global convergence analysis.\\n\\n(5) Theorem 3 needs clarification. Please explicitly write the RHS of (7). The result would become meaningless, if under the scaling of Theorem 2, is the RHS of (7) smaller than RHS of (5).\\n\\nI also have the following minor concerns on some unrealistic assumptions, but these concerns do not affect my rating. These assumptions have been widely used in many other papers, due to the lack of theoretical understanding of neural networks in the machine learning community.\\n\\n(6)\\tThe neural networks take independent Gaussian input.\\n(7)\\tThe model is assumed to be correct.\\n(8)\\tOnly gradient descent is considered.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Incremental work, not strong enough\", \"review\": \"This paper studies the problem of learning the parameter of one hidden layer neural network with sigmoid activation function based on the negative log likelihood loss. The authors consider the teacher network setting with Gaussian input, and show that gradient descent can recover the teacher network\\u2019s parameter up to certain statistical accuracy when the initialization is sufficiently close to the true parameter. The main contribution of this paper is that the authors consider the classification problem with negative log likelihood loss, and provide the local convergence result for gradient descent. However, based on the previous results in Mei et al., 2016 and Zhong et al., 2017, this work is incremental, and current results in this paper is not strong enough. To be more specific, the paper has the following weaknesses:\\n\\n1.\\tThe authors show the uniformly strongly convex and smooth property of the objective loss function which can get rid of the sample splitting procedure used in Zhong et al., 2017. However, the method for proving this uniform result has been previously used in Mei et al., 2016. And the extension to the negative log likelihood objective function is straightforward since the derivate and Hessian of the log likelihood function can be easily bounded given the sigmoid activation function. \\n2.\\tThe authors employ a tensor initialization algorithm proposed by Zhong et al, 2017 to satisfy their initialization requirement. However, it seems like that the tensor initialization almost enables the recovery as it already lands on a point close to the ground truth, the role of GD is somehow not \\nthat crucial. If the authors can prove the convergence of GD with random initialization, the results of this paper will be much stronger.\\n3.\\tThe presentation of the current paper needs to be improved. The authors should distinguish \\\\cite and \\\\citep. There are some incomplete sentences in the current paper, such as in page 3, \\u201cMoreover, (Zhong et al., 2017b) shows\\u2026the ground truth From a technical perspective, our\\u2026\\u201d.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Paper Summary:\\nThis paper studies the problem of recovering a true underlying neural network (assuming there exists one) with cross-entropy loss. This paper shows if the input is standard Gaussian, within a small ball around the ground truth, the objective function is strongly convex and smooth if there is a sufficiently large number of samples. Furthermore, the global minimizer is actually the true neural network. This geometric analysis implies applying gradient descent within this neighborhood, one can recover the underlying neural network. This paper also proposed a provable method based on spectral learning to find a good initialization point. Lastly, this paper also provides some simulation studies.\", \"comments\": \"This paper closely follows a recent line of work on recovering a neural network under Gaussian input assumption. While studying cross-entropy loss is interesting, the analysis techniques in this paper are very similar to Zhong et al. 2017, so this paper is incremental. I believe studying the global convergence of the gradient descent or relaxing the Gaussian input assumption is more interesting.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
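As a companion to the record above, the following NumPy sketch simulates its setting: Gaussian inputs, binary labels drawn from a one-hidden-layer sigmoid teacher, and plain gradient descent on the cross-entropy empirical risk started near the ground truth. The averaged-output network, the dimensions, the step size and iteration count, and the small random perturbation standing in for the paper's tensor-method initialization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 5000                          # input dim, hidden units, samples
W_true = rng.normal(size=(k, d)) / np.sqrt(d)  # teacher weights (ground truth)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

X = rng.normal(size=(n, d))                    # Gaussian inputs
p = sigmoid(X @ W_true.T).mean(axis=1)         # teacher label probability in (0, 1)
y = rng.binomial(1, p)                         # labels generated by the teacher

def grad(W):
    """Gradient of the cross-entropy empirical risk w.r.t. the weights W."""
    A = X @ W.T                                # (n, k) pre-activations
    f = sigmoid(A).mean(axis=1)                # model output: average of hidden units
    dldf = (f - y) / np.clip(f * (1.0 - f), 1e-8, None)  # d(cross-entropy)/d(f)
    S = sigmoid(A) * (1.0 - sigmoid(A))        # sigmoid derivatives
    return (dldf[:, None] * S / k).T @ X / n

# Initializing in a neighborhood of W_true also sidesteps the permutation
# ambiguity among hidden units that the first review raises.
W = W_true + 0.1 * rng.normal(size=W_true.shape)
for _ in range(5000):
    W -= 5.0 * grad(W)                         # plain gradient descent
# With finitely many samples the minimizer is only provably *close* to W_true,
# so the relative error settles at a small statistical floor rather than zero.
print("relative recovery error:", np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```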
|
H1gMCsAqY7 | Slimmable Neural Networks | [
"Jiahui Yu",
"Linjie Yang",
"Ning Xu",
"Jianchao Yang",
"Thomas Huang"
] | We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve similar (and in many cases better) ImageNet classification accuracy than individually trained models of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications including COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks | [
"Slimmable neural networks",
"mobile deep learning",
"accuracy-efficiency trade-offs"
] | https://openreview.net/pdf?id=H1gMCsAqY7 | https://openreview.net/forum?id=H1gMCsAqY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byewd0_mlE",
"BJxHanYjJV",
"Skxg-ZFjkE",
"S1g9rv-fJN",
"SyeIKWl11N",
"Byg13FQjRm",
"HJgcCEmjCQ",
"rkgxjlo9Am",
"rylzSkDA6X",
"r1xT_6ICTX",
"HJe6pnUA6m",
"HyeH_nUAaX",
"rkgg7sI0TQ",
"rkxP-cICpm",
"ryl2OdD8p7",
"r1gelBGgTm",
"rklrgxdT37",
"BJez1UBa27",
"H1lytDzo2Q",
"Byg-61r4n7"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544945263502,
1544424636590,
1544421624255,
1543800642007,
1543598462135,
1543350694615,
1543349457831,
1543315608107,
1542512442395,
1542511989002,
1542511812818,
1542511725341,
1542511384416,
1542511103238,
1541990516405,
1541575912179,
1541402605455,
1541391834438,
1541248886961,
1540800441478
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper874/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"~Bohan_Zhuang1"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"~Ji_Lin1"
],
[
"ICLR.cc/2019/Conference/Paper874/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"ICLR.cc/2019/Conference/Paper874/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper874/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper874/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper874/AnonReviewer3"
],
[
"~Jason_Kuen1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposed a method that creates neural networks that can run under different resource constraints. The reviewers have consensus on accept. The pro is that the paper is novel and provides a practical approach to adjust model for different computation resource, and achieved performance improvement on object detection. One concern from reviewer2 and another public reviewer is the inconsistent performance impact on classification/detection (performance improvement on detection, but performance degradation on classification). Besides, the numbers reported in Table 1 should be confirmed: MobileNet v1 on Google Pixel 1 should have less than 120ms latency [1], not 296 ms.\\n\\n\\n[1] Table 4 of https://arxiv.org/pdf/1801.04381.pdf\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"train a single neural network at different widths\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work! We will add the citation once revision period is re-opened.\"}",
"{\"comment\": \"Dear authors,\\n\\nThis is a very interesting work. And I think it is closely related to the mutual learning frameworks [1,2], where the core idea is also to jointly train several models for improving the performance of training each model separately. The main difference is with/without weight sharing, which is one of the contributions of the paper. And I recommend you to cite these works in the paper.\", \"1\": \"Zhang et al. \\\"Deep Mutual Learning\\\", CVPR2018. http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Mutual_Learning_CVPR_2018_paper.pdf\", \"2\": \"Zhuang et al. \\\"Towards Effective Low-bitwidth Convolutional Neural Networks\\\", CVPR2018\", \"http\": \"//openaccess.thecvf.com/content_cvpr_2018/papers/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf\", \"title\": \"Very interesting work, and recommend some related works\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work! However, we can not fully agree with your suggestion. Our reasons are summarized below:\\n\\n1. In your referenced paper [1], the major focus is to compress (Section 3) and sparsify/pruning filters, channels and layers with scheduling (Section 4), and get a \\\"nested sparse networks\\\". The resulted network can be used for model compression, knowledge distillation and hierarchical classification (Section 5).\\nIn our work, the focus is not to compress, sparsify or pruning, but to simply train a single neural network executable at different width, with the spotlight on the accuracy/performance of standard image recognition benchmarks (ImageNet classification, COCO object detection, instance segmentation, keypoints detection). While the motivation is similar, our focus, methodology, analysis and experimental results are completely different.\\n\\n2. Moreover, the only related experiment, hierarchical classification, is also different to our experiments and standard benchmarks. In your referenced paper [1] in Section 5.3:\\n\\n\\\"We also provide experimental results on the ImageNet (ILSVRC 2012) dataset. From the dataset, we collected a subset, which consists of 100 diverse classes including natural objects, plants, animals, and artifacts.\\\"\\n\\nIn efficient deep learning, none of MobileNet v1 [2], MobileNet v2 [3], ShuffleNet [4] evaluate proposed methods on Cifar-10, Cifar-100 or sub-sampled \\\"100-class ImageNet\\\". Many methods that work on toy dataset can not generalize to real scenarios in the topic of efficient models, thus we think challenging settings like standard 1000-class ImageNet is essential to make the work solid and to ensure fair comparisons. Since the motivation is similar, we will be happy to add a citation in related work. We will always be happy to highlight and add comparison to any work that is related and has standard benchmark results.\\n\\n\\n[1] Kim, Eunwoo, Chanho Ahn, and Songhwai Oh. \\\"Learning Nested Sparse Structures in Deep Neural Networks.\\\" arXiv preprint arXiv:1712.03781 (2017).\\n[2] Howard, Andrew G., et al. \\\"Mobilenets: Efficient convolutional neural networks for mobile vision applications.\\\" arXiv preprint arXiv:1704.04861 (2017).\\n[3] Sandler, Mark, et al. \\u201cMobileNet v2: Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation.\\\" arXiv preprint arXiv:1801.04381 (2018).\\n[4] Zhang et al. Shufflenet: An extremely efficientconvolutional neural network for mobile devices.arXiv preprint arXiv:1707.01083, 2017.\"}",
"{\"comment\": \"This paper introduces a deep neural network that provides different inference paths with respect to different widths for accuracy-efficiency trade-off at test time, but the concept has been already introduced in prior work\\n[Kim et al., NestedNet: Learning Nested Sparse Structures in Deep Neural Networks, CVPR, 2018]\\nwhich suggests a nested network to produce multiple different inference paths with different widths (they call \\\"channel scheduling\\\" which is one of their strategies to allow multiple different sparse networks).\\n\\nExcept the missing related work, your paper still has value in terms of different methodology as well as promising experimental results including detection and semantic segmentation.\\n\\nIt would be good to not only introduce additional related work but make the contribution/positioning clear.\", \"title\": \"Missing related work\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work! We will add the citation once revision period is re-opened. Code will be released soon and we warmly welcome the community to work together on related topics!\"}",
"{\"comment\": \"Very interesting work and congratulations!\\n\\nI am the first author of paper Runtime Neural Pruning (RNP, in NIPS 2017), where we also partitioned the channels of each convolutional layers into 4 equal sets and used a reinforcement learning agent to determine how many sets to run according to the difficulty of input images, in an incremental way. RNP can also adjust the workload according to the available hardware resources by adjusting the computation penalty.\\n\\nI think your paper has solved some of the training difficulty in RNP, and it would be very interesting to try a network-level dynamic inference according to the input image. Also, it would be very nice if you can include a reference to our paper. Thanks!\", \"title\": \"Interesting work and a related paper\"}",
"{\"title\": \"Thanks for addressing my questions\", \"comment\": \"Thanks for addressing my questions!\"}",
"{\"title\": \"Authors' Reply to Review\", \"comment\": \"Thanks for your review efforts! We have addressed all questions below:\\n\\n1. We aim to train single neural network executable at different widths. We find slimmable networks achieve better results especially for small models (e.g., 0.25x) on detection tasks. We have mentioned that it is probably due to implicit distillation, richer supervision and better learned representation (since detection results are based on pre-trained ImageNet learned representation). We try to avoid strong claims of any deep reason because none of them is strictly proved by us yet. Explaining deep reasons for improvements are not the motivation or the focus of this paper. But we are actively exploring on these questions!\\n\\n2. In fact, on average the image classification results are also improved (0.5 better top-1 accuracy in total), especially for small models. After submission, we have improved accuracy of S-ShuffleNet due to an additional ReLU layer (our implementation bug) between depthwise convolution and group convolution (Figure 2 of ShuffleNet [3]). Our models will be released.\\n\\n3. Thanks for the good suggestion! Currently we conduct detection experiments mainly on Detectron [1] and MMDetection [2] framework where ResNet-50 is among the most efficient models. We do value this suggestion and will try to implement mobilenet-based detectors. Besides, all code (including classification and detection) and pre-trained models will be released soon and we warmly welcome the community to work on together.\\n\\nThanks!\\n\\n\\n[1] https://github.com/facebookresearch/Detectron\\n[2] https://github.com/open-mmlab/mmdetection\\n[3] Zhang et al. Shufflenet: An extremely efficientconvolutional neural network for mobile devices.arXiv preprint arXiv:1707.01083, 2017.\"}",
"{\"title\": \"Authors' Reply to Review\", \"comment\": \"Thanks for your review efforts! We have addressed all three questions below:\\n\\n1. As mentioned in Section 3.3, the only modification is to accumulate all gradients from different switches. It means that the optimizer (SGD for image recognition tasks) is exactly the same as training individual models (same momentum, etc.). The only difference is the value of gradient for each parameter. In Algorithm 1, we follow pytorch-style API and use optimizer.step() to indicate applying gradients. We have not observed any difficulty in optimization of slimmable networks using default optimizer in Algorithm 1.\\n\\n2. There is no \\\"unbalanced gradient\\\" problem in training slimmable networks (it may seem like so). The parameters of 0.25x seem to have \\\"more gradients\\\", but in the forward view, these parameters of 0.25x are also used four times in Net 0.25x, 0.5x, 0.75x and 1.0x. It means the parameters in 0.25x are more important for the overall performance of slimmable networks. In fact, back-propagation is strictly based on forward feature propagation. In the forward view, as mentioned in Section 3.3, our primary objective to train a slimmable network is to optimize its accuracy averaged from all switches.\\n\\n3. Our reported ResNet-50 accuracy is correct (23.9 top-1 error). We evaluate single-crop testing accuracy instead of 10-crop following all our baselines. The ResNet-50 single-crop testing accuracy is publicly reported in ResNeXt paper (Table 3, 1st row) [1], released code [2] and many other publications. Our ResNet-50 has same implementation with PyTorch official pre-trained model zoo [3] where the top-1 error is also 23.9 instead of <21% (in fact ResNet-152 still has > 21% single-crop top-1 error rate).\\n\\nWe sincerely hope the rating can be reconsidered if it was affected by above questions. Thanks for your time and review efforts!\\n\\n\\n[1] Xie, Saining, et al. \\\"Aggregated residual transformations for deep neural networks.\\\" Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017.\\n[2] https://github.com/facebookresearch/ResNeXt\\n[3] https://pytorch.org/docs/stable/torchvision/models.html\"}",
"{\"title\": \"Authors' Reply to Review\", \"comment\": \"Thanks for your positive review and encouragements! We also believe the discovery of slimmable network opens up the possibility to many related fields including model distillation, network compression and better representation learning. We are actively exploring on these topics and hope this submission may contribute to ICLR community.\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work! However, we cannot agree with your comments. We have addressed your questions and concerns below:\\n\\n1. As introduced in Sec. 1 and concluded by all reviewers, this work aims to \\\"train a single neural network executable at different widths for different devices\\\" \\nWe never claim \\\"training runtime is the key problem\\\". And our focus is not on \\\"training a single network\\\" but on \\\"a single network executable at different widths\\\". The testing runtime and flexible accuracy-efficiency trade-offs are what we care. \\n2. In Table 3 for ImageNet classification, the top-1 accuracy is actually improved by 0.5 in total.\\n\\n3. Although all experiments are conducted with same settings for both individual and slimmable models, we also noticed that the reproduced performance of individual models was lower than original papers. A potential reason is included in Appendix B of the first submitted version (original *-RCNN papers use ResNet-50 with strides on 1x1 convolution, while we follow PyTorch official implemented ResNet-50 with strides on 3x3). After submission, we found a recently released detection framework MMDetection [1] that has settings for pytorch-style ResNet-50. Thus we have conducted another set of detection experiments and included the results in Appendix C (same mAP is reproduced, for example, Faster-R-50-FPN-1x with 36.4 mAP).\", \"and_our_conclusion_still_holds\": \"on detection tasks, slimmable models have better performance than individually trained models, especially for small models. Specifically, for 0.25x models, slimmable network has 2.0+ mAP, which is indeed significant. For 1.0x models, slimmable also have 0.4+ mAP, 0.7+ mAP for Faster-RCNN and Mask-RCNN. We will fully release our code (both training and testing) and pre-trained models on both ImageNet classification and COCO detection sets.\\n\\n4. Image classification trains models from scratch, while COCO detection fine-tunes pre-trained ImageNet models. The improvement on detection may due to better learned representation of slimmable models on ImageNet when transfer to COCO tasks. We have also mentioned in our submission that it is probably due to implicit distillation and richer supervision. The reason behind the improvements is beyond the motivation of this submission and requires future investigation. We try to avoid strong claims of any deep reason because none of them is strictly proved by us yet.\\n\\nWe sincerely thank you for posting these concerns and we will always try our best to address them. Please let us know if you have further question or concern. Thanks!\\n\\n\\n[1] https://github.com/open-mmlab/mmdetection\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work. Our claim is correct: at runtime reducing depth cannot reduce memory footprint.\\n\\nFor a simple example, consider a layer-by-layer network stacking same convolution layers, the output of layer N can always be placed into the memory of its input after computation, and feed into next layer (N+1). Because at runtime, there is no need to store feature of previous layers generally (in training, they are required for gradient computation).\\n\\nA good reference is MobileNet v2 paper [1], section 5.1 memory efficient inference. It shows that the memory footprint can be simplified to: M = max_{layer_i \\\\in all layers} (memory_input of layer_i + memory_output of layer_i).\\n\\nThe memory footprint M is a MAX operation over all layers, instead of SUM, during inference.\\n\\n\\n[1] Sandler, Mark, et al. \\u201cMobileNet v2: Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation.\\\" arXiv preprint arXiv:1801.04381 (2018).\"}",
"{\"title\": \"Authors' Reply to Comment\", \"comment\": \"Thanks for your interest in our work! We have added the citation.\"}",
"{\"comment\": \"It is claimed in the 3rd paragraph in introduction that,\\n\\n \\\"Nevertheless, in contrast to width (number of channels), reducing depth cannot reduce memory footprint which is commonly constrained during runtime.\\\"\\n\\nHowever, in my understanding, the momory reduces linearly when reducing depth for deep neural network. Could you please explain more on this?\", \"title\": \"\\\"reducing depth cannot reduce memory footprint\\\"?\"}",
"{\"comment\": \"The motivation to train one model end deploy in multiple devices is quite interesting. However, the experimental results are not convincing.\\n\\nIn Table 3, most of the S-networks reduce performance compared to their individual counterparts. It's not cumbersome to train individual slimmed model that has higher accuracy in portable device and the same FLOPs as the S-model, since training runtime is not the key problem with increasing amount of computational powers.\\n\\nIn Table 5, the baselines of R-50-FPN-1\\u00d7 are much lower than those reported in the original paper of Faster R-CNN and Mask R-CNN. In previous work, the box and mask AP of Mask+R-50-FPN-1\\u00d7 are 37.3 and 33.7, while box AP for Faster+R-50-FPN-1\\u00d7 is 36.4. These results are already comparable and even better than the S-networks. The same problem applies to the keypoints. Therefore, it is unclear that S-model would still bring performance gain when the standard baselines are employed.\\n\\nAnother concern is that S-model seems to degenerate performance in ImageNet, as the paper mentioned \\\"a slimmable network is expected to have lower performance than individually trained ones intuitively\\\". But it turns out that the pretrained S-model in ImageNet has large improvement when finetuned in detection and segmentation. This violates common sense.\", \"title\": \"Good motivation but not convincing results\"}",
"{\"title\": \"The paper proposes an idea of combining different size models together into one shared net. And the performance is claimed to be slightly worse for classification and much better for detection.\", \"review\": \"The idea is really interesting. One only need to train and maintain one single model, but use it in different platforms of different computational power.\\n\\nAnd according to the experiment results of COCO detection, the S-version models are much better than original versions (eg. faster-0.25x, from 24.6 to 30.0) . The improvement is huge to me. However the authors do not explain any deep reasons.\\n\\nAnd for classification, there are slightly performance drop instead of a large improvement which is also hard to understand. \\n\\nFor detection, experiments on depth-wise convolution based models (such as mobilenet and shufflenet) are suggested to make this work more solid and meaningful.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very exciting work\", \"review\": \"This paper presents a straightforward looking approach for creating a neural networks that can run under different resource constraints, e.g. less computation but lower quality solution and expensive high quality solution, while all the networks are having the same filters. The idea is to share the filters of the cheapest network with those of the larger more expensive networksa and train all those networks jointly with weight sharing. One important practical observation is that the batch-normalization parameters should not be shared between those filters in order to get good results. However, the most interesting surprising observation, that is the main novelty of the work that even the highest quality vision network get substantially better by this training methodology as compared to be training alone without any weight sharing with the smaller networks, when trained for object detection and segmentation purposes (but not for recognition). This is a highly unexpected result and provides a new unanticipated way of training better segmentation models. It is especially nice that the paper does not pretend that this phenomenon is well understood but leaves its proper explanation for future work. I think a lot of interesting work is to be expected along these lines.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"algo details and numbers\", \"review\": \"This paper trains a single network executable at different widths. This is implemented by maintaining separate BN parameter and statistics for different width. The problem is well-motivated and the proposed method can be very helpful for deployment of deep models to devices with varying capacity and computational ability.\\n \\nThis paper is well-written and the experiments are performed on various structures. Still I have several concerns regarding the algorithm.\\n1. In algo 1, while gradients for convolutional and fully-connected layers are accumulated for all switches before update, how are the parameters for different switches updated?\\n2. In algo 1, the gradients of all switches are accumulated before the update. This may result in implicit unbalanced gradient information, e.g. the connections in 0.25x model in Figure 1 has gradient flows on all four different switches, while the right-most 0.25x connections in 1.0x model has only one gradient flow from the 1.0x switch, will this unbalanced gradient information increase optimization difficulty and how is it solved?\\n3. In the original ResNet paper, https://arxiv.org/pdf/1512.03385.pdf, the top-1 error of RestNet-50 is <21% in Table 4. The number reported in this paper (Table 3) is 23.9. Where does the difference come from?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Nice work! I had a paper published at CVPR 2018 on training convolutional networks that support instant and adaptive accuracy-efficiency trade-offs at runtime, via early downsampling rather than networking slimming. My paper also includes a similar technique of using independent BatchNorm parameters (just means and stds in my paper, whereas you \\\"unshare\\\" all of BatchNorm parameters) for different trade-off configurations.\\n\\nI'd appreciate if you would include a reference to it - \\\"Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks\\\". Thanks.\", \"title\": \"A related work\"}"
]
} |
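To make the two mechanisms discussed in the slimmable-networks record above concrete, here is a hypothetical PyTorch sketch of (i) a switchable batch normalization layer holding one private BatchNorm2d per width switch, and (ii) a training step that accumulates gradients from all switches before a single optimizer.step(), mirroring the Algorithm 1 description in the authors' replies. The class and function names, and the assumption that the model slices its channels given width_mult, are illustrative, not the released implementation.

```python
import torch.nn as nn

class SwitchableBatchNorm2d(nn.Module):
    """One private BatchNorm2d per width switch; conv weights are shared elsewhere."""
    def __init__(self, max_channels, width_mult_list=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.width_mult_list = list(width_mult_list)
        self.bns = nn.ModuleList(
            [nn.BatchNorm2d(int(max_channels * w)) for w in self.width_mult_list])

    def forward(self, x, width_mult):
        # Route the (already sliced) feature map to the BN of the active switch.
        idx = self.width_mult_list.index(width_mult)
        return self.bns[idx](x)

def train_step(model, optimizer, criterion, images, labels,
               width_mult_list=(0.25, 0.5, 0.75, 1.0)):
    """One update: accumulate gradients from every width switch, then step once."""
    optimizer.zero_grad()
    for w in width_mult_list:
        logits = model(images, width_mult=w)  # model slices channels to width w
        loss = criterion(logits, labels)
        loss.backward()  # gradients sum across switches on the shared weights
    optimizer.step()
```

Keeping BN private per switch matters because feature statistics differ across widths, while the convolution weights below a given width are shared: the 0.25x slice participates in every switch's forward pass, which is the point made in reply 2 above about the apparent "unbalanced gradients".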
|
BygMAiRqK7 | Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs | [
"Yogesh Balaji",
"Hamed Hasani",
"Rama Chellappa",
"Soheil Feizi"
] | Building on the success of deep learning, two modern approaches to learn a probability model of the observed data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs). VAEs consider an explicit probability model for the data and compute a generative distribution by maximizing a variational lower-bound on the log-likelihood function. GANs, however, compute a generative model by minimizing a distance between observed and generated probability distributions without considering an explicit model for the observed data. The lack of an explicit probability model in GANs prohibits the computation of sample likelihoods in their frameworks and limits their use in statistical inference problems. In this work, we show that an optimal transport GAN with entropy regularization can be viewed as a generative model that maximizes a lower-bound on average sample likelihoods, an approach that VAEs are based on. In particular, our proof constructs an explicit probability model for GANs that can be used to compute likelihood statistics within GAN's framework. Our numerical results on several datasets demonstrate consistent trends with the proposed theory. | [
"GAN",
"VAE",
"likelihood estimation",
"statistical inference"
] | https://openreview.net/pdf?id=BygMAiRqK7 | https://openreview.net/forum?id=BygMAiRqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byez6b98lE",
"B1lXCkeqAX",
"SklzmcPOTQ",
"H1esAdwu6X",
"SJlPLdvdaQ",
"SJeUt8H62X",
"HygcBfTO2m",
"Hkleh_K_3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545146809841,
1543270346734,
1542122010241,
1542121682741,
1542121551272,
1541391997710,
1541096002256,
1541081256273
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper873/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper873/Authors"
],
[
"ICLR.cc/2019/Conference/Paper873/Authors"
],
[
"ICLR.cc/2019/Conference/Paper873/Authors"
],
[
"ICLR.cc/2019/Conference/Paper873/Authors"
],
[
"ICLR.cc/2019/Conference/Paper873/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper873/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper873/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper's strength is in that it shows the log likelihood objective is lower bounded by a GAN objective plus an entropy term. The theory is novel (but it seems to relate closely to the work https://arxiv.org/abs/1711.02771.) The main drawback the reviewer raised includes a) it's not clear how tight the lower bound is; b) the theory only applies to a particular subcase of GANs --- it seems that the only reasonable instance that allows efficient generator is the case where Y = G(x)+\\\\xi where \\\\xi is Gaussian noise. The authors addressed the issue a) with some new experiments with linear generators and quadratic loss, but it lacks experiments with deep models which seems to be necessary since this is a critical issue. Based on this, the AC decided to recommend reject and would encourage the authors to add more experiments on the tightness of the lower bound with bigger models and submit to other top venues.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Summary of the authors' response\", \"comment\": \"We thank the reviewers for their comments. In summary, there were two key comments raised by the reviewers that we have addressed as follows (details have been posted in our point by point responses to the comments):\\n\\n(1) The assumption that \\u201cthe generator function is injective\\u201d has been relaxed. By using the data processing inequality from information theory, we show that our results hold even without this assumption. \\n(2) We have added a new section with empirical analysis highlighting the tightness of the entropic GAN lower bound for the log-likelihood function.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for the valuable comments. We have addressed them in the revised version of the paper. Below, we provide point to point responses to the comments:\", \"pros\": \"Thank you for these comments.\", \"cons\": \"(1) Assumption of injectivity: Thank you for this comment. We agree that having an injective G was a strong assumption. Fortunately, we do not need this assumption at all. In the revised version of the paper, we show that our results hold without this assumption. In order to do this, we use the data-processing inequality from Information Theory which indicates that for a random variable X, the entropy of G(X) is always less or equal to the entropy of X. For more details, please see equation A.5 and Section 4 in the revised paper.\\n\\n(2) Stationarity of inner dual problem: Thanks for the comments. First, the likelihood surrogate is computed using Corollary 2 which has three terms visualized in Fig. 1. Marginalizing the transportation map of Eq. 3.7 is a necessary step to compute the likelihood surrogate (this part corresponds to the encoder part in VAE formulations).\\n\\nWe compute the surrogate likelihoods only after the generative model is trained. In our approach, Entropic GANs are trained using Algorithm 1 of (Sanjabi et al, NIPS 2018) (as mentioned in Appendix E). It has been shown in Theorem 4.2 of that paper that Algorithm 1 leads to a close approximation of stationary solutions of the Entropic GAN objective. We have added further explanations about this to the revised paper.\\n\\n(3) Likelihood computation at intermediate iterations: We thank the reviewer for pointing out that the stationarity assumption will not be satisfied at the intermediate iterations of training, and hence likelihood computations may not be accurate. To fix this, we re-ran the discriminator updates for 100 steps before computing the surrogate likelihoods at intermediate iterations. We obtained almost the same behavior in likelihoods. We have updated plots in the revised version of the paper and have added further explanations about this experiment.\\n\\n(4)\\tUnregularized GANs: Thank you for the comment. First note that the coupling P_X|y (i.e. the transportation map) is always a valid density function. Second note that our results in Theorem 1 and Corollary 2 hold for a general GAN formulation (not just the entropic GAN). However, for a general GAN, it may not be easy to compute P_X|y using GAN\\u2019s dual formulation. For the entropic GAN, eq 3.7 gives a closed form relationship between GAN\\u2019s dual solutions and P_X|y. For un-regularized GANs, in some special cases that we discuss in Appendix C, such a closed-form relationship exists (but not in general). In experiments of Section 4.3, we \\u2018approximate\\u2019 P_X|y with a delta function spiked on the closest latent sample generating y. The heuristic used in this section imitates eq 3.7 by taking into account both the likelihood of the latent variable as well as the distance between y and the model. In general, understanding the behavior of the optimal coupling P_X|y using GAN\\u2019s dual solutions is an interesting direction for the future work. \\n\\n(5) Minor comments: Thank you for pointing out these typos. We have modified the paper accordingly. The names used for W_1 and W_2 (the first and second order Wasserstein distances) are common names used in the optimal transport literature. 
For example, see Villani\\u2019s book titled \\u201cOptimal transport: old and new\\u201d. However, we agree that in the machine learning literature, these names are less common. For example, WGAN (Wasserstein GAN) is in fact using the first-order Wasserstein distance in its formulation. We have added further explanations about these names to the revised version of the paper.\"}",
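For readers following the exchange above, the inequality the authors invoke can be written compactly (this rendering is ours, stated for a deterministic map G, not quoted from the paper):

$$ H\big(G(X)\big) \;\le\; H(X), $$

with equality when G is injective, which is why the injectivity assumption can be dropped while retaining the entropy bound.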
"{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for the valuable comments. We have addressed them in the revised version of the paper. Below, we provide point to point responses to the comments:\\n\\n(1)\\tThank you for this comment. We agree that having an injective G was a strong assumption. Fortunately, we do not need this assumption at all. In the revised version of the paper, we show that our results hold without this assumption. In order to do this, we use the data-processing inequality from Information Theory which indicates that for a random variable X, the entropy of G(X) is always less or equal to the entropy of X. For more details, please see equation A.5 and Section 4 in the revised paper. \\n \\n(2)\\tYou are right about the definition of the Shannon entropy. What we meant in that phrase was that the strongly-convex regularization term is the negative Shannon entropy, i.e. -H(P_{Y,\\\\hY}) (because the Shannon entropy is a concave function and we need convexity in minimization). We have clarified this in the revised version of the paper.\\n\\n(3)\\tThank you for pointing out this typo. We have updated the paper accordingly.\\n\\n(4)\\tThank you for pointing out this relevant reference. We have added a discussion about it to the introduction of the revised version of the paper.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for the valuable comments. We have addressed them in the revised version of the paper. Below, we provide point to point responses to the comments:\\n\\n(1)\\tTightness of the entropic GAN lower bound: in Theorem 1, we showed that the Entropic GAN objective provides a lower-bound on the average sample log-likelihoods. This result is in the same flavor of variational lower bounds used in VAEs, thus providing a principled connection between GANs and VAEs. One drawback of VAEs (which has been echoed in reviewer\\u2019s comment 1) is about the lack of the tightness analysis of the employed variational lower bound. \\nTo address this comment, we have empirically analyzed the tightness of the entropic GAN lower bound for some simple generative models. We have explained our results in detail in a new Appendix Section (Appendix B) which includes a table (Table 1) and a figure (Figure 4). Briefly, our result in Corollary 2 indicates that the approximation gap can be quantified as the KL divergence between P_{X|Y=\\\\by} (the latent variable distribution resulted from the entropic GAN optimization) and f_{X|Y=\\\\by} (the latent variable distribution according to the true model of the data). We evaluate this approximation gap for a linear generative model and a quadratic loss function. Our empirical results show that the approximation gap is orders of magnitudes smaller than the log-likelihood values (see Table 1 and Figure 4). This approach can potentially be used in the tightness analysis of VAEs as well.\\n\\n(2)\\tThank you for your comment. Let us explain our result and the data model a bit further. A classical approach to compute a generative model using some observed samples is to consider a parametric family of density functions (referred to as f(.), the data model) and optimize its parameters using maximum likelihood. VAEs are approximations of this approach. GANs, however, seemingly do not take this traditional approach to the generative problem. GANs compute a generative distribution $G^*(X)$ that minimizes a distance (such as the optimal transport distance) to the observed distribution. However, GANs do not make any density assignments to the points outside of the range of G^* and that is the key issue because after training a GAN, if we observe a new point y^{test}, it is very likely that this point does not lie exactly on the range of G^*. Thus, GANs are unable to assign a reasonable probability to this point. Intuitively, we can imagine that if this point is \\u2018close\\u2019 to G^*(X), it is more likely to be generated from this model. Our result provides a theoretically justified way to define what we mean by being \\u2018close\\u2019 to G^*.\\n\\nOur key idea is to consider an explicit model for the data in GAN\\u2019s framework so that we can compute sample likelihoods. Similar to VAEs, f(.) in our model is the underlying data distribution. We assume that the data is generated as per Eq. 2.4 using a ground truth (and unknown) function G. This is a reasonable model for the data since G can be a complex function. By training the entropic GAN, we essentially estimate this function using the generator network of GANs. We have added further explanations about this to the introduction of the paper.\\n\\n(3)\\tThe histograms are plotted using the likelihood estimator presented in Corollary 2. 
As mentioned in Corollary 2, the proposed estimator of sample likelihood approaches true likelihoods when the KL divergence term approaches 0. The newly included Appendix B section presents empirical evidence that the approximation error is orders of magnitude smaller than the likelihood values for linear models. Also, in the real image datasets where computing true likelihoods are difficult, our proposed estimator exhibits sensible trends indicating that the proposed estimator is a good estimator of sample likelihoods.\"}",
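Schematically, the tightness claim in the response above can be restated as follows (our notation, following the response; see Corollary 2 of the paper for the precise statement): the gap between the true log-likelihood and the surrogate is exactly the KL term,

$$ \log p(\mathbf{y}) - \mathrm{surrogate}(\mathbf{y}) \;=\; \mathrm{KL}\big(P_{X|Y=\mathbf{y}} \,\|\, f_{X|Y=\mathbf{y}}\big) \;\ge\; 0, $$

so the lower bound is tight exactly when the two posterior distributions over the latent variable coincide.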
"{\"title\": \"Interesting connection, but lacks clarity\", \"review\": \"Summary\\nThe authors notice that entropy regularized optimal transport produce an upper bound of a certain model likelihood. Then, the authors claim it is possible to leverage that upper bound to come up with a measure of 'sample likelihood', the probability of a certain sample under the model.\\n\\nEvaluation\\nThe idea is certainly interesting and novel, as it allows to bridge two distinct worlds (VAE and GANs). However, I am concerned about the message (or lack of thereof) that is conveyed in the paper. Particularly, the following two points makes me be reluctant to recommend an acceptance:\\n\\n1)There is no measure on the tightness of the lower bound. How can we tell if this bound isnt tight? All results are dependent on the bound being close to the true value. No comments about this are given.\\n2)The sample likelihoods are dependent on a certain \\\"model\\\". Here the nomenclature is confusing because I thought GANS were a probabilistic model, but now there is an additional model regarding a function f. How these two relate? What happens if I change f? to which extent the results depend on f?\\n3)related to 2): the histograms in figure 2 are interesting, but they are not conclusive that the measure that is being proposed is a 'bona fide' sample likelihood.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but need more polishing\", \"review\": \"1. The assumption made by the authors that \\\"generator is injective\\\" is problematic or even wrong, as it is well known that GAN suffers from mode collapsing problem. \\n\\n2. It is very confusing when the authors mentioned the negative Shannon entropy. Because the equation the authors wrote is the Shannon entropy, not the negative version.\\n\\n3. In the 5th paragraph in the introduction section, the paper (Cuturi, 2013) has nothing to do with \\\"improve computational aspect of GAN\\\", maybe the authors want to cite this paper \\\"Learning Generative Models with Sinkhorn Divergences\\\".\\n\\n4. The authors failed to discuss their paper with \\\"ON THE QUANTITATIVE ANALYSIS OF DECODERBASED GENERATIVE MODELS\\\", which uses AIS to estimate the likelihood.\", \"suggestion\": \"1. Please use \\\\cdot instead of , i.e. F(\\\\cdot) instead of F(.)\\n2. Typo: in Appendix ?? and ??, in section 4\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting attempt on theory of entropic GANs\", \"review\": \"The contribution of the paper is to show that WGAN with entropic regularization maximize a lower bound on the likelihood of the observed data distribution. While the WGAN formulation minimizes the Wasserstein distance of the transformed latent distribution and the empirical distribution which is already a nice measure of \\\"progress\\\", having a bound on the likelihood can be interesting.\", \"pros\": [\"I like the entropic GAN formulation and believe it is very interesting as it gives access to the joint distribution of latent and observed variables.\", \"While there are some doubtful statements, overall the paper is well written and easy to read.\"], \"cons\": [\"The assumption of injectivity of the generator could be problematic, as it might not be fulfilled due to mode collapse.\", \"I feel the theory is not very deep. Since one has a closed form of the transportation map (Eq. 3.7), the likelihood of the data is obtained by marginalizing out the latent space. However, this assumes that the inner dual maximization problem is solved to stationarity so that Eq 3.7 holds, which is not the case in practice (5 discriminator updates).\", \"Thus in Sec. 4.1 for the likelihood at various points in training it is not clear what is actually happening.\", \"Sec 4.3 for unregularized GANs might be problematic. In general, the transportation plan is not a density function, so I'm not certain whether Theorem 1 / Corollary 2 still hold. Furthermore, the heuristic for \\\"inverting\\\" G^* is very crude.\", \"There are also some minor problematic statements in the paper. While they can be easily fixed, they give me doubts:\", \"The original VAE paper is not cited in the introduction for VAEs\", \"The 2013 paper by Cuturi cited on page 2 has nothing to do with \\\"computational aspects of GANs\\\". It is about fast computation of approximate OT between two discrete prob. measures.\", \"First-order / second-order Wasserstein distance is I think a bit unusual name for W_1, W_2\", \"On pg. 4, the point of the entropy term is to make the objective strongly convex. Strict convexity has no computational benefits.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HyxzRsR9Y7 | Learning Self-Imitating Diverse Policies | [
"Tanmay Gangwani",
"Qiang Liu",
"Jian Peng"
] | The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process. When rewards are only sparsely available during an episode, or rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment. Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem. Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need. In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings. We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem. We show that with Jensen-Shannon divergence, this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped rewards learned from experience replays. Experimental results indicate that our algorithm performs comparably to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards. We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies. We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks. | [
"Reinforcement-learning",
"Imitation-learning",
"Ensemble-training"
] | https://openreview.net/pdf?id=HyxzRsR9Y7 | https://openreview.net/forum?id=HyxzRsR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJg6RN7Nv4",
"Syx8rMBgP4",
"H1xAY5F1xE",
"Hkgln7Zw14",
"HJxRqW-v14",
"r1g9FkFUk4",
"r1lt10yBJE",
"rkerMF4K0X",
"HylcXomtC7",
"rJxAv8QYAQ",
"HkxD8U7tCQ",
"H1xJ2emFA7",
"BkeCkTGFAQ",
"H1e1gh6sTX",
"Skx-JJf62Q",
"B1gAK7lT2m",
"H1lUeG3vnQ"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1552327892948,
1552073278095,
1544686214426,
1544127400015,
1544126869579,
1544093569620,
1543990752765,
1543223564883,
1543220001813,
1543218790132,
1543218766982,
1543217319276,
1543216358508,
1542343654800,
1541377753406,
1541370757883,
1541026285819
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"~Yuchen_Lu1"
],
[
"ICLR.cc/2019/Conference/Paper872/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper872/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"ICLR.cc/2019/Conference/Paper872/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper872/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper872/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper872/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Code to be released soon.\", \"comment\": \"Hi,\\nThanks for your interest in our paper! We are in the process of cleaning up the code for release. We expect it to be ready by the end of this month.\"}",
"{\"comment\": \"It's a very interesting approach. Is there still any plan on releasing the source code?\", \"title\": \"On the status of source code\"}",
"{\"metareview\": \"This paper proposes a reinforcement learning approach that better handles sparse reward environments, by using previously-experienced roll-outs that achieve high reward. The approach is intuitive, and the results in the paper are convincing. The authors addressed nearly all of the reviewer's concerns. The reviewers all agree that the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}",
"{\"title\": \"Thank you for the updated rating.\", \"comment\": \"We would merge pieces from the Appendix into the main sections for better coherence. Also, we would make our source code and scripts public.\"}",
"{\"title\": \"Implementation Details\", \"comment\": \"1. Experiments in section 3.1 use a parameterized discriminator since a single network suffices for self-imitation. Experiments in section 3.2 use $\\\\psi$ networks for computational efficiency with policy ensembles.\\n\\n2. In practice, to get the complete SVPG gradient, we calculate the exploitation and exploration components, and then do a convex combination as: (1-p)*exploitation + p*exploration, where p is linearly decayed from 1 to 0. The temperature (T) is held constant at 0.5 (2D-navigation) and 0.2 (Locomotion).\"}",
"{\"comment\": \"Thank you for your reply!\\nAnd I have two more questions.\\n\\n1. Whether discriminator or $\\\\psi$ network did you use for getting results you write in your paper?\\n2. What number you use for $\\\\alpha$ for SVPG and T for JS kernel?\\n\\nThank you!\", \"title\": \"Whether discriminator or $\\\\psi$ network did you use? and about hyparas\"}",
"{\"title\": \"Thank You!\", \"comment\": \"Thank you for providing a detailed reply. I hope authors will incorporate these points into the paper (specifically the results on a more comprehensive benchmark suite (my concern in my 4th point).\\n\\nI also hope authors will release code and scripts to reproduce the results in the paper, so as to make future comparisons possible.\"}",
"{\"title\": \"Thank you for your question and interest in our paper.\", \"comment\": \"You are correct in observing that if we use parameterized discriminator networks to estimate the ratio $r^{\\\\phi}_{ij} = \\\\rho_{\\\\pi_i}(s,a) / [\\\\rho_{\\\\pi_i}(s,a) + \\\\rho_{\\\\pi_j}(s,a)]$ for the SVPG exploration rewards, then we would need O(n^2) discriminator networks, for n policies in the ensemble. To ensure scalability to ensembles of large number of policies, we opt for explicit modeling of the state-action visitation density for each policy (i) by a parameterized network $\\\\psi_i$. With this, we can obtain the ratios for the SVPG exploration rewards using the n $\\\\psi$ network, reducing the complexity to O(n). Please check the recently added Appendix 5.8.2 in our revision for more details. We would be happy to answer any further questions you may have on this.\"}",
"{\"title\": \"Response to AnonReviewer3. Thank you for your comments!\", \"comment\": \"1- Concerning \\u201cPoints 1. and 2. under Weaknesses\\u201d : \\n\\nWe do not wish to claim or motivate that self-imitation would suffice if the task is \\u201csparse\\u201d in the sense that most of the episodes don\\u2019t see *any* rewards. This would fall under the limitations of self-imitation which we discuss in the paper; we could rely on population-based exploration methods (e.g. SVPG, Section 2.3) and draw on the rich literature on single-agent exploration methods like curiosity/novelty-search or parameter noise to alleviate this to an extent. Instead, we focus on scenarios where \\u201csparse\\u201d feedback is available within an episode. We will make this very clear in our revision. For example, our experiments in Section 3.1 consider tasks where some feedback is available in an episode - either only once at the end of the episode, or at very few timesteps during an episode. We find self-imitation to be highly beneficial (compared to standard policy gradients) on these \\u201csparse\\u201d constructions. Some practical situations of the kind include a.) robotics tasks where rewards in an episode could be intermittent or delayed by arbitrary timesteps due to the inverse kinematics operations b.) cases where a mild feedback on the overall quality of the episode is available, but designing a dense reward function manually is prohibitively hard; an interesting example of this is [5].\\n\\nAlso, although our algorithm exploits \\u201cgood\\u201d trajectories from agent\\u2019s past experience, the demands on the \\u201cgoodness\\u201d of the trajectories are very relaxed. Indeed, the trajectories imitated during the initial phases of learning have quite low overall scores, and they gradually improve in quality.\\n\\n[5] Christiano, Paul F., et al. \\\"Deep reinforcement learning from human preferences.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n\\n2- Concerning \\u201cPoint 3. under Weaknesses -- comparison to off-policy RL methods\\u201d:\\n\\nOur approach makes use of a replay memory to store and exploit past good rollouts of the agent. Off-policy RL methods such as DQN, DDPG also accumulate agent experience in a replay buffer and reuse them for learning (e.g. by reducing TD-error). We run new experiments with a recent off-policy RL method based on DDPG - Twin Delayed Deep Deterministic policy gradient (TD3; [2]). Appendix 5.10 evaluates its performance on MuJoCo tasks under the various reward distributions we used in our paper. We find that the performance of TD3 suffers appreciably under the episodic case and when the rewards are masked out with 90% probability (p_m=0.9). We therefore believe that popular off-policy algorithms (DDPG, TD3) do not exploit the past experience in a manner that accelerates learning when rewards are scarce during an episode. The per-timestep (dense) pseudo-rewards that we obtain with the divergence-minimization objective help in temporal credit assignment, resulting in good policies even under the episodic and noisy (p_m=0.9) settings (Table 1, Section 3.1).\\n\\n[2] Fujimoto, Scott, Herke van Hoof, and Dave Meger. \\\"Addressing Function Approximation Error in Actor-Critic Methods.\\\" International Conference on Machine Learning. 2018.\\n\\n\\n4- Concerning \\u201cPoint 4. under Weaknesses \\u201d: \\n\\nWe have added Appendix 5.7 with results on more MuJoCo tasks. Combined with Table 1. 
in the paper, we believe our overall set to be fairly representative. For reference, the PPO paper [6], which forms our baseline, uses the same set of benchmarks (Figure 3 in their paper). \\n\\n[6] Schulman, John, et al. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\"}",
"{\"title\": \"Response to AnonReviewer1. Thank you for your comments! (Part 2/2)\", \"comment\": \"6- Concerning \\u201cHow is the priority list threshold and size chosen?\\u201d: \\n\\nOur implementation stores the top-C trajectories in the priority queue based on cumulative trajectory-return. We fix the capacity (C) to 10 trajectories for all our experiments. This number was chosen after a limited hyperparameter grid search on Humanoid and Hopper (Appendix 5.4). In general, we didn\\u2019t find our method to be particularly sensitive to the choice of C.\\n\\n\\n7- Concerning \\u201cWould a softer version of the priority queue update do anything useful?\\u201d:\\n\\nIn our initial experiments, we tested with using more relaxed update rules for the priory queue, but found that storing the overall top-C trajectories gave the best results. Nonetheless, the various options for storing and reusing past experiences present interesting trade-offs, and we hope to look deeper into this in the future.\\n\\n\\n8- Concerning \\u201cThe update in (3) seems quite similar to what GAIL would do. What is the difference there?\\u201d: \\n\\nYes, as we mention in the derivation (Appendix 5.1), GAIL does a similar update, but using external expert trajectories rather than using self-imitation. An implementation-specific difference is that while GAIL uses discriminator networks to implicitly estimate the ratio required in the policy gradient theorem, we (when using SVPG exploration in Algorithm 2) learn separate state-action density estimation networks (psi), and explicitly compute the required ratios. This is done for reasons of computational efficiency (Appendix 5.8.2). \\n \\n\\n9- Concerning \\u201cwhy higher asymptotic performance but is often slower in the beginning than the other methods in Fig 3\\u201d: \\n\\nConsider SparseHopper as an example. There is a local minima where the agent can stand still (i.e. no hopping) and collect the per-timestep survival bonus given for not falling down. Baseline algorithms such as PPO-Independent or SI-independent quickly get into this local minima since they greedily exploit the survival bonus readily available. Hence, they reach a score of ~1000 quickly. In, SI-Interact-JS, however, the JS repulsion forces the agents to be diverse and explore the state-space much more effectively. The highest scoring agent in this ensemble (which is plotted in Figure 3.) discovers the hopping behavior eventually. However, during its learning lifetime, it takes varied actions to reach states different from other agents, due to JS repulsion. The score grows gradually since many of the attempts in the beginning lead to the agent falling down (and therefore episode termination) in the process of trying something different. The agent does not quickly accumulate the survival bonus and stand still, unlike the baselines. The asymptotic score is higher since the forward hopping is rewarded higher compared to the survival bonus.\"}",
"{\"title\": \"Response to AnonReviewer1. Thank you for your comments! (Part 1/2)\", \"comment\": \"1- Concerning \\u201cWhy is self-imitation more effective than standard policy gradients, and if the source of stability can be explained intuitively\\u201d : \\n\\nWe believe that learning pseudo-rewards with self-imitation helps in the temporal credit assignment problem in the sparse- or episodic-reward setting. For instance, in the episodic setting, where a reward is only provided at episode termination, standard policy gradient algorithms reinforce the actions towards the beginning of the episode based on a reward signal which is obtained after multiple timesteps and convolves the effect of many intermediate actions. This signal is potentially sparse and diluted, and may deteriorate with task horizon. With our approach, since we learn \\u201cper-timestep\\u201d pseudo-rewards with self-imitation, we expect this greedy signal to help in attributing credit to actions more effectively, leading to faster training.\\n\\nQualitatively, the stability of the self-imitation algorithm could also be understood by viewing it as a form of curriculum learning [4]. Unlike learning from perfect demonstrations by external experts, our learner at any point in time is imitating only a slightly different version of itself. The demonstrations, therefore, increase in complexity gradually over time, resulting in an implicit, adaptive curriculum which stabilizes learning and avoids catastrophic forgetting of behaviors. \\n\\n[4] Bengio, Yoshua, et al. \\\"Curriculum learning.\\\" Proceedings of the 26th annual international conference on machine learning. ACM, 2009.\\n\\n\\n2- Concerning \\u201cRe-phrases in various sections\\u201d: \\n\\nWe have incorporated all the suggested changes in the revision with extra discussion. We have also added the missing reference to Guided Policy Search and expanded on GAIL. DIAYN (Eysenbach et al 2018) is included in Appendix 5.6.\\n\\n\\n3- Concerning \\u201cComparison to Oh et al. (2018)\\u201d: \\n\\nWe have added a new section (Appendix 5.9) focussed on the algorithm (SIL) by Oh et al. (2018). Therein, we mention the update rule for SIL and the performance of PPO+SIL on MuJoCo tasks under the various reward distributions we used in our paper. We summarize our observations here (please see Appendix 5.9 for more details). The performance of PPO+SIL suffers under the episodic case and when the rewards are masked out with 90% probability (p_m=0.9). Our intuition is that this is because PPO+SIL makes use of the \\u201ccumulative return\\u201d from each transition of a past good rollout for the update. When rewards are provided only at the end of the episode, for instance, cumulative return does not help with the temporal credit assignment problem and hence is not a strong learning signal. \\n\\n\\n4- Concerning \\u201cComparing SVPG exploration (Figure 3) to novelty/curiosity based exploration schemes\\u201d: \\n\\nWe have added a new section (Appendix 5.11) on comparing SVPG exploration to a novelty-based exploration baseline - EX2 [3]. The EX2 algorithm does implicit density estimation using discriminative modeling, and uses it for novelty-based exploration. We report results on the hard exploration MuJoCo tasks considered in Section 3.2, using author provided code and hyperparameters. Table 5 in Appendix 5.11 shows that we compare favorably against EX2 on the tasks evaluated. \\n\\n[3] Fu, Justin, John Co-Reyes, and Sergey Levine. 
\\\"Ex2: Exploration with exemplar models for deep reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n\\n5- Concerning \\u201cWhat is psi in appendix 5.3? \\u201d: \\n\\nWe apologize for skimping the details on this. \\u201cpsi\\u201d denotes the parameters of neural networks that are used to model the state-action visitation distribution (rho) of the policy. Therefore, for an ensemble of n policies, there are n \\u201cpsi\\u201d networks. The motivation behind using these networks is as follows. To calculate the gradient of JS, we need the ratio denoted by r^{\\\\phi} in the paper. This ratio can be obtained implicitly by training a parameterized discriminator network. However, when using SVPG exploration with JS kernel, this method would require us to train O(n^2) discriminator networks, one each for calculating the gradient of JS between a policy pair (i,j). To reduce the computational and memory resource burden to O(n), we opt for explicit modeling of the state-action visitation distribution (rho) of the policy by a network with parameters \\u201cpsi\\u201d. The \\u201cpsi\\u201d networks are trained using the JS optimization (Equation 2.) and we can then obtain the ratio explicitly from these \\u201cpsi\\u201d networks. We have added these details (and more) to Appendix 5.8.2. It also contains proper symbols (in Latex) for easier reading.\"}",
"{\"title\": \"Response to AnonReviewer2. Thank you for your comments!\", \"comment\": \"1- Concerning \\u201cSection 2.3 being too dense\\u201d : \\n\\nWe have re-organized the writing. Specifically, we have added more details on SVPG exploration with the JS-kernel in Appendix 5.8. Appendix 5.8.1 includes some more intuition and theory behind Stein Variational Gradient Descent (SVGD) and Stein Variational Policy Gradient (SVPG). Appendix 5.8.2 contains details on our implementation such as calculation of SVPG exploration rewards by each agent, and state-value function baselines, along with better explanation of symbols used in our full algorithm (Algorithm 2).\\n\\n2- Concerning \\u201cMinor points\\u201d: \\n\\nThank you for pointing these out. We have changed Table 1. in the revision to include all the suggested changes, in the hope that the table becomes self-explanatory. We have also rephrased the text to clarify that we compare performance with two different reward masking values - suppressing each per-timestep reward r_t with 90% probability (p_m = 0.9), and with 50% probability (p_m=0.5).\"}",
"{\"title\": \"General response to the reviewers\", \"comment\": \"We would like to thank the anonymous reviewers for their comments and constructive feedback. We address each reviewer's comments individually and summarize the major additions to the revision here:\\n\\n1. Added Appendix 5.7 with results on more MuJoCo tasks\\n2. Added Appendix 5.8 with SVPG background and our implementation details. \\n3. Added Appendix 5.9 on comparison to Oh et al. (2018) [1]\\n4. Added Appendix 5.10 on comparison to off-policy RL (TD3, Fujimoto et al. (2018)) [2]\\n5. Added Appendix 5.11 on comparing SVPG exploration to a novelty-based baseline (EX^2, Fu et al. (2017)) [3]\\n\\n[1] Oh, Junhyuk, Yijie Guo, Satinder Singh, and Honglak Lee. \\\"Self-Imitation Learning.\\\" International Conference on Machine Learning. 2018.\\n[2] Fujimoto, Scott, Herke van Hoof, and Dave Meger. \\\"Addressing Function Approximation Error in Actor-Critic Methods.\\\" International Conference on Machine Learning. 2018.\\n[3] Fu, Justin, John Co-Reyes, and Sergey Levine. \\\"Ex2: Exploration with exemplar models for deep reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2017.\"}",
"{\"comment\": \"I enjoyed reading your interesting submission, and I have one question about implementation.\\n\\nHow did you calculate JS kernel, k(theta_j , theta_i)=exp(-D_JS(rho_pi_theta_i, rho_pi_theta_j)/T)?\\n\\nI think in order to calculate D_JS(rho_pi_theta_i, rho_pi_theta_j), we have to train discriminators which differentiate between trajectories from rho_pi_theta_i and trajectories from rho_pi_theta_j. If this thought is right, we have to 28 discriminators for all combinations of 8 policies. However, this is not practical.\\n\\nIf replay memory is shared, D_JS can be calculated by using 2 discriminators, r^phi_i and r^phi_j. This is because rho_pi_theta_i/rho_pi_theta_j = rho_pi_theta_i/rho_pi_E * rho_pi_theta_E/rho_pi_theta_j = r^phi_i / (1-r^phi_i) * (1 -r^phi_j)/r^phi_j . However, in your paper, replay memories are not shared.\\n\\nTherefore, I would like to know how to calculate JS kernel.\\n\\nThank you!!\", \"title\": \"How to Calculate JS kernel\"}",
"{\"title\": \"intuitive/elegant idea, well-written, convincing results\", \"review\": \"The paper describes a method to improve reinforcement learning for task with sparse rewards signals.\\n\\nThe basic idea is to select the best episodes from the system's experience, and learn to imitate them step by step as the system evolves, aiming at providing a less sparse learning signal.\\n\\nThe math works out to a gradient that is of similar form as a policy gradient, which makes it easy to interpolate both of them. The resulting training procedure is a policy gradient that gets additional reinforcement of the system's best runs.\\n\\nThe experiments show the validity especially for the most extreme case (episodic rewards), while, as expected, for the other extreme of dense rewards, the method's effect is not consistently positive.\", \"the_paper_then_critiques_its_own_method_and_identifies_a_critical_weakness\": \"the reliance on good exploration. I like that a lot. The paper goes on to suggest an extension to address this by training an ensemble, and shows the effectiveness of this for a number of tasks. However, I feel that the description of this extension is less clear than that of the core idea, and introduces too many new ideas and concepts in a too condensed text.\\n\\nThe paper seems a significant in that it provides a notable improvement for sparse-rewards tasks, which are a common sub-class of real-world problems.\\n\\nMy background is not RL. While I am quite confident in my understanding of the paper's math, I am not 100% familiar with the typical benchmark sets. Hence, I cannot judge whether the results include good baselines, or whether the task selection is biased. I can also not judge the completeness of the related work, and how novel the work is. For these questions, I hope that the other reviewers can provide more information.\", \"pros\": [\"intuitive idea for a common problem\", \"solution elegantly has the form of a modified policy gradient\", \"convincing experimental results\", \"self-critique of core idea, and extension to address its main weakness\", \"nicely written text, does not leave a lot of questions\"], \"cons\": \"- while the core idea is nicely motivated and described and good to follow, Section 2.3 feels very dense and too short.\\n\\nOverall, I find the core idea quite intuitive and elegant. The paper's background, motivation, and core method are well-written and, with some effort, quite readable for someone who is not an RL expert. I found that several questions I had during reading were preempted promptly and addressed. However, the description of the secondary method (Section 2.3) is too dense.\\n\\nTo me, the paper solidly meets the threshold of publication. Since I have no good comparison to other papers, I rate it a \\\"clear accept\\\" (8).\", \"minor_points\": \"I noticed a few superfluous \\\"the\\\", please double-check.\\n\\nIn Table 1, please use the same exponent for directly comparable numbers, e.g. instead of \\\"1.8e5 4.4e4\\\", say \\\"18e4 4.4e4\\\". Or best just print the full numbers without exponent, I think you have the space.\\n\\nWhen reading Table 1, I could bnot immediately line up \\\"PPO\\\" and \\\"Self-imitation\\\" in the caption with the table columns. It took a while to infer that PPO refers to \\\\nu=0, and SI to \\\\nu=0.8. 
Can you add PPO and SI to the table headings?\\n\\nYou define p as \\\"the masking probability\\\", but it is not clear whether that is the probability for keeping a \\\"1\\\" in the mask,\\nor for masking out the value. I can only guess from the results. I suggest to rephrase as \\\"the probability of retaining a reward\\\". Also, how about using plain words in Table 1's heading, such as \\\"Noisy rewards\\\\nSuppressing 10% of rewards\\\", so that one can understand the table without having to search for its description in the text?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well written paper that explores an interesting idea, weak experimental evaluation\", \"review\": \"The paper proposes how previously experienced high reward trajectories can be used to generate dense reward functions for more for efficient training of policies in context of reinforcement learning. The paper does this by computing the state-action pair distribution of high rewarding trajectories in the replay buffer, and using a surrogate reward that measures the distance between this distribution and the current state-action pair distribution. The paper derives approximate policy gradients for this surrogate reward function. The paper then describes limitations of doing this: possibility of getting stuck in the local neighborhood of currently well-performing trajectories. It also describes an extension based on Stein variational policy gradients to diversify behavior of an ensemble of policies that are learned together. The paper shows experimental results on a number of MuJoCo tasks.\", \"strengths\": \"1. Adequately leveraging high-return roll-outs for effective learning of policies is an important problem in RL. The paper proposes and empirically investigates a reasonable approach for doing this. The paper shows how using the proposed additional rewards leads to better performance on the choses benchmarks than baseline methods without the proposed rewards.\\n\\n2. I also like that the paper details the short-comings of the proposed approach, and how these could be fixed.\", \"weaknesses\": \"1. The paper uses sparse rewards in RL as a motivation. However, the proposed approach crucially relies on the fact that a good trajectory has at least been encountered once in the past to be of any use. I am not sure if how the proposed approach does justice to the motivation in the paper. The paper should re-write the motivation, or better explain why the proposed method addresses the motivation.\\n\\n2. Additionally, the paper does not provide adequate experimental validation. The experiment that I think will make the case for the paper is one that shows the sample efficiency of the proposed approach over other baseline methods, when given a successful past roll-out. The current experimental setup emphasizes the sparse reward scenario in RL, and it is just not clear to me as to why this is a good benchmark to study the effects of the proposed method. \\n\\n3. The paper primarily makes comparisons to on-policy methods. This may not be a fair comparison, as the proposed method uses past trajectories from a replay buffer (to compute reward). Perhaps improvements are coming because of use of this off-policy information. The paper should design experiments to de-conflate this: perhaps by also comparing to how these additional rewards will compare in context of off-policy methods (like Q-learning).\\n\\n4. I also do not understand how the benchmark tasks were chosen? Are the MuJoCo tasks studied here a fair representative of MuJoCo tasks studied in literature, or are these selected in any manner? While selecting and modifying benchmarks for the purpose of making a specific point is acceptable, it is important to include benchmark results on a full suite of tasks. 
This can help understand (desirable or un-desirable) side-effects of proposed ideas.\\n\\nAfter reading author response and the extra experiments, I have changed my rating to 6 (from the original rating of 5).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Good paper, accept\", \"review\": \"Overall impression:\\nI think that this is a well written interesting paper with strong results. One thing I\\u2019d have liked to see a bit more is an explanation of why self imitation is more effective than standard policy gradient? Where does the extra supervision/stability come from, and can this be explained intuitively? I\\u2019ve suggested some small changes/clarifications to be made inline, and a few more comparisons to add. But overall, I very much like this line of work and I recommend accepting this paper.\", \"abstract\": \"We demonstrate its effectiveness on a number of challenging tasks. -> be more specific.\\n\\nThe term single-timestep optimization is not very clear. Can this be clarified?\\n\\nthey are more widely applicable in the sparse or episodic reward settings -> it is likely important to mention that they are agnostic to horizon of the task.\", \"related_works\": \"Guided Policy Search also does divergence minimization. GAIL considers the imitation learning work as a sort of divergence minimization problem as well, which should be explicitly mentioned. Other work for good exploration include DIAYN (Eysenbach et al 2018). The difference in resulting updates between (Oh et al) and this work should be clearly discussed in the methods section. \\n\\n\\u201cwe learn shaped, dense rewards\\u201d-> too early in the paper for this to make sense. can provide some contextt\\n\\nSection 2.2:\\nfully decides the expected return -> clarify this a bit. I think what you mean is that the dynamics are wrapped into this already, so it accounts for this, but this can be made explicit.\\n\\nSmall typos in appendix 5.1 (r should be replaced by the density ratio)\\n\\nThe update in (3) seems quite similar to what GAIL would do. What is the difference there? Or is the difference just in the fact that the experts are chosen from \\u201cself\\u201d experiences. \\n\\nHow is the priority list threshold and size chosen?\\n\\nWould a softer version of the priority queue update do anything useful? Or would it just reduce to policy gradient when weighted by rewards?\\n\\nAppendices are very clear and very informative while being succinct!\\n\\nI would have liked to see Appendix 5.3 in the main text (maybe a shorter form) to clarify the whole algorithm \\n\\nWhat is psi in appendix 5.3? The algorithm remains a bit unclear without this clarification\\n\\nExperiments. \\nOnly 1 question to answer in this section is labelled? Put 2) and 3) appropriately. \\n\\nCan a comparison to Oh et al 2018 be added to this for the sake of completeness? Also can this be compared to using novelty/curiosity based exploration schemes?\\n\\nCan the authors comment on why the method reaches higher asymptotic performance but is often slower in the beginning than the other methods in Fig 3.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkgW0oA9FX | Graph HyperNetworks for Neural Architecture Search | [
"Chris Zhang",
"Mengye Ren",
"Raquel Urtasun"
] | Neural architecture search (NAS) automatically finds the best task-specific neural network topology, outperforming many manual architecture designs. However, it can be prohibitively expensive as the search requires training thousands of different networks, while each training run can last for hours. In this work, we propose the Graph HyperNetwork (GHN) to amortize the search cost: given an architecture, it directly generates the weights by running inference on a graph neural network. GHNs model the topology of an architecture and therefore can predict network performance more accurately than regular hypernetworks and premature early stopping. To perform NAS, we randomly sample architectures and use the validation accuracy of networks with GHN generated weights as the surrogate search signal. GHNs are fast - they can search nearly 10× faster than other random search methods on CIFAR-10 and ImageNet. GHNs can be further extended to the anytime prediction setting, where they have found networks with better speed-accuracy tradeoff than the state-of-the-art manual designs. | [
"neural",
"architecture",
"search",
"graph",
"network",
"hypernetwork",
"meta",
"learning",
"anytime",
"prediction"
] | https://openreview.net/pdf?id=rkgW0oA9FX | https://openreview.net/forum?id=rkgW0oA9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gXoGggxV",
"rkg3cupE14",
"rylzxtxbkN",
"rklXC6R5RQ",
"SkgA5aR5R7",
"S1g3LTC9C7",
"S1lkmT09CQ",
"SJlF5lI52X",
"HklyTNSthQ",
"rJgpSxAQ27",
"rJgN1Zfz3m",
"SkxLWX-e2m",
"rJxZH6PLsX",
"rJl4OSAVo7",
"HygnM4GEs7",
"S1lpn7Pcc7"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544712859062,
1543981204246,
1543731434136,
1543331274524,
1543331221968,
1543331155780,
1543331094569,
1541197968680,
1541129399400,
1540771908846,
1540657371520,
1540522749693,
1539894584916,
1539790188016,
1539740692074,
1539105717265
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper871/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper871/AnonReviewer2"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"ICLR.cc/2019/Conference/Paper871/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper871/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"ICLR.cc/2019/Conference/Paper871/AnonReviewer1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper871/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an architecture search method based on graph hypernetworks (GHN). The core idea is that given a candidate architecture, GHN predicts its weights (similar to SMASH), which allows for fast evaluation w/o training the architecture from scratch. Unlike SMASH, GHN can operate on an arbitrary directed acyclic graph. Architecture search using GHN is fast and achieves competitive performance. Overall, this is a relevant contribution backed up by solid experiments, and should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting contribution, competitive results\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the further explanation especially on the memory usage. I'm fine with this part. However, the authors seem not adequatly addressed the other two concerns.\\n\\nFirst, for the LSTM encoding baseline, I'm not quite sure about the validness of \\\"the number of neighbours has been conventionally fixed for LSTM representations\\\" since we can always perform traversal on graph to form a sequence. Even though the authors are right, I'm not convinced about the importance to \\\"handle a varying number of neighbours\\\" in NAS, since there are no empirical evidences supporting that. \\n\\nSecond, the authors have not mentioned anything about code publish/reproducibility. I do agree with the public comment that *the reproduction of code is an essential step to make a solid NAS paper*, otherwise the community has nothing except yet another paper/(unfair) baseline. I have to be quite conservative to recommend an acceptance if I'm not guaranteed that the experimental results could be reproduced without pain.\"}",
"{\"comment\": \"There was a previous question about code availability, and the response was:\\n\\n\\\"We cannot say for certain at this moment, but we will consider releasing code after acceptance.\\\"\\n\\nI'd just like to emphasize that this would be very important for reproducibility, since there are a lot of moving pieces in this paper and I'm unsure whether the results can be reproduced without code. Also, to give a positive spin, if code was available I would expect a lot more excitement about this paper than otherwise (e.g., the DARTS paper led to a lot of excitement, in large part due to people being able to play with the code right away).\\n\\nTherefore, I'd appreciate if the authors tried really hard to release their code. Thanks!\", \"title\": \"Code availability?\"}",
"{\"title\": \"Overall response to reviewers\", \"comment\": \"We thank the reviewers for their comments. In addition to responding to the questions, we have updated the paper accordingly.\\n\\nRegarding concerns around novelty, we agree that the idea of extending hypernetworks with graph neural networks is a natural one. However, as Reviewer 3 has mentioned, we argue that the design of GHN itself is nontrivial. We investigate how various aspects of the design impact the performance through extensive ablation studies. For example, we show the benefits of a novel forward-backward graph propagation scheme, and stacking GNNs in the depth dimension for parameter sharing on an architectural-motif level.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their evaluation! To answer the questions:\\n\\n>> \\u201cSection 4.2: It's not entirely clear how this setup allows for variable sized kernels or variable #channels \\u2026 is the #channels in each node held fixed with a predefined pattern, or also searched for? Are the channels for each node within a block allowed to vary relative to one another?\\u201d\\n\\nYes, the output of H is as large as the largest parameter tensor and sliced as necessary. The number of channels is held fixed with a predefined pattern (doubling after each reduction). They are not searched for and do not vary relative to one another\\n\\n>> \\u201cDo you sample a new, random architecture at every SGD step during training of the GHN?\\u201d\\n\\nYes, a new, random architecture is sampled at every SGD step during training of the GHN\\n\\n>> \\u201cGPU-days is an okay metric, but it's also problematic, since it will of course depend on the choice of GPU (e.g. you can achieve a 10x speedup just from switching from a 600-series to a V100! How does using 4 GPUS for 1 hour compare to 1 GPU for 4 hours? How does this change if you have more CPU power and can load data faster? What if you're using a DL framework which is faster than your competitor's?) Given that the difference here is an order of magnitude, I don't think it matters, but if authors begin to optimize for GPU-milliseconds then it will need to be better standardized.\\u201d\\n\\nYes, we agree that a standardized metric may be necessary as GPU timings become lower and lower. To be clear, for our experiments, we use a single GTX 1080Ti with PyTorch. Additionally, we don\\u2019t find data-loading to be a bottleneck for CIFAR-10.\\n\\n>>\\u201dFor Section 5.3, I found the choice to use unseen architectures a little bit confusing. I think that even for this study, there's no reason to use a held-out set, as we seek to scrutinize the ability of the system to approximate performance even with architectures it *does* see during training. \\u201d\\n\\nWe initially used a repeated held-out set to save computation during earlier experiments. Note that in practice due to the size of the search space, no architecture is seen twice anyways. However, an interesting avenue for future work would be investigating a hypernetwork\\u2019s ability to \\u2018overfit\\u2019 to architectures.\\n\\n>> \\u201cHow much does the accuracy drop when using GHN weights? I would like to see a plot showing true accuracy vs. accuracy with GHN weights for the random-100 networks, as using approximations like this typically results in the approximated weights being substantially worse. I am curious to see just how much of a drop there is.\\u201d\", \"regarding_accuracy_dropoff\": \"Please see the updated appendix with plots comparing accuracy with generated weights vs. trained weights\\n\\n>>\\u201dSection 5.4: it's interesting that performance is stronger when the GHN only sees a few (7) nodes during training, even though it sees 17 nodes during testing. I would expect that the best performance is attained with training-testing parity. 
Again, as I do not have any expertise in graph neural networks, I'm not sure if this is common (to train on smaller graphs and generalize to larger ones), so if the authors or another reviewer would like to comment and further illuminate this behavior, that would be helpful.\\u201d\\n\\nWe suspect that the GHN has difficulty learning due to the vanishing gradients when passing messages across large graphs. We believe that the forward-backward passing scheme partially addresses this as it reduces the total number of messages passed. Exploring additional methods to help the GHN learn on larger graphs is an interesting avenue for future work.\"}",
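The training regime confirmed in the response above (a new random architecture per SGD step, hypernetwork-generated weights) can be summarized in a short sketch. All helper names below are placeholders of ours, not the authors' code; the point is only that the task loss backpropagates into the GHN's own parameters through the generated weights:

```python
import torch.nn.functional as F

def train_ghn(ghn, optimizer, train_iter, sample_architecture, run_network, num_steps):
    for _ in range(num_steps):
        arch = sample_architecture()          # random DAG from the search space
        weights = ghn(arch)                   # graph hypernetwork generates all weights
        x, y = next(train_iter)
        loss = F.cross_entropy(run_network(arch, weights, x), y)
        optimizer.zero_grad()
        loss.backward()                       # gradients flow through the generated weights
        optimizer.step()                      # updates the GHN's own parameters
```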
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their evaluation!\", \"to_answer_the_questions\": \">> \\u201cThe authors mention that \\u2018the first hypernetwork to generate all the weights of arbitrary CNN networks rather than a subset (Brock et al. 2018)\\u2019. I\\u2019m sorry that I do not understand the particular meaning of such a statement, especially given the only difference of this work with (Brock et al. 2018) lies in how to represent NN architectures. I am not clear that why encoding via 3D tensor cannot \\u201cgenerate all weights\\u201d, but can only generate only \\u201ca subset\\u201d. \\n\\nThe SMASH encoding method is formulated such that it generates weights only for the 1x1 convolution bottleneck layers. While it certainly may be possible for a to augment SMASH or propose a new 3D tensor encoding method to generate all weights, we are not aware of such a method yet. However, the graph representation lends itself to straightforwardly generate all weights. \\n\\n\\n>> \\u201cFurthermore, I\\u2019m very curious about the effectiveness of representing the graph using LSTM encoding, and then feeding it to the hypernetworks, since simple LSTM encoding is shown to be very powerful [1]. This at least, should act as a baseline\\u201d \\n\\nUnfortunately, we have not run an LSTM-Hypernet baseline, and are not aware of any existing methods, and we agree this would be interesting future work. However, we do compare with ENAS, which uses a weight sharing mechanism and an LSTM encoding with a controller. As Reviewer 2 has pointed out, [1] has shown very strong results with an LSTM controller and a continuous optimization method. However, the graph method does carry some distinct advantages. For example, as Reviewer 1 pointed out, the graph representation is flexible enough to handle a varying number of neighbours (where the number of neighbours has been conventionally fixed for LSTM representations).\\n\\n>> \\u201cCan the authors give more insights about why they can search on 9 operators within less than 1 GPU day? I mean that for example ENAS, can only support 5 operators due to GPU memory limitation (on single GPU card). Do the authors use more than one GPU to support the search process? \\u201c\\n\\nFor ENAS, it must store all the parameters in memory because it finds paths in a larger model. Thus the memory requirements are O (KN) where K is the number of operations and N is the number of nodes in the candidate architecture. In contrast, the memory requirement for GHNs is O(N) + O(K) for the candidate architecture and GHN respectively. Thus, memory is not an issue, and we conduct GHN training on a single GTX 1080Ti.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their evaluation! To answer the questions:\\n\\n>> \\u201cI\\u2019m also curious about the stability of the algorithm and the confidence of the final results. What would be the standard deviation of the final performance if you repeat the entire experiments from scratch (training GHN+random search+architecture selection) using different random seeds?\\u201d\\n\\nWe did not observe large variance when training the GHN on different seeds, and the variance for 10 architectures selected by the GHN is reported in Table 1.\\n\\n>> \\u201cA related question is whether the final performance can be improved with more compute. The algorithm is terminated at 0.84 GPU day, but I wonder how the performance would change if we keep searching for longer time (with more architecture samples). It would be very informative to see the curve of performance vs search cost.\\u201d\\n\\nTraining was halted after the HyperNetwork showed convergence. We saw conducting the random search for longer lead to marginal improvements. Extending the random search to 4 GPU days gave 97.24 $\\\\pm$ 0.05, compared to 97.16 $\\\\pm$ 0.07 using 0.84 GPU days as reported. However, we suspect a more advanced search method would be able to utilize the additional compute time more efficiently.\"}",
"{\"title\": \"Interesting method with solid results.\", \"review\": \"The authors propose to use a graph hypernetwork (GHN) to speedup architecture search. Specifically, the architecture is formulated as a directed acyclic graph, which will be encoded by the (bidirectional) GHN as a dense vector for performance prediction. The prediction from GHN is then used as a proxy of the final performance during random search. The authors empirically show that GHN + random search is not only efficient but also performs competitively against the state-of-the-art. Additional results also suggest predictions from GHN is well correlated with the ground truth obtained by the standard training procedure.\\n\\nThe paper is well-written and technically sound. While the overall workflow of hypernets + random search resembles that of SMASH (Brock et al., 2018), the architecture of GHN itself is a nontrivial and useful contribution. I particularly like the facts that (1) GHN seems flexible enough to handle richer topologies than many prior works (where each node in the graph is typically restricted to have a fixed number of neighbors), thanks to graphnets (2) the authors have provided convincing empirical evidence to back up their design choices about GHN through ablation studies.\\n\\nIn terms of experiments, perhaps one missing piece is to investigate alternative hypernet architectures in a controlled setting. For example, the authors could have implemented the tensor encoding scheme as in SMASH in their codebase to compare the capabilities of graph vs. non-graph structured hypernetworks. \\n\\nI\\u2019m also curious about the stability of the algorithm and the confidence of the final results. What would be the standard deviation of the final performance if you repeat the entire experiments from scratch (training GHN+random search+architecture selection) using different random seeds?\\n\\nA related question is whether the final performance can be improved with more compute. The algorithm is terminated at 0.84 GPU day, but I wonder how the performance would change if we keep searching for longer time (with more architecture samples). It would be very informative to see the curve of performance vs search cost.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Combing Graph Neural Networks and Hyper Networks for NAS\", \"review\": \"This paper proposes using graph neural network (GNN) as hypernetworks to generate free weight parameters for arbitrary CNN architectures. The achieved performance is satisfactory (e.g., error rate < 3 on CIFAR-10 with cutout). I\\u2019m particularly interested in the results on ImageNet: it seems the discovered arch on CIFAR-10 (with less than 1 GPU day) successfully transferred to ImageNet.\\n\\nGenerally speaking, the paper is comprehensive in studying the effects of GNN acting as hypernetworks for NAS. The idea is clear, and the experiments are satisfactory. There are no technical flaws per my reading. The writing is also easy to follow.\\nOn the other hand, the extension of using GNN is indeed natural and straightforward compared with (Brock et al. 2018). Towards that end, the contribution and novelty of the paper is largely marginal and not impressive.\", \"question\": \"1.\\tThe authors mention that \\u2018the first hypernetwork to generate all the weights of arbitrary CNN networks rather than a subset (Brock et al. 2018)\\u2019. I\\u2019m sorry that I do not understand the particular meaning of such a statement, especially given the only difference of this work with (Brock et al. 2018) lies in how to represent NN architectures. I am not clear that why encoding via 3D tensor cannot \\u201cgenerate all weights\\u201d, but can only generate only \\u201ca subset\\u201d. Furthermore, I\\u2019m very curious about the effectiveness of representing the graph using LSTM encoding, and then feeding it to the hypernetworks, since simple LSTM encoding is shown to be very powerful [1]. This at least, should act as a baseline. \\n\\n2.\\tCan the authors give more insights about why they can search on 9 operators within less than 1 GPU day? I mean that for example ENAS, can only support 5 operators due to GPU memory limitation (on single GPU card). Do the authors use more than one GPU to support the search process? \\nFinally, given the literature of NAS is suffering from the issue of reproduction, I do hope the authors could release their codes and detailed pipelines. \\n\\n[1] Luo, Renqian, et al. \\\"Neural architecture optimization.\\\" NIPS (2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks\", \"comment\": \"The top-5 is 91.3\\nWe will also update the paper.\", \"edit\": \"Corrected top-5 value.\"}",
"{\"title\": \"Review 1 for \\\"Graph HyperNetworks for Neural Architecture Search\\\"\", \"review\": \"This paper proposes to accelerate architecture search by replacing the expensive inner loop (wherein candidate architectures are trained to completion) with a HyperNetwork which predicts the weights of candidate architectures, as in SMASH. Contrary to SMASH, this work employs a Graph neural network to allow for the use of any feedforward architecture, enabling fast architecture search through parameter prediction using highly performant search spaces. The authors test their system and show that performance using Graph HyperNet-generated weights correlates with performance when trained normally. The authors benchmark their method against competing approaches (\\\"traditional\\\" NAS techniques which incur the full expense of the inner loop, and one-shot techniques which learn a large model then select architectures by searching for paths in said model) and show competitive performance.\\n\\nThis is a solid technical contribution with a well-designed set of experiments. While the novelty is not especially high, the paper does a good job of synthesizing existing tools and achieves reasonably strong results with much less compute, making for a strong entry into the growing table of fast architecture search methods. I argue in favor of acceptance.\", \"notes\": \"-Whereas SMASH is limited to architectures which can be described with its proposed encoding scheme, GHNs only requires that the architecture be represented as a graph (which, to my knowledge, means it can handle any feedforward architecture). \\n\\n-Section 4.2: It's not entirely clear how this setup allows for variable sized kernels or variable #channels. Is the output of H simply as large as the largest allowable parameter tensor, and sliced as necessary? A snippet of code might be more illuminating here than a set of equations. Additionally (I may have missed this in the text) is the #channels in each node held fixed with a predfined pattern, or also searched for? Are the channels for each node within a block allowed to vary relative to one another?\\n\\n-Do you sample a new, random architecture at every SGD step during training of the GHN?\\n\\n-I have no expertise in graph neural networks, and I cannot judge the efficacy of this scheme wrt other GNN techniques, nor can I judge the forward-backward message passing scheme of section 4.4. If another reviewer has expertise in this area and can provide an evaluation that would be great.\\n \\n-GPU-days is an okay metric, but it's also problematic, since it will of course depend on the choice of GPU (e.g. you can achieve a 10x speedup just from switching from a 600-series to a V100! How does using 4 GPUS for 1 hour compare to 1 GPU for 4 hours? How does this change if you have more CPU power and can load data faster? What if you're using a DL framework which is faster than your competitor's?) Given that the difference here is an order of magnitude, I don't think it matters, but if authors begin to optimize for GPU-milliseconds then it will need to be better standardized.\\n \\n-Further empirical evidence showing the correlation between approximate performance and true performance is also strong. 
I very much like that this study has been run for a method based on finding paths in a larger model (ENAS) and shows that ENAS' performance does indeed correlate with true performance, *but* not perfectly, something which (if I recall correctly) is not addressed in the original paper.\\n \\n-It is worth noting that for ImageNet-Mobile and CIFAR-10 they perform on par with the top methods but tend to use more parameters. \\n\\n-I like figures 3 and 4, the comparisons against MSDNet and random networks as a function of op budget is good to see.\\n\\n-Table 4 shows that the correlation is weaker (regardless of method) for the top architectures, which I don't find surprising as I would expect the variation in performance amongst top architectures to be lower. It would be interesting to also see what the range of error rates are; I would expect that the correlation is higher when the range of error rates across the population of architectures is large, as it is easier to distinguish very bad architectures from very good architectures. Distinguishing among a set of good-to-very-good architectures is likely to be more difficult.\\n\\n-For Section 5.3, I found the choice to use unseen architectures a little bit confusing. I think that even for this study, there's no reason to use a held-out set, as we seek to scrutinize the ability of the system to approximate performance even with architectures it *does* see during training. \\n\\n-How much does the accuracy drop when using GHN weights? I would like to see a plot showing true accuracy vs. accuracy with GHN weights for the random-100 networks, as using approximations like this typically results in the approximated weights being substantially worse. I am curious to see just how much of a drop there is.\\n\\n-Section 5.4: it's interesting that performance is stronger when the GHN only sees a few (7) nodes during training, even though it sees 17 nodes during testing. I would expect that the best performance is attained with training-testing parity. Again, as I do not have any expertise in graph neural networks, I'm not sure if this is common (to train on smaller graphs and generalize to larger ones), so if the authors or another reviewer would like to comment and further illuminate this behavior, that would be helpful.\", \"some_typos\": \"\", \"abstract\": \"\\\"prematured\\\" should be \\\"premature\\\"\\n\\nIntroducton, last paragraph: \\\"CNN networks.\\\" CNN already stands for Convolutional Neural Network.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
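The review above explicitly asks for a code snippet illustrating how a single hypernetwork output could serve variable kernel sizes and channel counts. One plausible reading of such a scheme (a guess for illustration only, not the paper's confirmed implementation) is to emit a tensor shaped like the largest allowable kernel and slice it per layer:

```python
import torch

def slice_weights(full_weight, c_out, c_in, k):
    # full_weight: (C_OUT_MAX, C_IN_MAX, K_MAX, K_MAX), emitted once by the
    # hypernetwork; smaller layers take a contiguous slice of it.
    return full_weight[:c_out, :c_in, :k, :k].contiguous()

full = torch.randn(128, 128, 5, 5)  # hypothetical maximum shape
w = slice_weights(full, 64, 32, 3)  # e.g. a 3x3 conv with fewer channels
```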
"{\"comment\": \"Thanks for your reply.\\nMay I ask the top-5 accuracy of \\\"GHN Top-Best, 1K\\\" in Table 3?\", \"title\": \"Thanks for your reply.\"}",
"{\"title\": \"We are happy to answer any more questions!\", \"comment\": \"Hi,\\n\\n1. We will update the paper with the specific number of FLOPS.\\n2. 1x1 convolution means \\\"ReLU-1x1Conv-BN\\\"\\n\\nThanks again!\"}",
"{\"comment\": \"Thanks for your reply. I still have two questions :)\\n1. Table 3 misses the FLOPs for each method. Since we care about the computation cost on ImageNet, most previous methods show the FLOPs. I understand that the mobile set is < 600M FLOPs, but more precious FLOPs can always be helpful.\\n2. Does \\\"1x1 convolution\\\" mean \\\"ReLU-1x1Conv-BN (3 layers)\\\"? Or a stack of two 1x1 Conv like the separable conv? \\n\\nBest Regards,\", \"title\": \"Most of my concerns are addressed.\"}",
"{\"title\": \"Thank you for your interest!\", \"comment\": \"(1)\\nFor Table 2, the process takes 6 hours (GHN training) + 4 hours (15 sec/model evaluating) + 10 hours (retraining top 10 to select top 1). Note that 4 hours is an overestimate, as the code is not heavily optimized.\\nFor Table 1, there is no retraining phase.\\n\\n(2)\\nFollowing (Liu et al., 2018c; Pham et al., 2018) , the first conv actually has 3x the number of channels F. So (F=32) means 96(first conv)-32(6 cells)-64(6 cells)-128(6 cells). \\n\\n(3)\\nThe hyperparameters for CIFAR-10 and ImageNet are chosen to match Liu et al., (2018c). One difference is the drop-path probability (0.4 vs 0.3). This was chosen ad hoc in earlier experiments and we did not observe a difference in results. \\nFor anytime, the hyperparameters are identical to Huang et al. (2018)\\nOverall, we did not perform a grid search over hyperparameters. So it is possible that a grid search would improve our results. \\n\\nPerhaps the largest difference is that we accelerate training in a distributed fashion. However, our experiments showed negligible differences in accuracy compared to single GPU training.\\n\\n(4)\\nWe cannot say for certain at this moment, but we will consider releasing code after acceptance. \\n\\nThanks again for your interest! Please let us know if any answers are unclear or if there are additional questions.\"}",
"{\"comment\": \"This is a nice work. I have a few questions about the experiments.\\n(1) In Table 2, the search cost includes the training time and evaluation time on 1K models. Would you mind to also let us know the separate time of these two procedures?\\n(2) In Table 1 and Table 2, does (F=32) mean that the channels in the CIFAR architecture are 32(first conv)-32(6 cells)-64(6 cells)-128(6 cells)?\\n(3) In Sec.7.3, the hyper-parameters for optimization algorithms are different with some compared algorithms. These differences may lead a higher (or lower) accuracy. Would it be a little bit unfair?\\n(4) Do you plan to release the codes? \\nThanks again, this is an interesting paper!\", \"title\": \"Interesting Work!\"}"
]
} |
|
SyNbRj09Y7 | Visual Imitation Learning with Recurrent Siamese Networks | [
"Glen Berseth",
"Christopher J. Pal"
] | People are incredibly skilled at imitating others by simply observing them. They achieve this even in the presence of significant morphological differences and capabilities. Further, people are able to do this from raw perceptions of the actions of others, without direct access to the abstracted demonstration actions and with only partial state information. People therefore solve a difficult problem of understanding the salient features of both observations of others and the relationship to their own state when learning to imitate specific tasks.
However, we can attempt to reproduce a similar demonstration via trial and error, and through this gain more understanding of the task space.
To reproduce this ability an agent would need to both learn how to recognize the differences between itself and some demonstration and at the same time learn to minimize the distance between its own performance and that of the demonstration.
In this paper we propose an approach using only visual information to learn a distance metric between agent behaviour and a given video demonstration.
We train an RNN-based siamese model to compute distances in space and time between motion clips while training an RL policy to minimize this distance.
Furthermore, we examine a particularly challenging form of this problem where the agent must learn an imitation based task given a single demonstration.
We demonstrate our approach in the setting of deep learning based control for physical simulation of humanoid walking in both 2D with $10$ degrees of freedom (DoF) and 3D with $38$ DoF. | [
"Reinforcement Learning",
"Imitation Learning",
"Deep Learning"
] | https://openreview.net/pdf?id=SyNbRj09Y7 | https://openreview.net/forum?id=SyNbRj09Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJgM7kYZeN",
"Hyg-Mg6VJE",
"SJx23ymL0m",
"SJlSXJmUAX",
"H1xY9HL7Cm",
"BkeohNIQ0m",
"SyeuNZLmRQ",
"r1erAUxeaQ",
"rylk79B52Q",
"BJg5LiSD27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544814361573,
1543979016725,
1543020467700,
1543020316696,
1542837648526,
1542837427466,
1542836528499,
1541568205266,
1541196310535,
1541000018177
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper870/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/Authors"
],
[
"ICLR.cc/2019/Conference/Paper870/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper870/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper870/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an approach for imitation learning from video data. The problem is important and the contribution is timely. The reviewers brought up several concerns regarding the clarity of the paper and the lack of sufficient comparisons. The authors have improved the paper significantly, adding several new comparisons and improving the presentation. However, concerns still remain regarding the description of the method and the presentation of the results. Hence, the reviewers agree that the paper does not meet the bar for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}",
"{\"title\": \"Sorry for the confusion\", \"comment\": \"My apologies about the ambiguity over RSI and EESP. Reference State Initialization (RSI) is a term from Peng et al 2018. Each simulated tasks has a motion capture clip that is used in the reward function for computed distances between the agent and desired motion where the timing for the motion capture clip is controlled by the environment. RSI makes further use of the motion capture in the environment by sampling initial poses for episodes randomly from this motion capture clip. The idea is to help the agent see more states that are close to the desired motion and tends to improve the overall motion quality. However, for our work we find that too much random pose initialization leads to a bad distance function estimator.\\n\\nEESP is new. If you have a sequence of length T one might consider cropping out a window of this sequence for training. We find that randomly starting the cropping window by choosing a w_s (window start) from 0, ... , T with equal probability helps, as does cropping the end of the sequence w_e (window end) in w_s + 1, ..., T. However, this randomization will cut off the beginnings of sequences too often. It is important that the distance metric be accurate at the beginning of a the episode. To assist in this we perform EESP where the probability of choosing w_s from 0, ..., T decays linearly with the probability of w_s = (T - i)/sum(0, ..., T). This same decaying probability is applied to the end of the crop window as well, prob w_e = ((T-w_s)-i)/sum(0, ..., T-w_s). This increases the probability of starting the cropping window earlier and increases the probability of shorter windows which helps overall distance metric accuracy for earlier parts of episodes.\"}",
"{\"title\": \"Paper updated\", \"comment\": \"Thank you for your patience. An updated version of the paper has been posted.\"}",
"{\"title\": \"New paper revision\", \"comment\": \"As requested, we have added numerous additional experiments which we outline below:\\n\\nWe have added two additional types of baseline comparisons. One comparing our method to GAIL and a VAE, and another comparing our method to a non-recurrent version that is similar to TCN. Please see figure 4a of the revised manuscript.\", \"based_on_these_additional_experiments_we_observe\": \"that training a VAE to learn an encoding to compute distances using an Euclidean norm is not effective.\\nPolicies trained with GAIL often stand still or are jerky. Standing still is within the distribution of example imitation data.\", \"examining_the_new_figure_4b\": \"We find that our method that takes advantage of temporal structure works well. Our method does not assume time alignment like many current TCN like methods and makes use of temporal structure that many current GAIL-based methods do not.\\nWe also compare our method to the default manual reward function. We consistently find that both TCN and our method learn faster compared to learning with a manual reward function. We believe this is an indication that learned reward functions may provide a more dense reward landscape compared to the manual reward that can appear more sparse after the agent diverges from the desired behaviour.\\nWe conduct an additional ablation analysis in figure 3c to compare the effects of particular methods used to assist in training the recurrent siamese network.\\nWe find it very helpful to reduce Reference State Initialization (RSI). If more episodes start in the same state it increases the temporal alignment of training batches for the RNN.\\nWe believe it is very important that the distance metric be most accurate for the earlier states in an episode so we use EESP (Early Episode Sequence Priority). Meaning we give the chances of cropping the window used for RNN training batches closer to the beginning of the episode. Also, we give higher probability to shorter windows. As the agent gets better the average length of episodes increases and so to will the average size of the cropped window.\\nLast, we tried pretraining the distance function. This leads to mixed results. Often pre training overfits the initial data collected leading to poor early RL training. However, in the long run pretraining does appear to improve over its non pretrained version.\\nWe have also added experiments learning more tasks, including running and back/front flipping. We remark that:\\nThe quality of the policy for running is reasonable (figure 6)\\nWhile the quality of the flipping is not as high as other methods it is important to remark that here we provide neither motion phase information to the policy nor any of the many \\u201chints\\u201d that have been used to engineer other humanoid simulation RL techniques.\\nStill we are able to see some interesting behaviour that attempts to reproduce safer motions that match the motion timing -- in other words, our methods appear to learn in a ways closer to how real humans learn.\\n\\nThank you for your suggestions we have made numerous updates and added extensive imagery to illustrate these additions, as such the paper may take a long time to download. The paper is one page longer to accommodate the requested additional experimental work; however, we will work to compress the paper further should it be accepted. 
Finally, we also provide a new video at the link below where we have added new results for the additional humanoid3d tasks:\\n\\nThanks for the various pointers to related work. We have added and discussed a number of them in the revised manuscript. Please note however that a number of these papers appeared after our paper was submitted.\", \"new_video_of_results_can_be_found_here\": \"https://youtu.be/KGrDedTfclY\"}",
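Schematically, the learned reward discussed in this rebuttal amounts to negating a distance between recurrent siamese embeddings of the demonstration and the agent's behaviour. A toy sketch (the encoder outputs `f_demo` / `f_agent` are stand-ins; the paper's actual distance may differ):

```python
import numpy as np

def imitation_reward(f_demo, f_agent):
    # Reward is the negated embedding distance: closer to the demonstration
    # embedding means higher reward for the RL policy.
    return -np.linalg.norm(f_demo - f_agent)
```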
"{\"title\": \"Specific Comments\", \"comment\": \"re: meaning of \\\"desired behaviour\\\"\\n\\nWe are trying to get away from saying the video, movie or motion. We are really trying to learn something more abstract here. We are using videos in this case, we will be more clear in the paper but we are trying to highlight the temporal complexity of imitation.\", \"re\": \"The RL simulation environment is it made in-house, based on bullet or something else?\\n\\nIt is based on Bullet and is made in-house and will be released with this work.\"}",
"{\"title\": \"Sqecific comments\", \"comment\": \"re: reward \\\"normalization\\\"\\n\\nGood question. I believe this has to do with the initial outputs from the siamese network. When training starts the values can be between -50 and 50 which is rather large. The RL method constantly updates reward normalization statistics so the initial high variance data reduces the significance of better distance metric values produced later on by scaling them to very small numbers...\", \"re\": \"training details.\\nWe are editing these now.\"}",
"{\"title\": \"Comments and baselines\", \"comment\": \"We would like to thank the reviewers for their great feedback.\\n\\nThe motivation for our work is two-fold. We understand that the dense pose or robot state space is a poor space to compare distances. We instead believe that a more appearance based distance function based on video will allow us to learn a better distance function and therefore policy. We focus on sequence-based distances as we believe that imitation often involves an inherent temporal structure that is not captured by current versions of GAIL. The extension that video-based methods may extend the imitation capabilities of robots even further is also a desired consequence.\\n\\nCurrently, we use rendered video data from the simulation. This gives us the possibility to continuously generate more video data. However, we are interested in extending this work to taking video input from other sources, for example, youtube or possibly the kinetics or NTURGB-D datasets. We leave this as exciting future work.\\n\\nWe have included additional motion tasks for imitation. These tasks include running backflipping and front flipping for the 3D biped. While the resulting policies are not of remarkable quality I would like to note that compared to prior methods published at SIGGRAPH we don\\u2019t provide motion phase information to the policy and our reward function is pure imitation and does not contain the many \\u201chints\\u201d used in the DeepMimic/SVF/OpenAIGym environments for humanoid controllers.\", \"baselines\": \"We have processed additional comparisons to other baselines. These include a version of GAIL, non-recurrent vs recurrent examples and TCN examples. Most of these benchmarked on the 2d biped walking example. We are also working on an ablation study. We have also performed new comparisons to a multi-modal version of the recurrent method. Where we learn a distance function between the agents pose and the imitation video. This will also be included in the analysis.\", \"related_work_and_details\": \"Thank you very much for the additional related papers. \\n\\nWe are currently editing the paper, this will all be included in an updated version of the paper in a few days. This updated version will also give many more details on the training process. In particular details on how the positive and negative examples are generated. In short we use a sequence-based version of a TCN like loss combine with class-by-class loss for the 3d humanoid where we have examples motions from other classes (running, backflipping and frontflipping).\\n\\nWe also thank the reviewers for their editing comments.\"}",
"{\"title\": \"Issues with significance of results\", \"review\": \"This paper proposes an imitation learning method solely from video demonstrations by learning recurrent image-based distance model and in conjunction using RL to track that distance.\", \"clarity\": \"The paper writing is mostly clear. The motivation for using videos as a demonstration source could be more clearly stated. One reason is because it would pave the way to learn from real-world video demonstrations. Another reason is that robot's state space is an ill-suited space to be comparing distances over and image space is more suitable. Choosing one would help the readers identify the paper's motivation and contribution.\", \"originality\": \"The individual parts of this work (siamese networks, inverse RL, learning distance functions for IRL, tracking from video) have all been previously studied (which would be good to discuss in a relate work section), so nothing stands out as original, however the combination of existing ideas is well-chosen and sensible.\", \"significance\": \"There are a number of factors that limit the significance of this work.\\n\\nFirst, the demonstration videos come from synthetic rendered systems very similar the characters that imitate them, making it hard to evaluate whether this approach can be applied to imitation of real-world videos (and if this is not the goal, please state this explicitly in the paper). Some evaluation of robustness due to variation in the demonstration videos (character width, color, etc) could have been helpful to assure the reader this approach could scale to real-world videos.\\n\\nSecond, only two demonstrations were showcased - 2D walking and 3D walking. It's hard to judge how this method (especially using RNNs to handle phase mismatch) would work for other motions.\\n\\nThird, the evaluation to baselines is not adequate. Authors mention that GAIL does not work well, but hypothesize it may be due to not having a recurrent architecture. This really needs to be evaluated. A possibility is to set up a 2x2 matrix of tests between [state space, image space] condition and [recurrent, not recurrent] model. Would state space + not recurrent reduce to GAIL?\\n\\nFourth and most major to me is that looking at the videos the method doesn't actually work very well qualitatively, unless I'm misunderstanding the supplementary video. The tracking of 2D human does not match the style of the demonstration motion, and matches even less in 3D case. Even if other issues were to be addressed, this would still be a serious issue to me and I would encourage authors to investigate the reasons for this when attempting to improve their work.\\n\\nOverall, I do not think the results as presented in the submission are up to the standards for an ICLR publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting solution to a challenging problem, but lacking in quantitative results\", \"review\": \"Summary: This paper aims to imitate, via Imitation Learning, the actions of a humanoid agent given only video demonstrations of the desired task, including walking, running, back-flipping, and front-flipping. Since the algorithm does not have direct access to the underlying actions or rewards, the agent aims to learn an embedding space over instances with the hope that distance in this embedding space corresponds to reward. Deep RL is used to optimize a policy to maximize cumulative reward in an effort to reproduce the behavior of the expert.\", \"high_level_comments\": [\"My biggest concern with this paper is the lack of a baseline example that we can use to evaluate performance. The walking task is interesting, but the lack of a means by which we can evaluate a comparison between different approaches makes it very difficult to optimize. This makes evaluation of quality and significance rather difficult. A number of other questions I have stem from this concern:\", \"= The paper is missing a comparison between the recurrent Siamese network and the non-recurrent Siamese network. The difficulty in comparing these approaches without a quantitative performance metric.\", \"= The authors also mention that they tried using GAIL to solve this problem, but do not show these results. Again, a success metric would be very helpful here.\", \"= Finally, a simpler task for which the reward is more easily specified may be a better test case for the quantitative results. Right now, the provided example of walking agents seems to only provide quantitative results.\", \"The authors need to be more clear about the structure of the training data and the procedure. As written, the structure of the triplet loss is particular ambiguous: the condition for positive/negative examples is not clearly specified.\", \"There are a number of decisions made in the paper that feel rather arbitrary or lack justification. In particular, the \\\"normalization\\\" scaling factor fits into this category. Some intuition or explanation for why this is necessary (or why this functional form should be preferred) would be helpful.\", \"A description of what the error bars represent in all of the plots is necessary.\"], \"more_minor_comments_and_questions\": [\"The choice of RL algorithm is not the purpose of this paper. Much of this section, and perhaps many of the training curves, are probably better suited to appear in the Appendix. Relatedly, why are training curves only shown for the 2D environment? If space was a concern, the appendix should probably contain these results.\", \"An additional agent that may be a useful comparison is one that is directly provided the actions. It might then be more clear how well. (Again, this would require a way to compare performance between different approaches.)\", \"How many demonstrations are there? At training vs testing?\", \"Where are the other demonstrations? The TSNE embedding plot mentions other tasks which do not appear in the rest of the paper. Did these demonstrations not work very well?\"], \"a_comment_on_quality\": \"Right now, the paper needs a fair bit of cleaning up. For instance, the word \\\"Rienforcement\\\" is misspelled in the abstract. There is also at least one hanging reference. Finally, a number of references need to be added. For example, when the authors introduce GAIL, they mention GANs and cite Goodfellow et al. 2014, but do not cite GAIL. 
There is also a lot of good research on Behavioral Cloning, and where it can go wrong, that the authors mention, but do not cite.\", \"conclusion\": \"At this point it is difficult to recommend this paper for acceptance, because it is very hard to evaluate performance of the technique. With a more concrete way of evaluating performance on a different task with a clearer reward function for comparison, the paper could be much stronger, because this would allow the authors to compare the techniques they propose to one another and to other algorithms (like GAIL).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, writing needs significant work and comparisons\", \"review\": \"\", \"brief_summary\": \"This work proposes a way to perform imitation learning from raw videos of behaviors, without the need for any special time-alignment or actions present. They are able to do this by using a recurrent siamese network architecture to learn a distance function, which can be used to provide rewards for learning behaviors, without the need for any explicit pose estimation. They demonstrate effectiveness on 2 different locomotion domains.\", \"overall_impression\": \"Overall, my impression from this paper is that the idea is to use a recurrent siamese network to learn distances which make sense in latent space and provide rewards for RL. This is able to learn interesting behaviors for 2 tasks. But I think the writing needs significant work for clarity and completeness, and there needs to be many more baseline comparisons.\", \"abstract_comments\": \"trail and error -> trial and error\", \"introduction_comments\": \"Alternative reasons why pose estimation won\\u2019t work is because for any manipulation tasks, you can\\u2019t just detect pose of the agent, you also have to detect pose of the objects which may be novel/different\\n\\nFew use image based inputs and none consider the importance of learning a distance function in time as well as space -> missed a few citations (eg imitation from observation (Liu, Gupta, et al))\\n\\nTherefore we learned an RNN-based distance function that can give reward for out of sync but similar behaviour -> could be good to emphasize difference from imitation from observation (Liu, Gupta, et al) and TCN (Semanet et al), since they both assume some sort of time alignment\\n\\nMissing related work section. There is a lot of related work at this point and it is crucial to add this in. Some things that come to mind beyond those already covered are:\\n1. Model-based Imitation Learning from State Trajectories\\n2. Reward Estimation via State Prediction\\n3. infoGAIL\\n4. Imitation from observation \\n5. SFV: Reinforcement Learning of Physical Skills from Videos\\n6. Universal planning networks\\n7. https://arxiv.org/abs/1808.00928\\n8. This might also be related to VICE (Fu, Singh et al), in that they also hope to learn distances but for goal images only.\\nIt seems like there is some discussion of this in Section 3.1, but it should be it\\u2019s own separate section.\", \"section_3_comments\": \"a new model can be learned to match this trajectory using some distance metric between the expert trajectories and trajectories produced by the policy \\u03c0 -> what does this mean. Can this be clarified?\\n\\nThe first part of Section 3 belongs in preliminaries. It is not a part of the approach. \\n\\nSection 3.2\\nEquations 9 and 10 are a bit unnecessary, take away from the main point\\n\\nWhat does distance from desired behaviour mean? This is not common terminology and should be clarified explicitly.\\n\\nEquation 11 is very confusing. The loss function is double defined. what exactly Is the margin \\\\rho (is it learned?) The exact rationale behind this objective, the relationship to standard siamese networks/triplet losses like TCN should be discussed carefully. This is potentially the most important part of the paper, it should be discussed in detail.Also is there a typo, should it be || f(si) - f(sn)|| if we want it to be distances? 
Also the role of trajectories is completely not discussed in equation 11.\\n\\nSection 3.3 \\nThe recurrent siamese architecture makes sense, but what the positive and negative examples are, what exactly the loss function is, needs to be defined clearly. Also if there are multiple demonstrations of a task, which distance do we use then?\\n\\nThe RL simulation environment is it made in-house, based on bullet or something else?\\n\\nData augmentation - how necessary is this for method success? Can an ablation be done to show the necessity of this?\\n\\nAlgorithm 1 has some typos \\n- > is missing in line 3\\n- Describe where reward r is coming from in line 10\\n\\nSection 4.1\\nWalking gate -> walking gait\\n\\nThere are no comparisons with any of the prior methods for performing this kind of thing. For example, using the pose estimation baseline etc. Using the non-recurrent version. Using TCN type of things. It\\u2019s not hard to run these and might help a lot, because right now there are no baseline comparisons\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
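For reference, the standard triplet/margin loss the review compares Equation 11 against (a la TCN) is usually written as below. This is shown only as the conventional form, not as the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def triplet_margin_loss(f_anchor, f_pos, f_neg, margin=1.0):
    # Pull the positive embedding closer than the negative by at least `margin`.
    d_pos = F.pairwise_distance(f_anchor, f_pos)
    d_neg = F.pairwise_distance(f_anchor, f_neg)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```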
]
} |
|
SJMZRsC9Y7 | A NON-LINEAR THEORY FOR SENTENCE EMBEDDING | [
"Hichem Mezaoui",
"Isar Nejadgholi"
] | This paper revisits the Random Walk model for sentence embedding in the context of non-extensive statistics. We propose a non-extensive algebra to compute the discourse vector. We argue that by doing so we are taking into account high non-linearity in the semantic space. Furthermore, we show that by considering a non-extensive algebra, the compounding effect of the vector length is mitigated. Overall, we show that the proposed model leads to good sentence embedding. We evaluate the embedding method on textual similarity tasks. | [
"sentence embedding",
"generative models"
] | https://openreview.net/pdf?id=SJMZRsC9Y7 | https://openreview.net/forum?id=SJMZRsC9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syxgbg34mV",
"SJg6oHJK1N",
"HkxvJlI1aX",
"rJgV3Kf_2Q",
"B1gXRA-u37"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548169208409,
1544250789017,
1541525471424,
1541052843816,
1541050058872
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper869/Authors"
],
[
"ICLR.cc/2019/Conference/Paper869/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper869/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper869/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper869/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for your helpful feedback\", \"comment\": \"We thank the reviewers for providing productive comments and critiques. We believe this input is very useful. We are working on improving the presentation of the paper as well as extending the non-extensive theory to several other applications. We will present our enhanced model in future opportunities.\"}",
"{\"metareview\": \"The paper is poorly written and below the bar of ICLR. The paper could be improved with better exposition and stronger experiment results (or clearer exposition of the experimental results.)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Insufficient contents and experiments. Not ready to be published.\", \"review\": \"Summary: this paper discussed an incremental improvement over the Random Walk based model for sentence embedding.\", \"conclusion\": \"this paper is not ready for publication, very poor written and well below the bar of ICLR-caliber papers.\", \"more\": \"This paper spent the majority of its content explaining background (those paragraphs were very poor written and difficult to read), and very briefly introduced their methodology with some mathematical derivations and equations, most of which can be put in the supplement instead of main context. The author didn't quite explain how the proposed method, such as why using non-extensive statistic in this context,\\n\\nThe experiment results aren't convincing and lack sufficient information for reproducibility.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper is not positioned well with respect to the literature. I am not sure what are its key contributions and how significant they are. The technical exposition also appears somewhat incoherent and not well-justified (see my specific comments below).\", \"review\": \"PAPER SUMMARY:\\n\\nThis paper introduces a non-extensive statistic random walk model to generate sentence embedding while accounting for \\nhigh non-linearity in the semantic space.\\n\\nNOVELTY & SIGNIFICANCE:\\n\\nI am not sure what the main focus of this paper is. It seems accounting for non-linearity in the semantic space while generating sentence embedding has already been achieved by existing LSTM models -- the goal seems to be more about interpretability and computational efficiency but the paper did not really discuss these in detail (more on this later).\\n\\nIn terms of the proposed solution, I am also not sure what is the significance of using non-extensive statistic in this context. In fact, the background section gave the impression that the non-linear form of q-exponential is the main reason to advocate this approach. But, if it is only about handling non-linearity, there are plenty of alternatives and it is important to point out exactly what advantages non-extensive statistic has over the existing literature (e.g., why is it more interpretable than LSTM). Please expand the respective background section to clarify this.\", \"technical_soundness\": \"There are parts of the technical exposition that appear confusing and somewhat incoherent. For instance, what exactly is this confounding effect of vector length & why do we need to address this issue if according to the Section 2.2, it has already been addressed in the same context?\\n\\nSection 2.2 seems to discuss this effect but the exposition is unclear to me. The authors start with an example and a bunch of assumptions that lead to a contradiction. \\n\\nIt is then concluded that the cause of this is due to the linearity assumption (what about the other assumptions?) in estimating the discourse vector. \\n\\nI do not really follow this reasoning and it would be good if the authors can elaborate more on this.\", \"clarity\": \"The paper seems to focus too much on technical details and does not give enough discussion on its positioning. The significance of the proposed solution with respect to the literature remains unclear.\", \"empirical_results\": \"I am not an expert in this field and cannot really judge the significance of the reported results. I do, however, have a few questions: in all benchmarks, are the algorithms tested on a different domain than the domain it was trained on? \\n\\nHave the authors compared the proposed sentence embedding framework with the LSTM literature mentioned in the introduction? I noticed there was a LSTM AVG in the comparison table.\\n\\nIs that the simple averaging scheme mentioned in the introduction when the authors discussed transferrable sentence embedding?\\n\\nIs there any reason for not comparing with RNN (Cho et al., 2014)? \\n\\nIn terms of the computation processing cost, how efficient is the proposed method (as compared to existing literature)?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper claims to introduce non linearity in the discourse vector framework defined by Arora et.al. While the motivations for the non extensive statistics seem interesting and warrant a thought, the experiments are severely lacking in providing any insight into the method.\", \"review\": \"This paper while presenting interesting ideas, is very poorly written. It seems as though the authors were in a rush to submit a manuscript and did not even bother with basic typesetting.\\nFirstly, the paper spends too much time motivating and re-introducing the model of Arora et.al. Note to the authors here, they cite the same paper from Arora et.al for 2017 twice. The first time the model they refer to was introduced by the paper \\\"RAND-WALK: A latent variable model approach to word embeddings\\\", this is probably what the authors mean by the 2016 reference?\\n\\nNow coming to the experiments, the results are presented in a table that is poorly formatted. The section partitions are not clearly delimited, making for a hard read. Even if we overcome that and look at the results, the presented numbers are incredibly confusing. On the STS 13 and 15 data sets, Ethayarajh 2018's numbers are much better at 66.1 and 79.0. Coming to STS14 Ethayarajh attain 78.4 while the proposed method achieves 78.1. If we discount this for the moment, and look at the results on STS12 where the proposed method achieves 71.4, this is the only data set where the proposed method does better than the other baselines.\\n\\nSo almost on 3 of the 4 datasets Ethayarajh 2018 does better. This makes me question what exactly is the proposed model improving?\\n\\nCoupled with the fact that there is no motivation to explain results or future work, this makes for a very poorly written paper that is very challenging to read.\\n\\nIt is very likely that there is some merit to the proposed methods that introduce non linearity, but these points simply get lost in the mediocre presentation.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1eZRiC9YX | Sufficient Conditions for Robustness to Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks | [
"Yarin Gal",
"Lewis Smith"
] | We prove, under two sufficient conditions, that idealised models can have no adversarial examples. We discuss which idealised models satisfy our conditions, and show that idealised Bayesian neural networks (BNNs) satisfy these. We continue by studying near-idealised BNNs using HMC inference, demonstrating the theoretical ideas in practice. We experiment with HMC on synthetic data derived from MNIST for which we know the ground-truth image density, showing that near-perfect epistemic uncertainty correlates to density under the image manifold, and that adversarial images lie off the manifold in our setting. This suggests why MC dropout, which can be seen as performing approximate inference, has been observed to be an effective defence against adversarial examples in practice; we highlight failure-cases of non-idealised BNNs relying on dropout, suggesting a new attack for dropout models and a new defence as well. Lastly, we demonstrate the defence on a cats-vs-dogs image classification task with a VGG13 variant. | [
"Bayesian deep learning",
"Bayesian neural networks",
"adversarial examples"
] | https://openreview.net/pdf?id=B1eZRiC9YX | https://openreview.net/forum?id=B1eZRiC9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgL41RixE",
"HJeW0wtXeN",
"rkeva90Y0X",
"H1xFIiTF0X",
"SkgFUoWvCX",
"rylZfdlvRQ",
"S1emCwxwCQ",
"SkgDcwewR7",
"rkl4RU1x0Q",
"ByxhiLkl07",
"Bkl-WjLkCX",
"BygefbgaaX",
"S1xHAlqnpm",
"HJl6wl9nT7",
"BJxd8NvS6X",
"BklG9a0GaX",
"rkgPrxEGpQ",
"SJgfWxoZTm",
"BklJx49-67",
"ryerzw45nX",
"SJlJrN8U37"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545490222368,
1544947657102,
1543264958855,
1543261009449,
1543080784668,
1543075849253,
1543075786999,
1543075726695,
1542612684286,
1542612644131,
1542576889138,
1542418695711,
1542394061486,
1542393956619,
1541923920361,
1541758345719,
1541713982949,
1541677050477,
1541673958670,
1541191436880,
1540936759344
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/Authors"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper867/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Summary of reviewers' points and rebuttal\", \"comment\": \"We summarise the reviewers' remaining points after our rebuttals for the convenience of new readers (together with some more comments):\\n\\n***\", \"r1_acknowledges_after_discussing_with_us\": \"* \\\"[The] paper is the first to provide theory for findings from theoretical papers, namely offering an explanation for why BNNs work well in practice (beyond the non-explanation that \\\"they're Bayesian\\\") and the link between epistemic uncertainty and adversarial examples. As I mentioned in my review, they are indeed addressing important questions posted by previous research in novel ways.\\\"\\n* \\\"After reading the other reviews and rebuttals, I see that perhaps the main contribution of the paper is not meant to be technical but rather to aid with future research.\\\"\\n* The reviewer still posits \\\"That being said, I am still not convinced by the applicability.\\\"\\n--- We summarise the main points on applicability we presented to the reviewer:\\n--- First, for the first time in the literature (as far as we know), we studied in detail a proposed mechanism to explain why BNNs have been observed to be empirically more robust to adversarial examples.\\n--- Second, for the first time in the literature, we proved (and gave empirical evidence in Fig 3) that a connection exists between epistemic uncertainty/input density and adversarial examples, a link which was only *hypothesised* previously. \\n--- Further, our proof highlighted sufficient conditions for robustness which resolved the inconsistency between (Nguyen et al., 2015; Li, 2018) and (Gilmer et al., 2018).\\n--- Apart from resolving an open dispute in the literature, our observations above also suggest future research directions towards tools which answer our conditions better. We believe that our observations set major milestones for the community to tackle.\\n--- Lastly, we proposed new synthetic datasets for the community to experiment with adversarial examples (where we can calculate ground truth image density, something which was only done in hand-wavey ways so far by looking at images by hand). We also demonstrated our ideas in ***real world imagenet-like tasks***, and in the process also exposed new attacks and defences on Bayesian techniques - see section 5.\\n\\n***\", \"r2_acknowledges_after_discussing_with_us\": \"* \\\"[Results] are novel to the best of my knowledge\\\"\\n* \\\"I do agree that the paper makes a meaningful contribution\\\" / \\\"the paper identifies concrete goals to pursue\\\", with the remaining criticism that \\\"I cannot judge the value of the paper based solely on the potential future research that will utilize the proposed framework\\\".\\n--- We would highlight that we give many examples of new and previously undiscussed insights (with many also highlighted through the rebuttal). These observations are overlooked in the literature, and they have important implications for the way we think about the problem.\\n\\n***\", \"r3_acknowledges_after_discussing_with_us\": [\"\\\"I didn\\u2019t read the appendix for my original review. 
Appendix C has several discussions about these conditions [which the reviewer got confused about], which in my opinion are interesting and important, and should be in the main content\\\"\", \"\\\"I would like to emphasize that the presentation of this paper needs to be improved.\\\" Eg \\\"definition of the \\\\delta ball\\\" and others\", \"--- we did not think some standard terms merited their own definition. Instead we put this in a footnote to clarify for non-technical readers.\"]}",
"{\"metareview\": \"This paper conducts a study of the adversarial robustness of Bayesian Neural Network models. The reviewers all agree that the paper presents an interesting direction, with sound theoretical backing. However, there are important concerns regarding the significance and clarity of the work. In particular, the paper would greatly benefit from more demonstrated empirical significance, and more polished definitions and theoretical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"An intriguing idea but the exact impact is unclear\"}",
"{\"title\": \"further reply\", \"comment\": \"On randomisation, sorry for not being clear. I didn't mean that randomised model s are less robust - here, we meant to refer to the phenomenon that randomised models can provide a false sense of security because they randomise the gradients, which makes life harder for adversarial image generation algorithms that use gradient based optimisation, but don't really get rid of the problem in the sense that the adversarial regions of data space remain. The reviewers characterisation of this as an evaluation problem is also a fair characterisation, in the sense that we have to resort to randomisation to evaluate integrals approximately.\\nThere are, indeed, other explanations of what dropout does that are potentially useful. Sensitivity to high frequency signals is a possible cause of adversarial behaviour in image models, and one can see dropout as a regularisation technique that reduces this. However, we do not think this would be sufficient- one can fairly easily conceive of situations in which a model that only fits low frequency functions would have adversarial examples (there is an interesting discussion of this in https://arxiv.org/abs/1806.08734 - see figure 2. for an example.). It could certainly have 'garbage' examples far from all data on which it was very confident.\\nIt is a reasonable point that we do have some assumptions on the data, but we think they are weaker than 'separable with large margins'. \\n\\nThis second point is interesting. This is related to the distinction in the paper between epistemic and aleatoric uncertainty. This situation corresponds to high aleatoric uncertainty (i.e the data is intrinsically noisy), or possibly to very fine distinctions in the data. This actually exists in some datasets - for example, on mnist, it is very easy to change a 7 into a 9 with a relatively small perturbation. However, this is of course not an 'adversarial' perturbation - we have really changed the data class. In this sense, eta does depend on the data - it must be 'small enough' to not change the class. However, we do not think that the property of eta is used anywhere in the argument. (i.e if it is smaller than delta it doesn't change the class prediction, and if it is larger then the uncertainty will be high). However, clearly this could be explained more clearly, and we will work on this in future drafts\\nThe distinction between epistemic and aleatoric uncertainty is discussed in the paper in more detail in section 3 - in particular see figure 1. The epistemic uncertainty should be low in the situation described here, where we are really changing the class in question\"}",
"{\"title\": \"Thanks for the reply\", \"comment\": \"\\u201cWe would argue that the connection to randomisation would in fact be bad.\\u201d Not sure if I understand this part correctly. I would argue that this is actually an evaluation problem. Overall, I don\\u2019t see any evidence that randomized models are less robust. More importantly, I was raising this point to evaluate the contribution of this paper to the fundamental understanding. The \\u201crandomisation\\u201d mentioned here should be taken as a general term. If the authors think the importance of performing approximate inference is the main conceptual contribution of the paper, then it should be emphasized and reflected in the paper. I don\\u2019t see it from the current version of the paper.\\n\\nI mentioned the \\u2018high frequency signals\\u2019 to provide an alternative explanation for why dropout would help in defence (dropout as a way in approximately removing high frequent signals). So to me, \\u201cdropout as performing approximate inference\\u201d proposed in this paper is not the unique possibility. What makes this explanation/hypothesis significant (more significantly compared to other alternatives)? \\n\\nArguably, the conditions in this paper also depend on the data distribution. The first condition that the training set can represent the real data manifold is basically an implicit assumption (if ). Also, if the manifolds of two different classes are close to each other, then a \\u201cno-adversarial-example\\u201d model would have to make lots of low-output-probability predictions, even for the real data. (Imagine x1 and x2 close to each other, both real data but with different true class labels.) This phenomenon traces back to the \\\\eta in definition 1. Again, definition 1 is not rigorous, as it is missing a rigorous definition for \\\\eta here (which should depend on the data distribution).\"}",
"{\"title\": \"continued response\", \"comment\": \"Let us also briefly answer the reviewers second question\\n1.The reviewer comments that we should make the definitions \\u2018small\\u2019 and \\u2018high\\u2019 precise. These are indeed not particulary well defined, but we do not really use the property that eta is small or the probability is high in the proof - this is more to accord with the \\u2018common sense\\u2019 definition of an adversarial example, where small means \\u2018small enough i consider it adversarial\\u2019. In the general case, we think it would be difficult to define this in a sensible way (would the argument substantially change if we said \\u2018we consider examples with eta < 0.3243 adversarial\\u2019?\\n\\n2. In this case, P(x,y) is the joint distribution of the data (we assume iid sampling), so its support is (some subset of) the space X x Y. The transformations are transformations in the input space, so they have domain/codomain t : X -> X. We thought that this was fairly obvious from common convention and the context in which t(x) is used, but we could clarify in the future. \\nSorry for this inconsistency. However, both of X,Y and D for the dataset is pretty standard notation. \\n\\n4. Yes - this is the predictive entropy of a model, so it is conditioned on the dataset\\n5. Does the reviewer mean the definition of the Euclidean ball? This is indeed clarified in a footnote to avoid ambiguity in lemma one, but a 'delta ball' is just a way of saying 'a euclidean ball of radius delta', so we did not think this merited its own definition.\\n6. We will bear this feedback in mind for future drafts\"}",
"{\"title\": \"response\", \"comment\": [\"We thank the reviewer for their feedback and for taking part in an active discussion with us, and are glad that they found the discussion in the appendices helpful. This feedback on the structure and presentation of the paper has been fairly consistent among the reviewers, and we would appreciate the reviewer\\u2019s specific feedback on where they think they got confused with the contribution of the paper being otherwise than stated by our title and abstract (and introduction). We would strive to update and clarify this before the end of the revision period.\", \"Answering some of the reviewer\\u2019s questions and comments:\", \"We would argue that the connection to randomisation would in fact be bad. As mentioned by reviewer 1, \\u2018mere\\u2019 randomisation of networks can provide a false sense of security. We would argue that the important part of the randomisation is performing approximate inference. Randomised models and Bayesian models are not equivalent - if we were able to evaluate the integral over the posterior in closed form as we can for a GP, then presumably this would be far better than a monte carlo approximation in terms of robustness to adversarial examples.\", \"We are not sure what the reviewer is referring to by the \\u2018high frequency signals\\u2019 alternative. We assume the reviewer is referring to the theory that adversarial examples are due to models fitting high frequency noise in the dataset. This may very well also be true (it is likely that adversarial examples do not have a single cause).\", \"The example that separable data with high margins would also not have adversarial examples is probably accurate, but this is a sufficient condition on the data, which we would argue is less interesting that sufficient conditions on the kind of model. After all, we do not have the luxury of choosing the data that we work with.\", \"Lastly, we are glad that the reviewer found the appendices helpful, and we will bear this in mind in future work. It is unfortunate that these could not all be incorporated into the main body of the paper under the page constraint without making other parts less clear.\"]}",
"{\"title\": \"response\", \"comment\": \"We thank the reviewer for taking part in an active discussion with us. We will attempt to answer the reviewers new questions.\\n\\n\\u201cAnother point I wanted to bring up (also brought up by other reviewers) is that real-life adversarial perturbations may be quite small but still larger than the distance for which Bayesian neural networks will assign low value probability. Is there a reason to believe that in practice this distance is covered?\\u201d\\n\\n* We are unclear exactly what the reviewer meant in this question - if the reviewer meant the BNN assigning a low *confidence* (rather than low probability, which can still be of high confidence for a certain class), then we actually demonstrate this empirically both in MNIST experiments with HMC (near idealised inference) as well as real-world imagenet-like cats vs dogs classification (last appendix). In both we see that the BNN forces the perturbations to be much larger than otherwise (deterministic model), with the uncertainty being statistically significant to identify often when an image has been manipulated. From the theoretical \\u201cguarantee\\u201d perspective, this is one of our points we highlight as of interest for future research in S6 - where we exactly identify this as a question the community should concentrate on rather than the question of coverage.\"}",
"{\"title\": \"response\", \"comment\": \"We thank the reviewer for taking part in an active discussion with us and for modifying their initial score.\\n\\nWe do see the theorem as, in the reviewers words, more of a perceptual rather than technical contribution (as our title states, **the contribution of the paper is placing forward sufficient conditions** for idealised models to have no adversarial examples). The reviewer raises some valid criticism about whether this was communicated clearly in the main body of the paper, and we would appreciate the reviewer\\u2019s feedback on where they think they got confused with the contribution of the paper being otherwise. We would strive to update this before the end of the revision period.\", \"in_response_to_some_of_the_specific_questions_in_the_last_reply_from_the_reviewer\": [\"\\u2018The theorem is not directly connected to the accuracy of the model\\u2019 - this depends on the definition of adversarial examples that we use. We would argue that adversarial examples are an distinct issue to model accuracy - what would be an \\u2018adversarial example\\u2019 for an input the model already classifies incorrectly? Since adversarial examples are only really suprising for models with high expected accuracy (otherwise normal data is frequently misclassified) we freely assume that the model is perfectly accurate, at least on data sampled from the true distr. The purpose of the transformation set is to allow meaningful generalisation without simply requiring infinite amounts of data. If you allow infinite data, for example, then we could simply use nearest neighbours as our idealised classifier. The introduction of a transformation set is an attempt to introduce a class of models whose generalisation is not totally trivial, though as many of the reviewers have pointed out the condtition is still very strong.\", \"Indeed, for a neural network the value of delta can be very small. As mentioned above, in our experiments with HMC inference on NNs, we see evidence of this being the case.\", \"It is always fair to be sceptical of results on synthetic or partially synthetic data, but they also allow us to investigate questions we are otherwise unable to - in particular it is impossible to objectively evaluate the likelihood of a datapoint in a real dataset (without making equally unrealistic assumptions).\", \"Lastly, while it is true that proposed adversarial defences are often not sufficiently carefully evaluated, we would defend our evaluation here: we do not claim to defeat the attack, just that our procedure increases the size of perturbation needed to fool the model, which I believe is in agreement with C&Ws paper. In that paper\\u2019s conclusion, the authors say that the dropout based \\u2018defense\\u2019, while bypassable in isolation, does increase the required distortion, which is what we mean by increased robustness, and is what we oberved in our experiments. We will clarify this in our paper.\"]}",
"{\"title\": \"Con't\", \"comment\": \"Lastly, I would like to emphasize that the presentation of this paper needs to be improved.\\n1. For example, the \\u201csmall\\u201d \\\\eta and \\u201chigh\\u201d probability should be rigorous to make definition 1 a definition.\\n2. In definition 2, what is the support of p, and what are the domain and codomain of T? Is the set of all the such transformations \\\\mathcal{T} well defined?\\n3. In the paragraph after definition 3, \\u201c(X,Y)\\u201d is used to represent the training set. Later in the paragraph after definition 4, \\u201cD\\u201d is used to represent the training set. \\n4. In the paragraph after definition 4, is the H(p) here conditioned on the training set?\\n5. The definition of the \\\\delta ball is actually in a lemma, Lemma 1.\\n6. For me, Appendix B,C are more important than the synthesis experiments in section 5. Also the verbose explanation before theorem 1 could also be improved.\\n\\nIn summary, I think it is interesting to connect Bayesian networks and robustness in this paper. However, the investigation in the paper is a bit shallow, and it is not well justified how significant this contribution is neither in fundamental understanding or practical guidance. I would be happy to discuss further about the concerns I have above.\", \"question\": \"In section D, is it assumed that BNN has the property that the uncertainty increases fast away from the training set?\"}",
"{\"title\": \"Sorry for the late reply, and thanks for the patient clarification.\", \"comment\": \"I am sorry for the confusion in my review. I indeed misunderstood the \\\\eta in definition 1, where originally I thought it was decided by Lemma 1, and the role of the invariance under all transformations. Thanks for the clarification.\\n\\nI think my evaluation still holds. To avoid other possible misunderstandings, let me summarize the theoretical result of the paper here: The main result of the paper is Theorem 1, which basically says under the following 3 conditions, classifier f has no adversarial examples.\\n(1) The training set X can present the data manifold (under all the transformations that maps);\\n(2) f has 0 training error;\\n(3) f has low output probability on D\\u2019.\\nBased on Definition 1, the definition of adversarial examples, this theorem directly follows. (this point is also mentioned by Reviewer #2.)\\nI think it is fair to say that the technical contribution is weak. Therefore, a key point of the review would be the contribution in fundamental understanding, and the practical guidance of the theory.\", \"fundamental_understanding\": \"1. I think it is interesting to connect bayesian networks with robustness. Introducing randomness has been shown to be helpful, and the \\u201cepistemic uncertainty\\u201d could be one of the perspectives. \\n2. However, theorem 1 actually gives pretty strong assumptions. For example, the training set should be able to cover the data manifold to guarantee the generalization on clean data. Arguably it is easy to provide many \\u201csufficient\\u201d conditions, and what differs them is how natural/weak these conditions are (Of course, sufficient and necessary conditions would be ideal). This is essential for the meaningfulness/importance of a theorem. The contribution of the current result, in my opinion, is solely connecting bayesian networks with the randomness in achieving robustness, which seems to be lower than the ICLR bar.\\n3. It is argued in the paper that this theory provides an explanation for the dropout inference. But on the other hand, as mentioned above, its contribution in fundamental understanding seems incremental. It is also not clear why this explanation is the correct one, or better than other alternatives, for example like high frequency signals. (The high frequency view may arguably connect to the bayesian view here too, which in fact hurt the novelty of the results in this paper.)\\n4. In the rebuttal, \\u201cour conditions provide a useful framework to set directions for future work. We would point out that as far as we know there are no other attempts to formalise what an idealised model without adversarial examples would be like,\\u2026 .\\u201d In general I don\\u2019t see why this framework is useful. (See the comments above.) It is easy to provide sufficient conditions, e.g. separable data with large margins. \\n5. I didn\\u2019t read the appendix for my original review. Appendix C has several discussions about these conditions, which in my opinion are interesting and important, and should be in the main content. I think the presentation of the paper needs to be significantly improved.\", \"practical_guidance\": \"Given the strong conditions in theorem 1, it is hard to see how it can guide practical algorithms. Results in section 5 is only on synthesis data, while in section G, the idea of ensembling is not really new in the literature. 
It would have justified this point much more solidly, if the theorem could hint a new algorithm that empirically improve the results on some popular benchmarks . The essential difficulty in following the conditions here is how to achieve a model that would decreases its output probability slowly, while increases its uncertainty fast. It is also discussed in Appendix B (Again I think this part is also interesting and should be in the main content), but deferred as a hypothesis about Lipschitz constant without definitive answer. In fact, I think the modulus of continuity is more relevant here. But again, solely by itself such conceptual extension is not really useful.\"}",
"{\"title\": \"response to reviewers\", \"comment\": \"I want to thank the authors for responding to each of my points, and I wanted to respond to their clarifications.\\n\\nI agree with the authors that their paper is the first to provide theory for findings from theoretical papers, namely offering an explanation for why BNNs work well in practice (beyond the non-explanation that \\\"they're Bayesian\\\") and the link between epistemic uncertainty and adversarial examples. As I mentioned in my review, they are indeed addressing important questions posted by previous research in novel ways. \\n\\nAppendix C gives an argument for why low test error implies high data coverage (and thus why the data invariance assumption may not be necessary). In Appendix B, the authors argue that the spheres data set, which appears to be a counterexample, may not be representative since in practice we tend to capture this coverage (notably on vision tasks). I believe this claim, but the transformation invariance assumption still exists for non-image domains.\\n\\nAnother point I wanted to bring up (also brought up by other reviewers) is that real-life adversarial perturbations may be quite small but still larger than the distance for which Bayesian neural networks will assign low value probability. Is there a reason to believe that in practice this distance is covered?\\n\\nAfter reading the other reviews and rebuttals, I see that perhaps the main contribution of the paper is not meant to be technical but rather to aid with future research. That being said, I am still not convinced by the applicability. How much do the authors' contributions actually explain what is seen in practice? If the data invariance assumption can be discounted for applications to images, doesn't the proof boil down to continuity of BNNs (for which delta can be very small)? I stand by my claim that the paper is a promising theoretical paper, although I'm not convinced of the real-world applications of their theory.\"}",
"{\"title\": \"Response\", \"comment\": \"I thank the authors for responding to my concerns. Given their clarifications, I understand that this paper is not meant as a technical contribution but rather as a conceptual direction for future research. As such, the paper identifies concrete goals to pursue and conducts preliminary experiments. From this point of view, I do agree that the paper makes a meaningful contribution and that my initial score was too harsh. (I want to note however that the paper reads differently than the points made in the author's response and I would encourage the authors to consider changing the narrative at a few places.)\\n\\nNevertheless, I cannot judge the value of the paper based solely on the potential future research that will utilize the proposed framework. I believe it is the responsibility of the authors to clearly demonstrate the potential and applicability of their approach. Thus I still don't consider the paper suitable for ICLR in its current state. I updated my score from 3 to 5 and edited my initial review to reflect this discussion.\", \"i_respond_to_specific_comments_below\": [\"Let me clarify what I mean by \\\"tautological\\\" and \\\"obvious\\\". I apologize if this came out as too harsh. From my point of view, the main theorem says: \\\"If the classifier has confidence 1 on training examples and low confidence away from training examples, then there are no high-confidence wrong classifications.\\\". Is there some deeper insight that I am missing? If not, then I would argue that the technical depth of the theorem is limited. I want to note that I don't view technical depth as a fundamental requirement for a theorem. My original perception was that you considered the theorem as more of a technical then a perceptual contribution. I never implied that the results were not novel, as they are novel to the best of my knowledge.\", \"\\\"this argument completely ignores the accuracy of the model\\\" I apologize for the poor wording, this was not meant as criticism. This sentence is meant to explain why the set T is necessary. Perhaps a better wording would be \\\"the theorem is not directly connected to the accuracy of the model\\\".\", \"I thank the authors for clarifying their point. So, if I understand correctly, the concrete practical recommendation is to focus on models that only exhibit high confidence on the training set and then work towards models that incorporate enough invariances of the data distribution to ensure that these models generalize. If this is the case, then I do think that this is an interesting direction to pursue.\", \"Let me clarify my point about high confidence regions. Consider a test example that is correctly classified with confidence 1. Consider the radius of the largest L2 ball such that every point in this ball is classified correctly with high confidence. Then that radius is very small. In other words it is very easy to find a nearby misclassification for any point. By the continuity of the classifier there will be low confidence points between the original and misclassified points. Hence the value of delta is typically very small.\", \"I agree that the results on the synthetic dataset are encouraging. However, it is hard to draw conclusions about the behavior of real world datasets given these results.\", \"Attacks cannot be considered powerful in isolation but only with respect to the particular model they evaluate. Drawing conclusions from evaluations that are not fully reliable can be misleading. 
This is why I think it is important to evaluate on attacks that accurately reveal the robustness of the model.\"]}",
"{\"title\": \"[continued response]\", \"comment\": [\"\\\"How do these results guide our design of ML models?\\\" / \\\"the value of delta_x is very small\\\"\", \"Both these questions are closely related to each other, and addressed explicitly in our discussion of future research in section 6. We acknowledge that there's very much room for extending our results. However, it is impossible to follow up on all possible threads of future research in a single paper: We must first lay down the foundations for such future research before we can start to pursue it (and we give initial results in our experiments sections). More specifically, given our insights brought in our results, we direct future research to study how different model architectures can yield different uncertainty properties that can increase fast enough / be large for regions far away from the data.\", \"In fact, ***one of our main insights/conclusions is that the community's concentration on robustness of models which can generalise beyond \\u201cmemorising their data\\u201d (ie increasing coverage) is misguided, and research should concentrate on model architectures that increase _uncertainty_ fast enough*** (see eg fig 7).\", \"We are slightly confused by some of the reviewers points - they write that the region around each point where the model assigns high confidence is very small for real NNs, which they say causes adversarial examples. But surely the opposite is true - real NNs are very confident about *large* regions of input space which are not close to any training data (see figures 7 and 9 for example). What we assume the reviewer means is that this region could be very small in our idealisation of real NNs. Indeed we observe this to an extent - one notable result from the experiments on manifold mnist (table 1) is that with idealised inference, HMC networks are less robust to *random* noise - that is, they become uncertain quickly when small amounts of noise are added to the input, which deterministic networks do not. These inputs do not follow the data distribution though, and indeed validation error with respect to noise distribution is low for the model.\", \"\\\"The dataset considered (\\\"ManifoldMNIST\\\") is essentially synthetic with access to the ground-truth probability of each sample\\\"\", \"We don't understand how the reviewer sees this as a critical point? It is completely valid to say that these experiments are in a contrived setting, but we do not think this makes them irrelevant. The aim of these experiments is to see whether we can approximate the conditions in our proof, by using MCMC inference, as close as we can get to for an idealised situation with NNs as far as we know. Since this experiment uses HMC both for inference and for evaluating the probability of data under the generative model, it would be difficult to scale to a more realistic synthetic dataset. The relevance of these experiments is that firstly, they demonstrate that (in this dataset at least) adversarial examples are in regions of lower likelihood under the data distribution. This has often been taken as a given in the literature justifying Bayesian methods for adversarial example defence, but there are also counter-examples making other assumptions (Gilmer et. al., the spheres dataset referenced in the paper). 
Secondly, our experiments demonstrate that for a model with idealised inference, the mutual information is a proxy for the support of the data distribution (alternatively, the input density).\", \"This is another example of a novel insight partly derived from our proof (and presented for the first time in the literature as far as we know), and we would argue that showing that this indeed holds in a real experiment is of significant interest. This experiment is indeed still in a toy setting, but it is considerably closer to conditions in real networks than in the proof.\", \"\\\"the results on real datasets are unreliable [..] using a single gradient estimation query is not enough\\\"\", \"The reviewer raises concerns about the difficulties of testing adversarial examples on stochastic networks, as this can lead to gradient masking, making adversarial examples more difficult to find rather than preventing them from existing. We are aware of these issues, and do not claim that the results in this appendix constitute a full defence against adversarial examples on this dataset. We do not think that our result that dropout requires higher perturbation than normal models (though small perturbation examples can still be found) is particularly controversial, and it is in agreement with previous critical papers on the subject (for example [Carlini and Wagner, 2017]). We are only aiming to show here that ensembling dropout models provides more robustness than a single one, backing up another insight from the visualisations on MNIST in a more realistic situation.\", \"Lastly, we would like to stress that we report results on a powerful attack, the momentum method, that won the NIPS attack competition and was designed to be successful against ensembles of models.\"]}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for the time spent reading our paper.\\n\\nWe would like to dispute the reviewer\\u2019s extremely strong claim of \\u201ctautological results of questionable significance\\u201d. Our proof is by no means tautological (see below), and a proof does not need to be intricate or over-complicated to shed light on a matter of interest. On the contrary - it is often the simplest of proofs that gives the most interesting insights. We would further like to draw the reviewer\\u2019s attention to the fact that this boils down to a matter of presentation. Papers can either over-complicate things to make the results look like an impressive theoretical contribution, or do their best to explain things with the clearest possible presentation. We chose the latter, making an effort to move as much construction outside of the proof into our well-designed premise setup in order to make the the proof clear.\\n\\nWe also strongly disagree with the reviewer\\u2019s post-hoc view claiming \\\"this is obvious\\\", a claim which was made with hindsight after reading our proof and problem definition, and which could be made with any proof after understanding its premise. We give many examples of new and previously undiscussed insights (with many highlighted below). These might seem obvious after reading our submission, but these observations are overlooked in the literature, and they have important implications for the way we think about the problem.\\n*** We would ask the reviewer explicitly where in the literature they saw the result presented in this work, or any ideas following from the results we presented here?\\n\\n--\\nThe rest of the reviewer's criticism falls into two parts - a criticism of the theoretical content of the paper, and of the relevance of the experiments to real world adversarial example defence. We will address these in order.\\n\\n* \\\"Moreover, this argument [the reviewer's own argument] completely ignores the accuracy of the model. [..] In order to escape this issue, the authors propose [..]\\\"\\n- The reviewer brings up an argument and then criticizes their own argument as lacking! (reviewer\\u2019s argument: \\\"continuous models with a confidence of 1 on the training set [..] exists an L2 ball [..] the classifier has high confidence [..] low certainty on all points outside D\\u2019\\\"). This is exactly the reason why we introduce the concept of T, and which allows us to ***expose a sufficient set of requirements for a model to be robust to adversarial examples***. The reviewer then criticizes the proof as tautological, and says they do not see how the results give anything of value. We mention above that we see Bayesian methods and their robustness as an important direction for future research on adversarial examples, a stance the reviewer agrees with. The purpose of the proof is to illustrate which properties of Bayesian classifiers are necessary for adversarial robustness. Simply being a Bayesian model, for example, is clearly insufficient - linear classifiers will have adversarial examples in high dimensions regardless of whether they are Bayesian or not because the model class is not expressive enough for the uncertainty to increase far away from the data. As mentioned in our reply to the second reviewer, we do not claim that our assumptions are realistic or that our proof is particularly difficult if we assume them. 
\\nHowever, the assumptions do ***shed light on which properties of Bayesian models are important for being robust to adversarial examples***. \\n\\n* \\\"this assumption [existence of a set of transformations T both model and data are invariant to] is not connected to the main theorem at all\\\"\\n- We are not sure why the reviewer claims that the assumption is not connected to the main theorem? As the reviewer themselves points out, without the assumption of invariance to a class of transformations it is possible for a model which is \\u201cuncertain about all inputs not extremely close to a training input\\u201d to satisfy our definition of being robust. This is clearly not a very interesting class of models. We introduce the invariance class as a way to avoid this degeneracy. Further, it is, as the reviewer points out, not a very realistic assumption for real models. We do not claim that it is an assumption that holds in practice - we observe that with this assumption we can prove robustness, and from there we proceed to examine the counterpart properties of non-idealised networks (which we discuss in appendix C). \\n\\n[response continued in a separate message]\"}",
"{\"title\": \"An interesting potential direction the significance of which is still to be demonstrated.\", \"review\": \"The paper studies the adversarial robustness of Bayesian classifiers. The authors state two conditions that they show are provably sufficient for \\\"idealised models\\\" on \\\"idealised datasets\\\" to not have adversarial examples. (In the context of this paper, adversarial examples can either be nearby points that are classified differently with high confidence or points that are \\\"far\\\" from training data and classified with high confidence.) They complement their results with experiments.\\n\\nI believe that studying Bayesian models and their adversarial robustness is an interesting and promising direction. However I find the current paper lacking both in terms of conceptual and technical contributions.\\n\\nThey consider \\\"idealized Bayesian Neural Networks (BNNs)\\\" to be continuous models with a confidence of 1.0 on the training set. Since these models are continuous, there exists an L2 ball of radius delta_x around each point x, where the classifier has high confidence (say 0.9). This in turn defines a region D' (the training examples plus the L2 balls around them) where the classifier has high confidence. By assuming that an \\\"idealized BNN\\\" has low certainty on all points outside D' they argue that these idealized models do not have adversarial examples. In my opinion, this statement follows directly from definitions and assumptions, hence having little technical depth or value. From a conceptual point of view, I don't see how this argument \\\"explains\\\" anything. It is fairly clear that classifiers only predicting confidently on points _very_ close to training examples will not have high-confidence adversarial examples. How do these results guide our design of ML models? How do they help us understand the shortcomings of our current models?\\n\\nMoreover, this argument is not directly connected to the accuracy of the model. The idealized models described are essentially only confident in regions very close to the training examples and are thus unlikely to confidently generalize to new, unseen inputs. In order to escape this issue, the authors propose an additional assumption. Namely that idealized models are invariant to a set of transformations T that we expect the model to be also invariant to. Hence by assuming that the \\\"idealized\\\" training set contains at least one input from each \\\"equivalence class\\\", the model will have good \\\"coverage\\\". As far as I understand, this assumption is not connected to the main theorem at all and is mostly a hand-wavy argument. Additionally, I don't see how this assumption is justified. Formally describing the set of invariances we expect natural data to have or even building models that are perfectly encoding these invariances by design is a very challenging problem that is unlikely to have a definite solution. Also, is it natural to expect that for each test input we will have a training input that is close to L2 norm to some transformation of the test input?\\n\\nAnother major issue is that the value of delta_x (the L2 distance around training point x where the model assigns high confidence) is never discussed. This value is _very_ small for standard NN classifiers (this is what causes adversarial examples in the first place!). How do we expect models to deal with this issue? \\n\\nThe experimental results of the paper are essentially for a toy setting. 
The dataset considered (\\\"ManifoldMNIST\\\") is essentially synthetic with access to the ground-truth probability of each sample. Moreover, the results on real datasets are unreliable. When evaluating the robustness of a model utilizing dropout, using a single gradient estimation query is not enough. Since the model is randomized, it is necessary to estimate the gradient using multiple queries. By using first-order attacks on these more reliable gradient estimates, an adversary can completely bypass a dropout \\\"defense\\\" (https://arxiv.org/abs/1802.00420).\\n\\nOverall, I find the contributions of the paper limited both technically and conceptually. I thus recommend rejection.\\n\\n[UPDATE]: Given the discussion with the authors, I agree that the paper outlines a potentially interesting research direction. As such, I have increased my score from 3 to 5 (and updated the review title). I still do not find the contribution of the paper significant enough to cross the ICLR bar.\", \"comments_to_the_authors\": \"-- You cite certain detention methods for adversarial examples (Grosse et al. (2017), Feinman et al. (2017)) that have been shown to be ineffective (that is they can be bypassed by an adaptive attacker) by Carlini and Wagner (https://arxiv.org/abs/1705.07263)\\n-- The organization of the paper could be improved. I didn't understand what the main contribution was until reading Theorem 1 (this is 6 pages into the paper). The introduction is fairly vague about your contributions. You should consider moving related work later in the paper (most of the discussion there is not directly related to your approach) and potentially shortening your background section.\\n-- How is the discussion about Gaussian Processes connected with your results?\\n-- Consider making the two conditions more prominent in the text.\\n-- In Definition 5, the wording is confusing \\\"We define an idealized BNN to be a Bayesian idealized NN...\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Question to reviewer\", \"comment\": \"Could we ask if the above clarified the misunderstandings the reviewer had? We are happy to give further clarifications if the reviewer has further questions\"}",
"{\"title\": \"Question to reviewer\", \"comment\": \"\\\"Overall, this is a promising theoretical paper although I'm not currently convinced of the real-world applications beyond the somewhat small examples in the experiments section.\\\"\\n- Given our answers above, could we ask the reviewer to explain if they think this might still be the case, and if so why?\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": [\"We thank the reviewer for spending time reading our paper. The reviewer seems to have misunderstood large portions of our submission, and we would like to clarify some of these misunderstood points.\", \"The reviewer wrongly characterises the paper as focusing on \\u2018faraway\\u2019 or garbage examples only, rather than adversarial examples generated by nearby perturbations (\\\"Overall, I don\\u2019t see the investigation on the \\u201ctypical\\u201d definition of adversarial examples\\\"; \\\"The focus of the paper is rather on detecting \\u201cfaraway\\u201d data points.\\\"). This is incorrect (see Definition 1 and Theorem 1). We think the confusion might have been caused because the reviewer misunderstood our notation x + \\u03b7 (a standard notation in the field for a small perturbation \\u03b7 around x, see eg (Papernot et al., 2016; Goodfellow et al., 2014), and our Definition 1 which states \\u201csmall perturbation \\u03b7\\u201d). Theorem one clearly deals with the case of x + \\u03b7 as well as \\u201cgarbage points\\u201d.\", \"The reviewer writes \\u2018the nearby perturbation part is taken care of by the concept of all possible transformations\\u2019. This is not what we intended. Rather, the concept of the invariances of the dataset is included in order to give the idealised classifier non-vacous coverage. Without this condition, it is possible for an idealised model to \\u201creject all points it has never seen before\\u201d as adversarial. For example, consider a classifier that keeps the dataset {x_i, y_i} in memory, and classifies new points x by returning y_i with probability 1 if there exists an i such that x_i = x, and returns a uniform probability vector otherwise. This satisfies our definition of \\u2018high uncertainty away from the training data\\u2019 but is clearly not very useful. Introducing the class of transformations gives the idealised classifier *in the proof* the ability to generalise in a non-trivial way, though it is clearly a very strong assumption that does not hold in real networks. This assumption, as well as real-world alternatives to high coverage, is discussed in Appendix C.\", \"The reviewers second point is that our conditions are very strong, and it is not clear they are necessary. We do not claim that the conditions are *necessary* - our results are giving *sufficient* conditions for robustness (as our paper title says). Conditions do not need to be necessary for them to be useful. We think that our conditions provide a useful framework to set directions for future work. We would point out that as far as we know there are no other attempts to formalise what an idealised model without adversarial examples would be like, and we think it is fair to say that empirical attempts to find sufficient conditions by producing a model which does not exhibit adversarial examples have been unsuccessful as of yet.\", \"We are unsure what the reviewer means by \\u2018generalising to the neighbourhood of the true data manifold\\u2019. If by the true data manifold they mean the support of P(x), then \\u2018generalising\\u2019 outside this region is more or less meaningless. This is mentioned in the related work section about oracles - \\u201ccan a point outside the true data distribution be assigned a label in a meaningful way?\\u201d. 
We would also highlight that the delta balls are a subset of the neighbourhood around the true data manifold.\", \"Lastly, the simulations are not designed to resemble real data necessarily, but to show under some idealised conditions (but less idealised than used in the proof) that the properties described do indeed approximately hold for realisable networks. Indeed, one of the experiments (on manifold mnist) is designed to address the previous point about the data support, by showing that adversarial examples are being \\u2018moved away\\u2019 from the data distribution. In Appendix G we give results on real world cats-vs-dogs classification as well.\", \"--\"], \"to_address_the_questions_the_reviewer_had_about_the_trend_of_the_paper\": [\"GPs are the infinite limit of BNNs, and share many properties with them (Matthews et al., 2017). The point of the reference to GPs is that these are an example of a model for which the uncertainty is easy to evaluate. This is then used to illustrate what we hope to obtain in the case of idealised neural networks (gp-like uncertainty).\", \"Eta is a vector in R^dim, the image space, where we follow standard notation as mentioned above. There is no commonly agreed on definition of adversarial examples more rigorous than the one we provide. However, the argument in the paper does not rely on any properties of eta other than there is *some* eta we consider \\u2018small enough\\u2019 to be adversarial.\", \"As mentioned above, this condition is essential to avoid a classifier that is uncertain everywhere from satisfying our definition of an idealised model.\", \"D refers to the training set, which is mentioned above equation 1. D\\u2019 refers to the union of delta balls around the training set, which is in definition 5.\", \"Thanks for pointing this out, we will fix this in our next draft.\"]}",
"{\"title\": \"response to reviewer 1\", \"comment\": \"We thank the reviewer for the detailed feedback and for acknowledging that we tackled an important issue with theoretical and technical tools that are not used often enough in machine learning. The reviewer\\u2019s summary of our theoretical development is faithful, but we would like to address the reviewer\\u2019s comments on impact and real-world usefulness. This is followed by detailed answers to the minor comments and suggestions.\\n--\", \"major_points\": \"First, for the first time in the literature (as far as we know), we studied in detail a proposed mechanism to explain why BNNs have been observed to be empirically more robust to adversarial examples (with such empirical observations given by (Li & Gal, 2017; Feinman et al., 2017; Rawat et al., 2017; Carlini & Wagner, 2017)). We argue that idealised BNNs cannot have adversarial examples, and related these to real-world conditions on BNNs. More specifically, see the sequence of experiments starting with idealised models with idealised data (section 5.1), going through idealised models with real world data (table 1), and finishing with real world data and inference (dropout approx inference, section 5.2, as well as appendix G - REAL-WORLD CATS VS DOGS CLASSIFICATION). \\n\\nSecondly, for the first time in the literature, we proved (and gave empirical evidence in Fig 3) that a connection exists between epistemic uncertainty/input density and adversarial examples, a link which was only *hypothesised* previously (Nguyen et al., 2015). Further, our proof highlighted sufficient conditions for robustness which resolved the inconsistency between (Nguyen et al., 2015; Li, 2018) and (Gilmer et al., 2018).\\n\\nWith regards to the the assumption that \\u201cthe model is invariant to all transformations that the data distribution is invariant under\\u201d not being addressed: this assumption was used in the formal proof to get non-vacuous coverage. In appendix C we give a real-world alternative to this assumption, and in our experiments we demonstrate high coverage empirically (since this property is mostly impossible to assess otherwise, as the reviewer mentioned).\\n\\nApart from resolving an open dispute in the literature, our observations above also suggest future research directions towards tools which answer (or approximate) our conditions better. More specifically, we highlighted (section 6) that the main difficulty with modern BNN robustness is not coverage (as speculated by some), but rather that approximate inference doesn\\u2019t increase the uncertainty fast enough with practical BNN tools (Fig 7). We believe that our observations set major milestones for the community to tackle, and are not of \\\"no real world use\\\". \\n\\n--\", \"minor_comments\": [\"We appreciate the reviewer\\u2019s comments that the appendices were clear and helpful for explaining the main ideas, and acknowledge that our notation might be a bit cumbersome at times. We will improve notation as suggested.\", \"The reviewer mentions in the review the fact that the experiments with HMC included in the paper do not address the data invariance property. This is not the aim of the experiments - rather this is to back up the claim that in the idealised case a BNN increases it\\u2019s uncertainty far from the data, as we assume in our idealised model (\\u201cwe show that near-perfect epistemic uncertainty correlates to density under the image manifold\\u201c). 
It is not obvious a priori that this property is actually true of neural networks in the same way that it is the case for kernel machines with stationary covariance kernels, for example. Previous work on Bayesian techniques for adversarial example detection has invariably used approximate inference, and these proposed defences have generally increased robustness but not eliminated adversarial examples, so we felt that experimentally demonstrating BNNs have this property in the case of idealised *inference* was important.\", \"\\u201cIndeed a model can prevent adversarial examples by predicting high uncertainty for all points that are not near the training examples\\u201d: The reviewer correctly points out that a BNN of this kind could fall back to predicting high uncertainty far away from all the data if the invariance property does not hold (e.g a model that predicts P = \\u00bd everywhere is \\u2018robust\\u2019 to adversarial examples but not very interestingly). The reviewer's point that this is a very strong assumption is valid, and is explicitly discussed in the paper (appendix B PROOF CRITIQUE).\", \"I(w,p) is defined in as equation 1 - we will make this clearer in the text.\", \"We are uncertain what the reviewer means by VI - as argued by (Gal & Gharimani, 2015) dropout is a form of variational approximation. Other flavours of VI such as mean field exist as well, but based on previous results these are unlikely to be significantly better than dropout unless a far more expressive varational distribution is used, leading to scaling issues (Gal, 2016). We agree that a more scalable method would be highly desirable (as discussed in section 6).\"]}",
"{\"title\": \"Interesting and important theory, with questions about usefulness\", \"review\": \"In this paper, the authors posit a class of discriminative Bayesian classifiers that, under sufficient conditions, do not have any adversarial examples. They distinguish between two sources of uncertainty (epistemic and aleatoric), and show that mutual information is a measure of epistemic uncertainty (essentially uncertainty due to missing regions of the input space). They then define an idealised Bayesian Neural Network (BNN), which is essentially a BNN that 1) outputs the correct class probability (and always with probability 1.0) for each input in the training set (and for each transformation of training inputs that the data distribution is invariant under), and 2) outputs a sufficiently high uncertainty for each input not in the union of delta-balls surrounding the training set points. Similarly, an example is defined to be adversarial if it has two characteristics: it 1) lies far from the training data but is classified with high output probability, and it 2) is classified with high output probability although it lies very close to another example that is classified with high output probability for the other class. Condition 1) of an idealised BNN prevents Definition 2) of an adversarial example using the fact that BNNs are continuous, and Condition 2) prevents Definition 1) of an adversarial example since it will prevent \\\"garbage\\\" examples by predicting with high uncertainty (of course I'm glossing over many important technical details, but these are the main ideas if I understand correctly).\\n\\nThe authors backed up their theoretical findings with empirical experiments. In the synthetic MNIST examples, the authors show that adversarial attacks are indeed correlated with lower true input probability. They show that training with HMC results in high uncertainty for inputs not near the input-space, a quality certainly not shared with all other deep models (and another reason that Bayesian models should be preferred for preventing adversarial attacks). On the other hand, the out-of-sample uncertainty for training with dropout is not sufficient to prevent adversarial attacks, although the authors posit a form of dropout ensemble training to help prevent these vulnerabilities. \\n\\nThe authors are tackling an important issue with theoretical and technical tools that are not used often enough in machine learning research. Much of the literature on adversarial attacks is focused on finding adversarial examples, without trying to find a unifying theory for why they work. They do a very solid exposition of previous work, and one of the strengths of this paper comes in presenting their findings in the context of previously discovered adversarial attacks, in particular that of the spheres data set. \\n\\nUltimately, I'm not convinced of the usefulness of their theoretical findings. In particular, the assumption that the model is invariant to all transformations that the data distribution is invariant under is an unprovable assumption that can expose many real-world vulnerabilities. This is the case of the spheres data set without a rotation invariance from Gilmer et al. (2018). In the appendix, the authors mention that the data invariance property is key for making the proof non-vacuous, and I would agree. Without the data invariance property, the proof mainly relies on the fact that BNNs are continuous. 
The experiments are promising in support of the theory, but they do not seem to address this data invariance property. Indeed a model can prevent adversarial examples by predicting high uncertainty for all points that are not near the training examples, which Bayesian models are well equipped to do.\\n\\nI also thought the paper was unclear at times. It is not easy to write such a technical and theoretical paper and to clearly convey all the main points, but I think the paper would've benefited from more clarity. For example, the notation is overloaded in a way that made it difficult to understand the main proofs, such as not clearly explaining what is meant by I(w ; p) and not contrasting between the binary entropy function used for the entropy of a constant epsilon and the more general random variable entropy function. In contrast, I thought the appendices were clearer and helpful for explaining the main ideas. Additionally, Lemma 1 follows trivially from continuity of a BNN. Perhaps removing this and being clearer with notation would've allowed for more room to be clearer for the proof of Theorem 1. \\n\\nA more minor point that I think would be interesting is comparing training with HMC to training with variational inference. Do the holes that come from training with dropout still exist for VI? VI could certainly scale in a way that HMC could not, which perhaps would make the results more applicable. \\n\\nOverall, this is a promising theoretical paper although I'm not currently convinced of the real-world applications beyond the somewhat small examples in the experiments section.\\n\\nPROS\\n-Importance of the issue\\n-Exposition and relation to previous work\\n-Experimental results (although these were for smaller data sets)\\n-Appendices really helped aid the understanding\\n\\nCONS\\n-Real world usefulness\\n-Clarity\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Results seem to be shallow and vague\", \"review\": \"This paper extends the definition of adversarial examples to the ones that are \\u201cfar\\u201d from the training data, and provides two conditions that are sufficient to guarantee the non-existence of adversarial examples. The core idea of the paper is using the epistemic uncertainty, that is the mutual information measuring the reduction of the uncertainty given an observation of the data, to detect such faraway data. The authors provided simulation studies to support their arguments.\\n\\nIt is interesting to connect robustness with BNN. Using the mutual information to detect the \\u201cfaraway\\u201d datapoint is also interesting. But I have some concerns about the significance of the paper:\\n1. The investigation of this paper seems shallow and vague. \\n (1). Overall, I don\\u2019t see the investigation on the \\u201ctypical\\u201d definition of adversarial examples. The focus of the paper is rather on detecting \\u201cfaraway\\u201d data points. The nearby perturbation part is taken care by the concept of \\u201call possible transformations\\u201d which is actually vague.\\n (2). Theorem 1 is basically repeating the definition of adversarial examples. The conditions in the theorem hardly have practical guidance: while they are sufficient conditions, all transformations etc.. seem far from being necessary conditions, which raises the question of why this theory is useful? Also how practical for the notion of \\u201cidealized NN\\u201d?\\n (3). What about the neighbourhood around the true data manifold? How would the model succeed to generalize to the true data manifold, yet fail to generalize to the neighbourhood of the manifold in the space? Delta ball is not very relevant to the \\u201ctypical\\u201d definition of adversarial examples, as we have no control on \\\\delta at all.\\n2. While the simulations support the concepts in section 4, it is quite far from the real data with the \\u201ctypical\\u201d adversarial examples. \\n\\nI also find it difficult to follow the exact trend of the paper, maybe due to my lack of background in bayesian models. \\n1. In the second paragraph of section 3, how is the Gaussian processes and its relation to BNN contributing to the results of this paper?\\n2. What is the rigorous definition for \\\\eta in definition 1?\\n3. What is the role of $\\\\mathcal{T}$, all the transformations $T$ that introduce no ambiguity, in Theorem 1. Why this condition is important/essential here?\\n4. What is the D in the paragraph right after Definition 4? What is D\\u2019 in Theorem 1?\\n5. Section references need to be fixed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyMxAi05Km | Dual Learning: Theoretical Study and Algorithmic Extensions | [
"Zhibing Zhao",
"Yingce Xia",
"Tao Qin",
"Tie-Yan Liu"
] | Dual learning has been successfully applied in many machine learning applications, including machine translation, image-to-image transformation, etc. The high-level idea of dual learning is very intuitive: if we map an x from one domain to another and then map it back, we should recover the original x. Although its effectiveness has been empirically verified, theoretical understanding of dual learning is still missing. In this paper, we conduct a theoretical study to understand why and when dual learning can improve a mapping function. Based on the theoretical discoveries, we extend dual learning by introducing more related mappings and propose highly symmetric frameworks, cycle dual learning and multipath dual learning, in both of which we can leverage the feedback signals from additional domains to improve the qualities of the mappings. We prove that both cycle dual learning and multipath dual learning can boost the performance of standard dual learning under mild conditions. Experiments on WMT 14 English↔German and MultiUN English↔French translations verify our theoretical findings on dual learning, and the results on the translations among English, French, and Spanish of MultiUN demonstrate the efficacy of cycle dual learning and multipath dual learning. | [
"machine translation",
"dual learning"
] | https://openreview.net/pdf?id=HyMxAi05Km | https://openreview.net/forum?id=HyMxAi05Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SklOqm0ZJE",
"BJgJmn5YCm",
"Ske8LgLJ6m",
"SygP9_2p2X",
"rye3SauchX"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543787407959,
1543248918671,
1541525581772,
1541421199237,
1541209411933
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper865/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper865/Authors"
],
[
"ICLR.cc/2019/Conference/Paper865/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper865/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper865/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers vary in their scores but overall there is agreement that this paper is not ready for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"Thank you!\", \"comment\": \"We thank all reviewers for the very helpful comments and suggestions! We will revise our paper accordingly.\"}",
"{\"title\": \"Theoretical study and empirical testing of improvement of machine translation classifiers using dual and cyclical paths among languages\", \"review\": \"The paper addresses a means of boosting the accuracy of automatic translators (sentences) by training dual models (a.k.a. language A to B, B to A), multipath (e.g. A to B to C) and cyclical (e.g. A to B to C to A) while starting with well initialized models for translating simple pairs. The idea that additional errors are revealed and allow classifiers to adapt, thus boosting classifier performance, is appealing and intuitive. That a theoretical framework is presented to support this assumption is encouraging, but the assumptions behind this analysis (e.g. Assumption 1) are rather strong. Equation 2 assumes independence. Equations 3-5 can be presented more clearly. Not only are translation errors not uniform in multiclass settings, but they can be highly correlated - this being a possible pitfall of boosting approaches, seen as general classifier errors. The same strong assumptions permeate from the dual to the cyclical case. On the other hand, the (limited) empirical test presented, using a single dataset and baseline classifier method, does support the proposed improvement by boosting, albeit leading to improvements which are slight.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Possibly some interesting ideas here, but has errors, and also has a very strong assumption (that each sentence has a unique translation)\", \"review\": \"The paper gives theorems concerning \\\"dual learning\\\" - that is, making\\nuse of round-trip consistency in learning of translation and other\\ntasks.\\n\\nThere are some interesting ideas here. Unfortunately, I think there\\nare issues with clarity/choice of notation and correctness (errors\\nresulting from problems with the notation - or at least it's very\\nhard to figure out if things are correct under some intepretation).\\n\\nMore specifically, I'm uneasy about the use of x^i and x^j as defined\\nin section 2. In some cases x^j is a deterministic function of x^i, in\\nsome cases it's a random variable, these cases are mixed. Section 2\\nbecomes tangled up in this issue. It would be much better I think to\\ndefine a function f_ij(x) for each (i,j) pair that maps a sentence\\nx \\\\in S^i to its correct translation f_ij(x) \\\\in S^j.\\n\\nA critical problem with the paper is that Eq. 2 is I think incorrect.\\nClearly,\\n\\nPr(T_ij(x_i) = f_ij(x_i), T_ji(f_ij(x_i)) = x_i) [1]\\n=\\nPr(T_ij(x_i) = f_ij(x_i)) [2]\\n*\\nPr(T_ji(f_ij(x_i)) = x_i) | T_ij(x_i) = f_ij(x_i)) [3]\\n\\nI think [1] is what is meant by the left-hand-side of Eq 2 in the paper -\\nthough the use of x^j is ambiguous (this ambiguity is a real issue).\\n\\nIt can be verified that\\n\\nPr(T_ij(x_i) = f_ij(x_i)) = p_ij\\n\\nhowever\\n\\nPr(T_ji(f_ij(x_i)) = x_i) | T_ij(x_i) = f_ij(x_i)) \\\\neq p^r_ji\\n\\nThe definition of p^r_ji is that of a different quantity.\\n\\nThis problem unfortunately permeates the statement of theorem 1, and\\nthe proof of theorem 1. It is probably fixable but without a\\nsignificantly revised version of the paper a reader/reviewer is\\nbasically guessing what a corrected version of the paper would\\nbe. Unfortunately I think publishing the paper with errors such as\\nthis would be a problem.\", \"some_other_points\": \"[1] The theorem really does have to assume that there is a unique\\ncorrect translation f_ij(x^i) for each sentence x^i. Having multiple\\npossible translations breaks things. The authors say in section 2 \\\"In\\npractice, we may select a threshold BLEU (Papineni et al., 2002a)\\nscore, above which the translation is considered correct\\\": this seems\\nto imply that the results apply when multiple translations (above\\na certain BLEU score) are possible. But my understanding is that this\\nwill completely break the results (or at least require a significant\\nmodification of the theory).\\n\\n[2] A further problem with ambiguity/notation is that T^d is never\\nexplicitly defined. Presumably we always have T_ij^s(x^i) = T_ij(x^i) if\\nT_ji(T_ij(x^i)) = x^i? That needs to be explicitly stated.\\n\\n[3] There may be something interesting in theorem 1 - putting aside point\\n[1] above - but I am just really uneasy with this theorem and its proof\\ngiven that it uses p^r_ji, and the issue with Eq. 2.\\n\\n[4] Another issue with the definition of p^r_ij: the notation\\nP_{X^(j, r) ~ \\\\mu}(...) where the expression ... does not refer\\nto X^{j, r} (instead it refers to x^j) is just really odd,\\nand confusing.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Overlooked assumption?\", \"review\": \"This paper provides a theoretical perspective of the dual learning tasks and proposes two generalizations (multipath/cycle dual learning) that utilize multiple language sets. Through experiments, the paper discusses the relationship between theoretical perspective and actual translation quality.\\n\\nOverall, the paper is well written and discussed enough. My concern is about Theorem 1 that could be a critical problem.\\nIn the proof of Theorem 1, it discussed that the dual learning can minimize Case 2. This assumption is reasonable if the vanilla translator is completely fixed (i.e., no longer updated) but this condition may not be assumed by the authors as far as I looked at Algorithm 2 and 3 that update the parameters of vanilla translators directly. The proof is constructed by only the effect against Case 2. However, if the vanilla translator is directly updated through dual training, there should exist some random changes in also Case 1 and this behavior should also be included in the theorem.\", \"correction_and_suggestions_writing\": [\"It is better to introduce an additional $\\\\alpha$, $\\\\beta$ and $\\\\gamma$ for the vanilla translation accuracy (e.g., $\\\\alpha_0 := p_{ij}p_{ji}^r$) so that most formulations in Section 3 can be largely simplified.\", \"In Justification of Assumption1 ... \\\"the probability of each cluster is close to $p_max$\\\" -> maybe \\\"greater than $p_max$\\\" to satisfy the next inequality.\", \"Eq. (3) ... $T_{ji}^d(T_{ij}^d(x^{(i)})$ -> $T_{ji}^d(x^{(j)})$ to adjust other equations.\", \"Section 3 Sentence 1: \\\"translatorby\\\" -> \\\"translator by\\\"\", \"Section 4.2: ${\\\\rm Pr}_{ X^{(3)} \\\\sim T_{23}^d (X^{(1)}) }$ -> $... (X^{(2)}) }$\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HJex0o05F7 | UaiNets: From Unsupervised to Active Deep Anomaly Detection | [
"Tiago Pimentel",
"Marianne Monteiro",
"Juliano Viana",
"Adriano Veloso",
"Nivio Ziviani"
] | This work presents a method for active anomaly detection which can be built upon existing deep learning solutions for unsupervised anomaly detection. We show that a prior needs to be assumed on what the anomalies are, in order to have performance guarantees in unsupervised anomaly detection. We argue that active anomaly detection has, in practice, the same cost as unsupervised anomaly detection but with the possibility of much better results. To solve this problem, we present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, presenting results on both synthetic and real anomaly detection datasets. | [
"Anomaly Detection",
"Active Learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=HJex0o05F7 | https://openreview.net/forum?id=HJex0o05F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlqBUqGeV",
"Sye1KuXPR7",
"rklEM4DVRX",
"B1xA8-vVCQ",
"Sygui0INCX",
"B1gpbIRl6Q",
"S1ghJ5FqnX",
"ryg9Strt2m",
"BklLwmi_n7",
"rkl6GQj_3X",
"rkx-wWBacQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1544885826138,
1543088247036,
1542906891965,
1542906198254,
1542905503646,
1541625348659,
1541212644120,
1541130561719,
1541088093990,
1541088020621,
1539293529435
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper864/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper864/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper864/Authors"
],
[
"ICLR.cc/2019/Conference/Paper864/Authors"
],
[
"ICLR.cc/2019/Conference/Paper864/Authors"
],
[
"ICLR.cc/2019/Conference/Paper864/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper864/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper864/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper864/Authors"
],
[
"ICLR.cc/2019/Conference/Paper864/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Following the unanimous vote of the reviewers, this paper is not ready for publication at ICLR. The most significant concern raised is that there does not seem to be an adequate research contribution. Moreover, unsubstantiated claims of novelty do not adequately discuss or compare to past work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lacks demonstrated research contribution beyond past work\"}",
"{\"title\": \"Addresses some of the review comments\", \"comment\": \"Dear Authors, I appreciate your addressing some of the review comments. However, some major issues with the paper remain:\\n\\n1. Simply plugging deep-learning with active learning (for anomaly detection) is not a significant contribution.\\n\\n2. The theory in the paper is completely redundant and its implications are already well-known and well-appreciated. There is no need to introduce a superficial 'no free anomaly' theorem. It is independent of the algorithm and hence seems quite out-of-place.\\n\\n3. It is correct that the paper lacks space for some critical aspects. However, this is a problem arising out of the paper's organization. For instance, the theory and the results on the synthetic data such as Figure 3 are not relevant and uninteresting. You could remove all these and try addressing the more important concerns which are of significance to anomaly detection and which possibly extend active learning with deep learning into new territory. The current approach is merely plugging in one black box over other older ones.\"}",
"{\"title\": \"Thank you for your very thorough and detailed review! And further clarifications.\", \"comment\": \"We would like to again thank you for your very thorough and detailed review.\\nWe tried to incorporate all the feedback into the manuscript, starting by changing its title to 'UaiNets: From Unsupervised to Active Deep Anomaly Detection'.\", \"we_believe_this_title_better_represents_the_main_point_of_this_work\": \"translating unsupervised deep anomaly detection models to active ones.\\nWe also tried to change the verbiage in the paper to make this clearer.\\n\\nWe have already addressed most of the points raised in the commentary bellow (before rebuttal period), but now that we incorporated it into the manuscript would like to readdress a few.\\n\\n1. Related Works: \\\"...active anomaly detection remains an under-explored approach to this problem...\\\"\\n-> We still believe it is under explored, although we made it clear that there are some very interesting prior work on this.\\n\\n4. The paper mentions that s(x) might not be differentiable. However, the sigmoid form of s(x) is differentiable.\\n-> We ran it allowing gradients through s(x) and the network improves on most datasets.\\n-- But, since the underlying models might have non differentiable s(x), we kept the old results in the paper. We believe they are more representative.\\n\\n5. Does not acknowledge the well-known result that mixture models are unidentifiable. The math in the paper is mostly redundant. Some references:\\n-> We added a short acknowledgement in Section 2.1.\\n\\n6. Does not acknowledge existing work that adds classifier over unsupervised detectors (such as AI2). This is very common.\\n-> We read AI2 carefully and, if we understood correctly, it does not add a classifier over unsupervised detectors.\\n-- It uses both a supervised classifier (random forest) and an unsupervised ensemble in a parallel manner to find anomalies.\\n-- It trains both the unsupervised and supervised using the same set of features M, although the supervised model is only trained on the already labeled instances.\\n-- So the only other work that adds classifier over unsupervised detectors would be LODA-AAD and Tree-AAD, which we address as really important prior work.\\n-- We do it differently than they do though.\\n\\n-- To the best of our knowledge ours is the first work which applies deep learning to active anomaly detection.\\n-- We believe this is also the first work to approach active anomaly detection in an end-to-end manner.\\n--- Other work, such as LODA-AAD and Tree-AAD have a phase where they train their underline algorithms, and another which uses labeled instances to learns weights to change these underlying results.\\n--- At the same time, AI^2 learns two separate models, an unsupervised and a supervised one, each with its advantages.\\n--- Ours uses a composite loss function composed of the underlying model's loss added to the UAI layer one (binary cross entropy only applied to labeled instances).\\n--- We tried to make this clearer in the text.\\n-- Figures 9 and 10 in Appendix C.1 show why learning representations end-to-end is important.\\n\\n8. Does not compare against current state-of-the-art Tree-based AAD\\n-> Added comparison\\n\\n9. The 'Generalized' in the title is incorrect and misleading. This is specific to deep-networks. Stacking supervised classifiers on unsupervised detectors is very common. See comments on related works.\\n-> We changed the title.\\n\\n10. 
Does not propose any other query strategies than greedily selecting top.\\n-> We believe this would be a very interesting research topic, one we want to study, but it deserves a paper on its own.\\n-- It would be too dense a topic to cram it in here, so we followed the same strategy as prior work when doing this.\\n-- This paper already had 25 pages and we could not address this issue here.\"}",
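To make the composite objective described in the response above concrete, here is a minimal sketch, assuming a PyTorch-style setup; `uai_composite_loss`, `recon_loss`, `uai_logits`, `labels`, and `labeled_mask` are illustrative names we introduce, not the authors' code:

```python
import torch
import torch.nn.functional as F

def uai_composite_loss(recon_loss, uai_logits, labels, labeled_mask):
    """Sketch of the composite loss described above: the underlying model's
    unsupervised loss (e.g., a denoising autoencoder's reconstruction error)
    plus the UAI layer's binary cross-entropy, applied only to instances an
    expert has already labeled. All names here are illustrative."""
    if labeled_mask.any():
        bce = F.binary_cross_entropy_with_logits(
            uai_logits[labeled_mask], labels[labeled_mask].float()
        )
    else:  # no expert feedback yet (cold start): only the unsupervised term
        bce = uai_logits.new_zeros(())
    return recon_loss + bce
```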
"{\"title\": \"Thank you for your comments!\", \"comment\": \"Thank you for your comments. We tried to change the verbiage in the paper to make it clearer.\", \"we_address_each_point_bellow\": \"the introduction is unusually short, with a 1st paragraph virtually unreadable due to the abuse of citations. Two additional paragraphs, covering in an intuitive manner both the proposed approach & the main results, would dramatically improve the paper's readability\\n->We added a more detailed explanation of the architecture into the introduction and moved the schematic in Figure 1 here as well to try to exemplify it.\\n\\nsection 2.1 starts quite abruptly with he two Lemmas 7 and Theorem 3 (which, in fact, is Theorem 1). This section would probably read a lot better without the two Lemmas, as the authors only refer to the main result in the Theorem. The second, intuitive part of 2.1 is extremely helpful.\\n-> We also moved the lemmas to the appendix, trying to make this clearer.\\n\\nit is unclear why the authors have applied the approach in \\\"4.3\\\" only to a single dataset, rather than all the 11 datasets\\n-> There are only two datasets that already have a test set with new classes of anomalies: KDDCUP and KDDCUP-rev.\\n-- We ran only on KDDCUP-rev because the LODA-AAD takes too long on KDDCUP for this experiment.\\n--- There are 311029 test instances in KDDCUP and 67908 in KDDCUP-rev\\n--- And there are 494021 train instances in KDDCUP and 121597 in KDDCUP-rev\\n--- The analysis already took a couple of days on KDDCUP-rev.\\n--- It ran 16 times slower in KDDCUP (4 times less expert feedback and 4 times less test data)\\n-- For each expert feedback iteration, we ran the model in the full test set.\\n\\nplease change the color schemes for Figures 3 & 4, where the red/orange (Fig 3) and various blues (Fig 4) are difficult to distinguish\\n-> There was a problem with the hard drive where results were saved and we lost them.\\n-- We will rerun experiments and try to fix this.\", \"bottom_of_page_3\": \"\\\"are rare as expected\\\" --> \\\"are as rare as expected\\\"\\n-> Changed it\\n\\nWe hope this clears any confusing points the paper might have made. If it doesn't, we would be pleased to answer any other questions/suggestions.\"}",
"{\"title\": \"Thank you for the feedback!\", \"comment\": \"We would like to start by thanking you for the feedback, we tried to incorporate it in the manuscript.\", \"we_also_address_each_of_your_points_bellow\": \"The paper provided a convincing and intuitive motivation regarding the need for active learning in unsupervised anomaly detection.\\n-> Thank you!\\n\\nHowever the proposed approach of requesting expert feedback for the top ranked anomalies is straightforward and unsurprising, given past work on active learning.\\n-> Could you elaborate more in which way it is unsurprising?\\n-- We tried to make clearer that the main contribution of our work is not that it uses active learning, but how it does so.\\n-- To the best of our knowledge this is the first work which applies deep learning to active anomaly detection.\\n-- We believe this is also the first work to approach active anomaly detection in an end-to-end manner.\\n--- Other work, such as LODA-AAD and Tree-AAD have a phase where they train their underlying algorithms, and another which uses labeled instances to learns weights to change these underlying results.\\n--- At the same time, AI^2 learns two separate models, an unsupervised and a supervised one, each with its advantages.\\n--- Ours uses a composite loss function composed of the underlying model's one added to the UAI layer loss (binary cross entropy only applied to labeled instances).\\n--- We tried to make this clearer in the text.\\n-- Figures 9 and 10 in Appendix C.1 show why learning representations end-to-end is important.\\n\\nThe experiments on synthetic data are also unsurprising. Moreover these are based on a questionable premise: the instances that are \\\"hard\\\" to classify are treated as anomalies. This is not very realistic.\\n-> This experiments' results may be unsurprising.\\n-- Nonetheless, we believe they are interesting to test the assumption that our model can indeed deal with different 'types' of anomalies, even when the underlying model can't.\\n-- Presenting empirical results for this.\\n-- Furthermore, we believe that even if one of the anomaly types is not realistic they serve their purpose, of testing the robustness of the UaiNets compared to the underlying models.\", \"regarding_the_real_data_experiments\": \"In Table 1 the results for DAE_uai are based on which budget b? How does the result vary with b?\\n-> For all these experiments we used b as the number of anomalies in the dataset.\\n-- The model is robust to the choice of b, this can be seen in appendix C.2.\\n-- The larger the b the better the algorithm becomes, but it can usually already learn pretty well even with few labels.\\n-- Figure 11 shows that for KDDCUP, Thyroid, Arrhythmia and KDDCUP-Rev the algorithm improves significantly with a few examples.\"}",
"{\"title\": \"An interesting problem : active learning for anomaly detection; method suffering from a lack of novelty; questions about experiments\", \"review\": \"The paper provided a convincing and intuitive motivation regarding the need for active learning in unsupervised anomaly detection.\\nHowever the proposed approach of requesting expert feedback for the top ranked anomalies is straightforward and unsurprising, given past work on active learning. \\nThe experiments on synthetic data are also unsurprising. Moreover these are based on a questionable premise: the instances that are \\\"hard\\\" to classify are treated as anomalies. This is not very realistic.\", \"regarding_the_real_data_experiments\": \"In Table 1 the results for DAE_uai are based on which budget b? How does the result vary with b?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper that can be significantly improved by a better organization.\", \"review\": [\"This is an interesting paper on a topic with real-world application: anomaly detection.\", \"The paper's organization is, at times quite confusing:\", \"the introduction is unusually short, with a 1st paragraph virtually unreadable due to the abuse of citations. Two additional paragraphs, covering in an intuitive manner both the proposed approach & the main results, would dramatically improve the paper's readability\", \"section 2.1 starts quite abruptly with he two Lemmas 7 and Theorem 3 (which, in fact, is Theorem 1). This section would probably read a lot better without the two Lemmas, as the authors only refer to the main result in the Theorem. The second, intuitive part of 2.1 is extremely helpful.\", \"it is unclear why the authors have applied the approach in \\\"4.3\\\" only to a single dataset, rather than all the 11 datasets\"], \"other_comments\": [\"please change the color schemes for Figures 3 & 4, where the red/orange (Fig 3) and various blues (Fig 4) are difficult to distinguish\", \"bottom of page 3: \\\"are rare as expected\\\" --> \\\"are as rare as expected\\\"\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Active anomaly detection technique employing existing approaches and lacking appropriate literature review\", \"review\": [\"(Since the reviewer was unclear about the OpenReview process, this review was earlier posted as public comment)\", \"Most claims of novelty can be clearly refuted such as the first sentence of the abstract \\\"...This work presents a new approach to active anomaly detection...\\\" and the paper does not give due credit to existing work. Current research such as Das et al. which is the most relevant has been deliberately not introduced upfront with other works (because it shows lack of the present paper's novelty) and instead deferred to later sections. The onus of a thorough literature review and laying down a proper context is on the authors, not the reviewers. Detailed comments are below.\", \"1. Related Works: \\\"...active anomaly detection remains an under-explored approach to this problem...\\\"\", \"Active learning in anomaly detection is well-researched (AI2, etc.). See related works section in Das et al. 2016 and:\", \"K. Veeramachaneni, I. Arnaldo, A. Cuesta-Infante, V. Korrapati, C. Bassias, and K. Li, \\\"Ai2: Training a big data machine to defend,\\\" International Conference on Big Data Security, 2016.\", \"2. \\\"To deal with the cold start problem, for the first 10 calls of select_top...\\\":\", \"No principled approach to deal with cold start and one-sided labels (i.e., the ability to use labels when instances from only one class are labeled.)\", \"3. Many arbitrary hyper parameters as compared to simpler techniques:\", \"The number of layers, nodes in hidden layers.\", \"The number of instances (k) per iteration\", \"The number of pretraining iterations\", \"The number of times the network is retrained (100) after each labeling call\", \"Dealing with cold start (10 labeling iterations of 10 labels each, i.e. 100 labels).\", \"4. The paper mentions that s(x) might not be differentiable. However, the sigmoid form of s(x) is differentiable.\", \"5. Does not acknowledge the well-known result that mixture models are unidentifiable. The math in the paper is mostly redundant. Some references:\", \"Identifiability Of Nonparametric Mixture Models And Bayes Optimal Clustering (pradeepr/arxiv npmix v.pdf)\\\" target=\\\"_blank\\\" rel=\\\"nofollow\\\">https://www.cs.cmu.edu/ pradeepr/arxiv npmix v.pdf)\", \"Semiparametric estimation of a two-component mixture model by Bordes, L., Kojadinovic, I., and Vandekerkhove, P., Annals of Statistics, 2006 (https://arxiv.org/pdf/math/0607812.pdf)\", \"Inference for mixtures of symmetric distributions by David R. Hunter, Shaoli Wang, Thomas P. Hettmansperger, Annals of Statistics, 2007 (https://arxiv.org/pdf/0708.0499.pdf)\", \"Inference on Mixtures Under Tail Restrictions by K. Jochmans, M. Henry, and B. Salanie, Econometric Theory, 2017 (http://econ.sciences-po.fr/sites/default/files/file/Inference.pdf)\", \"6. Does not acknowledge existing work that adds classifier over unsupervised detectors (such as AI2). This is very common.\", \"This is another linear model (logistic) on top of transformed features. The difference is that the transformed features are from a neural network and optimization can be performed in a joint fashion. The novelty is marginal.\", \"7. While the paper argues that a prior needs to be assumed, it does not use any in the algorithm. There seems to be a disconnect. It also does not acknowledge that AAD (LODA/Tree) does use a prior. 
Priors for anomaly proportions in unsupervised algorithms are well-known (most AD algos support that such as OC-SVM, Isolation Forest, LOF, etc.).\", \"8. Does not compare against current state-of-the-art Tree-based AAD\", \"Incorporating Expert Feedback into Tree-based Anomaly Detection by Das et al., KDD, 2017.\", \"9. The 'Generalized' in the title is incorrect and misleading. This is specific to deep-networks. Stacking supervised classifiers on unsupervised detectors is very common. See comments on related works.\", \"10. Does not propose any other query strategies than greedily selecting top.\", \"11. Question: Does this support streaming?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the feedback. - Continuation\", \"comment\": \"6. We do not think the novelty is marginal.\\n Deep Learning architectures excel exactly as feature extractors, being great for learning representations.\\n Besides, we show through the experiments in Appendix C.1 that end to end learning helps the base architecture learn better feature representations for anomaly detection.\\n\\n7. We do not argue that a prior needs to be assumed for all cases (although the no free lunch theorem does). We only argue that unsupervised anomaly detection needs one.\\n Supervised algorithms have, in general, presented good priors for most supervised learning problems, and the UAI layer learns in a supervised (active) way.\\n We also show in Section 4.1 that although unsupervised active learning have to trade off accuracy in a setting for another, active algorithms are robust to their choice and can give good results in all analyzed settings.\\n\\n8. We should have compared to them. Here are the results:\\nTree-AAD 0.89* 0.29 0.86 0.50 0.32 0.53 0.69 0.76 0.94 0.59 0.92\\nDAE_uai 0.94 0.47 0.57 0.91 0.33 0.55 0.66 0.64 0.86 0.60 0.93\", \"in_order\": \"KDDCUP, Arrhythmia, Thyroid, KDDCUP-Rev, Yeast, Abalone, CTG, Credit Card, Covtype, MMG, Shuttle\\n* to to run Tree-AAD on KDDCUP we needed to limit its memory about the anomalies it had already learned, forgetting the oldest ones. This reduced its runtime complexity from O(b^2) to O(b) in our tests, where b is the budget limit for the anomaly detection task.\\n\\n9. We can see how it might be misleading and will consider changing the title.\\n\\n10. Greedily selecting top is a good strategy in practical settings and (Das et al. 2016) and (Das et al. 2017) also use it.\\n In practical scenarios we want to have the most anomalies for a given budget, so selecting the most anomalous instance at a time is a useful strategy.\\n Also, if we select a non anomalous instance, we will use it to correct our probability distribution, improving our results in the next iteration.\\n Finally, since anomaly detection is already a highly imbalanced setting, we might not get anomalous instances even when picking top anomalous results, so actively searching them might be a good choice.\\n\\n11. What do you mean by streaming?\\n Section 4.3 shows a setting when we have new anomalous instances arriving and we want to detect anomalies in it.\\n If you wanted to run it on streaming data you would need to revisit the previously labeled instances every one in a while to keep training on them, while continuing training the base model on the streamed data.\"}",
"{\"title\": \"Thank you for the feedback.\", \"comment\": \"Thank you for your comment and we appreciate the feedback, we will incorporate suggestions in our manuscript. In this work we present new methods based on the proposed new architectures (UaiNets), which we see as a new approach to active anomaly detection. This might be better phrased as \\\"This work presents new active anomaly detection methods\\\". And we do give credit to Das et al. stating \\\" The most similar prior work to ours in this setting is (Das et al., 2016), which proposed an algorithm that can be employed on top of any ensemble methods based on random projections.\\\", but we should have mentioned it in Section 3.1 when we describe our approach and we will fix this during the rebuttal period. Nonetheless this was not ill intended or deliberate.\", \"we_address_each_of_your_detailed_comments_bellow\": \"1. We still believe it is an under-explored approach to this problem. In the well known (Chandola et al. 2009) survey, they don't mention active learning at all. Only citing (Abe et al. 2006) as supervised anomaly detection. (Das et al. 2016) only has 12 citations and (Das et al. 2017) has only one self-citation. These are some really interesting works in this area, but we believe if this was a well-researched topic they would have more recognition (assuming citations can be used as a measure for recognition).\\n Nonetheless, we should indeed have cited (Veeramachaneni et al. 2016) and (Das et al. 2017). We will add it during the rebuttal phase.\\n- Chandola, V., Banerjee, A. and Kumar, V., 2009. Anomaly detection: A survey. ACM computing surveys (CSUR), 41(3), p.15.\\n- Abe, N., Zadrozny, B. and Langford, J., 2006, August. Outlier detection by active learning. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 504-509). ACM.\\n- Das, S., Wong, W.K., Dietterich, T., Fern, A. and Emmott, A., 2016, December. Incorporating expert feedback into active anomaly discovery. In Data Mining (ICDM), 2016 IEEE 16th International Conference on (pp. 853-858). IEEE.\\n- Das, S., Wong, W.K., Fern, A., Dietterich, T.G. and Siddiqui, M.A., 2017. Incorporating Feedback into Tree-based Anomaly Detection. arXiv preprint arXiv:1708.09441.\\n- Veeramachaneni, K., Arnaldo, I., Korrapati, V., Bassias, C. and Li, K., 2016, April. AI^ 2: training a big data machine to defend. In Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS), 2016 IEEE 2nd International Conference on (pp. 49-54). IEEE.\\n\\n2. Our architecture can be built on top of state-of-the-art unsupervised anomaly detection models, so using them during our models cold start is a good option. it gives state of the art anomaly detection in these first steps.\\n We state in the paper though that an interesting future work would be \\\"using the UAI layers confidence in its output to dynamically choose between either directly using its scores, or using the underlying unsupervised model\\u2019s anomaly score to choose which instances to audit next\\\".\\n This is not straight forward though, since confidence scores from deep learning architectures are usually unregulated.\\n\\n3. 
Our model has several hyper parameters, but we show through our experiments that the network can produce good results to all analyzed datasets with the same choice of hyper parameters.\\n We only change k when dealing with datasets with few anomalies to give the model the chance to further interact with labels.\\n Our algorithm is robust to k. For k \\u2208 {5; 10; 20; 30; 40; 50; 100}, using KDDCUP-rev, we get F1 scores of {0.90; 0.91; 0.90; 0.90; 0.91; 0.91; 0.91}, respectively, with no statistical difference between them (p < 0.1).\\n The choice of k is left to the user, since it might depend on their business model.\\n A large company with several experts might want to parallelize the models feedback and get more instances per iteration with the model.\\n\\n4. Both base models used have differentiable s(x) (squared error in DAE and sigmoid in classifier), but we wanted to build an architecture which could (potentially) be applied to different deep learning models in the future. Since this models might have non differentiable s(x) we didn't allow gradients to flow through it in our experiments.\\n we didn't test this, but we believe we might actually see better results if we allowed gradients through s(x).\\n\\n5. We believe our results expand on the unidentifiability of mixture models, showing that in this case *all* possible options are equally unidentifiable.\\n Nonetheless we should have cited these results and will mention and compare to them during rebuttal phase.\"}",
"{\"comment\": [\"This work starts making clearly refuted claims of novelty right from the first sentence of the abstract \\\"...This work presents a new approach to active anomaly detection...\\\" and does not give due credit to existing work. Current research such as Das et al. which is the most relevant has been deliberately not introduced upfront with other works (because it shows lack of the present paper's novelty) and instead deferred to later sections. The onus of a thorough literature review and laying down a proper context is on the authors, not the reviewers. Detailed comments are below.\", \"1. Related Works: \\\"...active anomaly detection remains an under-explored approach to this problem...\\\"\", \"Active learning in anomaly detection is well-researched (AI2, etc.). See related works section in Das et al. 2016 and:\", \"K. Veeramachaneni, I. Arnaldo, A. Cuesta-Infante, V. Korrapati, C. Bassias, and K. Li, \\\"Ai2: Training a big data machine to defend,\\\" International Conference on Big Data Security, 2016.\", \"2. \\\"To deal with the cold start problem, for the first 10 calls of select_top...\\\":\", \"No principled approach to deal with cold start and one-sided labels (i.e., the ability to use labels when instances from only one class are labeled.)\", \"3. Many arbitrary hyper parameters as compared to simpler techniques:\", \"The number of layers, nodes in hidden layers.\", \"The number of instances (k) per iteration\", \"The number of pretraining iterations\", \"The number of times the network is retrained (100) after each labeling call\", \"Dealing with cold start (10 labeling iterations of 10 labels each, i.e. 100 labels).\", \"4. The paper mentions that s(x) might not be differentiable. However, the sigmoid form of s(x) is differentiable.\", \"5. Does not acknowledge the well-known result that mixture models are unidentifiable. The math in the paper is mostly redundant. Some references:\", \"Identifiability Of Nonparametric Mixture Models And Bayes Optimal Clustering (https://www.cs.cmu.edu/~pradeepr/arxiv_npmix_v2.pdf)\", \"Semiparametric estimation of a two-component mixture model by Bordes, L., Kojadinovic, I., and Vandekerkhove, P., Annals of Statistics, 2006 (https://arxiv.org/pdf/math/0607812.pdf)\", \"Inference for mixtures of symmetric distributions by David R. Hunter, Shaoli Wang, Thomas P. Hettmansperger, Annals of Statistics, 2007 (https://arxiv.org/pdf/0708.0499.pdf)\", \"Inference on Mixtures Under Tail Restrictions by K. Jochmans, M. Henry, and B. Salanie, Econometric Theory, 2017 (http://econ.sciences-po.fr/sites/default/files/file/Inference.pdf)\", \"6. Does not acknowledge existing work that adds classifier over unsupervised detectors (such as AI2). This is very common.\", \"This is another linear model (logistic) on top of transformed features. The difference is that the transformed features are from a neural network and optimization can be performed in a joint fashion. The novelty is marginal.\", \"7. While the paper argues that a prior needs to be assumed, it does not use any in the algorithm. There seems to be a disconnect. It also does not acknowledge that AAD (LODA/Tree) does use a prior. Priors for anomaly proportions in unsupervised algorithms are well-known (most AD algos support that such as OC-SVM, Isolation Forest, LOF, etc.).\", \"8. Does not compare against current state-of-the-art Tree-based AAD\", \"Incorporating Expert Feedback into Tree-based Anomaly Detection by Das et al., KDD, 2017.\", \"9. 
The 'Generalized' in the title is incorrect and misleading. This is specific to deep-networks. Stacking supervised classifiers on unsupervised detectors is very common. See comments on related works.\", \"10. Does not propose any other query strategies than greedily selecting top.\", \"11. Question: Does this support streaming?\"], \"title\": \"Ignores existing work in statistics and semi-supervised anomaly detection, and does not make principled effort to overcome practical challenges like cold start and one-sided labels.\"}"
]
} |
|
ryxxCiRqYX | Deep Layers as Stochastic Solvers | [
"Adel Bibi",
"Bernard Ghanem",
"Vladlen Koltun",
"Rene Ranftl"
] | We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\tau$-nice Proximal Stochastic Gradient method. We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method. By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant $L$ used to determine the optimal step size of the above proximal methods. We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via $L$, consistently improves classification accuracy. | [
"deep networks",
"optimization"
] | https://openreview.net/pdf?id=ryxxCiRqYX | https://openreview.net/forum?id=ryxxCiRqYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeloZBJgN",
"r1xCRiQa07",
"BklKOMzu6m",
"r1lyXMGOaQ",
"SyeW5ZMOpX",
"H1xrLZGO6Q",
"H1gYZ-zO6Q",
"rJxVaxzdTX",
"SJeDMeMuTQ",
"BylxH1MM6X",
"rkeJkv2s2Q",
"rylFpJr_3Q",
"H1lSf8Ni9Q",
"rkxrOJlc5Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544667543729,
1543482326026,
1542099568796,
1542099479166,
1542099336917,
1542099276984,
1542099200535,
1542099131644,
1542098958657,
1541705527641,
1541289687096,
1541062592655,
1539159565383,
1539075948662
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper863/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper863/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"ICLR.cc/2019/Conference/Paper863/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper863/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper863/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper863/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper relates deep learning to convex optimization by showing that the forward pass though a dropout layer, linear layer (either convolutional or fully connected), and a nonlinear activation function is equivalent to taking one \\u03c4-nice proximal gradient descent step on a a convex optimization objective. The paper shows (1) how different activation functions correspond to different proximal operators, (2) that replacing Bernoulli dropout with additive dropout corresponds to replacing the \\u03c4-nice proximal gradient descent method with a variance-reduced proximal method, and (3) how to compute the Lipschitz constant required to set the optimal step size in the proximal step. The practical value of this perspective is illustrated in experiments that replace various layers in ConvNet architectures with proximal solvers, leading to performance improvements on CIFAR-10 and CIFAR-100. The reviewers felt that most of their concerns were adequately addressed in the discussion and revision, and that the paper should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting new perspective on deep learning\"}",
"{\"title\": \"response to author feedback\", \"comment\": \"I highly appreciate the author response to my reviews. I think most of my concerns have been addressed.\"}",
"{\"title\": \"Comment on a concurrent ICLR submission\", \"comment\": \"We would like to bring to the attention of the reviewers a well-rated concurrent ICLR submission, ``\\\"The Singular Values of Convolutional Layers\\\". The main result of that paper (Theorem 6) is equivalent to Lemma 2 in our submission. In our submission, this is a supporting result, the proof of which is relegated to the supplement. Our proof is simple and is one page long. This is enabled by Lemma 1, which enables the use of tensor theory in connecting convolutional and fully-connected layers in a unified framework. We believe this speaks to the value of our work as a whole.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"* We recommend reading this response in the revised PDF uploaded to OpenReview. This response is in Appendix M. The mathematical notation is easier to read in the PDF.\\n\\nWe thank R3 for the positive review and feedback. Below are our responses to all concerns.\\n\\nOn the major concerns.\\n\\n(1) Some clarity on the forward/backward pass of networks with Prox solvers.\\n\\nR3's description of the forward pass through the network with a Prox solver is correct. In general, the best way to understand how a network with a Prox solver operates in both forward and backward passes is to think of that layer with a Prox solver as a recurrent neural network. Thus, asking how one performs a backward pass through such a layer is equivalent to asking how one would perform a backward pass through a recurrent neural network. The backward pass through the Prox solver is still performed: not through the original network as R3 thought, but through the same network with the Prox solver, akin to backpropagation-through-time (BPTT). This resembles the parameter update procedure in recurrent neural networks.\\n\\n(2) Adjusting baselines to perform the same amount of computation for a fair comparison.\\n\\nR3 is correct about the fact that networks with Prox solvers do in fact perform more computation. However, the capacity of both networks (baseline and the Prox solver network) is identical. This means that both networks have the same exact number of parameters and there is no advantage of the Prox solvers in terms of capacity over the baseline. This is the essential factor for a fair comparison, since accuracy is often considered as a function of network capacity rather than the amount of computation. Moreover, note that to the best of our knowledge we have reported the best results for the baselines in comparison to any publicly available online repository that performs training without using any test data statistics (i.e. proper training). For instance, the results of VGG16 on CIFAR-10 are comparable to or better than some ResNet architectures on the same dataset. We are not aware of better numbers for the corresponding networks on the corresponding datasets. \\n\\nOn the minor issues.\\n\\n(1) Missing definitions.\\n\\nWe have addressed this in the revised version. We provided several examples of $g(\\\\mathbf{x})$ when it was first introduced in the first page. We have explicitly, as suggested, defined $p$ in Proposition 1.\\n\\n(2) Give examples of where the prox problems in Table 1 show up in practice (outside of activation functions in neural networks).\\n\\nWe are only aware of applications in activation functions in neural networks. But this is sufficient motivation for us.\\n\\n(3) On the statement ``for different choices of dropout rate the baseline can always be improved by...\\\" in the Experiments.\\n\\nWe have softened the statement in the revision. The statement is now ``we observe that for different choices of dropout rate the baseline performance improves upon replacing ...\\\".\\n\\n(4) Include results for Dropout rate p=0 in Table 5.\\n\\nWe have added this experiment to Table 5.\"}",
"{\"title\": \"Continuation to the Response to R2 (4/4)\", \"comment\": \"(3) The experiments are only conducted on two datasets (i.e., CIFAR-10, CIFAR-100). It would be better to compare more baselines on other datasets, such as ImageNet.\\n\\nWe were mostly interested in the theory around connecting stochastic solvers to feed-forward passes through networks and generalizing linear layers through tensor theory, which will potentially inspire the design of new dropout layers. Many machine learning and optimization papers conduct experiments on MNIST only, or MNIST and CIFAR-10 [8,9,10]. We have conducted experiments on both CIFAR-10 and CIFAR-100, which match or exceed the complexity of datasets used in multiple comparable papers published in NIPS/ICML/ICLR in recent years. We do not think that it is necessary for our work to conduct experiments on ImageNet.\", \"references\": \"[1] \\\"Minimizing Finite Sums with the Stochastic Average Gradient\\\", Mark Schmidt, Nicolas Le Roux, Francis Bach.\\n[2] \\\"Stochastic Dual Coordinate Ascent Methods for Regularized Loss\\nMinimization\\\" JMLR14, Shai Shalev-Shwartz, Tong Zhang.\\n[3] \\\"SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives\\\", NIPS14, Aaron Defazio, Francis Bach, Simon Lacoste-Julien.\\n[4] \\\"Stochastic Proximal Gradient Descent with Acceleration Techniques\\\". NIPS14, Atsushi Nitanda.\\n[5] \\\"Introductory Lectures on Convex Optimization: A Basic Course (Applied Optimization)\\\". Kluwer Academic Publishers, 2004, Yuri Nesterov.\\n[6] \\\"On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks\\\", arXiv 2018, Jeremias Sulam, Aviad Aberdam and Michael Elad.\\n[7] \\\"ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing\\\", CVPR18, Jian Zhang and Bernard Ghanem.\\n[8] \\\"Neural Ordinary Differential Equations\\\", NIPS18, Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud.\\n[9] \\\"Variational Dropout and the Local Reparameterization Trick\\\", NIPS15, Diederik P. Kingma, Tim Salimans, and Max Welling.\\n[10] \\\"Variational Dropout Sparsifies Deep Neural Networks\\\", ICML17, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov.\\n[11] \\\"High order tensor formulation for convolutional sparse coding\\\", ICCV17, Adel Bibi and Bernard Ghanem.\\n[12] \\\"Factorization strategies for third-order tensors\\\", Linear Algebra and its Applications 2011, Misha Kilmer, Carla Martin.\"}",
"{\"title\": \"Continuation to the Response to R2 (3/4)\", \"comment\": \"(6) On the proof of Lemma 2 and the orthogonality of $\\\\mathbf{F}_H \\\\otimes \\\\mathbf{F}_W \\\\otimes \\\\mathbf{I}_{n_1}$. Is the third and fourth equalities in (24) wrong? On the fourth equality in (24).\\n\\nThe proof in Lemma 2 is correct. First, note that the normalized DFT matrices $\\\\mathbf{F}_{H}$ and $\\\\mathbf{F}_W$ and the identity matrix $\\\\mathbf{I}$ are orthogonal/unitary. It is trivial to prove using properties of the Kronecker product that the Kronecker product of orthogonal matrices is an orthogonal matrix. Note that for matrices $\\\\mathbf{A}, \\\\mathbf{B},\\\\mathbf{C},\\\\mathbf{D}$ of appropriate sizes, $(\\\\mathbf{A} \\\\otimes \\\\mathbf{B})^\\\\top = \\\\mathbf{A}^\\\\top \\\\otimes \\\\mathbf{B}^\\\\top$ and $(\\\\mathbf{A} \\\\otimes \\\\mathbf{B}) (\\\\mathbf{C} \\\\otimes \\\\mathbf{D}) = \\\\mathbf{A} \\\\mathbf{C} \\\\otimes \\\\mathbf{B} \\\\mathbf{D}$. Using the previous properties, the proof follows trivially. The only typo is in the text below Eq (24). $\\\\mathcal{\\\\hat{A}}(:,:,i,j) \\\\mathcal{\\\\hat{A}}^{\\\\textbf{H}}(:,:,i,j) = \\\\mathbf{U}_{ij} \\\\Sigma_{ij} \\\\mathbf{U}^{\\\\mathbf{H}}_{ij}$. That is, we are performing an eigendecomposition of the faces of the tensors which are matrices. We have corrected the typo and changed the notation a bit in Eq (24) for further clarity.\\n\\nComments on Experiments.\\n\\n(1) Training Networks is equivalent to optimizing proximal solvers. Why can training networks with solvers replacing blocks of layers improve accuracy? Reasonable explanations should be provided.\\n\\nIn general, we do not have theoretical generalization bounds justifying the improvement. However, similar observations [6,7] have been made in various contexts upon designing networks. It seems that the recurrent structure in hidden activations has merits in improving generalization but no one has thorough theoretical foundations for this yet. Such behaviour has also been observed very recently in [8] where ODE solvers are used rather than optimization solvers. Our hypothesis is that since the output of a feed-forward network approximates the optimal solution of some convex optimization in each layer, attaining better solutions by performing more iterations of the corresponding solver may improve accuracy.\\n\\n\\n(2) Optimizing a convex objective can easily obtain the optimal solution. What happens if solvers are used to replace more blocks of layers? Complexity analysis for these should be provided.\\n\\nOptimizing convex objectives is easier than other problems in the sense of global optimality guarantees and analysis of various deterministic and stochastic algorithms. However, achieving high accuracy solutions may still require performing a large number of iterations (e.g., $>10^3$). In our framework, larger number of iterations will increase the computational complexity of the network, particularly during training. On the other hand, replacing more blocks of layers with solvers (each with a small number of iterations) has the potential to improve the performance as highlighted in Table (4). However, replacing more blocks of feed-forward networks with solvers will suffer from diminishing returns, i.e. the improvements will tend to be smaller with more blocks replaced with solvers. This is firstly because many networks are already close to saturating datasets (e.g., VGG16 on CIFAR10). 
That is to say that the baseline networks are already achieving high accuracy ($> 90\\\\%$). Secondly, some networks do not have enough capacity to improve on certain datasets (e.g., AlexNet on CIFAR-100). Note that replacing blocks of layers with solvers does not increase capacity; thus the improvement is attributed to the new structure and not to any form of over-parameterization of the network. As for complexity, adding $t$ layers of a solver is as expensive as performing $t$ feed-forward passes through a single layer.\"}",
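For completeness, the one-line orthogonality argument referenced in point (6) follows from the conjugate-transpose analogues of the two Kronecker identities quoted above:

```latex
\[
\big(\mathbf{F}_H \otimes \mathbf{F}_W \otimes \mathbf{I}_{n_1}\big)^{\mathbf{H}}
\big(\mathbf{F}_H \otimes \mathbf{F}_W \otimes \mathbf{I}_{n_1}\big)
= \big(\mathbf{F}_H^{\mathbf{H}}\mathbf{F}_H\big) \otimes \big(\mathbf{F}_W^{\mathbf{H}}\mathbf{F}_W\big) \otimes \mathbf{I}_{n_1}
= \mathbf{I} \otimes \mathbf{I} \otimes \mathbf{I}_{n_1} = \mathbf{I}.
\]
```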
"{\"title\": \"Continuation to the Response to R2 (2/4)\", \"comment\": \"Detailed Comments.\\n\\n(1) Definition of $g(\\\\mathbf{x})$ and $f_i(\\\\mathbf{x})$.\\n\\nThe structure in problem (1) is standard in the machine learning and optimization communities [1,2,3,4]. As suggested by R2 and R3, we have added further details and examples in the first page. Note that for instance, if $f_i(\\\\mathbf{a}_i^\\\\top \\\\mathbf{x}) = \\\\frac{1}{2}(\\\\mathbf{a}_i^\\\\top \\\\mathbf{x} - \\\\mathbf{y}_i)^2$ and $g(\\\\mathbf{x}) = \\\\frac{1}{2}\\\\|\\\\mathbf{x}\\\\|_2^2$, we recover ridge regression, while with $g(\\\\mathbf{x}) = \\\\|\\\\mathbf{x}\\\\|_1$ we recover LASSO regression.\\n\\n(2) The motivation and some details of Function (2). In addition, $x$ should be corrected as $x^l$.\\n\\nThe missing superscript was corrected. It is not clear to us what R2 is asking for in terms of extra motivation and details. Could you clarify, so that we can alleviate this concern in the paper?\\n\\n(3) Is Equation (3) wrong?\\n\\nNo, Eq (3) is correct. Note that $\\\\mathbf{x}^l$ is the optimization variable while $\\\\mathbf{x}^{l-1}$ is a fixed optimization parameter. R2 is confusing $\\\\mathbf{x}^{l-1}$ (the activation output of the $l-1$ layer) with the optimization variable $\\\\mathbf{x}^l$ (the output activations of layer $l$). Thus, unlike what is stated by R2, the Prox-GD update has the following form: $\\\\mathbf{x}^l \\\\leftarrow \\\\text{Prox} \\\\left(\\\\mathbf{x}^l - \\\\frac{1}{L} \\\\nabla F(\\\\mathbf{x}^l)\\\\right)$. Consequently, we have $\\\\mathbf{x}^l \\\\leftarrow \\\\text{Prox} \\\\left(\\\\mathbf{x}^l - \\\\frac{1}{L}\\\\left(\\\\mathbf{A}\\\\mathbf{A}^\\\\top\\\\mathbf{x}^l - \\\\mathbf{A} \\\\mathbf{x}^{l-1} - \\\\mathbf{b}\\\\right)\\\\right)$, which is identical to Eq (3). Lastly, R2 is asking to prove ``the Lipschitz constant w.r.t. maximal eigenvalue''. We do not understand the request. It is well-known that the maximum eigenvalue of the Hessian of $F(\\\\mathbf{x}^l)$ induces the tightest quadratic upper bound for the smooth part $F(\\\\mathbf{x})$. This leads to the optimal largest step size $\\\\frac{1}{L}$ in Prox-GD. This is well-known and straightforward to show [5]. \\n\\n(4) Definition of $\\\\text{fold}_{\\\\text{H0}}(.)$? Is the dimensionality of $\\\\text{bdiag}(\\\\mathcal{D})$ wrong? Why is $\\\\text{bdiag}(\\\\mathcal{D})$ an identity mapping when $n_3=n_4$?\\n\\na) The definition of $\\\\text{fold}_{\\\\text{HO}}(.)$ can be found in [11] and in [12] for third-order tensors. The operator $\\\\text{fold}_{\\\\text{H0}}(.)$ reshapes the elements of a matrix into a tensor. The precise reshape procedure can be best described as the inverse reshape to the tensor-to-matrix unfold operator $\\\\text{MatVec}_{\\\\text{HO}}(.)$ such that the following holds: $\\\\text{fold}_{\\\\text{HO}} \\\\left(\\\\text{MatVec}_{HO}\\\\left(\\\\mathcal{A}\\\\right)\\\\right)= \\\\mathcal{A}$. For example, for a third-order tensor $\\\\mathcal{A} \\\\in \\\\mathbb{R}^{n_1 \\\\times n_2 \\\\times n_3}$, $\\\\text{MatVec}_{\\\\text{HO}}\\\\left(\\\\mathcal{A}\\\\right) \\\\in \\\\mathbb{R}^{n_1 n_3 \\\\times n_2}$, which is a matrix, while $\\\\text{fold}_{\\\\text{HO}} \\\\left(\\\\text{MatVec}_{HO}\\\\left(\\\\mathcal{A}\\\\right)\\\\right)$ reshapes that matrix into the original tensor $\\\\mathcal{A} \\\\in \\\\mathbb{R}^{n_1 \\\\times n_2 \\\\times n_3}$.\\n\\nb) Regarding $\\\\text{bdiag}(\\\\mathcal{D})$, there are indeed two typos. 
The dimensionality of $\\\\text{bdiag}(\\\\mathcal{D})$ is $\\\\mathbb{C}^{n_1 n_3 n_4 \\\\times n_2 n_3 n_4}$. If $n_3 = n_4 = 1$ then $\\\\text{bdiag}(\\\\mathcal{D})$ is an identity mapping. We have corrected the typos in the revision.\\n\\n(5) Some issues in Eqs (7,8). Does Eq (25) miss the operator $\\\\text{fold}_{\\\\text{HO}}(.)$ in Appendix G?\\n\\nEqs (7,8,25) are correct with nothing missing. Eq (25) is not missing any operator. The reason $\\\\text{fold}_{\\\\text{HO}}(.)$ did not appear in Eq (25) simply follows from the fact that the Frobenius norm of a tensor and the Frobenius norm of the unfolded tensor are identical since $\\\\text{fold}_{\\\\text{HO}}(.)$ performs reordering of the elements. That is, $\\\\|\\\\text{MatVec}_{\\\\text{HO}} \\\\left(\\\\mathcal{A}\\\\right)\\\\|_F^2 = \\\\|\\\\text{fold}_{\\\\text{HO}}\\\\left(\\\\text{MatVec}_{\\\\text{HO}} \\\\left(\\\\mathcal{A}\\\\right)\\\\right)\\\\|_F^2$. This has also been discussed in the text below Eq (23) as this identity appears there too.\"}",
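To make items (1) and (3) of the response above concrete, the following is a minimal numerical sketch. It is not code from the paper: the array shapes, the data, and the choice of the nonnegativity-indicator regularizer (whose prox is ReLU) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Item (1): the composite structure g(x) + (1/n) sum_i f_i(a_i^T x).
n, d = 50, 10
A_data = rng.standard_normal((n, d))    # rows are the a_i
y = rng.standard_normal(n)
lam = 0.1

def ridge_obj(x):  # f_i(t) = 0.5 (t - y_i)^2 and g(x) = 0.5 lam ||x||_2^2
    return 0.5 * np.mean((A_data @ x - y) ** 2) + 0.5 * lam * np.sum(x ** 2)

def lasso_obj(x):  # same f_i, but g(x) = lam ||x||_1
    return 0.5 * np.mean((A_data @ x - y) ** 2) + lam * np.sum(np.abs(x))

# Item (3): one Prox-GD step x^l <- Prox(x^l - (1/L) grad F(x^l)), with
# grad F(x^l) = A A^T x^l - A x^{l-1} - b and x^{l-1} held fixed.
m = 8
A = rng.standard_normal((m, d))         # layer weights
b = rng.standard_normal(m)              # layer bias
x_prev = rng.standard_normal(d)         # x^{l-1}: activations of layer l-1
x = np.zeros(m)                         # x^l: the optimization variable

L = np.linalg.eigvalsh(A @ A.T).max()   # Lipschitz constant lambda_max(A A^T)
grad_F = A @ A.T @ x - A @ x_prev - b
x = np.maximum(x - grad_F / L, 0.0)     # prox of the indicator of {x >= 0} is ReLU
print(ridge_obj(np.zeros(d)), lasso_obj(np.zeros(d)), x)
```

Starting from x = 0, the prox step reduces to ReLU((A @ x_prev + b) / L), i.e., a (1/L)-scaled forward pass through a linear layer followed by ReLU; the exact normalization in the paper may differ, but this is the equivalence being described.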
"{\"title\": \"Response to R2 (1/4)\", \"comment\": \"* We recommend reading this response in the revised PDF uploaded to OpenReview. This response is in Appendix L. The mathematical notation is easier to read in the PDF.\\n\\nWe thank R2 for the comments and the positive feedback on the novelty of our approach.\\n\\nR2 raised several issues regarding the proofs. We have proofread all the proofs and there are no factual errors. In fact, some of the main results, e.g. Lemma 2, have also been verified numerically. However, there were some minor typos and non-standard notation that may have been confusing. We have corrected these typos in the revised version uploaded to OpenReview. Below is a detailed answer to all of R2's concerns.\\n\\n\\nGeneral Comments.\\n\\n(1) Unclear technical details and mistakes in the proofs.\\n\\nThere are no mistakes in the proofs. We have thoroughly checked them and they are all correct. There were some typos that may have been behind R2's confusion. We have corrected these typos and improved the notation.\\n\\n(2) Limitations of the current method. What about ResNets and BatchNorm?\\n\\nOur framework can be directly applied to ResNets. A ResNet block can be viewed as two consecutive stochastic solvers. We have added Appendix J discussing this in the supplementary material. This approach is somewhat simplistic; there is room for exciting future work. Normalization layers are easy to handle. Note that during test time normalization layers are linear; thus they can be combined with the fully-connected or convolutional layer as a single linear layer.\"}",
"{\"title\": \"Response To R1\", \"comment\": \"We thank R1 for the positive comments and review.\\n\\n(1) Definition of $\\\\lambda_{\\\\max}$ in Eq (3).\\n\\nWe have adjusted the text below Eq (3) to clearly state that $\\\\lambda_{\\\\text{max}}(.)$ is the maximum eigenvalue function.\\n\\n(2) On the implementation of solvers.\\n\\nThe block of layers (dropout followed by linear and nonlinear layers) is replaced with an iterative solver, i.e. recurrent layer, that performs the Prox operator several times before generating an output. This is for the forward pass through the network. As for backpropagation, it is performed by simply unrolling the layers. This is commonly referred to as backpropagation through time in recurrent neural networks. While this is not particularly efficient in general, there are several potential ways to improve this by taking gradients implicitly through the argmin operator [1,2]. We leave this to future work.\\n\\n(3) On the number of iterations.\\n\\nThe number of iterations was always kept constant. It was set to 10 as stated at the end of page 6 for all small networks. As for larger networks such as VGG16, this number is fixed to 30 iterations as discussed in page 7 (just below Table 3). At present we do not have a universal criterion for choosing the number of iterations; this is treated as a hyperparameter.\\n\\n\\n[1] ``Techniques for Gradient-Based Bilevel Optimization with Non-smooth Lower Level Problems\\\", Peter Ochs, Ren\\u00e9 Ranftl, Thomas Brox, Thomas Pock.\\n\\n[2] ``On Differentiating Parameterized Argmin and Argmax Problems with Application to Bi-level Optimization\\\", Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, Edison Guo.\"}",
"{\"title\": \"Review of Deep Layers as Stochastic Solvers\", \"review\": \"Overview: This paper shows that the forward pass of a fully-connected layer (generalized to convolutions) followed by a nonlinearity in a neural network is equivalent to an iteration of a prox algorithm, where different regularizers in the objective of the related prox problem correspond to different nonlinearities such as ReLu. This connection is quite interesting. They further relate different stochastic prox algorithms to different dropout layers and show results of improved performance on CIFAR-10 and CIFAR-100 on several architectures. The paper is well-written.\", \"major_concerns\": \"1. While the equivalence of one iteration of a prox algorithm and a single forward pass of the block is understandable, it is not clear what happens from making several iterations (10 in the case of fully-connected layers in the experiments) of the prox algorithm. It seems that this would be equivalent to making a forward pass through 10 equivalent blocks (i.e., 10 layers with the same weights and biases). But then the backward pass is still through the original network, so the problem being solved is not clear. Clarity on this would help.\\n\\n2. Since the equivalence of 10 forward passes of a block are done at each iteration, using solvers does more computations (can be thought of as extra forward passes through extra layers as noted above), which makes the comparison not completely fair. Either adding more batches or more passes over the same batch multiple times (or at least for a few batches just to use the some computational power) would be more fair and likely improve the performance of the baseline networks.\", \"minor_issues\": \"1. missing definitions such as g(x) at beginning of Section 3 and p in Proposition 1.\\n\\n2. Give examples of where the prox problems in Table 1 show up in practice (outside of activation functions in neural networks)\\n\\n3. It says \\\"for different choices of dropout rate the baseline can always be improved by...\\\" in the Experiments. This is not provable.\\n\\n4. Include results for Dropout rate p=0 in Table 5.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review comments on \\u201cDeep Layers as Stochastic Solvers\\u201d\", \"review\": \"This paper theoretically verifies an equivalence between stochastic solvers on a particular class of convex optimization problems and a forward pass through a dropout layer followed by a linear layer and a non-linear activation. Experiments show that replacing a block of layers with multiple iterations of the corresponding solver improves classification accuracy. My detailed comments are as follows.\\n\\n*Positive points: \\n\\n1. The perspective is novel and interesting, i.e., training a forward pass through a dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex problem by a Proximal Stochastic Gradient method. More importantly, this perspective has been theoretically verified. \\n\\n2. In the experiments, training networks with solvers replacing deep layers is able to improve accuracy significantly. \\n\\n*Negative points:\\n\\n1. Some technical details are not clear and many notations are used without clear explanations. Specifically, many notations based on (Bibi & Ghanem, 2017) make the paper hard to follow. Moreover, there are many mistakes in proofs. Please revise the paper according to the following comments.\\n\\n2. There are many limitations for the proposed method. Specifically, the theoretical results are hard to be extended to more general neural networks (e.g., ResNet) with Batch Normalization which are widely used.\\n\\n3. The experiment section should be significantly improved. There are only two datasets (i.e., CIFAR-10, CIFAR-100). It would be convincing that more baselines are compared on other datasets, such as ImageNet.\\n\\n*Detailed comments:\\n\\n**Comments on technical issues.\\n\\n1. In Problem (1), the definition of $g(x)$ and $f\\u00ac_i()$ should be provided for clarity.\\n\\n2. The motivation and some details of Function (2) should be provided since $F(x^l)$ is important for proving the equivalence between stochastic solvers and a forward network. In addition, $x$ should be corrected as $x^l$.\\n\\n3. Is Equation (3) wrong? Based on the definition of Prox-GD in (Xiao & Zhang, 2014), it should be $x^l=Prox(x^{l-1} \\u2013 1/L \\\\nabla F(x^l)) = Prox((I-1/L A)x^{l-1} + 1/L (AA^T x^l + b))$ which is different from Equation (3). Moreover, the Lipschitz constant w.r.t. maximal eigenvalue should be proved.\\n\\n4. In Definitions D.1 and D.2, what is the definition of $fold_{H0}$? Is the dimensionality of $bdiag(D)$ wrong? Why is $bdiag(D)$ an identity mapping when $n_3=n_4$?\\n\\n5. There are some issues on Equation (7) and its proofs. Is $A(:, i, :, :)$ and $\\\\vec{X}(i, :, :, :) $ wrong? It affects the results of Equation (8). Does Equation (25) miss the operator $fold_{HO}$ in Appendix G? Please check the proofs of Proposition 1.\\n\\n6. There are some issues on proofs of Lemma 2. Why are $F_H \\\\otimes F_W \\\\otimes I_{n_1}$ and $F_H \\\\otimes F_W \\\\otimes I_{n_2}$ orthogonal? Is the third and fourth equality in (24) wrong? For the fourth equality in (24), Eigen decomposition seems to be for a matrix, not a tensor.\\n\\n\\n**Comments on Experiments\\n\\n1. Training Networks is equivalent to optimizing proximal solvers. Why can training networks with solvers replacing blocks of layers improve accuracy? Reasonable explanations should be provided.\\n\\n2. Optimizing a convex optimization problem can easily obtain the optimal solution. What happens if solvers are used to replace more blocks of layers? 
Complexity analysis for these cases should be provided.\\n\\n3. The experiments are only conducted on two datasets (i.e., CIFAR-10, CIFAR-100). It would be better to compare more baselines on other datasets, such as ImageNet.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting paper, should be accepted\", \"review\": \"This paper presents a very interesting interpretation of the neural network architecture.\\n\\nI think what is remarkable is that the author presents the general results (beyond the dense layer) including a convolutional layer by using the higher-order tensor operation.\\nAlso, this research gives us new insight into the network architecture, and have the potential which leads to many interesting future directions. \\nSo I think this work has significant value for the community.\\n\\nThe paper is clearly written and easy to follow in the meaning that the statement is clear and enough validation is shown. (I found some part of the proof are hard to follow.)\\n\\n\\\\questions\\nIn the experiment when you mention about \\\"embed solvers as a replacement to their corresponding blocks of layers\\\", I wonder how they are implemented. About the feedforward propagation, I guess that for example, the prox operator is applied multiple times to the input, but I cannot consider what happens about the backpropagation of the loss.\\n\\nIn the experiment, the author mentioned that \\\"what happens if the algorithm is applied for multiple iterations?\\\". From this, I guess the author iterate the corresponding algorithms several times, but actually how many times were the iterations or are there any criterion to stop the algorithm?\\n\\n\\\\minor comments\\nThe definition of \\\\lambda_max below Eq(3) are not shown, thus should be added.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Thank you for the feedback\", \"comment\": \"Thank you for pointing out this reference which is indeed relevant to our work. We will include it in the revised version of the paper. Note that Kobler et al. propose an architecture (variational networks) for the task of image reconstruction that is motivated by approximate connections between ResNet blocks and proximal point methods. Their approach naturally fits in the line of work discussed in our Related Work section on the merits of guiding the design of deep networks using optimization algorithms to derive architectures for a given task (such as image reconstruction). In our work, we study the connections between general, existing models and optimization problems. We explain Dropout layers in this framework and introduce a practical way to compute the Lipschitz constant. This allows us to introduce the general strategy of replacing layers of existing architectures with solvers.\"}",
"{\"comment\": \"The authors provide an interesting view on layers of typical feed forward (C)NNs by drawing connections to proximal operators and thus convex optimization. In Section 3.2 equation (3), the authors highlight that a single layer can be interpreted as a proximal gradient step on a convex function F(x) + g(x). A closely related connection was also drawn by [1] for residual networks. Kobler et al. analyzed the connections to incremental proximal gradient methods for both convex and non-convex F(x). Using this relation, Kobler et al. interpreted a block of residual layers as a sequence of proximal incremental gradient steps that minimize composite functions just like equation (1) in this paper.\\n\\nDue to the aforementioned relations to [1], the authors should consider citing [1].\\n\\n[1] Kobler et al. \\\"Variational networks: connecting variational methods and deep learning\\\", German Conference on Pattern Recognition, 2017.\", \"title\": \"Relation to incremental proximal gradient methods\"}"
]
} |
|
S1lg0jAcYm | ARM: Augment-REINFORCE-Merge Gradient for Stochastic Binary Networks | [
"Mingzhang Yin",
"Mingyuan Zhou"
] | To backpropagate the gradients through stochastic binary layers, we propose the augment-REINFORCE-merge (ARM) estimator that is unbiased, exhibits low variance, and has low computational complexity. Exploiting variable augmentation, REINFORCE, and reparameterization, the ARM estimator achieves adaptive variance reduction for Monte Carlo integration by merging two expectations via common random numbers. The variance-reduction mechanism of the ARM estimator can also be attributed to either antithetic sampling in an augmented space, or the use of an optimal anti-symmetric "self-control" baseline function together with the REINFORCE estimator in that augmented space. Experimental results show the ARM estimator provides state-of-the-art performance in auto-encoding variational inference and maximum likelihood estimation, for discrete latent variable models with one or multiple stochastic binary layers. Python code for reproducible research is publicly available. | [
"Antithetic sampling",
"variable augmentation",
"deep discrete latent variable models",
"variance reduction",
"variational auto-encoder"
] | https://openreview.net/pdf?id=S1lg0jAcYm | https://openreview.net/forum?id=S1lg0jAcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgbEpWZ44",
"BJli5EK3y4",
"BkewAbI90m",
"S1e-Q1U9RQ",
"Hyxt0RfFAX",
"rklw0lkOA7",
"BkgaNi6vRX",
"r1gn81e_aX",
"rkl9cK9Uam",
"HJgQoB0An7",
"rkgyPSRRhX",
"r1eg1BCR3X",
"rJxEaPoRn7",
"rygqdW892X",
"S1xuO4yqn7"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1548979497430,
1544488082923,
1543295438593,
1543294745177,
1543216849331,
1543135438668,
1543129908885,
1542090579759,
1542003089951,
1541494170977,
1541494103452,
1541493976254,
1541482427644,
1541198193541,
1541170288405
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/Authors"
],
[
"ICLR.cc/2019/Conference/Paper862/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper862/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Author response\", \"comment\": \"We thank the AC for his/her comments. Below please find our response.\\n\\nWe'd like to clarify that while we focus on presenting the test negative log-likelihoods/ELBOs in the tables, we have provided the trace plots of training and validation negative ELBOs in Figures 2, 6, 7. \\n\\nThe toy experiment of maximizing E_z [(z-p_0)^2] indeed is very easy for the proposed ARM estimator, but becomes very challenging for REBAR and RELAX when p_0 approaches 0.5.\\n\\nWe use $\\\\phi$ for the parameters to avoid confusion for our experiments in discrete variational autoencoders, where $\\\\phi$ is commonly used to denote the encoder parameter, while $\\\\theta$ is commonly used to denote the decoder parameter. While the derivation may still appear complicated and heavy in notation, the actual implementation is in fact rather straightforward.\\n\\nAntithetic sampling for variance reduction is indeed old. However, antithetic sampling only becomes useful after performing variable augmentation and REINFORCE in the augmented space; without the Augment and REINFORCE steps, it is unclear how antithetic sampling can be applied to binary variables. \\n\\nCategorical extension of the binary ARM estimator involves much more sophisticated variable-swap and merge operations (much more notation heavy). We had a preliminary solution, which can be found in https://arxiv.org/abs/1807.11143 , and we have recently discovered another significantly improved solution. We plan to update that ArXiv submission in the near future.\"}",
"{\"metareview\": \"This paper introduces a new way to estimate gradients of expectations of discrete random variables by introducing antithetic noise samples for use in a control variate.\", \"quality\": \"The experiments are mostly appropriate, although I disagree with the choice to present validation and test-set results instead of training-time results. If the goal of the method is to reduce variance, then checking whether optimization is improved (training loss) is the most direct measure. However reasonable people can disagree about this.\\n\\nI also think the toy experiment (copied from the REBAR and RELAX paper) is a bit too easy for this method, since it relies on taking two antithetic samples. I would have liked to see a categorical extension of the same experiment.\", \"clarity\": \"I think that this method will not have the impact it otherwise could because of the authors' fearless use of long equations and heavy notation throughout. This is unavoidable to some degree, but\\n1) The title of the paper isn't very descriptive\\n2) Why not follow previous work and use \\\\theta instead of \\\\phi for the parameters being optimized?\\nThe presentation has come a long way, but I fear that few besides our intrepid reviewers will have the stomach. I recommend providing more intuition throughout.\", \"originality\": \"The use of antithetic samples to reduce variance is old, but this seems like a well-thought-through and non-trivial application of the idea to this setting.\", \"significance\": \"Ultimately I think this is a new direction in gradient estimators for discrete RVs. I don't think this is the last word in this direction but it's both an empirical improvement, and will inspire further work.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good contribution, still a slog to read\"}",
"{\"title\": \"Thank you for re-evaluating our paper based on our revision and response\", \"comment\": \"We greatly appreciate that you have taken our revision and response into consideration and moved your rating upwards.\\n\\nWe agree with your suggestion on using \\\"variable augmentation\\\" when describing the augmentation of a random variable.\"}",
"{\"title\": \"Thank you for your additional feedback and we have made revisions accordingly\", \"comment\": \"We greatly appreciate your additional comments and suggestions. Below please find our point-by-point response:\\n\\nThe positive (or negative) assumption on f is true if f is the ELBO or log-likelihood. While we sometimes make this assumption, mainly for the purpose of simplifying some of the theoretical analyses (e.g., guaranteeing there will always be variance reduction), we don't believe it is a critical requirement. We are extending the ARM estimator to several other applications, in some of which (such as reinforcement learning) f can take both positive and negative values, and we are further investigating how the property of f influences the performance. We hope we can get better insights and report our findings in future publications. \\n\\nWe have changed \\\"as originally derived by\\\" to \\\"as we initially derived it by\\\" in the revised paper.\\n\\nWe have changed $\\\\nu$ to $j$ in (8)-(10).\\n\\nFor the third row of Figure 1, we have now increased the samples to K=5000 to compute the empirical gradient variances. The reason that we do not use even more is because 1) the empirical variance plots now appear sufficiently smooth with K=5000, and 2) it already takes RELAX about 10 minutes to produce a single trace plot with K=5000, and use K=1000,000 would take a much longer time. \\n\\nWe have now added the theoretical values, which can be viewed as the sample means with K = infinity, of the gradient variance into the updated Figure 1. \\n\\nWe have modified the figures to use the same scale on the y-axis for better visual comparison, except if these values differ so significantly in scale that using the same scale may lose useful information.\\n\\nWe have added the theoretical values, which can be viewed as the sample means with K = infinity, of the gradient standard deviation and SNR of REINFORCE, AR, and ARM into the updated Figure 5.\\n\\nWe have now defined and plotted SNR as the absolute value in Figure 5.\"}",
"{\"title\": \"Thanks for all the improvements, which made the paper even clearer and stronger.\", \"comment\": \"I really appreciate the authors taking my feedback on board in the the way they did. Section 2.1 in particular is now extremely clear and concise, and I think the paper is stronger because of it. The additional plots are also extremely helpful.\", \"a_few_minor_comments\": \"In a number of places the theoretical analysis presented assumes f is always positive (or negative). I would point out that this is quite artificial. In practice, when using say, REINFORCE, one would subtract at least a very approximate mean f value from f in order to reduce the gradient estimate variance.\\n\\nIn section 2.1, \\\"as originally derived\\\" to me makes it sound like it was previously derived in another paper (especially if people aren't familiar with the references).\\n\\nUse of $\\\\nu$ and $v$ in (8), (9), (10) seems unnecessarily confusing visually.\\n\\nIn general it's extremely helpful for comparable plots for different systems to have the same axis scales. Is there any reason to use the differing scales in Figure 1? The difference in overall scale is often the main point, and easy to miss if all the plots are scaled differently. Also, why not use K = 100000 or something to get nice smooth plots of variance?\\n\\nSame comments about using a consistent scale for Figure 4 and Figure 5. And again, why not use K = very large for Figure 5 to get nice smooth plots (since it's essentially a theoretical analysis)? Also, plot SNR as the absolute value?\"}",
"{\"title\": \"Response to AR3's minor comments\", \"comment\": \"\", \"to_minor_comments\": \"Q1) In the introduction, \\\"*approximately* maximizing the marginal likelihood\\\" might be more accurate, since as given in (28) the exact marginal likelihood is not optimized in practice, and the exact marginal likelihood is not of the form (1) but is rather the logarithm of something of the form (1).\", \"a1\": \"We agree with you and we have revised accordingly.\\n\\nQ2) In section 2.3, I don't see any real reason the estimates in (9) and (11) \\\"could be highly positively correlated\\\", other than an argument along the lines of the simple one given in section 2.6 that they're often equal and so zero.\", \"a2\": \"The intuition is that if $f$ is always positive/negative, the scales of the two quantities $f(1_[\\\\epsilon_1 e^{-\\\\phi}< \\\\epsilon_2])(1-\\\\epsilon_1)$ and $f(1_[\\\\epsilon_2 e^{-\\\\phi}< \\\\epsilon_1])(1-\\\\epsilon_1)$ could be mainly influenced by the (1-\\\\epsilon_1) term, which are shared by both quantities. To make it more concrete, in Proposition 3 and Appendix C of revised version, we have mathematically shown the variance of ARM with K Monte Carlo samples is lower than the AR estimator with 2K samples, suggesting the two quantities are positively correlated if $f$ is always positive/negative.\\n\\nQ3) As an aside, in section 3.1, it is great not to assume conditional independence of the binary latent variables across layers, but assuming conditional independence within each layer is still very restrictive. It is reasonable for the generative distribution to have this property, since the resulting net can still be essentially \\\"universal\\\" by stacking enough layers, but assuming this factorization in the variational distribution is highly restrictive with hard-to-reason-about consequences for the learned generative model. I realize this is a commonly used assumption and the authors are interested in the variance reduction properties of their approach rather than the training itself, but I just mention that it would be great to see extensions of the current work that can cope tractably with correlated latent variables within each layer.\", \"a3\": \"Thank you for these nice suggestions! Currently we follow the common assumption that the latent variables in a certain layer given the latent variables from its upper layer are conditionally independent (marginally they are dependent). We will carefully consider possible extensions in our future research that can utilize correlated latent variables within each layer for the variational distribution.\\n\\nQ4) In section 4, it would be great to see some plots of explicit variance estimates of the different methods, given the overall goal of the paper (unless I just missed this?), even though figure 1 gives some insight into the variance characteristics.\", \"a4\": \"We have added more gradient variance plots in Figures 1, 3, & 5.\\n\\nQ5) In section 4.2, the expression log 1/K \\\\sum_k Bernoulli... differs in the placement of log from Jang et al (2017). 
Which is the standard convention for this task?\", \"a5\": \"If we sample h_k i.i.d. from p(h \\\\given x_u), then by the law of large numbers, log ( 1/K \\\\sum_k p(x_l \\\\given h_k) ) will converge to the desired log marginal likelihood log( p(x_l \\\\given x_u) ), which is a standard way to estimate the log marginal likelihood, as in, for example, \\u201cImportance Weighted Autoencoders\\u201d (Burda et al., 2016). (A short numerically stable sketch of this estimate is given after this response.) Jang et al (2017) used 1/K \\\\sum_k log(p(x_l \\\\given h_k)) as the training target, but since Gumbel-softmax and ARM both use a single-Monte-Carlo-sample gradient estimate (i.e., K=1), the two expressions are the same as log(p(x_l \\\\given h_1)) in practice.\"}",
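For reference, here is a short sketch (ours, not from the paper; the numeric values are purely illustrative) of computing log((1/K) sum_k p(x_l | h_k)) stably from per-sample log-likelihoods via log-sum-exp:

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_estimate(log_px_given_h):
    """log((1/K) * sum_k p(x_l | h_k)) from K per-sample log-likelihoods;
    converges to log p(x_l | x_u) as K grows when h_k ~ p(h | x_u)."""
    K = len(log_px_given_h)
    return logsumexp(log_px_given_h) - np.log(K)

# hypothetical values of log p(x_l | h_k) for K = 4 posterior samples
print(log_marginal_estimate(np.array([-60.2, -58.9, -61.5, -59.3])))
```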
"{\"title\": \"Summary of major improvements\", \"comment\": [\"We thank the reviewers for their valuable comments and suggestions that have helped us to improve the paper. Listed below please find the major improvements we have made in our revision:\", \"Following the suggestions of AR3, we have now presented a much simplified derivation of the ARM estimator in the main body of the paper, and moved the original derivations to the appendix.\", \"Based on the feedback of AR2, we have now added the empirical gradient variance in Figure 1 for the toy example and in Figure 3 for the discrete latent variable model experiments.\", \"Following the comments of AR3 and AR2, we have more deeply investigated the variance reduction mechanism of the ARM estimator, and summarized our findings as Propositions 2-4 and Corollary 5, as shown in Section 2.3.\", \"Following the suggestions of AR1 and AR3, we have added more results in the toy example in Figures 1, 4, and 5, such as adding the results of the AR estimator and RELAX, and comparing signal-to-noise ratios between various gradient estimators.\", \"Below please also find our point-by-point response to each reviewer's comments.\"]}",
"{\"title\": \"Reverse-engineering the ARM estimator is leading to clearly simplified derivation and improved presentation\", \"comment\": \"Thank you for your insightful and constructive comments and suggestions. Below please find a detailed point-by-point response.\", \"to_major_comment_1\": \"We totally agree with you that it would be better to derive the univariate AR and ARM estimators from the analytic gradient via one dimensional integrations, and generalize them to multivariate ones using the law of total expectation. Motivated by your suggestion, we have reverse-engineered the proposed ARM estimator to considerably simplify its derivation. We have now derived the univariate AR (ARM) estimator with as few as two (three) equations, as shown in the revised Section 2.1, and have moved the original more complicated and longer derivation to the Appendix.\", \"to_major_comment_2\": \"We have substantially expanded our analysis of variance in Section 2.3 of the revised paper. \\n\\nFirst, we have added in Proposition 2 the theoretical analysis of ARM vs REINFORCE for the univariate case, and a deeper theoretical analysis of ARM vs AR in Propositions 3 and 4 and Corollary 5. Empirically, we find the performance of AR is quite comparable to that of REINFORCE for all experiments, and we have now added the results of AR into Figures 1, 4, & 5 and Table 2 (a). \\n\\nSecondly, we have added the sample mean, sample stdev, and their ratio for the ARM and several other estimators in Figure 5, and added related discussions in the paragraph right before Section 4.1. \\n\\nThirdly, we have provided more discussion about the spiking behavior of the ARM gradient in the univariate case, in the first paragraph of page 7. \\nWe believe what ARM does in the univariate case is ``not learning at most of the iterations in order to occasionally move with big steps towards the right direction,'' which provides high gradient-to-noise ratio when it does move. By contrast, the REINFORCE and AR estimators are oscillating back and forth, as shown in Figure 1, resulting in ``learning all the time but frequently moving towards the wrong direction.''\\n\\nFourthly, in addition to the newly added Propositions 2-4 and Corollary 5, we have added the plots of the empirical gradient variance into Figures 1, 3, and 5.\"}",
"{\"title\": \"An interesting paper with the potential to inspire many possible extensions, but overly complicated presentation (addressed in review).\", \"review\": \"In this paper the authors propose a new variance-reduction technique to use when computing an expected loss gradient where the expectation is with respect to independent binary random variables, e.g. for training VAEs with a discrete latent space. The paper is interesting, highly relevant, simple to implement, suggests many possible extensions, and shows good results on the experiments performed. However the exposition leaves a lot to be desired.\", \"major_comments\": \"The authors devote several pages of fairly dense mathematics to deriving the ARM estimate in section 2 (up to section 2.5). However I found it relatively easy to derive (15) directly, using elementary results such as the law of total expectation and a single 1-dimensional integral, in about 10 lines of equations. As the authors note, deriving (4) from (15) requires an extra line or two. In my opinion it would greatly improve the clarity of the paper to use a more direct and straightforward derivation (perhaps with the interesting historical account of how the authors first derived this result given in an appendix). I could understand the more lengthy derivation being helpful if it gave insight into the source of variance reduction, but I don't see this personally, and the current discussion of variance reduction does not refer to the derivation of (15) at all.\\n\\nThe analysis of variance in section 2.6 leaves a lot to be desired. The central claim of the paper is that this method reduces variance, so it is an important section! Firstly, the variance of ARM vs AR is interesting, but the variance of ARM vs REINFORCE seems also highly relevant. Secondly, it seems like it would be very informative to look at the ratio of stdev to the mean for the ARM gradient estimate, since the true gradient is multiplied by sigmoid(phi) sigmoid(-phi) and so is very small if the probability of z = 1 is close to 0 or 1, exactly in the same regime where ARM has an advantage in variance reduction over AR. For example, it may be that learning in this regime is very difficult due to the weak gradient even if the estimate is extremely low variance. Thirdly and somewhat relatedly, in this same regime (z = 1 close to 0 or 1) the ARM gradient estimate is very often 0, meaning no learning takes place, so it seems a bit strange to argue that the new method is fantastic in the regime where it's almost always not learning! Of course, not learning is better than adding lots of spurious variance as reinforce would, but perhaps this could be made clearer. Finally, the theoretical analysis involving correlation gives very little insight and is extremely hand-wavy. 
A short worked example in the 1D or 2D case explicitly computing the variance of REINFORCE, AR and ARM seems like it would be highly informative.\", \"minor_comments\": \"In the introduction, \\\"*approximately* maximizing the marginal likelihood\\\" might be more accurate, since as given in (28) the exact marginal likelihood is not optimized in practice, and the exact marginal likelihood is not of the form (1) but is rather the logarithm of something of the form (1).\\n\\nIt wasn't clear to me why \\\"equal in distribution\\\" was used in a few places for things that are simply equal, such as just above (5).\\n\\nIn section 2.3, I don't see any real reason the estimates in (9) and (11) \\\"could be highly positively correlated\\\", other than an argument along the lines of the simple one given in section 2.6 that they're often equal and so zero.\\n\\nAs an aside, in section 3.1, it is great not to assume conditional independence of the binary latent variables across layers, but assuming conditional independence within each layer is still very restrictive. It is reasonable for the generative distribution to have this property, since the resulting net can still be essentially \\\"universal\\\" by stacking enough layers, but assuming this factorization in the variational distribution is highly restrictive with hard-to-reason-about consequences for the learned generative model. I realize this is a commonly used assumption and the authors are interested in the variance reduction properties of their approach rather than the training itself, but I just mention that it would be great to see extensions of the current work that can cope tractably with correlated latent variables within each layer.\\n\\nIn section 3.2, according to my understanding of standard terminology, \\\"maximum likelihood inference\\\" is a misnomer and would normally be \\\"maximum likelihood estimation\\\", since maximum likelihood is a method for estimating parameters whereas inference is about inferring latent variable values given parameters.\\n\\nIn section 4, it would be great to see some plots of explicit variance estimates of the different methods, given the overall goal of the paper (unless I just missed this?), even though figure 1 gives some insight into the variance characteristics.\\n\\nIn section 4.2, the expression log 1/K \\\\sum_k Bernoulli... differs in the placement of log from Jang et al (2017). Which is the standard convention for this task?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Point-by-point response, part 3\", \"comment\": \"Q14) Figure 1 I believe contains an error for the REINFORCE figure. In my own research I have run these experiments myself, with a value of p close to the one used by the authors. REBAR and RELAX both reduce to a REINFORCE gradient estimator with a control variate that is differentiably reparametrizable, and so the erratic behaviour of the REINFORCE estimator in this case is likely wrong.\", \"a14\": \"REINFORCE without an appropriate control variate may have huge variance and hence has erratic behavior at certain iterations (e.g., wrong convergence point) even if the step-size is set to be small. Note we set the step-size as one in the original paper. We have now reduced the REINFORCE stepsize for the toy example to be 0.1, under which the parameter changes more smoothly and converges more slowly, but it sometimes still diverges due to high gradient variance.\\n\\nWe appreciate if the reviewer could try our demo code \\\"ARM_toy.py\\\" in the provided anonymous GitHub code repository if he/she still believes there is an error for the REINFORCE figure. \\n\\nQ15) There is a mysterious sentence on page 6 that refers to ARM adjusting the \\\"frequencies, amplitudes, and signs of its gradient estimates with larger and more frequent spikes for larger true gradients\\\"\", \"a15\": \"We have added more explanations about that sentence. Please see the first Paragraph in Page 7 for details.\\n\\nQ16) The value to the community of another gradient estimator for binary random variables is low, given the plethora of other methods available.\", \"a16\": \"ARM runs much faster, is unbiased and has low variance, delivers similar or higher testing log-likelihood and ELBOs, and is almost as simple as REINFORCE to implement. We have added more discussions in the revised paper about how it is different from previously proposed ones. We believe its practical and theoretical value will be appreciated by the community and it can be potentially plugged into many other research tasks, such as the ones mentioned in Conclusions. In fact, we have recently found a paper submitted to ICLR this year that had independently verified the correctness and good performance of ARM in its experiments (to preserve anonymity, we cannot reveal the name of that paper).\\n\\nQ17) Given the questions remaining about this methodology and its experiments, I recommend against publication on this basis also.\", \"a17\": \"We believe we have addressed all your questions, and hence we appreciate if you could take another look at our paper and reconsider your rating.\\n\\nQ18) Table 2 compares results that mix widely different architectures against each other, some taken directly from papers, others possibly retrained. This is not a valid comparison to make when evaluating a new gradient estimator, where the model must be fixed.\", \"a18\": \"For the results taken from literature, we have tried our best to ensure that the models are the same as the ones we use, i.e., only the gradient estimator is different (we had communicated with some authors of these papers to double check); for the models which are different from the original papers, we modify the author provided code with the same network. All efforts had been made to ensure a fair and meaningful comparison. Please also see our response A4 to Reviewer 1.\"}",
"{\"title\": \"Point-by-point response, part 2\", \"comment\": \"Q6) Similarly, the term \\\"merge\\\" is not explained, despite the subheading 2.3.\", \"a6\": \"We thought we had clearly defined \\\"merge\\\" as \\\"sharing the same set of standard exponential random variables for Monte Carlo integration, ..., More specifically, simply taking the average of (9) and (11) leads to (12).\\\" Our new simplified derivation no longer requires this \\\"merge\\\" step, which is now deferred to the Appendix as part of the original derivation for ARM.\\n\\nQ7) Computational issues are not addressed in the paper. Whether or not this method is useful in practice depends on computational complexity\", \"a7\": \"We totally agree that \\\"Whether or not this method is useful in practice depends on computational complexity\\\" but we respectively disagree \\\"Computational issues are not addressed in the paper.\\\" We'd like to emphasize that in Figures 2, 5, 6 (Figures 2, 6, 7 of the revised paper), we plot the calculated training and validation ELBOs against the number of processed mini-batches (steps) in the first row, and replot the same ELBOs against the computational time in the second row. These Figures suggest ARM takes clearly shorter time to finish the same (or more) number of iterations. We have added more explanations to these Figures in our revision.\\n\\nQ8) No effort is made to diagnose the source of the variance reduction, other than in the special case of analytically comparing with the Augment-REINFORCE estimator, which does not appear in any of the experiments.\", \"a8\": \"This is a good point. We have added theoretical variance reduction of the ARM gradient estimator over REINFORCE and Augment-REINFORCE (AR) estimators in Section 2.3 of the revised version. The newly added Proposition 2 compares ARM with REINFORCE, Propositions 3-4 compare ARM with AR, and Corollary 5 compares ARM with a constant based baseline. The -log p(x) for AR on MNIST are 164.1, 114.6, and 162.2 for the \\u201cLinear,\\u201d \\u201cNonlinear,\\u201d and \\u201cTwo layers\\u201d networks, respectively, which are comparable to these of REINFORCE. We have added them to Table 2.\\n\\nQ9) No effort is made to empirically characterize the variance of the gradient estimator, unlike Tucker et al (2017) and Grathwohl et al. (2018).\", \"a9\": \"This is a good point. We have added empirical variance plots in Figures 1, 3, 5, following Tucker et al (2017) and Grathwohl et al. (2018).\\n\\nQ10) The algorithm presented in the appendix appears to only address single-layer stochastic binary networks, which are uninteresting in practice.\", \"a10\": \"Algorithm 1 in Appendix is describing a generic ARM algorithm (please note we had the following clarification in Appendix A: \\\"For stochastic transforms, the implementation of ARM gradient is discussed in Section 3.\\\"). Describing the ARM algorithm for a multi-stochastic-layer network is the sole purpose of Section 3. For a network with multiple stochastic hidden layers, the ARM algorithm is described in Proposition 2 (Proposition 6 of the revised paper) if variational auto-encoder is used, and in Proposition 3 (Proposition 7 in the revised paper) if maximum likelihood is used. In the revision, we have added Algorithm 2 to further describe the ARM gradient for a multi-stochastic-layer network.\\n\\nQ11) Figure 2 (d), (e), and (f) all show that ARM was stopped early. 
Given that RELAX and REBAR overfit, this is a little troubling.\", \"a11\": \"All methods in Figure 2 are compared by running the same number of iterations. If an algorithm is faster, it will take less time to complete the given number of iterations. In Figure 2 (d)-(f), ARM appeared to stop early only because it is much faster per iteration and hence takes much less time than REBAR/RELAX to finish the same number of iterations. We have revised the paper accordingly to enhance clarity. Please also see our response in A7.\\n\\nQ12) Overall, these results are not very convincing that ARM is better, particularly in the absence of variance analysis (empirically, or other than w.r.t. the same algorithm without the merge step).\", \"a12\": \"As commented in A8 and A9, we have added variance analysis both empirically and theoretically in the revised version.\\n\\nQ13) All algorithms should be run for the same number of steps, particularly in cases where they may be prone to overfitting.\", \"a13\": \"We totally agree with the comment and this was actually what we did, as shown in Figure 2 (a)-(c) (in fact, we had tried running ARM for a larger number of steps to see whether it would overfit eventually; we did not observe overfitting with more iterations).\"}",
"{\"title\": \"Point-by-point response to address the raised issues/concerns of AR2\", \"comment\": \"\", \"comments\": \"The authors present ... The approach is somewhat novel. I have not seen other authors attempt to apply REINFORCE in an augmented space and with antithetic samples / common random numbers, and Rao-Blackwellization. This combination of techniques may be a good idea in the case of Bernoulli random variables. However, due to a number of issues discussed below, this claim is not possible to evaluate from the paper.\", \"response\": \"We thank reviewer 2 for his/her detailed comments. It appears that the reviewer has a good understanding about the technical novelty of the paper, but is not convinced by the claim of the paper. Below please find our point-by-point response, which we believe will be able to address all the raised issues/concerns.\\n\\nQ1) I assess the paper in its current form as too far below the acceptable standard in writing and in clarity of presentation, setting aside other conceptual issues which I discuss below.\", \"a1\": \"We have tried very hard to make the paper easy to follow. We have further simplified the derivation of ARM gradient in the revised version with the longer original derivation deferred to the Appendix B.\\n\\nQ2) The paper contains many typos and a few run-on sentences that span 5-7 lines. This hinders understanding substantially.\", \"a2\": \"We sincerely apologize for possible typos and we appreciate if Reviewer 2 could help point them out. We have rewritten several long sentences into shorter ones.\\n\\nQ3) A number key terms are not explained, irregularly. Although the paper assumes that readers do not know the mean and a variance of a Bernoulli random variable, or theof definition of an indicator function, it does not explain what random variable augmentation means.\", \"a3\": \"We consider that variable augmentation is a well-known concept to readers familiar with statistical models and inference algorithms (such as the EM algorithm). In addition, we thought Eq (6) (Eq 27 of the revised paper) is self-explanatory given the provided background information about exponential random variables and Eq (5) (Eq 26 of the revised paper).\\n\\nIn the revision, we have added citations to classical papers on variable augmentation. Moreover, our derivation of the univariate ARM estimator, shown in Section 2.1 of the revised paper, is now significantly simplified and no longer relies on variable augmentation. \\n\\nQ4) The one sentence that comes close to explaining it seems to have a typo: \\\"From (5) it becomes clear that the Bernoulli random variable z ~ Bernoulli(\\u03c3(\\u03c6)) can be reparameterized by racing two augmented exponential random variables ...\\\". It is not clear what is meant by \\\"racing,\\\" here, and I do not find it clear from equation (5) what is going on.\", \"a4\": \"\\\"Racing,\\\" which is related to the well-known \\\"Exponential Race Problem,\\\" is not a typo. We apologize if we did not make the analogy between \\\"racing two exponential random variables\\\" and \\\"treating the smaller one of two exponential random variables as the winner\\\" clear. 
We have changed \\\"racing\\\" to \\\"comparing\\\" in the revised paper.\\n\\nQ5) Unfortunately, in the abstract, the paper claims that variance reduction is achieved by \\\"data augmentation,\\\" which has a very specific meaning in machine learning unrelated to augmented random variables, further obfuscating meaning.\", \"a5\": \"As a compromise, we have changed \\\"data augmentation\\\" to \\\"variable augmentation\\\" to reduce confusion. The reason we used \\\"data augmentation\\\" was because it is very widely used in both statistics and machine learning literature. Its origin is often attributed to the following highly cited paper:\\n\\nM. A. Tanner and W. H. Wong, The Calculation of Posterior Distributions by Data Augmentation (Discussion Article), Journal of the American Statistical Association, June 1987.\\n\\nBelow please find several additional examples that can help justify our use of \\\"data augmentation:\\\"\\n\\nD. A. van Dyk and X.-L. Meng, The Art of Data Augmentation (Discussion Article), Journal of Computational and Graphical Statistics, Mar., 2001.\\n\\nM. A. Tanner and W. H. Wong, From EM to Data Augmentation: The Emergence of MCMC Bayesian Computation in the 1980s, Statistical Science, 2010.\\n\\nN. G. Polson and S. L. Scott, Data Augmentation for Support Vector Machines, Bayesian Analysis, 2011.\\n\\nK. P. Murphy, Machine Learning: A Probabilistic Perspective (Chapter 24.2.7, Page 847), 2012.\\n\\nM. Xu, J. Zhu, and B. Zhang, Fast Max-Margin Matrix Factorization with Data Augmentation, ICML 2013.\\n\\nZ. Gan, R. Henao, D. Carlson, and L. Carin, Learning Deep Sigmoid Belief Networks with Data Augmentation, AISTATS 2015.\\n\\nAgain, we have changed \\\"data augmentation\\\" to \\\"variable augmentation\\\" to avoid possible confusions. We have also added two classical references for this concept.\"}",
"{\"title\": \"We are making all the suggested minor revisions\", \"comment\": \"Thank you very much for your positive feedback. We have made all these suggested minor revisions in the updated paper. Please see our point-by-point response to your comments below.\\n\\nQ1) 1) In figure 1, it seems difficult to decide which one is better from the trace plots of the true/estimated gradients.\", \"a1\": \"For this univariate binary toy example, the trace plots of the true gradients of $\\\\phi$, shown in the first subplot of the top row, seem very different from these of the estimated gradients with ARM, shown in the last subplot of the top row. However, the trace plots of the Bernoulli probability parameter $\\\\sigma(\\\\phi)$ updated with the true gradient, shown in the first subplot of the second row, are almost indistinguishable from these updated with the ARM gradient, shown in the last subplot of the second row. Given the same step-size of one, it is hard to tell whether the ARM or true gradient is better for updating $\\\\phi$, which is showing how surprisingly well ARM works!\", \"q2\": \"Also, why the author choose to compare the REINFORCE instead of REBAR and RELAX, since REBAR and RELAX improve on REINFORCE by introducing stochastically estimated control variates.\", \"a2\": \"In the revised paper, we have revised Figure 1 to add the trace plots of the estimated gradients, Bernoulli probability parameters, and empirical gradient variance for five different algorithms: True grad, REINFORCE, Augment-REINFORCE, RELAX, and ARM. We have also added Figures 4 and 5 to provide more information.\", \"q3\": \"Also, about trace plots of the loss functions, I am curious why REINFORCE has a big vibration during 1500~2000 iterations.\", \"a3\": \"The REINFORCE has large variance, which sometimes leads to divergence. In the revised paper, we have tried a much reduced step-size for REINFORCE and updated the figures accordingly. We find that the volatility can be clearly reduced but the objective could sometime still converge to the wrong point as the learning progresses.\\n\\nIf you are interested in playing with this toy examples by yourself, you may run ARM_toy.py provided in the anonymous Github code repository.\", \"q4\": \"2) About Table 2, are all compared methods in the same experimental settings?\", \"a4\": \"We tried our best to make a comparison that is as fair as possible: 1) We have ensured that we are using the same version of banarized MNIST for all algorithms. 2) We have ensured all methods use the same network size; if the original paper used different ones, we have modified and run the author provided code (eg. LeGrad) to ensure the comparability. 3) We have run five independent trials to add error bars to make the comparison more meaningful (many previous work did not report error bars).\"}",
"{\"title\": \"REVISED: ARM algorithm is an interesting approach to a limited domain of interest in ML. While limited, it may spark new research into augmentation of random variables for variance reduction\", \"review\": [\"Overview.\", \"The authors present an algorithm for lowering the variance of the score-function gradient estimator in the special case of stochastic binary networks. The algorithm, called Augment-REINFORCE-merge proceeds by augmenting binary random variables. ARM combines Rao-Blackwellization and common random numbers (equivalent to antithetic sampling in this case, due to symmetry) to produce what the authors claim to be a lower variance gradient estimator. The approach is somewhat novel. I have not seen other authors attempt to apply REINFORCE in an augmented space and with antithetic samples / common random numbers, and Rao-Blackwellization. This combination of techniques may be a good idea in the case of Bernoulli random variables. However, due to a number of issues discussed below, this claim is not possible to evaluate from the paper.\", \"Issues/Concerns\", \"I assess the paper in its current form as too far below the acceptable standard in writing and in clarity of presentation, setting aside other conceptual issues which I discuss below. The paper contains many typos and a few run-on sentences that span 5-7 lines. This hinders understanding substantially. A number key terms are not explained, irregularly. Although the paper assumes that readers do not know the mean and a variance of a Bernoulli random variable, or theof definition of an indicator function, it does not explain what random variable augmentation means. The one sentence that comes close to explaining it seems to have a typo: \\\"From (5) it becomes clear that the Bernoulli random variable z \\u223c Bernoulli(\\u03c3(\\u03c6)) can be reparameterized by racing two augmented exponential random variables ...\\\". It is not clear what is meant by \\\"racing,\\\" here, and I do not find it clear from equation (5) what is going on. Unfortunately, in the abstract, the paper claims that variance reduction is achieved by \\\"data augmentation,\\\" which has a very specific meaning in machine learning unrelated to augmented random variables, further obfuscating meaning. Similarly, the term \\\"merge\\\" is not explained, despite the subheading 2.3.\", \"Computational issues are not addressed in the paper. Whether or not this method is useful in practice depends on computational complexity\", \"No effort is made to diagnose the source of the variance reduction, other than in the special case of analytically comparing with the Augment-REINFORCE estimator, which does not appear in any of the experiments.\", \"No effort is made to empirically characterize the variance of the gradient estimator, unlike Tucker et al (2017) and Grathwohl et al. (2018).\", \"The algorithm presented in the appendix appears to only address single-layer stochastic binary networks, which are uninteresting in practice.\", \"Figure 2 (d), (e), and (f) all show that ARM was stopped early. Given that RELAX and REBAR overfit, this is a little troubling. Overal, these results are not very convincing that ARM is better, particularly in the absence of variance analysis (empirically, or other than w.r.t. the same algorithm without the merge step). All algorithms should be run for the same number of steps, particularly in cases where they may be prone to overfitting.\", \"Figure 1 I believe contains an error for the REINFORCE figure. 
In my own research I have run these experiments myself, with a value of p close to the one used by the authors. REBAR and RELAX both reduce to a REINFORCE gradient estimator with a control variate that is differentiably reparametrizable, and so the erratic behaviour of the REINFORCE estimator in this case is likely wrong.\", \"There is a mysterious sentence on page 6 that refers to ARM adjusting the \\\"frequencies, amplitudes, and signs of its gradient estimates with larger and more frequent spikes for larger true gradients\\\"\", \"-The value to the community of another gradient estimator for binary random variables is low, given the plethora of other methods available. Given the questions remaining about this methodology and its experiments, I recommend against publication on this basis also.\", \"Table 2 compares results that mix widely different architectures against each other, some taken directly from papers, others possibly retrained. This is not a valid comparison to make when evaluating a new gradient estimator, where the model must be fixed.\", \"EDIT: I have re-evaluated the careful and comprehensive response to my concerns by the authors. I thank them for their effort in this. As many of the concerns were related to communication and have been addressed in the most recent draft, I think it is appropriate to move my review upwards. The revisions make this paper quite different from the original, and I am happy to re-evaluate on that basis--this is a peculiarity of the ICLR open review procedure, but I consider it a strength.\", \"I note that \\\"data augmentation\\\" in machine learning appears to have collided with a term in the Bayesian statistics literature, and the authors have provide a number of citations to support this. I strongly recommend \\\"variable augmentation\\\" going forward, as that is an accurate description (you are augmenting a random variable, rather than the input data domain). This appears to be one of the growing pains of the field of ML which has distinct and often orthogonal concerns to classical statistics around density approximation and computational issues.*\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper is very good but needs improvement\", \"review\": \"For binary layers, how to calculate and backpropagate gradients is a big problem, particularly for the binary neural networks. To solve the problem, this paper proposes an unbiased and low variance augment-REINFORCE-merge (ARM) estimator. With the help of an appropriate reparameterization, the antithetic sampling in an augmented space can be used to drive a variance-reduction mechanism. The experimental results show that ARM estimator converges fast, has low computational complexity, and provides advanced prediction performance.\\n\\nThis paper is well-organized. The motivation of the proposed model is well-driven and algorithm is articulated clearly. Meanwhile, the derivations and analysis of the proposed algorithm are correct. The experimental results show that the proposed model is better than the other existing methods.\\n\\nA few minor revision are list below.\\n1) In figure 1, it seems difficult to decide which one is better from the trace plots of the true/estimated gradients. Also, why the author choose to compare the REINFORCE instead of REBAR and RELAX, since REBAR and RELAX improve on REINFORCE by introducing stochastically estimated control variates. Also, about trace plots of the loss functions, I am curious why REINFORCE has a big vibration during 1500~2000 iterations. \\n2) About Table 2, are all compared methods in the same experimental settings?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HyexAiA5Fm | Scalable Unbalanced Optimal Transport using Generative Adversarial Networks | [
"Karren D. Yang",
"Caroline Uhler"
] | Generative adversarial networks (GANs) are an expressive class of neural generative models with tremendous success in modeling high-dimensional continuous measures. In this paper, we present a scalable method for unbalanced optimal transport (OT) based on the generative-adversarial framework. We formulate unbalanced OT as a problem of simultaneously learning a transport map and a scaling factor that push a source measure to a target measure in a cost-optimal manner. We provide theoretical justification for this formulation, showing that it is closely related to an existing static formulation by Liero et al. (2018). We then propose an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs, and perform numerical experiments demonstrating how this methodology can be applied to population modeling. | [
"unbalanced optimal transport",
"generative adversarial networks",
"population modeling"
] | https://openreview.net/pdf?id=HyexAiA5Fm | https://openreview.net/forum?id=HyexAiA5Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyehNuRHlN",
"Byx88RU9kE",
"B1xdSvrYy4",
"rygfS6-Qk4",
"Hkenm8WQJV",
"H1xQ6gNIC7",
"B1lLbqXI0m",
"rygDCYmLA7",
"By6VIm8AQ",
"rklJuu_Spm",
"B1lUURJanX",
"BklmCVWU3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545099316096,
1544347214362,
1544275776098,
1543867705541,
1543865892304,
1543024827243,
1543023101997,
1543023054755,
1543022132516,
1541929063344,
1541369422024,
1540916426847
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper861/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper861/Authors"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper861/Authors"
],
[
"ICLR.cc/2019/Conference/Paper861/Authors"
],
[
"ICLR.cc/2019/Conference/Paper861/Authors"
],
[
"ICLR.cc/2019/Conference/Paper861/Authors"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper861/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"After revision, all reviewers agree that this paper makes an interesting contribution to ICLR by proposing a new methodology for unbalanced optimal transport using GANs and should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting algorithm for unbalanced optimal transport\"}",
"{\"title\": \"thanks for your revision\", \"comment\": \"i have read the revised version. i also support accept. i have revised my score upwards.\"}",
"{\"title\": \"revision\", \"comment\": \"I have read the revised manuscript. I found that the revised version is more clear, more precise and reflects better the quality of the underlying ideas. It is better at distinguishing between what was known and what is new. I have also appreciated the new numerical experiment. For these reasons and also the ones I mentioned in my previous review, I suggest acceptance and update my score to 6.\"}",
"{\"title\": \"Will move zebrafish results to main paper\", \"comment\": \"We are happy to hear that you appreciated the revisions and will be happy to include the zebrafish experiment in the main paper for the camera ready. Thanks again for all of your helpful feedback.\"}",
"{\"title\": \"Thanks for the updates and the revision\", \"comment\": \"I have read the rebuttal and the revision of the authors. I thank authors for updating their manuscript that improved a lot. The proof in the appendix is now much easier to follow thanks for improving it. Authors answered most of my concerns.\\n\\n I think the new experiment Zebrafish embrogenesis is interesting and deserves to be in the main paper you can move it to the main paper (you are allowed up to 10 pages in the main). it would be great to explore more interesting applications like this one. \\n\\nIn conclusion the paper is in much better shape for publication and hence I am increasing my score to 6.\"}",
"{\"title\": \"theoretical parts has been revised extensively and a new experiment has been added in response to feedback\", \"comment\": \"Thank you for the very helpful feedback. We have heavily revised the theoretical parts for clarity and added an experiment to showcase the usefulness of the scaling factor. We believe the problematic aspects have been corrected by revising the proof for clarity and adding a citation for the step that was found questionable.\\n\\n- Originality\\n\\nThanks for pointing out the Monge formulation by Chizat et al. (2015). We have revised Section 3 accordingly and now start by pointing out this relation at the beginning of the section.\\n\\n- Correctness\\n\\nWe have rewritten the proof to improve clarity. A source of confusion may have been that we were not sufficiently clear about our choice of \\\\Z and \\\\lambda: in particular, the lemma holds when \\\\lambda is an atomless measure on \\\\Z. In this case it follows from standard results (now cited from Dudley's Real Analysis and Probability) that there exists a measurable function T_x from \\\\Z to \\\\Y such that \\\\gamma_{y|x} is the pushforward of \\\\lambda under T_x. The choice of \\\\Z, \\\\lambda has been clarified in the revised version of the main text, and the steps of the proof in the appendix have been rewritten.\\n\\n- The experiments don't show any benefit for learning the scaling factor, are there any applications in biology that would make a better case for this method?\\n\\nAn important problem in biology is lineage tracing of cells between different stages (e.g. of development or disease progression). In these applications it is important to account for the scaling factor since the transport is not balanced; particular cells in the earlier stage are poised to develop into cells seen in the later stage, and those cells should have higher scaling factors. To showcase the relevance of learning the scaling factor for determining these poised cells, we have added an application to single-cell gene expression data taken during zebrafish embryogenesis (see the end of the paper and Appendix D). Namely, we found that the cells in the source population with higher scaling factors were significantly enriched for genes associated with differentiation and development of the mesoderm. This experiment shows that analysis of the scaling factor can be applied towards interesting and meaningful biological discovery.\\n\\n- What was the architecture used to model T, xi, and f?\\n\\nThanks for pointing out the missing information. For our experiments, we used fully-connected feedforward networks with ReLU activations. The network for \\\\xi has a softplus activation layer at the end to enforce non-negative values. We now describe this in Appendix C.\\n\\n- Improved training dynamics in the appendix, it seems you are ignoring the weighting while optimizing on theta? than how would the weighing be beneficial ?\\n\\nFor training f and \\\\xi, the weights are directly used. For training T, while the weights are not directly used, they are still indirectly beneficial to T (theta) because they directly affect the training of f which in turn directly affects the training of T. \\n\\nThanks again for helping us improve our paper with your insightful comments.\"}",
"{\"title\": \"Section 3 has been revised extensively and a new experiment has been added to Section 4 in response to feedback (2/2)\", \"comment\": \"(continued)\\n\\n- Somewhere in Lemma 3.2 the fact that you had to use an alternative definition \\\\tilde{W} (by restricting the class of couplings) is not really clarified to the reader. Qualitatively, what does it mean that you restrict the class of couplings to have the same support as \\\\mu? In which situations would \\\\tilde{W} be very different from W_{ub} ?\\n\\nIn the optimal entropy-transport problem (3), the objective contains a \\\\psi-divergence that penalizes the difference between \\\\mu and \\\\gamma_X (the marginal of \\\\gamma with respect to \\\\X). Depending on which \\\\psi-divergence is chosen, it is possible that \\\\gamma_X has non-zero measure outside of the support of \\\\mu. Intuitively, this means that the optimal transport scheme adds some mass to \\\\X where there was previously no mass (since it is outside of the support of \\\\mu) and then transports this mass to \\\\Y. But in the asymmetric Monge formulation of (6), all the mass transported to \\\\Y must come from somewhere within the support of \\\\mu, since the scaling factor \\\\xi allows mass to grow but not to materialize outside of its original support. Qualitatively, this is the effect of the support restriction. Thanks for pointing out the lack of clarity; we revised the text accordingly to make this clear to the readers. \\n\\n- I think it would help for the simple sake of readability to add integration domains under your \\\\int symbols.\\n\\nDone; thanks for pointing this out.\\n\\n- T is used as a subset in Lemma 3.1, while it is used after and before as a map of (x,z)\\n\\nWe agree that this was confusing and we adjusted the notation accordingly. Thanks for pointing this out.\\n\\n- T(x,z) looks intuitively like a noisy encoder as in Wasserstein AEs (with, of course, the addition of your term \\\\xi). Could you elaborate?\\n\\nIf one disregards the scaling factor \\\\xi and the unbalanced aspect of our problem, both the WAE paper and our work present Monge-like formulations of the OT problem, where the objective is to learn a stochastic transport map to push one distribution to the other. In our paper, the stochastic transport map is T(x,z). In their paper, since there is a latent space, the stochastic map is the composition of the noisy encoder with the decoder map G. The notation of z is unrelated, however -- we use z as a random variable that introduces randomness into the map T, while in their work it denotes the variable in the latent space. \\n\\n- I have scanned the paper but did not see how you set lambda.\\n\\nThanks for pointing this out. We added it to the paper in Appendix C, namely: \\\"One can take \\\\lambda to be the standard Gaussian measure if a stochastic mapping is desired ... if a deterministic mapping is desired, then \\\\lambda can be set to a deterministic distribution.\\\"\\n\\nThanks again for helping us improve our paper with your insightful comments.\"}",
"{\"title\": \"Section 3 has been revised extensively and a new experiment has been added to Section 4 in response to feedback (1/2)\", \"comment\": \"Thanks for your helpful comments. We have heavily revised section 3 to clarify our contributions and the relation to previous literature, taking into account the comments of all reviewers.\\n\\n- The experiments are underwhelming. For faces they happen in latent spaces, and therefore one recovers transport between latent spaces later re-visualized through a decoder. For digits, all is fairly simple. \\n\\nWe agree that learning transport maps between these domains is nothing new. Rather, the main innovation in our numerical experiments is the simultaneous learning of the scaling factor that adjusts mass and accounts for class imbalances between the distributions. For example, in the MNIST experiment, the scaling factor reflects the digit imbalances between the datasets; and in the CelebA faces experiment, the scaling factor reflects the gender imbalance (i.e. predominance of males) in the aged group.\\n\\nTo further showcase the usefulness of learning the scaling factor, we have added an application to genomics, namely based on single-cell gene expression data taken during zebrafish embryogenesis (see the end of the paper and Appendix D). When modeling transport between populations of cells from different stages of development, one needs to account for the scaling factor since the transport is not balanced: particular cells in the earlier stage are poised to develop into cells seen in the later stage and are thus overrepresented in the later stage. The new experiment shows that the scaling factor can discover these poised cells. Namely, we found that the cells in the source population with higher scaling factors were significantly enriched for genes associated with differentiation and development of the mesoderm. This experiment shows that analysis of the scaling factor can be applied towards interesting and meaningful biological discovery.\\n\\n-They do not clearly mention whether this alternative UOT approach approximates UOT at all.\\n\\nOur algorithm solves the formulation of unbalanced OT in (6). The relation to optimal entropy-transport is now clarified in Section 3; namely, the formulations are equivalent when the support of \\\\gamma for optimal-entropy transport is subject to a support restriction. Therefore our approach does approximate unbalanced OT. Thanks for pointing out the lack of clarity. \\n\\n- Is the reference to a local scaling (\\\\xi) for unbalanced transport entirely new? your paper is not clear on that, and it seems to me this idea already appears in the OT literature.\\n\\nReviewer 2 provided a reference to an existing formulation that uses \\\\xi. The relation to our work is now made clear in the revised version at the beginning of Section 3. \\n\\n- I do not understand the connexion you make with GANs. In what sense can you interpret any of your networks as generators?...\\n\\nIn the revised version, the connection with GANs is clarified. We discuss how one can interpret T as a generator and Algorithm 1 as a generative-adversarial game between (T, \\\\xi) and f, similar to a GAN. 
In particular,\\n\\n- T takes a point x ~ \\\\lambda and transports it from X to Y by generating T(x, z) where z ~ \\\\lambda.\\n- \\\\xi determines the importance weight of each transported point\\n- their shared objective is to minimize the divergence between transported samples and real samples from \\\\nu that is measured by the adversary f\\n- cost functions c_1 and c_2 encourage T, \\\\xi to find the most cost-efficient strategy\\n\\nTo clarify, our paper does not contain results where images are generated from random noise; the generator in our framework is the transport map that takes a random sample from the source distribution and generates a sample in the target distribution. This is in line with previous works (e.g. unpaired image translation, CycleGAN by Zhu et al, https://arxiv.org/abs/1703.10593) where the generator in the GAN transports samples between domains rather than generating samples from random noise. \\n\\n- Numerical benchmarks... Is there a way you could compare yourselves with Chizat's approach?\\n\\nA numerical comparison of the methods would not really be meaningful. For discretized problems, we would expect the Chizat et al. method to outperform our method, since it was designed particularly for the discrete setting and solves a convex optimization problem with convergence guarantees. However, for high-dimensional/continuous problems, Chizat et al. cannot be used. Hence the methods should be considered complementary, each with its own application domains.\"}",
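To make the generative-adversarial reading above concrete, the following is a hedged, self-contained sketch of one round of the alternating updates, using the convex-conjugate bound D_psi >= E[xi(x) f(T(x,z))] - E_nu[psi*(f)]. The specific choices -- a KL-type psi with conjugate psi*(s) = exp(s) - 1, a quadratic transport cost c_1, c_2(xi) = (xi - 1)^2, the layer widths, and the dimensions -- are illustrative assumptions, not the authors' exact settings (the rebuttal only states that T, xi, and f are fully-connected ReLU networks, with a softplus head making xi non-negative).

```python
import torch
import torch.nn as nn

d_x = d_y = d_z = 2                          # assumed toy dimensions
T = nn.Sequential(nn.Linear(d_x + d_z, 64), nn.ReLU(), nn.Linear(64, d_y))
xi = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())
f = nn.Sequential(nn.Linear(d_y, 64), nn.ReLU(), nn.Linear(64, 1))

opt_map = torch.optim.Adam(list(T.parameters()) + list(xi.parameters()), lr=1e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)

def divergence_gap(x, z, y):
    # convex-conjugate bound: E[xi(x) f(T(x,z))] - E_nu[psi*(f)], psi*(s) = e^s - 1
    y_hat = T(torch.cat([x, z], dim=1))
    return (xi(x) * f(y_hat)).mean() - (f(y).exp() - 1).mean()

x, z = torch.randn(128, d_x), torch.randn(128, d_z)   # x ~ mu, z ~ lambda
y = torch.randn(128, d_y)                             # stand-in samples from nu

# ascent on the adversary f (tightens the divergence bound)
loss_f = -divergence_gap(x, z, y)
opt_f.zero_grad(); loss_f.backward(); opt_f.step()

# descent on (T, xi): transport cost + penalty on mass change + divergence term
y_hat = T(torch.cat([x, z], dim=1))
w = xi(x)
cost = (w * ((y_hat - x) ** 2).sum(dim=1, keepdim=True)).mean() + ((w - 1) ** 2).mean()
loss_map = cost + divergence_gap(x, z, y)
opt_map.zero_grad(); loss_map.backward(); opt_map.step()
```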
"{\"title\": \"Theoretical parts have been revised extensively and discussion improved in response to feedback\", \"comment\": \"Thanks for your kind and constructive comments.\\n\\nWe agree that section 3 could have been written more clearly, both in terms of connecting our work to existing work and in terms of motivating the material better and making it more accessible to readers. We heavily revised section 3 based on your feedback. In particular, we now begin the section by relating our formulation to the formulation of unbalanced Monge OT by Chizat et al. (2015) and then equate the relaxed problem with the optimal entropy-transport problem by Liero et al. (2018) as per your suggestion. The point that optimal entropy-transport is the convex/transport plan version of (6) is now conveyed more clearly. What we meant by \\\"directly modeling mass variation\\\" is that for applications, it is often important or more intuitive to directly learn the scaling factor that indicates how much local mass dilation/contraction there is. We did not mean to imply that optimal entropy-transport does not involve mass variation; we clarified this in our revision. Additionally, the discussion comparing our approach with the existing methods based on the convex formulation has been expanded at the end of Section 3.\\n\\nIn general, Appendix C has been expanded with more implementation details. In response to specific comments:\", \"appendix_c\": \"- (i) how precisely is the correct range enforced? This should be stated.\\n\\nWe have added to Table 1 in the Appendix some examples of final layers that show precisely how the correct range is enforced. \\n\\n- (ii) a Lipschitz penalty on f yields a class of functions which is very unlikely to have the properties of Lemma 3.1 ; in fact, this amounts to replacing the last term in (6) by a sort of \\\"bounded Lipschitz\\\" distance which has very different property from a f-divergence. This makes the theory of section 3 a bit disconnected from the practice of section 4.\\n\\nIt should be noted that our algorithm also works without the gradient penalty on f. We added the gradient penalty since in practice this improves the stability of the training, as has also been reported in the GAN literature. \\n\\nIn addition, we describe what the theoretical implications are of using the gradient penalty in the Appendix as follows:\\n\\n \\\"A gradient penalty on f changes the nature of the relaxation of (5) to (6): the right-hand side of (7) [convex conjugate form of divergence] is no longer equivalent to the \\\\psi-divergence, but is rather a lower-bound with a relation to bounded Lipschitz metrics (Gulrajani 2017). In this case, while the problem formulation is not equivalent to optimal entropy transport, it is still a valid relaxation of (5) [unbalanced Monge OT].\\\"\\n\\nThanks again for helping us improve our paper with your insightful comments.\"}",
"{\"title\": \"An adversarial formulation for unbalanced optimal transport with promising practical results and many potential applications but the theoretical part needs improvements.\", \"review\": \"REVIEW\\n\\nThe authors propose a novel approach to estimate unbalanced optimal transport between sampled measures that scales well in the dimension and in the number of samples. This formulation is based on a formulation of the entropy-transport problems of Liero et al. where the transport map, growth maps and Lagrangian multipliers are parameterized by neural networks. The effectiveness of the approach is shown on some tasks.\\n\\nThis is overall an ingenious contribution that opens a venue for interesting uses of optimal transport tools in learning problems (I can think for instance of transfer learning). As such I think the idea would deserve publication. However, I have some concerns with the way the theory is presented and with the lack of discussions on the theoretical limitations. Also, the theory seems a bit disconnected from the practical set up, and this should be emphasized. These concerns are detailed below. \\n\\nREMARKS ON SECTION 3\\n\\nI think the theoretical part does not exhibit clearly the relationships with previous literature. The formulation proposed in the paper (6) is not new and consists in solving the optimal entropy-transport problem (2) on the set of product measures gamma that are deterministic, i.e. of the form\\ngamma(x,y) = (id x T)_# (xi mu) for some T:X -> Y and xi : X -> R_+ (here (id x T)(x) =(x,T(x)) )\\nIt is classical in optimal transport to switch between convex/transport plan formulation (easier to study) to non-convex/transport map formulations (easier to interpret). (As a technical note, the support restriction in Lemma 3.2 is automatically satisfied for all feasible plans, for super-linear costs c_2=phi_1).\\n\\nMore precisely, since the authors introduce a reference measure lambda on a space Z (these objects are not motivated anywhere, but I guess are used to allow for multivalued transport maps?), they look for plans of the form\\ngamma(x,y) = (pi_x x T)_# (xi mu otime lambda) where (pi_x x T)(x,z) = (x,T(x,z) and \\\"otime\\\" yields product measures) (it is likely that similar connections could be made with the \\\"static\\\" formulations in Chizat et al.).\\n\\nIntroduced this way, the relationship to previous literature would have been clearer and the theoretical results are simple consequences of the results in Liero et al., who have characterized when optimal solutions of this form exist. Also this contradicts the remark that the authors make that it is better to model \\\"directly mass variation\\\" as their formulation is essentially equivalent.\\n\\nThe paragraph \\\"Relation to Unbalanced OT\\\" is, in my opinion, incomplete. The switch to non-convex formulation introduce many differences to convex approaches that are not mentioned: there is no guarantee that a minimizer can be found, there is a bias introduced by the architecture of the neural network, ... Actually, it is this bias that make the formulation useful in high dimension since it is know that optimal transport suffers from the curse of dimensionality (thus it would be useless to try to solve it exactly in high dimension). I suggest to improve this discussion.\\n\\nOTHER REMARKS\", \"a_small_remark\": \"lemma 3.1 is the convex conjugate formula for the phi-divergence in the first argument. 
I suggest to call it this way to help the reader connect with concepts he or she already knows. Its rigorous proof (with measurability issues properly dealt with) can be found, for instance, in Liero et al. Theorem 2.7. It follows that the central objective (8) is a Lagrangian saddle-point formulation of the problem of Liero et al., where transport plans, scalings and Lagrange multipliers are parameterized by neural networks. I generally think it is best to make the link with previous work as simple as possible.\\n\\nAlso, Appendix C lacks details to understand precisely how the experiments where done. It is written :\\n\\\"In practice, [the correct output range] can be enforced by parameterizing f using a neural network with a final layer that maps to the correct range. In practice, we also found that employing a Lipschitz penalty on f stabilizes training.\\\"\", \"this_triggers_two_remarks\": [\"(i) how precisely is the correct range enforced? This should be stated.\", \"(ii) a Lipschitz penalty on f yields a class of functions which is very unlikely to have the properties of Lemma 3.1 ; in fact, this amounts to replacing the last term in (6) by a sort of \\\"bounded Lipschitz\\\" distance which has very different property from a f-divergence. This makes the theory of section 3 a bit disconnected from the practice of section 4.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"an (alternative) stochastic min-max algorithm to compute unbalanced optimal transport, using local scaling (dilatation of mass)\", \"review\": \"In this paper the authors consider the unbalanced optimal transport problem between two measures with different total mass. The authors introduce first the now standard Kantorovich-like formulation, which considers a coupling whose marginals are penalized to look like the two target measures. The authors introduce a second formulation in (2), somewhat a Kantorovich/Monge hybrid that involves a \\\"random\\\" Monge map where the target point T(x) of a point x now depends also on an additional random variable z, to desribe T(x,z). The authors also consider a local mass creation term (\\\\xi) to weight the initial measure \\\\mu.\\n\\nThe authors emphasize the interest of the 2nd formulation, which, much like the original Monge problem, has an intractable push-forward constraint. This formulation is similar to recent work on Wasserstein Autoencoders (to which is added the scaling parameter). As with WAE, this constraint is relaxed to penalize the deviation between the \\\"random\\\" push-forward and the desired marginal. \\n\\nThe authors show then that the resulting problem, which involves a transportation cost integrated both on the random variable z and on the input domain x, weighted by xi + a simple penalization for xi + a divergence penalizing the deviation between push-forward and desired marginal, can be optimized altogether by using three NN: 1 for the parameterization of T, 1 for the parameterization of \\\\xi, and one to optimize using a function f a variational bound on the divergence. 2 gradient descents (T,\\\\xi), 1 gradient ascent (f, variational bound).\\n\\nThe authors then make a link between that penalize formulation and something that resembles unbalanced transport (I say resembles because there is some assymetry, and that the type of couplings is restricted). Finally the authors show that by letting increase the penalty in front of the divergence in (6) they recover something that looks like the solution of (2).\\n\\nFor the sake of completeness, the authors provide in the appendix an implementation of a simple dual ascent scheme to approximate unbalanced OT inspired from previous work by Seguy'17, and show that, unlike that work, their implicit parameterization of the scaling factor \\\\xi can help, and illustrate this numerically.\\n\\nI give credit to the authors for addressing a new problem and providing an algorithmic formulation to do so. That algorithm is itself recovered from an alternative formulation of unbalanced OT, and is therefore interesting in its own right. Unfortunately, I have found the presentation rushed. I really believe the paper would deserve an extensive re-write. Everything is fairly clear until Section 3. Then, the authors introduce their main contribution. Basically the section tries to prove two things at the same time, without really completing its job. One is to prove that \\\"dualizing\\\" the scaling+ random push-forward equality constraint is ok if one uses big enough regularizers (intuitive), the other that this scaled + random push-forward formulation is closely related to W_{ub}. This is less clear to me (see below). \\n\\nThe experiments are underwhelming. For faces they happen in latent spaces, and therefore one recovers transport between latent spaces later re-visualized through a decoder. For digits, all is fairly simple. 
They do not clearly mention whether this alternative UOT approach approximates UOT at all. Despite the title, there's no generation. Therefore my grade is really split between a 5 and a 6.\", \"minor_comments_and_questions\": [\"Is the reference to a local scaling (\\\\xi) for unbalanced transport entirely new? your paper is not clear on that, and it seems to me this idea already appears in the OT literature.\", \"I do not understand the connexion you make with GANs. In what sense can you interpret any of your networks as generators? To me it just feels like a simultaneous optimization of various networks, yet without a clear generative purpose. Technically there may be several similarities (as we optimize on networks), but I am not sure this justifies referencing GANs in the title. Additionally, and almost mechanically, putting GAN in your paper, the reader will expect some generation results..\", \"Numerical benchmarks: Is the technique you propose supposed to approximate the optimal value of Unbalanced OT at all? If yes, is there a way you could compare yourselves with Chizat's approach?\", \"Somewhere in Lemma 3.2 the fact that you had to use an alternative definition \\\\tilde{W} (by restricting the class of couplings) is not really clarified to the reader. Qualitatively, what does it mean that you restrict the class of couplings to have the same support as \\\\mu? In which situations would \\\\tilde{W} be very different from W_{ub} ? (which, if I understand correctly, only appears in (2) but not elsewhere in the paper?)\", \"I think it would help for the simple sake of readability to add integration domains under your \\\\int symbols.\", \"T is used as a subset in Lemma 3.1, while it is used after and before as a map of (x,z)\", \"T(x,z) looks intuitively like a noisy encoder as in Wasserstein AEs (with, of course, the addition of your term \\\\xi). Could you elaborate?\", \"I have scanned the paper but did not see how you set lambda.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"good effort for scalable unbalanced OT, theoretical aspect might be problematic\", \"review\": \"### post rebuttal### authors addressed most of my concerns and greatly improved the manuscript and hence I am increasing my score.\", \"summary\": \"The paper introduces a static formulation for unbalanced optimal transport by learning simultaneously a transport map T and scaling factor xi .\\n\\nSome theory is given to relate this formulation to unbalanced transport metrics such as Wasserstein Fisher Rao metrics for e.g. Chizat et al 2018. \\n\\nThe paper proposes to relax the constraint in the proposed static formulation using a divergence. furthermore using a bound on the divergence , the final discrepancy proposed is written as a min max problem between the witness function f of the divergence and the transport map T , and scaling factor xi. \\n\\nAn algorithm is given to find the optimal map T as a generator in GAN and to learn the scaling factor and the witness function of the divergence with a neural network paramterization , the whole optimized with stochastic gradient. \\n\\nSmall experimentation on image to image transportation with unbalance in the classes is given and show how the scaling factor behaves wrt to this kind of unbalance.\", \"novelty_and__originality\": \"The paper claims that there are no known static formulations known with a scaling factor and a transport map learned simultaneously. We refer the authors to Unbalanced optimal Transport: Geometry and Kantrovich Formulation Chizat et al 2015. In page 19 in this paper Equation 2.33 a similar formulation to Equation 4 in this paper is given. (Note that phi corresponds to T and lambda to xi). This is known as the monge formulation of unbalanced optimal transport. The main difference is that the authors here introduce a stochastic map T and an additional probabilty space Z. Assuming that the mapping is deterministic those two formulations are equivalent.\", \"correctness\": \"\", \"the_metric_defined_in_this_paper_can_be_written_as_follow_and_corresponds_to_a_generalization_of_the_monge_formulation_in_chizat_2015\": \"L(mu,nu)= inf_{T, xi} int c_1(x,T_x(z) ) xi(x) lambda(z) dmu(x) + int c_2(x_i(x)) dmu(x)\\n \\t\\t s.t T_# (xi mu)=nu\\nIn order to get a kantorovich formulation out of this chizat et al 2015 defines semi couplings and the formulation is given in Equations 3.1 page 20. \\n\\nThis paper proposes to relax T_# (xi mu)=nu with D_psi (xi \\\\mu, \\\\nu) and hence proposes to use:\\n\\nL(mu,nu)= inf_{T, xi} int c_1(x,T_x(z) ) xi(x) lambda(z) dmu(x) + int c_2(x_i(x)) dmu(x)+ D_psi (xi \\\\mu, \\\\nu)\\n\\nLemma 3.2 of the paper claims that the formulation above corresponds to the Kantrovich formulation of unbalanced transport. I doubt the correctness of this:\\n\\nInspecting the proof of Lemma 3.2 L \\\\geq W seems correct to me, but it is unclear what is going on in the proof of the other direction? The existence of T_x is not well supported by rigorous proof or citation? Where does xi come from in the third line of the equalities in the end of page 14? I don\\u2019t follow the equalities written at the end of page 14. \\n\\nAnother concern is the space Z, how does the metric depend on this space? 
should there be an inf on all Z?\", \"other_comments\": [\"Appendix A is good wish you baselined your experiments with those algorithms.\", \"The experiments don\\u2019t show any benefit for learning the scaling factor, are there any applications in biology that would make a better case for this method?\", \"What was the architecture used to model T, xi, and f?\", \"Improved training dynamics in the appendix, it seems you are ignoring the weighting while optimizing on theta? than how would the weighing be beneficial ?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByxkCj09Fm | DEEP HIERARCHICAL MODEL FOR HIERARCHICAL SELECTIVE CLASSIFICATION AND ZERO SHOT LEARNING | [
"Eliyahu Sason",
"Koby Crammer"
] | Object recognition in real-world image scenes is still an open problem. With the growing number of classes, the similarity structures between them become complex and the distinctions between classes blur, which makes the classification problem particularly challenging. Standard N-way discrete classifiers treat all classes as disconnected and unrelated, and are therefore unable to learn from their semantic relationships. In this work, we present a hierarchical inter-class relationship model and train it using a newly proposed probability-based loss function. Our hierarchical model provides significantly better semantic generalization ability compared to a regular N-way classifier. We further propose an algorithm that, given a probabilistic classification model, returns the super-group corresponding to an input based on the class hierarchy, without any further learning. We deploy it in two scenarios in which super-group retrieval can be useful. The first, selective classification, deals with the problem of low-confidence classification, wherein a model is unable to make a successful exact classification.
The second, zero-shot learning, deals with making reasonable inferences about novel classes. Extensive experiments in the two scenarios show that our proposed hierarchical model yields more accurate and meaningful super-class predictions than a regular N-way classifier because of its significantly better semantic generalization ability. | [
"deep learning",
"large-scale classificaion",
"heirarchical classification",
"zero-shot learning"
] | https://openreview.net/pdf?id=ByxkCj09Fm | https://openreview.net/forum?id=ByxkCj09Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJew6nYge4",
"HJe5zaIiCX",
"ByemSkfqCQ",
"B1l57E9OAQ",
"B1lPUv5yTm",
"rkeQcM9y6m",
"rJeu1Ensnm",
"BygR5oLw27",
"rygXQEFS2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752318553,
1543363858279,
1543278394839,
1543181346275,
1541543758633,
1541542539224,
1541288928051,
1541004181721,
1540883483060
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper860/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper860/Authors"
],
[
"ICLR.cc/2019/Conference/Paper860/Authors"
],
[
"ICLR.cc/2019/Conference/Paper860/Authors"
],
[
"ICLR.cc/2019/Conference/Paper860/Authors"
],
[
"ICLR.cc/2019/Conference/Paper860/Authors"
],
[
"ICLR.cc/2019/Conference/Paper860/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper860/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper860/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes to take into accunt the label structure for classification\\ntasks, instead of a flat N-way softmax. This also lead to a zero-shot setting\\nto consider novel classes. Reviewers point to a lack of reference to prior\\nwork and comparisons. Authors have tried to justify their choices, but the\\noverall sentiment is that it lacks novelty with respect to previous approaches.\\nAll reviewers recommend to reject, and so do I.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"reply to reviwer 1\", \"comment\": \"Hi,\\nThanks you very much on your detailed feedback. I will reply according to the issue you mentioned.\", \"references_part\": \"You are truly right in that. One of my colleague told me too that I need to improve this issue, and I had worked on this issue mainly before I got your feedback. In my last revision I improved this issue even more. I find interesting claims regards to \\u201cLarge-scale object classification using label relation graphs\\u201d work. In my results I show that their exclusion mechanism may be too strict, by giving similarity\\nweight even to labels which share predecessor we improve semantic ability.\", \"comparison_to_prior_work\": \"1) I tried to work on this issue too. I go over many articles, but I find only few works which report a their semantic metric like hp@k, unfortunately all those works report their performance only on small datasets like: cifar100 or AWA, which are relatively to ILSVRC12. Our aim is to deals with semantics in large-scale scenarios, which are far more comlpex. Because of this issue I didn't refer to those work. \\nI compared my work to the standard hard-model and to DeVise. DeVise has two parts in their work and one of it's claims is that this model has a good semantic ability. In the first experiment I refer to this claim.\\nI added samples of my soft-model results which shows many interesting semantic abilities aspects. \\n2) Regards the zero-shot issue. In the new revision I emphasis that we are dealing with a soften version of zero-shot which I called hierarchical zero-shot. Nevertheless, I think that our method is very reasonable, while state of the art methods on zero-shot learning which try to give a specific class, gives about 5%. Our method gives more than 70% with identifying a high quality super-category i.e which is close related to the true class. Once you have a super-class one can used fined-grained method to give even more specific class. \\n3) Another interesting result regarded the benefits in super-group retrieval is where the standard model miss the top-1. We obtain on this case more than 80%. You can look on those samples.\\n\\nEXPERIMENTAL SETUP. \\n1) Sorry, but I think that maybe I wasn't very clear in my first revision. The standard and our soft-models trained only on the leaf nodes. That is these classifiers return scores for ILSVRC12-1K classes only.\\nI didn't make any learning on the ancestors of the 1K classes. \\n2) I proposed an algorithm which gets a probabilistic model as mentioned. \\nMoreover, by assuming that the leaf's taxonomy is known our algorithm can used this taxonomy to return super-class. That is I can tell who are the ancestor from knowing who are the relevant children. \\n3) Super-group retrieval as I mentioned is a novel concept. Therefore, I define a new evaluation metric and couldn't compare to prior work directly. The two cases deals with scenarios when trying to return the specific category we get very poor results I indicate this, In such cases we can benefit from super-class if we get it with significant performance improvement. As I shown.\\nI added my results statistics and added more illustrations which show the advantages of this issue. \\n\\nWRITING\\nI made much work with it. I added samples of models outcomes. I hope that it is much clear. I can check about sending my work to professional editor if needed. 
\\n\\nMOREOVER\\nIn order to give a stronger justification to the soft weighting mechanism I added in the appendix a section which deals with this issue.\"}",
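The taxonomy-based retrieval described in this reply can be pictured with a small sketch: leaf probabilities from any probabilistic classifier are summed over each ancestor's descendant leaves, and the deepest ancestor whose accumulated mass clears a confidence threshold is returned. The toy taxonomy, the threshold, and the tie-breaking are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

leaves_of = {            # ancestor -> descendant leaf indices (toy taxonomy)
    "animal": [0, 1, 2, 3],
    "dog":    [0, 1],
    "cat":    [2, 3],
}
depth = {"animal": 1, "dog": 2, "cat": 2}

def super_group(leaf_probs, threshold=0.8):
    # keep ancestors whose descendant-leaf mass clears the threshold,
    # then return the deepest (most specific) one
    candidates = [(depth[a], a) for a, ls in leaves_of.items()
                  if leaf_probs[ls].sum() >= threshold]
    return max(candidates)[1] if candidates else None

p = np.array([0.45, 0.40, 0.10, 0.05])  # uncertain between two dog breeds
print(super_group(p))                   # -> "dog": confident at the super-class level
```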
"{\"title\": \"response to reviewer 2\", \"comment\": \"Hi,\\nThanks on your feedback. \\nTruly, as u wrote the super-group concept is truly very interesting and novel.\\nI emphasis this in the zero-shot results part and in conclusion sections. Although the super-group concept is based on understanding the data and benefits from the soft-model, the result are fantastic. and need to be considered to be publish in the conference.\\nA) for zero-shot learning state of the art methods gives about 5% top1 accuracy, while trying to give a specific class. While our method gives more than 70% with identifying a valid super-category which is close related to the true class. Once you have a super-class one can used fined-grained method to give even more specific class.\\nB) Another interesting result is when trying to retrieve super-class in the cases where the standard model miss the top-1. We obtain in this case more than 80%. You can look on those samples which are very impressive.\\n\\nI think that the concept of the soft-weighting mechanism is truly straight-forward, but it is very interesting too. In order to give more insight on this part I add an explanation and justification in appendix C.\\n\\nTruly, the taxonomy labeling issue can be an interesting investigation. I tried to make some experiments in this direction, but I still have more work to do. At this stage I added this issue to future work in conclusion part.\\nMoreover, I indirectly refer to this issue in the part SUPER-GROUP RETRIEVAL: CHOOSING HYPER-PARAMETERS.\\nI tried to see what is the impact of different f values, which compared to hard-model. In those cases I tried to show what is the impact of weighting far and more far distance levels. another case is where I skip a full distance level. From those cases we can get an intuition about absence of information in the taxonomy.\\n\\nBeyond those issues, I made much editing work, put more related work, and illustrate the soft-model benefits with many examples. \\n\\nBest.\"}",
"{\"title\": \"revision 2.0, 2.1,2.2\", \"comment\": [\"revision 2.0\", \"The big difference in this revision from revision 1 is a much clear work.\", \"Significant English improvement and notation issues.\", \"Add explanations about concepts which was not clear to readers.\", \"Add samples which illustrates the importance of this work.\", \"Emphasize the the contribution of this work.\", \"-Add more related works which deals with semantic classification and refer to state of the art zero-shot works.\", \"revision 2.1\", \"Give a justification for the weighting mechanism (in appendix).\", \"revision 2.2\", \"-fix space issues in order to be in 10 pages limits.\", \"put appendix after bib. as was asked in ICLR guidelines.\", \"next revision\", \"Add samples for hierarchical zero-shot.\", \"add the h-Correct set generation algorithm for completeness of supplementary information.\", \"14.12 - I improved grammar English issues. If needed as mentioned I will be able to send the article to professional editor.\"]}",
"{\"title\": \"response to reviewer 3\", \"comment\": \"hi,\\nFirst, thanks on reviewing my work. your feedback is very important to me.\\n1) This is my first published academic doc, so I worked according to the guidelines that I saw. I didn't see a direct guideline about the ack part. So made this mistake because lacking experience. I can propose that for next a direct guideline for blind names from ack. will be mentioned as it done in the authors names part.\\nhope that u can reply to me although about next issues\\u2026. \\n\\n2) The article has two main novelties the first is as u mentioned. The second is the proposed algorithm where given a probabilistic model it can return 'good' super-class based only on the taxonomy without another learning process. I showed that the soft model gives better super-class on two scenarios.\\n3) You mentioned that the experiment part is weak. Which experiments should I add?\\n - In order to prove the semantic generalization ability, I compared the soft-model to hard-model using two different topologies (Alexnet and Resnet50) and to DeVise.\\n - In order to prove that the model has better super-class. I compared the soft-model to the hard-model on the two scenarios. \\n4) Can u please give me a reference to an article about zero-shot where this soft-taxonomy based is made?\\n5) Truly I used Frome proposed metric to measure the semantic ability of a model. But, our work is completely different. Frome is an embedding based solution. I propose a loss function which exploits the inter-class hierarchy. \\n6) I justify the weighting through experiments which visualizing in Fig. 2 left and table 2.\\nI can add another theoretical justification in the appendix if u give me another chance.\\n7) About the equation in page 8, I put a reference where I found a version of this equation.\\nthe work of defining the selective term is not mine.\\nBest Regards, Thx\"}",
"{\"title\": \"revision 1\", \"comment\": \"1) Clarify the novelties of the article in the abstract and in introduction too.\\n2) Put more related work (main part)\\n3) Add another experiment in section 5.3 ZERO SHOT LEARNING on bigger zero-shot dataset called 3-hops relative to the 2-hop dataset.\\n4) add more conclusions and future work.\\n5) Improve grammar issues structure issues (moving taxonomy figure)\\n6) remove the ack part.\\n\\n\\nI mainly had updated this revision before have got a feedback. I am going to improve it more according the mentioned issues. Still I wanted to put this revision because I think this is much better version I still have a work to do and will work on it.\\nRegards\"}",
"{\"title\": \"Missing key references\", \"review\": \"SUMMARY\\nThe paper presents a method for classification which takes into account the semantic hierarchy of output labels, rather than treating them as independent categories. In a typical classification setup, the loss penalizes the KL-divergence between the model\\u2019s predicted label distribution and a one-hot distribution placing all probability mass on the single ground-truth label for each example. The proposed method instead constructs a target distribution which places probability mass not only on leaf category nodes but also on their neighbors in a known semantic hierarchy of labels, then penalizes the KL-divergence between a model\\u2019s predicted distribution and this target distribution. This model is used for classification on ImageNet-1k, and for zero-shot classification on ImageNet-21k where a model must predict superclasses seen during training for images of leaf categories not seen during training.\", \"pros\": [\"Method is fairly straightforward\", \"Modeling relationships between labels is an important problem\"], \"cons\": \"- Missing references to key prior work in this space\\n- Minimal comparison to prior work\\n- Confusing experimental setup\\n- Paper is difficult to read\\n\\nMISSING REFERENCES\\nThis paper is far from the first to consider the use of a semantic hierarchy to improve classification systems; see for example:\\n\\nDeng et al, \\u201cHedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition\\u201d, CVPR 2012\\n\\nDeng et al, \\u201cLarge-scale object classification using label relation graphs\\u201d, ECCV 2014 (Best Paper)\\n\\nJiang et al, \\u201cExploiting feature and class relationships in video categorization with regularized deep neural networks\\u201d, TPAMI 2017\\n\\nNone of these are cited in the submission. [Deng et al, 2014] is particularly relevant, as it considers not just \\u201cis-a\\u201d relationships as in this submission, but also mutual exclusion relationships between categories. Without citation, discussion, and comparison with some of these key pieces of prior work, the current submission is incomplete.\\n\\nCOMPARISON TO PRIOR WORK\\nThe only direct comparison to prior work in the paper is the comparison to DeViSE on ILSVRC12 classification performance in Table 3. However since DeViSE was intended to be used for zero-shot learning and not traditional supervised classification, this comparison seems unfair.\\n\\nInstead the authors should compare their method against DeViSE and ConSE for zero-shot learning. Indeed, in Section 4.3 the authors construct a test set \\u201cin a [sic] same manner defined in Frome et al\\u201d but do not actually compare against this prior work.\\n\\nI suspect that the authors chose not to perform this comparison since unlike DeViSE and ConSE their method cannot predict category labels not seen during training; instead it is constrained to predicting a known supercategory when presented with an image of a novel leaf category. As such, the proposed method is not really \\u201czero-shot\\u201d in the sense of DeViSE and ConSE.\\n\\nEXPERIMENTAL SETUP\\nFrom Section 3.1, \\u201cwe adopt a subset of ImageNet the ILSVRC12 dataset which gather [sic] 1K classes [...]\\u201d. The 1000 category labels in ILSVRC12 are mutually exclusive leaf nodes; when placed in the context of the WordNet hierarchy there are 820 internal nodes between these leaves and the WordNet root. 
As a result, for the method to make sense I assume that all models must be trained to output classification scores for all 1820 categories rather than the 1K leaf categories. This should be made more explicit in the paper, as it means that none of the performance metrics reported in the paper are comparable to other results on ILSVRC12 which only measure performance on the 1K leaf categories.\\n\\nThe experiments on zero-shot learning are also confusing. Rather than following the existing experimental protocol for evaluating zero-shot learning from [Frome et al, 2013] and [Norouzi et al, 2013] the authors evaluate zero-shot learning by plotting SG-hit vs SG-specificity; while these are reasonable metrics, they make it difficult to compare with prior work.\\n\\nPOOR WRITING\\nThe paper is difficult to follow, with confusing notation and many spelling and grammatical errors.\\n\\nOVERALL\\nOn the whole, the paper addresses an important problem and presents a reasonable method. However due to the omission of key references and incomplete comparison to prior work, the paper is not suitable for publication in its current form.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting paper but can still be improved.\", \"review\": \"This paper proposes a new soft negative log-likelihood loss formulation for multi-class classification problems. The new loss is built upon the taxonomy graph of labels, which is provided as external knowledge, and this loss provides better semantic generalization ability compared to a regular N-way classifier and yields more accurate and meaningful super-class predictions.\\n\\nThis paper is well-written. The main ideas and claims are clearly expressed. The main benefits of the new loss are caused by the extra information contained by the taxonomy of labels, and this idea is well-known and popular in the literature. Based on this reason, I think the main contribution of this paper is the discussion on two novel learning settings, which related to the super-classes. However, the formulation of the new soft NLL loss and the SG measurement involves lots of concepts designed based on experiences, so it\\u2019s hard to say whether these are the optimal choices. So, I suggest the authors discuss more on these designs.\\nAnother thing I concern about is the source of label taxonomy. How to efficiently generate the taxonomy? What if the taxonomy is not perfect and contains noises? Will these significantly affect the models\\u2019 performance? I think it\\u2019s better to take these problems into consideration. \\nIn conclusion, I think this is an interesting paper but can still be improved.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper violates the double blind review policy\", \"review\": \"First of all, the paper cannot be accepted because it violates the double blind submission policy by including an acknowledgments section.\\n\\nNonetheless, I will give some brief comments:\\n\\n The paper proposes a probabilistic hierarchical approach to perform zero-shot learning.\\nInstead of directly optimizing the standard cross-entropy loss, the paper considers some soft probability scores that consider some class graph taxonomy.\\n\\n The experimental section of the paper is strong enough although more baselines could have been tested. The paper only compares the usual cross entropy loss with their proposed soft-classification framework. \\nNonetheless, different architectures of neural networks are tested on ImageNet and validate the fact that the soft probability strategy improves performance on the zero-shot learning task.\\n\\n \\nOn the other hand, the theoretical aspect is weak. The proposed method seems to be a straightforward extension of Frome et al., NIPS 2013. The main contribution is that soft probability scores are used to perform classification instead of using only class membership information.\\n\\nSome weighting strategy is proposed in Section 2.2 but the proposed steps seem very ad hoc with no theoretical justification. The first equation on page 8 has the same problem where some random definition is provided.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1gkAoA5FQ | A bird's eye view on coherence, and a worm's eye view on cohesion | [
"Woon Sang Cho",
"Pengchuan Zhang",
"Yizhe Zhang",
"Xiujun Li",
"Mengdi Wang",
"Jianfeng Gao"
] | Generating coherent and cohesive long-form texts is a challenging problem in natural language generation. Previous works relied on a large amount of human-generated texts to train neural language models, however, few attempted to explicitly model the desired linguistic properties of natural language text, such as coherence and cohesion using neural networks. In this work, we train two expert discriminators for coherence and cohesion to provide hierarchical feedback for text generation. We also propose a simple variant of policy gradient, called 'negative-critical sequence training' in which the reward 'baseline' is constructed from randomly generated negative samples. We demonstrate the effectiveness of our approach through empirical studies, showing improvements over the strong baseline -- attention-based bidirectional MLE-trained neural language model -- in a number of automated metrics. The proposed model can serve as baseline architectures to promote further research in modeling additional linguistic properties for downstream NLP tasks. | [
"text generation",
"natural language processing",
"neural language model"
] | https://openreview.net/pdf?id=r1gkAoA5FQ | https://openreview.net/forum?id=r1gkAoA5FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syeq9UiexV",
"BkxvpTtKRQ",
"Byl8EpYFAQ",
"HyxYypFKCX",
"HygrK3FFCX",
"Hygv83tF0m",
"Hkg4oaLsn7",
"Bkekcch5nX",
"r1e1frrd37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544758929534,
1543245247022,
1543245102378,
1543245024538,
1543244924565,
1543244878926,
1541266843529,
1541225095235,
1541063943309
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper859/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper859/Authors"
],
[
"ICLR.cc/2019/Conference/Paper859/Authors"
],
[
"ICLR.cc/2019/Conference/Paper859/Authors"
],
[
"ICLR.cc/2019/Conference/Paper859/Authors"
],
[
"ICLR.cc/2019/Conference/Paper859/Authors"
],
[
"ICLR.cc/2019/Conference/Paper859/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper859/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper859/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper attempts at modeling coherence of generated text, and proposes two kinds of discriminators that tries to measure whether a piece of text is coherent or not.\\n\\nHowever, the paper misses several related critical references, and also lacks extensive evaluation (especially manual evaluation).\\n\\nThere is consensus between the reviewers that this paper needs more work before it is accepted to a conference such as ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}",
"{\"title\": \"Addressing the common misunderstanding\", \"comment\": \"Let us elaborate, through examples, on the types of negative samples the discriminators are trained to discriminate. Consider the two real reviews, A and B, taken from our dataset.\\n\\nReview A (1/2): we had several recommendations from friends where to stay , but opted to take a chance here . the reviews sounded quite positive and convincing . we were not disappointed . the room was small but very clean , and we were able to store our luggage . the shower was hot with great water pressure , but it was a handheld shower which makes a quick shower difficult .\\n\\nReview A (2/2): the location could not be beat . we were able to walk so many places and the metro access was right nearby . the location was quiet because it was tucked in just off the beaten path . the staff was the best part . they were able to help us find our way anywhere and made great recommendations for dinner .\\n\\nReview B (1/2): after reading various reviews , i decided to stay at the best western downtown while visiting my daughter in vancouver . after one night , i knew it was the wrong place for me . the room was tiny although clean with nice amenities . the noise from the street was terrible in spite of the fact that i had a sleep machine running . i like a dark room to sleep in , and light streamed through the edges of the curtains .\\n\\nReview B (2/2): worst of all , the neighborhood made me uncomfortable , not dangerous but panhandlers and homeless people on the street corners . i have never been uncomfortable walking in downtown vancouver , but i did in this area . i was also very disappointed that there was no complimentary breakfast , only an overpriced , average restaurant attached to the hotel . considering all the good places to eat in vancouver , you don t want to waste your money at the white spot restaurant . we checked out as soon as possible and moved to the hampton inn , which is where we should have stayed from the beginning .\\n\\nSuppose you read [Review B (1/2), Review A(2/2)] and [Review A (1/2), Review B (2/2)]. These are aspects of incoherence that the coherence discriminator learns to pick up by seeing a large number of positive samples and such negative samples. In fact this is one example of incoherence. To address your concern, some other negative samples exhibit redundancy and the discriminator learns to distinguish through modifying source and target networks that have different parameters.\"}",
"{\"title\": \"Author response to AnonReviewer2\", \"comment\": \"Thank you for the comments and mentioning relevant prior work.\\n\\n(a)\\nWe believe there are some misunderstanding about the main technique that underlies our proposed discriminators. Regarding the use of cosine similarity, each source and target in a sample pair is processed through different networks which have the same architecture but different parameters. See Figure 1 for the illustration. Therefore, the example of two pairs provided in your comments in fact give two \\u2018different\\u2019 cosine similarity scores. This technique, in various forms, has been applied in information retrieval, web search ranking, image captioning to list a few. See Huang et al. CIKM 2013.\\n\\nHere we provide an example taken from the TripAdvisor dataset and scored through our pre-trained cohesion model.\\n\\nit is full of homeless people . however , they did not bother us . \\u2192 0.14\\n\\nhowever , they did not bother us . it is full of homeless people . \\u2192 -0.12\\n\\nWe revised our writing appropriately. \\n\\n(b)\\nOur model is not directly comparable to Learning2Write (L2W) model for a number of reasons. There are differences in the architecture. For example, we do not use adaptive softmax for modeling the probability distribution of the vocabulary, nor do we have the same vocabulary implementation structures and word embedding functions. Yet, these architectural differences can be solved via engineering. The more fundamental problem is that we cannot use the modified beam search decoding scheme as proposed in their work. In L2W, the modified beam search decoding scheme is integrated with their discriminators, and is coded for only a simple sample input, and no batch sample which disables creating negative sample batch for our discriminators at decoding time. Also our discriminators provide scores on fully generated sequences rather than partially generated sequence. These key differences in scoring and sampling process make it difficult to compare. They do not provide the full data so we are currently working on our own implementation to compare with, and show efficacy of our discriminators.\\n\\n(c)\\nThank you for letting us know the missed prior work. We have modified our writing accordingly. We are currently working on human evaluation.\"}",
"{\"title\": \"Author response to AnonReviewer3\", \"comment\": \"Thank you for your thoughtful comments. Your comments have been helpful.\\n\\nWe agree that our tables and figures are presented in a confusing manner. We revised our submission to make our presentation clear. \\n\\n(a)\\n\\u201cPerhaps the biggest confusion for me was the difference between cohesion and coherence, and in particular how they are modeled. The intro does a good job of describing the two concepts, and making the contrast between local and global coherence, but when I was reading 3.1 I kept thinking this was describing cohesion (\\\"T that follows S in the data\\\" - sounds local, no?). And then 3.2 seems to suggest that coherence and cohesion essentially are being modeled in the same way, except shuffling happens on the word level? I suppose what I was expecting was some attempt at a global model for coherence which goes beyond just looking at consecutive sentence pairs.\\u201c\\n\\nWe would like to clarify how our coherence and cohesion models work. The input to the coherence model is a pair of sequence of `sentence\\u2019 embeddings (from source text chunk and target text chunk), whereas the input to the cohesion model is a pair of sequence of `word\\u2019 embeddings (from two consecutive sentences). This is the fundamental difference between the coherence and cohesion models. Thus the coherence model has a global view to judge an entire paragraph. On the other hand, the cohesion model has a local view to judge any two neighboring sentences. \\n\\nIn section 3.1, \\u201cT that follows S in the data\\u201d contains the global coherence relation, because T and S are multi-sentence text chunks. In section 3.2, consecutive sentence pair (s_{i,1} and s_{i,2}) contains the local cohesion relation.\\n\\n(b)\\n\\u201cI wonder why you didn't try a sequence model of sentences (eg bidirectional LSTM). These are so standard now it seems odd not to have them. \\u201c\\n\\nWe chose the CNN encoder because this is the standard architecture in relevant text generation works using GANs, such as SeqGAN, TextGAN or LeakGAN. However, we took this opportunity to test bidirectional RNN and posted new results. \\n\\nNotice that for RNNs outperform CNNs for coherence models, and CNNs outperform RNNs for cohesion models. One explanation is that RNNs are effective in encoding a sequential input yet exhibit drawbacks when encoding into hidden states at both ends of a `long\\u2019 input, otherwise well-known as a long-range dependency problem. \\n\\n(c)\\n\\u201cDo you describe the decoding procedure (greedy? beam?) at test time anywhere?\\u201d\\n\\nWe used greedy decoding at test time. When we experimented with a larger beam size, we did not see significant difference in text generation quality and it was more computationally intensive.\"}",
"{\"title\": \"Author response to AnonReviewer1 (2)\", \"comment\": \"(c)\\n\\u201cthese negative samples are not (I don't think) at all reflective of the types of texts that the discriminators actually need to discriminate, i.e. automatically generated texts.\\u201d \\n\\nWe admit that the name \\\"discriminator\\\" is confusing. We use the discriminator in a RL fashion, instead of a GAN fashion. In the RL framework, these negative samples to train the ``discriminator\\u2019\\u2019 do not need to reflect the automatically generated texts. Our loss is MLE loss + RL loss. The RL reward only plays a role of regularizer. More precisely, during the training, the discriminator rewards the generated sentences similar to real data and penalizes generated sentences that are similar to our constructed negative examples. The MLE loss part penalizes all samples, including the automatically generated texts, that are not the same with the training data.\\n\\nIt is straightforward to extend our RL framework to a GAN framework, where the discriminators are jointly trained with the generator, as we pointed out in our future work. In this case, the discriminator will directly use the generated sentences as negative examples and the reward function even does not need the MLE part any more.\\n\\n(d)\\n\\u201cYou only use automated metrics, despite acknowledging that there is no good way to evaluate generation. Why not use human eval? This is not difficult to carry out, and when you are arguing about such subtle properties of language, human eval is essential. There is no reason that BLEU, for example, would be sensitive to coherence or cohesion, so why would this be a good way to evaluate a model aimed to capture exactly those things?\\u201d\\n\\nYes, we agree that human evaluation is essential and we are currently working on this. To answer your concern, in fact, our coherence and cohesion models do have implications on BLEU as they are trained to provide rewarding or penalizing signals. Notice that cohesion model in particular sees constructed negative samples with lower BLEU scores, thus able to provide rewarding or penalizing signals. The coherence model assesses how sentences are organized and the generator model, which is continually trained via MLE, is regularized modifying its policy and mimicking data distribution.\\n\\n(e)\\n\\u201cThus, even ignoring the fact that I disagree with the authors on exactly what the discriminators are/should be doing, it is still not clear to me that the discriminators are well trained to do the thing the authors want them to do. \\u2026 Do they correlate with human judgments of coherence and cohesion?\\u201c \\n\\nIt is possible that the correlation we capture is not coherence (or cohesion) which people typically have in mind, because our approach is learning purely from raw data. We showed examples from our trained coherence (or cohesion) model, and showed that its rating indeed aligns well with our typical impression; see our experiment section. Therefore, our coherence (or cohesion) models indeed learn some correlation that align well with the coherence (or cohesion) concept.\"}",
"{\"title\": \"Author response to AnonReviewer1 (1)\", \"comment\": \"Thank you for your comments.\\n\\nWe would like to clarify our methodology. Apparently, our AnonReviewer2 also had the same misunderstanding so we append the same example to further help you understand our approach. We revised our writing to make this point clear. We would like to respond to your comments hereafter. \\n\\n(a)\\n\\u201cAs I understand it, these models are just being trained to incentivize high cosine similarity between the words in the first/second half of a document (or sentence/following sentence). That is not reflective of the definitions of coherence and cohesion, which should reflect deeper discourse and even syntactic structure. Rather, these are just models which capture topical similarity, and naively at that. \\u201c \\n\\nOn one hand, our model has the potential to capture the coherence (cohesion) correlation between two halves of a paragraph. If the source and the target networks shared the parameters, then your intuition is indeed correct. Quoting your words, the discriminators would be \\u201ctrained to incentivize high cosine similarity between the words in the first/second half of a document\\u201d and simply \\u201ccapture topical similarity, and naively at that\\u201d. However, coherent and cohesive paragraphs have particular \\u201csyntactic structures\\u201d which we are glad that you mentioned, therefore we modeled feature extraction through convolutional layers and process the first and second half of a paragraph with different networks (yet same architecture). Differences in parameter weights in the convolution layer and fully connected layers will govern how semantic and syntactic features are extracted for either half of the paragraph. \\n\\nRegarding the use of cosine similarity, each source and target in a sample pair is processed through different networks which have the same architecture but different parameters. See Figure 1 for the illustration. Therefore, the example of two pairs provided in AnonReviewer2's comments in fact give two \\u2018different\\u2019 cosine similarity scores. This technique, in various forms, has been applied in information retrieval, web search ranking, image captioning to list a few. See Huang et al. CIKM 2013.\\n\\nHere we provide an example taken from the TripAdvisor dataset and scored through our pre-trained cohesion model.\\n\\nit is full of homeless people . however , they did not bother us . \\u2192 0.14\\n\\nhowever , they did not bother us . it is full of homeless people . \\u2192 -0.12\\n\\nOn the other hand, it is possible that the correlation we capture is not coherence (or cohesion) which people typically have in mind, because our approach is learning purely from raw data. We showed examples from our trained coherence (or cohesion) model, and showed that its rating indeed aligns well with our typical impression; see our experiment section. Therefore, our coherence (or cohesion) models indeed learn some correlation that align well with the coherence (or cohesion) concept.\\n\\nI hope this answers your concerns.\\n\\n(b) \\n\\u201cMoreover, training this model to discriminate real text from randomly perturbed text seems problematic since 1) randomly shuffled text should be trivially easy to distinguish from real text in terms of topical similarity \\u2026 You mention several times that these models will pick up on redundancy. It is not clear to me how they could do that. Aren't they simply using a cosine similarity between feature vectors? 
Perhaps I am missing something, but I don't see how this could learn to disincentivize redundancy but simultaneously encourage topical similarity. Could you explain this claim? \\u201c\\n\\nLet us consider generating three types of negative samples for coherence discriminator. Given a batch of source text and target text pairs, we mismatch the source and target pairs, which is what we mean by \\u2018rotating target texts with source texts fixed\\u2019. Also given the true source text and target text pair, we shuffle the sentence-wise orders of the target text. Finally, combine for previous two methods. \\n \\nAlthough we admit that some negative samples may be trivially easy to distinguish, some other negative samples may be difficult if only the sentence order is perturbed, even difficult to us humans.\"}",
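As a concrete reading of the three constructions just listed (rotated pairs, shuffled targets, and both combined), here is a minimal stdlib-only sketch; the representation of texts as lists of sentences and the function name are our assumptions.

```python
import random

def build_negative_pairs(sources, targets):
    """Three kinds of negatives for the coherence discriminator:
    (1) rotate targets against fixed sources (mismatched pairs),
    (2) shuffle the sentence order within each true target,
    (3) apply both corruptions together."""
    n = len(targets)
    rotated = [(sources[i], targets[(i + 1) % n]) for i in range(n)]
    shuffled = [(s, random.sample(t, len(t))) for s, t in zip(sources, targets)]
    both = [(s, random.sample(t, len(t))) for s, t in rotated]
    return rotated + shuffled + both
```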
"{\"title\": \"Missing relevant comparisons, evaluations, and references\", \"review\": \"This paper addresses long-text generation, with a specific task of being given a prefix of a review and needing to add the next five sentences coherently. The paper proposes adding two discriminators, one trained to maximize a cosine similarity between source sentences and target sentences (D_{coherence}) and one trained to maximize a cosine similarity between two consecutive sentences. On some automatic metrics like BLEU and perplexity, an MLE model with these discriminators performs a little bit better than without.\\n\\nThis paper does not include any manual evaluation, which is critical for evaluating the quality of generated output, especially for evaluating coherence and cohesion. This paper uses the task setup and dataset from \\\"Learning to Write with Cooperative Discriminators\\\", Holtzman et al., ACL 2018. That paper also includes many specified aspects to improve the coherence (from the abstract of that paper \\\"Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.\\\"). But this paper:\\n--Does not compare against the method described in Holtzman et al., or any other prior work\\n--Does not include any human evaluations, even though they were the main measure of evaluation in prior work.\\n\\nThis paper states that \\\"To the best of our knowledge, this paper is the first attempt to explicitly capture cross-sentence linguistic properties, i.e., coherence and cohesion, for long text generation.\\\" There is much past work in the NLP community on these. For example, see:\\n \\\"Modeling local coherence: An entity-based approach\\\" by Barzilay and Lapata, 2005 (which has 500+ citations). \\nIt has been widely studied in the area of summarization, for example, \\n\\\"Using Cohesion and Coherence Models for Text Summarization\\\", Mani et al., AAAI 1998, and follow-up work.\\nAnd in more recent work, the \\\"Learning to Write\\\" paper that the dataset and task follow from addresses several linguistically informed cross-sentence issues like repetition and entailment. \\n\\nThe cosine similarity metric in the model is not very well suited to the tasks of coherence and cohesion, as it is symmetric, while natural language isn't. The pair:\\n\\\"John went to the store to buy some milk.\\\"\\n\\\"When he got there, they were all out.\\\"\\n\\nand \\n\\n\\\"When he got there, they were all out.\\\"\\n\\\"John went to the store to buy some milk.\\\"\\n\\nwould have identical scores according to a cosine similarity metric, while the first ordering is much more coherent than the second.\\n\\nThe conclusion says \\\"we showed a significant improvement\\\": how was significance determined here?\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"overall weak evaluation and too many unsubstantiated claims\", \"review\": [\"The paper proposes a method for improving the quality of text generation by optimizing for coherence and cohesion. The authors develop two discriminators--a \\\"coherence discriminator\\\" which takes as input all of the sentence embeddings (i.e. averaged word embeddings) of the document and assigns a score, and a \\\"cohesion discriminator\\\" which takes as input the word embeddings of two consecutive sentences and assigns a score. In the former, the score is the cosine similarity between the encodings of the first and second half of the document. In the latter, the score is the cosine similarity between the encodings of the two sentences. Both discriminators use CNNs to encode the inputs. The discriminators are trained to rank true text over randomly drawn negative samples, which consist of randomly permuted sentence orderings and/or random combinations of first/second half of documents. This discriminators are then used to train a text generation model. The output of the text generation model is scored by various automatic metrics, including NLL, PPL, BLEU, and number of unique ngrams in the outputs. The improvements over a generically-trained generation model are very small.\", \"Overall, I did not find this paper to be convincing. The initial motivation is good--we need to find a way to capture richer linguistic properties of text and to encourage NLG to produce such properties. However, the discriminators presented do not actually capture the nuances that they purport to capture. As I understand it, these models are just being trained to incentivize high cosine similarity between the words in the first/second half of a document (or sentence/following sentence). That is not reflective of the definitions of coherence and cohesion, which should reflect deeper discourse and even syntactic structure. Rather, these are just models which capture topical similarity, and naively at that. Moreover, training this model to discriminate real text from randomly perturbed text seems problematic since 1) randomly shuffled text should be trivially easy to distinguish from real text in terms of topical similarity and 2) these negative samples are not (I don't think) at all reflective of the types of texts that the discriminators actually need to discriminate, i.e. automatically generated texts. Thus, even ignoring the fact that I disagree with the authors on exactly what the discriminators are/should be doing, it is still not clear to me that the discriminators are well trained to do the thing the authors want them to do. I have various other concerns about the claims, the approach, and the evaluation. A list of more specific questions/comments for the authors is below.\", \"There are a *lot* of unsubstantiated claims and speculation about the linguistic properties that these discriminators capture, and no motivation of analysis as to how they are capturing it. Claims like the following definitely need to be removed: \\\"learn to inspect the higher-level role of T, such as but not limited to, whether it supports the intent of S, transitions smoothly against S, or avoids redundancy\\\", \\\"such as grammar of each of the sentences and the logical flow between arbitrary two consecutive sentences\\\"\", \"You only use automated metrics, despite acknowledging that there is no good way to evaluate generation. Why not use human eval? 
This is not difficult to carry out, and when you are arguing about such subtle properties of language, human eval is essential. There is no reason that BLEU, for example, would be sensitive to coherence or cohesion, so why would this be a good way to evaluate a model aimed to capture exactly those things?\", \"Also related to human eval, there should be an intrinsic evaluation of the discriminators. Do they correlate with human judgments of coherence and cohesion? You cannot take it for granted that they capture these things (I very much believe they do not), so present some evidence that the models do what you claim they do.\", \"The reported improvements are minuscule, to the extent that I would read them as \\\"no difference\\\". The only metric where there is a real difference is on number of unique ngrams generated cross inputs, which is presumably because its just learning (being encouraged to) spit out words that were in the input. I'd like to see the baseline of just copying the input as the output.\", \"You mention several times that these models will pick up on redundancy. It is not clear to me how they could do that. Aren't they simply using a cosine similarity between feature vectors? Perhaps I am missing something, but I don't see how this could learn to disincentivize redundancy but simultaneously encourage topical similarity. Could you explain this claim?\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting proposal to use discriminators to model coherence in NLG, but completely ignores prior work and presentation is confusing\", \"review\": \"The idea of training discriminators to determine coherence and cohesion, and training those discriminators as part of an NLG system using policy gradients, is an interesting one. However, there are two major problems with the papers as it stands:\\n\\n1) it completely ignores the decades of NLG literature on this topic before the \\\"neural revolution\\\" in NLP;\\n2) the presentation of the paper is confusing, in a number of respects (some details below).\\n\\nTo claim that this is the first paper to capture cross-sentence linguistic properties for text generation is the sort of comment that is likely to make experienced NLG researchers very grumpy. A good place to start looking at the extensive literature on this topic is the following paper:\", \"modeling_local_coherence\": \"An Entity-Based Approach, Barzilay and Lapata (2007)\\n\\nOne aspect in which the presentation is muddled is the order of the results tables. Table 2 is far too early in the paper. I had no idea at that point why the retrieval results were being presented (or what the numbers meant). You also have cohesion in the table before the cohesion section in 3.2. Likewise, Table 1, which is on p.2 and gives examples of system output, is far too early.\\n\\nPerhaps the biggest confusion for me was the difference between cohesion and coherence, and in particular how they are modeled. The intro does a good job of describing the two concepts, and making the contrast between local and global coherence, but when I was reading 3.1 I kept thinking this was describing cohesion (\\\"T that follows S in the data\\\" - sounds local, no?). And then 3.2 seems to suggest that coherence and cohesion essentially are being modeled in the same way, except shuffling happens on the word level? I suppose what I was expecting was some attempt at a global model for coherence which goes beyond just looking at consecutive sentence pairs.\\n\\nI wonder why you didn't try a sequence model of sentences (eg bidirectional LSTM). These are so standard now it seems odd not to have them.\\n\\nDo you describe the decoding procedure (greedy? beam?) at test time anywhere?\\n\\nI liked Table 4 and found the example pairs with the scores to be useful qualitative analysis.\\n\\n\\\"Based on automated NLP metrics, we showed a significant improvement\\\" - which metrics? not clear to me that the improvements in Table 3 are significant.\\n\\nMinor presentation points\\n--\\n\\n\\\"followed by a logically sound sentence\\\" - might want to rephrase this, since you don't mean logical soundness in a technical sense here (I don't think).\\n\\nThe comment in the conclusion about being \\\"convinced\\\" the architecture generalizes well to unseen texts is irrelevant without some evidence.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJe10iC5K7 | Unsupervised Discovery of Parts, Structure, and Dynamics | [
"Zhenjia Xu*",
"Zhijian Liu*",
"Chen Sun",
"Kevin Murphy",
"William T. Freeman",
"Joshua B. Tenenbaum",
"Jiajun Wu"
] | Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions. | [
"Self-Supervised Learning",
"Visual Prediction",
"Hierarchical Models"
] | https://openreview.net/pdf?id=rJe10iC5K7 | https://openreview.net/forum?id=rJe10iC5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skl4HIG-xN",
"HyxfvG4EJN",
"BklGbM4VkV",
"S1lfKWE4J4",
"ByeN7W4VyE",
"SJxW2sw5CX",
"r1g-PiwqC7",
"Hklh_47xC7",
"SJgp510jaX",
"ryeEFJAja7",
"rkgSvJCi6Q",
"HJewHyCs6m",
"BJl2l10ipQ",
"BylaoAToa7",
"Hye8zH-GpQ",
"HyxX4NWbam",
"ryesfnW1T7",
"r1gn3jco3Q",
"S1DRG7chm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1544787515541,
1543942746187,
1543942650094,
1543942521841,
1543942428007,
1543302056891,
1543301976914,
1542628467698,
1542344597346,
1542344571900,
1542344541068,
1542344511393,
1542344436458,
1542344357338,
1541702925572,
1541637162558,
1541508115307,
1541282740082,
1541186254516
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper858/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/Authors"
],
[
"ICLR.cc/2019/Conference/Paper858/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper858/AnonReviewer2"
],
[
"~Sjoerd_van_Steenkiste1"
],
[
"ICLR.cc/2019/Conference/Paper858/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper858/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a novel method that learns decompositions of an image over parts, their hierarchical structure and their motion dynamics given temporal image pairs. The problem tackled is of great importance for unsupervised learning from videos. One downside of the paper is the simple datasets used to demonstrate the effectiveness of the method. All reviewers though agree on it being a valuable contribution for ICLR.\\n\\nIn the related work section the paper mentions \\\"...Some systems emphasize\\nlearning from pixels but without an explicitly object-based representation (Fragkiadaki et al., 2016 ...\\\". The paper you cite in fact emphasized the importance of having object-centric predictive models and the generalization that comes from this design choice, thus, it may be potentially not the right citation.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel method for learning part hierarchies and their motion dynamics\"}",
"{\"title\": \"Our Response and Revision\", \"comment\": \"Dear Reviewer 3,\\n\\nThanks again for your constructive reviews, which have helped us improved the quality and clarity of the paper. In particular, in the revision, we have cited and discussed the suggested related work, included an algorithm box, revised the method section, and updated the figure to better explain the algorithm and its setup (taking two images during training, and only one during testing). Per your suggestion, we have also compared our model with the state of the art (3DcVAE) on future prediction.\\n\\nAs the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!\"}",
"{\"title\": \"Our Response and Revision\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your constructive review! Based on your suggestion, we have cited and discussed the suggested related work. We also included an algorithm box, revised the method section, and updated the figure to better explain the algorithm and its setup. We hope the revision is now better.\\n\\nAs the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer. We appreciate your suggestions. Thanks!\"}",
"{\"title\": \"Our Response and Revision\", \"comment\": \"Dear Reviewer 2,\\n\\nWe'd like to thank you again for your constructive reviews, which have helped us make the paper better. Based on your review, in the revision, we have revised the method section and discussed more details of the structural loss. We have also included a systematic comparison with the state of the art on both structure recovery (R-NEM) and future prediction (3DcVAE).\\n\\nAs the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!\"}",
"{\"title\": \"Our Response and Revision\", \"comment\": \"Dear Reviewer 4,\\n\\nThanks again for your constructive reviews, which have helped us improved the quality and clarity of the paper. In particular, in the revision, we have included an algorithm box, revised the method section, and updated the figure to better explain the algorithm and its setup. We have also included a systematic comparison with the state of the art on both structure recovery (R-NEM) and future prediction (3DcVAE).\\n\\nAs the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!\"}",
"{\"title\": \"Our Response to Reviewer 4\", \"comment\": \"Thank you very much for the comments.\\n\\n(1) We agree that the true manifold of motion is 2D. Here, we focus on learning the conditional motion distribution: for a particular segment (e.g. left arm), its possible motion distribution in the training set does not encompass all possible 2D motions. As you suggested, we\\u2019re assuming that the set of conditional motions lie on a \\u2018true\\u2019 1D manifold. \\n\\n(2) Thanks for the valuable suggestion. We\\u2019ve added an algorithm box in the revision to demonstrate the training and evaluation setup.\\n\\nThe general response above summarized the other changes we\\u2019ve made in the revision. Thanks again for your comments, and please don\\u2019t hesitate to let us know if you have additional feedback.\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their comments. We have revised our manuscript accordingly. Specific changes include\\n\\n1) We\\u2019ve added an algorithm box to show how the model works in training and testing: during training, it learns from unlabeled, paired frames; during testing, it segments object parts, infers their dynamics, and synthesize multiple possible future frames, all from a single image.\\n\\n2) We have systematically compared our model with the state of the art (R-NEM) on this very challenging task. While R-NEM only works on binary images of shapes and digits, our model works well on real images of complex texture and background (Section 5.3). We\\u2019ve also included a systematic study about R-NEM\\u2019s ability to handle occluded objects in Appendix A.5, Figure A3, and Table A1.\\n\\n3) We\\u2019ve included comparison on future prediction with 3DcVAE [1] (Section 5.3 and Figure 11).\\n\\n4) We\\u2019ve added quantitative results on unsupervised human part segmentation on real images in Table 2 and Section 5.3.\\n\\n5) We\\u2019ve revised the method section (Section 3) and the corresponding Figure 3, and rewrote the captions for better clarity.\\n\\n6) We\\u2019ve cited and discussed the suggested related work in Section 2.\"}",
"{\"title\": \"Answer\", \"comment\": \"(1) The example you give with GANs mapping 100D onto image space doesn't really compare to this. There, it is assumed that images lie on a \\\"true\\\" 100D manifold. Here, you know that the \\\"true\\\" manifold of motion is >1D, so using a 1D latent variable seems like an odd choice.\\n\\n(2) Thank you for clarifying how the test-time situation differs. I think the paper would benefit greatly from an \\\"Algorithms\\\" box, where you explicitly spell out the training and test time performance, and, e.g. how the structural descriptor is calculated.\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their comments. In addition to the specific response below, here we summarize our task, setup, and the changes planned to be included in the revision.\\n\\nOur task is to simultaneously learn, without annotation,\\n1) segmenting object parts; \\n2) the hierarchical structure of low-level concepts; \\n3) their dynamics for future prediction.\\n\\nWhile our model sees two frames during training, it takes only a **single** image as input during testing. It segments object parts in the image, infers their structure, and synthesizes multiple future frames based on the inferred segments, structure, and the sampled motion vector z. The task is highly non-trivial, as it requires the model to associate object appearance with their possible motion purely from unannotated data. \\n\\nOur model differs from pure video prediction algorithms, which do not model object structure or relation. Only very recently, researchers have started to build models for the same purpose (RNEM). While RNEM works on synthetic binary images, our model performs well on real color images that are much more complex.\\n\\nFollowing the reviews, we plan to include the following changes in the revision by Nov. 26 (the new official revision deadline, extended from Nov. 23)\\n1) We will cite and discuss the suggested related work.\\n2) We will revise the method section and the figures for better clarity.\\n3) While future prediction is not our focus, we agree with the reviewers that it\\u2019d be important to add more baselines. Note that our model takes a single image for future prediction, while most video prediction algorithms require multi-frame input. We will make a comparison with 3DcVAE [1] which only needs one frame as input.\\n4) We will include quantitative results on unsupervised part segmentation on real images, in addition to the results on shapes and digits.\\n5) We will include the results of RNEM on longer video sequences, as suggested in the public comment.\\n\\nPlease don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\", \"reference\": \"[1] Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan. Flow-Grounded Spatial-Temporal Video Prediction from Still Images. In ECCV, 2018.\"}",
"{\"title\": \"Our Response to Reviewer 4\", \"comment\": \"Thank you very much for the constructive comments. We respond to the major comments below. We\\u2019ll also update figures, rewrite captions, and clarify notations in our revision.\\n\\n1. Dimensionality of the latent variables\\nWe agree that motion has more than one degree of freedom; here, the network learns a mapping from 1-D variable to the motion manifold (similar to GANs that learn to map a 100-D variable to the image manifold).\\n\\n2. Problem setup and baselines\\nOur model can be considered as a conditional VAE, and it behaves differently during training and testing.\\n\\n1) During training, we feed the image of current frame I1 and the flow between current and next frame M = flow(I1, I2) into the model as inputs. Our model tries to reconstruct the flow M and leverages it to synthesis the image of next frame I2.\\n\\n2) During testing, the model only sees a **single** frame. It samples the latent variable to generate possible motion kernels. It then makes use of the sampled motions to estimate the flow between current and next frame, and leverages the flow to synthesis possible next frames.\\n\\nDuring training, we choose not to feed the input flow directly into the image decoder, because the model will have no access to the flow during testing. We will revise our paper to emphasize the different inputs we are using during the training and testing time (in Figure 3).\\n\\nOur goal is to learn the prior that ties part structure and dynamics to their appearance. Only with the learned prior, our model can segment object parts and synthesize motion from a single image. For example, the \\u201ctorso\\u201d is always the parent of the \\u201cleg\\u201d, and the motion of the leg is always affected by the motion of the torso. Therefore, in our datasets, parts always have the same hierarchical tree structure. \\n\\nWe agree with the reviewers that it\\u2019d be important to add more baselines on future prediction. Note that our model takes a single image for segmentation and future prediction, while most baselines require multi-frame input (e.g., requiring the previous motion field). We will make a comparison with 3DcVAE [1], which only needs one frame as input. For object segmentation, we have included quantitative results on shapes and digits, and will also include numbers for humans in the revision.\\n\\n3. Structural descriptor\\nWe study the problem where object parts share the same hierarchical structure (e.g. \\u2018leg\\u2019 is always part of \\u2018full torso\\u2019). Therefore in our framework, the structural matrix S is shared across data points. In Equation 3, the binary indicator $[i \\\\in P_k]$ represents whether $O_i$ is an ancestor of $O_k$, and we replace this binary indicator with a continuous value $S_{ik}$ to make the entire framework differentiable: $S_{ik} = sigmoid(W_{ik})$, where $W_{ik}$ are trainable parameters. Then, we make use of these relaxed indicators $S_{ik}$ to combine the motion maps using Equation 1-3. During evaluation, we binarize the values of $S_{ik}$ by a threshold of 0.5 to obtain the hierarchical tree structure. We\\u2019ll include this in the revision.\\n\\n4. Human studies\\nThanks for the suggestion. We agree that the experiment is not well explained and will remove it in the revision.\\n\\nWe have also listed all other planned changes in our general response above. 
Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\", \"reference\": \"[1] Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan. Flow-Grounded Spatial-Temporal Video Prediction from Still Images. In ECCV, 2018.\"}",
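A minimal sketch of the relaxed structural descriptor described in point 3 above, assuming per-part local motion maps are stacked in one tensor; the zeroed diagonal (no self-ancestry), the shapes, and the names are our illustrative assumptions.

```python
import torch

def compose_global_motions(local_motions, W):
    # local_motions: (d, 2, H, W_img) per-part local flow maps
    # W:             (d, d) trainable logits; S_ik = sigmoid(W_ik) softly
    #                indicates whether part i is an ancestor of part k
    d = local_motions.shape[0]
    S = torch.sigmoid(W) * (1.0 - torch.eye(d))  # relaxed indicators, no self-loops
    flat = local_motions.reshape(d, -1)
    ancestor_sum = S.t() @ flat                  # sum_i S_ik * m_i for each part k
    return (flat + ancestor_sum).reshape(local_motions.shape)

# At evaluation time, following the response: tree_edges = torch.sigmoid(W) > 0.5
```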
"{\"title\": \"Our Response to Reviewer 2\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Baselines\\nAs you summarized, the goal of this paper is beyond from video prediction: From pairs of unlabeled frames, our model learns to solve three tasks at the same time: 1) learning to segment object parts; 2) learning their hierarchical structure; 3) learning their dynamics for future prediction. During testing, from a single image, our model segments its parts and synthesizes possible future frames.\\n\\nMost video prediction methods do not learn part hierarchy. The only previous method that attempts to solve the three problems at the same time is RNEM, which we\\u2019ve compared with in the paper. During testing, with 20 frames as input, RNEM performs well on binary images (black/white), but does not learn meaningful concepts on grayscale or color images. In comparison, our model performs well on real data, even with cluttered background, using just a single image as input during testing. Our datasets are highly challenging, and our model achieves significant performance gain.\\n\\n\\nOur goal is to learn the prior that ties part structure and dynamics to their appearance. Only with the learned prior, our model can segment object parts and synthesize motion from a single image. For example, the \\u201ctorso\\u201d is always the parent of the \\u201cleg\\u201d, and the motion of the leg is always affected by the motion of the torso. Therefore, in our datasets, parts always have the same hierarchical tree structure. \\n\\nWhile future prediction is not our focus, we agree with the reviewers that it\\u2019d be important to add more baselines. Note that our model takes a single image for future prediction, while most video prediction algorithms require multi-frame input. We will make a comparison with 3DcVAE, which only needs one frame as input.\\n\\n2. Data-efficiency and robustness\\nOur model is data-efficient. When the motion is simple (Atari games), our models learns from only 5K pairs of frames (i.e., 10K images). On real data, our model learns from 9K pairs of images with cluttered backgrounds (the yoga dataset). Our model is robust to unaligned objects: on the shapes and digits datasets, the object positions are random. We agree with the reviewer that discovering hierarchical parts and their motions from purely unlabeled, in-the-wild videos would be an ultimate goal. At the same time, we also believe our model has been making solid and significant progress compared with the state-of-the-art, which, as mentioned above, only works on binary images.\\n\\nThanks for the observation on the flipped left/right legs. They are indistinguishable in our current setup---we\\u2019re learning purely from motion signals, and these parts have identical motion no matter whether they\\u2019re flipped on not. This suggests an important future research direction---how we can develop a model that learns to discover semantically rich concepts from videos, with minimal supervision. \\n\\n3. Structural loss\\nWe apply the structural loss on local motion fields, not on the structural matrix. In this way, the structural loss serves as a regularization, encouraging the motion field to have small values. This is different from the traditional L1 sparseness loss, which encourages values to be 0. We\\u2019ve also experimented with the L1 loss on the shapes dataset, and found that using L1 or L2 structural loss leads to similar results. We\\u2019ll include this discussion into the revision. 
\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
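A tiny sketch of the regularizer as we read point 3 above: an Lp penalty on the local motion fields (not on the structural matrix), with p = 2 encouraging small values and p = 1 being the sparser variant the authors also tried. Purely illustrative; `local_motions` is assumed to be a torch.Tensor of per-part flow maps.

```python
def structural_loss(local_motions, p=2):
    # Penalize the magnitude of the local motion fields; p=2 pushes values
    # to be small, p=1 pushes them toward exact zeros (the authors report
    # that the two choices lead to similar results on the shapes dataset).
    return local_motions.abs().pow(p).mean()
```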
"{\"title\": \"Our Response to Reviewer 3\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Problem setup and baselines\\nWe would like to clarify that while our model sees two frames during training, it only takes only a **single** image as input during testing. It segments object parts in the image, infers their structure, and synthesizes multiple future frames based on the inferred segments, structure, and the sampled motion vector z. The task is highly non-trivial, as it requires the model to associate object appearance with their possible motion purely from unannotated data. \\n\\nOur goal is to learn the prior that ties part structure and dynamics to their appearance. Only with the learned prior, our model can segment object parts and synthesize motion from a single image. For example, the \\u201ctorso\\u201d is always the parent of the \\u201cleg\\u201d, and the motion of the leg is always affected by the motion of the torso. Therefore, in our datasets, parts always have the same hierarchical tree structure. \\n\\nWe agree with the reviewers that it\\u2019d be important to add more baselines on both motion segmentation and future prediction. Note that our model takes a single image for segmentation and future prediction, while most baselines require multi-frame input (e.g., requiring the previous motion field). We will make a comparison with 3DcVAE [1], which only needs one frame as input.\\n\\n2. Related work\\nThank you for pointing out the missing related work, which we will cite and discuss.\\n\\n3. Human studies\\nIn our human study, we first explained the definition of hierarchical structure to subjects. We then showed the segmentation masks of our model (Figure 11c-11g) and asked the subjects to generate the tree hierarchy based on the segments. \\n\\nThe \\u2018same tree structure\\u2019 means that corresponding nodes (segments) have the same parent across the two trees. The alternative response provided by the subjects is a two-level hierarchy, putting the arm directly below the full torso instead of the upper torso (Figure 11h). \\n\\nWe agree that the experiment is not well explained and will remove it in the revision.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\", \"reference\": \"[1] Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan. Flow-Grounded Spatial-Temporal Video Prediction from Still Images. In ECCV, 2018.\"}",
"{\"title\": \"Our Response to Reviewer 1\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Presentation\\nIn the revision, we\\u2019ll include a separate paragraph in the related work to discuss hierarchical motion decomposition methods. We\\u2019ll revise the method section for a better presentation of the model and the equations. We\\u2019ll also revise sentence about symbolic representation and Figure 3 as suggested. \\n\\n2. Structural loss\\nWe apply the structural loss on local motion fields, not on the structural matrix. In this way, the structural loss serves as a regularization, encouraging the motion field to have small values. This is different from the traditional L1 sparseness loss, which encourages values to be 0. Following your suggestion, we\\u2019ve also experimented with the L1 loss on the shapes dataset, and found that using L1 or L2 structural loss leads to similar results. We\\u2019ll include this discussion into the revision. \\n\\n3. Atari dataset\\nThe purpose of the Atari dataset is to demonstrate that our model works well on a different domain and learns to discover interesting structure (the ball belongs to the offensive player). There, as the concepts and structure are relatively simple, we found that 5000 frames are sufficient for our purpose.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
"{\"title\": \"Our Response\", \"comment\": \"Thank you very much for the comments.\\n\\nIn our experiments, we trained the R-NEM / RNN-EM on sequences of 20 frames, where the two input frames appear repetitively: (I1, I2, I1, I2, \\u2026, I1, I2). We found that using only two frames is not sufficient because of the exact reason as you mentioned. For evaluation, we still feed 20-frame sequences into the R-NEM / RNN-EM, while our PSD model only takes a single frame as input. In the revision, we will include results on a new dataset of longer video sequences. Thanks again for your suggestions.\\n\\nWe feel R-NEM / RNN-EM and our PSD model focus on complementary topics: R-NEM / RNN-EM learns to identify instances through temporal reasoning, using signals across the entire video to group pixels into objects; our PSD model learns the appearance prior of objects: by watching how they move, it learns to recognize how object parts can be grouped based on their appearance and can be applied on static images. An interesting future work is to explore how these models can be integrated.\"}",
"{\"title\": \"Official Review\", \"review\": \"==== Review Summary ====\\n\\nThe paper demonstrates an interesting and potentially useful idea. But much of it is poorely explained, and experimental results are not strongly convincing. The only numerical evaluations are on a simple dataset that the authors made themselves. The most interesting claim - that this network can learn unsupervised hierarchical object segmentation based on unlabelled video data - is not well supported by the paper. \\n\\n==== Paper Summary ====\\n\\nThis paper presents a deep neural network which learns object Segmentation, Structure, and Dynamics from unlabelled video. The idea is quite useful, as it is a step towards learning models that can \\\"discover\\\" the concept of objects in a visual scene without any supervision.\", \"the_central_contributions_of_this_paper_are\": \"(1) To show how one can use coherent motion (the fact that different parts of an object move together) to learn unsupervised object segmentation.\\n(2) To show how once can learn the relation between objects (e.g. \\\"arm\\\" is part of \\\"body\\\") by observing relative motion between segments.\\n\\n==== General Feedback ====\\n\\nThe paper would benefit a lot from better explanations and being better tied together (see \\\"Details\\\" below for examples). Figure captions would benefit from much better explanations and integration with the text - each figure caption should at least describe what the figure is intended to demonstrate. Variables such as ($\\\\mathcal M$, $\\\\mathcal S$, $\\\\mathcal I$, $\\\\mathcal L$) should be indicated in figures . \\n\\nMany things would benefit from being defined precisely with Equations. For example I have no idea how the \\\"soft\\\" structural descriptor S is computed. Is it (A) a parameter that is shared across data points and learned? or (B) is it computed per-frame from the network? And after it is calculated, how are the S_{ik} values (which fall between 0 and 1) used to combine the motion maps? \\n\\n==== Scientific Questions ===\\n\\nI'm confused as to what the latent variables z \\\"mean\\\". It seems strange that there is a 1-d latent variable representing the motion of each part. Translation of a segment within an image is 2D. 3D if you include planar rotation, then there's scaling motion and out-of-plane rotations, so it seems an odd design choice that motion should be squeezed into a 1D representation.\\n\\nI find it difficult to assess the quality of the \\\"next frame predictions\\\". There's lots other literature on next-frame prediction to compare against (e.g. https://arxiv.org/abs/1605.08104). At least you could compare to a naive baseline of simply shifting pixels based on the optical flow. \\n\\nI'm confused about how you are learning the \\\"estimated flow\\\". My impression is that the input flow is calculated between the last 2 frames $\\\\hat M = flow(I_{t-1}, I_t)$. And that the \\\"estimated\\\" flow is an estimate of $flow(I_{t}, I_{t+1})$. But in Section 4.3 you seem to indicate that the \\\"estimated\\\" flow is just trained to \\\"reconstruct\\\" the input flow.... In that case why not just feed the input flow directly into the Image Decoder? What I guess you're doing is trying to Predict the next flow ($flow(I_{t}, I_{t+1})$) but if you're doing this neither Figure 3 nor Section 4.2 indicates this, hence my confusion. 
\\n\\n==== Details ====\", \"figure_3\": \"----\\nThe \\\"shapes\\\" example is odd, because it's not obvious that there's a structural hierarchy linking the shapes. Maybe a \\\"torso/left arm/right arm\\\" would be better for illustrative purposes?\\nIt would be helpful to put the variable names ($\\\\mathcal M_k$, etc) on the figure.\\nShould add a second arrow coming into (f) the Structural descriptor from a leaf-variable $p_k$.\\nAlso it would be helpful to indicate where the losses are evaluated.\\n\\\"Next Frame\\\" should probably be \\\"Reconstruction\\\" (though \\\"Prediction\\\" might be a more accurate word).\\n---\\n\\nSection 4.2:\\nNotational point: it seems k can be from 1 to d. But in Section 3 you say it can be from 1 to \\\"n\\\". Maybe it would be clearer to change both (\\\"n\\\" and \\\"d\\\") to \\\"K\\\" to emphasize that \\\"k\\\" is an index which goes up to \\\"K\\\". (edit... now after reading 5.1: Latent Representation, I understand. If there are n parts in the dataset you use d>n dimensions and the network learns to \\\"drop\\\" the extra ones... it would help to clarify that here).\", \"structural_descriptor\": \"You say \\\"in practice, we relax the binary constraints\\\"... Yet this is important... you should write down the equation and say how the \\\"soft\\\" version of [i \\\\in S] is calculated.\\nSection 4.3\\n\\\"elbo\\\" seems like the wrong name for this loss, as it's not an Evidence Lower BOund. The elbo would be the sum of this loss and the first component of L_{recon}. It is a regularizing term, so you could call it L_reg.\\nIt's not obvious that sparse local motion maps imply a hierarchical tree structure, but I see how it could help. I suggest that without evidence for this loss you soften the claim to \\\"this is intended to help encourage the model to learn a hierarchical tree structure\\\".\", \"figure_4\": \"It appears that each row indicates a different video in the dataset, but then in 4f you still have two rows but they appear to correspond to different algorithms... a vertical separator here might help show that the rows in 4f do not correspond to the rows in 4a-e.\\n\\\"Next Frame\\\" here appears to refer to ground truth, but in Figure 3 \\\"Next Frame\\\" appears to be the prediction (which you call reconstruction). \\nSection 5.1\", \"future_prediction\": \"These images are from the test set, right? If so, that is worth mentioning.\\nObject Segmentation (\\\"in several different dimensions\\\" -> \\\"corresponding to the active latent dimensions?\\\")\", \"figure_9\": \"What does this show? The figure does not obviously demonstrate anything. Maybe compare to ground-truth future frames?\\nSection 5.3:\", \"object_segmentation\": \"Visually, it looks nice, but I have no idea how good this segmentation is. You compare verbally to R-NEM and PSD, but there are no numbers.\\nHuman Studies... The line \\\"For those whose predicted tree structures are not consistent with ours, they all agree with our results and believe ours are more reasonable than others\\\" ... brings to mind an image of a room full of tortured experimental subjects not being allowed to leave until they sign a confession that their own tree structures were foolish mistakes and your tree structures are far superior.... So probably it should be removed because it sounds shady.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
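The flow-shifting baseline the review asks for can be made concrete; a minimal sketch with nearest-neighbor backward warping (array shapes, the rounding scheme, and the border handling are illustrative assumptions, not anything specified by the paper):

```python
import numpy as np

def warp_with_flow(frame, flow):
    # Naive next-frame baseline: move each pixel of `frame` (H, W) by the
    # optical flow `flow` (H, W, 2), sampling backward with nearest-neighbor
    # rounding and clamping at the image border.
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Zero flow returns the frame unchanged.
frame = np.random.rand(64, 64)
assert np.allclose(warp_with_flow(frame, np.zeros((64, 64, 2))), frame)
```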
"{\"title\": \"Interesting and novel works but only tested on simple dataset\", \"review\": \"The paper proposes an unsupervised learning model that learns to (1) disentangle object into parts, (2) predict hierarchical structure for the parts and (3), based on the disentangled parts and the hierarchy, predict motion. The model is trained to predict future frames and motion with L2 loss given current frame and observed motion. The overall objective is similar to a standard VAE.\\n\\nOne interesting module proposed in this work is the structural descriptor which assumes motions are additive and global motion of an object part can be recovered by adding the local motion of this object with the global motions of its parents. The equation can be applied recursively and it generalizes to any arbitrary hierarchy depth.\", \"pros\": \"The overall methodology is quite novel and results look good. Merging hierarchical inference into the auto-encoder kind of structure for unsupervised learning is new.\\nThe results are tested on both synthetic and real videos.\", \"cons\": \"The idea is only tested on relatively simple dataset. For the synthetic data, the objects only have very restrictive motions (ex. Circles always move diagonally). It is also unclear to me whether all the samples in the dataset share the same hierarchical tree structure or not (For human, it seems like every sample uses the same tree). If this is the case, then it means you need 100K data to learn one hierarchical relationship for very simple video.\\nFrom the human dataset results, since the appearance and motions become so different across videos, making the video clean and making the objects aligned (so that motions are aligned) seems important to make the model work. For example, from figure 11(f)(g), the right and left legs are exchanged for the person on the top. This brings up the concern that the model is fragile to more complicated scenes such that objects are not super aligned and the appearances differ a lot. (ex. Videos of people playing different sports shooting from different views)\\nShould add more baselines. There are a lot of unsupervised video prediction works which also unsupervisedly disentangle motions and contents.\", \"others\": \"The sparsity constraint seems incorrect\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"Hi,\\n\\nI am the first author of the R-NEM paper. I read your paper and was very impressed with the results.\\n\\nOne thing I was confused about is the performance of R-NEM / RNN-EM on the shapes and digits task (figure 7). You report that the performance is poor because \\\"... [R-NEM / RNN-EM] cannot deal with highly occluded objects\\\". However, this claim does not match the results from our own experiments, eg. on the flying shapes task [1] in Figure 4 one can see 5 object simultaneously occluding one another, or on the bouncing balls with curtain task [2] an invisible curtain occludes the balls.\\n\\nThis leads me to wonder how you train R-NEM / RNN-EM in your experiment. Since your approach trains on pairs of (x_t, flow(x_{t-1}, x_t)) -> (x_{t+1}), could it be that you are only using sequences of length T=2 time-steps to train R-NEM / RNN-EM?\\n\\nIf this is the case then that would explain its poor performance. R-NEM / RNN-EM rely on iterative inference, requiring several steps (each approximately corresponding to an EM step) to obtain good masks from the random initial masks. In our experiments on videos we have always opted to take 1 EM step per time-step as we considered long sequences (>20 steps), which would ensure convergence. An example of this convergence behavior can be seen in the first couple of steps in Figure 4 in [1].\\n\\nCheers,\\n\\nSjoerd\\n\\n[1] Greff, K., van Steenkiste, S., & Schmidhuber, J. (2017). Neural expectation maximization. In Advances in Neural Information Processing Systems (pp. 6691-6701).\\n[2] van Steenkiste, S., Chang, M., Greff, K., & Schmidhuber, J. (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. (2018). International Conference on Learning Representations.\", \"title\": \"R-NEM / RNN-EM being unable to deal with occlusion\"}",
"{\"title\": \"Interesting idea\", \"review\": \"The paper describes a method, which learns the hierarchical decomposition of moving objects into parts without supervision, based on prediction of the future. A deep neural network is structured into a sequence of encoders and decoders: the input image is decomposed into objects by a trained head, then motion is estimated from predicted convolutional kernels whose model is trained on optical flow; the latent motion output is encoded into separated motion fields for each object and then composed into a global model with a trainable structured matrix which encodes the part hierarchy. The latent space is stochastic similar to VAEs and trained with similar losses.\", \"strengths\": \"The idea is interesting and nicely executed. I particularly appreciated the predicted kernels, and the trainable structure matrix. Although the field of hierarchical motion segmentation is well studied, up to my knowledge this method seems to be the first of its kind based on a fully end-to-end trainable method where the motion estimators, the decomposition and the motion decoders are learned jointly.\\n\\nThe method is evaluated on different datasets including fully synthetic ones with synthetic shapes or based on MNIST; very simple moving humans taken from ATARI games, and realistic humans from two different pose datasets. The motion decomposition is certainly not as good as the definition and the output of a state of the art human pose detector; however, given that the decomposition is discovered, the structure looks pretty good.\\n\\nWeaknesses\\n\\nI have two issues with the paper. First of all, although the related work section is rich, the methods based on hierarchical motion decompositions are rarer, although the field is quite large. Below are a couple of references:\\n\\nMihir Jain, Jan Van Gemert, Herve\\u0301 Je\\u0301gou, Patrick Bouthemy, and Cees GM Snoek. Action localization with tubelets from motion. CVPR, 2014.\\n\\nChenliang Xu and Jason J Corso. Evaluation of super-voxel methods for early video processing. CVPR, 2012.\\n\\nJue Wang, Bo Thiesson, Yingqing Xu, and Michael Cohen. Image and video segmentation by anisotropic kernel mean shift. ECCV, 2004 \\n\\nChenliang Xu, Caiming Xiong, and Jason J Corso. Streaming hierarchical video segmentation. ECCV 2012.\\n\\nMatthias Grundmann, Vivek Kwatra, Mei Han, and Irfan Essa. Efficient hierarchical graph-based video segmentation. CVPR, 2010.\\n\\nPeter Ochs, Jitendra Malik, and Thomas Brox. Segmentation of moving objects by long term video analysis. IEEE PAMI, 2014.\\n\\nDiscovering motion hierarchies via tree-structured coding of trajectories\\nJuan-Manuel P\\u00e9rez-R\\u00faa, Tomas Crivelli, Patrick P\\u00e9rez, Patrick Bouthemy, BMVC 2016.\\n\\nSamuel J Gershman, Joshua B Tenenbaum, and Frank Ja\\u0308kel. Discovering hierarchical motion structure. Vision Research, 2015.\\n\\nSecondly, the presentation is not perfect. The paper is densely written with lots of information thrown rapidly at the reader. Readers familiar with similar work can understand the paper (I needed a second pass). But many parts could be better formulated and presented.\\n\\nI understood the equations, but I needed to ignore certain thinks in order to understand them. One of them is the superscript in the motion matrices M, which does not make sense to me. 
\\u201cg\\u201d seems to indicate \\u201cglobal\\u201d and \\u201cl\\u201d local, but then again a local parent matrix gets index \\u201cg\\u201d, and this index seems to switch depending on whether the same node is seen as the current node or as the parent of its child. \\n\\nFigure 3 is useful, but it is hard to make the connection with the notation. Symbols like z, M etc. should be included in the figure.\\n\\nThe three lines after equations 2 and 3 should be rewritten. They are understandable but clumsy. Also, we can guess where the binary constraints come from, but they should be introduced nevertheless.\\n\\nIn essence, the paper is understandable with more effort than should be necessary.\", \"other_remarks\": \"The loss L_struct is L_2; I don\\u2019t see how it can favor sparsity. This should be checked and discussed.\\n\\nA symbolic representation is mentioned in the introduction section. I am not sure that this notion is completely undisputed in science; it should at least not be presented as a fact.\\n\\nThe ATARI dataset seems to be smallish (a single video and 5000 frames only).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Overall interesting approach and well written paper but limited experimental results\", \"review\": \"This paper presents a method for learning about the parts and motion dynamics of a video by trying to predict future frames. Specifically, a model based on optical flow is defined, noting that the motion of hierarchically related parts are additive. Flow fields are represented using an encoder/decoder architecture and a binary structural matrix encodes the representations between parts. This matrix is predicted given the previous frame and flow field. This is then used to estimate a new flow field and generate a possible future frame. The system is trained to predict future frames using an L2 loss on the predicted image and motion field and regularized to prefer more compact and parsimonious representations.\\n\\nThe method is applied to synthetic datasets generated by moving shapes or MNIST digits and shown to work well compared to some recent baseline methods for part segmentation and hierarchy representation. It is also applied and qualitatively evaluated for future frame prediction on an atari video game and human motion sequences. The qualitative evaluation shows that part prediction is plausible but the results for future frame prediction are somewhat unclear as there are no baseline comparisons for this aspect of the task.\\n\\nOverall the approach seems very interesting and well motivated. However, the experimental comparisons are limited and baselines are lacking. Further, some relevant related work is missing.\", \"specific_concerns\": [\"Motion segmentation has been studied for a long time in computer vision, a comparison against some of these methods may be warranted. See, e.g., Mangas-Flores and Jepson, CVPR 2013.\", \"There is some missing related work on learning part relations. See, e.g., Ross, et al IJCV 2010 and Ross and Zemel JMLR 2006.\", \"There is also some missing work on future frame prediction. In particular, PredNet seems relevant to discuss in the context of this work and as a baseline comparison method. See Lotter et al ICLR 2017.\", \"A reasonable baseline might be simply to apply the previous frames motion field to generate the next frame. This would be a good comparison to include.\", \"The \\\"Human Studies\\\" section is very unclear. How is \\\"same tree structure\\\" defined exactly and how were humans asked to annotate the tree structure? If it's about the hierarchical relationship, then I would expect humans to always be pretty consistent with the hierarchy of body parts and suggests that the model is doing relatively poorly. If it's some other way, then this needs to be clarified. Further, how was this study performed? If this section can't be thoroughly explained it should be removed from the paper as it is at best confusing and potentially very misleading.\", \"The system only considers a single frame and flow-field for part prediction. From this perspective, the effectiveness of the method seems somewhat surprising.\", \"The system takes as input both a frame and a flow field. I assume that flow field is computed between I0 and I1 and not I1 and I2, however this is never specified anywhere I can find in the manuscript. If this is not the case, then the problem setup is (almost) trivial.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HJgyAoRqFQ | State-Denoised Recurrent Neural Networks | [
"Michael C. Mozer",
"Denis Kazakov",
"Robert V. Lindsey"
] | Recurrent neural networks (RNNs) are difficult to train on sequence processing tasks, not only because input noise may be amplified through feedback, but also because any inaccuracy in the weights has similar consequences as input noise. We describe a method for denoising the hidden state during training to achieve more robust representations, thereby improving generalization performance. Attractor dynamics are incorporated into the hidden state to `clean up' representations at each step of a sequence. The attractor dynamics are trained through an auxiliary denoising loss to recover previously experienced hidden states from noisy versions of those states. This state-denoised recurrent neural network (SDRNN) performs multiple steps of internal processing for each external sequence step. On a range of tasks, we show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN with attractor dynamics on the hidden state but without the auxiliary loss. We argue that attractor dynamics---and corresponding connectivity constraints---are an essential component of the deep learning arsenal and should be invoked not only for recurrent networks but also for improving deep feedforward nets and intertask transfer. | [
"recurrent nets",
"attractor nets",
"denoising",
"sequence processing"
] | https://openreview.net/pdf?id=HJgyAoRqFQ | https://openreview.net/forum?id=HJgyAoRqFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lkDRr8xN",
"rkgctaSiCQ",
"Ske6I8SiAQ",
"HJx7C-SoCQ",
"Bye5F_Hb6Q",
"BJgkzBas37",
"S1gsJSsy2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545129559267,
1543359874350,
1543358036808,
1543356874549,
1541654658462,
1541293319390,
1540498659331
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper857/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper857/Authors"
],
[
"ICLR.cc/2019/Conference/Paper857/Authors"
],
[
"ICLR.cc/2019/Conference/Paper857/Authors"
],
[
"ICLR.cc/2019/Conference/Paper857/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper857/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper857/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper is well written and develops a novel and original architecture and technique for RNNs to learn attractors for their hidden states (based on an auxiliary denoising training of an attractor network). All reviewers and AC found the idea very interesting and a promising direction of research for RNNs. However all also agreed that the experimental validation was currently too limited, in type and size of task and data, as in scope. Reviewers demand experimental comparisons with other (simpler) denoising / regularization techniques; more in depth experimental validation and analysis of the state-denoising behaviour; as well as experiments on larger datasets and more ambitious tasks.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Promising novel idea for RNN training, with too limited experiments\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Your major point is appreciated, but we worry we have misled readers by using the term 'noise' in a fast and loose manner. Certainly corruption to the hidden state due to untrained or poorly trained weights is _not_ anything close to Gaussian. We see that we have been misleading in suggesting that our denoising training procedure is designed to eliminate noise, especially in the context of training it on inputs with added Gaussian noise. What the training procedure actually does is to establish (nonGaussian) attractor manifolds that cause a set of hidden states to be clustered together. (See our Response to AnonReviewer3 for additional details.) It is this clustering of states where we believe the attractor dynamics are valuable. Denoising facilitates this clustering. The advantage of clustering with attractor dynamics over something like K-Means is that (1) attractor dynamics are flexible in terms of the number and shape of clusters, (2) we can compute gradients through attractor dynamics. While we certainly can and should conduct simulations with other regularizers, we are highly confident that they will not have the same property as our attractor net denoising.\\n\\nYou asked whether it matters whether the 'c' variable is used as a bias rather than initial condition. It is absolutely essential for c to be a bias as it has a persistent effect on a final state. In Figure 3, treating c as a bias achieves a type of skip connection (see blue lines) that facilitates back propagation. Finally, note that even in Hopfield nets (and certainly in Boltzmann machines), the external input must serve as a persistent bias that helps to shape energy landscapes. We did some experiments in which c is _also_ used as the initial state, but doing so did not affect the results.\\n\\nThe Hopfield net training algorithm (Hebbian learning) is simpler than our loss-minimizing training procedure. But the Hopfield algorithm does not support hidden attractor state. We will note this in subsequent revisions.\"}",
"{\"title\": \"Responses to AnonReviewer3\", \"comment\": \"We anonymized the 1994 citation because one of the current paper authors was a co-author on the 1994 paper.\\n\\nThank you for your spot-on summary of what denoising is intended to do by reference to Hopfield nets. Hopfield nets can take a corrupted or partial input and reconstruct the stored memory. An important property of Hopfield nets is that if two stored memories are very close, they can combine into a manifold that contains both memories (and other similar states), thereby performing a type of implicit clustering. It is this clustering of nearby states that we leverage when we train the attractor net on the set of hidden states reached by the RNN. By mapping similar hidden states to the same attractor or attractor manifold, we impose a bias on the network to ignore small variations in the hidden state. This bias is valuable for symbolic tasks and imposes a form of regularization early in training.\", \"you_asked_about_time_scale\": \"The time scale of change to the hidden state over the elements of a sequence is completely orthogonal to the time scale of attractor dynamics. In Figure 3, the time scale of hidden state evolution is represented by the columns and the time scale of the attractor net evolution is represented by the green rows of neurons.\\n\\nConcerning the Pascanu et al. (arXiv 1312.6026) paper: Our SDRNN and RNN+A architectures certainly fall into the class of models with deep hidden-to-hidden transitions, as described by Pascanu. As Pascanu noted, these models do not train well without skip connections. Indeed, our approach also leverages skip connections of a sort (as represented by the blue edges in Figure 3). However, the skip connections and architecture depth are not in and of themselves sufficient: our RNN+A fails, although it has both, whereas our SDRNN success, because it has the auxillary denoising loss. We will add citations to Pascanu.\"}",
"{\"title\": \"Responses to AnonReviewer2\", \"comment\": \"It would indeed be interesting to have more insight into the factors contributing to the simulations in Section 2.1 (Figure 2). However, we limited our investigation of this stand-alone attractor net because it is based on randomly placed target states in the input/output space, and this assumption is most certainly violated when the target states are constrained by task context (as they are when the attractor net is incorporated into the sequence-processing net).\\n\\nWe have to agree with you that our experiments are modest in size, and we've done more data set exploration since the submission deadline. We have found that tasks with a nontrivial symbolic component (e.g., language processing) benefit the most, and other large-scale problems typically do not.\", \"concerning__clarification_of_step_3_of_the_training_procedure\": \"In step 3, we give a pointer to section 2.1 which describes the denoising training procedure in detail.\", \"concerning_computational_cost_of_denoising\": \"As you suspect, this method adds significant more overhead to training. Our initial goal was to understand if and under what circumstances the architecture yields better generalization.\\n\\nConcerning RNN+A: the attractor component still has weight constraints that ensure attractor dynamics. Our main goal with RNN+A was to show that the architecture alone is inadequate to obtain good results; rather, the mix of training objectives is critical.\"}",
"{\"title\": \"Interesting use of denoising based on attractor dynamics in RNNs, but weak experimental validation.\", \"review\": \"The authors propose to embed in a recurrent neural network (RNN) a multistage subnetwork that is trained to denoise its own state. This is done with an additional denoising cost term that essentially encourages the recurrent subnetwork to suppress noise during as the recurrence is unfolded in time.\\nThe authors first demonstrate the denoising properties of this architecture, and then demonstrate its performance on a series of tasks combining it with regular tanh and GRU recurrent units.\\n\\nThe paper is clear and the main idea is rather interesting, but the presented experimental validations are arguably weak. The demonstration of the denoising properties of the network is rather superficial, in the sense that it does not give much insight into the functioning of the architecture, despite that presumably being the main goal of the section. In particular, it is not clear where the non-monotonic change in denoising as a function of network size comes from. Based on the attractor neural network literature that the authors cite at the beginning of the paper, it could be due to either the presence of spurious attractors, the absence of fixed-point attractors or the fact that the attractor network is trained above capacity. But the authors never go into a detailed analysis that could reveal the detailed functioning of their architecture and merely mention the hypothesis that for large networks denoising performance would decrease because of overfitting.\\nAs for the experiments that are presented in the rest of the paper, while relevant, the types of tasks and datasets on which the proposed architecture is being tested are rather small.\", \"here_are_some_more_specific_comments_and_questions\": [\"It would help to clarify the training procedure to explicitly mention in step 3 that training proceeds on sequences with added noise.\", \"It is not clear how many times in each experiments step 3 is being repeated for each mini-batch, i.e. what computational overhead is required for the training of the SDRNN compared to a regular RNN.\", \"It is not clear whether the recurrent neural net called RNN with attractors (RNN+A) is indeed an attractor neural network. Does the state of the network indeed always converge to an attractor?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting submission, though more analysis could help\", \"review\": \"I think overall I appreciate the idea behind the work. I think the work is quite novel, and it also connects to bodies of literature (hopfield networks -- attractors based and more mainstream GRU/LSTM nets). Here are some notes that I have:\\n\\n1) There is a citation to anonymous 1994 -- not sure if it helps with anything. Is this work published? Why not, 1994 is quite a bit ago! I can\\u2019t see any reason why from 1994 until now this should stay anonymous. \\n\\n2) Intuitively I like the idea of denoising. Though not sure exact what is denoised and towards what? In particular for hopfield networks (and I think most of the body of work that this paper points to), the idea is you have a set of sequences that you want to *memorize*. So you build a point attractor for each of this sequence, such that when starting the dynamical system in the vicinity of the point attractor (in its basin of attraction) then you converge to it (remembering the wanted sequence). Going back to this work, what is this sequence of patterns that we want to remember? More explicitly, for SDRNN you do backprop to get the h you would want and that make that a target (second loss) of the attractor net. But I'm confused about timescale. If h is not stable for a longer time, do you really converge on the attractor net ? Do we have evidence of that? Is this even meaningful early on in training, it feels like it should hurt.\\n\\n3) Connecting to this, I would really love to see more analysis, going beyond measuring entropy. How do we now this is not just more capacity and the auxiliary loss just helps the optimization. Particularly since the problems are synthetic, not large scale more analysis should be possible. How does this compare to training the simple RNN but with gaussian noise on h (to learn to be robust to it). Can we control for capacity between RNN and SDRNN? \\n\\n4) To that point there is this work (not citet as far as I can tell) : https://arxiv.org/abs/1312.6026. It does have a structure somewhat similar, though none of the denoising perspective or the auxiliary loss used in this work. However the work points out that if you make the network deep in a similar way to how it was done here even though technically it is a more powerful model, gradients do not propagate well. The solution was skip connection. In the baseline that was run you do not have skip connections, and the auxiliary loss might play the role of what skip connections or a more powerful optimizer would have played.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for State-Denoised Recurrent Neural Networks\", \"review\": \"In this paper the authors develop the clever idea to use attractor networks, inspired by Hopfield nets, to \\u201cdenoise\\u201d a recurrent neural network. The idea is that for every normal step of an RNN, one induces an additional \\\"dimension\\\" of recurrency in order to create attractor dynamics around that particular hidden state. The authors introduce their idea and run some basic experiments. This paper is well written and the idea is novel (to me) and worthy of exploration. Unfortunately, the experiments are seriously lacking in my opinion, as I believe *the major focus* of those experiments should be comparisons to other denoising / regularization techniques.\\n\\nMAJOR\\n\\nThe point is taken that RNNs are susceptible to noise due to iterated application of the function. In my experience, countering noise (in the sense of gaussian noise added) isn\\u2019t a huge problem in practice because there are many regularization methodologies to handle it. This leads me to the point that I think the experiments need to compare across a number of regularization techniques. The paper is motivated by discussion of noise, \\u201cnoise robustness is a highly desirable property in neural networks\\u201d, and the experiments show improved performance on smaller datasets, all of which speak to regularization. So I believe comparisons with regularization techniques are pretty important here. \\n\\nMODERATE\\n\\nThere is some motivation at the beginning of this piece, in particular about language, and does not contain citations, but should.\\n\\n\\u201cTraining is in complete batches to avoid the noise of mini-batch training.\\u201d Please explain, I guess this is not a type of noise that the method handles? \\n\\nWhat about problems that require graded responses, which is likely anything requiring integration? For example, what happens in the majority task if the inputs were switched to a non-discrete version, where one must hold analog numbers?\\n\\n\\nMINOR\\n\\nAny discussion about the (presumably dramatic) increase in training time due to the attractor dynamics unrolling + additional batching due to noise vectors (if I understood correctly)?\\n\\nWhat are your confidence intervals over? Presumably, we\\u2019d like to get confidence over multiple network instantiations.\\n\\nPg 1. Articulated neural network? \\n\\n\\nQUESTIONS\\n\\nDoes using a the \\u2018c\\u2019 variable as a bias instead of an initial condition really matter? \\n\\nHow does supervised training via eqn (4) relate to the classic training of Hopfield nets? I assume not at all, but it would be useful to clarify?\\n\\nWhat RNN architecture did you use in the Figure 5 simulations (tanh vanilla RNN or GRU?)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1xJAsA5F7 | Learning Multimodal Graph-to-Graph Translation for Molecule Optimization | [
"Wengong Jin",
"Kevin Yang",
"Regina Barzilay",
"Tommi Jaakkola"
] | We view molecule optimization as a graph-to-graph translation problem. The goal is to learn to map from one molecular graph to another with better properties based on an available corpus of paired molecules. Since molecules can be optimized in different ways, there are multiple viable translations for each input graph. A key challenge is therefore to model diverse translation outputs. Our primary contributions include a junction tree encoder-decoder for learning diverse graph translations along with a novel adversarial training method for aligning distributions of molecules. Diverse output distributions in our model are explicitly realized by low-dimensional latent vectors that modulate the translation process. We evaluate our model on multiple molecule optimization tasks and show that our model outperforms previous state-of-the-art baselines by a significant margin.
| [
"graph-to-graph translation",
"graph generation",
"molecular optimization"
] | https://openreview.net/pdf?id=B1xJAsA5F7 | https://openreview.net/forum?id=B1xJAsA5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkgjce41xE",
"HylNHt4h0X",
"r1gHailcAQ",
"B1lysCy5CX",
"r1xXOkQO07",
"SkgWrRs_Tm",
"SyghQTiupQ",
"Syl_q2s_67",
"Skl19Q6Hpm",
"H1xuZ76rTm",
"Hkl9bvO52Q",
"SyeFxjBc37",
"HkgKBKlq2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544663187400,
1543420220032,
1543273405339,
1543270039346,
1543151466697,
1542139449432,
1542139172473,
1542139023896,
1541948294789,
1541948159824,
1541207810028,
1541196529354,
1541175617508
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper856/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/Authors"
],
[
"ICLR.cc/2019/Conference/Paper856/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper856/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper856/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The revisions made by the authors convinced the reviewers to all recommend accepting this paper. Therefore, I am recommending acceptance as well. I believe the revisions were important to make since I concur with several points in the initial reviews about additional baselines. It is all too easy to add confusion to the literature by not including enough experiments.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"after revisions the reviewers reached a consensus on accepting the paper\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your insightful comments again! They are very helpful!\"}",
"{\"title\": \"score updated\", \"comment\": \"Thank you for updating the paper. I've updated the score as well.\"}",
"{\"title\": \"Paper updated based on your feedbacks\", \"comment\": \"Thank you very much for your insightful comments. We have removed claims about practical drug discovery, as well as claims that are not well supported by our current manuscript. For instance, we have modified the related work section (see point 3) and removed statements in the ablation study paragraph in the appendix (see point 5). We also updated statements in the experiment section since we have added MMPA and GCPN baselines.\\n\\n1a) How did the authors optimize the hyperparameters of the mmpdb algorithm?\\nThe current mmpdb program is very expensive to run. It takes about 4-5 hours to perform MMPA on 1000 molecules due to large number of extracted rules. Therefore, we performed limited amount of hyperparameter tuning on the validation set to find good hyperparameters. Moreover, some hyperparameters (e.g., the size of environment fingerprints) are hard-coded in the source code, and we couldn\\u2019t investigate how these hyperparameters will affect the model performance.\\n\\n1b) Why the authors need to translate each molecule 50 times? MMPA is deterministic, so one should just need to translate once and then pick the top 50 translated/transformed molecules with the highest expected improvement ...\\nWe did exactly what you describe here. Each test set molecule is translated \\u201conce\\u201d, but in this \\u201cone-time\\u201d translation, multiple matching transformation rules are applied to this compound. And we simply picked the top 50 transformed molecules within the similarity constraint. We defined \\u201cone\\u201d translation in MMPA as applying \\u201cone\\u201d transformation rule.\\n\\n2) Popova\\u2019s work, GCPN or any other comparable RL framework can be applied in straightforward way to lead optimization as well: One would just plugin a reward function of f(mol) = min( sim(startmol, mol),threshold ) + Property(mol) [...] Wouldn\\u2019t this be even more flexible & general compared to the method presented here?\\nWe agree that RL framework could be extended to our conditional translation scenario. However, adding similarity into the reward itself is not enough, unless you also feed the \\u201cstartmol\\u201d into the RL model so that it knows what the starting molecule looks like. Otherwise the RL model will get confused since the reward function will keep changing as the starting molecule changes during training. Therefore a successful extension of this algorithm would be a contribution in its own right.\\n\\n3) Re II 2: This reviewer remains unconvinced. This paragraph needs to be fixed in the manuscript because also implicit estimation is estimation.\\nWe suppose that you are referring to our response Part II 3 (not II 2, which is about MMPA instead of implicit property estimation). We agree that current manuscript does not provide enough evidence regarding this point. Therefore we have changed the paragraph in related work section. We removed statements involving \\u201csuboptimal property estimator\\u201d.\\n\\n4) The authors have scored all 250k/350k molecules using logP/QED/the DRD SVM, which are exactly the \\u201csuboptimal property predictors\\u201d that BayesOpt/RL would use for scoring, and then created pairs from them? Doesn\\u2019t this imply the same suboptimal estimation is now baked into the translation model, but implicitly?\\nWe agree that the suboptimal property estimator can implicitly affect our model, given the way we created the training data. 
Therefore, we have removed these claims (see point 3). However, our graph-to-graph translation model can be trained on molecular pairs constructed based on their measured properties without any property estimation models. We couldn\\u2019t do this experiment as such datasets are not publicly available, but they often exist in pharma companies. In contrast, prior models require a property predictor to be an integral part of the model.\\n\\n5) The authors state that \\u201cIn a real-world drug discovery setting, there is usually a budget on how many drug candidates can be tested in the laboratory [\\u2026] This is beneficial as it requires fewer experiments in the real scenario.\\u201d, but then require 250k/350k samples to train the model. Isn\\u2019t this a contradiction?\\nWe have removed these sentences as they can be misleading and they are irrelevant to the ablation comparison. Please note that the goal of this ablation study is to investigate the importance of the adversarial learning component.\\nRegarding your \\u201ccontradiction\\u201d concern, we used 250k/350k samples as they were readily available. The question of data efficiency applies to all neural models, including RL models for drug discovery and neural models for property prediction. To investigate this, we trained VJTNN on the logP task (delta=0.4) using only 3k molecular pairs, as compared to 120k pairs extracted from the full dataset. The test set result is 1.26 +/- 1.53 (full dataset performance was 3.3 +/- 1.8). Indeed, learning graph translation is challenging in low-resource scenarios, and we leave this issue for future work.\"}",
"{\"title\": \"reply\", \"comment\": \"First, thanks a lot for the authors efforts, this is much appreciated!\\nNevertheless, this reviewer thinks the paper is still overselling the results, and hides limitations, which is unfortunate and unnecessary, since the modeling idea is actually promising.\", \"comments\": \"In terms of modeling, there is indeed a distinction between mapping from molecules to better molecules over other generative models, e.g. variational autoencoders or graph-convolutional policy networks.\\n\\nHowever, in practice, there is no distinction, since *in effect* both models perform the optimization of molecular properties with respect to the molecules. In fact, the same scoring functions that are used in this paper here could be used by a VAE+Bayesian optimization or an RL model as the reward, and are applied in practice to hit/lead optimization as well as library generation. The former application is even more frequent in practice than the latter.\", \"comments_to_the_authors_comments\": \"\", \"re\": \"I 1)\\nThanks for running the mmpdb baseline! A few questions on that:\\n\\na) How did the authors optimize the hyperparameters of the mmpdb algorithm?\\nb) This reviewer does not fully understand why the authors need to translate each molecule 50 times? MMPA is determistic, so one should just need to translate once and then pick the top 50 translated/transformed molecules with the highest expected improvement that are within the similarity constraint. Can the authors comment on that in more detail?\", \"re_i_2\": \"Thank you for running the GCPN baseline!\\nPlease note that Popova\\u2019s work, GCPN or any other comparable RL framework can be applied in straightforward way to lead optimization as well: One would just plugin a reward function of f(mol) = min( sim(startmol, mol),threshold ) + Property(mol), and wouldn\\u2019t actually have to worry about pretraining, Wouldn\\u2019t this be even more flexible & general compared to the method presented here?\", \"re_ii_2\": \"This reviewer remains unconvinced. This paragraph needs to be fixed in the manuscript because also implicit estimation is estimation.\", \"re_ii_6\": \"So, if this reviewer understands correctly, the authors have scored all 250k/350k molecules using logP/QED/the DRD SVM, which are exactly the \\u201csuboptimal property predictors\\u201d that BayesOpt/RL would use for scoring, and then created pairs from them? Doesn\\u2019t this imply the same suboptimal estimation is now baked into the translation model, but implicitly?\\n\\n\\nAlso in the (commendable) ablation study in the appendix, the authors state that \\u201cIn a real-world drug discovery setting, there is usually a budget on how many drug candidates can be tested in the laboratory, as biological experiments are time-consuming in general. [\\u2026] This is beneficial as it requires fewer experiments in the real scenario.\\u201d, but then require 250k/350k samples to train the model. Isn\\u2019t this a contradiction?\", \"overall\": \"\", \"so_to_be_crystal_clear\": \"The authors will need to remove any claims to practical drug discovery, and position their paper more realistically, then this reviewer will recommend acceptance. But in the current form, there are still too many unsupported and misleading claims.\"}",
"{\"title\": \"Response to Reviewer 2: Explanation to your questions\", \"comment\": \"Thank you very much for your insightful comments.\\n\\n1) Why VSeq2Seq is better than JT-VAE and GCPN?\\nThe main reason is that VSeq2Seq is trained with direct translation pairs through supervised learning, while JT-VAE and GCPN have to learn to discover these pairs in a weakly supervised manner. For instance, GCPN iteratively modifies a given molecule to maximize the predicted property score, where the translation pairs are discovered through reinforcement learning. JT-VAE optimizes a molecule by first mapping it into its latent representation and then performing gradient ascent in the latent space. In this case, translation pairs are discovered through the gradient signal given by the property predictor, which is trained on molecules with labeled properties. As the models are evaluated by translation quality, training the model directly with translation pairs is advantageous. \\n\\n2) Suppose we keep translating the molecule X1 -> X2 -> X3 ... using the learned translation model, would the model still get improvement after X2? When would it get maxed out?\\nOn the logP task, the model may still get improvements after X2, but we suspect this process will get maxed out after several steps because in general it is harder to optimize a molecule with high property scores. The QED and DRD2 tasks are different from logP task, as the target domain now becomes a closed set defined by the property range. As long as X2 belongs to the target domain (e.g., QED >= 0.9, DRD2 >= 0.5), this process will get maxed out since the model is trained only to improve molecules outside of the target domain.\\n\\n3) If we train with \\u2018path\\u2019 translation (i.e., train with improvement path with variable length), instead of just the pair translation, would that be helpful? \\nIn general, it is harder to collect \\u2018path\\u2019 translation data than translation pairs due to data sparsity. For instance, to find a translation path X1 -> X2 -> X3, we need (X1,X2) and (X2,X3) to be valid translation pairs (i.e., both pairs satisfying property improvement and similarity constraints). Nonetheless, we believe that training the model with path translation will be helpful for global optimization -- finding molecules with the best property scores in the entire molecular space.\"}",
"{\"title\": \"Response to Reviewer 1 (Part I): Probabilistic modeling of the involved components\", \"comment\": \"Thank you very much for your insightful comments. We want to provide more explanations on the probabilistic modeling of different involved components.\\n\\n1) Explicit probabilistic modeling of junction tree encoder-decoder (Section 3).\\nPrior work (Jin et al. 2018) found that it is beneficial to adopt a coarse-to-fine approach to generate molecular graphs: first generate the backbone structure (i.e., junction tree T) and then assemble the sub-graphs in the tree into a complete molecular graph Y. Thus\\n p(Y | X) = \\\\sum_T p(Y | T, X) p(T | X)\\nwhere p(Y | T, X) is the graph decoder and p(T | X) is the tree decoder. As the junction tree T of any graph is constructed through a deterministic tree decomposition algorithm, T does not function as a latent variable during training but is rather an intermediate object that can be predicted via supervised learning. Therefore, \\n p(Y | X) \\\\approx p(Y | T_y, X) * p(T_y | X)\\nwhere T_y is the junction tree underlying the target graph Y.\\n\\nThe tree decoder generates a tree in an autoregressive manner, based on a specific sequentialization of the tree structure. A tree T is laid out as a sequence of edges {(i_1, j_1), \\u2026, (i_m, j_m)} visited in the depth-first traversal over the tree. The probability of generating T is thus\\n p(T | X) = \\\\prod_t p( (i_t, j_t) | (i_1, j_1), \\u2026, (i_t-1, j_t-1), X )\\nwhere j_t always equals i_{t+1}. Probability of (i_t, j_t) depends on two factors: 1) whether j_t is a new node; 2) If j_t is a new node, what is its label; These two factors are modeled by the topological predictor (Eq. 4-6) and the label predictor (Eq. 8-9). The message passing procedure (Eq. 3) embeds the current partial tree realized by {(i_1, j_1), \\u2026, (i_t-1, j_t-1)} into a continuous representation. Beyond the above architecture, in this paper we introduced an attention mechanism to capture how the decoded tree unravels step-by-step in an input graph X dependent manner. \\n\\nThe graph decoder models the conditional probability p(Y | T_y, X). This is a structured prediction task since Y is a graph. The variables in this structured prediction problem are node assembling decisions between neighboring nodes in the tree. For efficiency reasons, the assembling decisions are solved locally, starting from the root and its direct neighbors. In other words, p(Y | T_y, X) is a product of probabilities of choosing the right graph attachments with each node\\u2019s neighbors, resulting in Eq. (10) (after taking log).\\n\\n2) Probabilistic modeling of multi-modal translation model (Section 4) \\nIn this paper, we aim to learn diverse multi-modal mappings between two molecular domains, as there are many different ways to improve a given molecule. This diversity is introduced via latent variables z:\\n p(Y | X) = \\\\int_z p(Y | X, z) p(z) dz\\nwhere prior p(z) models diverse strategies of improvement, independent of X, and is taken to be a standard Gaussian distribution. The overall model resembles a conditional variational autoencoder, learnable through reparameterization (Section 4.1). The approximate posterior Q(z | Y) only depends on the target Y so as to force z to capture resulting type of molecule, inferable from Y alone. \\n\\nThe proposed adversarial training technique (Section 4.2) is an additional regularization trying to discourage the model from generating undesirable outputs (e.g. 
molecules outside of the defined target domain). As a side note, p(Y | X, z) can be expanded as \\n p(Y | X, z) = p(Y | T_y, X, z) p(T_y | X, z)\\nwhere latent variable z is concatenated with the encoded representation of X (Eq. 11).\"}",
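For readability, the factorizations spelled out across the two response parts above can be collected in display form (this is only a restatement in the responses' own notation, not new modeling):

```latex
\begin{align*}
p(Y \mid X) &= \int_z p(Y \mid X, z)\, p(z)\, dz, \qquad p(z) = \mathcal{N}(0, I),\\
p(Y \mid X, z) &= p(Y \mid T_y, X, z)\; p(T_y \mid X, z),\\
p(T \mid X) &= \prod_t p\big((i_t, j_t) \mid (i_1, j_1), \ldots, (i_{t-1}, j_{t-1}), X\big).
\end{align*}
```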
"{\"title\": \"Response to Reviewer 1 (Part II): Clarifications with paper updated to elaborate Section 3.3\", \"comment\": \"Thank you very much for your insightful comments. Our response to the issues you mentioned is the following:\\n\\n1) Please provide an explanation of why using a larger value for delta gives worse performance than a smaller value.\\nA larger delta implies a tighter similarity constraint. For instance, setting delta to 0.6 means the generated compounds Y have to be very similar to the input molecule X (sim(X,Y) > 0.6). When delta decreases to 0.4, the generated structures are allowed to deviate more from the starting point X (sim(X,Y) > 0.4). Therefore, one would naturally expect the model to perform better (find higher scoring molecules) when delta is smaller since the structures can be chosen from a larger set. \\n\\n2) Diversity could be influenced by the cardinality of the sample. Please discuss why diversity is (not) biased versus larger sets.\\nWe agree that the diversity depends on the sample size. Therefore, all the models are evaluated with the same sample size (K=50) for fair comparison. That is, for each molecule in the test set, we randomly sample 50 times from each model to compute the resulting diversity score.\\n\\n3) Tree and graph encoding: asynchronous update implies that T should be a multiple of the diameter of the input graph to guarantee a proper propagation of information across the graph.\\nWe agree that a number of iterations (T) is required for proper propagation of information across the input graph. However, T does not need to be larger than the diameter since we adopted an attention mechanism in the decoder. It can dynamically read the information across the input graph in different decoding steps. In fact, a large T (e.g., the diameter) may potentially lead to overfitting.\\n\\n4) Clarification of tree decoding step (Section 3.2)\\nFirst, the tree decoding process stops when it choose to backtrack at the root node. Second, we agree that this probability should depend on the number of nodes having been generated. This is implicitly captured by the neural message passing procedure. As noted in Eq. (4), the model makes this decision (expanding a new node or not) based on all the incoming messages at the current node. The messages carry information about the current (partial) tree structure, including potentially the number of nodes generated so far though not explicitly. \\n\\n5) Explanation of graph decoding step (Section 3.3)\\nWe added Figure 2 to illustrate why the graph decoding step is not deterministic and how one junction tree can be decoded into different molecular graphs. Regarding the likelihood of ground truth subgraphs, we applied teacher forcing, i.e., we feed the graph decoder with ground truth junction trees as input. Section 3.3 has been updated correspondingly.\"}",
"{\"title\": \"Response to Reviewer 3 (Part I): Required experiments added and paper updated\", \"comment\": \"Thank you very much for your insightful comments. We\\u2019d like to clarify first that our model is a conditional graph-to-graph translation model which maps a given precursor compound to another with more desirable properties. Our translation approach is therefore NOT equivalent to a generative model over molecular structures (i.e., for chemical library design). This conditional translation model is useful and important for hit/lead compound optimization.\\n\\nIn response to your suggestions, we added two additional experiments:\\n1) MMPA baseline: We utilized the open source tool \\u201cmmpdb\\u201d [1] to perform MMPA. For each task, we constructed a database of transformation rules extracted from the ZINC and Olivecrona et al. [3]\\u2019s dataset. Same as our methods, each test set molecule is translated 50 times using the matching rules found in the database. When there are more than 50 matching rules, we choose those having higher average property improvement in the database. This statistic is calculated during the database construction. More details can be found in the Appendix B.\\n\\nThe results are shown in Tables 1 and 2 in the updated paper. On the QED and DRD2 tasks, our model outperforms MMPA with significant margin in terms of translation success rate (56.9% vs 20.8% on QED and 81.0% vs 35.6% on DRD2). On the logP task, our model also outperforms MMPA in terms of average property improvement (3.37 vs 2.00 when delta=0.4 and 1.53 vs 1.41 when delta=0.6).\\n\\n2) GCPN baseline: We used You et al [4]\\u2019s open source implementation to train GCPN on the QED and DRD2 tasks. As stated in their paper [4], GCPN was trained in an environment whose initial state is one of the test set molecules. They kept all the molecules generated during training and reported the molecule with the best property improvement. For consistency, we adopted the same strategy in training and evaluation of GCPN (i.e., training on the test set of QED and DRD2). The performance is reported in Table 2. Our model greatly outperforms GCPN (56.9% vs 9.4% on QED and 81.0% vs 4.4% on DRD2).\\n\\nRegarding Popova et al.\\u2019s method [2], we have carefully read the paper and studied its open-sourced code. The model described in [2] is not directly applicable to our setting as it targets chemical library design while our focus is on lead optimization starting from a given precursor compound. Their model architecture would have to be modified so as to take a precursor compound as an input to be optimized / translated. In fact, Popova et al. list this task as a future work.\\n\\nDue to limited length, our response to your other questions is posted in another post.\\n\\nReferences\\n[1] A. Dalke, J. Hert, C. Kramer. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets. J. Chem. Inf. Model., 2018, 58 (5), pp 902\\u2013910.\\n[2] M. Popova, O. Isayev, and A. Tropsha. Deep reinforcement learning for de novo drug design. Science advances, 4(7):eaap7885, 2018.\\n[3] M. Olivecrona, T. Blaschke, O. Engkvist, and H. Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):48, 2017.\\n[4] J. You, B. Liu, R. Ying, V. Pande, and J. Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018\"}",
"{\"title\": \"Response to Reviewer 3 (Part II): Response to your other comments and questions\", \"comment\": \"Thank you very much for your insightful comments. Regarding your other comments and questions, our response is the following:\\n\\n1) \\u201cThe authors claim that MMPs \\u201conly covers the most simple and common transformation patterns\\u201d. This is not correct, since these MMP patterns can be as complex as desired.\\u201d\\nWe agree that MMP patterns can be as complex as desired. However, allowing the patterns to be arbitrarily complex will result in a huge number of transformation rules. For instance, we have extracted 12 million rules in total on the logP and QED tasks when no constraints are imposed. Therefore, we have updated this claim in the paper with the following statement: \\u201cMMPA's main drawback is that large numbers of rules have to be realized (e.g. millions) to cover all the complex transformation patterns.\\u201d\\n\\n2) \\u201cthe reason MMPA was introduced was to provide an easily interpretable method, which performs only local transformations at one part of the molecule. \\u2018Far more complex transformations\\u2019 may thus not be desirable in the context of MMPA.\\u201d\\nYes, we agree that there is always a trade-off between simple and understandable rules vs performance, and that the same trade-off is present in other machine learning applications (e.g., shallow decision trees vs neural networks). Our focus in this paper is on demonstrating the performance gains we can obtain by reformulating the task as a translation problem. Deriving interpretable explanations for the predictions is clearly an important future direction, but is orthogonal to our current effort.\\n\\n3) \\u201cThe authors state that they \\u201csidestep\\u201d the problem of non-generalizing property predictors in reinforcement learning \\u2026 How does the authors\\u2019 model not suffer from the same problem? Can they provide evidence that their model is better in property estimation than other models?\\u201d\\nWe want to clarify that our model does not explicitly estimate the properties. As a result, we can only provide indirect evidence showing that our model can nevertheless outperform other models in mapping precursor molecules into the target set of molecules with better properties. \\n\\n4) \\u201cCan the authors also comment on how they ensure the comparison to the GCPN and VSeq2Seq is fair?\\u201d\\nWhen comparing to VSeq2Seq, we ensure that all models have about the same number of parameters (3.8~3.9 million), trained on the same dataset with the same optimizer and the same number of epochs. Both models are evaluated with K=50 translation attempts for each test compound.\\nRegarding GCPN, their exact setup is not provided. As described in their paper [4], GCPN was trained in an environment whose initial state is one of the test set molecule of the logP task. They kept all the molecules generated during training and reported the molecule with the best logP improvement. We think this may bring more advantage to GCPN in our comparison, as our models do not have access to the test set.\\n\\n5) \\u201cCan the authors comment on why they think the penalized logP task is a good benchmark?\\u201d\\nWe evaluated on this task because some prior work (e.g. JT-VAE, GCPN) has been tested on this benchmark, and their results are readily available for comparison. Indeed, this benchmark itself is not comprehensive enough. 
We therefore tested on two more tasks (QED and DRD2) aiming to provide a more thorough evaluation.\\n\\n6) \\u201cHow exactly are the pairs selected? Where do the properties for the molecules come from? Were they calculated using the logP, QED and DRD2 models? How many molecules are used \\u2026?\\u201d\\nThose details have been discussed in the Appendix B. We updated the relevant paragraphs to make it more clear. To summarize, logP and QED scores are calculated with RDKit built-in functions. For DRD2 activity prediction, we directly used the pre-trained model in Olivecrona et al. [3].\\nOn the QED and DRD2 tasks, a molecular pair (X,Y) is selected if the Tanimoto similarity sim(X,Y) >= 0.4 and both X and Y fall into the source and target property range. On the logP task, we select molecular pairs when similarity sim(X,Y) >= delta and property improvement is greater than 0.5 (if delta=0.6) and 2.5 (if delta=0.4). In total 250K molecules are used for constructing the training pairs in the logP and QED tasks, and 350K molecules in the DRD2 task.\\n\\nReferences\\n[1] A. Dalke, J. Hert, C. Kramer. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets. J. Chem. Inf. Model., 2018, 58 (5), pp 902\\u2013910.\\n[2] M. Popova, O. Isayev, and A. Tropsha. Deep reinforcement learning for de novo drug design. Science advances, 4(7):eaap7885, 2018.\\n[3] M. Olivecrona, T. Blaschke, O. Engkvist, and H. Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):48, 2017.\\n[4] J. You, B. Liu, R. Ying, V. Pande, and J. Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018\"}",
"{\"title\": \"review on \\\"Learning Multimodal Graph-to-Graph Translation for Molecule Optimization\\\"\", \"review\": \"This paper proposed an extension of JT-VAE [1] into the graph to graph translation scenario. To help make the translation model predicting diverse and valid outcomes, the author added the latent variable to capture the multi-modality, and an adversarial regularization in the latent space. Experiment on molecule translation tasks show significant improvement over existing methods.\\n\\nThe paper is well written. The author explains the GNN, JT-VAE and GAN in a very organized way. The idea of modeling the molecule optimization as translation problem is interesting, and sounds more promising (and could be easier) than finding promising molecule from scratch. \\n\\nTechnically I think it is reasonable to use latent variable model to handle the multi-modality. Using GAN to align the distribution is also a well adapted method recently. Thus overall the method is not too surprising to me, but the paper executes it nicely. Given the significant empirical improvement, I think this paper has made a valid contribution to the area.\\n\\nRegarding the results in Table 1, I\\u2019m curious why the VSeq2Seq is better than JT-VAE and GCPN (given the latter two are the current state-of-the-art)? \\n\\nAnother thing I\\u2019m curious about is the \\u2018stacking\\u2019 of this translation model. Suppose we keep translating the molecule X1 -> X2 -> X3 ... using the learned translation model, would the model still gets improvement after X2? When would it get maxed out?\\nOr if we train with \\u2018path\\u2019 translation (i.e., train with improvement path with variable length), instead of just the pair translation, would that be helpful? I\\u2019m not asking for more experiments, but some discussion might be useful.\\n\\n[1] Jin et.al, Junction tree variational autoencoder for molecular graph generation, ICML 2018\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, issues in the execution\", \"review\": \"Update:\\nThe score has been updated to reflect the authors' great efforts in improving the manuscript. This reviewer would suggest to accept the paper now.\", \"old_review_below\": \"The paper describes a graph-to-graph translation model for molecule optimization inspired from matched molecular pair analysis, which is an established approach for optimizing the properties of molecules. The model extends a chemistry-specific variational autoencoder architecture, and is assessed on a set of three benchmark tasks.\\n\\n\\nWhile the idea of manuscript is interesting and promising for bioinformatics, there are several outstanding problems, which have to be addressed before it can be considered to be an acceptable submission. This referee is willing to adjust their rating if the raised points are addressed. Overall, the paper might also be more suited at a domain-specific bioinformatics conference.\\n\\n\\nMost importantly, the paper makes several claims that are currently not backed up by experiments and/or data. \\n\\nFirst, the authors claim that MMPs \\u201conly covers the most simple and common transformation patterns\\u201d. This is not correct, since these MMP patterns can be as complex as desired. Also, it is claimed that the presented model is able to \\u201clearn far more complex transformations than hard-coded rules\\u201d. The authors will need to provide compelling evidence to back up these claims. At least, a comparison with a traditional MMPA method needs to be performed, and added as a baseline. Also, it has to be kept in mind that the reason MMPA was introduced was to provide an easily interpretable method, which performs only local transformations at one part of the molecule. \\u201cFar more complex transformations\\u201d may thus not be desirable in the context of MMPA. Can the authors comment on that?\\n\\nSecond, the authors state that they \\u201csidestep\\u201d the problem of non-generalizing property predictors in reinforcement learning, by \\u201cunifying graph generation and property estimation in one model\\u201d. How does the authors\\u2019 model not suffer from the same problem? Can they provide evidence that their model is better in property estimation than other models?\\n\\n\\nIn the first benchmark (logP) the GCPN baseline is shown, but in the second benchmark table, the GCPN baseline is missing. Why? The GCPN baseline will need to be added there. Can the authors also comment on how they ensure the comparison to the GPCN and VSeq2Seq is fair? Also, can the authors comment on why they think the penalized logP task is a good benchmark?\\n\\nAlso, the authors write that Jin et al ICML 2018 (JTVAE) is a state of the model. However, also Liu et al NIPS 2018 (CGVAE) state that their model is state of the art. Unfortunately, both JTVAE and CGVAE were never compared against the strongest literature method so far, by Popova et al, which was evaluated on a much more challenging set of tasks than JT-VAE and CGVAE. The authors cite this paper but do not compare against it, which should to be rectified. This referee understands it is more compelling to invent new models, but currently, the literature of generative models for molecules is in a state of anarchy due to lack of solid comparison studies, which is not doing the community a great service.\\n\\n\\nFurthermore, the training details are not described in enough detail. \\nHow exactly are the pairs selected? 
Where do the properties for the molecules come from? Were they calculated using the logP, QED and DRD2 models? How many molecules are used in total in each of these tasks?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A paper proposing a quite complex system (with no explicit probabilistic factorisation) which seems to obtain good experimental results.\", \"review\": \"As a reviewer I am expert in learning in structured data domains.\\nThe paper proposes a quite complex system, involving many different choices and components, for obtaining chemical compounds with improved properties starting from a given corpora. \\nOverall presentation is good, although some details/explanations/motivations are missing. I guess this was due to the need to keep the description of a quite complex system in the given space limit. Such details/explanations/motivations could, however, have been inserted in the appendix. As an example, let consider the description of the decoding of the junction tree. In that section, it is not explained when the decoding process stops. My understanding is that this is when, being in the root node, the choice is to go back to the parent (that does not exist). In the same section, it is not explicitly discussed that the probability to select between adding a node or going back to the parent should have a different distribution according to \\\"how many\\\" nodes have been generated before, i.e. we do not want to have a high probability to \\\"go back\\\" at the beginning of the decoding, while I guess it is desirable that such probability increases proportionally with the number of generated nodes. This leads to an issue that I personally think is important: the paper does lack an explicit probabilistic modelling of the different involved components, which may help for a better understanding of all the assumptions made in the construction of the proposed system. \\nThe complexity of the proposed system is actually an issue since the author(s) do not attempt (except for the presence or absence of the adversarial scaffold regularization and the number of trials in appendix) an analysis of the influence of the different components (and corresponding hyper-parameters). \\nReference to previous relevant work seems to be complete.\\nI think the paper is relevant for ICLR (although there is no explicit analysis of the obtained hidden representations) and of interest for a good portion of attendees.\", \"minor_issues\": [\"Tree and Graph Encoding: asynchronous update implies that T should be a multiple of the diameter of the input graph to guarantee a proper propagation of information across the graph. A discussion about that would be needed.\", \"eq.(6): \\\\mathbb{u}^d is not defined.\", \"Section 3.3:\", \"first paragraph is not clear. An example and/or figure is needed to understand the argument, which is related to the presence of cycles.\", \"the definition of f(G_i) involves \\\\mathbb{x}_u. I guess they should be \\\\mathbb{x}_u^G.\", \"not clear how the log-likelihood of ground truth subgraphs is computed given that the predicted junction tree, especially at the beginning of training, may be way different from the correct one. Moreover, what is the assumed bias of this choice ?\", \"Table I: please provide an explanation of why using a larger value for \\\\delta does provide worst performance than a smaller value. From an optimisation point of view it should provide at least an as good performance. This is a clear indication that the used procedure is suboptimal.\", \"diversity could be influenced by the cardinality of the sample. Is this false ? 
please discuss why diversity is (not) biased versus larger sets.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJxyAjRcFX | Harmonizing Maximum Likelihood with GANs for Multimodal Conditional Generation | [
"Soochan Lee",
"Junsoo Ha",
"Gunhee Kim"
] | Recent advances in conditional image generation tasks, such as image-to-image translation and image inpainting, are largely attributed to the success of conditional GAN models, which are often optimized by the joint use of the GAN loss with the reconstruction loss. However, we reveal that this training recipe shared by almost all existing methods causes one critical side effect: lack of diversity in output samples. In order to accomplish both training stability and multimodal output generation, we propose novel training schemes with a new set of losses named moment reconstruction losses that simply replace the reconstruction loss. We show that our approach is applicable to any conditional generation task by performing thorough experiments on image-to-image translation, super-resolution and image inpainting using the Cityscapes and CelebA datasets. Quantitative evaluations also confirm that our methods achieve great diversity in outputs while retaining or even improving the visual fidelity of generated samples. | [
"conditional GANs",
"conditional image generation",
"multimodal generation",
"reconstruction loss",
"maximum likelihood estimation",
"moment matching"
] | https://openreview.net/pdf?id=HJxyAjRcFX | https://openreview.net/forum?id=HJxyAjRcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgbB4TxeV",
"r1gdodFyx4",
"Skg_k4wklV",
"HJea5S_F1E",
"B1gG_obrAQ",
"B1l6yj-H0m",
"H1ljacZB0X",
"Byer85ZHAQ",
"B1e85wUE67",
"Hyxkbojj2Q",
"S1gPNubqn7",
"rJxtHml92m"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544766520867,
1544685728386,
1544676320330,
1544287636944,
1542949738183,
1542949604883,
1542949571258,
1542949453433,
1541855118146,
1541286646642,
1541179438988,
1541174081512
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"ICLR.cc/2019/Conference/Paper855/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"ICLR.cc/2019/Conference/Paper855/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"ICLR.cc/2019/Conference/Paper855/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper855/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper855/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper855/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Re: Re: Re-clarification to Reviewer3\\u2019s Updated Review\", \"comment\": \"We are deeply grateful to reviewer3 for a quick reply that reveals the detailed ground for the decision. Now we can understand the review much better to offer more focused answers to the concerns raised by reviewer3.\\n\\n1. Novelty\\n===================================\\nAccording to reviewer3\\u2019s clarification, per-pixel mean and variance prediction is the core of our methods, and thus our methods don\\u2019t have enough novelty compared to the cited papers.\\n\\nAlthough some of our methods involve mean and variance prediction, the key idea of our methods is matching the moments of the sample distribution to the maximum likelihood estimates of the real moments. As such, MLMM_1 and MCMLE_1, for example, do not use the variance prediction but achieve great diversity and quality. \\n\\nNote that our methods suggest two simple modifications to existing conditional GANs as final recipes; thus it would not be surprising that some previous work used similar techniques in other applications. However, we would like to emphasize that our methods are novel in the context of conditional GANs and mode collapse of GANs. \\n\\n\\n2. Theoretical results\\n===================================\\nWe would like to clarify that the proof that reviewer3 looks for is in section 4.4 not in section 3.2. During the rebuttal period, we reorganized section 3.2 and section 4.4 to reflect reviewer3\\u2019s comments and to streamline the logic. In the current draft, section 3.2 contains the proof about the conflict between the reconstruction loss and the GAN loss, while section 4.4 proves that our approach does not suffer from the same problem.\"}",
"{\"title\": \"Re: Re-clarification to Reviewer3\\u2019s Updated Review\", \"comment\": \"Addressing the concerns that my updated review was imprecise or unhelpful and point (3) about the authors' rebuttal being ignored, I hope that the following points make it clear that the rebuttal was carefully considered in making my decision.\\n\\n1. Regarding novelty\\n\\nMy initial concern was regarding the novelty of the proposed method and not that of the criticism of the use of reconstruction loss in conditional GANs. The authors responded in the rebuttal that their paper has significant novelties in (1) formal criticism of the use of reconstruction loss and (2) their proposed method in the context of conditional generation tasks. Regarding the concern that I did not leave any comment on this response, my concern was regarding point (2), as the papers I cited, though not directly applied to conditional generation, demonstrate that the proposed method lacks novelty. However, the authors have stressed that I have ignored (1). This point was never a concern for me as I do agree that such a criticism of reconstruction loss has not been shown in prior work.\\n\\nMy claim that the paper lacks novelty is specifically due to prior work such as CodeSLAM [1] using per-pixel mean and variance predictions. Regarding the point made in the authors' initial response that it is novel to combine these well-established ideas as a solution to loss of multimodality in conditional generation - it is indeed novel to combine or re-use existing ideas to new domains but in this case, the main approach proposed in the paper is solely a result of such re-use this significantly reduces the overall novelty of the paper.\\n\\n\\n2. Regarding theoretical results\\n\\n\\u201cOur methods are designed not to interrupt the GAN optimization and we proved it\\u201d\\nCorrect me if I\\u2019m wrong but I do not see any theoretical proof in the current revision of the paper showing that the proposed method does not interrupt GAN optimization. Section 3.2 in the current version seems to be the only theoretical proof which is criticizing the use reconstruction loss with GAN loss.\\n\\n\\u201creviewer3 seems to take the lack of proof that our model prevents mode collapse as a serious flaw in our work\\u201d\\nFor correctness, I was not looking for a specific proof about the proposed work preventing mode collapse. The question I would like to ask is - Given that the paper has shown that the optimizing reconstruction loss directly with GAN loss will lead to inevitable mode collapse, what is the proof that the proposed approach will not suffer the same fate? I agree that your proposed approach does not directly optimize the reconstruction loss, which gives intuition that the moment-matching approaches should not suffer from the same drawbacks. However, this does not constitute a theoretical result backing the claim that the proposed method will not interrupt GAN optimization.\", \"comparing_with_diversity_sensitive_conditional_generative_adversarial_networks\": \"I agree that their approach is vastly different, but the merits of that paper are completely different. Their proposed approach is novel and their analysis shows various perspectives of why their approach is effective. 
This again comes back to my point that this paper lacks theoretical analysis proving that the proposed method is effective.\\n\\n\\u201dIn contrast, we point out that the reconstruction loss conflicts with GANs in a way that reduces the output variance and proposes alternatives without such problem. Thus, we prove the problem of reconstruction loss and that our methods do not conflict with the GAN objective.\\u201d\\nMy response to this point is the same as what I have stated earlier. I do not see how the second sentence follows from the first - the proof in Section 3.2 does not say anything about whether the proposed approach has any guarantees against mode collapse or conflicts with the GAN objective.\\n\\n\\n3. Conclusion\\n\\nI wholeheartedly agree that the reviewers and authors must communicate in a precise and constructive way. I am happy to make my decision process transparent and continue this discussion.\\n\\n\\n[1] Bloesch, M., Czarnowski, J., Clark, R., Leutenegger, S., & Davison, A. J. (2018). CodeSLAM-Learning a Compact, Optimisable Representation for Dense Visual SLAM. CVPR 2018.\"}",
"{\"title\": \"Re-clarification to Reviewer3\\u2019s Updated Review\", \"comment\": \"We thank again the reviewer for the comments.\\nHowever, we have the impression that some critics are unfair, imprecise and unhelpful; thus, hardly acceptable for us. Please see below why. \\n\\n\\n1. Novelty\\n===================================\\nReviewer3 raised again the concern about novelty in the updated review. \\n\\n\\nIn our rebuttal, we clarified that our work is the first to analyze why the use of reconstruction loss leads to the mode collapse (lose of multimodality) in conditional GANs. Our work is also the first to propose alternatives to the reconstruction loss which greatly improve the multimodality of conditional GANs without losing the visual fidelity of the output samples.\\n\\nReviewer3 did not leave any comment on this clarification and failed to mention any specific works that undermine our novelty and how closely they are related to our work. In the initial review, reviewer3 referred several papers about variance prediction; however, these papers have no relation with conditional GANs or mode collapse. \\n\\nWe sincerely ask reviewer3 to be specific and detailed on the claim that our work lacks novelty with proper ground.\\n\\n\\n2. Theoretical results\\n===================================\\n\\u201cProving that the proposed method is actually effective in what is designed to do\\u201d\\nAccording to the modified review, reviewer3 seems to take the lack of proof that our model prevents mode collapse as a serious flaw in our work.\\nHowever, we think reviewer3 largely misunderstood the key to our paper. Our methods have no multimodality-enhancing mechanism; instead, GANs are responsible for multimodality. Our methods are designed not to interrupt the GAN optimization and we proved it. The methods simply offer training stability without interference. Thus, the multimodality observed in our methods is inherent from GANs, and we pointed out that it is suppressed by the reconstruction loss in existing conditional GANs.\\n\\nCompared with a parallel submission to ICLR 2019 below, it becomes more obvious that we provide necessary proofs.\\nDiversity-Sensitive Conditional Generative Adversarial Networks (https://openreview.net/forum?id=rJliMh09F7)\", \"both_papers_share_the_same_goal\": \"multimodal generation in conditional GANs. However, the approaches are vastly different. Unlike our work, they add a regularization term to the loss while keeping the reconstruction loss. Their regularization term directly forces the model to generate diverse outputs. In this case, the proof that it facilitates diversity is necessary, so they present it. In contrast, we point out that the reconstruction loss conflicts with GANs in a way that reduces the output variance and proposes alternatives without such problem. Thus, we prove the problem of reconstruction loss and that our methods do not conflict with the GAN objective.\\n\\n\\n3. Suggestion for better reviewing process \\n===================================\\nWe carefully prepared for the rebuttal to answer to the initial review critics. However, we feel that our rebuttal is completely ignored because we cannot find in the updated review which specific question is not answered by our rebuttal and why our clarification cannot be the answer to the original review questions. We strongly believe the communications between authors and reviewers should be precise, specific and helpful to one another.\"}",
"{\"metareview\": \"The paper presents new loss functions (which replace the reconstruction part) for the training of conditional GANs. Theoretical considerations and an empirical analysis show that the proposed loss can better handle multimodality of the target distribution than reconstruction based losses while being competitive in terms of image quality.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Intersting new loss function for cGANs\"}",
"{\"title\": \"Thank you for your interest in our paper.\", \"comment\": \"1. The scope of our proof\\n===================================\\nThat\\u2019s a great point. We have to make it clear in the draft. Our proof is confined to conditional GAN models with no explicit latent variable. Since the explicit latent variables provide the model with a vehicle that can represent variability and multimodality, our argument in section 4.4 may not be applicable to the models that explicitly encode latent variables. We add this discussion to the end of section 4.4.\\n\\n2. BicycleGAN\\n===================================\\nBicycleGAN has been applied to image-to-image translation, but not to image inpainting and super-resolution. Thus, we cannot find any standard implementation (or learned parameters) of BicycleGAN for the two tasks, which was the main reason why we did not report its results on the two tasks - image inpainting and super-resolution.\"}",
"{\"title\": \"Answers to Reviewer 3\", \"comment\": \"We are sincerely grateful for Reviewer 3\\u2019s thoughtful review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Novelty\\n===================================\", \"we_believe_that_our_work_has_significant_novelties_as_follows\": \"(1) To the best of our knowledge, our work is the first to formally criticize the use of reconstruction loss in conditional GANs. We also connect this problem to mode collapse (lose of multimodality). Of the prior works in conditional generation tasks, several papers empirically mention the loss of stochasticity in conditional GANs. However, they fail to analyze why this happens or propose what solutions can solve this problem. On the other hand, we reveal that the GAN loss and the reconstruction loss cannot coexist in harmony, and propose a solution to overcome this problem.\\n\\n(2) We propose alternatives to the reconstruction loss to greatly improve the multimodality of conditional GANs. As Reviewer 3 pointed out, the components of our methods, MLE and moment matching, are well-established ideas. However, it is novel to combine them as a solution to the loss of multimodality in conditional generation. Furthermore, we think the simplicity of our methods is not a weakness but a strength, which makes our methods easily applicable to a wide range of conditional generation tasks.\\n\\n2. Specific comments on organization and drawn conclusions\\n===================================\\nWe reorganize section 3.2 and 4.4 to reflect Reviewer 3\\u2019s suggestion. Specifically, we simplify section 3.2 and move some content about reconstruction loss from 4.4 to 3.2. \\n\\nWe agree with Reviewer 3 that the conclusion of section 4.4 may be rather over-stated. Our proof says that any generator cannot be optimal to both GAN and L2 loss simultaneously. It does not prove the generator is underperforming or suboptimal. Therefore, we remove the term \\u2018suboptimal\\u2019 and tone down the overall argument.\\n\\nWe also cite the papers that Reviewer 3 suggested.\"}",
"{\"title\": \"Answers to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for positive and constructive reviews. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Convergence speed\\n===================================\\nWe observe that our methods need more training steps (about 1.5x) to generate high-quality images compared to that with the reconstruction loss. It might be obvious because our methods train the model to generate a much wider range of outputs. We add some comments to Appendix B.1 regarding the convergence speed.\\n\\n2. Training stability\\n===================================\\nMLMM is similar to the reconstruction loss in terms of training stability. Encouragingly, our methods stably work with a large range of hyperparameter \\\\lambda. For example, the loss coefficient of MLMM is settable across several orders of magnitude (from tens to thousands) with similar results. However, as noted in the paper, MCMLE is unstable compared to MLMM.\\n\\n3. Why only MLMM_1 is not compared\\n===================================\\nDue to many combinations between our methods and tasks, we had to choose only a few of our methods for human evaluation. Although MLMM_1 and MLMM_{1/2} attained similar performance for all three tasks, we chose MLMM_{1/2} as the \\u2018default\\u2019 method because it better implements our idea - matching more statistics (i.e. not only means but also variances).\"}",
"{\"title\": \"Answers to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for your encouraging and constructive comments. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Ablation experiments\\n===================================\\nWe carry out the ablation experiments and present the results in appendix G (page 22). The results are indeed interesting. When trained with MLMM_1 or MCMLE_1 only, the outputs are indistinguishable from those with the reconstruction loss only, since there is no variation-inducing term to generate diverse output. In the case of MLMM_{1/2} and MCMLE_{1/2}, the model shows high variation in the output. However, the patterns of the variations differ greatly. Specifically, MLMM_{1/2} shows variations in low-frequency while MCMLE_{1/2} shows those in high-frequency.\\n\\nWe also add experiments of using GAN loss, MLMM_{1/2} loss, and reconstruction loss altogether. Whiling fixing the coefficient of GAN loss and MLMM loss to 1 and 10 respectively, we gradually increase the coefficient of reconstruction loss from 0 to 100. We find that the output variation decreases as the reconstruction loss increases. Interestingly, the sample quality is high when the reconstruction loss is absolutely zero or dominated by the MLMM loss. In contrast, the samples show poor quality when the reconstruction coefficient is 1 or 10. It seems that either method can assist the GAN loss to find visually appealing local optima but the joint use of them leads to a troublesome behavior.\\n\\n2. Shortcomings of un-correlated Gaussian\\n===================================\\nThis is a very interesting and profound question that may need to be further investigated in the future work. In summary, we believe that incorporating more statistics is not guaranteed to improve the performance, and un-correlated Gaussian may not be a bad choice.\\n\\nAn ideal GAN loss can match with any kind of statistics since it minimizes the JS divergence between sample distribution and real distribution. In this sense, additional loss term should be regarded as a \\u2018guidance\\u2019 signal, while the key player is still the GAN loss. However, it is unclear whether a tighter guidance necessarily yields better outputs.\\n\\nRegarding the tightness of guidance, the loss terms can be ordered as follows:\\nMLMM_1 = MCMLE_1 < MLMM_{1/2} = MCMLE_{1/2} < general covariance Gaussian.\\n\\nInterestingly, our qualitative evaluations show that MLMM_1 and MCMLE_1 generate comparable or even better outputs compared to MLMM_{1/2} and MCMLE_{1/2}. That is, matching means could be enough to guide GAN training in many cases. Adding more statistics may be helpful in some cases, but generally may not improve the performance. Moreover, we should consider the errors arising from the statistics prediction because a wrong estimation of statistics can even misguide the GAN training. \\n\\nPlease see blue fonts in section 5.2 of the newly uploaded draft to check how our paper is updated.\"}",
"{\"comment\": \"Hi,\\nI think this is an interesting work for improving the diversity of cGAN.\", \"but_i_have_some_questions\": \"1. The analysis in section 4.4 give a proof to mode collapse of some cGANs, such as pix2pix or UNIT. But the proof is not supported to the model that encode the laten representation to help generate images (Var(y|x,c)=0 is ok in this case), such as BicycleGAN or MUNIT. Right?\\n\\n2. The diversity scores in Table 1.(a) are remarkable. It will be interesting if you can present more comparisons with BicycleGAN in different tasks.\", \"title\": \"About suboptimal generator\"}",
"{\"title\": \"Accept\", \"review\": \"The paper describes an alternative to L1/L2 errors (wrt output and one ground-truth example) that are used to augment adversarial losses when training conditional GANs. While these augmented losses are often needed to stabilize and guide GAN training, the authors argue that they also bias the optimization of the generator towards mode collapse. To address this, the method proposes two kinds of alternate losses--both of which essentially generate multiple sample outputs from the same input, fit these with a Gaussian distribution by computing the generating sample mean and variance, and try to maximize the likelihood of the true training output under this distribution. The paper provides theoretical and empirical analysis to show that the proposed approach leads to generators that produce samples that are both diverse and high-quality.\\n\\nI think this is a good paper and solves an important problem---where one usually had to sacrifice diversity to obtain stable training by adding a reconstruction loss. I recommend acceptance.\\n\\nAn interesting ablation experiment might be to see what happens when one no longer includes the GAN loss and trains only with the MLMM or MCMLE losses, and compare this to training with only the L1/L2 losses. The other thing I'd like the authors to comment on are the potential shortcomings of using a simple un-correlated Gaussian to model the sample distributions. It seems that such a distribution may not capture the fact that multiple dimensions of the output (i.e., multiple pixel intensities) are not independent conditioned on the input. Perhaps, it may be worth exploring whether Gaussians with general co-variance matrices, or independent in some de-correlated space (learned from say simply the set of outputs) may increase the efficacy of these losses.\\n\\n====Post-rebuttal\\n\\nI've read the other reviews and retain my positive impression of the paper. I also appreciate that the authors have conducted additional experiments based on my (non-binding) suggestions---and the results are indeed interesting. I am upgrading my score accordingly.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting paper in analyzing and improving model collapse problems in conditional GANs\", \"review\": \"This paper analyzes the model collapse problems on training conditional GANs and attribute it to the mismatch between GAN loss and reconstruction loss. This paper also proposes new types of reconstruction loss by measuring higher statistics for better multimodal conditional generation.\", \"pros\": \"1.\\tThe analysis in Sec 4.4 is insightful, which partially explains the success of MLMM and MCMLE over previous method in generating diverse conditional outputs.\\n2.\\tThe paper is well written and easy to follow.\", \"cons\": \"Analysis on the experiments is a little insufficient, as shown below.\\n\\nI have some questions (and suggestions) about experiments. \\n1.\\tHow does the training process affected by changing the reconstruction loss (e.g., how the training curve changes?)? Do MLMM and MCMLE converge slower or faster than the original ones? What about training stability? \\n2.\\tWhy only MLMM_1 is not compared with other methods on SRGAN-celebA and GLCIC-A? From pix2pix cases it seems that Gaussian MLMM_1 performs much better than MLMM_{1/2}.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Lack of novelty and weak theoretical results\", \"review\": \"The paper proposes a modification to the traditional conditional GAN objective (which minimizes GAN loss as well as either L1 or L2 pixel-wise reconstruction losses) in order to promote diverse, multimodal generation of images. The modification involves replacing the L1/L2 reconstruction loss -- which predicts the first moment of a pixel-wise gaussian/laplace (respectively) likelihood model assuming a constant spherical covariance matrix -- with a new objective that matches the first and second moments of a pixel-wise gaussian/laplace likelihood model with diagonal covariance matrix. Two models are proposed for matching the first and second moments - the first one involves using a separate network to predict the moments from data which are then used to match the generator\\u2019s empirical estimates of the moments (using K samples of generated images). The second involves directly matching the empirical moment estimates using monte carlo.\\n\\nThe paper makes use of a well-established idea - modeling pixel-wise image likelihood with a diagonal covariance matrix i.e. heteroscedastic variance (which, as explained in [1], is a way to learn data-dependent aleatoric uncertainty). Following [1], the usage of first and second moment prediction is also prevalent in recent deep generative models (for example, [2]) i.e. image likelihood models predict the per-pixel mean and variance in the L2 likelihood case, for optimizing Equation 4 from the paper. Recent work has also attempted to go beyond the assumption of a diagonal covariance matrix (for example, in [3] a band-diagonal covariance matrix is estimated). Hence, the only novel idea in the paper seems to be the method for matching the empirical estimates of the first and second moments over K samples. The motivation for doing this makes intuitive sense since diversity in generation is desired, which is also demonstrated in the results.\", \"section_specific_comments\": \"- The loss of modality of reconstruction loss (section 3.2) seems like something which doesn\\u2019t require the extent of mathematical and empirical detail presented in the paper. Several of the cited works already mention the pitfalls of using reconstruction loss.\\n\\n- The analyses in section 4.4 are sound in derivation but not so much in the conclusions drawn. It is not clear that the lack of existence of a generator that is an optimal solution to the GAN and L2 loss (individually) implies that any learnt generator using GAN + L2 loss is suboptimal. More explanation on this part would help.\\n\\nThe paper is well written, presents a simple idea, complete with experiments for comparing diversity with competing methods. Some theoretical analyses do no directly support the proposition - e.g. sections 3.2 and 4.4 in my specific comments above. Hence, the claim that the proposed method prevents mode collapse (training stability) and gives diverse multi-modal predictions is supported by experiments and intuition for the method, but not so much theoretically. However, the major weakness of the paper is the lack of novelty of the core idea.\\n\\n=== Update after rebuttal:\\nHaving read through the other reviews and the author's rebuttal, I am unsatisfied with the rebuttal and I do not recommend accepting the paper. 
My rating has decreased accordingly.\\n\\nThe reasons for my recommendation, after discussion with other reviews, are -- (1) lack of novelty and (2) weak theoretical results (some justification of which was stated in my initial review above). Elaborating more on the second point, I would like to mention some points which came up during the discussion with other reviewers: The theoretical result which states that not using reconstruction loss given that multi-modal outputs are desired is a weaker result than proving that the proposed method is actually effective in what it is designed to do. There are empirical results to back that claim, but I strongly believe that the theoretical results fall short and feel out of place in the overall justification for the proposed method. This, along with my earlier point of lack of novelty are the basis for my decision.\", \"references\": \"[1] Kendall, Alex, and Yarin Gal. \\\"What uncertainties do we need in bayesian deep learning for computer vision?.\\\" Advances in neural information processing systems. 2017.\\n[2] Bloesch, M., Czarnowski, J., Clark, R., Leutenegger, S., & Davison, A. J. (2018). CodeSLAM-Learning a Compact, Optimisable Representation for Dense Visual SLAM. CVPR 2018.\\n[3] Dorta, G., Vicente, S., Agapito, L., Campbell, N. D., & Simpson, I. (2018, February). Structured Uncertainty Prediction Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r14Aas09Y7 | COCO-GAN: Conditional Coordinate Generative Adversarial Network | [
"Chieh Hubert Lin",
"Chia-Che Chang",
"Yu-Sheng Chen",
"Da-Cheng Juan",
"Wei Wei",
"Hwann-Tzong Chen"
] | Recent advancements in Generative Adversarial Networks (GANs) have inspired a wide range of works that generate synthetic images. However, the current processes have to generate an entire image at once, and therefore resolutions are limited by memory or computational constraints. In this work, we propose COnditional COordinate GAN (COCO-GAN), which generates a specific patch of an image conditioned on a spatial position rather than the entire image at a time. The generated patches are later combined together to form a globally coherent full-image. With this process, we show that the generated image can achieve quality competitive with the state of the art and the generated patches are locally smooth between consecutive neighbors. One direct implication of COCO-GAN is that it can be applied to any coordinate system, including cylindrical systems, which makes it feasible for generating panorama images. The fact that the patches are generated independently of each other inspires a wide range of new applications: firstly, "Patch-Inspired Image Generation" enables us to generate the entire image based on a single patch. Secondly, "Partial-Scene Generation" allows us to generate images within a customized target region. Finally, COCO-GAN's patch generation and massive parallelism enable combining patches to generate a full image with higher resolution than the state of the art. | [
"entire image",
"wide range",
"generated patches",
"generative adversarial network",
"gan",
"works",
"synthetic images",
"current processes"
] | https://openreview.net/pdf?id=r14Aas09Y7 | https://openreview.net/forum?id=r14Aas09Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeNtWZ2kN",
"Byx3uk45R7",
"r1gNMG5FCX",
"HJlYrI8O0X",
"BJgGV8UdAQ",
"ryea-UIO07",
"rylP4BLOCX",
"r1xdfBUdRQ",
"B1lJ-SIuCQ",
"HklWyH8OAm",
"ryxR2VUORX",
"B1lvh93lA7",
"r1eLtajJR7",
"rJgZFBEY6m",
"HylqvBEYpQ",
"HJx6qgiU6m",
"SJeJgs98pX",
"HJexTqc8p7",
"HyeA1BiNTm",
"r1xwR_U4TX",
"ryxe9uUETm",
"Sye8NN5Xp7",
"BygrBnrZp7",
"Hkxi2VNbT7",
"ByxHxsaxTQ",
"HJeboszx6m",
"SJxgFjze67",
"Bygz7jMepm",
"SJepnczgam",
"S1gIoh_c2Q",
"r1gllwP937"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544454523673,
1543286644312,
1543246348381,
1543165505328,
1543165482428,
1543165445114,
1543165230735,
1543165199636,
1543165175376,
1543165145198,
1543165110106,
1542666927223,
1542598013817,
1542174072717,
1542174050165,
1542004885315,
1542003430588,
1542003383903,
1541874917642,
1541855438710,
1541855367630,
1541805102355,
1541655613074,
1541649587321,
1541622509318,
1541577625333,
1541577592133,
1541577498301,
1541577397432,
1541209245859,
1541203688395
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper854/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/Authors"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper854/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper introduces a GAN architecture for generating small patches of an image and subsequently combining them. Following the rebuttal and discussion, reviewers still rate the paper as marginally above or below the acceptance threshold.\\n\\nIn response to updates, AnonReviewer3 comments that \\\"ablation experiments do make the paper stronger\\\" but it \\\"still lacks convincing experiments for its main motivating use case: generating outputs at a resolution that won't fit in memory within a single forward pass\\\".\\n\\nAnonReviewer2 points to the major shortcoming that \\\"throughout the exposition it is never really clear why COCO-GAN is a good idea beyond the fact that it somehow works. I was missing a concrete use case where COCO-GAN performs much better.\\\"\\n\\nThough authors provide additional experiments and reference high-resolution output during the discussion phase, they caution that these results are preliminary and could likely benefit from more time/work devoted to training.\\n\\nOn balance, the AC agrees with the reviewers that the paper contains some interesting ideas, but also believes that experimental validation simply needs more work, and as a result the paper does not meet the bar for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview: interesting idea, experiments could be improved\"}",
"{\"title\": \"Thanks for support\", \"comment\": \"We agree with the suggestion from the reviewer. Since the Q-network is highly correlated to the \\u201cPatch-Inspired Image Generation\\u201d application, we would move it to the corresponding experiment section with a brief description and leave the details in the appendix for the final revision for better readability.\"}",
"{\"title\": \"LGTM, remove Q network from technical section\", \"comment\": \"I just looked over the revision and I'm happy with the changes and additional results.\\n\\nI'd recommend the authors to remove the Q-network from the main technical section for the final revision. It does hurt performance (contributes negatively). If the authors still want to talk about it, the appendix might be a better place for it.\"}",
"{\"title\": \"Rebuttal update\", \"comment\": \"Thanks for all the reviewers\\u2019 effort in the paper review, we received lots of valuable suggestions. We accordingly revised our paper and create a meta-summary of the rebuttal. Please kindly check it out and leave some comments. We are more than willing to discuss more and further polish our paper in the remaining rebuttal period!\", \"link_to_the_meta_summary_by_the_author\": \"\", \"https\": \"//openreview.net/forum?id=r14Aas09Y7¬eId=ryxR2VUORX\"}",
"{\"title\": \"Rebuttal update\", \"comment\": \"Thanks for all the reviewers\\u2019 effort in the paper review, we received lots of valuable suggestions. We accordingly revised our paper and create a meta-summary of the rebuttal. Please kindly check it out and leave some comments. We are more than willing to discuss more and further polish our paper in the remaining rebuttal period!\", \"link_to_the_meta_summary_by_the_author\": \"\", \"https\": \"//openreview.net/forum?id=r14Aas09Y7¬eId=ryxR2VUORX\"}",
"{\"title\": \"Rebuttal update\", \"comment\": \"Thanks for all the reviewers\\u2019 effort in the paper review, we received lots of valuable suggestions. We accordingly revised our paper and create a meta-summary of the rebuttal. Please kindly check it out and leave some comments. We are more than willing to discuss more and further polish our paper in the remaining rebuttal period!\", \"link_to_the_meta_summary_by_the_author\": \"\", \"https\": \"//openreview.net/forum?id=r14Aas09Y7¬eId=ryxR2VUORX\"}",
"{\"title\": \"A. Writing\", \"comment\": [\"We are notified that our paper writing has some flaws, and it somewhat obscures the main targets and benefits of COCO-GAN. We accordingly have been revising the introduction and conclusion section of the paper to ensure the paper to be as clear as possible to the readers. We also fix some minor mistakes pointed out by the reviewers as follows:\", \"We create a section in the future work section mentioning and discussing \\u201cSynthesizing Images of Humans in Unseen Poses\\u201d by G. Balakrishan et al.\", \"We rewrite some inaccurate statements in Section 3.4 and add more citations.\", \"Figure 2 is replaced with other samples and referenced to more generated samples in Figure 12.\", \"Note that we made some other minor modifications to fit the 10-page limit.\"]}",
"{\"title\": \"B. Ablation study\", \"comment\": \"We perform an ablation study and add a corresponding section (Section 3.3 and Appendix F) in the main paper based on all the reviewers\\u2019 suggestions. The ablation study is in two folds:\\n\\n1. Comparison with a straightforward approach (refered to as [M] in the comment area and \\\\mathcal{M} in the paper). The method [M] creates a full-sized generator, but trains and inferences with partial views by cropping corresponding feature maps during forward propagation. Despite the method [M] still implicitly uses the conditional coordinating strategy within the feature maps selection process, our experimental results in Table 1 suggest [M] and its variants (which are equipped with many COCO-GAN components at our best) cannot generate competitive results. The root cause of the poor results is unclear. Our hypothesis is the conditional batch normalization (CBN) in COCO-GAN is crucial for conditional coordinating. However, as the method [M] is not our main study target, we decide not to perform further analysis for improving the method [M].\", \"the_fid_curves_through_time_are_provided_in_the_following_anonymous_link\": \"\", \"https\": \"//www.dropbox.com/sh/y87ypswqslf9b5i/AAAmwXalLriX2ci5U7nkLxDZa?dl=0\", \"some_generation_samples_through_epochs_are_provided_in_the_following_anonymous_link\": \"https://www.dropbox.com/sh/pef4m1nz7wdo99i/AAB9dXVM0dMILU0Ojsw07Jl0a?dl=0\", \"table_1\": \"FID scores comparison between COCO-GAN and [M] method proposed by AnonReviewer3. All models are trained on CelebA dataset at 64x64 resolution. The results suggest COCO-GAN is more preferable in the same setting.\\n-----------------------------------------------------------------------------------\\nModel FID\\n-----------------------------------------------------------------------------------\\nCOCO-GAN 4.99\\n[M] 72.82\\n[M] + projection discriminator (100 epochs) 90.87\\n[M] + projection discriminator + macro discriminator 60.36\\n-----------------------------------------------------------------------------------\\n\\n\\n\\n2. We show an ablation study toward the trade-offs of each component in Table 2. The ablation study is conducted in the following five configurations:\\n - Continuous Sampling: Using the continuous uniform sampling strategy to sample spatial positions during training.\\n - Optimal Discriminator: The discriminator discriminates the full image, while the generator generates micro patches.\\n - Optimal Generator: The generator generates the full image, while the discriminator discriminates macro patches.\\n - Without Q Network: Removing the Q network which constructs the content consistency loss.\\n - Multiple Generators: Training an individual generator for each spatial position.\\n\\nWe empirically found that the Q network that constructs the content consistency is not required if not considering the \\u201cPatch-Inspired Image Generation\\u201d application. Surprisingly, despite the convergence speed is different, the \\u201cOptimal Discriminator\\u201d, \\u201cCOCO-GAN\\u201d, and \\u201cOptimal Generator\\u201d (ordered by convergence speed from fast to slow) can converge to similar final FID score if with sufficient training time. The difference in convergence speed is expected as \\u201cOptimal Discriminator\\u201d provides the generator with more accurate and global adversarial loss. 
In contrast, the \\u201cOptimal Generator\\u201d has relatively more parameters and layers to optimize, which causes the convergence speed slower than COCO-GAN. Lastly, \\u201cMultiple Generators\\u201d setting cannot converge well. Although it can also concatenate micro patches without obvious seams as COCO-GAN does, the full-image results often cannot agree and are not globally coherent. We provide the FID curve through time and the generated samples of the first 100 epochs in the following anonymous links:\", \"fid_curve_through_time\": \"\", \"generated_samples_through_epochs\": \"\", \"table_2\": \"FID scores comparison between COCO-GAN and its variants. All models are trained on CelebA dataset with the setting of 64x64 resolution, 16x16 micro patch, and 32x32 macro patch.\\n-------------------------------------------------------------------------------\\nModel FID\\n-------------------------------------------------------------------------------\\nCOCO-GAN (ours) 4.99\\nCOCO-GAN (Continuous Sampling) 6.13\\nCOCO-GAN (Optimal Discriminator) 4.05\\nCOCO-GAN (Optimal Generator) 6.12\\nCOCO-GAN (Remove Q Network) 4.87\\nMultiple Generators 7.26\\n-------------------------------------------------------------------------------\"}",
"{\"title\": \"C. Providing more generated examples\", \"comment\": \"As flagged by AnonReviewer1, we probably provide too few samples in the original paper, which causes the experimental result seems carefully curated rather than randomly sampled. We accordingly provide more samples at the following anonymous links. Note that the samples are provided in across epochs manner so that we can observe the seams disappears over epochs.\\n\\nCelebA 128x128 (micro patches 32x32, macro patches 64x64):\", \"https\": \"//www.dropbox.com/sh/l7i9lxgdw8ez69v/AABA4ok5ldQM-B8FVgORpE5Oa?dl=0\"}",
"{\"title\": \"D. Generating high-resolution images\", \"comment\": \"About validating generation quality directly on the high-resolution dataset, we decide to provide an anonymous link to a set of 1024x1024 samples generated by COCO-GAN. We run this experiment with our default hyperparameter configuration (this config was obtained from CelebA 64x64 experiment) on CelebA-HQ with 256x256 resolution patches.\\n\\nWe are hesitant about providing these images as we are afraid if the community may have an unfair impression about COCO-GAN\\u2019s generation quality. So far this experiment requires more hyperparameter tuning and the training may still not converge yet. We observe significant balancing problem in the G/D loss curve. Thus we suggest the results are taken as a reference only. \\n\\nNote that hyperparameter tuning for CelebA-HQ is very expensive and essential for GANs, but it is not affordable for us. For instance, PGGAN takes 2 weeks to train on CelebA-HQ, meanwhile, COCO-GAN should converge slower than PGGAN, since it does not adopt the \\u201cprogressive growing\\u201d strategy.\", \"anonymous_link_to_celeba_hq_1024x1024_generation_samples\": \"\", \"https\": \"//www.dropbox.com/sh/5ly8xk22cqhxt76/AAAee1E2D8rIPAwFZtympnnta?dl=0\"}",
"{\"title\": \"Rebuttal update\", \"comment\": \"Sincerely thanks for all the feedback from the reviewers. We summarize and respond to the reviewers\\u2019 suggestions and justifications in the following four aspects: writing, ablation study, providing more generated examples, and generating high-resolution images.\"}",
"{\"title\": \"Thanks for the questions!\", \"comment\": \"\\u201c- Agree to limit discussion of [M] (I think there's still a misunderstanding, I wasn't suggesting generating different graphs on the fly, but a single graph for each crop, and providing crop locations as tensorflow placeholders). I believe that the comparisons to the per-coordinate generators requested by the other reviewer will be a good enough equivalent baseline.\\u201d\\n\\nWe have finished a straightforward implementation of [M]. Although it already shows significant differences in quality between [M] and COCO-GAN, considering the model training is still not fully-converged, we will still wait for a couple of days to monitor it.\\n\\n\\u201c- About the high-resolution results: I think these results are important, since this is the main motivating use case of COCO-GAN. We saw a drop in scores compared to PGGAN at the lower resolution. The question is that does that drop become larger when one goes to higher resolution---i.e., is a single generator with co-ordinate input still able to generate the whole image. As you say, fewer parameters is better, but only if they still give comparable performance.\\u201d\\n\\nWe agree with the reviewer that an analysis toward real high-resolution image generation is important, and that is also our main motivation to run the CelebA-HQ experiment. However, note that we do not have sufficient computing power to perform extensive hyper-parameters tuning, and considering that GANs are highly sensitive to hyperparameters, it is hard to have a fair comparison in generation quality with PGGAN. In this work, we emphasize more on the contribution of demonstrating conditional coordinating is a possible solution and the generation quality is surprisingly better than expected. There may still be some unrecognized bottlenecks or design flaws not recognized by the authors, which requires more researchers and different opinions coming in for further study.\\n\\n\\u201cQuestion: for 1024x1024 full images, are you using 256x256 as the resolution of the micro patches ? What happens if you use the same hyper-parameters for the 128x128 dataset (i.e., 32x32 micro-patches) and use it to generate the 1024x1024 images ?\\u201d\\n\\nYes, we use 256x256 resolution micro patches. We believe using 256x256 resolution means the same hyperparameters setting to 128x128 dataset as it is 1/4-sized on each of the edges of the full image. \\nUsing 32x32 micro patches to generate 1024x1024 images is similar to using 4x4 resolution micro patches to generate 128x128 CelebA images. This setting is difficult if not impossible to train a COCO-GAN as both the generator and the discriminator cannot observe useful information from the extremely-small partial views to learn accurate condition for coordinates.\\n\\n\\u201cThis is again important in the context of the motivation of memory usage. Given a size of micro-patches that can be fit in memory, what factor larger images can one generate using COCO-GAN ? Is it limited to only 4x ?\\u201d\\n\\nNo, it is not limited to 4x only, the task of panorama generation using 12x4 micro patches is an experimental proof. Smaller patch size may cause both the generator and the discriminator to fail to learn accurate condition for coordinates, since small patches may not form a valid partial view for the models to learn useful information. It forms a trade-off between patch size and generation quality as a hyperparameter. \\n\\nThis is kind of similar to low-precision floating point training. 
Low precision training has many benefits (e.g., memory usage, computing power, and training time). But information loss of the low precision gradients causes many numerical problems, which many researchers are still working on reducing the side effects.\"}",
"{\"title\": \"Response\", \"comment\": [\"Agree to limit discussion of [M] (I think there's still a misunderstanding, I wasn't suggesting generating different graphs on the fly, but a single graph for each crop, and providing crop locations as tensorflow placeholders). I believe that the comparisons to the per-coordinate generators requested by the other reviewer will be a good enough equivalent baseline.\", \"About the high-resolution results: I think these results are important, since this is the main motivating use case of COCO-GAN. We saw a drop in scores compared to PGGAN at the lower resolution. The question is that does that drop become larger when one goes to higher resolution---i.e., is a single generator with co-ordinate input still able to generate the whole image. As you say, fewer parameters is better, but only if they still give comparable performance.\"], \"question\": \"for 1024x1024 full images, are you using 256x256 as the resolution of the micro patches ? What happens if you use the same hyper-parameters for the 128x128 dataset (i.e., 32x32 micro-patches) and use it to generate the 1024x1024 images ?\\n\\nThis is again important in the context of the motivation of memory usage. Given a size of micro-patches that can be fit in memory, what factor larger images can one generate using COCO-GAN ? Is it limited to only 4x ?\"}",
"{\"title\": \"Author's response [1/2]\", \"comment\": \"\\u201c- So for the static graph generation, I think there may have been a mis-understanding. You don't need to generate a graph for all the crops (for training or generation). You would generate the graph for one crop (with the location of the crop being a placeholder at test time, or randomly generated for training). For generation, you would call the same graph multiple times with the same latent vector and different crop locations, and then just concatenate the results on the CPU---just like you're calling this paper's generator with different co-ordinates and concatenating them together. Training would be with one random crop at a time.\\u201d\\n\\n\\nIf we didn\\u2019t get the reviewer\\u2019s meaning wrong, take COCO-GAN as an example, it takes average 120 seconds to build the graph and average 7 seconds to run global initializer on a single GTX 1080 GPU. We believe it is hard to generate the graph on the fly for each iteration.\\n\\nAs this is a little out of the scope, we hope the reviewers also agree that it would be better to prevent further discussions about the implementation detail of [M] here.\\n\\n\\u201c- About requiring conditional batch normalization (although, you could have different crops for different members in a batch---just like you have different co-ordinates in the current setting) and differences in parameters, yes, these are empirical questions---but that's why experiments would help to resolve this. It is possible that a cropping strategy may not work for GANs as it does for other tasks---or at least not as well as the proposed method, but again, experimental validation for this would be good.\\u201d\\n\\nWe think the difference in parameters is a critical problem that should be taken into consideration when an algorithm is designed, and is not just an empirical issue. If the number of parameters is not a problem, then, indeed, conditional generation will not be a problem since we can train N generators for N classes. Weight sharing is still preferable in many cases if possible.\"}",
"{\"title\": \"Author's response [2/2]\", \"comment\": \"\\u201c- I think more generally, the main issue with the experimental validation is that the method is motivated by the need to generate very high resolution volumes, but the experiments are run on standard small datasets (the panoramas are examples of stitching things together to being moderately large outputs---but these weren't actually trained on high-resolution panoramas, so it is difficult to figure out whether they are plausible).\\u201d\\n\\nWe validate that our method can learn to compose larger images with patches. Our method is compatible with most of the mainstream GAN methods. One can combine COCO-GAN with PGGAN without conflicts (except may require massive computing power for hyperparameters tuning) since COCO-GAN only affects the input/output of the model and introduces CBN to replace batch norm.\\n\\nWe think, at this point, it would be helpful to validate that COCO-GAN can still generate high-quality images even in the following cases:\\n 1. Directly concatenating the generated patches without observing extreme seams.\\n 2. Both the generator and the discriminator have only partial observations.\\n\\nAbout validating generation quality directly on the high-resolution dataset, we decide to provide an anonymous link to a set of 1024x1024 samples generated by COCO-GAN. We run this experiment with our default hyperparameter configuration (this config was obtained from CelebA 64x64 experiment) on CelebA-HQ with 256x256 resolution patches. \\n\\nWe are hesitant to provide these images as we are afraid if the community may have an unfair impression about COCO-GAN\\u2019s generation quality. So far this experiment requires more hyperparameter tuning and the training may still not converge yet. We observe significant balancing problem in the G/D loss curve. Note that hyperparameter tuning for CelebA-HQ is very expensive and essential for GANs, but we are not affordable for that. PGGAN takes 2 weeks to train on CelebA-HQ, meanwhile, COCO-GAN should converge slower than PGGAN, since it does not adopt the \\u201cprogressive growing\\u201d strategy.\", \"anonymous_link_to_celeba_hq_1024x1024_generation_samples\": \"\", \"https\": \"//www.dropbox.com/sh/5ly8xk22cqhxt76/AAAee1E2D8rIPAwFZtympnnta?dl=0\\n\\nNote that the spatial interpolation property does not exist in [M]. We believe the most important contribution of this work is whether our observation and some non-before-seen properties of COCO-GAN are interesting to the community. \\n\\n\\u201cI think a lot of the requests from reviewers for ablation and baselines would go away if the method were convincingly able to learn generation for truly high-dimensional images or signals---the kind where traditional GANs would run out of memory and clearly could not be used. That would be definitively demonstrate that method is able to \\\"achieve\\\" something that prior work wasn't able to.\\nWithout that kind of validation, I think the paper needs to run more of these baseline experiments to make its case.\\u201d\\n\\nWe believe COCO-GAN is technically a favorable solution toward high-resolution image generation. We provide 1024x1024 resolution generation, which is already on the state-of-the-art level in terms of resolution, though it still needs some hyperparameter tuning. We believe this is a direct evidence that COCO-GAN can apply to high-resolution image generation. 
If given sufficient computing power, more extensive hyperparameter tuning, and maybe combined with some mainstream GANs training frameworks (such as PGGAN), we are confident to show that COCO-GAN is an important building block for high-resolution image generation.\\n\\nNote that we are running the experiments for the five rebuttal requests from R1 and R2, it still needs to take some time to wait until the models converge.\"}",
"{\"title\": \"Response\", \"comment\": [\"So for the static graph generation, I think there may have been a mis-understanding. You don't need to generate a graph for all the crops (for training or generation). You would generate the graph for one crop (with the location of the crop being a placeholder at test time, or randomly generated for training). For generation, you would call the same graph multiple times with the same latent vector and different crop locations, and then just concatenate the results on the CPU---just like you're calling this paper's generator with different co-ordinates and concatenating them together. Training would be with one random crop at a time.\", \"About requiring conditional batch normalization (although, you could have different crops for different members in a batch---just like you have different co-ordinates in the current setting) and differences in parameters, yes, these are empirical questions---but that's why experiments would help to resolve this. It is possible that a cropping strategy may not work for GANs as it does for other tasks---or at least not as well as the proposed method, but again, experimental validation for this would be good.\", \"I think more generally, the main issue with the experimental validation is that the method is motivated by the need to generate very high resolution volumes, but the experiments are run on standard small datasets (the panoramas are examples of stitching things together to being moderately large outputs---but these weren't actually trained on high-resolution panoramas, so it is difficult to figure out whether they are plausible).\", \"I think a lot of the requests from reviewers for ablation and baselines would go away if the method were convincingly able to learn generation for truly high-dimensional images or signals---the kind where traditional GANs would run out of memory and clearly could not be used. That would be definitively demonstrate that method is able to \\\"achieve\\\" something that prior work wasn't able to.\", \"Without that kind of validation, I think the paper needs to run more of these baseline experiments to make its case.\"]}",
"{\"title\": \"Thanks for the rapid response [1/2]\", \"comment\": \"1.\\n\\u201cIt's not clear why you'd need conditional batch-norm or why it would be an issue to implement in a static graph framework like Tensorflow. You could simply add a tf.slice (with fixed size crop for each layer, but random co-ordinates) after the output of every conv2d_transpose layer, and then do regular batch-normalization on that sliced output. \\n\\nNote that in the proposed method, batch-normalization is also being done on a smaller sized feature maps (corresponding to each crop)---it would be the same in this case, except that those smaller sized feature maps would come from cropping a larger map.\\u201d\\n\\n** Conditional Batch Norm (CBN) **\\nAn empirical explanation is that the distribution of feature map crops are different in every spatial position, which requires different batch norm parameters to handle with (take CelebA as an example, the top side is mostly hair or hats, while the bottom side is mostly jaw or clothes). This is an empirical observation during COCO-GAN training, the model will result in extremely poor generation quality of without CBN. We expect [M] will have similar problems.\", \"we_provide_an_anonymous_image_link_showing_how_coco_gan_might_fail_if_it_uses_regular_batch_norm_instead_of_cbn\": \"\", \"https\": \"//pastebin.com/8Eh0rcNz\\n\\nA - The first problem is building the graph. For static graph frameworks, everything should be done in the graph building stage. If the output of the generator is of size HxW, it will need to process almost HxW slices while graph building. In our toy example, the graph building is slow, which will become even slower as H and W grow. If with 10240x10240 input, it is estimated to take 1.5 days to build the graph on a single GTX 1080 GPU.\\n\\nB - The second problem is the training speed. In the toy example, forward pass without training takes 3.2 seconds per sample on a single GTX 1080 GPU. Our hypothesis about the low speed is that the complex graph (which introduces lots of extra Tensorflow operators, created 135,168 slice operators in the toy example only) and random selection makes the framework cannot take advantage of caching. Note that this result is for a single batch and a single layer only, which is already 2x slower than a full iteration of COCO-GAN training.\\n\\nC - The third problem is implementation. One will need to calculate the layer-wise receptive field for each position of the next layer, and meanwhile, needs to be aware of edge conditions. Furthermore, the feature map indexing changes for each modification on architecture, hyperparameters, and dataset. This is not ideal for software development.\\n\\n-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-\\n\\nAside from the potential problems above, we may consider other cons (e.g. more parameters and training difficulty) and unknown implementation/training difficulties, [M] still requires researchers to further investigate before being used as a fair comparison target. Although [M] is straightforward at first glimpse, it is not simpler than or preferable to COCO-GAN from our point of view. \\n\\nIf we consider the case that all methods share the same final generation quality, both [M] and training multiple generators introduce more parameters to be trained, while COCO-GAN suggests this can be done by conditional coordinate. 
It suffices that COCO-GAN provides a possible solution in a different dimension that the final generation quality is surprisingly good without post-processing. This is, to our knowledge, the first work observing this phenomenon and corroborating that COCO-GAN-like framework can generate images which are competitive to state-of-the-art methods.\", \"an_anonymous_link_to_a_toy_example_code\": \"\"}",
"{\"title\": \"Thanks for the rapid response [2/2]\", \"comment\": \"2.\\n\\u201cNote that working with crops is a fairly common solution in a lot of other applications which produce full-sized maps like segmentation, super-resolution, depth estimation, etc. (although there the original input itself is randomly cropped) even though its not been used for GANs before.\\u201d\\n\\n\\nYes, to our knowledge in the GANs domain, we are the first work observing the phenomenon that the generated partial patches can be concatenated without post-processing and result in surprisingly high-quality images. Furthermore, our setting is very different from existing patch-based solutions for segmentation, super-resolution, depth estimation. First of all, those methods mostly have very strong semantic and structure self-similarity between inputs and outputs, which makes the result patches easy to align. Second, segmentation is relatively easy to retain continuity between patches, as they are not output in raw pixel space. Third, the patches generated by COCO-GAN are entirely non-overlapped, while normal patch-based approaches consider calculating the mean of multiple overlapped patches. Lastly, even the same setting in different domains have different problems to deal with. We propose, implement, analyze and verify the idea of COCO-GAN, and we show that it is a valid solution after overcoming many unexplored problems. In the belief that these are interesting observations and COCO-GAN may open many new possible applications, we are excited about sharing this work with the community.\\n\\n\\n\\n3.\\n\\u201cAt some level, cropping (that authors refer to as method [M]) is essentially the same as what R2 also asked for in their review---i.e., separate generators for each co-ordinate. [M] is basically a convolutional version of that (i.e., each crop of the full generator is essentially a separate generator for that crop's co-ordinates).\\u201d\\n\\nWe are running the experiment of training multiple generators based on this rebuttal request, which we expect may cause the optimization process to become slower due to a significant increase in the number of parameters. Just to make things more precise: COCO-GAN shares weights across different spatial positions and has fewer parameters. In contrast, training multiple generators introduces significantly more parameters, and [M] also needs to increase the number of parameters as more convolutional layers are needed.\\n\\nLastly, the multiple generators setting and [M] are both a special case of COCO-GAN except introducing more parameters. The former one explicitly introduces conditional coordinates via generator selection. The latter one implicitly introduces spatial conditional coordinate via selecting specific slices of feature maps. \\n\\nWe thank the reviewers for suggesting possible solutions to our original objective. The three methods (including COCO-GAN) have different pros and cons. Further empirical comparisons between these approaches may be an interesting future work direction, but probably not in the scope of this paper as we aim to introduce conditional coordinate (an explicitly or implicitly used feature for all three methods), exemplify the generation quality (while [M] needs some further work to justify) and show some of (but not limited to) the possible new applications.\"}",
"{\"title\": \"Not clear why normalization would be an issue\", \"comment\": \"It's not clear why you'd need conditional batch-norm or why it would be an issue to implement in a static graph framework like Tensorflow. You could simply add a tf.slice (with fixed size crop for each layer, but random co-ordinates) after the output of every conv2d_transpose layer, and then do regular batch-normalization on that sliced output.\\n\\nNote that in the proposed method, batch-normalization is also being done on a smaller sized feature maps (corresponding to each crop)---it would be the same in this case, except that those smaller sized feature maps would come from cropping a larger map.\\n\\nNote that working with crops is a fairly common solution in a lot of other applications which produce full-sized maps like segmentation, super-resolution, depth estimation, etc. (although there the original input itself is randomly cropped) even though its not been used for GANs before.\\n\\nAt some level, cropping (that authors refer to as method [M]) is essentially the same as what R2 also asked for in their review---i.e., separate generators for each co-ordinate. [M] is basically a convolutional version of that (i.e., each crop of the full generator is essentially a separate generator for that crop's co-ordinates).\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for keeping in touch with us! Here are our responses:\\n\\n2. \\n\\u201cCan you show some concrete experiments supporting these claims?\\u201d\\n\\n\\n** Patch-Inspired Image Generation **\\nSection 3.4, Figure 8 and Appendix E are related to this application.\\n\\n** Partial-Scene Generation **\\nThis is mentioned in the last paragraph of Section 3.3. Generating partial views of CelebA or LSUN may not have real-world applications, but generating part of panoramas in virtual reality (VR) does have prospective benefits in reducing computations. We show that COCO-GAN can generate panorama in a cylindrical coordinate system. Although the full panorama resolution in our experiment is not significantly high, we believe our experiment is a valid proof of concept.\\n\\n** Computation-Friendly Generation **\\nSection 3.5 discusses this issue and our experiments support that the idea of generating images via concatenating multiple generated patches is possible.\\n\\n3. \\n\\u201cI still think ex 4 is very important. I understand that you'll have many more parameters, but the computation should be the same (or slightly smaller, since you do not have the conditional coordinate). However, only this experiment highlights the need for coordinate conditioning (which according to reply 4 is the essential component of this paper).\\u201d\\n\\nWe have kick-started this experiment under CelebA 64x64 setting.\\nOne of the critical problems of this solution is that, take the panorama generation setting for example, this solution will need to fit 48 different generators into the GPU memory during training phase. The scale of this problem grows rapidly as the image size increases. Although there may exist some engineering workarounds to make this possible, but when a weight-sharing strategy like COCO-GAN exists, we believe COCO-GAN will be more attractive.\"}",
"{\"title\": \"Thanks for the follow-up!\", \"comment\": \"Sincere thanks for the follow-up and suggestions, which are helpful for further highlighting the advantages of our method:\\n\\nAt first glance, a potential problem of the method (referred as [M] afterward) proposed by the reviewer is in the normalization layer. Most of the common normalization methods do not work with partial feature maps. A possible solution to this problem is using conditional batch normalization similar to which used in our paper. This makes [M] very similar to COCO-GAN as it introduces spatial conditions during generation, except [M] has much more parameters.\\n\\nAnother implementation difficulty is that [M] needs to consider padding of convolutional layers. The cropped feature map in consecutive layers is not trivially 1x or 2x sized. Let:\", \"s\": \"the size of the next feature map\", \"p\": \"the padding required\", \"f\": \"the fraction of down-scaling,\\nthe previous feature map requested is F*L+P, and P needs to consider paddings if the patch is occasionally near the edge of the feature map. COCO-GAN is relatively easy to implement and flexible to different architectures.\\n\\nHere\\u2019s a brief comparison of pros and cons between COCO-GAN and [M] from our point of view:\", \"coco_gan\": [\"Significantly fewer parameters\", \"Considers coordinate system, which shows appealing results in our panorama setting, and may further promote to other image types (such as 360 images with a hyperbolic coordinate system)\", \"The generation quality is surprisingly good, no obvious seams without post-processing.\", \"Flexible to different generator architectures.\", \"Need to define coordinate systems and patch size, which can be treated as a set of hyper-parameters.\", \"Slight but reasonable FID score drop relative to optimal model (generate and discriminate on full-image)\", \"[M]:\", \"Straightforward\", \"Friendly to dynamic graph frameworks, but relatively hard to implement for static graph frameworks (e.g. Tensorflow)\", \"To our knowledge, no related GAN publications or analysis about the potential problems.\", \"The normalization layer may need to adopt conditional batch norm, which makes it similar to COCO-GAN\", \"The generator and the discriminator are significantly imbalanced. This may cause the model hard to train in practice.\", \"The discriminator may need spatial positions as condition to form a cGAN or ACGAN loss, which is again similar to COCO-GAN.\", \"We hope our empirical analysis above is justifiable for the reviewer to agree that COCO-GAN has its value and advantages. Although [M] could also be a possible solution, it would require further study. Besides, given the tight time constraint of rebuttal period, it is hard to implement, debug, train and fine-tune [M] to sufficiently good quality to compare with COCO-GAN.\"]}",
"{\"title\": \"Follow-up\", \"comment\": \"Thanks for the reply.\\n\\nSo, it seems like the main motivation or use-case is when it wouldn't be possible to generate entire images (or volumes) at a time because the a single generator for the entire image doesn't fit in memory.\\n\\nBut it seems the straight-forward solution in that case would be to simply take crops of intermediate feature maps in the generator. For example, to generate a specific crop of a full-sized image with a typical progressive generator (that increases resolution with upsampled convolution layers), you could crop every intermediate layer's activations so as to only retain the portion that is part of the receptive field of the final output. \\n\\nThis is the normal approach to processing large scale inputs/outputs. Passing co-ordinates as input and then trying to achieve consistency post-facto seems to be un-necessary if memory constraints are the only issue. \\n\\nNote that you could apply this strategy during training as well if memory was an issue (choosing random final image-space crops, and cropping accordingly before passing to the discriminator).\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We sincerely appreciate the reviewer raises many important questions we are more than willing to discuss. All reviewers agree our work is novel, which is our target to introduce the new \\u201cconditional coordinate\\u201d idea to the community. Note that since some questions are correlated, our response is not in the reviewer\\u2019s original question order.\\n\\n1.\\n\\u201cAs a means of simply producing high-resolution images, it appears that \\\"PGGAN\\\" performs better than the proposed method. Therefore, the paper doesn't clearly explain the setting when the division into patches produces a better result. It is worth noting that the idea of applying \\\"local\\\" critics (i.e., discriminators acting on sub-regions) isn't new (e.g., Generative Image Inpainting with Contextual Attention in CVPR 2018). What's new is the proposed method's way of achieving consistency between different regions by providing the 'co-ordinate' of the patch as input (and seeking consistency in the latent vector through a loss)---rather than applying a discriminator at a coarser level on the downsampled image. But given the poorer performance compared to PGGAN, it isn't clear that there is a an advantage to this approach.\\u201d\\n\\n> The main target of COCO-GAN is exploring other applications instead of increasing generation quality. The low FID score is a by-product. \\n> In many real-world applications (e.g., VR, medical images), the data are normally too large to even fit into memory. Modern GAN architectures require the generator to generate the full image at once. This requirement makes generating these super-large images hard to achieve. As the result, we propose COCO-GAN, which can break full-image generation into patches generation. Furthermore, the discriminator also takes macro patches as input, since taking the full image as the input of the discriminator is infeasible in super-large image generation problem.\\n> COCO-GAN is orthogonal to PGGAN, one can still add the \\u201cprogressive growing\\u201d strategy to micro/macro patches generation/discrimination. However, this will introduce more hyperparameters, thus making it more challenging to balance everything.\\n\\n2.\\n\\u201cFirstly, it isn't clear to me why the further breakdown of the macro patch into micro patches is useful. There appears to be no separate loss on these intermediate outputs. Surely, a DC-GAN like architecture with sufficient capacity would be as well able to generate \\\"macro\\\" patches. The paper needs to justify this split into micro patches with a comparison to a direct architecture that generates the macro patches (everything else being the same). Note that applications like \\\"interpolation\\\" of micro patches could be achieved simply by interpolating crops of the macro patch.\\u201d\\n\\n\\n> The output of generator *must* be smaller than the input of the discriminator for COCO-GAN. This is for smoothening the seam between patches after concatenating multiple patches. The discriminator oversees whether the concatenated patches have discontinuities between the concatenated edges. \\n> In the CelebA 128x128 setting, our micro patches are of size 32x32, macro patches are of size 64x64. Even if the generator generates macro patches, it still needs to take care of seams while producing the full image. 
If one decides to make the discriminator taking the full image as input while the output of generator is a 64x64 patch, which can be done via concatenating four patches produced by the generator, then it becomes another special case of COCO-GAN.\\n> We provide an anonymous link below. The seam between patches is smoothed out through time. This suggests the adversarial loss takes cares of the seam between patches.\\n> As described in response to 3., we will update the methodology section to justify this design.\\n>\\n> The anonymous link to the per-epoch generated samples (CelebA 128x128 and LSUN 256x256):\\n> https://www.dropbox.com/sh/ucpthw2mnu3yw3g/AAC0AU5f7f1RfOvB3C5RM1YUa?dl=0\\n\\n\\n3.\\n\\u201cOverall, the paper brings up some interesting ideas, but it doesn't motivate all its design choices, and doesn't make a clear argument about the settings in which the proposed method would provide an actual advantage.\\u201d\\n\\n> We will update the methodology section to discuss the design motivations of each component and the introduction section to make arguments of actual advantages more clear.\"}",
"{\"title\": \"RE: Response to AnonReviewer2\", \"comment\": \"Thank you for the quick reply.\\n2. Can you show some concrete experiments supporting these claims?\\n\\n3. I still think ex 4 is very important. I understand that you'll have many more parameters, but the computation should be the same (or slightly smaller, since you do not have the conditional coordinate). However, only this experiment highlights the need for coordinate conditioning (which according to reply 4 is the essential component of this paper).\"}",
"{\"title\": \"Interesting Ideas, but not validated\", \"review\": \"The paper describes a GAN architecture and training methodology where a generator is trained to generate \\\"micro-\\\" patches, being passed as input a latent vector and patch co-ordinates. Micro-patches generated for different adjacent locations with the same latent vector are combined to generate a \\\"macro\\\" patch. This \\\"macro\\\" output is trained against a discriminator that tries to label this output as real and fake, as well as predict the location of the macro patch and the value of the latent vector. The generator is trained to fool the discriminator's label, and minimize the error in the prediction of location and latent vector information.\\n\\n- The paper proposes a combination of different interesting strategies. However, a major drawback of the method is that it's not clear which of these are critical to the quality of the generated output.\\n\\n- Firstly, it isn't clear to me why the further breakdown of the macro patch into micro patches is useful. There appears to be no separate loss on these intermediate outputs. Surely, a DC-GAN like architecture with sufficient capacity would be as well able to generate \\\"macro\\\" patches. The paper needs to justify this split into micro patches with a comparison to a direct architecture that generates the macro patches (everything else being the same). Note that applications like \\\"interpolation\\\" of micro patches could be achieved simply by interpolating crops of the macro patch.\\n\\n- As a means of simply producing high-resolution images, it appears that \\\"PGGAN\\\" performs better than the proposed method. Therefore, the paper doesn't clearly explain the setting when the division into patches produces a better result. It is worth noting that the idea of applying \\\"local\\\" critics (i.e., discriminators acting on sub-regions) isn't new (e.g., Generative Image Inpainting with Contextual Attention in CVPR 2018). What's new is the proposed method's way of achieving consistency between different regions by providing the 'co-ordinate' of the patch as input (and seeking consistency in the latent vector through a loss)---rather than applying a discriminator at a coarser level on the downsampled image. But given the poorer performance compared to PGGAN, it isn't clear that there is a an advantage to this approach.\\n\\nOverall, the paper brings up some interesting ideas, but it doesn't motivate all its design choices, and doesn't make a clear argument about the settings in which the proposed method would provide an actual advantage.\\n\\n===Post-rebuttal\\n\\nI'm upgrading my score from 5 to 6, because some of the ablation experiments do make the paper stronger. Having said that, I still think this is a borderline paper. \\\"Co-ordinate conditioning\\\" is an interesting approach, but I think the paper still lacks convincing experiments for its main motivating use case: generating outputs at a resolution that won't fit in memory within a single forward pass. (This motivation wasn't clear in the initial version, but is clearer now).\\n\\nThe authors' displayed some high-resolution results during the rebuttal phase, but note that they haven't tuned the hyper-parameter for these (and so the results might not be the best they can be). Moreover, they scale up the sizes of their micro and macro patches so that they're still the same factor below the full image. 
I think a version of this paper whose main experimental focus is on high-resolution data generation, and especially, from much smaller micro-macro patches, would make a more convincing case. \\n\\nSo while the paper is about at the borderline for acceptance, I do think it could be much stronger with a focus on high-resolution image experiments (which is after all, forms its motivation).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer1 [1/3]\", \"comment\": \"We sincerely appreciate the reviewer's effort on providing many useful comments in our paper. All reviewers agree our work is novel, which is our target to introduce the new \\u201cconditional coordinate\\u201d idea to the community. About the writing parts, we will update and reorganize the paper as soon as possible. And here are some responses to the reviewer's concerns and questions:\\n\\n1. \\n\\\"There are many grammar and syntax issues (e.g. the very first sentence of the introduction is not correct (\\u201cHuman perception has only partial access to the surrounding environment due to the limited acuity area of fovea, and therefore human learns to recognize or reconstruct the world by moving their eyesight.\\u201d). The paper goes to 10 pages but does so by adding redundant information (e.g. the intro is highly redundant) while some important details are missing\\\" \\n\\n> Thanks for pointing some of the writing problems, we will upload a revised version in the following few days.\\n\\n2. \\n\\\"The paper does not cite, discuss or compare with the related work \\u201cSynthesizing Images of Humans in Unseen Poses\\u201d, by G. Lalakrishan et al. in CVPR 2018.\\\"\\n\\n> We will accordingly mention this paper as an interesting related domain. However, COCO-GAN is not related to and comparable with the paper. We do not introduce any specific type of human or object prior to the model during training. Our solution is generalized to any type of image and it simply slices the input image into patches. Such prior can generalize to most of existing image types. This is related to 7. below.\\n\\n3. \\n\\\"Page. 3, in the overview the authors mention annotated components: in what sense, and how are these annotated? How are the patches generated? By random cropping? \\\"\\n\\n> We are not entirely sure if we catch the reviewer\\u2019s question correctly. We assume the reviewer is asking about \\u201cwhy we do not specify the patches sampling strategy in Page. 3.\\u201d\\n> We do not specify how the patches are selected in this section since it is related to the characteristics of the dataset (e.g., panorama can be trained in a cylindrical coordinate system, and one can sample patches which cross the left and right edges) and may have many different implementations as long as the patches are sampled nearby (e.g., any NxM patches are acceptable, as long as the computational budget is sufficient). \\n> We define the micro/macro coordinates and patches used in the experiment in ``the first paragraph of experiment section'' and ``the second paragraph of section 3.3''. The former one is a straightforward version validating the generation quality, while the latter one applies COCO-GAN to the cylindrical coordinate system for panoramas.\\n\\n4. \\n\\\"Still in the overview, page 3, the first sentence states that D has an auxiliary head Q, but later it is stated that D has two auxiliary prediction heads. Why is the content prediction head trained separately while the spatial one is trained jointly with the discriminator? Is this based on intuition or the result of experimentations?\\\"\\n\\n> This is because the content prediction head is designed based on info-GAN, where a Q network is trained separately. In the meanwhile, the spatial prediction head is similar to ACGAN, which is trained jointly in its original implementation. We decide to optimize these two losses with their original strategies.\\n\\n5. 
\\n\\\"What is the advantage in practice of using macro-patches for the Discriminator rather than full images obtained by concatenating the micro-patches? Has this comparison been done?\\\"\\n\\n> The discriminator with full image observation will surely lead to better full image generation quality. However, two of our three applications and benefits listed in the introduction require the discriminator to be trained with macro-patches.\\n> For \\u201cPatch-Inspired Image Generation\\u201d, the discriminator needs to learn a mapping from each macro patch to its original latent vector and spatial position.\\n> For \\u201cComputation-Friendly Generation\\u201d, although we only describe the benefits for the generator in the inference stage, however, the discriminator may also need patch-based training if the training process also reaches memory budget limit (e.g., for extremely large image generation or the model has relatively more parameters).\\n> We will follow up with an experiment which makes the discriminator training with the full image in CelebA 64x64 setting. But as a side effect, we will remove $L_{S}$ since the discriminator lacks a macro coordinate system.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/3]\", \"comment\": \"6. \\n\\\"While this is done by concatenation for micro-patches, how is the smoothness between macro-patches imposed?\\\"\\n\\n> The smoothness between patches is taken care by the adversarial loss. The discriminator oversees whether the concatenated patches have discontinuities between the concatenated edges. In the meanwhile, the provided real samples have no such discontinuity. The adversarial loss guides the generator to match the real samples distribution, thus increasingly imposes the smoothness between the patches.\\n> The explanation can be supported by a series of generated samples through time. We attach a series of generated full images through time in the anonymous link below. Especially for the CelebA 128x128 setting (note that LSUN has much more iterations each epoch), we can observe the seam between patches fades away rapidly through time. \\n> The link to the per-epoch generated samples (CelebA 128x128 and LSUN 256x256):\\n> https://www.dropbox.com/sh/ucpthw2mnu3yw3g/AAC0AU5f7f1RfOvB3C5RM1YUa?dl=0\\n\\n7. \\n\\u201cHow would this method generalise to objects with less/no structure?\\u201d\\n\\n> If we understand correctly, by \\u201cobjects with structure\\u201d you mean the CelebA and LSUN are having strong structure priors (e.g. eyes positions of CelebA and bed position of LSUN).\\n> We believe the panoramas generated in Figure 6 & 7 are relatively unstructured data. Any item within the panorama does not have deterministic patterns of its position. COCO-GAN works reasonably nice with the panorama dataset in a cylindrical coordinate system and should work with most of the common image datasets without significant structure priors assumed.\\n\\n8. \\n\\\"In section 3.4, the various statements are not accompanied by justification or citations. In particular, how do existing image pinpointing frameworks all assume the spatial position of remaining parts of the image is known?\\\"\\n\\n> We assume there is a typo and the reviewer is referring to image \\u201cinpainting\\u201d frameworks. \\n> Thanks for pointing this problem out, we will modify our statements and properly cite papers.\\n> In many cases of real-world applications, some categories of damaged images do not preserve their spatial position in its original full-image (e.g., corrupted or cropped-out photos). Considering these cases, common image inpainting frameworks \\\\cite{A,B,C} only consider the remaining parts of the image are already in their optimal positions. In comparison, the discriminator of COCO-GAN is trained by $L_{S}$ to predict the expected placement of the given macro patch.\\n\\n> [A] Image Inpainting for Irregular Holes Using Partial Convolutions\\n> [B] Semantic Image Inpainting with Deep Generative Models\\n> [C] High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis\\n\\n9. \\n\\u201cHow does figure 5 show that model can be misled to learn reasonable but incorrect spatial patterns?\\u201d\\n\\n> Thanks for pointing out that we should reference this statement to \\u201cSection 3.2, paragraph two of Spatial Positions Interpolation\\u201d.\\n> In Figure 5, let (x, y) be the zero-based position of the patch counting from top-to-bottom (x-axis) and left-to-right (y-axis). The patch at (4, 5) in both samples is expected to be a smooth area for a glabella between eyes. But in the generated full image, the generator is misled by the discrete spatial position sampling strategy. 
The generator learns to transform the shape of the eye to switch from one eye to another. This is a reasonable behavior due to the sparse sampling but is an incorrect pattern.\\n\\n10. \\n\\u201cIs there any intuition/justification as to why discrete uniform sampling would work so much better than continuous uniform sampling? Could these results be included?\\u201d\\n\\n> We can include the experimental results in the next paper revision. To ensure the results are not affected by any versioning problem, we will rerun the experiment in CelebA 64x64 setting, which will take couples of days to complete. \\n> Aside from the experimental observation, an empirical explanation is that only discrete spatial positions are used during inference. Directly optimizing on these discrete spatial positions can surely result in better inference time generation quality. However, it would take generalization as the trade-off.\"}",
"{\"title\": \"Response to AnonReviewer1 [3/3]\", \"comment\": \"11. \\n\\u201cHow were the samples in Figure.2 chosen? Given that the appendix. C shows mostly the same image, the reader is led to believe these are carefully curated samples rather than random ones.\\u201d\\n\\n> First, we only save 64 (8x8) randomly generated images for each epoch. Then we pick the epoch which has the lowest validation FID score. Since we believe showing fewer samples (which makes each generated sample larger) can make the generated samples clearer to see on the paper, so we just reuse and show the upper-left 25 samples. \\n> We thank for the reviewer pointing out that our process may lead the reader to doubt that these samples are cherry-picked. We believe by:\\n> 1. Release generated samples across epochs (each epoch with 64 images).\\n> 2. Release the source code after acceptance.\\n> 3. Replace the images in Figure. 2.\\n> 4. The FID score also matches the image quality.\\n> can support our samples are truly random-selected.\\n> \\n> The anonymous link to the per-epoch generated samples (CelebA 128x128 and LSUN 256x256):\\n> https://www.dropbox.com/sh/ucpthw2mnu3yw3g/AAC0AU5f7f1RfOvB3C5RM1YUa?dl=0\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Sincerely thanks for the valuable suggestions from the reviewer. All reviewers agree our work is novel, which is our target to introduce the new \\u201cconditional coordinate\\u201d idea to the community. Here are some responses to the reviewer\\u2019s question:\\n\\n1. \\n\\u201cThe presented idea is clearly new and a deviation from standard GAN architectures. I was surprised to see that this actually produces visually coherent results. I was certain that it would create ugly seams at the boundary. For this reason I like the submission overall.\\u201d\\n\\n> Thanks for being interested in one of our most important observations. We believe this characteristic provides many merits to different tasks, which is our main thread across all analysis, discussion and experiments.\\n\\n2.\\n\\u201cHowever, the submission has two major short-comings. First, throughout the exposition it is never really clear why COCO-GAN is a good idea beyond the fact that it somehow works. I was missing a concrete use case where COCO-GAN performs much better.\\u201d\\n\\n> We will update the introduction section in the following days to ensure the reader can have a fast and clear view of the main contributions and use cases of COCO-GAN.\\n> In general, we first observe COCO-GAN has fewer seams than we expected. This property enables both G and D to learn with partial views (i.e., micro patches and macro patches, respectively). We believe this non-before-seen property has three interesting merits:\\n> 1. Patch-Inspired Image Generation \\n> 2. Partial-Scene Generation\\n> 3. Computation-Friendly Generation.\\n> We further discuss and perform experiments to support these applications and benefits.\\n\\n3.\\n\\u201cSecond, I was missing any sort of ablation experiments. The authors only evaluate the complete system, and never show which components are actually needed. Specifically, I'd have liked to see experiments:\\n * with/without a context model Q\\n * with a standard discriminator (single output or convolutional), but a micro-coordinate generator\\n * with a macro-block discriminator, but a standard generator\\n * without coordinate conditioning, but different Generator parameters for each coordinate\\u201d\\n\\u201cThese experiments would help better understand the strength of COCO-GAN and how it fits in with other GAN models.\\u201d\\n\\n> Thanks for pointing this out. We agree although each component of COCO-GAN is necessary for specific applications in our work, some users may not necessarily need all applications or benefits at the same time.\\n> We will perform the former three ablation studies in the following days in CelebA 64x64 setting, which is relatively fast, and also other datasets afterward. \\n> However, the last one is slightly out-of-topic. This will result in a dramatic increase in the total number of parameters. In our basic setting, we split the full image into 4x4 micro patches, and 12x4 micro patches for panorama dataset. The suggested setting of the last ablation study might not a feasible solution to real-world applications. Furthermore, it is hard to perform a fair comparison between models with different numbers of total parameters and FLOPs. \\n\\n4.\\n\\u201cThe name of the method is not ideal. First, it collides with the COCO dataset. Second, it does not become clear why the proposed GAN uses a \\\"Conditional Coordinate until late in the exposition. 
Third, the main idea could easily stand without the coordinate conditioning (see above).\\u201d\\n\\n> We are aware of this problem. Conditional coordinate is our core idea and component. We believe it is important and should appear in the name of model. We will update the introduction to make the idea clearer.\\n> Lastly, we believe conditional coordinate is essential for our framework. The generator learns to generate micro patches based on their coordinate and take cares of edge smoothness with respect to their potential siblings, which are also defined by the coordinate system. Furthermore, generating panorama in the cylindrical coordinate system is also a nature and straightforward choice, and investigating other coordinate systems for different image types is also an interesting and unexplored research direction.\"}",
"{\"title\": \"interesting idea that surprisingly works\", \"review\": [\"Interesting and novel idea\", \"It works\", \"Insufficient ablation and comparison\", \"Unclear what the advantages of the presented framework are\", \"The presented idea is clearly new and a deviation from standard GAN architectures. I was surprised to see that this actually produces visually coherent results. I was certain that it would create ugly seams at the boundary. For this reason I like the submission overall.\", \"However, the submission has two major short-comings. First, throughout the exposition it is never really clear why COCO-GAN is a good idea beyond the fact that it somehow works. I was missing a concrete use case where COCO-GAN performs much better.\", \"Second, I was missing any sort of ablation experiments. The authors only evaluate the complete system, and never show which components are actually needed. Specifically, I'd have liked to see experiments:\", \"with/without a context model Q\", \"with a standard discriminator (single output or convolutional), but a micro-coordinate generator\", \"with a macro-block discriminator, but a standard generator\", \"without coordinate conditioning, but different Generator parameters for each coordinate\", \"These experiments would help better understand the strength of COCO-GAN and how it fits in with other GAN models.\"], \"minor\": \"The name of the method is not ideal. First, it collides with the COCO dataset. Second, it does not become clear why the proposed GAN uses a \\\"Conditional Coordinate until late in the exposition. Third, the main idea could easily stand without the coordinate conditioning (see above).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but needs more work\", \"review\": \"This paper proposes to constrain the Generator of a WGAN-GP on patches locations to generate small images (\\u201cmicro-patches\\u201d), with an additional smoothness condition so these can be combined into full images. This is done by concatenating micro-patches into macro patches, that are fed to the Discriminator. The discriminator aims at classifying the macro-patches as fake or real, while additionally recovering the latent noise used for generation as well as the spatial prior.\\n\\nThere are many grammar and syntax issues (e.g. the very first sentence of the introduction is not correct (\\u201cHuman perception has only partial access to the surrounding environment due to the limited acuity area of fovea, and therefore human learns to recognize or reconstruct the world by moving their eyesight.\\u201d). The paper goes to 10 pages but does so by adding redundant information (e.g. the intro is highly redundant) while some important details are missing \\n\\nThe paper does not cite, discuss or compare with the related work \\u201cSynthesizing Images of Humans in Unseen Poses\\u201d, by G. Lalakrishan et al. in CVPR 2018. \\n\\nPage. 3, in the overview the authors mention annotated components: in what sense, and how are these annotated?\\nHow are the patches generated? By random cropping? \\n\\nStill in the overview, page 3, the first sentence states that D has an auxiliary head Q, but later it is stated that D has two auxiliary prediction heads. Why is the content prediction head trained separately while the spatial one is trained jointly with the discriminator? Is this based on intuition or the result of experimentations?\\n\\nWhat is the advantage in practice of using macro-patches for the Discriminator rather than full images obtained by concatenating the micro-patches? Has this comparison been done?\\n\\nWhile this is done by concatenation for micro-patches, how is the smoothness between macro-patches imposed?\\n\\nHow would this method generalise to objects with less/no structure?\\n\\nIn section 3.4, the various statements are not accompanied by justification or citations. In particular, how do existing image pinpointing frameworks all assume the spatial position of remaining parts of the image is known?\\n\\nHow does figure 5 show that model can be misled to learn reasonable but incorrect spatial patterns?\\n\\nIs there any intuition/justification as to why discrete uniform sampling would work so much better than continuous uniform sampling? Could these results be included?\\n\\nHow were the samples in Figure.2 chosen? Given that the appendix. C shows mostly the same image, the reader is led to believe these are carefully curated samples rather than random ones.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkVRTj0cYQ | Differentially Private Federated Learning: A Client Level Perspective | [
"Robin C. Geyer",
"Tassilo J. Klein",
"Moin Nabi"
] | Federated learning is a recent advance in privacy protection.
In this context, a trusted curator aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data.
However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model.
We tackle this problem and propose an algorithm for client-sided, differential-privacy-preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance.
Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance. | [
"Machine Learning",
"Federated Learning",
"Privacy",
"Security",
"Differential Privacy"
] | https://openreview.net/pdf?id=SkVRTj0cYQ | https://openreview.net/forum?id=SkVRTj0cYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkxuYr5fx4",
"r1g9TUss3m",
"r1eXjyItn7",
"BygeEICS3X",
"rkgvwEsl2m",
"Bkx4AgGjim",
"BklIfi7voQ",
"rJl-Xe68j7",
"S1x_i3hUsm",
"SkenUlAliQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"official_review"
],
"note_created": [
1544885632305,
1541285569930,
1541132187058,
1540904487857,
1540564062694,
1540198603637,
1539943182200,
1539915801402,
1539914911947,
1539526740246
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper853/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper853/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper853/AnonReviewer2"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper853/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Following the unanimous vote of the reviewers, this paper is not ready for publication at ICLR. The greatest concern was that the novelty beyond past work has not been sufficiently demonstrated.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs significant justification of novelty\"}",
"{\"title\": \"Differentially private variant of the federated learning framework\", \"review\": \"The paper revisits the federated learning framework from McMahan in the context of differential privacy. The general concern with the vanilla federated learning framework is that it is susceptible to differencing attacks. To that end, the paper proposes to make the each of the interaction in the server-side component of the gradient descent to be differentially private w.r.t. the client contributions. This is simply done by adding noise (appropriately scaled) to the gradient updates.\\n\\nMy main concern is that the paper just described differentially private SGD, in the language of federated learning. I could not find any novelty in the approach. Furthermore, just using the vanilla moment's accountant to track privacy depletion in the federated setting is not totally correct. The moment's accountant framework in Abadi et al. uses the \\\"secrecy of the sample\\\" property to boost the privacy guarantee in a particular iteration. However, in the federated setting, the boost via secrecy of the sample does not hold immediately. One requirement of the secrecy of the sample theorem is that the sampled client has to be hidden. However, in the federated setting, even if one does not know what information a client sends to the servery, one can always observe if the client is sending *any* information. For a detailed discussion on this issue see https://arxiv.org/abs/1808.06651 .\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well-motivated problem, but incremental improvement over previous work?\", \"review\": \"[Post-rebuttal update] No author response was provided to address the reviewer comments. In particular, the paper's contributions and novelty compared with previous work seem limited, and no author response was provided to address this concern. I've left my overall score for the paper unchanged.\\n\\n[Summary] The authors propose a protocol for training a model over private user data in a federated setting. In contrast with previous approaches which tried to ensure that a model would not reveal too much about any individual data point, this paper aims to prevent leakage of information about any individual client. (There may be many data points associated with a single client.)\\n\\n[Key Comments] The submission generally seems polished and well-written. However, I have the impression that it's largely an incremental improvement over recent work by McMahan et al. (2018).\\n* If the main improvement of this paper over previous work is the dynamic adaptation of weight updates discussed in Section 3, the experimental results in Table 1 should compare the performance of the protocol with vs. without these changes. Otherwise, I think it would be helpful for the authors to update the submission to clarify their contributions.\\n* Updating Algorithm 1 / Line 9 (computation of the median weight update norm) to avoid leaking sensitive information to the clients would also strengthen the submission.\\n* It would also be helpful if the authors could explicitly list their assumptions about which parties are trusted and which are not (see below).\\n\\n[Details]\\n[Pro 1] The submission is generally well-written and polished. I found the beginning of Section 3 especially helpful, since it breaks down a complex algorithm into simple/understandable parts.\\n\\n[Pro 2] The proposed algorithm tackles the challenging/well-motivated problem of improving federated machine learning with strong theoretical privacy guarantees.\\n\\n[Pro 3] Section 6 has an interesting analysis of how the weight updates produced by clients change over the course of training. This section does a good job of setting up the intuition for the training setup used in the paper, where the number of clients used in each round is gradually increased over the course of training.\\n \\n[Con 1] I had trouble understanding the precise threat model used in the paper, and I think it would be helpful if the authors could update their submission to explicitly list their assumptions in one place. It seems like the server is trusted while the clients are not. However, I was unsure whether the goal was to protect against a single honest-but-curious client or to protect against multiple (possibly colluding) clients.\\n\\n[Con 2] During each round of communication, the protocol computes the median of a set of values, each one originating from a different client, and the output of this computation is used to perform weight updates which are sent back to the clients. The authors note that \\\"we do not use a randomized mechanism for computing the median, which, strictly speaking, is a violation of privacy. However, the information leakage through the median is small (future work will contain such privacy measures).\\\" I appreciate the authors' honesty and thoroughness in pointing out this limitation. 
However, it does make the submission feel like a work in progress rather than a finished paper, and I think that the submission would be a bit stronger if this issue was addressed.\\n\\n[Con 3] Given the experimental results reported in Section 4, it's difficult for me to understand how much of an improvement the authors' proposed dynamic weight updates provide in practice. This concern could be addressed with the inclusion of additional details and baselines:\\n* Few details are provided about the model training setup, and the reported accuracy of the non-differentially private model is quite low (3% reported error rate on MNIST; it's straightforward to get 1% error or below with a modern convolutional neural network). The authors say they use a setup similar to previous work by McMahan et al. (2017), but it seems like that paper uses a model with a much lower error rate (less than 1% based on a cursory inspection), which makes direct comparisons difficult.\\n* The introduction argues that \\\"dynamically adapting the dp-preserving mechanism during decentralized training\\\" is a significant difference from previous work. The claim could be strengthened if the authors extended Table 1 (experimental results for differentially private federated learning) in order to demonstrate the effect of dynamic adaptation on model quality.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"Thank you very much for your question.\\nOn page 4, in the section 'Choosing S' we provide information about the scale. As proposed by Abadi et al. (2016) we chose S to be the median of the second norms of the client contributions. We thereby ensure that the noise does not explode when a client provides very large updates but also do not trim too much of the true updates.\", \"title\": \"Choosing S\"}",
"{\"comment\": \"It is unclear about the scale of the clip bound S. Could you please\\nadd some details about the scale of the S, due to the S\\nis a key factor to the final performance.\", \"title\": \"The scale of the S\"}",
"{\"comment\": \"Thank you for the comment. Indeed this is a mistake. In line 10 of the algorithm, \\\\sigma must be replaced with \\\\sigma_t.\\nThe parameter variance is introduced when defining the between clients variance (it is just an in-between step to make that definition easier).\", \"in_the_discussion_we_explain\": \"\", \"for_a_certain_noise_scale_at_iteration_t\": \"\\\\frac{sigma^2_t}{m_t}, the privacy loss is smaller for both sigma_t and m_t being small. Now if the clients provided very similar updates we would therefore go for small sigma_t and small m_t. But if the clients provided very distinct updates, a communication round with a small m_t would not improve the model even if the overall noise scale didn't change. (remember: In federated learning client-data might be non-IID).\\n\\nWe show that over the course of federated learning (for highly non-IID clients) the similarity of updates decreases (between clients variance increases) and it is therefore advantageous to start with a low m_t and keep increasing it during subsequent iteration rounds. If \\\\sigma_t is held constant for all t that means the noise scale decreases over the course of training. \\n\\nThe precise choices of \\\\sigma_t and \\\\m_t over the course of training highly depend on the federated learning scenario (the privacy budget, the data, the amount of clients and how data is distributed among them). We therefore cannot give a general iterative rule in the algorithm but just provide a tendency to be followed when these parameters are to be chosen for a new setting.\", \"title\": \"Missing _t\"}",
"{\"comment\": \"Thank you very much for your question.\\nWe do point out the similarity to LEARNING DIFFERENTIALLY PRIVATE RECURRENT LANGUAGE MODELS in the introduction. The research was conducted at the same time as ours but published at last year\\u2019s ICLR-conference, whereas we presented ours at a workshop.\\n\\nThe reason why we now decided to aim for a conference-publication is that the two research projects aimed at opposite extreme cases and we want to motivate research in ours. \\nLEARNING DIFFERENTIALLY PRIVATE RECURRENT LANGUAGE MODELS shows that with lots of clients, performance of language models can be maintained high while privacy is ensured. The work is centered around mobile phone users where hundreds of millions of clients are a realistic scenario.\", \"differentially_private_federated_learning\": \"A CLIENT LEVEL PERSPECTIVE aims at the other extreme. We were primarily interested in institutions such as hospitals jointly learning models. In these scenarios, the number of clients (e.g. hospitals) could be as low as a hundred. We want to motivate research of differentially private federated learning in this less commercial area and point out its potential for hospitals, laboratories and universities that have high privacy standards but could greatly benefit from one another (e.g. the authors of [Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation] state that the integration of differential privacy into their research would make it applicable to sensitive data.) In our research we focus on different phases of federated learning and how low numbers of participants influence these phases and the privacy loss during them.\\n\\nTLDR;\", \"we_did_not_want_to_show\": \"'We can include privacy into already existing language models learned from millions of clients without drawbacks'\", \"but_instead\": \"'Hospitals, labs or universities, that do not cooperate in learning models as of today, could greatly benefit from one another without revealing sensitive information.'\", \"title\": \"Opposite extreme cases\"}",
"{\"comment\": \"It remains unclear in the algorithm. Firstly, what's the purpose to introduce the parameter variance V?\\nThen, the author used \\\\sigma_t as the noise scale, but used \\\\sigma in the following updating.\\nPlease give some comment on how to change the noise scale at each iterative step.\", \"title\": \"Please give a more clear desribtion of the algorithm part.\"}",
"{\"comment\": \"This work is similar paradigm to LEARNING DIFFERENTIALLY PRIVATE RECURRENT LANGUAGE MODELS. So, what's the\\ndifference to previous work?\", \"title\": \"What's the contribution of this paper?\"}",
"{\"title\": \"interesting direction but confusing presentation\", \"review\": \"The main claim the authors make is that providing privacy in learning should go beyond just privacy for individual records to providing privacy for data contributors which could be an entire hospital. Adding privacy by design to the machine learning pipe-line is an important topic. Unfortunately, the presentation of this paper makes it hard to follow.\\n\\nSome of the issues in this paper are technical and easy to resolve, such as citation format (see below) or consistency of notation (see below). Another example is that although the method presented here is suitable only for gradient based learning this is not stated clearly. However, other issues are more fundamental:\\n1.\\tThe main motivation for this work is providing privacy to a client which could be a hospital as opposed to providing privacy to a single record \\u2013 why is that an important task? Moreover, there are standard ways to extend differential privacy from a single record to a set of r records (see dwork & Rote, 2014 Theorem 2.2), in what sense the method presented here different than these methods?\\n2.\\tAnother issue with the hospitals motivation is that the results show that when the number of parties is 10,000 the accuracy is close to the baseline. However, there are only 5534 registered hospitals in the US in 2018 according to the American Hospital Association (AHA): https://www.aha.org/statistics/fast-facts-us-hospitals. Therefore, are the sizes used in the experiments reasonable?\\n3.\\tIn the presentation of the methods, it is not clear what is novel and what was already done by Abadi et al., 2016\\n4.\\tThe theoretical analysis of the algorithm is only implied and not stated clearly\\n5.\\tIn reporting the experiment setup key pieces of information are missing which makes the experiment irreproducible. For example, what is the leaning algorithm used? If it is a neural network, what was its layout? What type of cross validation was used to tune parameters?\\n6.\\tIn describing the experiment it says that \\u201cFor K\\\\in\\\\{1000,10000} data points are repeated.\\u201d This could mean that a single client holds the same point multiple times or that multiple clients hold the same data point. Which one of them is correct? What are the implications of that on the results of the experiment?\\n7.\\tSince grid search is used to tune parameters, more information is leaking which is not compensated for by, for example, composition bounds\\n8.\\tThe results of the experiments are not contrasted against prior art, for example the results of Abadi et al., 2016.\\n\\nAdditional comments\\n9.\\tThe introduction is confusing since it uses the term \\u201cfederated learning\\u201d as a privacy technology. However federated learning discusses the scenario where the data is distributed between several parties. It is not necessarily the case that there are also privacy concerns associated, in many cases the need for federated learning is due to performance constraints.\\n10.\\tIn the abstract the term \\u201cdifferential attacks\\u201d is used \\u2013 what does it mean?\\n11.\\t\\u201cAn independent study McMahan et al. (2018), published at the same time\\u201d- since you refer to the work of McMahan et al before your paper was reviewed, it means that the work of McMahan et al came out earlier.\\n12.\\tIn the section \\u201cChoosing $\\\\sigma$ and $m$\\u201d it is stated that the higher \\\\sigma and the lower m, the higher the privacy loss. 
Isn\\u2019t the privacy loss reduced when \\\\sigma is larger? Moreover, since you divide the gradients by m_t then the sensitivity of each party is of the order of S/m and therefore it reduces as m gets larger, hence, the privacy loss is smaller when m is large. \\n13.\\tAt the bottom of page 4 and top of page 5 you introduce variance related terms that are never used in the algorithm or any analysis (they are presented in Figure 3). The variance between clients can be a function of how the data is split between them. If, for example, each client represents a different demography then the variance may be larger from the beginning.\\n14.\\tIn the experiments (Table 1), what does it mean for \\\\delta^\\\\prime to be e-3, e-5 or e-6? Is it 10^{-3}, 10^{-5} and 10^{-6}?\\n15.\\tThe methods presented here apply only for gradient descent learning algorithms, but this is not stated clearly. For example, would the methods presented here apply for learning tree based models?\\n16.\\tThe citations are used incorrectly, for example \\u201csometimes referred to as collaborative Shokri & Shmatikov (2015)\\u201d should be \\u201csometimes referred to as collaborative (Shokri & Shmatikov, 2015)\\u201d. This can be achieved by using \\\\citep in latex. This problem appears in many places in the paper, including, for example, \\u201cwe make use of the moments accountant as proposed by Abadi et al. Abadi et al. (2016).\\u201d Which should be \\u201cwe make use of the moments accountant as proposed by Abadi et al. (2016).\\u201d In which case you should use only \\\\cite and not quote the name in the .tex file.\\n17.\\t\\u201cWe use the same de\\ufb01nition for differential privacy in randomized mechanisms as Abadi et al. (2016):\\u201d \\u2013 the definition of differential privacy is due to Dwork, McSherry, Nissim & Smith, 2006\\n18.\\tNotation is followed loosely which makes it harder to follow at parts. For example, you use \\u201cm_t\\u201d for the number of participants in time t but in some cases, you use only m as in \\u201cChoosing $\\\\sigma$ and $m$\\u201d.\\n19.\\tIn algorithm 1 the function ClientUpdate receives two parameters however the first parameter is never used in this function. \\n20.\\tFigure 2: I think it would be easier to see the results if you use log-log plot\\n21.\\tDiscussion: \\u201cFor K=10000, the differrntially private model almost reaches accuracies of the non-differential private one.\\u201d \\u2013 it is true that the model used in this experiment achieves an accuracy of 0.97 without DP and the reported number for K=10000 is 0.96 which is very close. However, the baseline accuracy of 0.97 is very low for MNIST.\\n22.\\tIn the bibliography you have Brendan McMahan appearing both as Brendan McMahan and H. Brendan McMahan\\n\\n\\nIt is possible that underneath that this work has some hidden jams, however, the presentation makes them hard to find.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
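For point 12 above, the reviewer's sensitivity argument can be written out explicitly; this is the standard Gaussian-mechanism calculation as we read it, not a formula taken from the paper under review:

```latex
% Each client update \Delta w^k is clipped to norm S, and m averaged
% updates are released, so replacing one client changes the output by at most
\Delta f \;=\; \max_{k}\,\Big\|\tfrac{1}{m}\,\Delta w^{k}\Big\|_2 \;\le\; \frac{S}{m},
% i.e. the sensitivity shrinks as m grows, supporting the claim that the
% privacy loss is smaller when m is large (for fixed S and noise scale).
```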
]
} |
|
HyMRaoAqKX | Implicit Autoencoders | [
"Alireza Makhzani"
] | In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning. | [
"Unsupervised Learning",
"Generative Models",
"Variational Inference",
"Generative Adversarial Networks."
] | https://openreview.net/pdf?id=HyMRaoAqKX | https://openreview.net/forum?id=HyMRaoAqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1l3JS65lN",
"Syg3M8NwxV",
"B1gi2qkAkV",
"SylTQ_kR1N",
"rJl-qNJ0yN",
"BJxHUqBopm",
"H1gmOkF5T7",
"HJxMkUiu6m",
"rke6Qaqu6m",
"HJg9eC9Anm",
"Sklgza8C3m",
"BkxEplmjnm"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545422051565,
1545188883950,
1544579763080,
1544579109012,
1544578184558,
1542310476965,
1542258539278,
1542137306484,
1542135076766,
1541479922476,
1541463304185,
1541251259954
],
"note_signatures": [
[
"~Mingzhang_Yin1"
],
[
"ICLR.cc/2019/Conference/Paper852/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/Authors"
],
[
"ICLR.cc/2019/Conference/Paper852/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper852/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper852/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"I appreciate the proposed IAE method which uses implicit distribution for both encoder and decoder!\\n\\nI would also like to point out some related works using implicit distributions in the variational inference/VAE that may serve as proper comparisons.\", \"https\": \"//arxiv.org/abs/1810.02789\", \"title\": \"Related references\"}",
"{\"metareview\": \"The paper proposes an original idea for training a generative model based on an objective inspired by a VAE-like evidence lower bound (ELBO), reformulated as two KL terms, which are then approximately optimized by two GANs. They thus use implicit distributions for both the posterior and the conditional likelihood. The idea is original and intriguing. But reviewers and AC found that the paper currently suffered from the following weaknesses: a) The presentation of the approach is unclear, due primarily to the fact that it doesn't throughout unambiguously enough separate the VAE-like ELBO *inspiration*, from what happens when replacing the two KL terms by GANs, i.e. the actual algorithm used. This is a big conceptual jump that would deserve being discussed and analyzed more carefully and thoroughly. b) Reviewers agreed that the paper does not sufficiently evaluate the approach in comparative experiments with alternatives, in particular its generative capabilities, in addition to the provided evaluations of the learned representation on downstream tasks.\\nReviewers did not reach a clear consensus on this paper, although discussion led two of them to revise their assessment score slightly towards each other's. One reviewer judged the paper currently too confusing (point a) putting more weight on this aspect than the other reviewers. \\nBased on the paper and the review discussion thread, the AC judges that while it is an original, interesting and potentially promising approach, its presentation can and should be much clarified and improved.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea whose presentation could be less confusing\"}",
"{\"title\": \"Follow up\", \"comment\": \"We thank the reviewer again for the feedback. We were wondering if our rebuttal addressed the concerns of the reviewer.\"}",
"{\"title\": \"Response to the updated review\", \"comment\": \"We noticed that the reviewer has reduced the rating without modifying the review. We were wondering if there is any new concern that we can address.\"}",
"{\"title\": \"Response to the updated review\", \"comment\": \"We thank the reviewer for updating the review. We try to further clarify additional points that were brought up in the updated review.\\n==========\\n\\\"While I appreciate that there is a clear history of using GANs to target otherwise intractable objectives. I still feel like those papers are all very explicit about the fact that they are modifying the objective when they do so.\\\"\\n\\nWe have explicitly mentioned the objective of IAE and how it is optimized with GANs in several parts of the paper. We appreciate any suggestions on how we can make this more explicit.\\n==========\\n\\\"I find this paper confusing and at times erroneous. The added appendix on the bits back argument for instance I believe is flawed.\\\"\\n\\nAs we describe below, we believe the arguments of the paper that the reviewer is referring to are correct.\\n==========\\n\\\"False. The sender is not trying to send an unconditional latent code, they are trying to send the code for a given image.\\\"\\n\\nWe believe both the reviewer's and our derivation of the bits-back argument are correct. The reviewer's derivation corresponds to the VAE form of the ELBO, and ours correspond to the IAE form of the ELBO. In the bits-back argument, we are trying to transmit the data-distribution to the receiver. The x and z come from the joint data-distribution q(x,z)=p_data(x)q(z|x)=q(z)q(x|z), and we use a source code designed under the joint model distribution p(x,z)=p(z)p(x|z). We can construct the two-part code in two different ways, depending on how we sample from q(x,z). In the reviewer's derivation (VAE form), we use the equation q(x,z)=p_data(x)q(z|x), and sample from q(x,z) by first sampling x from the data-distribution and then sampling z from the conditional q(z|x). We then encode *conditional* z using p(z) which requires E_q(z|x)[-log p(z)] bits, and after deducting the bits-backs H(z|x), we send E_q(x,z)[-log p(z)] - H(z|x) = E_x KL(q(z|x)||p(z)) bits, as the reviewer points out. After that, we send the remaining bits in the second message. However, there is another way to sample from q(x,z) and construct the two-part code, which corresponds to the IAE form of the ELBO. In this method, we use the equation q(x,z)=q(z)q(x|z), and sample from q(x,z) by first sampling *unconditionally* from the aggregated posterior q(z) and then sampling from the conditional q(x|z). In this case, the first message of the two-part code only encodes the *unconditional* sample z from the marginal q(z) using the source code designed under p(z) which requires KL(q(z)||p(z)) extra bits, and the second message corresponds to the bits required to encode the uncertainty of q(x|z). These two methods of sampling from q(x,z) and constructing the two-part codes are two different interpretations of the same equation, and both are correct. One results in the VAE form of the ELBO, and the other results in the IAE form of the ELBO.\\n==========\\n\\\"The appendix ends with \\\"IAE only minimizes the extra number of bits required for transmitting x, while the VAE minimizes the total number of bits required for the transmission\\\" but the IAE = VAE by Equation (4-6) ... They are equivalent, how can one minimizing something the other doesn't?\\\"\\n\\nActually, VAE = IAE + H_data. The VAE objective corresponds to the total number of bits, meaning that the optimal value for the VAE objective is the entropy of data. 
But the IAE objective corresponds to the extra number of bits, meaning that the optimal value for the IAE objective is zero, which corresponds to the case where the model distribution is equal to the data distribution. Since H_data is fixed, optimizing the IAE objective also optimizes the VAE objective, but the value of these objectives are not equal.\\n==========\\n\\\"In general the paper to me reads at times as VAE=IAE but IAE is better ... Equation 6 is equivalent to a VAE.\\\"\\n\\nThere are many ways to re-write the ELBO. The VAE objective is one form of writing the ELBO, and the IAE objective is another form of writing the ELBO. The advantage of writing the ELBO in form of Equation 6 is that this equation tells us that the ELBO is equal to the summation of two distribution-matching objectives. Now by replacing these distribution-matching objectives with the GAN objectives, we can learn implicit distributions for both the posterior and the conditional likelihood distributions.\\nThe underlying objective of VAEs and IAEs is the same, but just because two methods optimize the same objective, does not mean they learn equally good generative models. For example, the normalizing flow VAE or AVBs optimize the same ELBO that the VAE optimizes, but they can learn more expressive posterior distributions, which results in better distribution-matching between the variational and true posteriors, tighter variational bound and thus better generative models. Similarly, the IAE can learn more expressive posterior and conditional likelihoods, which results in better distribution-matching in the IAE objective, and thus better generative models.\"}",
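For readers following this exchange, the identity VAE = IAE + H_data can be checked by rewriting the ELBO (averaged over the data) with q(x,z) = p_data(x)q(z|x) and r(x,z) = q(z)p(x|z); this matches the decomposition the authors state in their Part 1 rebuttal below:

```latex
\mathbb{E}_{q(x,z)}\big[\log p(z) + \log p(x|z) - \log q(z|x)\big]
  \;=\; -\,\mathrm{KL}\!\big(q(z)\,\|\,p(z)\big)
        \;-\; \mathrm{KL}\!\big(q(x,z)\,\|\,r(x,z)\big)
        \;-\; \mathcal{H}_{\mathrm{data}},
```

so the two objectives differ only by the constant entropy of the data, which is why their optima coincide while their optimal values differ.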
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for the feedback.\\nThe main claim of our paper is that IAEs can learn useful unsupervised representations using their latent code. This is because the latent code of the IAE can only focus on capturing high-level abstractions, while the remaining low-level information is separately captured by the implicit decoder. While we have done many qualitative experiments to support our claim, we believe that the best currently available method to quantitatively evaluate unsupervised representations is to evaluate them on downstream tasks. So we quantitatively evaluated the usefulness of the IAE representations in our clustering and semi-supervised learning experiments on the MNIST and the SVHN datasets, and showed that the IAE can achieve very competitive results (Section 2.1.2 and Section 3.1). However, we believe it is important for the generative modeling research community to find better metrics for evaluating the quality of unsupervised representations in generative models.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for the positive feedback.\\n\\n1,2) We have a detailed discussion about the global vs. local decomposition of information in Appendix C. In the case of having a wide bottleneck and a powerful implicit decoder p(x|z), the ELBO does not prefer one decomposition of information to another. So in this case, as the reviewer points out, in theory, the network can capture all the information solely by the latent code, solely by the implicit decoder, or by a combination of them. However, empirically, it is the *dynamic of training/optimization* and not the objective that determines the decomposition of information. The architecture of the IAE is very similar to that of the standard autoencoder. In fact, if we remove the local latent code, the IAE becomes a deterministic autoencoder, and the network learns the same kind of high-level concepts that an autoencoder would learn. However, in the presence of the local code, as shown by our empirical experiments, the network tries to capture as much information as possible by the latent code, but instead of averaging over the remaining information and generating blurry images, it captures the distribution of the remaining information by the implicit decoder.\\n\\n3) In order to match r(x,z) to q(x,z), we back-propagate the reconstruction discriminator gradient through negative examples. In this case, we are considering q(x,z) as the target distribution that r(x,z) is trained to match to. The process of learning r(x,z) requires learning the encoder, which indirectly changes the underlying target distribution q(x,z) to a new target distribution q(x,z). In other words, r(x,z) is aiming a moving target distribution, which is q(x,z), and once the reconstruction discriminator is confused, we will have r(x,z)=q(x,z). We empirically observed that by only back-propagating through negative examples, we will provide a more stable target distribution for r(x,z) to aim for, which results in a more stable training dynamic and better empirical performance.\"}",
"{\"title\": \"Rebuttal (Part 2)\", \"comment\": \"Reviewer: \\\"Not to mention that in practice a further hack is employed wherein only the negative example passes gradients to the generator.\\\"\\n\\nIn order to match r(x,z) to q(x,z), we back-propagate the reconstruction discriminator gradient through negative examples. In this case, we are considering q(x,z) as the target distribution that r(x,z) is trained to match to. The process of learning r(x,z) requires learning the encoder, which indirectly changes the underlying target distribution q(x,z) to a new target distribution q(x,z). In other words, r(x,z) is aiming a moving target distribution, which is q(x,z), and once the reconstruction discriminator is confused, we will have r(x,z)=q(x,z). We empirically observed that by only back-propagating through negative examples, we will provide a more stable target distribution for r(x,z) to aim for, which results in a more stable training dynamic and better empirical performance.\\n==============\", \"reviewer\": \"\\\"The \\\"Global vs. Local Decomposition of Information in IAEs\\\" section conflates dimensionality with information capacity. While these are likely correlated for real neural networks, at least fundamentally an arbitrary amount of information could be stored in even a 1 dimensional continuous random variable. This is not addressed.\\\"\\n\\nWe updated the paper by adding a section explaining how the Bits-Back argument is also applicable to continuous random variables in neural networks.\"}",
"{\"title\": \"Rebuttal (Part 1)\", \"comment\": \"We thank the reviewer for the feedback.\\n\\nThe reviewer has a principle concern about one of the arguments of our paper, which has resulted in a very strong negative feedback about the whole paper. This argument is that the KL divergence between two distributions p(x) and q(x) can be *approximately* minimized by training a GAN that tries to match these two distributions. More specifically, the theoretical contribution of our paper is to re-derive the ELBO as the summation of two KL divergences and a fixed term: -KL(q(z)||p(z))-KL(q(x,z)||r(x,z))-H_data. These KL divergences are not tractable to optimize, so the empirical contribution of our paper is to show that we can use GANs to approximately minimize each of these KL divergences, and that the resulting *empirical* algorithm can perform useful tasks such as variational inference, clustering or semi-supervised learning.\\n\\nFirstly, we would like to point out that, theoretically, the standard GAN optimizes the JS divergence, which is a symmetric divergence, whose square root is a metric. In the minimization of the JS divergence, we try to get the two distributions as close as possible, which almost always results in the minimization of the KL divergence. There could be some pathological cases where the KL divergence does not decrease, but we empirically show that this approximate optimization works in our experiments. That being said, we never claimed that we are exactly optimizing the ELBO, and in several parts of the paper we have explicitly pointed out that we are only approximately optimizing the ELBO. For example, in the paper, we mention that \\\"This KL divergence is *approximately* minimized with the reconstruction GAN\\\", or that \\\"This is the same regularization cost function used in AAEs, and is *approximately* minimized with the regularization GAN\\\".\\n\\nSecondly, replacing an intractable divergence with another tractable divergence is a common idea used in many generative models such as adversarial autoencoders, the wake-sleep algorithm or ALI/BiGANs. In fact, the adversarial autoencoder (Eq. 5) uses the *exact same approximation* that we use, by replacing the intractable KL(q(z)||p(z)) in the code space with the adversarial cost. The concern of the reviewer can be similarly raised for the adversarial autoencoder, as it does not exactly optimizes the ELBO; nevertheless, the adversarial autoencoder shows that this approximation results in a useful generative model. Another example is the wake-sleep algorithm, in which the wake-phase optimizes the right KL divergence of data and model, but the sleep phase replaces this KL divergence with the reversed KL divergence and optimizes that instead. As the result, the wake-sleep algorithm does not exactly optimizes the ELBO; nevertheless, it is very successful in training sigmoid belief networks. Another example is the ALI/BiGAN methods which use the JS divergence in their formulations, but recently many papers such as [1] have argued that these methods are *approximately* optimizing the ELBO.\\nSimilar to all these works, in implicit autoencoders, by replacing intractable KL divergences with the adversarial training, we are not exactly optimizing the ELBO; nevertheless, we empirically show that the adversarial training can actually match the distributions that the KL divergence aims to match; and this results in a useful and practical algorithm. 
For example, the IAE can successfully learn expressive variational posteriors that can almost perfectly match the true posteriors (Fig. 9) using adversarial training; the IAE can empirically achieve very competitive clustering and semi-supervised learning results (Section 2.1.2); and can perform useful tasks such as high-level vs. low-level decomposition of information (Fig. 2,3,4).\\n\\nThirdly, as the reviewer points out, the objective of the GAN can be modified to optimize any f-divergence including the KL divergence. Thus, in theory, by replacing the KL divergences of the IAE with the f-GAN objectives, we can very closely follow the gradient of the ELBO. Indeed, in the initial phases of this project, we did perform some experiments with the f-GAN objective, but observed that its empirical performance is very similar to the original GAN objective. This result has also been independently reported in many other works including the original f-GAN paper, which reports that \\\"all three divergences [JS, KL and Hellinger] produce equally realistic samples\\\". So given that the f-divergence objective did not bring any empirical benefit in our case, we chose to perform the experiments of this project with the standard GAN objective.\\n\\nFinally, We revised our paper by adding a paragraph to fully discuss this issue and to better clarify both our theoretical and empirical contributions. We hope that this revision of the paper along with the above response address the main concern of the reviewer.\\n\\n[1] Ferenc Husz\\u00e1r. Variational inference using implicit distributions, 2017.\"}",
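For reference, the f-GAN route mentioned in the third point instantiates the variational f-divergence bound; for the forward KL it reads (standard background, with f(u) = u log u and conjugate f*(t) = e^{t-1}):

```latex
\mathrm{KL}(P\,\|\,Q) \;=\; \sup_{T}\;\mathbb{E}_{x\sim P}\big[T(x)\big]
 \;-\; \mathbb{E}_{x\sim Q}\big[e^{T(x)-1}\big],
```

so a critic trained with this objective would follow the ELBO gradient more closely than the standard (JS-flavoured) GAN loss, at the empirical cost/benefit the authors describe.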
"{\"title\": \"Experiments in paper do not implement the objective in paper.\", \"review\": \"This paper introduces the implicit autoencoder, which purports to be a VAE with an implicit encoding and decoding distribution.\\n\\nMy principle problem with the paper and reason for my strong rejection is that there appears to be a complete separation between the discussion and theory of the paper and the actual experiments run. The paper's discussion and theory all centers around rewriting the ordinary ELBO lower bound on the marginal likelihood in equations (4) through (7) where it is shown that this can be recast in the form of two KL divergences, one between the representational joint q(x,z) = p_data(x) encoder(z|x) and the 'reconstruction joint' r(x,z) = encoder_marginal(z) decoder(x|z), and one between the encoding marginal q(z) and the generative prior p(z). The entire text of the paper then discusses the similarities between this formulation of the objective and some of the alternatives as well as discussing how this objective might behave in various limits.\\n\\nHowever, this is not the objective that is actually trained. In the \\\"Training Process\\\" section is it revealed that an ordinary GAN discriminator is trained. The ordinary GAN objective does not minimize a KL divergence, it is a minimax formulation of a Jensen Shannon divergence as the original GAN paper notes. More specifically, you can optimize a KL divergence with a GAN, as shown in the f-GAN paper (1606.00709) but this requires attention be paid to the functional form of the loss and structure of the discriminator. No such care was taken in this case. As such the training process does not minimize the objective derived or discussed. Not to mention that in practice a further hack is employed wherein only the negative example passes gradients to the generator. \\n\\nWhile is not specified in the training process section, assuming the ordinary GAN objective (Equation 1) is used, according to their own reference (AVB) the optimal decoder should be: D = 1/(1 + r(z,x)/q(z,x)) for which we have that what they deem the 'generative loss of the reconstruction GAN' is T = log(1 + r(z,x)/q(z,x)) . When we take unbiased gradients of the expectation of this quantity, we do not obtain an unbiased gradient of the KL divergence between q(z,x) and r(z,x).\\n\\nThroughout the paper, factorized Gaussian distributions are equated with tractable variational approximations. While it is common to use a mean field gaussian distribution for the decoder in VAEs this is by no means required. Many papers have investigated the use of more powerful autoregressive or flow based decoders, as this paper itself cites (van der Oord et al. 2016). The text further misrepresents the current literature when it claims that the IAE uniquely \\\"generalizes the idea of deterministic reconstruction to stochastic reconstruction by learning a decoder distribution that learns to match to the inverse encoder distribution\\\". All VAEs have employ stochastic reconstruction, if the authors again here meant to distinguish a powerful implicit decoder from a mean field gaussian one, the choice of language here is wrong.\\n\\nGiven that there are three joint distributions in equation (the generative model, the representational joint and the reconstruction joint), the use of Conditional entropy H(x|z) and mutual information I(x, z) are ambiguous. 
While the particular joint distribution is implied by context in the equations, please spell it out for the reader.\\n\\nThe \\\"Global vs. Local Decomposition of Information in IAEs\\\" section conflates dimensionality with information capacity. While these are likely correlated for real neural networks, at least fundamentally an arbitrary amount of information could be stored in even a 1 dimensional continuous random variable. This is not addressed. \\n\\nThe actual experiments look nice, its just that objective used to train the resulting networks is not the one presented in the paper. \\n\\n------\\n\\nIn light of the author's response I am changing my review from 2 to 3. I still feel as though the paper should be rejected. While I appreciate that there is a clear history of using GANs to target otherwise intractable objectives, I still feel like those papers are all very explicit about the fact that they are modifying the objective when they do so. I find this paper confusing and at times erroneous. The added appendix on the bits back argument for instance I believe is flawed.\\n\\n\\\"It first transmits z, which ideally would only require H(z) bits; however, since the code is designed\\nunder p(z), the sender has to pay the penalty of KL(q(z)kp(z)) extra bits\\\"\\n\\nFalse. The sender is not trying to send an unconditional latent code, they are trying to send the code for a given image, z \\\\sim q(z|x). Under usual communication schemes this would be sent via an entropic code designed for the shared prior at the cost of the cross entropy \\\\int q(z|x) \\\\log p(z) and the excess bits would be KL(q(z|x) | p(z)), not Kl(q(z)|p(z)). \\n\\nThe appendix ends with \\\"IAE only minimizes the extra number of bits required for transmitting x, while the VAE minimizes the total number of bits required for the transmission\\\" but the IAE = VAE by Equation (4-6). They are equivalent, how can one minimizing something the other doesn't? In general the paper to me reads at times as VAE=IAE but IAE is better. While it very well might be true that the objective trained in the paper (a joint GAN objective attempting to minimize the Jensen Shannon divergence between both (1) the joint data density q(z,x) and the aggregated reconstruction density r(z,x) and (2) the aggregated posterior q(z) and the prior p(z)) is better than a VAE (as the experiments themselves suggest), the rhetoric of the paper suggests that the IAE referred to throughout is Equation (6). Equation 6 is equivalent to a VAE.\\n\\nI think the paper would greatly benefit from a rewriting of the central story. The paper has a good idea in it, I just feel as though it is not well presented in its current form and worry that if accepted in this form might cause more confusion than clarity. Combined with what I view as some technical flaws especially in the appendices I still must vote for a rejection.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting models (with a potentially indifferent encoder?)\", \"review\": \"The paper presents two generative autoencoding models, that optimize a variational objective by adversarial training of implicit distributions. Applications in generative modeling, clustering, semi-supervised learning and disentangling \\u201cglobal\\u201d vs \\u201clocal\\u201d variations in data are presented.\\n\\nIn particular, the first model, called implicit autoencoder, maximizes the ELBO using two adversarial objectives: an adversarial regularizer penalizes the deviation of the aggregated posterior from the prior and the adversarial reconstruction penalizes the disagreement between the joint distribution and the joint reconstruction. Since both implicit distributions use a noise source, they can both explain variations in the data. The paper argues (and presents experimental results to suggest) that the global vs local information is the separation in these two components. The second architecture, called _flipped_ implicit autoencoder replaces the role of code and input in the first architecture, changing the training objective to reverse KL-divergence. The relationship between the proposed models to several prior works including ALI, BiGAN, AVB, adversarial autoencoders, wake-sleep, infoGAN is discussed. \\n\\nThe paper is nicely written, and the theory part seems to be sound. The strength of this paper is that it ties together a variety of different autoencoding generative architectures. In particular, I found it interesting that AVB and InfoGAN become special cases of the regular and flipped model, where an implicit term in the loss is replaced by an explicit likelihood term. \\n\\nI have some issues/questions (did I miss something obvious?):\\n\\n1) A mode of failure: suppose the encoder simply produces a garbled code that does not reflect that data manifold, yet it has the right aggregated posterior. In this case, the implicit generator and discriminator should ignore the code. However, the generator can still match the joint reconstruction and to the correct joint distribution. The loss can go towards its minimum. Note that this is not a problem in AVB.\\n\\n2) Global vs local info: a closely related issue to previous failure mode is that encoder has no incentive to produce informative codes. While the paper argues that the size of the code can decide the decomposition of local vs. global information, even for a wide bottleneck in the autoencoder, the code can have no (or very little) information.\\n\\n3) Could you please elaborate on the footnotes of page 3?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Increasing the expressiveness of decoder by an implicit decoder looks interesting, and it enables the decompositions of high-level abstract information from low one.\", \"review\": \"The paper proposed an implicit auto-encoder, featuring both the encoder and decoder constituted by implicit distributions. Adversary training is used train the models, similar to the technique used in the AVB model. The main difference with AVB is the use of an implicit decoder, which endows the model with the ability to disentangle the data into high-level abstract representation and local representation. Although sharing some similarities, the extension of using implicit decoder is interesting, and leading to some interesting results.\\n\\nMy main concern on this paper is the lack of any quantitive results to compare with other similar models. We only see the model can do some task, but cannot assess how well it did comparing to other models.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJfRpoA9YX | Adversarial Information Factorization | [
"Antonia Creswell",
"Yumnah Mohamied",
"Biswa Sengupta",
"Anil Bharath"
] | We propose a novel generative model architecture designed to learn representations for images that factor out a single attribute from the rest of the representation. A single object may have many attributes which when altered do not change the identity of the object itself. Consider the human face; the identity of a particular person is independent of whether or not they happen to be wearing glasses. The attribute of wearing glasses can be changed without changing the identity of the person. However, the ability to manipulate and alter image attributes without altering the object identity is not a trivial task. Here, we are interested in learning a representation of the image that separates the identity of an object (such as a human face) from an attribute (such as 'wearing glasses'). We demonstrate the success of our factorization approach by using the learned representation to synthesize the same face with and without a chosen attribute. We refer to this specific synthesis process as image attribute manipulation. We further demonstrate that our model achieves competitive scores, with state of the art, on a facial attribute classification task. | [
"disentangled representations",
"factored representations",
"generative adversarial networks",
"variational auto encoders",
"generative models"
] | https://openreview.net/pdf?id=BJfRpoA9YX | https://openreview.net/forum?id=BJfRpoA9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJgvHPZXxE",
"SJxREjWhk4",
"H1eYVztBJN",
"rketL2QERm",
"Bygh8K2Ypm",
"rygdXunFTQ",
"rJlTePhKT7",
"ryex243tpQ",
"HJeg7Nhtpm",
"SJgjWSOOpQ",
"BkemBfdO6X",
"B1lH7aDuTX",
"SkxudqvuTQ",
"BkePHaX_TX",
"rkes_n7OT7",
"S1e2knXOam",
"SkeKbjXdpm",
"S1llhcmdTQ",
"Skg76lIDpQ",
"SyxqTW-va7",
"rklJqbWPp7",
"Skg-zbZwTX",
"rklwCe-Pa7",
"SyeFIpxXam",
"ryx4STlmaQ",
"B1eFm6xmpQ",
"Hkg9k6gmTX",
"SkgHp2l76m",
"H1gUKn2d2Q",
"Hyecgv9dhQ",
"Syglr36U2Q",
"BJguthVAc7",
"ryxDOqG0qQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544914750833,
1544457013755,
1544028721194,
1542892625290,
1542207827879,
1542207520419,
1542207221132,
1542206632449,
1542206487762,
1542124803478,
1542124090581,
1542122780534,
1542122096098,
1542106430982,
1542106227308,
1542106084140,
1542105857412,
1542105767633,
1542049978534,
1542029762208,
1542029702991,
1542029576859,
1542029519072,
1541766481092,
1541766459578,
1541766432681,
1541766369818,
1541766333268,
1541094526142,
1541084914283,
1540967480189,
1539357823784,
1539349103277
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper851/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper851/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper851/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a supervised adversarial method for disentangling the latent space of a VAE into two groups: latents z which are independent of the given attribute y, and \\\\hat{y} which contains information about y. Since the encoder also predicts \\\\hat{y} it can be used for classification and the paper shows competitive results on this task, apart from the attribute manipulation task. Reviewers had raised points about model complexity and connections to prior works which the authors have addressed and the paper is on the borderline based on the scores.\\n\\nThough none of the reviewers explicitly pointed out the similarity of the paper with Fader networks (Lample et al., 2017), the adversarial setup for getting attribute invariant 'z' is exactly same as in Fader networks, as also pointed out in an anonymous comment. The only difference is that encoder in the current paper also predicts the attribute itself (\\\\hat{y}), which is not the case in Fader n/w, and hence the encoder can be used as a classifier as well (authors have also mentioned and discussed this difference in their response). However, the core idea of the paper as outlined in the title of the paper, ie, using adversarial loss for information factorization, is very similar to this earlier work, which diminishes the originality of the work. \\n\\nWith the borderline review scores, the paper can go in either of the half-spaces (accept/reject) but I am hesitant to recommend an \\\"accept\\\" due to limited originality of the approach. However, if there is space in the program, the paper can be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good results on classification and attribute manipulation but has considerable overlap with Fader networks\"}",
"{\"title\": \"feedback to authors response\", \"comment\": \"Thanks for the comment.\\n\\nI believe this would be one way of linking this to mutual information , although it is not convincing since you have to make an assumption on the real P_y that is very limiting(equal probability) .\", \"another_more_direct_way_is_to_use_the_golden_formula_to_bound_mutual_information\": \"\", \"http\": \"//people.lids.mit.edu/yp/homepage/data/itlectures_v5.pdf\\npage 36, Theorem 3.3 and Corollary 3.1 especially the note under this corrolary: \\n\\nBasically use that I(X,Y)= min_{Q} D_{KL}(P_{Y|X}|| Q | P_{X}),\\nEffectively you are using I think instead of KL the Jensen Shanon divergence and you are using its variational form. and for Q you are parametrizing it with a neural network.\"}",
"{\"title\": \"clarification regarding MINE\", \"comment\": \"I thank the authors for their revision and for addressing my concerns.\\n\\nI maybe did not express myself well , sorry for the confusion. Yes of course estimating mutual information in MINE does not use a min/max game, one estimate the mutual information with the variational form and then if the goal is to minimize this mutual information to learn an auxiliary network, one would get the min/max game. \\n\\nWhat I meant regarding mine is that your approach is to work on making the distance between \\n P(tilde{y}= A_{psi}(z)|z=E_phi(x)) and P(y) the smallest possible. your approach is to say we optimize on the A_{psi} to predict correctly and on E_{phi} to make errors. The only thing is that quantity is intuitive but it is not immediately linked explicitly to a mutual information estimation. If you have any formal insight how your approach would link to that that would be great addition to the paper.\", \"another_approach_would_have_been_using_mine\": \"min_{phi} max_{T} E_{(x,y) joint }T(E_{phi}(x), y) - log (E_{x,y random } T(E_{phi}(x), y) )\\n\\nthis would be l_aux that one would add.\\n\\nI am not asking to baseline this now in the current paper - I acknowledge the novelty of your proposal - but this would have been much more linked to information based factorization as the title suggests.\"}",
"{\"title\": \"Improvements to related work (MINE)\", \"comment\": \"We contacted one of the authors of Belghazi et al., who confirmed that they did * not * use a min-max objective to estimate the mutual information. Belghazi et al. approximated mutual information, T, by `maximizing a dual lower bound'. There is a min-max objective if and only if an additional network is incorporated to minimize the mutual information.\\n\\nWe are minimizing mutual information without approximating it directly. Our approach is closer to Predictability Minimization (Schmidhuber et al.), where our auxiliary network approximates p(y|z) and the encoder minimizes E_y p(y|z). Note that our auxiliary network takes only z as input, if we were trying to predict mutual information, I(Y;Z), using MINE, the model, T, would take batches of y \\\\in Y and z \\\\in Z as input. In our work we do not approximate mutual information.\\n\\nHowever, we do agree that it would be suitable to include a reference to Belghazi et al.'s work and have added the following to the related work section:\\n\\n\\\"\\\\cite{belghazi2018MINEMI} proposed a general approach for predicting the mutual information, which may then be minimized via an additional model. Rather than predicting mutual information \\\\citep{belghazi2018MINEMI} between latent representations and labels, we implicitly minimize it via adversarial information factorization.\\\"\"}",
"{\"title\": \"Thank you for addressing my concerns, I have increased my rating.\", \"comment\": \"Hi authors,\\n\\nThank you for addressing my concerns in the text of the paper.\\n\\nI do feel that the paper is overall much improved, particularly the model explanation with one fewer moving component. I have increased my score to be in line with the other reviewers.\"}",
"{\"title\": \"Our results are sufficient to confirm that our model achieves good factorization -- the objective of our paper.\", \"comment\": \"[Reviewer]\\nI agree that you present results on both attribute classification and attribute editing. My concern is whether it is clear that you perform either task significantly better than state of the art methods in either task.\\n\\n[Authors]\\n\\nOur work presents a method for learning representations that factor the attribute information from the rest of the latent representation. To evaluate our factorisation method we perform facial attribute editing and classification. Our results are (more than) sufficient to confirm that our model achieves good factorization -- it is not necessary for our model to achieve state of the art results to confirm this.\\n\\nWe do indeed claim that our results are competitive with a state of the art classification, but we do not claim to present state of art classification results. Please note that if we were aiming to achieve state of the art classification we would, for example, have used much deeper networks, Zhuang et al. 2018 use 13 layers while our encoder (which we use for classification) has only seven.\"}",
"{\"title\": \"We have included comparison to IcGAN\", \"comment\": \"[Reviewer]\\n\\nYou argue that you use smaller images \\\"(a) to make our ablation study more computationally feasible and (b) to make our results more reproducible by those with modest resources.\\\" I definitely agree it is useful to include the smaller results for this purpose. However, my concern is that a lack of any results on larger face images makes it difficult to compare your approach with methods that succeed at editing larger images.\\n\\n[Authors]\\n\\nFor work that focuses only on attribute editing, it may make sense to consider higher resolution images, however we focus on representation learning, where it is common to used images at resolution 64x64 (or less) [higgins2016beta, bao2017cvae, li2017alice, larsen2016autoencoding, burgess2018understanding, kumar2018variational]. Of the three papers you propose (Upchurch et al., Lample et al., Perarnau et al.) none of them are motivated by representation learning and only Perarnau et al. proposes an encoder that outputs a representation that is sufficient to describe an input image.\\n\\nAs per your suggestion, we have included a comparison with IcGAN (Perarnau et al.) in Tables 1 and 3, taking values from Lample et al. We compare to IcGAN since they also train on images that are 64x64. Our model without residual layers obtains that same reconstruction error as IcGAN, 0.028, while our model with residual layers achieves a much lower reconstruction error, 0.011. Lample et al. also suggests that the IcGAN only successfully edits attributes (Smiling --> Not Smiling) 9.9% of the time, while our model (with residual layers) successfully edits them at least 98% of the time. Our model without residual layers edits successfully edits them 81% of the time.\\n\\n[Please note that there is a typo in the Lample et al. paper, the authors write RMSE rather than MSE. We have contacted the authors and they confirm that this was a typo and they were in fact reporting MSE.]\\n\\nThank you for helping us to improve our paper with additional comparisons to related work.\"}",
"{\"title\": \"Thank your helping us to simplify our model.\", \"comment\": \"We have now revised our paper and we do not include \\\\hat{L}_{class} in our model description. As per your helpful suggestion, we now have add a single experiment in our ablation study to demonstrate that \\\\hat{L}_{class} was not helpful.\\n\\nThank you for this suggestion and for helping us to improve our paper.\"}",
"{\"title\": \"We have removed \\\\hat{L}_{class} from our model description -- simplifying our model.\", \"comment\": \"We have now revised our paper and we do not include \\\\hat{L}_{class} in our model description. As per your helpful suggestion (below), we have added a single experiment in our ablation study to demonstrate that \\\\hat{L}_{class} was not helpful.\"}",
"{\"title\": \"More responses\", \"comment\": \"> The quotation \\\"better for 6 out of 10 attributes\\\" is no where to be found in our paper. The paper read \\\"outperformed for 6 out of 10\\\".\\n\\nOkay.\\n\\n> However, we understand the reviewer's concerns and agree that the values are close...\\n\\nThanks. I do feel that the updated text more honestly and scientifically represents the results of Figure 2.\\n\\n> The focus of our work has been to learn a representation that factors attribute information from the rest of the representation. We test this factorization process in two ways: (1) attribute editing and (2) attribute classification.\\n\\nI agree that you present results on both attribute classification and attribute editing. My concern is whether it is clear that you perform either task significantly better than state of the art methods in either task. I focused on attribute editing papers simply because this is a case where I think there are very strong baselines that work on megapixel images. \\n\\nYou argue that you use smaller images \\\"(a) to make our ablation study more computationally feasible and (b) to make our results more reproducible by those with modest resources.\\\" I definitely agree it is useful to include the smaller results for this purpose. However, my concern is that a lack of any results on larger face images makes it difficult to compare your approach with methods that succeed at editing larger images.\"}",
"{\"title\": \"More responses\", \"comment\": \"> Since our model is no more complex than that of Bao et al., an accepted paper, we assert that this is not grounds for rejection.\\n> The explanation for using \\\\hat{L}_{class} ...\\n\\nI covered \\\\hat{L}_{class} in a comment above, but since you are addressing it again, I am happy to as well. \\n\\nFirst, I disagree with your assertion. In my opinion, presenting a model with moving parts that you claim *in the paper* serve no purpose adds needless complexity. \\n\\nIn my opinion, components of your model that do nothing should not be included in the full description of your model. It strictly adds unnecessary complexity, and the section should be a description of the full model you are proposing, not every component that was tried along the way. \\n\\nThe ablation study is great, but it would suffice to mention in a single experimental result that following other literature you tried adding such a term, but as Table 1 demonstrates it didn't help and therefore was excluded from the model. \\n\\nThe fundamental problem with including components like this just because other papers did is that useless components of models propagate this way. If you feel your paper is the one to finally discover that it is useless, then great! Your paper should be the first to not include it as part of the model.\"}",
"{\"title\": \"Added results for our model trained without L_gan or L_KL\", \"comment\": \"[Reviewer]\\n1) While these components(e.g., L_gan and L_KL) are verified in Bao et al., it is still possible that they are not necessary in this model since you add some new staff. Since you have the experiments which demonstrates the importance of adding these staffs, can you add the results to clarify this or at least add several sentences to mention this?\\n\\n[Authors]\\nThank you for the suggestion. We have updated Table 3 to include results for our model trained without L_gan and without L_KL and added additional text to the appendix. We have also added Figure 6 which demonstrates the blurred images obtained if the GAN loss is not used.\\n\\n\\n[Reviewer]\\n2) In the original GAN, if the equilibrium of min-max objective is achieved, we will have p_data = p_model. Is there anything similar in your model? What will we obtain if the equilibrium of your min-max objective is achieved? This part seems to be not very clear and make your method to be not that \\\"principle\\\".\\n\\n[Authors]\\n\\nThe objective function as we have written it above, E_p_real(x) log C_\\\\chi(x) + E_p_fake(x) log (1 - C_\\\\chi(x)), is in the exact same form as the original objective, simply with different notation. In our case, we refer to p_data and p_model as p_real and p_fake respectively. C_\\\\chi is the discriminator. Therefore, if the generator (in our case the decoder) and the discriminator are optimal, it follows that p_real = p_fake. Recall (from above) p_fake is the distribution of reconstructed and synthesised images (p_model) and p_real are samples from the (training) data (p_data). Therefore, when optimal the reconstructed and synthesised samples appear to come from the same distribution as the training data.\"}",
"{\"title\": \"My point is that \\\\hat{L}_{class} should not be in the paper at all.\", \"comment\": \"With regards to the purpose of \\\\hat{L}_{class}, the quote you provide is clearly not sufficient to adequately explain its purpose. Evidence for this is the fact that, as you later demonstrate with an ablation study, the loss has no purpose.\\n\\nIn my opinion, when explaining your model, you should not include terms that \\\"did not play an important role in [your] model\\\" and \\\"[do] not provide any clear benefit.\\\", *regardless of whether those terms have been used in prior art.* If you feel mentioning this term is sufficient, a simple explanation of why it *wasn't* included would suffice.\"}",
"{\"title\": \"\\\"There is no additional classifier in our model. The encoder itself acts as a classifier\\\"\", \"comment\": \"[Authors]\\nWe would like to sincerely thank the reviewer for reading our paper and for providing constructive feedback. The reviewer appears to have understood all the components of our model well, with the exception of the \\\\hat{L}_{class} loss, which has been the focus of majority of the comments. We have addressed all of the reviewer's comments below, as well as improved our paper where necessary (please see the updated version).\\n\\n[Reviewer]\\nAn additional classifier is trained using a classification loss \\\\hat{L}_{class} on the encoded reconstructed image, the use of which I don't understand.\\n\\n[Authors]\\nThere is no additional classifier in our model. The encoder itself acts as a classifier, predicting both a latent vector \\\\hat{z}, and a label \\\\hat{y}. The primary loss for ensuring that the encoder is an effective classifier is L_{class}, not \\\\hat{L}_{class} as claimed by the reviewer. \\n\\nWith regards to \\\\hat{L}_{class}, the following quote from our paper (Section 3.1) explains its purpose:\\n\\u201cThe classification loss, \\\\hat{L}_{class}, provides a gradient containing label information to the decoder, which otherwise the decoder would not have \\\\citep{chen2016infogan}.\\u201d\\n\\nWe found that this term did not play an important role in our model. This is stated in our paper as follows (Section 4.2):\\n\\u201c\\\\hat{L}_{class} does not provide any clear benefit. We explored the effect of including this term since a similar approach had been proposed in the GAN literature \\\\cite{chen2016infogan,odena2016conditional} for conditional image synthesis.\\u201d\"}",
"{\"title\": \"Explaining the `classification model'.\", \"comment\": \"[Reviewer]\\nI think additional work on section 2.5 through section 3 would be helpful to improve clarity. As one example, \\\"y\\\" is unnecessarily overloaded: y denotes a specific attribute, \\\\hat{y} denotes a latent vector that is intended to not be class agnostic, \\\\tilde{y} denotes the prediction of an auxiliary network on an intended class-agnostic latent vector \\\\hat{z} of the presence of the original attribute y, and \\\\hat{\\\\hat{y}} denotes the non agnostic latent vector achieved by passing the decoded image back through the encoder.\\n\\n[Authors]\\nWe believe our notation is consistent, correct and necessary, and this reviewer (as well as the others) have understood our model. We provide a clear diagram as well as an algorithm and descriptions in the text.\\n\\n[Reviewer]\\nThis notational complexity is compounded by the fact that a number of steps in the method are not well motivated in the text, and left to the reader to understand their purpose. \\n\\n[Authors]\\nOur core contribution is the introduction of an auxiliary network for factorizing attribute information from the rest of the latent code. This is communicated clearly in Section 3.2 and the reviewer appears to understand the * motivation * for the auxiliary network well:\", \"reviewers_own_words\": \"\\\"An auxiliary network A attempts to classify the face attribute y from the class agnostic features \\\\hat{z}, with the idea being that the encoder should try to produce \\\\hat{z} vectors from which the class cannot be predicted.\\\"\\n\\nWe incorporate our network into a pre-existing model, the VAE-GAN which is also communicated clearly in Section 2. In Section 2 we introduce VAEs and GANs, and explain the benefits of combining the two in Section 2.3 'Best Of Both GAN And VAE'. Finally, we also explain conditional GANs. The VAE, GAN and VAE-GAN are previous related works, motivated by their respective authors.\\n\\n\\n[Reviewer]\\nFor example, the authors state that \\\"we incorporate a classification model into the encoder so that our model may easily be used to perform classification tasks.\\\" What does this mean? In the diagram (Figure 1), where is this classification model?\\n\\n[Authors]\\nWe would like the thank the reviewer for raising this concern. To avoid confusion, we have amended the paper in Section 3.1 to read as follows:\\n\\\"Additionally, the encoder also acts as a classifier, outputting an attribute vector, \\\\hat{y}, along side a latent vector, \\\\hat{z}.\\\"\\n\\nThere is no separate classification network. The encoder, which takes an image, x, as input, is split in two, outputting a latent vector \\\\hat{z} and an attribute vector \\\\hat{y}. The model is trained such that the attribute vector may be used to classify an input image, x, therefore the encoder is also a classifier.\\n\\nThis is described in Section 2.4 (quote from our paper):\\n\\u201cthe encoder outputs both a latent vector, \\\\hat{z}, and an attribute vector, \\\\hat{y}\\u201d\\n\\nOur point here, is that rather than adding a separate classifier that takes reconstructed images as input, as is done in Bao et al. (illustrated in our paper in Figure 1(b)), we incorporate the classifier into the encoder, E_{y, \\\\phi}, (illustrated in Figure 1(a)) which allows our model to be used both for image attribute editing and classification. This also avoids the need to train an entirely separate classifier network. 
We are essentially killing two birds with one stone in this proposed approach.\\n\\n[Reviewer]\\nWhy in the GAN loss is there a term that compares the fake loss with the result of classifying a decoded z vector?\\n\\n[Authors]\\nThere are three terms in the GAN loss. In the first, the discriminator, C_\\\\chi, is applied to real samples. In the second and third term the discriminator is applied to fake images. There are two sources of fake images; (1) the reconstructed images, D_\\\\theta(E_\\\\phi(x)), and (2) synthetic images, D_\\\\theta(E_{\\\\phi,z}(x), y). The GAN loss is similar to the one used by Bao et al. (see Line 9 of Algorithm 1 of Bao et al.). \\n\\n[Reviewer]\\nIs this z \\\\hat{z}, or a latent vector drawn from a distribution p(z)?\\n\\n[Authors]\\nBy definition z is drawn from the prior, p(z), this is described in Section 2.3 when describing VAE-GAN; \\u201clatent variable z, which is drawn from a specified random distribution, p(z)\\u201d. This is also illustrated in line 5 of Algorithm 1. We have made this clearer by adding the following after defining L_{gan}: \\u201cand z \\\\sim p(z)\\u201d.\\n\\n[Reviewer]\\nIf it is the former, how does this term differ from the second term in the GAN loss. If it is the latter, then shouldn't it be concatenated with some y in order to be used as input to the decoder D_{\\\\theta}?\\n\\n[Authors]\\nWe would like to thank the reviewer for catching this typo. Indeed, there is supposed to be a y input to the decoder, we have updated the paper to reflect this, the third term now reads: \\u201cL_{bce}(y_{fake}, C_\\\\chi(D_\\\\theta(z, y)))]\\u201d.\"}",
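A minimal sketch of the adversarial factorization step this exchange describes, for a single binary attribute. The module names enc and aux, and the use of a uniform 0.5 target for the encoder's adversarial update, are illustrative assumptions; the paper's exact L_aux may be formulated differently:

    import torch
    import torch.nn.functional as F

    def factorization_step(enc, aux, opt_enc, opt_aux, x, y):
        """One alternating update: A_psi learns to predict y from z_hat,
        while E_phi learns to make that prediction uninformative.
        y is a float tensor of 0/1 attribute labels, shape (batch, 1)."""
        # 1) Update the auxiliary network A_psi to recover y from z_hat.
        z_hat, _y_hat = enc(x)          # encoder outputs (z_hat, y_hat)
        aux_loss = F.binary_cross_entropy(aux(z_hat.detach()), y)
        opt_aux.zero_grad(); aux_loss.backward(); opt_aux.step()

        # 2) Update the encoder E_phi so that A_psi cannot recover y:
        #    push A_psi(z_hat) towards the uninformative prediction 0.5.
        z_hat, _y_hat = enc(x)
        enc_loss = F.binary_cross_entropy(aux(z_hat), torch.full_like(y, 0.5))
        opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
        return aux_loss.item(), enc_loss.item()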
"{\"title\": \"The encoder acts as a classifier, outputting an attribute vector, hat{y}, as well as a latent vector, z.\", \"comment\": \"[Reviewer]\\nWhy is it important to extract \\\\hat{\\\\hat{y}} from \\\\hat{x}? In the paper you state that the loss \\\"provides a gradient containing label information to the decoder,\\\" but why can't we use the known label y of the original input x to ensure that the encoder and decoder preserve this information if it is used as \\\\hat{y}?\\n\\n[Authors]\\nL_{class} ensures that \\\\hat{y} contains label information, but this loss is not dependant on the parameters of the decoder and, therefore, cannot be used to update the decoder. Note that there is a precendence for computing \\\\hat{\\\\hat{y}}: the Bao et al. model also use it to provide label information to the decoder. We could have placed an additional classifier at the output of the decoder, as is done by Bao et al., to compute \\\\hat{\\\\hat{y}}. However, rather than introducing and training another classifier, we made use of our encoder which is already able to predict labels, thus we pass the reconstructed image back through the encoder. \\n\\n[Reviewer]\\nLater in the paper, you explicitly state that \\\\hat{\\\\mathcal{L}_{class}} \\\"does not provide any clear benefit.\\\" If that is the case, then you should ideally include it neither in the model nor in the paper. If it was included primarily because previous models included it, then I would recommend you introduce its use in a background section on Bao et al., 2017 rather than including it in your model description with an explanation like \\\"so that our model may easily be used to perform classification tasks.\\\"\\n\\n[Authors]\\nThe explanation for using \\\\hat{L}_{class} is not \\\"so that our model may easily be used to perform classification tasks.\\\", it is because (Section 3.1) it \\\"provides a gradient containing label information to the decoder\\\".\\n\\nWe chose to investigate the need for \\\\hat{L}_{class} because other works had proposed a similar idea - that is to train a classifier on reconstructed samples \\\\cite{odena2016conditional, bao2017cvae}. We did not know a priori that this component would be redundant, and only discovered this following investigation. Indeed, it is a useful and relevant finding of our work for the representation learning community. Rather than simply leaving this component out of our model, we chose to perform an extensive ablation study to provide evidence for this.\\n\\nWe would like to make a final note concerning the quotation \\\"so that our model may easily be used to perform classification tasks\\\". This was simply referring to the fact that the encoder outputs an attribute label vector, \\\\hat{y}, and hence may be used as a classifier. As mentioned above, we have amended this section of the text (Section 3.1) to read:\\n\\n\\\"Additionally, the encoder also acts as a classifier, outputting an attribute vector, \\\\hat{y}, along side a latent vector, \\\\hat{z}.\\\"\\n\\n[Reviewer]\\nUltimately, this last point brings us to a good summary of my concerns with the model: the inclusion of too many moving parts, some of which the authors explicitly say later on provide no benefit.\\n\\n[Authors]\\nSince our model is no more complex than that of Bao et al., an accepted paper, we assert that this is not grounds for rejection. The original Bao et al. model consists of 6 components (see Equation 7 of Bao et al.). 
There is only one term in our loss function that following investigation, we considered to be redundant. Rather than just leaving this out, we performed an extensive ablation study to provide evidence for this.\\n\\nThe source of the confusion seems to be summed up by the question, \\\"where is this classification model?\\\". While the reviewer has understood the core contributions of our paper, this misunderstanding has lead to most of the questions above. Simply, the encoder acts as a classifier because it predicts both an attribute, \\\\hat{y}, and a latent vector, \\\\hat{z}. We have amended our paper to make this more clear, adding the following to Section 3.1:\\n\\n\\\"Additionally, the encoder also acts as a classifier, outputting an attribute vector, \\\\hat{y}, along side a latent vector, \\\\hat{z}.\\\"\\n\\nCrucially, the encoder in our model may be used as a classifier, unlike in other attribute editing models and we demonstrate that our classifier achieves results that are competitive with state of the art classification results.\"}",
"{\"title\": \"Addressing concerns about experimental results.\", \"comment\": \"[Reviewer]\\nMoving on to experimental results, I think this is another area where I have a few concerns. First, in Figure 2, the authors argue that your model is \\\"better for 6 out of 10 attributes\\\" and comparable results for most others. The authors include a gap of 0.1 in the \\\"Gray_hair\\\" category as \\\"better\\\" but label a gap of 0.5 in the Black hair category as \\\"comparable.\\\" I think results in several of the categories are sufficiently close hat error bars would be necessary to draw actual conclusions. If \\\"better\\\" were to mean \\\"better by 0.5\\\" for example, then the authors method is better on 4 tasks (smiling, blonde hair, heavy makeup, mustache) and worse on 3 (black hair, brown hair, wavy hair).\\n\\n[Authors]\\nThe quotation \\\"better for 6 out of 10 attributes\\\" is no where to be found in our paper. The paper read \\\"outperformed for 6 out of 10\\\". However, we understand the reviewer's concerns and agree that the values are close. For this reason, throughout the paper we have stressed that the results are competitive. We have amended the text in the results section to reflect this also:\\n\\n\\\"Results in Figure \\\\ref{fig:state_of_art} show that our model is highly competitive with a state of the art facial attribute classifier \\\\cite{zhuang2018multi}. We outperformed by more than 1% on $2$ out of $10$ categories, underperformed by more than 1% on only $1$ category and remained competitive with all other attributes.\\\"\\n\\n[Reviewer]\\nWith respect to the actual attribute editing, my main concern here is a lack of comparison to models other than Bao et al., despite the fact that face attribute changing is an exhaustively studied task. A number of papers like Perarnau et al., 2016, Upchurch et al., 2017, Lample et al., 2017 and others study this task from machine learning perspectives, and in some cases can perform photorealistic image attribute editing without complicated machinery on megapixel face images. \\n\\n[Authors]\\nWe would like to thank the reviewer for pointing out additional related work, this has helped us to improve the related work section of our paper.\\n\\nThe focus of our work has been to learn a representation that factors attribute information from the rest of the representation. We test this factorization process in two ways: (1) attribute editing and (2) attribute classification.\\n\\nPerarnau et al., 2016 is similar to our model without the L_{aux}, \\\\hat{L}_{class} or \\\\hat{L}_KL. While Perarnau et al. does perform image attribute editing, they do not present classification results.\\n\\nThe method proposed by Upchurch et al., 2017 requires a reverse mapping procedure which is very computationally intensive, applying gradient descent in image space. Additionally, the focus of the work by Upchurch et al., 2017 is not on representation learning and may not be used for image classification. \\n\\nIn our paper, the goal is to learn representations for images. For this goal, our encoder network needs to encode more than just the attribute-invariant information, but also the attribute information itself. The encoder of the Fader Network proposed by Lample et al., 2017 predicts only the attribute-invariant information, while our encoder network predicts both attribute-invariant information and the attribute. This makes our model not only suitable for attribute editing, but it also makes our model suitable for classification. 
Ultimately, developping models capabable of more than just one task are exciting and important steps forward for the field. \\n\\nWe have updated our paper to include references to Perarnau et al., 2016, Upchurch et al., 2017 and Lample et al., 2017, as per the reviewer's suggestions, to strengthen the related work section of our paper.\\n\\n[Reviewer]\\nAt least the images in Figure 3 and 4 are substantially downsampled from the typical resolution found in the Celeba dataset, suggesting that there was some failure mode on full resolution images.\\n\\n[Authors]\\nWe use the standard image size, 64x64, and these were used directly without down-sampling. Unfortunately, some resolution was unintentionally lost in Figure 4, when annotated with \\\\hat{y}=0 and \\\\hat{y}=1. We have rectified this and have updated the image with a higher resolution version. However, images in Figure 3 were not affected and we are not aware of any failure modes.\\n\\nWe use images of size 64x64, rather than larger image sizes, (a) to make our ablation study more computationally feasible and (b) to make our results more reproducible by those with modest resources. We believe that this is generally good for the field.\\n\\nWe provide extensive quantitative results which show (a) that the attribute manipulation is reliable and (b) that we achieve a low reconstruction error. Additionally, our classification results are further evidence that our model does indeed factor attribute information from the rest of the latent vector, which is our objective.\"}",
"{\"title\": \"Our model is no more complex that related models and we compare to a state of art classification model.\", \"comment\": \"The reviewer's main concerns appear to be complexity of the model and comparison to related work.\\n\\n[1] Complexity:\\nThe reviewer's main concern is that the model is too complex, however, our proposed model is no more complex than the accepted paper of Bao et al. Our cost has the same number of components and hyper parameters and our model has the same number of networks (our encoder network has two outputs). Most of our components are also less complex because losses are computed on network outputs rather than on features extracted from multiple intermediate layers. Additionally, we demonstrate that terms in our loss function, \\\\hat{L}_{class}, may be excluded, making our model less complex.\\n\\nThroughout our work we have been intentionally explicit and detailed about the costs we use. This may have resulted in the complexity of our approach being excessively emphasised, however, it is merely a thorough presentation of our idea, intended to make the work reproducible. Complexity appears to be the reviewer\\u2019s main concern, however, since our paper is no more complex than papers previously accepted, we assert that our paper, detailing a novel approach, should be accepted. \\n\\n\\n[2] Comparison to related work:\\nThe focus of our work has been to learn a representation that factors attribute information from the rest of the representation. We test this factorization process in two ways: (1) attribute editing and (2) attribute classification. \\n\\nThe papers recommended by the reviewer focus only on attribute editing and not on representation learning and they (Upchurch et al., 2017 and Lample et al., 2017) may not be used for, or (Perarnau et al., 2016) have not been demonstrated for attribute classification. To the best of our knowledge, our work is the only approach to learn disentangled representations that may be applied to both image attribute manipulation and simultaneously achieves competitive results with state of the art models on image classification. This novel versatility of the model is certainly a strength of our paper and grounds for acceptance.\\n\\nWhen comparing our model to previous work, we chose the most challenging benchmark for facial attribute classification, not just comparing to models intended for attribute editing. Our classification results are highly competitive with this benchmark.\\n\\nWe hope that following the revisions suggested by the reviewer and the inclusion of recommended citations, the reviewer will take our response into consideration and revise their assessment of our paper. Thank you.\"}",
"{\"title\": \"thanks for clarifying\", \"comment\": \"1) While these components(e.g., L_gan and L_KL) are verified in Bao et al., it is still possible that they are not necessary in this model since you add some new staff. Since you have the experiments which demonstrates the importance of adding these staffs, can you add the results to clarify this or at least add several sentences to mention this?\\n\\n2) In the original GAN, if the equilibrium of min-max objective is achieved, we will have p_data = p_model. Is there anything similar in your model? What will we obtain if the equilibrium of your min-max objective is achieved? This part seems to be not very clear and make your method to be not that \\\"principle\\\".\"}",
"{\"title\": \"Thank you for acknowledging that our paper is `clear' and `interesting'\", \"comment\": \"[Reviewer]\\nThis paper proposed a generative model to learn the representation which can separates the identity of an object from an attribute. Authors extended the autoencoder adversarial by adding an auxiliary network.\\n\\n[Reviewer]\\nStrength\\nThe motivation of adding this auxiliary network, which is to distinguish the information between latent code z and attribute vector y, is clean and clear.\\nExperiments illustrate the advantage of using auxiliary network and demonstrating the role of classify. Experimental results also show the proposed model learning to factor attributes from identity on the face dataset.\\n\\n[Authors]\\nWe thank the reviewer for acknowledging this paper is \\u201cclear\\u201d, \\u201cinteresting\\u201d and that experiments presented in our paper do indeed support our proposed method. The reviewer appears to have a very good understanding of the contributions made in our paper.\"}",
"{\"title\": \"We have addressed comments from reviewer 3\", \"comment\": \"[Reviewer]\\nWeakness \\nThe proposed model seem to be unnecessarily complex. For example, the loss of in (6) actually includes 6 components (5 are from L_enc) and 4~5 tuning hyper-parameters. \\n\\n[Authors]\\nWe appreciate that our model has several components, however, the original Bao et al. model also consists of 6 components (see Equation 7 of Bao et al.) along with 4 hyper-parameters. Since our model is no more complex than that of Bao et al. \\u2014 an accepted paper \\u2014 we assert that this should not be seen as a weakness of our paper.\\n\\nDespite having a few hyper-parameters, our model does not require extensive hyper-parameter tuning. However, to obtain high fidelity reconstructions it is necessary to select low values of delta (the weight on the L_gan term) and alpha (the weight on the KL term).\\n\\n[Reviewer]\\nThe L_gan also includes 3 parts. \\n\\n[Authors]\\nThe L_gan term is similar to the one used by Bao et al, which also has three components. Note also that the GAN loss is intended purely to improve the visual quality of the samples. Our contribution is still valid without L_gan, since our main contribution is the introduction of an auxiliary network, A_\\\\psi, and the loss, L_aux.\\n\\n[Reviewer]\\nThe reason of adding gan loss lacks either theoretical or empirical analysis. So as L_KL.\\n\\n[Authors]\\nThe use of the gan loss and the KL loss are motivated already by Bao et al. in the VAE-GAN which we introduce in Section 2.\\n\\nAccording to our understanding of the GAN literature, it is generally accepted that GAN loss improves the visual quality of samples. Experimentally, we found this to be true for our model also. Similarly, regularisation of the latent space helps with generalisation to test samples. We found that our model performed better with a small amount of KL regularisation than without any, and that models trained without regularisation overfit and had very poor reconstruction (MSE=0.0381). \\n\\nInterestingly, when our model is trained without a GAN loss or KL loss, it is still able to edit attributes with high accuracy, however, the visual quality of samples is poor. This shows that the attribute information is still factored from the rest of the latent representation, which is the main contribution of our work. \\n\\n[Reviewer]\\nIn addition, the second term in L_gan is unnecessary since you already have a reconstruction loss. It also make it to be unclear what we obtain if the equilibrium of the GAN objective achieved.\\n\\n[Authors]\\nWe use a similar L_gan to that used by Bao et al. Please refer to Algorithm 1, line 9 of Bao et al. In the cVAE-GAN there are two sources of `fake' images, (a) reconstructed images, D_\\\\theta(E_\\\\phi(x)) and (b) sampled images, D_\\\\theta(z), z ~ p(z), where p(z) is the prior. This is why there are three terms in the GAN loss. Reconstruction loss alone is often not enough to achieve high quality reconstruction.\\n\\nWe explicitly wrote out the GAN loss as three terms for clarity, but the GAN loss could still be written as two terms, where C_\\\\chi is the discriminator:\\nE_p_real(x) log C_\\\\chi(x) + E_p_fake(x) log (1 - C_\\\\chi(x)), where sampling x_fake ~ p_fake(x), includes both reconstructed images and sampled images. This is the same as the original GAN loss.\"}",
"{\"title\": \"Addressing questions of reviewer 3\", \"comment\": \"[Reviewer]\\nThe written of this paper can be improved to make it more clear. \\nIt looks \\\\hat_y and \\\\tilde_y are same thing. \\n\\n[Authors]\\nLine 6 of Algorithm 1 (as well as Figure 1) show that \\\\hat{y} is one of the outputs of the encoder. This is also described in Section 2.4: \\u201cthe encoder outputs both a latent vector, \\\\hat{z}, and an attribute vector, \\\\hat{y}\\u201d\\n\\nLine 8 of Algorithm 1 (as well as Figure 1) show that \\\\tilde{y} is the output of the auxiliary classifier. This is also described in Section 3.1: \\\"We introduce an auxiliary network, A_\\\\psi : \\\\hat{z} \\u2014> \\\\tilde{y},\\u201d\\n\\nTo re-iterate, \\\\hat{y} is one of the outputs of the encoder. \\\\tilde{y} is the output of the auxiliary classifier. They are not the same thing.\\n\\n[Reviewer]\\nHow do you get \\\\hat_z? Do you assume the posterior distribution is Gaussian and use the reparameterization trick?\\n\\n[Authors]\\nYes, we assume a Gaussian prior and posterior and use the re-parametrization trick. On page 4 we say \\u201cWe integrate this novel factorisation method into a VAE-GAN.\\u201d And we describe VAE-GANs in Section 2. Specifically, we describe the re-parametrization trick in Section 2.1:\\n\\n\\\"The encoder predicts, \\\\mu_\\\\phi(x) and \\\\sigma_\\\\phi(x) for a given input x and a latent sample, \\\\hat{z}, is drawn from q_\\\\phi(z|x) as follows: \\\\epsilon \\\\sim \\\\mathcal{N}(\\\\mathbf{0},I) then z = \\\\mu_\\\\phi(x) + \\\\sigma_\\\\phi(x) \\\\odot \\\\epsilon.\\\"\\n\\nThere was a typo here, the paper now reads: \\u201cthen \\\\hat{z} = \\\\mu_\\\\phi(x) + \\\\sigma_\\\\phi(x)\\u201d\\n\\n[Reviewer]\\nWhat are \\\\hat_y and \\\\hat_\\\\hat_y? Are they binary or a scalar between 0 and 1? \\n\\n[Authors]\\nLine 6 of Algorithm 1 (as well as Figure 1) show that \\\\hat{y} is one of the outputs of the encoder.\\n\\nLine 8 of Algorithm 1 (as well as Figure 1) show that \\\\hat\\\\hat{y} is the \\u201cpredicted label of a reconstructed data sample\\u201d. \\n\\n\\\\hat_y and \\\\hat_\\\\hat_y are continuous scalars between [0,1]. We have now added in Section 2.5: \\u201c\\\\hat{y} \\\\in [0,1]\\u201d and in Section 3.1 we have added \\u201c\\\\hat{\\\\hat{y}} \\\\in [0,1]\\u201d to make this more clear.\\n\\nThank you for helping us to improve our paper with this comment.\\n\\n[Reviewer]\\nHow do you generate \\\\hat_x? When generating \\\\hat_x, do you sample \\\\hat_z and \\\\hat_y? If so, how do treat the variance problem of \\\\hat_y? \\n\\n[Authors]\\nTo obtain \\\\hat{x} we pass \\\\hat{z} and \\\\hat{y} through the decoder, D_\\\\theta. During training \\\\hat{z} and \\\\hat{y} are the output of the encoder, which takes a sample from the data as input. During attribute manipulation, \\\\hat{z} is still the output of the encoder, and we set \\\\hat{y} to 0 or 1. This is detailed in the paper as follows:\", \"during_training\": \"Lines 4, 6, and 7 of Algorithm 1 show how \\\\hat_x is synthesised.\", \"line_4\": \"An image x is sampled from the data\", \"line_6\": \"x is passed through the encoder, which outputs \\\\hat{z} and \\\\hat{y}.\", \"line_7\": \"\\\\hat{z} and \\\\hat{y} are concatenated and passed to the decoder.\", \"during_attribute_manipulation\": \"Quote taken from Section 3.3 of our paper:\\nwe encode the image to obtain a \\\\hat{z}, the identity representation, append it to our desired attribute label, \\\\hat{y} <\\u2014 y, and pass this through the decoder. 
We use \\\\hat{y}=0 and \\\\hat{y}=1 to synthesize samples in each mode of the desired attribute e.g. `Smiling' and `Not Smiling'.\"}",
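A minimal sketch of the two procedures just described: the re-parametrization trick used during training, and the attribute-editing recipe of encoding, overwriting hat{y}, and decoding. Here enc and dec are illustrative stand-ins for E_phi and D_theta, not the authors' code:

    import torch

    def reparameterize(mu, sigma):
        # hat{z} = mu_phi(x) + sigma_phi(x) * eps, with eps ~ N(0, I)
        eps = torch.randn_like(sigma)
        return mu + sigma * eps

    def edit_attribute(enc, dec, x, target_y):
        # Encode to get the identity representation hat{z} (the predicted
        # attribute hat{y} is discarded), substitute the desired attribute
        # value (0.0 or 1.0, e.g. 'Not Smiling' / 'Smiling'), and decode.
        z_hat, _y_hat = enc(x)
        y = torch.full((x.size(0), 1), float(target_y))
        return dec(torch.cat([z_hat, y], dim=1))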
"{\"title\": \"Our model is no more complex than others.\", \"comment\": \"The reviewer's main concern is that the model is too complex, however, our proposed model is no more complex than the accepted paper of Bao et al. Our cost has the same number of components and hyper parameters and our model has the same number of networks (our encoder network has two outputs). Most of our components are also less complex because losses are computed on network outputs rather than on features extracted from multiple intermediate layers. Additionally, we demonstrate that terms in our loss function, \\\\hat{L}_{class}, may be excluded, making our model less complex.\\n\\nThroughout our work we have been intentionally explicit and detailed about the costs we use. This may have resulted in the complexity of our approach being excessively emphasised, however, it is merely a thorough presentation of our idea. If complexity is the reviewer\\u2019s main concern, our paper is no more complex than papers previously accepted. If this is the main criticism, our paper should be accepted.\\n\\nWe would again like to thank the reviewer for their constructive feedback, which has enabled us to improve our paper.\"}",
"{\"title\": \"Thank you for helping us to improve our paper.\", \"comment\": \"We would like to sincerely thank the reviewer for reading and understanding our paper and for providing constructive feedback. We have addressed all of the comments below and in one case we respectfully ask for some clarification, please. The feedback from the reviewer has been very helpful for improving our paper (please see the updated version).\"}",
"{\"title\": \"Improvements to related work and additional comparison with DIP-VAE\", \"comment\": \"[Reviewer]\\nThere is a large body of work on disentanglement that the paper does not cite or compare to for instance, InfoGAN, Beta-VAE https://openreview.net/pdf?id=Sy2fzU9gl and disentangled latent concepts https://arxiv.org/pdf/1711.00848.pdf ([Authors] (DIP-VAE)).\\n\\n[Authors]\\nWe appreciate the additional references that the reviewer proposed, two of which we had already included (beta-VAE and InfoGAN). Based on these suggestions we have improved the related work section of our paper by adding the following:\\n\\n\\\"Finally, while we use labelled data to learn representations, we acknowledge that there an many other models that learn factored, or disentangled, representations from unlabelled data including several VAE variants \\\\citep{higgins2016beta, kumar2018variational}.\\\"\\n\\nWe would also like to draw the reviewer's attention to the six instances where we had already cited InfoGAN in our paper and the one instance of beta-VAE.\\n\\nBelow is one quoted example where we investigated the inclusion of a component of our model that was inspired by a similar approach in InfoGAN (Section 4.1, page 6): \\n\\n\\\"Using \\\\hat{\\\\mathcal{L}}_{class} does not provide any clear benefit. We explored the effect of including this term since a similar approach had been proposed in the GAN literature \\\\cite{chen2016infogan,odena2016conditional} for conditional image synthesis (rather than attribute editing). To the best of our knowledge, this approach has not been used in the VAE literature. This term is intended to maximise I(x,y) by providing a gradient containing label information to the decoder, however, it does not contribute to the factorization of attribute information, y, from \\\\hat{z}.\\\"\\n\\nThough we have cited beta-VAE (Section 4.3, page 8), we did not make a direct comparison to beta-VAE since it is trained without labelled data and a classifier is trained post-hoc; this is similar to the DIP-VAE (an improvement on the beta-VAE). Instead, we chose to compare our results to a state of the art classification model, since it outperforms all other methods, including those whose objective is to learn disentangled representations. For the sake of completeness, below is the comparison of our classification results (which are competitive with state of the art) and those reported in the DIP-VAE paper:\\n\\n\\tLabel\\t\\t|\\tDIP-VAE\\t\\t|\\tours\\t|\\n-\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nBlack hair\\t\\t|\\t80.6 \\t\\t|\\t89.8 \\t|\\nBlonde hair\\t\\t|\\t91.9 \\t | \\t97.2 \\t|\\nHeavy Makeup\\t|\\t81.5 \\t\\t|\\t92.8 \\t|\\nWavy hair\\t\\t|\\t71.5. \\t\\t| \\t84.5 \\t|\\nLipstick. \\t\\t|\\t84.7 \\t\\t|\\t94.4 \\t|\\n\\nAs is clear, our model strongly out-performs DIP-VAE, which is why we did not explicitly compare with these results in our paper, and rather chose to compare to a state of the art classification model. We felt this was a vastly more challenging benchmark.\\n\\n---- EDIT ----\\n\\nWe have included additional results comparing our model with DIP-VAE (Kumar et al.) in the Appendix.\"}",
"{\"title\": \"Improvements to related work and a request for clarification.\", \"comment\": \"[Reviewer]\\nNote that for example that in beta-VAE it is a similar idea where but it is on z and z|x and the distance used is KL (since it is has closed form with gaussian), min_phi Loss+ beta KL (p(z), p(z|x)), a discussion of the previous related work in the paper is necessary. \\n\\n[Authors]\\nThe authors whole heartedly appreciate the contributions beta-VAE has made to the field and specifically to representation learning. As per the reviewer's helpful suggestion, we have added the following to improve the related work section of our paper (adding an additional citation to both beta-VAE and Burgess et al.):\\n\\n\\\"The beta-VAE \\\\cite{higgins2016beta} objective is similar to the information bottle neck \\\\cite{burgess2018understanding}, minimizing mutual information, I(x;z), which forces the model to exploit regularities in the data and learn a disentangled representation. In our approach we perform a more direct, supervised, factorisation of the latent space, using a mini-max objective, which has the effect of approximately minimizing I(z;y).\\\"\\n\\n[Reviewer]\", \"the_work_is_also_related_to_mine_https\": \"//arxiv.org/pdf/1801.04062.pdf where one would like to minimize the mutual information I(z;y) this mutual information is estimated through a min/max game.\\n\\n[Authors]\\nThank you very much for the pointers to additional literature.\\n\\nWe have one question about the connection to MINE (Belghazi et al): To the best of our knowledge, we cannot find the implied connection of MINE with performing an explicit mini-max game, which our paper proposes?\\n\\nThe method presented in the paper, Belghazi et al. 2018, learns a model, T_\\\\theta, that takes set of two variables (e.g. a \\\\in A, b \\\\in B) as input and predicts the mutual information (e.g I(A;B)). According to Algorithm 1 of Belghazi et al., T_\\\\theta is learned via gradient ascent only, there is no mini-max objective for estimating T_\\\\theta. Depending on the application, the mutual information may be minimized or maximized.\\n\\nTwo applications that Belghazi et al. propose include:\\n(1) Using T_\\\\theta as a regularizer in a GAN, maximizing the mutual information, I(x;z), between data, x and latent code, z.\\n (a) The only minimax game here is between the generator and discriminator.\\n (b) There is no mini-max objective for estimating T_\\\\theta.\\n (c) The purpose of using T_\\\\theta as a regularizer is to prevent mode dropping.\\n(2) T_\\\\theta is used to approximate the mutual information term, I(x;z), in the information bottleneck. In this example, I(x,z), is minimized.\\n (a) There is no minimax game here.\\n (b) There is no mini-max objective for estimating T_\\\\theta.\\n\\nThe only mention of mini-max we found is in Equation 16, which only corresponds to a GAN setting but does not correspond to their method for approximating mutual information.\\n\\nAdditionally, we could not find any example of mutual information computed between a latent code z and a label y, which is our setting.\\n\\nCould you let us know what you had in mind when making that connection?\\n\\nThank you in advance.\"}",
"{\"title\": \"Addressing Questions Of Reviewer 2\", \"comment\": \"[Reviewer]\", \"questions\": \"- why is RMSprop used for optimization, your model and the Bao et al baseline might benefit from the use of Adam?\\n\\n[Authors]\\nWe experimented with both RMSprop and Adam for both the Bao et al model and ours and generally found RMSprop to give better quality reconstructions.\\n\\n[Reviewer]\\n- (Table 3 in appendix ) Have you tried higher values of alpha the weight of KL, with the model of Bao et al (it is recommended in beta VAE to have high value of what you call alpha)?\\n\\n[Authors]\\nAs discussed in the beta-VAE paper, higher beta values (in our case alpha) often lead to worse reconstruction, which in this case would mean worse preservation of identity. We refer to this in our paper, referencing beta-VAE (Section 4.3, page 8): \\n\\n\\\"It is challenging to learn a representation that both preserves identity and allows factorisation \\\\cite{higgins2016beta}\\\"\\n\\nWe have indeed tried higher values of alpha and observed how these affect classification and reconstruction. For alpha=1.0, in our model that uses res-nets, the MSE rises to 0.041 (very poor reconstruction) and we generally see no improvement in classification. We point the reviewer to Section 4.3 where we discuss this: \\n\\n\\\"We found that the naive cVAE-GAN (Bao et al. \\\\cite{bao2017cvae}) failed to synthesise samples with the desired target attribute \\u2018Not Smiling\\u2019. This failure demonstrates the need for models that can deal with both reconstruction and attribute-editing. Note that we achieve good reconstruction by reducing weightings on the KL and GAN loss terms, using \\\\alpha=0.005 and \\\\delta=0.005 respectively.\\\"\\n\\nIn most VAE based models there is a trade off between reconstruction and factorization. In our model, factorization comes from the auxiliary loss so the KL term may be weighted less strongly, hence we are able to use a small alpha weighting on the KL term.\"}",
"{\"title\": \"Improved Related Work.\", \"comment\": \"[Reviewer]\", \"overall_assessment\": \"The paper novelty is using min/max game to estimate the mutual information between y (attribute) and z (identity code). Disentanglement and use of min/max games for estimating mutual information has been explored before. Further discussion and comparison to previous work is needed. \\n\\n[Authors]\\nAs mentioned above, to the best of our knowledge, we cannot find the implied connection of MINE with performing an explicit mini-max game, which our paper proposes. We would appreciate it if the reviewer could please let us know what they had in mind when making this connection?\\n\\nWe appreciate the additional references that the reviewer proposed, two of which we had already included (beta-VAE and InfoGAN). Based on these very helpful and constructive suggestions we have improved the the related work section of our paper, by adding the following:\\n\\n\\\"Finally, while we use labelled data to learn representations, we acknowledge that there are many other models that learn factored, or disentangled, representations from unlabelled data including several VAE variants \\\\citep{higgins2016beta, kumar2018variational}. The beta-VAE \\\\cite{higgins2016beta} objective is similar to the information bottleneck \\\\cite{burgess2018understanding}, minimizing mutual information, I(x;z), which forces the model to exploit regularities in the data and learn a disentangled representation. In our approach we perform a more direct, supervised, factorisation of the latent space, using a mini-max objective, which has the effect of approximately minimizing I(z;y).\\\"\\n\\nWe agree that disentanglement has indeed been studied before, however, when making comparisons we have focused on comparing to models that, like ours, make use of labelled data. When comparing our model to previous work, we chose the most challenging benchmark for facial attribute classification, not just those that use disentangled representations. Those that use disentangled representations perform worse than this benchmark. Our classification results are highly competitive with this benchmark.\\n\\nTo the best of our knowledge, our work is the * only * approach to learn disentangled representations which enable image attribute manipulation and simultaneously achieves competitive results with state of the art models on image classification. We believe the demonstrated versatility and novelty of this work are strong grounds for acceptance.\\n\\nAgain, we would like to sincerely thank the reviewer for helping us to improve our paper with their constructive suggestions.\"}",
"{\"title\": \"Method clarity can be improved, and lacks some key comparisons experimentally.\", \"review\": \"In this paper, the authors introduce a neural network architecture that has three components.\\nFirst a VAE is used to encode images in to two latent states \\\\hat{y} and \\\\hat{z}, with \\\\hat{z}\\nintended to be class (e.g. face attribute) agnostic. The decoder reconstructs images from \\\\hat{y}\\nand \\\\hat{z} concatenated together. A GAN style discriminator attempts to distinguish the \\ndecoded image from the original input image as real or fake, allowing the decoder to produce \\nhigher quality decoded images. An auxiliary network A attempts to classify the face attribute y\\nfrom the class agnostic features \\\\hat{z}, with the idea being that the encoder should try to produce \\n\\\\hat{z} vectors from which the class cannot be predicted. An additional classifier is trained\\nusing a classification loss \\\\hat{L}_{class} on the encoded reconstructed image, the use of which \\nI don't understand.\\n\\nI think additional work on section 2.5 through section 3 would be helpful to improve clarity.\\nAs one example, \\\"y\\\" is unnecessarily overloaded: y denotes a specific attribute, \\\\hat{y}\\ndenotes a latent vector that is intended to not be class agnostic, \\\\tilde{y} denotes the\\nprediction of an auxiliary network on an intended class-agnostic latent vector \\\\hat{z} of\\nthe presence of the original attribute y, and \\\\hat{\\\\hat{y}} denotes the non agnostic latent\\nvector achieved by passing the decoded image back through the encoder.\\n\\nThis notational complexity is compounded by the fact that a number of steps in the method are\\nnot well motivated in the text, and left to the reader to understand their purpose. For example,\\nthe authors state that \\\"we incorporate a classification model into the encoder so that our model may\\neasily be used to perform classification tasks.\\\" What does this mean? In the diagram (Figure 1),\\nwhere is this classification model? Why in the GAN loss is there a term that compares the\\nfake loss with the result of classifying a decoded z vector? Is this z \\\\hat{z}, or a latent vector\\ndrawn from a distribution p(z)? If it is the former, how does this term differ from the second\\nterm in the GAN loss. If it is the latter, then shouldn't it be concatenated with some y in order to\\nbe used as input to the decoder D_{\\\\theta}?\\n\\nWhy is it important to extract \\\\hat{\\\\hat{y}} from \\\\hat{x}? In the paper you state that the loss\\n\\\"provides a gradient containing label information to the decoder,\\\" but why can't we use the known label y\\nof the original input x to ensure that the encoder and decoder preserve this information if it is used as \\\\hat{y}?\\nLater in the paper, you explicitly state that \\\\hat{\\\\mathcal{L}_{class}} \\\"does not provide any clear benefit.\\\"\\nIf that is the case, then you should ideally include it neither in the model nor in the paper. 
If it was\\nincluded primarily because previous models included it, then I would recommend you introduce its use\\nin a background section on Bao et al., 2017 rather than including it in your model description with an\\nexplanation like \\\"so that our model may easily be used to perform classification tasks.\\\"\\n\\nUltimately, this last point brings us to a good summary of my concerns with the model: the inclusion\\nof too many moving parts, some of which the authors explicitly say later on provide no benefit.\\n\\nMoving on to experimental results, I think this is another area where I have a few concerns. First, in\\nFigure 2, the authors argue that your model is \\\"better for 6 out of 10 attributes\\\" and comparable results for most others. The authors include a gap of 0.1 in the \\\"Gray_hair\\\" category as \\\"better\\\" but label a gap of 0.5\\nin the Black hair category as \\\"comparable.\\\" I think results in several of the categories are sufficiently close\\nthat error bars would be necessary to draw actual conclusions. If \\\"better\\\" were to mean \\\"better by 0.5\\\" for example,\\nthen the authors method is better on 4 tasks (smiling, blonde hair, heavy makeup, mustache) and worse on 3 (black hair, brown hair, wavy hair).\\n\\nWith respect to the actual attribute editing, my main concern here is a lack of comparison to models other than Bao et al., despite the fact that face attribute changing is an exhaustively studied task. A number of papers like Perarnau et al., 2016, Upchurch et al., 2017, Lample et al., 2017 and others study this task from machine learning perspectives, and in some cases can perform photorealistic image attribute editing without complicated machinery on megapixel face\\nimages. At least the images in Figure 3 and 4 are substantially downsampled from the typical resolution found in the Celeba dataset, suggesting that there was some failure mode on full resolution images.\\n\\n----\", \"edit\": \"I've reviewed the authors' addressing my concerns in their paper and am happy to increase my rating as a result.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"missing references to previous work\", \"review\": \"Summary:\\n\\nThis paper builds upon the work of Boa et al (2017 ) (Conditional VAE GAN) to allow attribute manipulation in the synthesis process.\", \"in_order_to_disentangle_the_identity_information_from_the_attributes_the_paper_proposes_adversarial_information_factorization\": \"let z be the latent code and y be the attribute the paper proposes to have p(y) = p(y|z= E_phi(x)), i.e to have z independent of y. This disentanglement is implemented through a GAN on the variable y min _phi Distance (p(y), p(y|z)), the distance is defined via a discriminator on y.\\n\\nExperiments are presented on celeba dataset, 1) on attribute manipulation from smiling to non smiling for example, on 2) attribute classification results are presented , 3) ablation studies are given to study the effect of each component of the model highlighting the effect of the adversarial information factorization.\", \"originality_novelty\": \"There is a large body of work on disentanglement that the paper does not cite or compare to for instance, InfoGAN, Beta- VAE https://openreview.net/pdf?id=Sy2fzU9gl and disentangled latent concepts https://arxiv.org/pdf/1711.00848.pdf\\n\\nNote that for example that in beta- VAE it is a similar idea where but it is on z and z|x and the distance used is KL (since it is has closed form with gaussian) , min_phi Loss+ beta KL (p(z), p(z|x)), a discussion of the previous related work in the paper is necessary.\", \"the_work_is_also_related_to_mine_https\": \"//arxiv.org/pdf/1801.04062.pdf where one would like to minimize the mutual information I(z;y) this mutual information is estimated through a min/max game.\", \"questions\": [\"why is RMSprop used for optimization, your model and the Bao et al baseline might benefit from the use of Adam?\", \"(Table 3 in appendix ) Have you tried higher values of alpha the weight of KL, with the model of Bao et al (it is recommended in beta VAE to have high value of what you call alpha)?\"], \"overall_assessment\": \"The paper novelty is using min/max game to estimate the mutual information between y (attribute) and z (identity code). Disentanglement and use of min/max games for estimating mutual information has been explored before. Further discussion and comparaison to previous work is needed.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, Too complex model\", \"review\": \"This paper proposed a generative model to learn the representation which can separates the identity of an object from an attribute. Authors extended the autoencoder adversarial by adding an auxiliary network.\\n\\nStrength\\nThe motivation of adding this auxiliary network, which is to distinguish the information between latent code z and attribute vector y, is clean and clear.\\nExperiments illustrate the advantage of using auxiliary network and demonstrating the role of classify. Experimental results also show the proposed model learning to factor attributes from identity on the face dataset.\\n\\nWeakness \\nThe proposed model seem to be unnecessarily complex. For example, the loss of in (6) actually includes 6 components (5 are from L_enc) and 4~5 tuning hyper-parameters. The L_gan also includes 3 parts. The reason of adding gan loss lacks either theoretical or empirical analysis. So as L_KL. In addition, the second term in L_gan is unnecessary since you already have a reconstruction loss. It also make it to be unclear what we obtain if the equilibrium of the GAN objective achieved.\\n\\nThe written of this paper can be improved to make it more clear. \\nIt looks \\\\hat_y and \\\\tilde_y are same thing. \\nHow do you get \\\\hat_z? Do you assume the posterior distribution is Gaussian and use the reparameterization trick? What are \\\\hat_y and \\\\hat_\\\\hat_y? Are they binary or a scalar between 0 and 1? How do you generate \\\\hat_x? When generating \\\\hat_x, do you sample \\\\hat_z and \\\\hat_y? If so, how do treat the variance problem of \\\\hat_y?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We predict both the *attribute* and the attribute-invariant information, making our model suitable for classification.\", \"comment\": \"Hello,\\n\\nThank you for showing interest in our paper and for pointing us to this work. We agree that there are some similarities between our work, however there is also a key difference.\\n\\nIn our paper, the goal is to learn representations for images, for this our encoder network needs to encode more than just the attribute-invariant information, but also the attribute information itself. The encoder of the Fader Network predicts only the attribute-invariant information, while our encoder network predicts both attribute-invariant information and the attribute. This makes our model not only suitable for synthesising images, but it also makes our model suitable for classification. \\n\\nWe present classification results, using our model, that are highly competitive with state of the art classification results of Zhuang et al. (2108).\\n\\n(Additional details)\\n\\nAs mentioned above, unlike in the Fader Networks, our encoder model predicts attribute information, \\\\hat{y}, along side the attribute-invariant information, \\\\hat{z}, which we refer to as identity. During training, our decoder network is fed with predicted attribute values, \\\\hat{y}, rather than the ground truth values, y, as in the Fader Networks. Training the decoder to reconstruct images from predicted attributes, \\\\hat{y}, combined with the adversarial factorization, forces the encoder network to put attribute information into \\\\hat{y}, resulting in the encoder being an excellent classifier.\\n\\nWe will add Fader Networks to the related work section of our paper in the next revision.\"}",
"{\"comment\": \"I just wonder why not cite \\\"Fader Networks:Manipulating Images by Sliding Attributes\\\" in NIPS2017. In my opinion, The adversarial factorisation in latent space in this paper is quite similar with the referred paper. Both aim to disentangle the attribute-invariant representations and the attribute labels via an adversarial classifier.\", \"title\": \"Why not cite FaderNetworks in NIPS2017\"}"
]
} |
|
SkeRTsAcYm | Phase-Aware Speech Enhancement with Deep Complex U-Net | [
"Hyeong-Seok Choi",
"Jang-Hyun Kim",
"Jaesung Huh",
"Adrian Kim",
"Jung-Woo Ha",
"Kyogu Lee"
] | Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of spectrogram while reusing the phase from noisy speech for reconstruction. This is due to the difficulty of estimating the phase of clean speech. To improve speech enhancement performance, we tackle the phase estimation problem in three ways. First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms. Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks. Third, we define a novel loss function, weighted source-to-distortion ratio (wSDR) loss, which is designed to directly correlate with a quantitative evaluation measure. Our model was evaluated on a mixture of the Voice Bank corpus and DEMAND database, which has been widely used by many deep learning models for speech enhancement. Ablation experiments were conducted on the mixed dataset showing that all three proposed approaches are empirically valid. Experimental results show that the proposed method achieves state-of-the-art performance in all metrics, outperforming previous approaches by a large margin. | [
"speech enhancement",
"deep learning",
"complex neural networks",
"phase estimation"
] | https://openreview.net/pdf?id=SkeRTsAcYm | https://openreview.net/forum?id=SkeRTsAcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyewFf46-r",
"rkecjeQQo4",
"BkeutKjWgV",
"SygPU_KB0Q",
"r1lmk_KH0X",
"rylC1oN4p7",
"rklUSYVEpX",
"Hke4DDVN6m",
"SyxRGSbR37",
"HylyWCnnnm",
"S1lX3gYqhm"
],
"note_type": [
"comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1563406975281,
1556455586377,
1544825215883,
1542981711067,
1542981595464,
1541847781837,
1541847358047,
1541846875543,
1541440789865,
1541357047506,
1541210283377
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/Authors"
],
[
"ICLR.cc/2019/Conference/Paper850/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper850/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper850/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Authors,\\n\\nAs a random reader of your paper, I really appreciate your honesty. Also a thank to the ICLR organizer for hosting papers in OpenReview. It is a great platform for a continual review!\\n\\nThank you\", \"title\": \"Thank you for your honesty\"}",
"{\"title\": \"Retracted from ICLR2019.\", \"comment\": \"Dear readers, this is the announcement of the retraction of our paper \\\"Phase-Aware Speech Enhancement with Deep Complex U-Net\\\" from ICLR2019.\\n\\nFirst, we thank you for the interest in our work.\\n\\nWe are truly sorry, however, to inform that a significant error was found by ourselves in the experimental process of our paper \\u201cPhase-aware Speech Enhancement with Deep Complex U-Net\\u201d, which was accepted for ICLR2019. After careful examinations, we therefore have made a decision to retract the accepted paper.\\n\\nTo be more specific about the error, we found out that the training data path was accidentally set to the evaluation data path which means the reported numbers in the table are utterly wrong (possibly overfitted results).\\n\\nLastly, we sincerely apologize for the mistake we made and promise it will not happen again.\\n\\nBest regards,\\nAuthors\"}",
"{\"metareview\": \"The authors propose an algorithm for enhancing noisy speech by also accounting for the phase information. This is done by adapting UNets to handle features defined in the complex space, and by adapting the loss function to improve an appropriate evaluation metric.\\n\\nStrengths\\n- Modifies existing techniques well to better suit the domain for which the algorithm is being proposed. Modifications like extending UNet to complex Unet to deal with phase, redefining the mask and loss are all interesting improvements.\\n- Extensive results and analysis.\\n\\nWeaknesses\\n- The work is centered around speech enhancement, and hence has limited focus. \\n\\nEven though the paper is limited to speech enhancement, the reviewers agreed that the contributions made by the paper are significant and can help improve related applications like ASR. The paper is well written with interesting results and analysis. Therefore, it is recommended that the paper be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Application specific paper, but well written with interesting evaluations and analysis\"}",
"{\"title\": \"Response to Reviewer 3 regarding the mask performance.\", \"comment\": \"Reflecting the concern by the Reviewer 3, we conducted further experiments by implementing the \\u2018tanh compression\\u2019 mask.\\nAs expected, our method gave better performance by every quantitative measure.\\n\\n CSIG CBAK COVL PESQ SSNR\\nBDT (ours) 4.18 3.77 3.63 3.06 13.29\\nTanhCompression 4.11 3.33 3.56 3.01 7.01\"}",
"{\"title\": \"Our revised paper has been uploaded.\", \"comment\": \"We would like to thank all the reviewers for their fruitful comments and suggestions that help make our paper more complete and comprehensive.\\nWe have uploaded a newly revised paper reflecting almost all the comments, concerns and suggestions. \\nWe mainly focused on revising the Introduction and Conclusion sections to make the manuscript more comprehensible to general audiences by clarifying the motivation of our work and by describing the potential applications of our work. \\nIn addition, we conducted subjective listening tests and demonstrated that the proposed approach yields superior performance qualitatively.\\nIf there are any further recommendations for the revised paper, we would like to reflect those until the due date.\"}",
"{\"title\": \"Response to Reviewer#3\", \"comment\": \"Thank you for your review and comments.\\n\\nBelow are the responses to each of your comments.\\n\\nFor me, the complex-valued network is already there and weighted SDR loss is not difficult to think. The modified complex ratio mask is a bit interesting. However, I think it better to compare with [Donald S Williamson et al] where the hyperbolic tangent compression is used.\\n-> Answer:\\nThank you for your suggestion. In fact, we tried this before. Since the perceptual quality of hyperbolic tangent compression was not good compared to the other masking methods, we did not add the results in our manuscript. However, as you suggested, we think it is fair to have the actual quantitative results to compare, and thus we are currently retraining the network using hyperbolic tangent compression and will report the result as soon as the training finishes.\\n\\n\\nApart from the objective metrics, a human listening test using MOS or preference score should be conducted.\\n-> Answer:\\nThank you for your suggestion. We will conduct a user listening study with random samples from the test dataset and update the manuscript by adding the results as soon as possible.\\n\\n\\nOn Fig 3, the unbounded complex mask might suffer from the infinity problem leading to training failure. However, on table 2, the performance of the unbounded mask is quite close to your method. It is a bit strange for me.\\n-> Answer:\\nAlthough it may seem to theoretically suffer from the infinite search space, the real distribution of the ideal complex masks are most likely bounded to a small finite region, which is likely to help alleviate the problem. In our case, we had no such problems when training our models.\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"We thank the reviewer for the extensive comments, which were very constructive and helpful for building a better paper.\\n\\nBelow are the responses to each of your comments.\\n\\nMy major concern about this paper is that this paper is a little bit too specific to the speech enhancement applications, which will not be accepted with so many researches in the major ICLR community. My suggestion is to describe some potential applications of this method to the other (speech) applications including speech separation, noise-robust front-end for ASR, TTS, or other speech analysis, and also discuss the possibility of extending this method for multichannel input. \\n-> Answer:\\nThank you for your suggestion. We will add more explanations on potential applications and describe speech enhancement as a fundamental problem for general audio tasks.\\n\\n\\nI\\u2019m more interested in the multichannel enhancement because the phase (difference) is critical in this scenario. \\n-> Answer:\\nWe acknowledge the importance of such scenarios and also are interested in studying the case. We will add a discussion in the future work.\\n\\n\\n- Introduction: It\\u2019s better to cite and discuss the paper of \\u201cE. Hakan et al, \\u201cPhase-sensitive and recognition-boosted speech separation using deep recurrent neural networks,\\u201d Proc. ICASSP\\u201915, pp. 708--712 (2015). This paper is one of the first studies tries to incorporate the phase information to DNN based speech enhancement.\\n-> Answer:\\nThank you for the suggestion. We will add sentences to the Introduction as one of the initial dnn-based approaches incorporating phase information.\\n\\n\\n- Several researchers prefer to use LSTM based enhancement method. Please discuss whether this method (objective function and complex masks) can be applied to complex extensions of LSTMs instead of complex U-net.\\n-> Answer:\\nThank you for your idea for extension. As you mentioned, our objective function and complex masking method can be applied to complex-valued LSTMs, which will be expected to be effective for sequential representation learning and potentially improve the performance. We will add this discussion to the Conclusion.\\n\\n\\n- Page 2, the first paragraph: You may also refer https://arxiv.org/abs/1810.01395\\n-> Answer:\\nThank you for notifying us, but we were not aware of this paper since the deadline was the end of September. As it is very relevant in terms of phase estimation, we will refer to this work in the paper.\\n\\n\\n- Page 3, it\\u2019s better to explicitly mention that h = x + i y\\n-> Answer:\\nWe will fix this in the updated version soon.\\n\\n\\n- Section 3.3: discuss how we treat STFT/iSTFT operations under a computational graph representation. It is not so obvious.\\n-> Answer:\\nWe will add the description in Section 3.3.\\n\\n\\n- Section 3.3: again it\\u2019s better to mention E. Hakan\\u2019s method here.\\n-> Answer:\\nWe will refer to the E. Haken\\u2019s method in Section 3.2 as it is more related to the masking method.\\n\\n\\n- Page 6 footnote: I cannot access to the URL. Please check it.\\n-> Answer:\\nWe think the URL doesn\\u2019t work because the underbars have been removed from from the hyperlink. We will fix this.\\n\\n- Experiments: I think it would be more interesting to add SDR (using speech and noise as a source) to the experimental measure. 
Some people use SDR as a speech enhancement measure, and I\\u2019m expecting that this method can have more reasonable performance since it is optimized based on wSDR.\\n-> Answer:\\nFor quick comparison, we evaluated SDR for DCUnet-20 with the BDT mask setting, and obtained the following results:\\n\\t Spc Wav wSDR\\nSDR 23.17 | 23.99 | 24.16\\nSSNR 9.54 | 12.34 | 13.29 \\nAs expected, wSDR yielded the best performance among the three loss terms compared in the paper.\\nHowever, we would like to note that SDR is essentially scale-invariant SNR, and since wSDR loss tries to fit the scale of the target source, it leads to maximizing SNR (not scale-invariant) more than maximizing SDR. This can be confirmed by the fact that more dramatic improvement is observed in terms of SSNR rather than SDR.\"}",
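
To make the loss being discussed here concrete, below is a minimal NumPy sketch of a weighted-SDR-style loss: a negative cosine similarity on the speech and on the implied noise, weighted by the speech-to-mixture energy ratio. This is our illustration of the idea, not the authors' implementation.

```python
import numpy as np

def neg_cos_sim(a, b, eps=1e-8):
    # Negative cosine similarity; minimising this maximises an SDR-like score.
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def wsdr_loss(x, y, y_hat):
    """x: noisy mixture, y: clean target, y_hat: estimate (1-D waveforms)."""
    z, z_hat = x - y, x - y_hat              # true noise and implied noise estimate
    alpha = np.sum(y ** 2) / (np.sum(y ** 2) + np.sum(z ** 2) + 1e-8)
    return alpha * neg_cos_sim(y, y_hat) + (1 - alpha) * neg_cos_sim(z, z_hat)

x = np.random.randn(16000)                   # dummy 1-second mixture at 16 kHz
y = np.random.randn(16000)                   # dummy clean target
print(wsdr_loss(x, y, y + 0.1 * np.random.randn(16000)))
```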
"{\"title\": \"Response to Reviewer#1\", \"comment\": \"Thank you for your review.\\n\\nBelow are the responses to each of your concerns.\\n\\nThe methodological contribution is mild, essentially changing a building block in a state-of-the-art neural architecture.\\n-> Answer:\\nWe agree that the change of building block (DCUNet) itself might be considered as a mild contribution with respect to methodological points. However, in addition to the modification, our main contributions include a novel masking approach and an advanced loss function design. We believe it is not trivial to successfully incorporate these components, as this results in remarkable performance improvement from the previously proposed methods. Furthermore, as far as we are concerned, this is the first work that enables efficient phase estimation using continuous regression with a complex-valued method.\\n\\n\\nThe paper is for the expert audience mostly and is difficult to grasp without a good background on deep learning for speech enhancement.\\n-> Answer:\\nThank you for pointing this out. As most of the general audience may have less understanding about speech signal modeling, we added additional explanations to the Introduction we consider fundamental for a wider range of audience.\"}",
"{\"title\": \"well written & rather experimental paper -- for the experts mostly\", \"review\": \"The paper is written, provides good description of the state-of-the-art and comprehensive experimental results.\\nThe methological contribution is mild, essentially changing a buiding block in a state-of-the-art neural architecture.\\nThe paper is for the expert audience mostly and is difficult to grasp without a good background on deep learning for speech enhancement.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review of \\\"Phase-Aware Speech Enhancement with Deep Complex U-Net\\\"\", \"review\": \"This paper tackles one of important speech enhancement issues of how to predict phase information. The authors work on this problem based on three novel techniques, one is to use complex U-net, second is to propose a new complex mask representation, which is well bounded and well model complex mask distribution, and the last is an objective function motivated by SDR. The paper is well written, and also shows the experimental effectiveness of the proposed method by analyzing these three novel techniques and also by comparing the method with other speech enhancement methods. My major concern about this paper is that this paper is a little bit too specific to the speech enhancement applications, which will not be accepted with so many researches in the major ICLR community. My suggestion is to describe some potential applications of this method to the other (speech) applications including speech separation, noise-robust front-end for ASR, TTS, or other speech analysis, and also discuss the possibility of extending this method for multichannel input. I\\u2019m more interested in the multichannel enhancement because the phase (difference) is critical in this scenario.\", \"comments\": [\"Introduction: It\\u2019s better to cite and discuss the paper of \\u201cE. Hakan et al, \\u201cPhase-sensitive and recognition-boosted speech separation using deep recurrent neural networks,\\u201d Proc. ICASSP\\u201915, pp. 708--712 (2015). This paper is one of the first studies tries to incorporate the phase information to DNN based speech enhancement.\", \"Several researchers prefer to use LSTM based enhancement method. Please discuss wether this method (objective function and complex masks) can be applied to complex extensions of LSTMs instead of complex U-net.\", \"Page 2, the first paragraph: You may also refer https://arxiv.org/abs/1810.01395\", \"Page 3, it\\u2019s better to explicitly mention that h = x + i y\", \"Section 3.3: discuss how we treat STFT/iSTFT operations under a computational graph representation. It is not so obvious.\", \"Section 3.3: again it\\u2019s better to mention E. Hakan\\u2019s method here.\", \"Page 6 footnote: I cannot access to the URL. Please check it.\", \"Experiments: I think it would be more interesting to add SDR (using speech and noise as a source) to the experimental measure. Some people use SDR as a speech enhancement measure, and I\\u2019m expecting that this method can have more reasonable performance since it is optimized based on wSDR.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"generally good paper on speech enhancement using complex operations\", \"review\": \"This paper used a complex-valued network to learn the modified complex ratio mask with a weighted SDR loss for the speech enhancement task. It can get good enhancement performance.\\n\\nFor me, the complex-valued network is already there and weighted SDR loss is not difficult to think. The modified complex ratio mask is a bit interesting. However, I think it better to compare with [Donald S Williamson et al] where the hyperbolic tangent compression is used.\\n\\nApart from the objective metrics, a human listening test using MOS or preference score should be conducted.\\n\\nOn Fig 3, the unbounded complex mask might suffer from the infinity problem leading to training failure. However, on table 2, the performance of the unbounded mask is quite close to your method. It is a bit strange for me.\\n\\nThe total idea is good, but the novelty is not much.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1g0piA9tQ | Evaluation Methodology for Attacks Against Confidence Thresholding Models | [
"Ian Goodfellow",
"Yao Qin",
"David Berthelot"
] | Current machine learning algorithms can be easily fooled by adversarial examples. One possible solution path is to make models that use confidence thresholding to avoid making mistakes. Such models refuse to make a prediction when they are not confident of their answer. We propose to evaluate such models in terms of tradeoff curves with the goal of high success rate on clean examples and low failure rate on adversarial examples. Existing untargeted attacks developed for models that do not use confidence thresholding tend to underestimate such models' vulnerability. We propose the MaxConfidence family of attacks, which are optimal in a variety of theoretical settings, including one realistic setting: attacks against linear models. Experiments show the attack attains good results in practice. We show that simple defenses are able to perform well on MNIST but not on CIFAR, contributing further to previous calls that MNIST should be retired as a benchmarking dataset for adversarial robustness research. We release code for these evaluations as part of the cleverhans (Papernot et al 2018) library (ICLR reviewers should be careful not to look at who contributed these features to cleverhans to avoid de-anonymizing this submission). | [
"adversarial examples"
] | https://openreview.net/pdf?id=H1g0piA9tQ | https://openreview.net/forum?id=H1g0piA9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJg3eiqSgE",
"Hye-MhpZT7",
"S1xAN6UJT7",
"S1x0HKfqh7",
"SygQJELD3X"
],
"note_type": [
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545083636034,
1541688328663,
1541528886234,
1541183813757,
1541002202530
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper849/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper849/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper849/Authors"
],
[
"ICLR.cc/2019/Conference/Paper849/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper849/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers agree the paper is not ready for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"A good topic to explore, but suffers from methodological problems\", \"review\": \"This paper introduces a family of attack on confidence thresholding algortihms. Such algorithms are allowed to refuse to make predictions when their confidence is below a certain threshold.\\n\\nThere are certainly interesting links between such models and KWIK [1] algorithms (which are also supposed to be able to respond 'null' to queries), however they are not mentioned in this paper, which focuses mainly on evaluation methodologies.\", \"the_definition_of_the_metric_is_certainly_natural\": \"you would expect some trade-off between performance in the normal versus the adversarial regime. I am not certain why the authors don't simply measure the success rate on both natural and adversarial conditions, so as to have the performance metric uniform. Unfortunately the paper's notationleaves something to be desired, as it fails to concretely define the metric.\\nLet me do so instead, and consider the classification accuracy of a classification rule $P_t$ using a threshold $t$ under a (possibly adaptive) distribution $Q$ to be $U(P,Q)$. Then, we can consider $Q_N, Q_A$ as the normal and adversarial distribution and measure the corresponding accuracies. \\n\\nEven if we do this, however, the authors do not clarify how they propose to select the classification rule. Should they employ something like a convex combination:\\n\\\\[\\nV(P_t) := \\\\alpha U(P_t, Q_N) + (1 - \\\\alpha) U(P_t, Q_A) \\n\\\\]\\nor maybe take a nimimax approach\\n\\\\[\\nV(P_t) := \\\\min \\\\{U(P_t, Q) | Q = Q_A, Q_N\\\\}\\n\\\\]\\n\\nIn addition, the authors simply plot curves for various choices of $t$, however it is necessary to take into account the fact that measuring performance in this way and selecting $t$ aftewards amounts to a hyperparameter selection [2]. Thus, the thresholding should be chosen on an independent validation set in order to optimise the chosen performance measure, and then the choice should evaluated on a new test set with respect to the same measure $V$\\n\\nThe MaxConfidence attack is not very well described, in my opinion. However, it seems it simply wishes to find to find a single point $x \\\\in \\\\mathbb{S}$ that maximises the probability of misclassification. It is not clear to me why performance against an attack of this type is interesting to measure.\\n\\nThe main contribution of the paper seems to be the generalisation of the attack by Goodfellow et al to softmax regression. The proof of this statement is in a rather obscure place in the paper. \\n\\nI am not sure I follow the idea for the proof, or what they are trying to prove. The authors should follow a standard Theorem/Proof organisation, clearing stating assumptions and what the theorem is showing us. It seems that they want to prove that if a solution to (1) exists, then MaxConfidence() finds it. But the only definition of MaxConfidence is (1). Hence I think that their theorem is vacuous. There are quite a few details that are also unclear such as what the authors mean by 'clean example' etc. \\n\\nHowever the authors do not explain their attack very well, their definition of the performance metric is not sufficiently formal, and their evaluation methodology is weak. Since evaluation methodology is the central point of the paper, this is a serious weaknes. Finally, there doesn't seem to be a lot of connection with the conference's topic.\\n\\n[1] Li, Lihong, Michael L. Littman, and Thomas J. Walsh. 
\\\"Knows what it knows: a framework for self-aware learning.\\\" Proceedings of the 25th international conference on Machine learning. ACM, 2008.\\n\\n[2] Bengio, Samy, Johnny Mari\\u00e9thoz, and Mikaela Keller. \\\"The expected performance curve.\\\" International Conference on Machine Learning, ICML, Workshop on ROC Analysis in Machine Learning. No. EPFL-CONF-83266. 2005.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"The main topic of the paper is how to evaluate models that use confidence thresholding. The primary purpose is to compare *defenses*. However, to justify the attack strategy that we propose to use, we also compare *attacks*. Specifically, we provide an experiment demonstrating that our attack actually is stronger than the baseline. However, it is not really necessary to provide multiple experiments demonstrating that MaxConfidence is more powerful because the superiority of MaxConfidence is theoretically guaranteed.\"}",
"{\"title\": \"Hard to understand\", \"review\": \"This paper proposes an evaluation method for confidence thresholding defense models, as well as a new approach for generating of adversarial examples by choosing the wrong class with the most confidence when employing targeted attacks.\\n\\nAlthough the idea behind this paper is fairly simple, the paper is very difficult to understand. I have no idea that what is the propose of defining a new evaluation method and how this new evaluation method helps in the further design of the MaxConfidence method. Furthermore, the usage of the evaluation method unclear as well, it seems to be designed for evaluating the effectiveness of different adversarial attacks in Figure 2. However, in Figure 2, it is used for evaluating defense schemes. Again, this confuses me on what is the main topic of this paper. Indeed, why the commonly used attack success ratio or other similar measures cannot be used in the case? Intuitively, it should provide similar results to the success-failure curve.\\n\\nThe paper also lacks experimental results, and the main conclusion from these results seems to be \\\"MNIST is not suitable for benchmarking of adversarial attacks\\\". If the authors claim that the proposed MaxConfidence attack method is more powerful than the MaxLoss based attacks, they should provide more comparisons between these methods.\\n\\nMeanwhile, the computational cost on large dataset such as ImageNet could be huge, the authors should further develop the method to make sure it works in all situations.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting attack but an unclear paper with limited experimental support.\", \"review\": \"The paper presents an evaluation methodology for evaluating attacks on confidence thresholding methods and proposes a new kind of attack. In general I find the writing poor, as it is not exactly clear what the focus of the paper is - the evaluation or the new attack? The experiments lacking and the proposed evaluation methodology & theoretical guarantees trivial.\", \"major_remarks\": [\"Linking the code and asking the reviewers not to look seems like bad practice and close to violating double blind, especially when considering that the cleavhans library is well known. Should have just removed the link and cleavhans name and state it will be released after review.\", \"It is unclear what the focus of the paper is, is it the evaluation methodology or the new attack? While the evaluation methodology is presented as the main topic in title, abstract and introduction most of the paper is dedicated to the attack.\", \"The evaluation methodology is a good idea but is quiet trivial. Also, curves are nice visually but hard to compare between close competitors. A numeric value like area-under-the-curve should be better.\", \"The theoretical guarantees is also quiet trivial, more or less saying that if a confident adversarial attack exists then finding the most confident attack will be successful. Besides that the third part of the proof can be simplified significantly.\", \"The experiments are very lacking. The authors do not compare to any other attack so there is no way to evaluate the significance of their proposed method\", \"That being said, the max-confidence attack by itself sounds interesting, and might be useful even outside confidence thresholding.\", \"One interesting base-line experiment could be trying this attack on re-calibrated networks e.g. \\u201cOn Calibration of Modern Neural Networks\\u201d\", \"Another baseline for comparison could be doing just a targeted attack with highest probability wrong class.\", \"I found part 4.2 unclear\", \"In the conclusion, the first and last claims are not supported by the text in my mind.\"], \"minor_remarks\": [\"The abstract isn\\u2019t clear jumping from one topic to the next in the middle without any connection.\", \"Having Fig.1 and 2 right on the start is a bit annoying, would be better to put in the relevant spot and after the terms have been introduced.\", \"-In 6.2 the periodically in third line from the end seems out of place.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkGT6sRcFX | Infinitely Deep Infinite-Width Networks | [
"Jovana Mitrovic",
"Peter Wirnsberger",
"Charles Blundell",
"Dino Sejdinovic",
"Yee Whye Teh"
] | Infinite-width neural networks have been extensively used to study the theoretical properties underlying the extraordinary empirical success of standard, finite-width neural networks. Nevertheless, until now, infinite-width networks have been limited to at most two hidden layers. To address this shortcoming, we study the initialisation requirements of these networks and show that the main challenge for constructing them is defining the appropriate sampling distributions for the weights. Based on these observations, we propose a principled approach to weight initialisation that correctly accounts for the functional nature of the hidden layer activations and facilitates the construction of arbitrarily many infinite-width layers, thus enabling the construction of arbitrarily deep infinite-width networks. The main idea of our approach is to iteratively reparametrise the hidden-layer activations into appropriately defined reproducing kernel Hilbert spaces and use the canonical way of constructing probability distributions over these spaces for specifying the required weight distributions in a principled way. Furthermore, we examine the practical implications of this construction for standard, finite-width networks. In particular, we derive a novel weight initialisation scheme for standard, finite-width networks that takes into account the structure of the data and information about the task at hand. We demonstrate the effectiveness of this weight initialisation approach on the MNIST, CIFAR-10 and Year Prediction MSD datasets. | [
"Infinite-width networks",
"initialisation",
"kernel methods",
"reproducing kernel Hilbert spaces",
"Gaussian processes"
] | https://openreview.net/pdf?id=SkGT6sRcFX | https://openreview.net/forum?id=SkGT6sRcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlN6PoayV",
"HygAk060A7",
"B1lP1Ca0A7",
"rkxaa2b5Rm",
"S1ldNRec07",
"rylf0Tgc0m",
"r1ezww6Up7",
"BJldu_uza7",
"rJlx_rSo3m",
"HkechuUDnQ"
],
"note_type": [
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544562619754,
1543589349772,
1543589343304,
1543277764565,
1543274031938,
1543273929936,
1542014810194,
1541732463786,
1541260648082,
1541003441531
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper848/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper848/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper848/Authors"
],
[
"ICLR.cc/2019/Conference/Paper848/Authors"
],
[
"ICLR.cc/2019/Conference/Paper848/Authors"
],
[
"ICLR.cc/2019/Conference/Paper848/Authors"
],
[
"ICLR.cc/2019/Conference/Paper848/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper848/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper848/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper studies how to construct infinitely deep infinite-width networks from a theoretical point of view, and uses the results of its theoretical analysis to design a weight initialization scheme for finite-width networks. While the idea is interesting and the paper may contain novel theoretical contributions, the experimental results are weak, as pointed out by all three reviewers from several different perspectives. In particular, it seems that the presented theoretical analysis is useful mainly for weight initialization and hence has limited potential impacts. In addition, the authors have responded to neither the AC's question, nor a detailed anonymous comment that challenges the value of Proposition 1 given the previous work by Aronszajn.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Solid theoretical analysis but unconvincing experiments and limited potential impacts\"}",
"{\"comment\": \"I have several confusions regarding Proposition 1. I will try to informally describe the proof in Proposition 1. Kindly clarify if my understanding is correct or not.\", \"firstly_regarding_the_statement_of_the_proof\": \"You are trying to construct a distribution over the weights connecting two (infinitely wide) hidden layers that will ensure that the inner product between the activations and the weights is well defined. Is this correct?\", \"now_regarding_the_proof_itself\": \"Firstly, you define a distribution over the weights connecting the first and second hidden layer. Since the first hidden layer has infinite width, any weight vector connecting the first hidden layer will be infinite dimensional. We can think of this weight vector as a function of the weights connecting the visible layer and the first hidden layer. Is this correct? To be more specific, a weight vector connecting the first infinite hidden layer to a single unit of the second hidden layer is treated as a function of the weights between the visible and hidden layer?\\n\\nAssuming the above is indeed correct, you now need to define a distribution over these weights. In particular, you assume that these weights are sampled from a Gaussian Process with 0 mean and covariance function C_1. Now you claim that this choice of C_1 ensures that resultant distribution is a distribution over the RKHS induced by the first hidden layer. This is confusing for me since I am not aware of this result. Can you please point me to the exact result in (Aronszajn) where this result is mentioned? \\n\\nI think that the entire theoretical aspect of the paper relies on this result (presumably from Aronszajn). Hence, it is surprising that this result is not mentioned in the \\\"Related Works\\\" section or anywhere else in the main paper. With this result in mind, the rest of the paper becomes easy to digest.\", \"title\": \"Clarifications regarding Proposition 1\"}",
"{\"title\": \"Has an infinite deep infinite-width network being used?\", \"comment\": \"Dear authors,\", \"as_stated_in_your_abstract\": \"\\\"we propose a principled approach to weight initialisation that correctly accounts for the functional nature of the hidden layer activations and facilitates the CONSTRUCTION of arbitrarily many infinite-width layers, thus enabling the CONSTRUCTION of arbitrarily deep infinite-width networks,\\\" but all I could find in your paper are networks with fixed depths and widths. If you can show your method helps CONSTRUCT a network whose depth and width are learned from the data and could in theory go to infinite, then I could buy your claim, but at this moment, I can hardly see the reason for claiming \\\"Infinitely Deep Infinite-Width Networks\\\" in the title and elsewhere. Could you please elaborate on this point?\\n\\nThanks,\\nAC\"}",
"{\"title\": \"Novel theoretical contribution validated by experimental results\", \"comment\": \"We thank the reviewer for their time and comments.\\n\\nConcerning examples of kernels that can be constructed using our proposed approach in Section 3, the resulting kernels have been derived for some specific nonlinearities (e.g. ReLU nonlinearities by Cho & Saul 2009). Given that our proposed approach is agnostic to the choice of non-linearity, due to the iterative nature of the construction, the derived kernels will in general not be available in closed form and are thus not particularly suited for use within e.g. an SVM. However, one can examine the structural properties of these kernels. We will clarify this point in the paper. \\n\\nUnlike Cho & Saul 2009 and Wilson et al 2016 who strive to derive kernels that mimic the computations in deep networks, this paper is motivated by the desire to improve our understanding of neural networks. In particular, we are motivated by the fact that single-layer infinite-width networks have played an important role in helping us acquire a better understanding of single-layer standard, finite-width networks. Until now the construction of infinite-width networks has been limited to at most two layers, while on the other hand we currently use deep finite-width networks. Our paper is an effort towards bridging this gap by enabling the construction of deep infinite-width networks that can then be used for analysing deep finite-width networks. This should help us gain a better understanding of deep, finite-width networks that have transformed the field dramatically over the last decade. Please also note that in this paper kernels are used as tools that enable us to more easily reason about the function spaces induced by the activations of an infinite-width network and are not the main focus of the paper. In particular, the main motivation of our paper is not the derivation of kernels that mimic computations in infinite-width neural networks, but the furthering our understanding of neural networks.\\n\\nConcerning the experimental results -- The main contribution of this paper is of theoretical nature (see Proposition 1 in our submission). However, we also extensively discuss the practical implication of this results and support the direct practical relevance of our theory with experiments on benchmark datasets in two different domains, classification and regression. Given the motivation for our paper, the experimental results should be seen as showing the practical relevance of the proposed theory and not as an attempt at developing a new training heuristic for achieving state-of-the-art performance.\\n\\nTo adequately validate our proposed theoretical contribution, we need to disentangle the effect of initialisation on network performance from the effects of other training heuristics. Thus, we purposefully did not use any advanced techniques, such as data augmentation, learning rate scheduling or architecture search, all of which are commonly used to improve state-of-the-art performance. \\n\\nIn particular, to appropriately validate our theory, we tested Win-Win against commonly used initialisation methods on three benchmarks datasets in classification and regression, thus showing that the performance of Win-Win initialisation is not tied to any particular dataset or task. On all datasets considered in the paper, we have demonstrated that Win-Win either matches or exceeds the performance of the competing approaches for the considered architectures. 
In particular, for classification on MNIST, we chose the architectures that had the best performance without using advanced preprocessing and training heuristics as recorded on http://yann.lecun.com/exdb/mnist/. For CIFAR10, we used an architecture that achieves state-of-the-art performance and applied the different initialisation approaches in the last fully-connected layer. Note that we didn\\u2019t use any advanced training heuristics here, but just made use of the underlying architecture.\\n\\nWith respect to the reviewer\\u2019s comment about our experimental setup, we hope that our reply explains our main motivation for choosing the tasks, architectures and datasets for the experimental studies presented in the paper.\\n\\nFollowing the reviewer\\u2019s suggestion, we have now also implemented Win-Win for convolutional layers and are currently running experiments. We will update this thread and the paper with the results as soon as they are available. \\n\\nIf you have any further comments, suggestions or questions, we would appreciate them.\\nGiven the above explanations, particularly our responses to the raised issues, we would kindly like to ask you to consider raising the rating of this paper. Thank you!\"}",
"{\"title\": \"Continued: Theoretical contribution validated by experimental results\", \"comment\": \"Concerning the experimental results -- Note that the main contribution of this paper is of theoretical nature (Proposition 1 in our submission). Furthermore, we extensively discuss the practical implications of the developed theory and conduct experiments on benchmark datasets in two different domains, classification and regression, to demonstrate the direct and practical relevance of our theory. Given that the main motivation of this paper is expanding our understanding of neural networks, the discussed practical implications and experimental results should be seen as highlighting the practical relevance of the proposed theory and not as an attempt at developing a new training heuristic for achieving state-of-the-art performance.\\n\\nWith regard to the comment about unconvincing results, we think that this should be put into perspective with the goal of the paper and the architectures considered. In order to adequately validate the developed theory, we need to disentangle the effect of initialisation on network performance from the effects of other training heuristics. Thus, we purposefully did not use any advanced techniques, such as data augmentation, learning rate scheduling or architecture search, all of which are commonly used to improve state-of-the-art performance. \\n\\nThus, to appropriately validate our theory, we tested Win-Win against commonly used initialisation methods on three benchmarks datasets in classification and regression, thus showing that the performance of Win-Win initialisation is not tied to any particular dataset or task. On all datasets considered in the paper, we have demonstrated that Win-Win either matches or exceeds the performance of the competing approaches for the considered architectures. In particular, for classification on MNIST, we chose the architectures that had the best performance without using advanced preprocessing and training heuristics as recorded on http://yann.lecun.com/exdb/mnist/. For CIFAR10, we used an architecture that achieves state-of-the-art performance and applied the different initialisation approaches in the last fully-connected layer. Note that we didn\\u2019t use any advanced training heuristics here, but just made use of the underlying architecture. \\n\\nWe hope that our reply to the reviewer\\u2019s comment about experimental results explains our main motivation for choosing the tasks, architectures and datasets for the experimental studies presented in the paper, and thereby also addresses the reviewer\\u2019s final comment on our results being \\u201c inadequate and unconvincing\\u201d. If you have any further comments, suggestions or questions, we would appreciate them.\\n\\nGiven the above explanations, particularly our responses to the raised issues, we would kindly like to ask you to consider raising the rating of this paper. Thank you!\"}",
"{\"title\": \"Theoretical contribution validated by experimental results\", \"comment\": \"We thank the reviewer for their time and comments.\\n\\nIn our proposed method, we define a kernel at the level of a layer and construct the distribution over the weights based on that kernel. In particular, our method does not construct kernel mappings between layers and as such does not use these for weight initialisation. Furthermore, the distribution constructed over the RKHS H_{k_i} is not a result of the representer theorem as no objective is being minimised with respect to any training data. Specifically, this distribution is derived by studying the structure of the induced RKHS and is a known result from RKHS theory. Given the constructed weight distribution, this particular form of the weights arises due to the fact that the covariance function of the GP is a convolution of kernels. This can be easily verified by computing the covariance between the weights. Note that although the form of the weights does resemble the result from the representer theorem, it is not based on it, but follows from results on RKHS distributions as discussed above.\\n\\nBefore addressing the specific issues identified by the reviewer, we would first like to draw the attention of the reviewer to the main contribution of this paper (Proposition 1 in our submission), namely a method for constructing infinite-width networks with arbitrarily many layers. To the best of our knowledge, this is the first method that allows us to go beyond just two layers of infinite width. As single-layer infinite-width networks have historically played an important role in helping us acquire a deeper understanding of standard, finite-width networks, being able to construct deep infinite-width networks should help us gain a better understanding of deep, finite-width networks. \\n\\nAlthough the main contribution of this paper is of theoretical nature, we also discuss its immediate practical implications. In particular, we can transfer our findings from the infinite-width to the finite-width case. Specifically, we show how our proposed infinite-width construction approach gives rise to a novel weight initialisation scheme for finite-width networks that we term Win-Win. Furthermore, we conducted experiments with Win-Win on benchmark datasets across two different task domains, classification and regression, thus showcasing that our theoretical contribution has direct and practical relevance.\", \"we_now_address_the_issues_raised_by_the_reviewer\": \"The infinite width of the layers is given by the fact that there are infinitely many weights connecting each pair of layers. In particular, the infinite width is not given by any kernel as we define our kernels at the level of individual layers that are of infinite width to begin with. Further, we would like to draw the attention of the reviewer to the fact that, after proposing a construction approach for infinite-width networks with arbitrarily many layers, we examine the implications of this construction for finite-width networks. In particular, the practical implications discussed are motivated by the idea of transferring the findings from infinite-width networks to their finite-width counterparts and thus hopefully enabling a better understanding of finite-width networks in general. 
Note that this is not done with the idea of practically implementing the proposed infinite-width construction approach.\\nFurthermore, we note that we use Monte Carlo sampling to approximate the integrals derived in the infinite-width case. As we discuss in the paper, this can be viewed as a random features expansion of the kernels. Note, however, that in general this will not be a random Fourier features expansion because the underlying kernels are not guaranteed to be shift invariant.\\n\\nWith regard to the comment about ensuring that the \\u201capproximated weights are still in the same space\\u201d, we would like to point out that, in the finite-width case, this is automatically ensured as the weights are linear combinations of the activations. In particular, in the finite-width case, the required spaces are all subspaces of R^d , where d is the width of the respective layer. Thus, by examining the dimensionality of the weight vectors, it is straightforward to verify that the weights are indeed in the appropriate space. We understand this potential source of confusion and will provide further clarification in the paper.\"}",
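
As a concrete reading of the finite-width statement above ("the weights are linear combinations of the activations"), here is a minimal sketch of such a data-dependent initialisation. The Gaussian coefficients and the 1/sqrt(n) scaling are our assumptions for illustration; this is not the authors' released code.

```python
import numpy as np

def activation_span_init(activations, fan_out, scale=1.0, rng=np.random):
    """Initialise a weight matrix whose columns lie in the span of the given
    hidden activations (rows: n data points, columns: fan_in units)."""
    n, fan_in = activations.shape
    coeffs = rng.randn(n, fan_out) * (scale / np.sqrt(n))  # Gaussian mixing weights
    return activations.T @ coeffs                          # shape: (fan_in, fan_out)

h = np.maximum(0.0, np.random.randn(128, 256))  # e.g. ReLU activations of a batch
W = activation_span_init(h, fan_out=64)
```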
"{\"title\": \"Novel theoretical contribution with practical consequences supported by experiments\", \"comment\": \"We thank the reviewer for their time and comments.\\n\\nIn the case of infinite-width networks, both the activations and weight \\u201cvectors\\u201d (connecting one layer with one neuron in the subsequent layer) are infinite-dimensional \\u201cvectors\\u201d, i.e. they are representable by functions. In order to be able to compute the inner product between activations and weights, these objects need to be in the same function space. If they are not in the same function space, then we cannot compute inner products between these quantities and therefore cannot define infinite-width networks correctly. If we assume that the activations are sufficiently well-behaved then the function space that they span will be a subset of, for example, L_{2}. In that case, we can characterise these functions in terms of their norms. \\n\\nWith regard to the comment on the norm issue, in general we do not know how well-behaved activations are and are therefore not able to characterize the space of activations in terms of norms. Thus, in general, we have no way of quantifying the space of activations in terms of norms and cannot further address this issue apart from emphasizing that the activations and weights need to be in the same function space.\\n\\nThe main contribution of this paper is theoretical, namely a method to construct infinite-width networks with arbitrarily many layers (Proposition 1 in our submission). To the best of our knowledge, this is the first method that allows us to go beyond just two layers of infinite width. Historically, single-layer infinite-width networks have played an important role in helping us acquire a deeper understanding of standard, finite-width networks. Being able to construct deep infinite-width networks should help us gain a better understanding of deep, finite-width networks that have transformed the field dramatically over the last decade. This summarises our main motivation for developing a construction approach for deep infinite-width networks. \\n\\nApart from purely theoretical merit, our theoretical contribution also has immediate practical implications. In particular, we can transfer some of our findings from the infinite-width case to finite-width networks. Specifically, we show how our proposed construction approach gives rise to a novel weight initialisation scheme for finite-width networks that we term Win-Win.\\n\\nDue to the theoretical nature and mathematical complexity involved, we agree that this paper is rather technical. We therefore tried to ease the reader into the challenges of defining deep infinite-width networks (see Section 3.1). We thank the reviewer for the comment about clarity and welcome any further comments on what we could add to the text to make this topic more approachable.\\n\\nAlthough the main contribution of this paper is theoretical in nature, we also discuss some practical implications of the developed theory. Note that the main motivation of this paper is expanding our understanding of neural networks. Thus, the practical implications discussed should be seen as highlighting the practical relevance of the proposed theory and not as an attempt at developing a new training heuristic for achieving state-of-the-art performance. \\n\\nTo highlight the implications of the developed theory, we conducted experiments on benchmark datasets across different task domains. 
In particular, the experimental results are there to demonstrate that our theoretical contribution has direct and practical relevance, and to validate the theory developed in the paper. \\n\\nFurthermore, in order to adequately validate the developed theory, we needed to disentangle the effect of initialisation on network performance from the effects of other training heuristics. To this end, we purposefully did not use any advanced techniques, such as data augmentation or learning rate scheduling, all of which are commonly used to improve state-of-the-art performance. In the experimental setup, we tested Win-Win against commonly used initialisation methods on three benchmark datasets in classification and regression, thus showing that the advantages of Win-Win are not tied to one particular dataset or even one particular task.\\n\\nIf you have any further comments, suggestions or questions, we would appreciate them. Given the above explanations, we would also kindly like to ask you to consider raising the rating of this paper. Thank you!\"}",
"{\"title\": \"The idea proposed in the paper is interesting but the paper appears quite incomplete\", \"review\": \"In this paper, the authors propose deep neural networks of infinite width. The primary challenge in such networks is defining a distribution over the weights connecting two layers of infinite width. The authors tackle this by using Gaussian Processes for these distributions with the covariance functions defined in a canonical manner. Inspired by these networks, the authors propose weight initialization schemes for finite width networks.\\n\\nWhile the idea proposed in the paper is interesting, the paper appears quite incomplete. In particular, the authors do not mention a single example of a kernel that can be constructed using the process outlined in Section 3. Furthermore, the only application of these infinitely wide networks proposed in this paper is for initialization of the weights of finite width networks. It will perhaps be more interesting if the authors can use the kernels obtained for supervised learning tasks using kernel machines (as done in Cho & Saul 2009) or as the covariance function of a Gaussian process (as done in Wilson et al. 2014).\\n\\nMoreover, the experiments conducted on finite width networks are not enough to justify the utility of this initialization scheme. It will be useful if the authors can test the performance of the state-of-the-art networks for CIFAR-10/100 and ImageNet, where the weights of the last fully connected layer have been sampled from different distributions. An extension to initialization for convolutional layers will further strengthen the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Systematic theoretical work with some supporting experiments\", \"review\": \"Summary:\\nI'm not very familiar with the work this is building off of, but my summary is as follows:\\nThe authors look at the problem of defining multilayer infinite width neural networks. The main challenge is that the weights (which are in some sense now a function) must be appropriately sampled to ensure that norms don't explode. \\n\\nThis has only been done for two layers before, and the authors derive how to do this for more than two hidden layers using RKHSs. This initialization is called Win-Win, and is compared to different initializations on a few different datasets.\", \"clarity\": \"This paper is quite technical and hard to follow without knowledge of the prior work. I think the authors could have been a little clearer on some of the challenges. E.g. instead of talking about the weights needing to be \\\"in the same function space\\\", it would be helpful to remphasise the norm issue.\", \"comments\": \"I'm not sure about the high level motivation for developing networks of this kind. In particular, none of the performance numbers are near state of the art (not necessary for developing new promising methods!) but I don't know exactly what the initialization buys in this setting.\\n\\nAlso, it would have been nice to see whether the Win-Win initialization is only useful for larger width networks compared to smaller width i.e. do other initialization schemes work better in this latter setting?\\n\\nThe derivations are interesting though, so I still recommend accept.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"a weight initialization approach to enable infinitely deep and infinite-width networks, experimental results on small datasets\", \"review\": \"Pros:\\n\\nThis paper uses kernel mappings between any two layers for weight initialisation. Using the representer theorem, a proper distribution for weights is constructed in H_{k_i} instead of being learned by \\\\phi_i, and then is formulated as a GP.\", \"cons\": \"However, there are some key issues.\\n1. The so-called \\u201cinfinite width\\u201d is just yielded by kernels in RKHS for weight initialization. For practical implementation, the authors use this scheme with random Fourier features to construct finite-width network. A key issue is that how to guarantee that the approximated weights are still in the same space? For example, weights can be in RKHS, but their approximation might be not in RKHS. See in [S1] for details.\\n \\n[S1] Generalization Properties of Learning with Random Features, NIPS 2017.\\n \\n2. Experimental part is not very convincing. First, the authors just compare different initialization schemes. The used architectures are simple and not representative. Second, the overall performance is not satisfactory, and the compared classification datasets are quite small. Overall, the experimental results are inadequate and unconvincing.\", \"summary\": \"The paper attempts to proposal a weight initialization scheme to enable infinite deep infinite-width networks. However, there are some key issues not address such as whether the approximated weights are still in the same space and the limited experimental results.\", \"response_to_rebuttal\": \"The authors have addressed my question about the weights being still in the same RKHS. I still think the motivation and experiments are not very satisfactory. \\n\\nTherefore the paper is very borderline. However, I would like to bump my rating a bit higher.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJgTps0qtQ | Exploiting Environmental Variation to Improve Policy Robustness in Reinforcement Learning | [
"Siddharth Mysore",
"Robert Platt",
"Kate Saenko"
] | Conventional reinforcement learning rarely considers how the physical variations in the environment (e.g. mass, drag, etc.) affect the policy learned by the agent. In this paper, we explore how changes in the environment affect policy generalization. We observe experimentally that, for each task we considered, there exists an optimal environment setting that results in the most robust policy that generalizes well to future environments. We propose a novel method to exploit this observation to develop robust actor policies, by automatically developing a sampling curriculum over environment settings to use in training. Ours is a model-free approach and experiments demonstrate that the performance of our method is on par with the best policies found by an exhaustive grid search, while bearing a significantly lower computational cost. | [
"Reinforcement Learning",
"Policy Robustness",
"Policy generalization",
"Automated Curriculum"
] | https://openreview.net/pdf?id=SJgTps0qtQ | https://openreview.net/forum?id=SJgTps0qtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxZ_GAxlV",
"H1ezADy9CQ",
"rygMev1q0m",
"r1lZIB19CX",
"HyeiUJ1laQ",
"SklnoXrk6m",
"BJgPWcE0hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770152616,
1543268297985,
1543268074201,
1543267656744,
1541562195087,
1541522339958,
1541454335382
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper847/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper847/Authors"
],
[
"ICLR.cc/2019/Conference/Paper847/Authors"
],
[
"ICLR.cc/2019/Conference/Paper847/Authors"
],
[
"ICLR.cc/2019/Conference/Paper847/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper847/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper847/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a strategy for randomizing the underlying physical hyper-parameters of RL environments to improve policy's robustness. The paper has a simple and effective idea, however, the machine learning content is minimal. I agree with the reviewers that in order for the paper to pass the bar at ICLR, either the proposed ideas need to be extended theoretically or it should be backed with much more convincing results. Please take the reviewers' feedback into account and improve the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs improvement\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your review.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your review.\\n\\n--------------------------------\\nI) Response to your questions:\\n\\n1. K and M are unrelated. 'M' is the total number of environment settings can be changed. K is the number of 'tasks' initialized under a specific environment configuration - i.e. after sampling some settings, we train on those settings for K episodes without changing them\\n\\n2. Tasks are initialized uniformly over the state space of the task. Environment settings are initialized per the sampling policy defined by the bandit.\\n\\n--------------------------------\\nII) Please clarify:\\n\\nIn your review, you have a sentence, which seems to end abruptly: \\\"For instance, the authors claim that\\\". Could you please clarify what you meant to say?\\n\\nYou mention \\u201cmore sophisticated and interesting continuous control environments such as control suite [1] or manipulation suite [2]\\u201d, however neither references [1] nor [2] in your review seem to address such suites. Could you please clarify which suites you are referring to?\\n\\n--------------------------------\\nIII) Primary contributions of this work and novelty:\\n\\nPrimarily, we sought to introduce a learning scheme that would combat the problem of policy brittleness in RL when policies are exposed to environmental behavior not seen in training. We recognize that work has been done in model-based RL to combat these problems, however, we specifically focus on the model-free setting.\\n\\nWe focus specifically on a model-free formulation because measuring the properties of a test environment may sometimes be impossible or infeasible - for example, in a real-world task, adding sensors and processing to perceive the properties of the environment may prove too expensive to be feasible. If the action policy could instead be robust to changes, the problem is somewhat alleviated. \\n\\nTo your point about model parameters needing to be known a priori during training, we would like to note that we also test our trained policies against environmental settings that were not explicitly trained for. For example, as shown in Table 3, we train the pendulum with only masses 2, 4, 6, 8 and 10, but masses 1, 3, 5, 7 and 9 are also tested and appear to be implicitly handled. We observed similar trends with ball-pushing. The apparent efficacy demonstrated by this result also seemed to justify the choice of discrete bandit scheme.\\n\\nWhen discussing inadvertent generalization, we were specifically referring to the observation that the brittleness of RL policies seems to change quite significantly, and is even seemingly mitigated, in response to small changes in the environment during training, in ways that allow it to remain robust despite lacking a model of the environment in test time. To the best of our knowledge, this has not been addressed in prior work.\\n\\n--------------------------------\\nIV) Addressing related works:\\n\\n1. We note that model-free randomization (reference [4] in your review) is effectively uniform random sampling employed during training - which is equivalent to the \\u2018joint\\u2019 training, which we use a baseline.\\n\\n2. Adaptive randomization (reference [3]) appears to adapt the training distribution by testing policies in a target environment . However, in our work, we do not assume access to tests on target domains during training, disallowing us from adopting a similar adaptation approach.\\n\\n3. 
We sought to demonstrate our algorithm on a few controls tasks, as is the current norm in deep RL, where it was tentatively more intuitive to understand the control schemes and strategies. However, we view our approach as more generally applicable to deep RL and thus did not focus on comparing it against controls techniques like MRAC, MPC or LQR. Unlike traditional controls, our method is directly extensible to tasks where variations in the task environment are not limited to changes in control system dynamics - examples include training object-detection/recognition to be more robust to scene lighting, or game AI to be more robust to level design.\"}",
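To make the sampling scheme described in this exchange concrete, here is a minimal Python sketch of an EXP3 curriculum over M discrete environment settings, training K episodes per sampled setting as the authors describe. The helper `train_episodes` and the rescaling of episodic reward into [0, 1] are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an EXP3-based sampling curriculum over environment settings.
# `train_episodes` (runs K episodes of, e.g., DDPG at a fixed setting and returns a
# mean episodic reward) is an assumed helper; rewards are assumed pre-scaled to [0, 1].
import numpy as np

def exp3_curriculum(settings, train_episodes, num_rounds, K, gamma=0.1):
    M = len(settings)
    weights = np.ones(M)
    for _ in range(num_rounds):
        probs = (1 - gamma) * weights / weights.sum() + gamma / M
        i = np.random.choice(M, p=probs)
        reward = train_episodes(settings[i], K)          # train K episodes at this setting
        payoff = float(np.clip(reward, 0.0, 1.0))        # EXP3 expects payoffs in [0, 1]
        weights[i] *= np.exp(gamma * payoff / (probs[i] * M))  # importance-weighted update
    return weights / weights.sum()                       # final sampling distribution
```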
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your review.\\n\\nCitation issues have been resolved in the latest revision of the paper.\\n\\nWhile a PID controller is likely to solve the problems we addressed, our focus was not on being able to implement a control policy for individual tasks, but rather to demonstrate a technique to combat policy brittleness in generic deep RL - where learned policies fail when presented with data from domains different to those they were trained on, which is often a problem in machine learning.\"}",
"{\"title\": \"An interesting view point on robustness.\", \"review\": \"This paper investigated the robustness of RL policies learning under different environmental conditions.\\n\\nBased on the observations that policies learnt in different experimental settings lead to different generalizability, the authors proposed an EXP3 based reward-guided curriculum for improving policy robustness. The algorithm was tested on inverse pendulum, cart-pole balancing, and ball-pushing in OpenAI gym.\\n\\nThe paper is well-organized and easy to understand. Written errors didn't influence understanding. Papers in the references were not properly cited.\\n\\nIt is an interesting discovery that different environment brewed different policies with different robustness/generalizability in daily life. However, these are also easily derivable in physics, especially in the three experiments tested in the paper. It would be more complete to compare with PID controllers.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Curriculum design for dynamics randomization with a Bandits style method during training.\", \"review\": \"The paper looks at the problem of generalization across physical parameter varaition in learning for continuous control. The paper presents a method to develop a sampling based curriculum over env. settings for training robust agents.\\n\\n\\n* The paper makes an interesting observation on inadvertent generalization in robust policy learning. \\nHowever, the examples in both the cartpole and the pendulum cases seem not to be watertight. \\nFor instance, the authors claim that \\nBut from a dynamical system perspective in both cases, the controller is operating near limits. \\nThe solution and subsequent generalization depend more on the topology of the solution space. \\nA heavy Pendulum is an overdamped system and required the policy to operate at the limits of action to generate momentum for swing up. Hence a solution for a lighter pendulum in implicitly included. Similarly, the rolling ball is an underdamped system, and where the policy operates near zero limits in light ball case to prevent the system from going unstable. Adding mass results in damping which makes it easier. In this case, as well the solution space is implicitly contained.\\n\\n\\nBut this is not a novel observation. Similar observations have been made for Robust control and Model-Reference Adaptive Control. \\nThe paper also overlooks a number of related works in model-free randomization [4], adaptive randomization [3], adversarial randomization [5,6]. The method also does not compare with model-based methods for adaptive policy learning and iLQR based methods to handle this problem [2, 7].\\n\\n\\nThe argument that the method is model-free is perhaps not as acceptable since the model parameters need to be known apriori for adaptation. The policy itself may be model-free but that is a design choice. \\nA good experimental evaluation for this is generalization across known unknowns and unknown unknowns. \\n\\n\\n* The algorithm itself is reasonable but the problem setup and choice of a discrete dynamics parameter choices are questionable. The bandit style method operates over a discrete decision set. \\nIt also assumes in the multi-parameter setting that they are independent, which may not be true very often. \\n\\nThe algorithm proposed itself isnt novel, but would have been justified if the results supported the use of such a method. \\n\\n* Experiments are quite weak. \\nBoth the experimental domains are rather simplistic with smooth nonlinear dynamics. There are more sophisticated and interesting continuous control environments such as control suite [1] or manipulation suite [2]. \\n\\nIt would be useful to see how tis method works in more complicated domains and how the performance compares with simpler methods such as joint brute-force randomization both in performance and in computation.\", \"questions\": \"1. Please provide details of Algorithm 1. How are the quantities K and M related? \\n2. What is the process of task initialization? What information is required and what priors are used. Uniform prior over what range?\\n\\n\\nIn summary, the authors explore an interesting adaptive curriculum design method. However, in its current form, the work needs more thought and empirical evaluation for the sake of completeness.\", \"references\": \"1. Model Reference Adaptive Control [https://doi.org/10.1007/978-1-4471-5102-9_116-1\\n]\\n2. 
ADAPT: Zero-Shot Adaptive Policy Transfer for Stochastic Dynamical Systems [https://arxiv.org/abs/1707.04674]\\n3. EPOpt: Learning Robust Neural Network Policies Using Model Ensembles [https://arxiv.org/abs/1610.01283]\\n4. Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World\\n[https://arxiv.org/abs/1610.01283]\\n5. Certifying Some Distributional Robustness with Principled Adversarial Training [https://arxiv.org/pdf/1710.10571.pdf]\\n6. Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations [http://vision.stanford.edu/pdf/mandlekar2017iros.pdf]\\n7. Synthesis and Stabilization of Complex Behaviors through Online Trajectory Optimization [https://homes.cs.washington.edu/~todorov/papers/TassaIROS12.pdf]\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The papers proposes methods to robustify reinforcement learning algorithms against environment uncertainty which arises due to parametric variability. This is a interesting paper with promising results. What would make this paper a clear accept is the addition of experiments with high dimensional systems with more unknown parameters.\", \"review\": [\"Does the paper present substantively new ideas or explore an under-explored or highly novel question?\", \"The paper claimed that there is limited work on the investigating the sensitivity of RL caused by the physics variations of the environment, such as object weight, surface friction, arm dynamics, etc. So the paper proposed learning a stochastic curriculum, guided by episodic reward signals (which is their contribution compared with previous related work) to develop policies robust to environmental perturbation. Overall the combination of ideas is novel but the experimental results are limited in scope.\", \"Does the results substantively advance the state of the art?\", \"The results advance the state of the art, since they are compared against : 1) the best results observed via a grid search (oracle) on policies trained exclusively on specific individual environment settings; 2) Policies trained under a mixed training structure, where the environment settings are varied every episode during training, with the episode settings drawn uniformly at random from a list of values of interest. Their 3 experiment results are competitive with 1) and much better than 2).\", \"Will a substantial fraction of the ICLR attendees be interested in reading this paper?\", \"Yes, because the robustness of RL policies to changes in the physic parameters of the environment has not been well explored. Although previous investigations exist, and this paper\\u2019s algorithm is the combination of EXP3 and DDPG, it is still interesting to see them combined together to solve model uncertainty problem of RL with very good simulation results.\", \"Would I send this paper to one of my colleagues to read?\", \"I would definitely send the paper to my colleagues to read.\", \"In terms of quality:\", \"Clear motivation; substantiated literature review; but the algorithms proposed are not novel and the question of whether the method will scale to more unknown parameters is not answered.\", \"I terms of clarity:\", \"Easy to read.\\u2013Experimental evaluation is clearly presented.\", \"Originality: The problem of developing an automated curriculum for learning generalization over environment settings for a given RL task is formulated as a multi-armed bandit problem, and EXP3 algorithm is used to minimize regret and maximize the actor\\u2019s rewards. Itis a very interesting application of EXP3, although such inspiration is drawn from a former multi-task NLP paper Graves et al. (2017).\", \"In terms of significance:\", \"The paper is definitely interesting and presents an promising direction. The significance is limited because of the simplicity of the examples considered in the experimental session. It would be interesting to see how this method performs in problems with more states and more unknown parameters.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJgTTjA9tX | The Comparative Power of ReLU Networks and Polynomial Kernels in the Presence of Sparse Latent Structure | [
"Frederic Koehler",
"Andrej Risteski"
] | There has been a large amount of interest, both in the past and particularly recently, into the relative advantage of different families of universal function approximators, for instance neural networks, polynomials, rational functions, etc. However, current research has focused almost exclusively on understanding this problem in a worst case setting: e.g. characterizing the best L1 or L_{infty} approximation in a box (or sometimes, even under an adversarially constructed data distribution.) In this setting many classical tools from approximation theory can be effectively used.
However, in typical applications we expect data to be high dimensional, but structured -- so, it would only be important to approximate the desired function well on the relevant part of its domain, e.g. a small manifold on which real input data actually lies. Moreover, even within this domain the desired quality of approximation may not be uniform; for instance in classification problems, the approximation needs to be more accurate near the decision boundary. These issues, to the best of our knowledge, have remained unexplored until now.
With this in mind, we analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables. We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations. Our results both involve new (complex-analytic) techniques, which may be of independent interest, and show substantial qualitative differences with what is known in the worst-case setting. | [
"theory",
"representational power",
"universal approximators",
"polynomial kernels",
"latent sparsity",
"beyond worst case",
"separation result"
] | https://openreview.net/pdf?id=rJgTTjA9tX | https://openreview.net/forum?id=rJgTTjA9tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkg1sk6Ng4",
"Hkl1W-vLRm",
"Sylr0yhhhX",
"SyeekjH9h7",
"ByeUreMZhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545027479111,
1543037174964,
1541353420918,
1541196504376,
1540591678062
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper845/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper845/Authors"
],
[
"ICLR.cc/2019/Conference/Paper845/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper845/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper845/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper makes a substantial contribution to the understanding of the approximation ability of deep networks in comparison to classical approximation classes, such as polynomials. Strong results are given that show fundamental advantages for neural network function approximators in the presence of a natural form of latent structure. The analysis techniques required to achieve these results are novel and worth reporting to the community. The reviewers are uniformly supportive.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting new analysis of function approximation in the presence of sparse latent structure\"}",
"{\"title\": \"Response to Reviewers\", \"comment\": \"We thank the reviewers for their valuable feedback, which we have incorporated into the new revision of the paper. In particular, in response to AnonReviewer2, we added a discussion of a related work by Zhang et al. on kernel methods simulating neural networks and have added more details to both the proofs and proof sketches.\", \"anonreviewer2_and_anonreviewer1_asked_about_more_complex_models\": \"we agree that sparse regression is a relatively simple model and that it would be nice to study more complex models as well. Since latent sparsity is a common feature in many models, this seemed like the natural place to start -- we hope the analysis of more sophisticated models will follow.\\n\\nAnonReviewer2 also asked about tightness of the dependence on \\\\mu: for noisy sparse linear regression, the polynomial dependence on \\\\mu cannot be significantly improved due to issues of computational complexity. For instance, https://arxiv.org/pdf/1402.1918.pdf show that sparse linear regression with a statistical rate better than polynomial in \\\\mu is computationally hard. If, for example, small-degree polynomials existed (with dependency better than polynomial in \\\\mu), these computational hardness results would be violated.\"}",
"{\"title\": \"Review\", \"review\": \"This paper studies the problem of understanding the representation power of neural nets with Relu activations for representing structured data. In order to formalize this, the authors consider data generated from a sparse generative model as follows: A sparse m-dimensional vector Z is sampled from a distribution over sparse vectors. In input X is formed\\nas AZ, where A is an incoherent matrix. The corresponding output is Y= w. X. The goal is to fit the data of the form (X_i, Y_i). The main result of the paper is that a 2-layer ReLU network can fit the data with near optimal error. On the other hand, low degree polynomials~(of degree up to log m) cannot fit the data with non-trivial error. Finally,\\nthe authors also show that polynomials of degree polylog(m) can, in fact, fit the data as well as a 2-layer ReLU network. The paper is well written and provides new insights into the representation power of neural nets. It is also nice to know that ReLU networks can be approximated by low degree polynomials in the non-worst case scenario. This\\nis a good paper and I recommend acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting \\\"Relevant domain\\\" based approximation of ReLU to approximate sparse latent structures\", \"review\": [\"The paper studies the representational power of two-layer ReLU networks and polynomials for approximating a linear generative model for data with sparsity in the latent vector. They show that ReLU networks achieve optimal rate whereas low degree polynomials get a much worse rate.\", \"Overall, the results are strong, the authors provide a lower bound on the degree of polynomial needed to approximate the model indicating the power of non-linearity. The observation of moving away from uniform approximators is well-motivated. The approximation theorem for ReLU is intriguing and uses new ideas which I have not seen before and are potentially useful in other applications. So far, only rational functions have been able to give such approximation guarantees. However, the motivation for studying sparse linear regression from a representation view-point is not very clear. Ideally, you would like to study representation for more complex models.\", \"Questions/Comments:\", \"Related work is missing prior work at the intersection of kernel methods and neural networks, please update.\", \"Define notation before using, for example, \\\\rho_\\\\tau^{\\u2a02m}\", \"Expand proof sketches, they are not very clear, also full proofs are written with not much detail.\", \"Is the dependence on \\\\mu tight? The current dependence sort of suggests that you need the observation matrix to be very close to identity.\", \"Proof of Lemma B.1 is unclear, could you explain how you deduce the lemma from the inequality?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The Comparative Power of ReLU Networks and Polynomial Kernels in the Presence of Sparse Latent Structure\", \"review\": \"In this paper, authors analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables. They give an almost-tight theoretical analysis of the performance and verify them with simulations.\\n \\nAuthors motivated the theoretical analysis from typical applications, for which the desired function can be only important to be approximated well on the relevant part of domains. Instead of formalizing the above problem, authors tackle a particular simple question. However, it is not easy to understand the relationships between the two problems.\\n \\nA regression task is studied where the data has a sparser latent structure. Authors measure the performance of estimators via the expected reconstruction error from theoretical perspectives for both two-layer ReLU network and polynomial kernel. Empirical experiments will be even better to show the performance of some applications consistent with the theoretical results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJeapjA5FX | GEOMETRIC AUGMENTATION FOR ROBUST NEURAL NETWORK CLASSIFIERS | [
"Robert M. Taylor",
"Yusong Tan"
] | We introduce a novel geometric perspective and unsupervised model augmentation framework for transforming traditional deep (convolutional) neural networks into adversarially robust classifiers. Class-conditional probability densities based on Bayesian nonparametric mixtures of factor analyzers (BNP-MFA) over the input space are used to design soft decision labels for feature to label isometry. Class-conditional distributions over features are also learned using BNP-MFA to develop plug-in maximum a posteriori (MAP) classifiers to replace the traditional multinomial logistic softmax classification layers. This novel unsupervised augmented framework, which we call geometrically robust networks (GRN), is applied to CIFAR-10, CIFAR-100, and to Radio-ML (a time series dataset for radio modulation recognition). We demonstrate the robustness of GRN models to adversarial attacks from the fast gradient sign method, Carlini-Wagner, and projected gradient descent. | [
"Bayesian nonparametric",
"robust",
"deep neural network",
"classifier",
"unsupervised learning",
"geometric"
] | https://openreview.net/pdf?id=BJeapjA5FX | https://openreview.net/forum?id=BJeapjA5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byg7RfXlJN",
"S1xets1S6Q",
"HkewPPDF3Q",
"ByglSPbvnX"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543676618973,
1541892983786,
1541138271074,
1540982584236
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper844/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper844/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper844/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper844/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All three reviewers feel that the paper needs to provide more convincing results to support their robustness claim, in addition to a number of other issues that need to be clarified/improved. The authors did not provide any response.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"More convincing experiments are needed\"}",
"{\"title\": \"The paper can be much improved by providing more evidence of the robustness to adversarial attack and advantages over other models.\", \"review\": \"The paper is working on a robust classifier that consists of two stages. The first stage performs unsupervised conditional kernel density estimates (KDE) of the covariate vectors, and the second stage is feature extractions and classification. I appreciate the authors' efforts to clarify the intuition, but more technical details and experiments can be provided to support their arguments. My questions and comments are below.\\n\\n1. Page 2. \\\"this means the stochastic gradient descent training algorithm minimizing...\\\" Is the problem because of SGD or the structure of NN? I think the reason might be the latter, consider logistic regression, which can be seen as a single-layer NN, does not suffer such a problem. \\n2. I know the KDE part is from an existing paper, but more technical details can make the paper clearer and some statements are questionable. Specifically, what basis vectors are used for (3)? Is it really speedy and scalable (Page 4, Section 3.1) for BNP-MFA if using Gibbs sampling? Is it the reason why the experiments in Table 1 is incomplete?\\n3. For Eqn (7), how do you calculate \\\\beta's to \\\"scale the correct class label higher than incorrect classes for the cases...?\\\"\\n4. Is the proposed model robust to all kinds of attacks, like gradient based noise, and outliers which locates far away from the corresponding cluster?\\n5. Can you provide some experiments to show the advantage over other approaches?[1]\\n\\n\\nI highly encourage the use of BNP KDE which has many advantages as stated in the paper. But the authors may have to solve the problem of scalability and show advantages over other approaches.\\n\\n[1]Uli\\u010dn\\u00fd, Matej, Jens Lundstr\\u00f6m, and Stefan Byttner. \\\"Robustness of deep convolutional neural networks for image recognition.\\\" International Symposium on Intelligent Computing Systems. Springer, Cham, 2016.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting work but more comprehensive evaluations needed\", \"review\": \"This paper proposes geometrically robust networks (GRN), which applies geometric perspective and unsupervised model augmentation to transform traditional deep neural networks into adversarial robust classifiers. Promising experimental results against several adversarial attacks are presented as well.\", \"the_bnp_mfa_are_applied_twice_in_the_framework\": \"one for getting the soft labels, and the other for getting the predictions through MAP estimation. There are existing works which are in the same line as the second part: deep kNN [1], and simple cache model [2] for example, where similarities to training examples are used to derive the test prediction and substantial increase of the robustness against adversarial attacks considered in this work have also been shown.\", \"these_raise_two_questions\": \"(1) How much does the soft label encoding help increase the robustness?\\n(2) How does the proposed model compare with the deep kNN and the simple cache model, which are much simpler?\", \"some_minor_issues\": [\"The unsupervised learning for label encoding is performed on the input space, the image pixel for example. But it is known that they are not good features for image recognition.\", \"It is unclear which part of the network is considered as \\\"feature extraction\\\" part which is used for MAP estimation in the experiments.\", \"It would be nicer to have results with different architectures.\", \"[1] N. Papernot and P. McDaniel. Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. arXiv:1803.04765.\", \"[2] E. Orhan. A simple cache model for image recognition. arXiv:1805.08709.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This work lacks any convincing experimental result to support the claims\", \"review\": \"This work proposes a defence based on class-conditional feature distributions to turn deep neural networks into robust classifiers.\\n\\nAt present this work lacks even the most rudimentary evidence to support the claims of robustness, and I hence refrain from providing a full review. In brief, model robustness is only tested against adversarials crafted from a standard convolutional neural network (i.e. in a transfer setting, which is vastly different from what the abstract suggests). Unsurprisingly, the vanilla CNN is less robust than the density-based architecture introduced here, but that can be simply be explained by how close the substitute model and the vanilla CNN are. No direct attacks - neither gradient-based, score-based or decision-based attacks - have been used to evaluate robustness. Please check [1] for how a thorough robustness evaluation should be performed.\\n\\n[1] Schott et al. \\u201cTowards the first adversarially robust neural network model on MNIST\\u201d.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BJl6TjRcY7 | Neural Probabilistic Motor Primitives for Humanoid Control | [
"Josh Merel",
"Leonard Hasenclever",
"Alexandre Galashov",
"Arun Ahuja",
"Vu Pham",
"Greg Wayne",
"Yee Whye Teh",
"Nicolas Heess"
] | We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience-efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA) summarizing our results. | [
"Motor Primitives",
"Distillation",
"Reinforcement Learning",
"Continuous Control",
"Humanoid Control",
"Motion Capture",
"One-Shot Imitation"
] | https://openreview.net/pdf?id=BJl6TjRcY7 | https://openreview.net/forum?id=BJl6TjRcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxLhu1bx4",
"HyxwKQnF1N",
"ryevUebMRm",
"SygmMXpJAm",
"Sylz2k9o6Q",
"SJlewPYtpX",
"HJlsxPYYam",
"HkllTLYYaQ",
"HJxBuUYY6m",
"S1eS0BTpnX",
"S1gTWJZ6hX",
"HJeCcTtms7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544775853785,
1544303487408,
1542750286693,
1542603530975,
1542328233547,
1542195032131,
1542194931374,
1542194871903,
1542194796755,
1541424588640,
1541373700742,
1539706261630
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper843/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper843/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper843/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper843/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper843/Authors"
],
[
"ICLR.cc/2019/Conference/Paper843/Authors"
],
[
"ICLR.cc/2019/Conference/Paper843/Authors"
],
[
"ICLR.cc/2019/Conference/Paper843/Authors"
],
[
"ICLR.cc/2019/Conference/Paper843/Authors"
],
[
"ICLR.cc/2019/Conference/Paper843/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper843/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper843/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: One-shot physics-based imitation at a scale and with efficiency not seen before.\\nClear video, paper, and related work.\", \"weaknesses_described_include\": \"the description of a secondary contribution (LFPC)\\ntakes up too much space (R1,4); results are not compelling (R1,4); prior art in graphics and robotics (R2,6);\\nconcerns about the potential limitations of the linearization used by LFPC.\\n\\nThe original reviews are negative overall (6,3,4). The authors have posted detailed replies.\\nR1 has posted a followup, standing by their score. We have not heard more from R2 and R3.\\n\\nThe AC has read the paper, watched the video, and read all the reviews.\\nBased on expertise in this area, the AC endorses the author's responses to R1 and R2. \\nBeing able to compare LFPC to more standard behavior cloning is a valuable data point for the community; \\nthere is value in testing simple and efficient models first.\\nThe AC identifies the following recent (Nov 2018) paper as being the closest work, which is not identified by the authors or the reviewers. The approach being proposed in the submitted paper demonstrates equal-or-better scalability,\\nlearning efficiency, and motion quality, and includes examples of learned high-level behaviors.\\nAn elaboration on HL/LL control: the DeepLoco work also learns mocap-based LL-control with learned HL behaviors.\\n although with a more dedicated structure.\\n Physics-based motion capture imitation with deep reinforcement learning\", \"https\": \"//dl.acm.org/citation.cfm?id=3274506\\n\\nOverall, the AC recommends this paper to be accepted as a paper of interest to ICLR. \\nThis does partially discount R3 and R1, who may not have worked as directly on these specific problems before.\\n\\nThe AC requests is rating the confidence as \\\"not sure\\\" to flag this for the program committee chairs, in light of the fact that this discounts the R1 and R3 reviews.\\nThe AC is quite certain in terms of the technical contributions of the paper.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"reviews on balance lean negative, but recommend accept (is this excessive influence of the AC opinion?)\"}",
"{\"title\": \"final remarks? R3?\", \"comment\": \"We are reaching the end of the discussion period.\\nThere remain mixed opinions on the paper.\\nAny further thoughts from R2 and R3? Stating pros + cons and summarizing any change in opinion would be very useful.\\nThe main contribution is centred around one-shot imitation as well as reuse of low-level motor behaviors in the context of new tasks. Issues being discussed include related prior art, demonstrated benefit of method in results, importance of LFPC.\\nOf course we recognize that reviewer & author time is limited.\\n-- area chair\"}",
"{\"title\": \"Response to revised version of paper\", \"comment\": \"I have read all of the comments (from the reviewers and the authors) and have also read the revised version of the paper. I am still not convinced that the paper makes a strong contribution. Here are my comments:\\n\\n- The revised version of the paper still has LPFC as a major portion of the paper. In particular, the real estate in terms of pages devoted to explaining LPFC is more than that devoted to neural probabilistic motor primitives (which the authors claim is the main contribution of the paper). The conclusion of the paper also highlights LPFC (including its limitations). I do not think that the revised version of the paper adequately de-emphasizes LPFC.\\n\\n- The results from the simulation experiments showcasing neural probabilistic motor primitives (NPMP) presented in the paper are not particularly compelling. In particular, Figure 4 (which presents the relative performance of NPMP as compared to the expert) suggests that NPMP is not really doing a good job at capturing the expert's behavior. In particular, for both training and test data, the relative performance is around 0.5, which doesn't seem particularly good. Moreover, as noted by AnonReviewer2, the target following example is not particularly compelling, since it has previously been demonstrated by many other papers. I would thus have liked to have seen a thorough comparison of NPMP with other methods on this example. Moreover, as noted in my previous review, the results for LPFC are also quite weak.\\n\\nBased on this, I retain my original rating for the paper.\", \"small_comments\": [\"For clarity, I would recommend using \\\\eqref{} when referencing equations. For example, on pg. 6, \\\"Objective 5\\\" should be \\\"Objective (5)\\\".\"]}",
"{\"title\": \"replies from reviewers to author responses?\", \"comment\": \"This paper has seen detailed reviews and detailed responses by the authors. Thank you to all.\", \"reviewers\": \"please do provide further feedback based on the authors replies,\\nand note whether it changes your evaluation and your score for the paper.\\nAlso note that a revised draft has been submitted. \\nYour input is greatly appreciated, as the opinions are mixed and they focus on different aspects of the work.\", \"for_revision_differences_of_the_revised_draft\": \"select \\\"Show Revisions\\\" on the review page, and then select the check-boxes for the versions you wish to compare. \\n\\n-- area chair\"}",
"{\"title\": \"Revised draft posted.\", \"comment\": \"In response to reviewer feedback, we have revised our abstract and contributions portion of the introduction to better communicate the focus of the paper. We consider the neural probabilistic motor primitive module to be the primary contribution and LFPC as an auxiliary contribution. As judged by reviewer reception, this did not come across as intended. We hope the revision better reflects this.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for their detailed discussion of the LFPC method and address concerns below. However, as pointed out in the introductory remarks this is only one aspect of the paper and we would also like to encourage the reviewer to include our main result, the neural probabilistic motor primitive module in their assessment.\", \"addressing_the_concerns_about_lfpc_in_turn\": \"\", \"c1\": \"For single-behavior experts, indeed we intended Fig 3 to indicate (perhaps surprisingly) that linear-feedback policies perform well, and that LFPC can transfer that level of performance into a new neural network (from a single rollout of behavior). For a single behavior, this is merely a validation that the new neural network can be as robust as even the linear feedback policy. Our real aim is to be able to distill many experts into a single network as we demonstrate subsequently.\", \"c2\": \"Both LFPC and the behavioral cloning baseline were able to train the NPMP and permit skill reuse, but in our specific one-shot imitation comparisons the behavior-cloning approach performed better. Behavioral cloning from arbitrary amounts of data is an arbitrarily strong baseline. The two considerations that motivate LFPC are that we can store fewer data from experts and that we can query fewer trajectories from the expert system (in settings where rollouts are costly, such as real platforms).\\n\\nC3, C4: The general setting for our approach is that we assume the existence of experts that perform single behaviors -- as of late, this is a reasonable assumption, enabled by previous research (e.g. Liu et al. 2010, 2015, 2018, Merel et al. 2017, Peng et al. 2018). What has not been done prior to this work is to exhibit single policies capable of flexibly generating a wide range of skills, and this is the problem we are focusing on. For our purposes, it is not critical how experts are obtained, and this paper does not advocate any particular way of generating expert policies. That being said, neural network experts have been successfully trained in some recent work, so we expected it would work, and a priori, it was not obvious that directly training a linear feedback policy might suffice. Moreover, in preliminary experiments done when beginning this work (not reported here), we found that it can be quite data inefficient to directly train a time-indexed linear feedback policy for tracking motion capture using RL, we believe due to lack of parameter sharing across timesteps, so we did not pursue this further. \\n\\nNevertheless our single-behavior expert transfer experiments demonstrated empirically that linear feedback policies extracted from the expert neural networks were essentially as performant as RL-trained neural network experts in terms of robust tracking of single behaviors (Fig. 3). That linear feedback policies work as well here is a statement about the dynamics of the environment and the complexity of the behaviors (i.e. that the behaviors here are sufficiently unimodal). It seems, for a wide range of stereotyped behaviors, the policies required to execute the behaviors might be \\u201csurprisingly simple\\u201d, depending on your initial preconceptions.\", \"c5\": \"In contemporary neural network languages, it is straightforward to compute the Jacobian of the actions w/ respect to observation inputs. As described in eqn 2, this directly provides the linearization of the policy when evaluated at the nominal trajectory. 
In section 3.1, we use the same network architecture for cloning from noisy rollouts and from the linear feedback policy (MLP with two hidden layers: 1024, 512; we can add this detail to the text).\", \"references\": \"Liu, L., Yin, K., van de Panne, M., Shao, T. and Xu, W., 2010. Sampling-based contact-rich motion control. ACM Transactions on Graphics (TOG), 29(4), p.128.\\n\\nLiu, L., Yin, K. and Guo, B., 2015, May. Improving Sampling\\u2010based Motion Control. In Computer Graphics Forum (Vol. 34, No. 2, pp. 415-423).\\n\\nLibin Liu and Jessica Hodgins. Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning. ACM Transactions on Graphics (TOG), 37(4):142, 2018. \\n\\nMerel, Josh, Yuval Tassa, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. \\\"Learning human behaviors from motion capture by adversarial imitation.\\\" arXiv preprint arXiv:1707.02201 (2017).\\n\\nXue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. arXiv preprint arXiv:1804.02717, 2018.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for appreciating the difficult problem we\\u2019re tackling. However, we disagree with the reviewer about the level of similarity between this work and previous work. We have discussed a number of relationships between this work and existing approaches in the robotics, ML, and graphics communities. As far as we are aware, no existing work learns a rich embedding space for physics-based control. For kinematic sequence modeling, there is abundant work in computer graphics that learns to blend/reuse/compose movement trajectories (e.g. Holden et al. 2017). To our knowledge, for the much more challenging problem of flexible physics-based control, there is no prior work which results in a robustly reusable skill space that is as comprehensive in scope as what was demonstrated here. We would sincerely appreciate references of any previous papers that the reviewer thinks overlap in terms of successfully demonstrating the learning of a skill space which is reusable for physics-based control, especially for humanoids.\\n\\nOne-shot imitation has been demonstrated by a few groups in the past couple years for mounted robotic arms. But we are aware of considerably less work (primarily Wang et al. 2017; discussed in the paper) in which humanoids perform one-shot behaviors. The reason this is difficult in the physics-based case is that the humanoid must balance and remain upright in addition to imitating the demonstration. Moreover, while one-shot imitation is the core systematic test of the model, since the architecture was trained for this setting, we emphasize that the demonstration of reuse is considerably more interesting to us. After producing this module, a fresh HL policy can learn to set the \\u201cintention\\u201d of the LL controller and produces fairly human-like behaviors by reusing the learned skill space. We selected the go-to-target task because we wanted to heavily tax the LL movement space by demanding sudden, jerky changes of movement and what resulted were strikingly human-like movement changes, with only a very simple reward (reward = 0 everywhere except when target is reached) and no additional constraints on the human-likeness of the behavior. While simpler bodies can solve this problem from scratch, for a complex humanoid, the movements produced by learning from scratch are most definitely very non-human-like in general.\\n\\nSimultaneously reusing the upper body for manipulation while having lower body locomote is indeed a great challenge problem for future work. We have already included imitation of arm movements in our evaluation but our training distribution does not contain any manipulation demonstrations. We are optimistic that this approach can scale to this setting, but it is beyond the scope of the present paper. We do believe, that what we have demonstrated here advances the state of the art for reusable physics-based locomotion behaviors.\", \"references\": \"Holden, Daniel, Taku Komura, and Jun Saito. \\\"Phase-functioned neural networks for character control.\\\" ACM Transactions on Graphics (TOG) 36, no. 4 (2017): 42.\\n\\nZiyu Wang, Josh S Merel, Scott E Reed, Nando de Freitas, Gregory Wayne, and Nicolas Heess. Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems, pp. 5320\\u20135329, 2017.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Concerning LFPC, we note that bipedal locomotion is highly nonlinear and despite this, the linear feedback policy empirically works rather robustly (despite the high-D observation space) as shown in section 3.1. The term linear-feedback-stabilized policy, refers to the linear feedback policy in equation 2, which is stabilized with linear feedback (relative to the naive open-loop policy that simply executes a fixed sequence of actions).\\n\\nWe consider it clear from our results that time-indexed linear feedback policies suffice to capture the behavior of experts around nominal trajectories in our setting. Correspondingly, LFPC is capable of transferring expert functionality. We would like to point out that in our scenario there is no need to estimate J -- it is simply the Jacobian of a neural network with respect to the inputs which is readily available in standard neural network languages (see eqn 2). \\n\\nThere seems to be some confusion about delta s -- it has very little to do with the \\u201cvarious optimal controllers\\u201d and indeed we state in the paper (page 4) that the approach is fairly insensitive to precise selection of this distribution. One possible reason for this is that the distribution does not matter much as long as it covers the states visited by the linear feedback policy which appears to stay pretty close to the nominal trajectory.\\n\\nFinally, the reviewer expresses concerns with respect to the applicability of our approach to the real robot setting. Our paper primarily targets the control of simulated physical humanoids and we do not make any further claims. However recent approaches in a similar imitation learning setting have been shown to be effective for real robots (e.g. Laskey et al. 2017), so we do believe, as we speculate in the discussion that this is a plausible direction for future work.\\n\\nWe thank the reviewer for spotting a typo in equation 5 which we will correct.\", \"references\": \"Laskey, M., Lee, J., Fox, R., Dragan, A. and Goldberg, K., 2017. Dart: Noise injection for robust imitation learning. arXiv preprint arXiv:1703.09327.\"}",
"{\"title\": \"Overall response to reviewers\", \"comment\": \"We thank all reviewers for their time and comments.\\n\\nWe would like to emphasize that there are two contributions in the work. The focal motivation is the production of a single trained motor architecture which can execute and reuse motor skills of a large, diverse set of experts with minimal manual segmentation or curation. The architecture that we develop permits one-shot imitation as well as reuse of low-level motor behaviors in the context of new tasks. \\n\\nOur main results involve one-shot imitation and motor reuse, using our trained module for a humanoid body with relatively high action DoF. We believe this novel architecture enables more generic behavior and motor flexibility than other work involving learning to control physically simulated humanoids. \\n\\nAnonReviewer3 and AnonReviewer1 essentially restrict criticism of the work to the LFPC approach, which is only one aspect of our research contribution. We address these concerns in detail below. But we would also encourage the reviewers to assess the quality and novelty of the core architectural contributions as well as the quality of the experimental results. We are not aware of previous work for control of a physically simulated humanoid that demonstrates a learned module that can execute many behavioral skills and permits reuse.\"}",
"{\"title\": \"The idea is oversimplified, which may limit its applications.\", \"review\": \"This paper mainly focuses the imitation of expert policy as well as compression of expert skills via a latent variable model. Overall, I feel this paper is not quite readable, albeit that the prosed methods are simple and straightforward.\\n\\nAs one major contribution of this paper, the authors introduce a first-order approximation to estimate the action of an expert, where perturbations are considered. However, this linear treatment could yield large errors when the residuals in (1) are still large, which is very common in high-dimensional and highly-nonlinear cases. Specifically, the estimation of \\u201cJ\\u201d could be hard. In addition, just below (1), the authors mention (1) yields a \\u201cstabilized policy\\u201d, so what do you mean \\u201cstabilized\\u201d?\\n\\nAnother crucial issue lies on the treatment of \\u201c\\\\Delta(s)\\u201d, which is often unknown and hard to modeled, Thus, various optimal controllers are introduced so as to obtain robust controllers. Similarly, in (9) it is also difficult to decide what is \\u201csuitable perturbation distribution\\u201d.\\n\\nOverall, the linear treatment in (2) and assumption on \\u201c\\\\Delta(s)\\u201d in (5) actually oversimplify the imitation learning problem, which may not be applicable in real robot applications.\", \"others_small_comments\": \"-Section 2.1 could be moved to supplementary material or appendix, as this part is indeed not a contribution.\\n\\n- in (5), it should be \\u201c-J_{i}^{*}\\u201d\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Sound approach, but very similar to prior work\", \"review\": \"The paper tackles the problem of distilling large numbers of expert demonstrations into a single policy that can both recreate original demonstrations in a physically-simulated environment and humanoid platform, and to generalize to novel motions. Towards this, the paper presents two approaches learn policies from expert demonstrations without involving costly closed loop RL training, and distilling these individual experts into a shared policy by learning latent time-varying codes.\\n\\nThe paper is well-written and the method is well-evaluated in the scope that it is proposed. Both components of the proposed approach have previously been explored in the literature - there is extensive work on learning local controllers for physics based evironments from demonstrations in both open loop and closed loop settings as well as work on mixtures of these controllers in machine learning, robotics and computer graphics communities. While the paper proposes these two components as a contribution, I would like to see a more detailed argument of what this work contributes over previous such approaches. \\n\\nAnother part where I wish the paper could make a more compelling argument is that distilled policy can perform non-trivial generalization. Target following is a good illustrative example, but has been showcased by multitude of prior work. The paper talks about compositionality, and it would have been compelling to see examples of that if the method can achieve it. For example, simultaneously performing locomotion skills with upper body manipulation skills is something mixture of expert demonstrations approaches still struggle with and it would have been great to see this paper investigate the approach on this problem. \\n\\nOverall, this is a sound and well-written submission, but the existence of very related prior work with similar capabilities makes me reluctant to recommend this paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Concerns with proposed approach and results\", \"review\": \"This paper considers the problem of transferring motor skills from multiple experts to a student policy. To this end, the paper proposes two approaches: (1) an approach for policy cloning that learns to mimic the (local) linear feedback behavior of an expert (where the expert takes the form of a neural network), and (2) an approach that learns to compress a large number of experts via a latent space model. The approaches are applied to the problem of one-shot imitation from motion capture data (using the CMU motion capture database). The paper also considers an extension of the proposed approach to the problem of high-level planning; this is done by treating the learned latent space as a new action space and training a high-level policy that operates in this space.\", \"strengths\": \"S1. The supplementary video was clear and helpful in understanding the setup.\\nS2. The paper is written in a generally readable fashion.\\nS3. The related work section does a thorough job of describing the context of the work. \\n\\nHowever, I have some significant concerns with the paper. These are described below.\", \"significant_concerns\": \"C1. My biggest concern is that the paper does not make a strong case for the benefits of LPFC over simpler strategies. The results in Figure 3 demonstrate that a linear feedback policy computed along the expert's nominal trajectory performs as well as (and occasionally even better than) LPFC. This is quite concerning.\\nC2. Moreover, as the authors themselves admit, \\\"while LPFC did not work quite as well in the full-scale model as cloning from noisy rollouts, we believe it holds promise insofar as it may be useful in rollout-limited settings...\\\". However, the paper does not present any theoretical/experimental evidence that would suggest this.\\nC3. Another concern has to do with the two-step procedure for LPFC (Section 2.2), where the first step is to learn an expert policy (in the form of a neural network) and the second step is to perform behavior cloning by finding a policy that tries to match the local behavior of the expert (i.e., finding a policy that attempts to produce similar actions as the expert policy linearized about the nominal trajectory). This two-step procedure seems unnecessary; the paper does not make a case for why the expert policies are not chosen as linear feedback controllers (along nominal trajectories) in the first place.\\nC4. The linearization of the expert policy produced in (1) may not lead to a stabilizing feedback controller and could easily destabilize the system. It is easy to imagine cases where the expert neural network policy maintains trajectories of the system in a tube around the nominal trajectory, but whose linearization does not lead to a stabilizing feedback controller. Do you see this in practice? If not, is there any intuition for why this doesn't occur? If this doesn't occur in practice, this would suggest that the expert policies are not highly nonlinear in the neighborhood of states under consideration (in which case, why learn neural network experts in the first place instead of directly learning a linear feedback controller as the expert policy as suggested in C3?)\\nC5. I would have liked to have seen more implementation details in Section 3. In particular, how exactly was the linear feedback policy along the expert's nominal trajectory computed? Is this the same as (2)? 
Or did you estimate a linear dynamical model (along the expert's nominal trajectory) and then compute an LQR controller? More details on the architecture used for the behavioral cloning baseline would also have been helpful (was this a MLP? How many layers?)\", \"minor_comments\": [\"There are some periods missing at the end of equations (eqs. (1), (2), (6), (8), (9)).\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJl2ps0qKQ | Learning to Decompose Compound Questions with Reinforcement Learning | [
"Haihong Yang",
"Han Wang",
"Shuang Guo",
"Wei Zhang",
"Huajun Chen"
] | As for knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions. Traditional approaches firstly detect topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, while we propose a novel approach to leverage simple-question answerers to answer compound questions. Our model consists of two parts: (i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and (ii) three independent simple-question answerers that classify the corresponding relations for each simple question. Experiments demonstrate that our model learns complex rules of compositionality as stochastic policy, which benefits simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA. We analyze the interpretable decomposition process as well as generated partitions. | [
"Compound Question Decomposition",
"Reinforcement Learning",
"Knowledge-Based Question Answering",
"Learning-to-decompose"
] | https://openreview.net/pdf?id=SJl2ps0qKQ | https://openreview.net/forum?id=SJl2ps0qKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1eLMCmHlN",
"Bkl2pJE5Am",
"ryeHUaMqRQ",
"rkgD8DMcC7",
"BJln2XG5AX",
"BJe28_qT27",
"BJlhURN5nm",
"BJesG858hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545055758116,
1543286724455,
1543281996696,
1543280463298,
1543279540220,
1541412947990,
1541193300114,
1540953618942
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper842/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper842/Authors"
],
[
"ICLR.cc/2019/Conference/Paper842/Authors"
],
[
"ICLR.cc/2019/Conference/Paper842/Authors"
],
[
"ICLR.cc/2019/Conference/Paper842/Authors"
],
[
"ICLR.cc/2019/Conference/Paper842/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper842/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper842/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": [\"an interesting task -- learning to decompose questions without supervision\", \"reviewers are not convinced by evaluation. Initially evaluated on MetaQA only, later relation classification on WebQuestions has been added. It is not really clear that the approach is indeed beneficial on WebQuestion relation classification (no analysis / ablations) and MetaQA is not a very standard dataset.\", \"Reviewers have concerns about comparison to previous work / the lack of state-of-the-art baselines. Some of these issues have been addressed though (e.g., discussion of Iyyer et al. 2016)\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting directions / results are not very convincing\"}",
"{\"title\": \"Thank you very much for your insightful review! Paper updates and model improves!\", \"comment\": \"Thank you very much for your insightful review! We have updated our paper with new experiments! We will address your concerns point by point.\\n\\nPlease refer to global comments for brief version of model improvement and paper refinement!\", \"q1\": \"Could you provide results on WebQuestions (or WebQSP).\", \"a1\": \"Yes! We conduct experiments on WebQuestions relation detection since relation detection is believed to be the bottleneck of KBQA and we attempt to solve it. It achieves competitive results to strong baseline (Yu et al, 2017, [1]).\\n\\nIt seems like our model performs differently in two datasets. Here are the reasons:\\n- There are ~5% relations that remains unseen in training set. It is a harmful setting for classification task.\\n- To address the above issue, recent approaches try to leverage the information from knowledge base, especially the detailed name or schema info of Freebase relations. By contrast, our proposed model only leverages the question information to achieve competitive results.\", \"q2\": \"\\\"I think the authors should compare their approach with previous work.\\\"\", \"a2\": [\"We have updated our paper for discussion in related work (Please check out paragraph 2 & 3 in section 2.2). We tried to reimplement their methods and found that it is not suitable for our setup.\", \"Search-based Neural Structured Learning for Sequential Question Answering\", \"When generating datasets, the author employs crowdsourcing workers to manually decompose questions from WikiTableQuestions into sequential questions. It aims to train a text-to-sql model for querying answers and updating next input question interactively. Conversely, our proposed model emphasizes decomposing compound questions automatically with fewer supervision.\", \"ComplexWebQuestions\", \"The state-of-the-art solution of ComplexWebQuestions adopts pointer network to decompose complex web questions into simple ones. This decomposition process is guided by supervisions inline with human logic (e.g., conjunction or composition etc.). The author feeds all the decomposed questions into search engine then collects top-ranked web snippets as data source of answers.\", \"Note that the pointer network is trained via maximizing log-likelihood of annotations.\", \"The problem is that, if we replace pointer network with our learning-to-decompose agent, we cannot afford to crawl web pages during training because our agent will generate different partitions.\", \"Thank you again for your valuable review and inspiration! We would be happy to open source our code and hyper-parameters until the final decisions are out!\", \"[1] Yu et al. Improve Neural Relation Detection for Knowledge-based Question Answering. ACL, 2017.\"]}",
"{\"title\": \"Thank you very much for your helpful reviews! We have updated our paper for clarification.\", \"comment\": \"Thank you very much for your valuable review! We have updated our paper with additional experiments! We will provide detailed explanation for your concerns.\\n\\nPlease refer to global comments for brief version of model improvement and paper refinement!\", \"q1\": \"Does this mean that the model can have <=3 partitions, but not more? How is this number decided?\", \"a1\": [\"The central assumption of our paper is to generalize the assumption of answerable questions from simple questions to compound questions (simple questions included).\", \"Based on the observation of daily questions asked by people (e.g. WebQuestions) and the currently available datasets (MetaQA), it is hard to find compound questions with more than three partitions to experiment with. So the default number of partitions is 2 or 3 (<=3). We have updated our paper for ablation test of these two options. Results and discussion can be found in section 4.3.\"], \"q2\": \"From Eq (4), it seems that the answerer only uses the current partition, is that the case? Moreover, how is the gold relation r obtained?\", \"a2\": \"In our improved model, we use three answerers for each partition. The vector representation of a partition is the last hidden state of answerer's LSTM network. The golden relation $r$ is part of the golden label providing by datasets. The answerer predicts and updates according to the gradients of cross entropy loss.\", \"q3\": \"It would be nice to add more explanation to the caption of Figure 4 to make it self-contained.\", \"a3\": \"We have updated our paper to make it self-contained! Please check out our paper for more details.\", \"q4\": \"The case study section (4.3) only contains a single example. It would be very helpful to include more examples of question partitions (there is enough space). Error analysis would also be helpful to understand, for example, why the proposed model is worse than VRN (Zhang et al. 2017) on 1- and 2-hop questions.\", \"a4\": [\"Case Study is now section 4.4! The main purpose of case study is to illustrate that our agent can maximize information utilization by assigning words to the best position.\", \"We also add an ablation test for providing better understanding of our model. Since we have further improved our model, please refer to global comments for reasons of model change. It directly leads to outperforming the state-of-the-art model by ~8% overall accuracy.\", \"Thank you again for your time and helpful review! We really appreciate it! We would be happy to open source our code and hyper-parameters until the final decisions are out!\"]}",
"{\"title\": \"Thank you very much! Please check out our latest version of paper for model improvement and paper refinement!\", \"comment\": \"Thank you very much for your detailed and helpful review! We have updated our paper with your suggestions! We will address your concerns point by point. Please refer to global comments for brief version of model improvement and paper refinement!\\n\\n* Reply for Weakness\\nWe compare [1] with our work and summarize an important line of Semantic Role Labeling in our latest paper. We like to point out that [1] decomposes WikiTableQuestions into sequential questions by crowdsourcing workers (manually) in the process of generating SequentialQA dataset. However, we train our agent to learn to decompose questions automatically.\\n\\nSemantic Role Labeling is similar to labeling priority/actions word by word, which is part of our proposed method. However, we don't require supervision signals at the token-level. The only supervision for our agent is the +1/-1 reward as feedbacks.\\n\\n* Questions\", \"q1\": \"How do you obtain x^(k)? Is it the last state of the LSTM?\", \"a1\": \"Yes, it is the last hidden state $h$ of the LSTM. For clarity, there are two x^(k) with different style in our paper. The bold x^(k) denotes a sub-sequence of words as a partition. Another bold italic x^(k) denotes the final vector representation of corresponding partition. We have updated our paper for clarification.\", \"q2\": \"Why did you have to augment \\u201cNO_OP\\u201d relation in the MetaQA dataset?\", \"a2\": \"(1) The main reason for augmenting a dummy relation is to provide more freedom of the cooperation among our agent and the answerers\\\\*. When our model is trying to answer a simple question, our agent may filter some unrelated words (e.g. stop words) out of the first partition because of its stochastic policy. The second and the third answerer can return a \\\"NO_OP\\\" relation when receiving some meaningless inputs. \\n\\n(2) If we don't augment a \\\"NO_OP\\\" relation, our agent has to assign every single word to the first partition and hope the first answerer can predict the golden relation correctly, which is too strict for a feasible solution. Note that we allow our agent to learn partition strategies that is different from human intuition since we train it using RL settings.\", \"q3\": \"Why +1 reward has lower variance than probabilistic reward? Explanation or citation would be needed.\", \"a3\": \"(1) Because the maximum value of variance of +1/-1 reward is 1 (Please see proof below). The variance of probabilistic reward does not necessarily have an upper bound since the value of logarithmic function goes negative infinity if the likelihood is sufficiently small. This kind of situation is likely to occur in the early stages of training when the agent explores the space of partition strategies actively. \\n\\n(2) From the perspective of model design, we have tried our best to disentangle our model, i.e. prohibiting our agent to update the embedding layer and use +1/-1 reward. If the agent is allowed to observe probabilistic reward as feedback, it will greedily maximize partial reward (say the first term of the sum of log-likelihood). The feedback from first answerer will dominate before the agent fully explores the search space. 
Hence the model is likely to collapse which leads to unstable training.\\n\\n[Proof]: Suppose the probability mass function (PMF) of reward is defined as \\n$p_X(x) = p if x = +1 \\n = q if x = -1, p + q = 1.$\\nThe expected reward is $E[x] = p + (1 - p)(-1) = 2p - 1$.\\nThe variance is \\n\\t$Var[x] = \\\\Sigma (x - E[x])^2 \\\\times p(x) \\n\\t = 4pq \\n\\t <= 4 ((p + q) / 2)^2 \\n\\t = 1$,\\nwith equality if and only if $p = q = 0.5$. #\", \"q4\": \"What if two partitions need to share a word? The current setup necessitates that a word participates in only one partition. Wouldn\\u2019t this be problematic?\", \"a4\": \"No, we provide the following four explanations.\\n(1) Maximum Information Utilization. The current setup forces the agent to fully explore the search space of partition strategies such that each word in the questions contributes to the confidence of downstream classifiers. Imagine a key word being misplaced, one classifier losses information and the other classifier receives extra noise, which harms information utilization significantly.\\n\\n(2) Performance and size of search space tradeoff. If we allow two partitions to share a word, the size of search space increases by a factor of 2^N (from 3^N to 6^N). N denotes the length of a question. It would be interesting to further investigate whether the generalized model is able to converge and produce better results or interpretability.\", \"q5\": \"I am a bit confused about how the simple question answering module is trained. Is it directly trained by the gold relation label?\", \"a5\": \"Sorry for the confusion. Yes. It is fair to train simple question answerers by the gold relation label directly, compared to training three independent question classifier using the same supervision. We have updated our paper for clarification.\"}",
"{\"title\": \"Dear reviewers, we thread a global comment for improvement on both our model and paper.\", \"comment\": [\"## Main Improvement\", \"1. We have improved our model by replacing the answerer into three identical simple-question answerers with embedding layer shared.\", \"Reasons\", \"During experiments, we observe that if we share the simple-question answerer across different partitions of a question, our agent may generate conflict assignments at the beginning of training process. Data conflicts undermine the decision boundary learned by the answerer (classifier).\", \"An Example\", \"Considering a four-word question \\\"w1 w2 w3 w4?\\\", the agent generates two different labeling \\\"1st 1st 2nd 2nd\\\" and \\\"2nd 2nd 1st 1st\\\" in two different epoch. Since the answerer is shared,\", \"the former mapping (f(\\\"w1 w2\\\") -> 1st_golden_relation) and\", \"the latter mapping (f(\\\"w1 w2\\\") -> 2nd_golden_relation) is conflicting.\", \"Performance Improvement\", \"On MetaQA, our model now outperforms state-of-the-art by ~8% overall accuracy, and\", \"On WebQuestions, our model achieves competitive result to results that leverage knowledge base information by only using question information.\", \"Details are described in section 4.2 of our paper.\", \"2. We have updated the following subsections.\", \"Section 2.2 for detailed comparison of Iyyer et al., 2016 [1] and other works related to complex questions;\", \"Section 2.3 for Deep Semantic Role Labeling and its relevance to our work;\", \"Section 3.2 for describing our improved model architecture;\", \"Section 3.3 for more training details;\", \"Section 4.2 for benchmarking WebQuestions which is widely used in the KBQA community;\", \"Section 4.3 for ablation study to conclude that there exists a tradeoff between model assumption and model performance.\"]}",
"{\"title\": \"Good paper, but need more related work discussions\", \"review\": \"Summary: the paper is interested in parsing compound questions for querying on knowledge graph, e.g. MetaQA by Zhang et al. (2017). The paper proposes to have two modules, one that segments the question into partitions (up to three) and the other that looks at each segment to get the relation. The relations are merged to obtain a single KG path, which is queried to obtain the answer. Since the segmentation is a non-differentiable process, the paper uses reinforcement learning to propagate gradient to the segmentation model. The segmentation is a process of classifying each word for which partition it should be tied to. Answering is a process of classifying the partition into one of the possible relation edges. The model shows expected results in a synthetic arithmetic dataset, and obtains the state of the art in MetaQA, improving nearly 5% over the baseline. The model especially does much better on 3-hop questions, with nearly 20% improvement.\", \"strengths\": \"the paper is well-written. The model is simple yet effective and is a novel contribution to compound question answering on KG. Especially, the improvement on 3-hop category is nearly 20%, which is substantial and quite impressive.\", \"weaknesses\": \"My biggest concern is the lack of discussions on its relevance to (Iyyer et al., 2016), which also proposed to decompose question into simpler ones for WIkiTableQuestions. Also, I think it would be good to mention Semantic Role Labeling as related literature, which is about tagging each word with its role in the sentence. The partition index can be somewhat considered as a \\u201crole\\u201d in the sentence.\", \"questions\": \"1. How do you obtain x^(k)? Is it the last state of the LSTM?\\n2. Why did you have to augment \\u201cNO_OP\\u201d relation in the MetaQA dataset?\\n3. Why +1 reward has lower variance than probabilistic reward? Explanation or citation would be needed.\\n4. What if two partitions need to share a word? The current setup necessitates that a word participates in only one partition. Wouldn\\u2019t this be problematic?\\n5. I am a bit confused about how the simple question answering module is trained. Is it directly trained by the gold relation label?\", \"typos_and_suggestions\": [\"Second paragraph of 2.1: in stead -> instead\", \"Third paragraph of 2.1: research. -> research\", \"c_t + h_t: would be good to explicitly mention that the circled plus sign is concatenation.\", \"Last paragraph on page 4: \\u201cleave to be\\u201d?\", \"Second last paragraph of 4.1: he -> The\", \"Second paragraph of 4.2: \\u201cif exists a proper meaning\\u201d?\", \"First paragraph of page 7: be either assume -> either assume\", \"Last paragraph of Section 5: generalizing -> generalize\", \"I think you should not put acknowledgment in a double-blind submission.\", \"M Iyyer, W Yih, MW Chang. Answering complicated question intents expressed in decomposed question sequences. 2016 (https://arxiv.org/abs/1611.01242)\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea. Lacking technical details and error analysis.\", \"review\": \"This paper proposes a knowledge-based QA system that learns to decompose compound questions into simple ones. The decomposition is modeled by assigning each token in the input question to one of the partitions and receiving reward signal based on the final gold answer. The model achieves the state-of-the-art performance on the MetaQA dataset.\\n\\nMy main complaint about the paper is its lack of technical details and analysis of empirical results. Parts of the paper seem quite unclear, for example:\\n\\nIn the last paragraph of Section 3.1, it says \\u201cWe do not assume that nay question should be divided into exactly three parts. \\u2026 See section 4 for case study.\\u201d Does this mean that the model can have <=3 partitions, but not more? How is this number decided?\\n\\nSection 3.2 describes the simple-question answer. From Eq (4), it seems that the answerer only uses the current partition, is that the case? Moreover, how is the gold relation r obtained?\\n\\nIt would be nice to add more explanation to the caption of Figure 4 to make it self-contained.\\n\\nThe case study section (4.3) only contains a single example. It would be very helpful to include more examples of question partitions (there is enough space). Error analysis would also be helpful to understand, for example, why the proposed model is worse than VRN (Zhang et al. 2017) on 1- and 2-hop questions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Lack of comparison with previous state-of-the-art methods over more widely used benchmarks\", \"review\": \"This paper proposes a new approach for answering questions requiring multi-hop reasoning. The key idea is to introduce a sequence labeler to divide the question into at most 3 parts, each part corresponds to a relation-tuple. The labeler is trained with the whole KB-QA pipeline with REINFORCE in an end-to-end way.\\n\\nThe proposed approach was applied to a synthetic dataset and a new KB-QA dataset MetaQA, and achieves good results.\\n\\nI like the proposed idea, which sounds a straightforward solution to compound question answering. I also like the clarification between \\\"compound questions\\\" instead of \\\"multi-hop questions\\\". In my opinion, \\\"multi-hop questions\\\" can also refer to the cases where the questions (can be simple questions) require multi-hop over evidence to answer.\\n\\nMy only concern is about the evaluation on MetaQA, which seems a not widely used dataset in our community. Therefore I am wondering whether the authors could address the following related questions in the rebuttal or revision:\\n\\n(1) I was surprised that WebQuestions is not used in the experiments. Could you explain the reason? My guess is that WebQuestions contains compound questions that cannot be simply decomposed as sequence labeling, because that some parts of the question can participant in different relations. If this is not true, could you provide results on WebQuestions (or WebQSP).\\n\\n(2) There were several previous methods proposed for decomposition of compound questions, although they are not proposed for KB-QA. Examples include \\\"Search-based Neural Structured Learning for Sequential Question Answering\\\" and \\\"ComplexWebQuestions\\\". I think the authors should compare their approach with previous work. One choice is to reimplement their methods. An easier option might be applying the proposed methods to some previous datasets, because the proposed method is not specific to KB-QA, as long as the simple question answerer is replaced to other components like a reader in the ComplexWebQuestions work.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rylhToC5YQ | Unsupervised Neural Multi-Document Abstractive Summarization of Reviews | [
"Eric Chu",
"Peter J. Liu"
] | Abstractive summarization has been studied using neural sequence transduction methods with datasets of large, paired document-summary examples. However, such datasets are rare and the models trained from them do not generalize to other domains. Recently, some progress has been made in learning sequence-to-sequence mappings with only unpaired examples. In our work, we consider the setting where there are only documents (product or business reviews) with no summaries provided, and propose an end-to-end, neural model architecture to perform unsupervised abstractive summarization. Our proposed model consists of an auto-encoder trained so that the mean of the representations of the input reviews decodes to a reasonable summary-review. We consider variants of the proposed architecture and perform an ablation study to show the importance of specific components. We show through metrics and human evaluation that the generated summaries are highly abstractive, fluent, relevant, and representative of the average sentiment of the input reviews. | [
"unsupervised learning",
"abstractive summarization",
"reviews",
"text generation"
] | https://openreview.net/pdf?id=rylhToC5YQ | https://openreview.net/forum?id=rylhToC5YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1eRYDz-l4",
"r1gFz6AaAQ",
"B1xGr10aCm",
"SJeQ9oU_6X",
"SylKksg_pm",
"B1eUccxupQ",
"SJgmRtgdT7",
"BygCDbac37",
"B1gpdP5K37",
"BJgnUKPvhX",
"rJgGTrfa5Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544787846005,
1543527697017,
1543524154109,
1542118283321,
1542093536577,
1542093453657,
1542093259160,
1541226854284,
1541150580576,
1541007699792,
1539282362166
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper841/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/Authors"
],
[
"ICLR.cc/2019/Conference/Paper841/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper841/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper841/AnonReviewer3"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces a method for unsupervised abstractive summarization of reviews.\", \"strengths\": \"(1) The direction (developing unsupervised multi-document summarization systems) is exciting\\n\\n(2) There are interesting aspects to the model\", \"weaknesses\": \"(1) The authors are clearly undecided how to position this work: either as introducing a generic document summarization framework or as an approach specific to summarization of reviews. If this is the former, the underlying assumptions, e.g., that the summary looks like a single document in a group is problematic. If this is the latter, then comparison to some more specialized methods are lacking (see comments of R1).\\n\\n(2) Evaluation, though improved since the first submitted version (when human evaluation was added), is still not great (see R1 / R3). The automatic metrics are not very convincing and do not seem to be very consistent with the results of human eval. I believe that instead or along with human eval, the authors should create human written summaries and evaluate against them. It has been done for extractive multi-document summarization and can be done here. Without this, it would be impossible to compare to this submission in the future work. \\n\\n(3) It is not very clear that generating abstractive summaries of the form proposed in the paper is an effective way to summarize documents. Basically, a good summary should reflect diversity of the opinions rather than reflect an average / most frequent opinion from tin the review collection. By generating the summary from a review LM, the authors make sure that there is no redundancy (e.g., alternative views) or contradictions. That's not really what one would want from a summary (See R3 and also non-public discussion with R1)\\n\\nOverall, I'd definitely like to see this work published but my take is that it is not ready yet.\\n\\nR1 and R2 are relatively negative and generally in agreement. R3 is very positive. I share excitement about the research direction with R3 but I believe that concerns of R1 and R2 are valid and need to be addressed before the paper gets published.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting work but not mature enough\"}",
"{\"title\": \"We avoid review-specific features in our model to be more generally applicable\", \"comment\": \"Regarding usefulness/practicality of abstractive summarization, we believe the most natural form of summary for humans is language, i.e. sentences/paragraphs. Certainly extracting common bi-grams could be done, is straightforward, and has been done in review-specific summarization systems in prior work, but is in our opinion less natural. That is a different problem than what we\\u2019re trying to solve.\\n\\nRegarding the comments for comparisons with opinion-based summarization models, we also sought to create a general, domain-agnostic model architecture that could be applied to non-review documents by not relying on any review-specific features.\"}",
"{\"title\": \"response to Nov 24 review modification\", \"comment\": \"We want to clarify that the automatic metrics are used to guide model development. However, the final evaluation is done with humans. In future comparisons with our method, we expect a human evaluation to be done as well.\\n\\nWe did not focus on comparing to review-specific algorithms because our method does not rely on any review-specific properties, domain-knowledge, or highly engineered features, unlike the above papers. Review-specific choices could improve the algorithm, but would take away from the generalizability of the proposed architecture/model which is our goal. The spirit of this conference is learning generic representations of data that can be widely applicable, and not to be focused on domain-specific feature engineering. We thus focused our comparison on generic approaches without heavy feature engineering. We expect follow-up work to use our model architecture on other domains with minimal changes.\\n\\nP.S. Just a point of clarification, despite the title, the authors of Ganesan et al. (Opinosis), say in the paper their method is \\u201cword-level extractive summarization\\u201d and not actually abstractive.\"}",
"{\"title\": \"Added human evaluation and further discussed limitations.\", \"comment\": \"Thank you for your feedback.\\n\\n-We clarified in the paper that a limitation here is the summary is assumed to be in the form of a review with similar stylistic characteristics as the input reviews.\\n-Although ROUGE is often used in the summarization literature, it attempts to approximate human evaluation of summaries, which is the gold standard. We have shown that the metrics we used in model development have guided us to good human evaluations.\\n- Indeed the symptoms discussed in the error analysis affect all current neural text generation models, and we included them to ensure we didn't claim to have solved it.\"}",
"{\"title\": \"Summary of changes, Nov 12, 2018.\", \"comment\": \"We thank all 3 reviewers for their feedback, which we have used to improve the paper. We summarize the changes here and replied individually as well to each reviewer below.\\n\\nWhile the proxy metrics defined are useful in model development, the gold standard for evaluating summaries, human evaluation, was missing and we have now added it to validate our model. These results agree (rank order) with the automatic metrics and show that the abstractive model has comparable sentiment agreement, information agreement, and fluency with the extractive method.\\n\\nWe made many changes to clarify the problem more formally and described the models in more detail. We also clarified limitations of the model and metrics. \\n\\nWhile we believe the architecture proposed can be applied to data other than reviews, we added \\u201c... of Reviews\\u201d to the title since we only showed results on reviews.\"}",
"{\"title\": \"Added human evaluation (that corresponds to our metrics)\", \"comment\": \"Thank you for your feedback, which we have incorporated, and resulting in, we believe, a much stronger paper. We agree the lack of human evaluation to validate our methods was a glaring omission in the original paper. As a result, we added Table 2 with human evaluation results of the summaries on multiple dimensions, showing our model is competitive with the extractive baseline with respect to representing the overall sentiment and information in the input reviews and also the fluency. We clarified that the automatic metrics (which rank order the methods similarly) are useful for guiding model development, but the only gold standard here is human evaluation.\", \"regarding_the_points_about_the_metrics\": \"\", \"rating_accuracy\": \"The sentiment of a good summary should be reflective of the overall sentiment of the reviews. We approximate this overall segment by the average rating. We clarify that this captures a necessary aspect of the summary, but by itself is not sufficient, which is why we have other things we look at, including now human evaluation. The \\u201cactual contents that are conveyed\\u201d is meant to be covered by the word overlap score and one of our human eval questions.\", \"word_overlap\": \"we agree with your point that abstractive systems could have lower overlap because they aggregate information and generalize. We clarified in the paper that this word overlap score is included as a sanity check: it\\u2019s possible to get a high rating accuracy while talking about something completely unrelated, and a very low word-overlap would suggest something pathological. That said, it appears to rank-order similarly as our human evaluation question.\", \"negative_log_likelihood\": \"In this paper now we only use this metric to compare abstractive variants. It\\u2019s true that using it to compare to the \\u201cConcatenation\\u201d baseline (now removed) was inappropriate. The gold standard for measuring fluency would be a human evaluation which we added.\", \"other_points\": \"\", \"multi_lead_1\": \"Having proved to be a strong baseline in other summarization tasks, we sought to create an analog of Multi-lead 1 in our multi-document setting. This proved to be a reasonably strong baseline. In any case, this is simply one of several baselines which we compare against.\\nWe don\\u2019t want to overclaim, and we\\u2019ve modified the title of our paper to include \\u201cof Reviews\\u201d and added clarification of limitations in the Conclusion.\"}",
"{\"title\": \"Added human evaluation; additions to improve clarity of paper\", \"comment\": \"Thank you for your review and the very helpful, comprehensive feedback, which we strove to address. We agree that the biggest previous issue was uncertainty around the evaluation. As a result we added results of a human evaluation that directly assesses various aspects of summarization quality and showing similar results as our proxy metrics. Regarding your specific points:\\n\\n1. We\\u2019ve added a more formal, mathematical description of the problem setup. Overall, we\\u2019ve tried to be clearer and and consistent with our notation.\\n\\n3. Good question. We did try a larger weight on l_sim (as intuitively, this loss helps the model produce outputs that actually summarize the original review), but we did not find meaningful differences and pointed this out in the paper.\\n\\n4. Although there are no ground-truth summaries for Equation 2, there are ground truth reconstructions in Equation 1. As shown in the ablations, it is crucial that the decoders are tied in this architecture. In one of our experiments, the \\u201cEarly cosine loss\\u201d (also shown schematically in Appendix A), we did not need to use the Gumbel-softmax estimator and simply decoded auto-regressively from the mean vector. That experiment shows that the decoding the summary as part of training significantly improves results.\\n\\n5. (1) We\\u2019ve added details about how the language model was trained in the Experimental Setup section, as well as how the reviews were generated using the language model in the \\u201cNo Training\\u201d baseline in the Baselines section. The reviews in the \\u201cNo training\\u201d model are generated in the same fashion as the proposed model. The purpose of this baseline is to show that optimizing the proposed loss improves the output over simply using pre-trained language models.\\n\\n(2) Great point. We\\u2019ve added human evaluation experiments on Mechanical Turk regarding the quality of the summaries to assess the validity of our metrics. Briefly, the results show that our metrics guided us to a good model -- the extractive and abstractive models obtain comparable results on how well they summarize information and sentiment, and the abstractive summaries are similarly fluent.\\n\\n(3) There are no known neural, end-to-end models for this problem setup and this being the first is one of our main contributions. We hoped that reasonable model variations and ablations would probe into the efficacy of various aspects of our model. \\n\\n6. We\\u2019ve modified the Rating accuracy description to hopefully make clear that the classifier is trained by taking as input a review x (sequence of tokens) and producing probabilities over the 5 possible ratings (i.e. it is a classification problem and not a regression problem). There are no hand-engineered features. The rating with the highest probability is the predicted rating. This is then compared to the average rating of the original reviews (rounded to the nearest 1-5 star rating).\\n\\nIn general, we agree that the classifier baseline should be applied carefully. We\\u2019ve removed the concatenation baseline because we believe it\\u2019s outside the input space of the classifier. However, we believe the rating accuracy still applies to the other models and is a useful metric. For instance, the summaries produced by our model are constrained to the review space due to the tying of the decoders. 
Our human evaluation experiments also agree with the trends provided by our rating accuracy metric.\\n\\n7. The assumption we make is the summary should be in some sense the \\u201ccentroid\\u201d of the documents it is summarizing. If there are some positive reviews, but they are mostly negative, the summary will be mostly negative which is representative. If a priori we have a notion of review importance, we could weight some reviews higher in Equation (3) rather than equally. Or as you suggest we could summarize different clusters; in this case the most natural clustering is by review rating. In Figure 3, we show how multiple reviews could be generated for the same business, but pre-clustered by rating. We also clarify that our model architecture produces summaries in the form of a single review.\"}",
"{\"title\": \"Promising unsupervised approach, but clarity issues\", \"review\": \"Overall and positives:\\n\\nThe paper investigates the problem of multidocument summarization\\nwithout paired documents to summary data, thus using an unsupervised\\napproach. The main model is constructed using a pair of locked\\nautoencoders and decoders. The model is trained to optimize the\\ncombination of 1. Loss between reconstructions of the original reviews\\n(from the encoded reviews) and original the reviews, 2. And the\\naverage similarity of the encoded version of the docs with the encoded\\nrepresentation of the summary, generated from the mean representation\\nof the given documents.\\n\\nBy comparing with a few simple baseline models, the authors were able\\nto demonstrate the potential of the design against several naive\\napproaches (on real datasets, YELP and AMAZON reviews). \\nThe necessity of several model components is demonstrated\\nthrough ablation studies. The paper is relatively well structured and\\ncomplete. The topic of the paper fits well with ICLR. The paper\\nprovides decent technical contributions with some novel ideas about\\nmulti-doc summary learning models without a (supervised) paired\\ndata set.\\n\\nComments / Issues\\n\\n[ issue 6 is most important ]\\n\\n1. Problem presentation. The problem was not properly introduced and\\nelaborated. In fact, there is not a formal and mathematical\\nintroduction of the problem, input, output, dataset and model\\nparameters. The notations used are not very clearly defined and are\\nquite handwavy, (e.g. what is V, dimensions of inputs x_i was not\\nmentioned until much later in the paper). The authors should make\\nthese more precise. Similar problem with presentations of the models,\\nparameters, and hyperparameters.\\n\\n3. How does non-equal weighted linear combinations of l_rec and l_sim\\nchange the results? Other variation of the overall loss function? How\\ndo we see the loss function interaction in the training, validation\\nand test data? With the proposed model, these could be interesting to\\nobserve.\\n\\n4. In equation two, the decoder seems to be very directly affecting\\nthe quality of the output summary. Teacher forcing was used to train\\nthe decoder in part (1) of the model, but without ground truth, I\\nwould expect more discussions and experiments on how the Gumbel\\nsoftmax trick affect or help the performance of the output.\\n\\n5. Baseline models and metrics\\n\\n(1) There should be more details on how the language model is trained,\\nsome examples, and how the reviews are generated from the language\\nmodel as a base model (in supplement?).\\n\\n(2). It is difficult to get a sense of how these metrics corresponds\\nto the actual perceived quality of the summary from the\\npresentation. (see next)\\n\\n(3). It will be more relevant to evaluate the proposed design\\nvs. other neural models, and/or more tested and proved methods.\\n\\n6. The rating classifier (CLF) is intriguing, but it's not clearly\\nexplained and its effect on the evaluation of the performance is not\", \"clear\": \"One of the key metrics used in the evaluation relies on the\\noutput rating of a classifier, CLF, that predicts reader ratings on\\nreviews (eg on YELP). The classifier is said to have 72%\\naccuracy. First, the accuracy is not clearly defined, and the details\\nof the classifier and its training is not explained (what features are\\nits input, is the output ordinal regression). 
Equation 4 is not\", \"explained_clearly\": \"what does 'comparing' in 'by comparing the\\npredicted rating given the summary rating..' mean? The classifier may\\nhave good performance, but it's unclear how this accuracy should\\naffect the results of the model comparisons.\\n\\nThe CLF is used to evaluate the rating of output\\nreviews from various models. There is no justification these outputs\\nare in the same space or generally the same type of document with the\\ntraining sample (assuming real Yelp reviews). That is probably\\nparticularly true for concatenation of the reviews, and the CLF classifier\\nscores the concatenation very high (or eq 4 somehow leads to highest value\\nfor the concatenation of reviews )... It's not clear whether such a classifier is \\nbeneficial in this context.\\n\\n7. Summary vs Reviews. It seems that the model is built on an implicit\\nassumption that the output summary of the multi-doc should be\\nsufficiently similar with the individual input docs. This may be not\\ntrue in many cases, which affects whether the approach generalizes.\\nDoc inputs could be covering different aspects of the review subject\\n(heterogeneity among the input docs, including topics, sentiment etc),\\nor they could have very different writing styles or length compared to\\na summary. The evaluation metrics may not work well in such\\nscenarios. Maybe some pre-classification or clustering of the inputs,\\nand then doing summarization for each, would help? In the conclusions section, the\\nauthors do mention summarizing negative and positive reviews\\nseparately.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Evaluation methodology and measures are questionable and should not be adopted by the community\", \"review\": \"This paper proposes a method for multi-document abstractive summarization. The model has two main components, one part is an autoencoder used to help learn encoded document representations which can be used to reconstruct the original documents, and a second component for the summarization step which also aims to ensure that the summary is similar to the original document.\\n\\nThe biggest problem with this paper is in its evaluation methodology. I don't really know what any of the three evaluation measures are actually measuring, and there is no human subject evaluation back them up.\\n- Rating Accuracy seems to depend on the choice of CLF used, and at best says whether the summary conveys the same average opinion as the original reviews. This captures a small amount about the actual contents of the reviews. For example, it does not capture the distribution of opinions, or the actual contents that are conveyed.\\n- Word Overlap with the original documents does not seem to be a good measure of quality for abstractive systems, as there could easily be abstractive summaries with low overlap that are nevertheless very good exactly because they aggregate information and generalize. It is certainly not appropriate to use to compare between extractive and abstractive systems.\\n-There are many well-known problems with using log likelihood as a measure of fluency and grammaticality, such as biases around length, and frequency of the words.\\nIt also seems that these evaluation measures would interact with the length of the summary being evaluated in ways which systems could game.\", \"other_points\": \"- Multi-Lead-1: The lead baseline works very well in single-document news summarization. Since this model is being applied in a multi-document setting to something that is not news, it is hard to see how this baseline is justified.\\n\\n- Despite the fact that the model is only applied to product reviews, and there seem to be modelling decisions tailored to this domain, the paper title does not specify so, which in my opinion is a type of over-claiming.\\n\\nHaving a paper with poor evaluation measure may set a precedent that causes damage to an entire line of research. For this reason, I am not comfortable with recommending an accept.\\n\\n\\n---\\nThank you for responding to my comments and updating the paper. I have slightly raised my score to reflect this effort.\\n\\nThere are new claims in the results section that do not seem to be warranted given the human evaluation. The claim is that the human evaluation results validate the use of the automatic metrics. The new human evaluation results show that the proposed abstractive model performs on par with the extractive model in terms of conveying the overall sentiment and information (Table 2), whereas it substantially outperforms the extractive model on the automatic measures (Table 1). This seems to be evidence that the automatic measures do not correlate with human judgments, and should not be used as evaluation measures.\\n\\nI am also glad that the title was changed to reflect the scope of the experiments. I would now suggest comparing against previous work in opinion summarization which do not assume gold-standard summaries for training. Here are two representative papers:\\n\\nGanesan et al. Opinosis: A Graph-Based Approach to Abstractive Summarization of Highly Redundant Opinions. COLING 2010.\\nCarenini et al. 
Multi-Document Summarization of Evaluative Text. Computational Intellgience 2012.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novel work breaking ground on abstractive unsupervised multi-document summarization\", \"review\": [\"# Positive aspects of this submission\", \"This submission presents a really novel, creative, and useful way to achieve unsupervised abstractive multi-document summarization, which is quite an impressive feat.\", \"The alternative metrics in the absence of ground-truth summaries seem really useful and can be reused for other summarization problems where ground-truth summaries are missing. In particular, the prediction of review/summary score as a summarization metric is very well thought of.\", \"The model variations and experiments clearly demonstrate the usefulness of every aspect of the proposed model.\", \"# Criticism\", \"The proposed model assumes that the output summary is similar in writing style and length to each of the inputs, which is not the case for most summarization tasks. This makes the proposed model hard to compare to the majority of previous works in supervised multi-document summarization like the ones evaluated on the DUC 2004 dataset.\", \"The lack of applicability to existing supervised summarization use cases leaves unanswered the question of how much correlation there is between the proposed unsupervised metrics and existing metrics like the ROUGE score, even if they seem intuitively correlated.\", \"This model suffers from the usual symptoms of other abstractive summarization models (fluency errors, factual inaccuracies). But this shouldn't overshadow the bigger contributions of this paper, since dealing with these specific issues is still an open research problem.\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"In Section 5.5, you mention that \\\"Factual\\naccuracy is an ongoing area of research in the summarization field\\\". Do you have some recent papers in mind that analyze or address this specific issue in generated summaries?\\n\\nBy the way, this is a very interesting and thought-provoking submission!\", \"title\": \"Further research about factual inaccuracies\"}"
]
} |
|
HJe3TsR5K7 | Learning Joint Wasserstein Auto-Encoders for Joint Distribution Matching | [
"Jiezhang Cao",
"Yong Guo",
"Langyuan Mo",
"Peilin Zhao",
"Junzhou Huang",
"Mingkui Tan"
] | We study the joint distribution matching problem which aims at learning bidirectional mappings to match the joint distribution of two domains. This problem occurs in unsupervised image-to-image translation and video-to-video synthesis tasks, which, however, has two critical challenges: (i) it is difficult to exploit sufficient information from the joint distribution; (ii) how to theoretically and experimentally evaluate the generalization performance remains an open question. To address the above challenges, we propose a new optimization problem and design a novel Joint Wasserstein Auto-Encoders (JWAE) to minimize the Wasserstein distance of the joint distributions in two domains. We theoretically prove that the generalization ability of the proposed method can be guaranteed by minimizing the Wasserstein distance of joint distributions. To verify the generalization ability, we apply our method to unsupervised video-to-video synthesis by performing video frame interpolation and producing visually smooth videos in two domains, simultaneously. Both qualitative and quantitative comparisons demonstrate the superiority of our method over several state-of-the-arts. | [
"joint distribution matching",
"image-to-image translation",
"video-to-video synthesis",
"Wasserstein distance"
] | https://openreview.net/pdf?id=HJe3TsR5K7 | https://openreview.net/forum?id=HJe3TsR5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkltjjuee4",
"rkeFnzYeyE",
"Bklj1wCJC7",
"rJlwcXAkCm",
"HylYuK6JAm",
"B1xVUu45hm",
"B1xfWwGqnm",
"Hygakz2dh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544747936972,
1543701168717,
1542608611374,
1542607758994,
1542605169319,
1541191755672,
1541183226136,
1541091812938
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper840/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper840/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper840/Authors"
],
[
"ICLR.cc/2019/Conference/Paper840/Authors"
],
[
"ICLR.cc/2019/Conference/Paper840/Authors"
],
[
"ICLR.cc/2019/Conference/Paper840/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper840/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper840/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new image to image translation technique, presenting a theoretical extension of Wasserstein GANs to the bidirectional mapping case.\\n\\nAlthough the work presents promise, the extent of miscommunication and errors of the original presentation was too great to confidently conclude about the contribution of this work. \\n\\nThe authors have already included extensive edits and comments in response to the reviews to improve the clarity of method, experiments and statement of contribution. We encourage the authors to further incorporate the suggestions and seek to clarify points of confusion from other reviewers and submit a revised version to a future conference.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Shows promise but requires improvements to presentation to make contribution clear\"}",
"{\"title\": \"Reply to authors\", \"comment\": \"Thank you for addressing my comments.\"}",
"{\"title\": \"To AR2: Misunderstanding for experiments; Difficult extension for I2IT and V2VS problem\", \"comment\": \"The reviewer misunderstood the results in Table 1. We have described them clearly in the revision.\\n\\n1) & 2) Issues on baselines\\nOur method (JWAE) focuses on the different problem and different experiment settings from Bicycle-GAN, Triple GAN and Triangle GAN. Specifically, JWAE focuses on the unsupervised learning setting for image-to-image translation and video-to-video synthesis tasks. In the training, JWAE is trained on unpaired training data. However, \\n(a) Bicycle-GAN requires paired data and is trained in a supervised setting; \\n(b) The regularizations in Triple GAN are designed for the classification task in the semi-supervised learning; \\n(c) Triangle GAN requires the semi-supervised learning. Thus, the comparison between JWAE and these methods is not fair.\\n\\n2) Concerns on MMD\\nOn Cityscapes and SYNTHIA, we show the results using MMD and GAN, respectively. Note that MMD is to measure the distribution divergence of embeddings. Here, we use FID and FID4Video to evaluate the quality of images and videos, respectively. In general, lower score indicates better performance. From Table B, MMD-JWAE can achieve the comparable performance with GAN-JWAE. \\n\\n Table B. FID | FID4Video scores on Cityscapes and SYNTHIA\\n| Method | Cityscapes | SYNTHIA |\\n| | photo2seg | seg2photo\\t| winter2spring | winter2summer | winter2fall |\\n| MMD-JWAE\\t | 41.60 | 6.62 | 42.65 | 23.48 | 89.72 | 21.18 | 83.10 | 19.88 | 86.56 | 15.99 |\\n| GAN-JWAE\\t | 22.74 | 6.80 | 43.48 | 25.87\\t| 88.24 | 21.37 | 77.12 | 17.99 | 87.50 | 14.14 |\\n\\n\\n3) Concerns on W/O-Triple\\nThere is a misunderstanding of the results in Table 1. The baseline method W/O-Triple indicates that we remove the second term (i.e., cycle adversarial (CA) term) in the right-hand side of Loss (6), instead of removing the whole Triple-GAN loss. We refer to this baseline as W/O-CA and we have clarified this in the revision. \\n\\n4) Generalization ability of JWAE\\nWe have shown the generalization ability of JWAE in the paper. According to (Bojanowski et al., 2018), the generalization ability can be evaluated by the interpolation performance in the target domain. From the results in Table 1 and Figures 2 & 3, our method consistently outperforms the considered baseline methods both quantitatively and qualitatively.\\n\\n5) Difficulty of applying WAE to I2IT and V2VS tasks\\nIt is very difficult to extent WAE to image-to-image translation (I2IT) and video-to-video synthesis (V2VS) tasks, because there exists an intractable joint distribution matching problem. Moreover, the original WAE aims at learning a generative model from noise to images, and thus it cannot be directly applied in I2IT and V2VS problem.\"}",
"{\"title\": \"To AR3: First work derived from a theoretical perspective\", \"comment\": \"The reviewer undervalued the significance and novelty of the proposed method. We have highlighted them in the revision.\\n\\n1) Novelty\\nMost image-to-image translation (I2IT) methods (e.g., Cycle GAN) are often designed without a theoretical analysis, which, however, may limit the understanding and the learning performance. Unlike existing methods, to our knowledge, our method is the first work to solve the I2IT problem from a theoretical perspective. Essentially, our method can be regarded as a generalization of CycleGAN. More critically, we believe that our theoretical results would be helpful for understanding the I2IT problem.\\n\\n2) Results of image translation\\nWe have shown the results of image-to-image translation in Table 1 and Figures 2 and 3. Furthermore, we also conducted interpolation-based video-to-video synthesis to evaluate the quality of the learned joint distribution, which is a better method to evaluate the generalization ability of JWAE.\\n\\n3) Generative power of the proposed method\\nOn Cityscapes and SYNTHIA, we conduct image translation and compare image FID scores of the models trained with different distribution divergences and show the results in Table A. In general, lower FID indicates better performance. From Table A, GAN-JWAE consistently outperforms other distribution divergences. The qualitative results can be found in Appendix F of the supplementary material. It is worth mentioning that our method is not restricted to the choice of distribution divergences.\\n \\n Table A. FID results for different distribution divergences.\\n| Method | Cityscapes | SYNTHIA |\\n| | scene2seg | seg2scene | winter2spring | winter2summer | winter2fall |\\n| WGAN-JWAE | 134.40 | 84.00 | 122.27 | 97.86 | 86.52 |\\n|SN-WGAN-JWAE | 128.01 | 87.60 | 117.05 | 97.72 | 85.57 |\\n| GAN-JWAE | 21.89 | 42.13 | 88.26 | 84.37 | 83.26 |\"}",
"{\"title\": \"To AR1: Clear revision has been updated\", \"comment\": \"Thanks for your helpful review. We have uploaded a revised version of the paper.\\n\\n1) We have revised the paper and provided a formal problem definition in Section 3.\\n\\n2) The definitions of $E_A(f^*)$, $E_B^g(f^*)$ were given in the supplementary material. We have also defined them in Theorem 2 of the revised paper.\\n\\n3) We have revised the paper and unified notations as follows: \\n(a) $G2 \\\\circ E1$ and $G1 \\\\circ E2$ denote the cross-domain mappings;\\n(b) $G1 \\\\circ E1$ and $G2 \\\\circ E2$ represent two Auto-Encoders;\\n(c) $M$ and $N$ denote the number of samples in the $X$ and $Y$ domain, respectively;\\n(d) $F(s) \\\\lesssim G(s)$ indicates that there exists $C_1, C_2> 0$ such that $F(s) \\\\leq C_1 G(s) + C_2$.\\n\\n4) Concerns on $Q_1, Q_2$ and Eqn. (17)\\n(a) The definitions of $Q_1, Q_2$ in Theorem 1 are reasonable. Taking $Q_1$ as example, the encoder $Q(Z_1|X)$ must satisfy the distribution constraint $Q_{Z_1} = P_Z = Q_{Z_2}, P_Y = P_{G_2}$. Similarly, we can also obtain the constraint for $Q_2$.\\n\\n(b) The Equality (17) does hold. Firstly, we decompose $P(X, Z_1)$ into $P(X) P(Z_1|X)$. Then, we use $Q(Z_1|X)$ to replace the $ P(Z_1|X) $, and enforce its marginal distribution $Q_{Z_1} = E_X [Q(Z_1|X)]$ to be identical to the prior distribution $P_Z$, i.e., $Q_{Z_1}=P_Z$. Meanwhile, the encoder $Q(Z_1|X)$ can be obtained from $E_1(X)$ and $E_2(G_2(E_1(X)))$ when $Q_{Z_1} = Q_{Z_2}$ and $P_Y = P_{G_2}$. \\n\\n5) Concerns on Lemma 1\\nAccording to the duality theorem of Kantorovich-Rubinstein, we can choose any value greater or equal than the considered cost function to satisfy the condition. For simplicity, we choose the equality in the proof of Lemma 1. \\n\\n6) Formula (24) in Lemma 3 is an inequality. \\n\\n7) The sets $Q_1$ and $Q_2$ in Problem (4) are different from the definitions in Theorem 1, because the constraints $Q_1$ and $Q_2$ are regularized as penalties. We have made them clearer in the revision.\"}",
"{\"title\": \"Interesting approach with poor presentation\", \"review\": \"This paper studies the joint distribution matching problem where given data samples in two different domains, one is interested in learning a bi-directional mapping between unpaired data elements in these domains. The paper proposes a joint Wasserstein auto-encoder (JWAE) to solve this problem. The paper shows that under the decomposable cost metric and deterministic decoding maps, the optimization problem associated with the JWAE formulation can be reduced to a tractable optimization problem. The paper also establishes a generalization bound for the JWAE formulation. Finally, the paper conducts an experimental evaluation of the proposed solution with the help of a video-to-video synthesis problem and show improved performance as compared to the existing results in the literature.\\n\\nOverall, the reviewer finds that the paper considers an important problem and proposes some interesting ideas to tackle the problem. However, in its current form, there is a large scope for improvement in the presentation of the paper. The paper is full of errors/typos which make it an extremely difficult read (see my comments below). That said the paper fairs quite well as compared to other existing methods. Since the reviewer is not very much familiar with this field, the reviewer leaves it to the other reviewers to decide the significance of these results.\", \"pros\": \"1) The paper aims to provide a theoretical treatment of the joint distribution matching problem which has many interesting applications, including image-to-image translation and video-to-video synthesis. \\n\\n2) The proposed method in the paper had good empirical performance on the real world datasets.\", \"cons\": \"The paper is very poorly written with many typos and (possibly) mistakes. Some of the comments in this direction are as follows.\\n\\n1) The paper does not formally define the underlying problem before diving into the details of the proposed solution. The authors only informally talk about the problem in the introduction. Given that the ICLR has a wide audience, it would have been nice if the authors have made the presentation of the paper self-contained.\\n\\n2) In the same vein, the paper talks about many important quantities without introducing them first. E.g., what are $E_{A}(f^*)$, $E_B^g(f^*)$ etc. in the statement of Theorem 2? These quantities are first defined inside the proofs in the supplementary material!\\n\\n3) Some of the notation in the paper is also very confusing. For example, cross-domain mapping have two different sets of notations. $(E1oG2, E2oG)$ in Sec. 4.2 and $(G2oE1, G1oE2)$ in Section 5. It should be latter. Similarly, Sec. 4.2 refers to $E1oG1$ and $E2oG2$ as auto-encoders, which should be $G1oE1$ and $G2oE2$, respectively. In Sec. 3, the authors refer to $N$ and $M$ as the number of samples in the $X$ and $Y$ domain, respectively. This is then reversed in Theorem 2 and 4. These are only a small list of large number of such typos. Also, what is the notion defined in the last line of Sec. 3?\\n\\n4) It is not clear to me why the sets $Q_1$ and $Q_2$ in Theorem 1 are define in their current forms. In particular, it is not clear why the equality hold in Eq. (17) in the proof of Theorem 1. \\n\\n5) One line in the proof of Lemma 1 says, \\\"Specifically, we choose its equality, then we have\\\". Could the authors elaborate on this?\\n\\n6) Eq. 
(24) should be inequality?\\n\\n7) Given that the authors write a regularized problem in (4). Does that mean now sets $Q_1$ and $Q_2$ are different from how they are defined in the statement of Theorem 1?\\n\\n#########################\", \"post_rebuttal\": \"The authors have addressed most of my concerns regarding the poor presentation of the earlier version. I have updated my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"some issues prevent me from recommending an acceptance\", \"review\": \"This paper proposes a joint Wasserstein Auto-Encoder (JWAE) method to solve the problem of joint distribution matching. Instead of \\ufb01nding a coupling, the paper seeks a decoupling to make the primal problem of Wasserstein distance tractable. The decoupled version of joint Wasserstein distance is used for empirical reconstruction losses of within-domain Auto-Encodings and cyclic mappings. In addition, two GAN divergences are used to learn the cross-domain mappings such that the generated distributions are close to the real distribution, and another GAN divergence is imposed to align the latent distributions generated by two Auto-Encoders. Later, the paper applies the proposed model on the interpolation based video-to-video synthesis problem.\\n\\nAs far as I understand, the paper can be thought of revisiting the Cycle-Consistent Adversarial Networks (CycleGAN) from the joint Wasserstein Auto-Encoder point of view. In other words, it essentially extends the CycleGAN using additional within-domain auto-encoding reconstruction losses and the latent code alignment loss. Accordingly, the proposed model can be naturally applied to image-to-image translation. I have no idea why the paper merely applies it to interpolation based video-to-video translation. In addition, as the paper tries to apply the relaxed optimal Wasserstein distance to Auto-Encoder and cycle consistency losses, why not apply such Wasserstein distance to the distribution divergence as well. To study the generative power of the proposed generative model using the relaxed Wasserstein distance, it is quite necessary to evaluate the use of exit Wasserstein distance based VAE (e.g., Wasserstein AE) and GAN (e.g., Wasserstein GAN and spectral normalized Wasserstein GAN) losses.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"a straightforward extension to WAE\", \"review\": \"The whole model can be simplified by this: using auto-encoders for X and Y's reconstruction, then use Triple GAN loss to match the joint distribution of (X, Y). However, the deterministic model with GAN loss looks problematic to me.\", \"questions\": \"1. Although the authors showed strong evidence in their experiment part, they still failed to compare models with Bicycle-GAN, i.e., how Bicycle GAN performs on these two dataset?\\n\\n2. missing some comparison: why use simplified Triple-GAN loss (i.e. without two regularization terms) instead of Triangle-GAN, which is addressed to be better? I think the authors need to discuss about this. Also, the authors need to use MMD and other methods mentioned in the original WAE paper.\\n\\n3. In table 1, without triple-GAN loss, the whole model is deterministic, but the authors can still show the FID score for the generalization ability, which is better than all other cycle-GAN based models, why is that possible? Is this equivalent to claim that auto-encoder has the ability to generate realistic images just by sampling z? \\n(If I understand the experiment correctly, the author's synthesized images is generated by $y_hat = G_2(E_1(X))$, no sampling z required)\\n\\n4. Can the authors show the generalization ability of JWAE? For example, with input X, we can have different correct corresponding Ys, just like Bicycle-GAN did.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1MhpiRqFm | A Convergent Variant of the Boltzmann Softmax Operator in Reinforcement Learning | [
"Ling Pan",
"Qingpeng Cai",
"Qi Meng",
"Wei Chen",
"Tie-Yan Liu"
] | The Boltzmann softmax operator can trade off well between exploration and exploitation according to the current estimation in an exponential weighting scheme, which is a promising way to address the exploration-exploitation dilemma in reinforcement learning. Unfortunately, the Boltzmann softmax operator is not a non-expansion, which may lead to unstable or even divergent learning behavior when used in estimating the value function. The non-expansion is a vital and widely-used sufficient condition to guarantee the convergence of value iteration. However, how to characterize the effect of such operators, which are not non-expansions, in value iteration remains an open problem. In this paper, we propose a new technique to analyze the error bound of value iteration with the Boltzmann softmax operator. We then propose the dynamic Boltzmann softmax (DBS) operator to enable convergence to the optimal value function in value iteration. We also present a convergence rate analysis of the algorithm.
Using Q-learning as an application, we show that the DBS operator can be applied in a model-free reinforcement learning algorithm. Finally, we demonstrate the effectiveness of the DBS operator in a toy problem called GridWorld and a suite of Atari games. Experimental results show that it outperforms DQN substantially in benchmark games. | [
"Reinforcement Learning",
"Boltzmann Softmax Operator",
"Value Function Estimation"
] | https://openreview.net/pdf?id=B1MhpiRqFm | https://openreview.net/forum?id=B1MhpiRqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HklTrpKQlN",
"BJlC7pKmxN",
"HygoWg7-x4",
"r1lxzs0hyE",
"ByxIovASCX",
"rkx-AhYRaX",
"BylNDqY0T7",
"Skx9B9YRa7",
"r1xsbcFA6Q",
"Hklc614o37",
"SJlRd9sYnQ",
"Syer4w_d2m"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544949060928,
1544949030342,
1544790018840,
1544510215707,
1543002013684,
1542524104912,
1542523483803,
1542523457711,
1542523394613,
1541255106110,
1541155445989,
1541076780736
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper839/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/Authors"
],
[
"ICLR.cc/2019/Conference/Paper839/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper839/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper839/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Response\", \"comment\": \"Theorem 4 applies to any action selection policies that guarantees infinite visitations for states and actions, and epsilon-greedy is an example policy that satisfy the requirement. Please note that a common choice for such policy is epsilon greedy (e.g. the DQN algorithm). Although epsilon varies, it decays from 1.0 to 0.1 and remains 0.1 thereafter. As epsilon is not 0, it still guarantees infinite visits for states.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks a lot for your reply. We have actually updated the paper accordingly, however, it seems that due to system errors our paper is not the latest version. We are sorry about this.\", \"a1\": \"We derive the bound of |L_b(X) - max(X)| \\u2264 log(n)/b by showing that max(X) \\u2264 L_b(X) \\u2264 max(X) + log(n)/b as follows:\\nAs b*max(X) = log(e^(max(b*X))) \\u2264 log(\\u2211e^(b*x_i)), we have b*max(X) \\u2264 b*L_b(X)\\nAs b*max(X) + log(n) = log(e^(max(b*X))) + log(n) = log(max(e^(b*X))) + log(n) = log(n*max(e^(b*X))) \\u2265 log(\\u2211e^(b*x_i)), we have b*L_b(X) \\u2264 b*max(X) + log(n)\\nCombining these inequalities, we have max(X) \\u2264 L_b(X) \\u2264 max(X) + log(n)/b.\", \"a2\": \"Please refer to Exercise 31.1 (a) of page 402 in Mackay\\u2019s book (https://www.ece.uvic.ca/~agullive/Mackay.pdf). To the best of our knowledge, there is no proof of the bound of | L_b(X) - boltz_b(X) |, and we give a proposition here:\\n\\nL_b (X) - boltz_b(X) = 1/b \\u2211-p_i log(p_i), where p_i is the weight of the Boltzmann distribution, i.e. p_i = e^(b*x_i)/\\u2211e^(b*x_j). The proof is as follows:\\n1/b \\u2211-p_i log(p_i) = 1/b \\u2211 ( -e^(b*x_i)/\\u2211e^(b*x_j) ) * log( e^(b*x_i)/\\u2211e^(b*x_j) )\\n = 1/b \\u2211 ( -e^(b*x_i)/\\u2211e^(b*x_j) ) * ( b*x_i - log( \\u2211e^(b*x_j) ) ) )\\n = -\\u2211 ( ( e^(b*x_i) * x_i ) / \\u2211e^(b*x_j) ) + 1/b * log( \\u2211e^(b*x_j) )\\n = -boltz_b(X) + L_b(X)\\n\\nAs L_b(X) \\u2265 boltz_b(X), we have | L_b(X) - boltz_b(X) | = 1/b \\u2211-p_i log(p_i), where the right hand side is equal to the entropy of the Boltzmann distribution. The maximum of the right hand side is achieved when p_i=1/n, and equals to log(n)/b.\\nThus, we have | L_b(X) - boltz_b(X) | \\u2264 log(n)/b.\"}",
"{\"metareview\": \"Pros:\\n- a method that obtains convergence results using a using time-dependent (not fixed or state-dependent) softmax temperature.\", \"cons\": [\"theoretical contribution is not very novel\", \"some theoretical results are dubious\", \"mismatch of Boltzmann updates and epsilon-greedy exploration\", \"the authors seem to have intended to upload a revised version of the paper, but unfortunately, they changed only title and abstract, not the pdf -- and consequently the reviewers did not change their scores.\", \"The reviewers agree that the paper should be rejected in the submitted form.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"comment\": \"1. For Q3 above, there is no discussion in Appendix B regarding the bound of |L(Q) - max(Q)| (At least in the current version)\\n\\n2. For the bound of |boltz(Q) - L(Q)|, could you please point out the corresponding page number in MacKay's book?\", \"title\": \"Questions regarding the proofs\"}",
"{\"title\": \"response to clarifications\", \"comment\": \"Reviewer 1 is right that corollary 1 is ok as is.\\n\\nWhere in Section 4.2 does it say that actions are selected to be epsilon-greedy. If that is the case, with fixed epsilon, Theorem 4 will be correct. But I don't see where that is assumed. Further, if that is assumed, its a poor choice of exploration scheme.\\n\\nI still can't verify the proof of Theorem 4.\"}",
"{\"title\": \"Summary of the updated version\", \"comment\": [\"We thank the reviewers for their careful reading and thoughtful reviews. We have updated the submission accordingly and the main changes in the updated version of the paper include:\", \"we elaborate more about the exploration-exploitation dilemma in value function optimization\", \"we add empirical analysis of the exploration-exploitation dilemma\", \"we compare with G-learning in the GridWorld\", \"we discuss more related papers\", \"we refine experimental results of Atari\", \"we elaborate the details for the proof of Theorem 1\"]}",
"{\"title\": \"To Reviewer2\", \"comment\": \"Thank you very much for the thoughtful reviews, especially for the exploration-exploitation trade-off.\\n\\nIn this paper, we aim to make the Boltzmann softmax operator converge from the view of trade-off between exploration and exploitation in value function optimization, instead of the traditional understanding in the action selection process. To be specific, in stochastic environments, the max operator updates the value estimator in a \\u2018hard\\u2019 way by greedily summarizing action-value functions according to current estimation. However, this may not be accurate due to noise in the environment. Even in deterministic environments, this may not be correct either. This is because the estimate for the value is not correct in the early phase of the learning process. We elaborate this and distinguish it from the exploration-exploitation trade-off in the updated version in Section 2.2 and Section 5.1.\\n\\nConsidering the title would be misleading, we change it accordingly.\\n\\nThank you for pointing out the reference paper. We cite and discuss the paper in the updated version in Section 6 (Related Work).\"}",
"{\"title\": \"Clarification to Reviewer1\", \"comment\": \"Thank you for the comments. We are afraid that you have some misunderstandings for our work.\", \"q1\": \"Theorem 1 is straightforward.\", \"a1\": \"The effect of operators which are not non-expansion when applied in value iteration is an open problem and worth studying (Algorithms for Sequential Decision Making, Littman, 1996). Although error bounds of value iteration with the traditional max operator is well-established, there\\u2019s no results for the Boltzmann softmax operator which violates the property of non-expansion.\\n\\nIn Theorem 1, we propose a novel analysis to characterize the error bound of the Boltzmann operator when applied in value iteration. Please note that this is the first time that the analysis is presented, and it is of vital importance as value iteration is the basis for RL algorithms.\", \"q2\": \"Corollary 1 may be technically wrong.\", \"a2\": \"Please note that ||\\u00b7||_{\\\\infty} denotes the L-\\\\infty norm, and ||V_0 - V^*||_{\\\\infty}, \\\\log{|A|}, \\\\beta, and \\\\gamma are all constants which will not change by taking the limit of t. Corollary 1 is derived by taking the limit of t in both sides of Inequality (6) in Theorem 1.\", \"q3\": \"Theorem 4 may be wrong. Stronger conditions are required.\", \"a3\": \"Theorem 4 is correct. In our DBS Q-learning algorithm, the action selection policy is epsilon-greedy. Thus, states will be visited infinitely often. In addition, different from (Singh et al., 2000), where they study on-policy reinforcement learning algorithm (Sarsa), \\\\beta is state-independent here and thus is more flexible. Please also note that the main result of the paper is the characterization of the (dynamic) Boltzmann softmax operator in value iteration (Theorem 1, Theorem 2, and Theorem 3). We then apply the DBS operator in a well-known off-policy reinforcement learning algorithm, i.e Q-learning, and Theorem 4 is to guarantee the convergence of the resulting DBS Q-learning algorithm.\", \"q4\": \"Cannot find Lemma 2.\", \"a4\": \"Lemma 2 refers to the stochastic approximation lemma (Lemma 1) in Section 3.1 of (Singh et al., 2000).\"}",
"{\"title\": \"To Reviewer3\", \"comment\": \"Thank you for the comments. Please find our responses as below, especially for the novelty of the work.\", \"q1\": \"The novelty of the DBS operator.\", \"a1\": \"First of all, thank you for viewing our analysis for DBS novel. As we mentioned in the paper and showed by the corresponding title, we mainly aim to enable the convergence of the widely-used Boltzmann operator by a better exploration-exploitation trade-off, which is dispensable for reinforcement learning. As far as we know, it is the first time that we find a variant of the Boltzmann operator with good convergence rate.\\n\\nAlthough the state-dependent weighting of Boltzmann operator is proposed in (Singh et al. 2000), our DBS operator is state-independent and can scale to high-dimensional state space, which is crucial for RL algorithms. Furthermore, their operator is for on-policy RL algorithm, i.e. SARSA, while our DBS is for value iteration (a basic algorithm to solve the MDP) and Q-learning (a more popular off-policy RL algorithm). Therefore, our Q-learning algorithm with DBS is novel.\\n\\nDue to the difference of our algorithms and that in (Singh et al. 2000), we develop new techniques to prove the convergence. Specifically, for value iteration, we propose a novel analysis to characterize the error bound of value iteration with the Boltzmann operator, prove the convergence and present convergence rate analysis; for Q-learning, we leverage the stochastic approximation lemma (SA Lemma) presented in (Singh et al. 2000), which is an extension of the classic stochastic approximation theorem proven in (Jaakkola et al. 1994), to relate the process to the well-defined stochastic process in SA Lemma and then we quantify the additional term using similar techniques in our Theorem 1. Our results of value iteration have little relation with (Singh et al. 2000) and are mainly based on our own analysis (Proposition 1, Theorem 1, Theorem 2, and Theorem).\", \"q2\": \"What is the action selection policy? The states should be visited infinitely.\", \"a2\": \"In our DBS Q-learning algorithm, the action selection policy is epsilon-greedy. Thus, states will be visited infinitely often. We make it clearer in the updated version.\\n\\nPlease note that, the exploration-exploitation dilemma here is related to value function optimization (Asadi et al. 2017), rather than the traditional view of exploring the environment and exploiting the action during the action selection process. In stochastic environments, the max operator updates the value estimator in a \\u2018hard\\u2019 way by greedily summarizing action-value functions according to current estimation. However, this may not be accurate due to noise in the environment. Even in deterministic environments, this may not be accurate either. This is because the estimate for the value is not correct in the early stage of the learning process. We elaborate the effect of exploration and added empirical study in the updated version, please refer to Section 5.1.\", \"q3\": \"|L(Q) - max(Q)| <= log(A||) / beta is not immediately clear.\", \"a3\": \"We give more details of the proof in the updated version, please refer to Appendix B.\", \"q4\": \"Non-expansion is not necessary for convergence.\", \"a4\": \"Yes, non-expansion is an important and widely-used sufficient condition to guarantee the convergence of the learning problem (Littman 1996, Asadi et al. 2017). In this understanding, we say non-expansion is \\u2018vital\\u2019 for convergence. 
(Bellemare et al. 2016) proposed an alternative sufficient condition different from the non-expansion property. However, the condition is still not enough to cover common operators violating non-expansion such as the Boltzmann softmax operator.\", \"q5\": \"Detailed comments for the experiment.\", \"a5\": \"Here are our quick feedbacks.\\n1) We compare with G-learning and analyze the effect in the updated version (Section 5.1).\\n2) We change the score to raw game scores in the updated version (Appendix H). \\n3) Please note that our score listed is exactly the same with \\u2018Dueling Network Architectures for Deep Reinforcement Learning\\u2019 and \\u2018Rainbow\\u2019, where the (original) scores for DQN are raw scores. \\n4) In our experiments, c is in [0, 1], and we have tuned the value of c in some of the games. This is because different games have different features and should have different values of c.\\n5) We have redrawn the plots to make it more reader-friendly and corrected some typos in the updated version.\"}",
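To make the value-update-side trade-off described in A2 above concrete, here is a minimal, illustrative NumPy sketch of value iteration with a dynamic Boltzmann softmax backup. The quadratic schedule beta_t = c*t^2 follows the choice the review below attributes to DBS-DQN; the tabular MDP arrays P and R and the constant c are assumptions for illustration, not the authors' code.

```python
import numpy as np

def boltzmann(q, beta):
    """Boltzmann softmax of an action-value vector q at inverse temperature beta."""
    w = np.exp(beta * (q - q.max()))   # shift by the max for numerical stability
    p = w / w.sum()                    # Boltzmann weights p_i
    return float(p @ q)                # exponentially weighted average of values

def dbs_value_iteration(P, R, gamma=0.99, c=1.0, iters=200):
    """Value iteration with the max backup replaced by a Boltzmann softmax whose
    inverse temperature beta_t = c * t**2 grows over iterations, so the backup
    moves from a soft average toward the hard max as t increases."""
    n_states, n_actions = R.shape      # P: (S, A, S) transitions, R: (S, A) rewards
    V = np.zeros(n_states)
    for t in range(1, iters + 1):
        beta = c * t ** 2              # dynamic schedule; diverging beta recovers max
        Q = R + gamma * (P @ V)        # (S, A) one-step lookahead values
        V = np.array([boltzmann(Q[s], beta) for s in range(n_states)])
    return V
```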
"{\"title\": \"Okay paper but relatively thin novelty\", \"review\": \"Summary: This work demonstrates that, although the Boltzmann softmax operator is not a non-expansion, a proposed dynamic Boltzmann operator (DBS) can be used in conjunction with value iteration and Q-learning to achieve convergence to V* and Q*, respectively. This time-varying operator replaces the traditional max operator. The authors show empirical performance gains of DBS+Q-learning over Q-learning in a gridworld and DBS+DQN over DQN on Atari games.\", \"novelty\": \"(1) The error bound of value iteration with the Boltzmann softmax operator and convergence & convergence rate results in this setting seem novel. (2) The novelty of the dynamic Boltzmann operator is somewhat thin, as (Singh et al. 2000) show that a dynamic weighting of the Boltzmann operator achieves convergence to the optimal value function in SARSA(0). In that work, the weighting is state-dependent, so the main algorithmic novelty in this paper is removing the dependence on state visitation for the beta parameter by making it solely dependent on time. A question for the authors: How does the proof in this work relate to / differ from the convergence proofs in (Singh et al. 2000)?\", \"clarity\": \"In the DBS Q-learning algorithm, it is unclear under which policy actions are selected, e.g. using epsilon-greedy/epsilon-Boltzmann versus using the Boltzmann distribution applied to the Q(s, a) values. If the Boltzmann distribution is used then the algorithm that is presented is in fact expected SARSA and not Q-learning. The paper would benefit from making this clear.\", \"soundness\": \"(1) The proof of Theorem 4 implicitly assumes that all states are visited infinitely often, which is not necessarily true with the given algorithm (if the policy used to select actions is the Boltzmann policy). (2) The proof of Theorem 1 uses the fact that |L(Q) - max(Q)| <= log(|A|) / beta, which is not immediately clear from the result cited in McKay (2003). (3) The paper claims in the introduction that \\u201cthe non-expansive property is vital to guarantee \\u2026 the convergence of the learning algorithm.\\u201d This is not necessarily the case -- see Bellemare et al., Increasing the Action Gap: New Operators for Reinforcement Learning, 2016.\", \"quality\": \"(1) I appreciate that the authors evaluated their method on the suite of 49 Atari games. This said, the increase in median performance is relatively small, the delta being about half that of the increase due to double DQN. The improvement in mean score in great part stems from a large improvement occurs on Atlantis.\\n\\nThere are also a number of experimental details that are missing. Is the only change from DQN the change in update rule, while keeping the epsilon-greedy rule? In this case, I find a disconnect between the stated goal (to trade off exploration and exploitation) and the results. Why would we expect the Boltzmann softmax to work better when combined to epsilon-greedy? If not, can you give more details e.g. how beta was annealed over time, etc.?\\n\\nFinally, can you briefly compare your algorithm to the temperature scheduling method described in Fox et al., Taming the Noise in Reinforcement Learning via Soft Updates, 2016?\", \"additional_comments\": \"(1) It would be helpful to have Atari results provided in raw game scores in addition to the human-normalized scores (Figure 5). 
(2) The human normalized scores listed in Figure 5 for DQN are different than the ones listed in the Double DQN paper (Van Hasselt et al, 2016). (3) For the DBS-DQN algorithm, the authors set beta_t = ct^2 - how is the value of c determined? (4) Text in legends and axes of Figure 1 and Figure 2 plots is very small. (5) Typo: citation for MacKay - Information Theory, Inference and Learning Algorithms - author name listed twice.\\n\\nSimilarly, if the main contribution is DBS, it would be interesting to have a more in-depth empirical analysis of the method -- how does performance (in Atari or otherwise) vary with the temperature schedule, how exploration is affected, etc.?\\n\\nAfter reading the other reviews and responses, I still think the paper needs further improvement before it can published.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"I don't think the theoretical results represent a significant advance\", \"review\": \"The writing and organization of the paper are clear. Theorem 1 seems fine but is straightforward to anyone who has studied this topic and knows the literature. Corollary one may be technically wrong (or at least it doesn't follow from the theorem), though this can be fixed by replacing the lim with a limsup. Theorem 4 seems to be the main result all the work is leading up to, but I think this is wrong. Stronger conditions are required on the sequence \\\\beta_t, along the lines discussed in the paragraph on Boltzmann exploration in Section 2.2 of Singh et al 2000. The proof provided by the authors relies on a \\\"Lemma 2\\\" which I can't find in the paper. The computational results are potentially interesting but call for further scrutiny. Given the issues with the theoretical results, I think its hard to justify accepting the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Boltzmann Weighting Done Right in Reinforcement Learning\", \"review\": \"I liked this paper overall, though I feel that the way it is pitched to the reader is misguided. The looseness with which this paper uses 'exploration-exploitation tradeoff' is worrying. This paper does not attack that tradeoff at all really, since the tradeoff in RL concerns exploitation of understood knowledge vs deep-directed exploration, rather than just annealing between the max action and the mean over all actions (which does not incorporate any notion of uncertainty). Though I do recognize that the field overall is loose in this respect, I do think this paper needs to rewrite its claims significantly. In fact it can be shown that Boltzmann exploration that incorporates a particular annealing schedule (but no notion of uncertainty) can be forced to suffer essentially linear regret even in the simple bandit case (O(T^(1-eps)) for any eps > 0) which of course means that it doesn't explore efficiently at all (see Singh 2000, Cesa-Bianchi 2017). Theorem 4 does not imply efficient exploration, since it requires very strong conditions on the alphas, and note that the same proof applies to vanilla Q-learning, which we know does not explore well.\\n\\nI presume the title of this paper is a homage to the recent 'Boltzmann Exploration Done Right' paper, however, though the paper is cited, it is not discussed at all. That paper proved a strong regret bound for Boltzmann-like exploration in the bandit case, which this paper actually does not for the RL case, so in some sense the homage is misplaced. Another recent paper that actually does prove a regret bound for a Boltzmann policy for RL is 'Variational Bayesian Reinforcement Learning with Regret Bounds', which also anneals the temperature, this should be mentioned.\\n\\nAll this is not to say that the paper is without merit, just that the main claims about exploration are not valid and consequently it needs to be repositioned. If the authors do that then I can revise my review.\\n\\nAlgorithm 2 has two typos related to s' and a'.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkgnpiR9Y7 | Recycling the discriminator for improving the inference mapping of GAN | [
"Duhyeon Bang",
"Hyunjung Shim"
] | Generative adversarial networks (GANs) have achieved outstanding success in generating high-quality data. Focusing on the generation process, existing GANs learn a unidirectional mapping from the latent vector to the data. Later, various studies point out that the latent space of GANs is semantically meaningful and can be utilized in advanced data analysis and manipulation. In order to analyze the real data in the latent space of GANs, it is necessary to investigate the inverse generation mapping from the data to the latent vector. To tackle this problem, the bidirectional generative models introduce an encoder to establish the inverse path of the generation process. Unfortunately, this effort leads to the degradation of generation quality because the imperfect generator interferes with the encoder training and vice versa.
In this paper, we propose an effective algorithm to infer the latent vector based on existing unidirectional GANs while preserving their generation quality.
It is important to note that we focus on increasing the accuracy and efficiency of the inference mapping without influencing the GAN performance (i.e., the quality or the diversity of the generated samples).
Furthermore, utilizing the proposed inference mapping algorithm, we suggest a new metric for evaluating the GAN models by measuring the reconstruction error of unseen real data.
The experimental analysis demonstrates that the proposed algorithm achieves more accurate inference mapping than the existing method and provides a robust metric for evaluating GAN performance. | [
"gans",
"inference mapping",
"data",
"latent vector",
"discriminator",
"generation process",
"latent space",
"generation quality",
"gan performance",
"algorithm"
] | https://openreview.net/pdf?id=HkgnpiR9Y7 | https://openreview.net/forum?id=HkgnpiR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HklRyXoZeE",
"BygIKlaj6X",
"SyldBAQtaQ",
"ByeRQAmKam",
"HJes5EG46X",
"S1xZsyw16Q",
"ByxcyXr53m",
"rylzg9Tu2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544823526286,
1542340734293,
1542172224147,
1542172198174,
1541837970606,
1541529496958,
1541194465747,
1541097962400
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper838/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper838/Authors"
],
[
"ICLR.cc/2019/Conference/Paper838/Authors"
],
[
"ICLR.cc/2019/Conference/Paper838/Authors"
],
[
"ICLR.cc/2019/Conference/Paper838/Authors"
],
[
"ICLR.cc/2019/Conference/Paper838/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper838/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper838/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a method to learn inference mapping for GANs by reusing the learned discriminator's features and fitting a model over these features to reconstruct the original latent code z. R1 pointed out the connection to InfoGAN which the authors have addressed. R2 is concerned about limited novelty of the proposed method, which the AC agrees with, and lack of comparison to a related iGAN work by Zhu et al. (2016). The authors have provided the comparison in the revised version but the proposed method seems to be worse than iGAN in terms of the metrics used (PSNR and SSIM), though more efficient. The benefits of using the proposed metrics for evaluating GAN quality are also not established well, particularly in the context of other recent metrics such as FID and GILBO.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Marginal novelty; the advantage over existing methods is not convincing enough\"}",
"{\"title\": \"Author Response to Reviewer 1\", \"comment\": \"Thanks for your comments,\\nWe thank the Reviewer1 for constructive feedback. Reviewer1 suggests the comparison among the results of various unidirectional GANs to strengthen the experimental evaluation. We agree that the comparison mentioned by Reviewer1 helps improving the quality of the paper. To reflect this comment, we now revise our manuscript to supplement the qualitative comparison among unidirectional GANs.\\nFurthermore, Reviewer1 suggests that you should provide the inception score or MS-SSIM of other datasets. It would be helpful to show that the existing metrics are meaningful only on the specific datasets. As we mentioned on section 3.2, MS-SSIM scores are almost zero for CIFAR10 or Fashion MNIST, which have multiple classes. Meanwhile, the inception score utilizes the classifier, thus it is not appropriate to apply onto the single class dataset (i.e., CelebA). As a result, the scores from those cases would present meaningless scores. Due to limited space, we did not include them in our main manuscript. \\nWe hope our additional results can answer your comments.\"}",
"{\"title\": \"Author Response to Reviewer 2_1\", \"comment\": \"Thanks for your comments,\\nWe appreciate the constructive feedback from Reviewer 2. As Reviewer 2 pointed out, we missed the comparison with the relevant existing work (iGAN) which also adopts independent inference mapping method to the GAN training; our experimental evaluation focused on the limitation of the bidirectional generative models that trains generation and inference mapping together. We agree that the comparison with iGAN and the ablation study using the pre-trained feature extractor would improve the quality of our paper. To reflect this valuable feedback, we conduct the experiment as follows. Please note that all changes are reflected in our revision (page 6-7, 13). \\n\\niGAN compares three different inference prediction methods. 1) The first method is a direct inference mapping through na\\u00efve encoders. 2) The second method is to adopt the non-convex optimization, which minimizes the pixel wise difference between the original image and the image generated from the estimated latent vector. 3) Finally, their proposal is a hybrid method that carries out encoder mapping followed by the non-convex optimization. Similarly, we compare the proposed method with 1) na\\u00efve encoder mapping, 2) iGAN (na\\u00efve encoder followed by optimization) and 3) hybrid method using the proposed method (discriminator with CN followed by optimization). Furthermore, to investigate the capability of the discriminator as the feature extractor, we directly compare our inference mapping (discriminator with CN network) with the VGG-16 based inference mapping (pre-trained VGG with CN network). Note that the pre-trained VGG16 network is trained on ImageNet 1k. The quantitative evaluation is summarized as follows. Please also see qualitative results on our revision. \\n\\n\\n Ours\\tna\\u00efve\\tiGAN\\tHybrid+Ours\\tVGG16\\nSSIM\\t0.5214\\t0.4872\\t0.5624\\t0.5710\\t\\t0.5199\\nPSNR\\t16.85\\t15.28\\t18.21\\t18.52\\t\\t16.81\"}",
"{\"title\": \"Author Response to Reviewer 2_2\", \"comment\": \"Our algorithm successfully synthesizes the attributes in various faces, unlike the na{\\\\\\\"i}ve encoder. As reported in iGAN, we confirm that adopting the non-convex optimization for inference mapping significantly enhances the quantitative score (i.e., SSIM and PSNR). It is because the non-convex optimization directly minimizes the pixel-wise difference between test images and reconstructed images; the goal of the non-convex optimization is nearly equivalent to the goal of PSNR. Hence, the hybrid method improves the PSNR of any baseline encoder mapping. When we replace the na{\\\\\\\"i}v encoder with the proposed inference mapping, its quantitative results are better than iGAN. It is because our inference mapping predicts more accurate initial latent vector.\\n\\nHowever, these quantitative results do not exactly match with qualitative results. The quantitative results demonstrate that hybrid inference mapping is the most effective among all others. Meanwhile, the qualitative results from the hybrid methods are generally blur or have missing important components (e.g., eye glasses, mustache, gender, wrinkles, detailed hair lines, etc.). Because the hybrid inference mapping optimizes the inference mapping in the image domain (i.e., minimizing the pixel-wise difference), the inference network finally chooses the latent vector corresponding to an average-like image. Note that there exists average-like faces among many possible faces. We conjecture that, although the generator can produce sharp images, the hybrid inference mapping strategically selects average-like faces to reduce its loss function. Meanwhile, our method (also VGG16 based inference mapping) optimizes the inference mapping in the latent domain. Thus, our inference results are sharp and better preserve semantically important attributes. From examples shown in Fig ~\\\\ref{figure04} and Appendix Fig \\\\ref{appendix fig}, pixel-wise loss based methods (i.e., iGAN and Hybrid+ours) fail to capture glasses, but latent vector loss based methods (i.e., ours and VGG16) reproduce the glasses. In fact, for the same reason, VEEGAN chooses to minimize a reconstruction loss on the latent vector to solve mode collapse. \\nBy replacing the discriminator as feature extractor by the pre-trained VGG16 network, we observe that its inference results are also as sharp and realistic as our results. However, considering the semantic similarity between the original and reconstructed image, our inference mapping can restore unique attributes (e.g., mustache, race, age, etc.) better than the VGG16 based inference mapping. Moreover, utilizing the pre-trained VGG16 require additional memory overhead while our method does not. In terms of network capacity, VGG16 has the much deeper network than the discriminator. Thus, we conclude that the proposed inference mapping is more efficient than the VGG16 based inference mapping. From these results, we confirm that recycling discriminator as a feature extractor is effective for improving inference accuracy and reducing the computational complexity.\\n\\nIn conclusion, we adopt the inference mapping through optimizing on the latent space because this preserves the properties of GANs that generate sharp images and semantic attributes. 
Moreover, since preserving the semantic attributes could be interpreted as how well GAN understand the images, the non-convex optimization that prefers average images even omitting semantic attributes is not appropriate for suggesting the GAN evaluation metric. In the same context, since recycling the discriminator directly affect the inference mapping accuracy, our method is suitable for evaluating whole GAN framework (i.e., both the generator and the discriminator).\"}",
"{\"title\": \"Author Response to Reviewer 3\", \"comment\": \"Thanks for your comments,\\nFirst of all, we would like to solve important misunderstanding about our methodology. Especially, the major difference of our work compared to three techniques mentioned by Reviewer 3 is summarized as follows. Unlike the existing techniques (i.e., InfoGAN, CycleGAN and ALICE), our inference mapping using the connection network is completely independent of both generator updates and discriminator updates. We would like to stress that this is why we could maintain the quality of generation in the baseline GAN model; other inference mapping techniques influence the generation quality.\\n\\nOur key idea of inference model is to reuse the discriminator network as feature extractor and learn a direct mapping from the feature vector to the GAN's latent space, as mentioned by Reviewer 1. Because we utilize the well-educated discriminator to extract the meaningful features, the direct mapping is learned after the training of both generator and discriminator end. On the other hand, infoGAN, CycleGAN, and ALICE intend to design the inference mapping that controls the generation process; the above three techniques all affect the generator updates for their own purpose. Again, we emphasize that the reconstruction loss of the connection network is different from the conditional entropy of infoGAN and the cycle consistency of CycleGAN and ALICE in two aspects. First, our model does not affect both the generator update and the discriminator update. Secondly, our goal is to build the inference mapping without affecting the baseline GAN performance. Meanwhile, three techniques develop new GAN models for learning interpretable representation (infoGAN), unsupervised domain transfer (CycleGAN), or alleviating the mode collapse (ALICE).\\nWe decide to separate the generation mapping (from z to x) and the inference mapping (from x to z) because of the convergence issue reported in bidirectional GANs. For example, ALICE reported in appendix E.3, \\u201cAs a trade-off between theoretical optimum and practical convergence, we employ feature matching, and thus our results exhibit a slight blurriness characteristic\\u201d. We believe that this convergence issue is caused by the error propagation from both generator and encoder. Furthermore, their inception score is 6.015, which is slightly worse than the average inception of unidirectional GAN, 6.5. This experimental result demonstrates that existing bidirectional methods (either using cycle consistency or joint distribution matching) have shown the limited generation performance. Although we did not compare the performance to ALICE directly in the paper, the experimental results of ALI/BiGAN could represent the limitation of bidirectional GANs trained for joint distribution matching.\\n\\nWe agree that the reconstruction loss could be interpreted as the negative log likelihood, and this is equivalent to mutual information and conditional entropy. However, such a scheme is applicable for techniques that handle the joint distribution (both generation and inference or both generation and latent code) matching. Meanwhile, our model disconnects the inference mapping from generation process. \\n\\nWe now revise our manuscript to clarify those of difference of ours and three existing techniques. We hope our explanations can answer your concerns.\"}",
"{\"title\": \"Missing important connections to existing works\", \"review\": \"Paper Summary:\\nThis paper proposes to reconstruct the generated images to the their corresponding latent code. As claimed, the goal is to improve the accuracy and efficiency of inference mapping better than other inference mapping techniques, while maintaining their generation quality.\\n Instead of using an independent encoder, the authors propose to share the encoder parameters with the discriminator: a Connection Network (CN) is built on top of the features extracted by the discriminator. The weight-sharing machisme shows better performance in Figure 1.\", \"the_proposed_method_has_two_benefits\": \": a) manipulating the image by disentangling the latent space and b) suggesting a new metric for assessing the GAN model by measuring reconstruction errors of real data.\", \"general_comments\": \"In term of algorithm, the paper essentially adds the conscontruction term (CN) to the standard GAN loss, and partially shares the weights of the \\u201cencoder\\u201d and discriminator. However, it is almost identical to the existing works, which are NOT cited, and the connections are not discussed.\", \"connection_to_infogan\": \"To relate the generated images to the latent code, the proposed method employs the reconstruction loss, InfoGAN employs the mutual information. Note that reconstruction loss = negative log likelihood, and effectively is equivalent to Mutual Information and Conditional Entropy in the case. Please see the discussion in Lemma 3 and Appendix A of [3] for detailed discussion. Further, InfoGAN has proposed to to sharing weights of the encoder and discriminator, exactly the same with this submission. The claimed advantage is to disentangle the latent space. It is not surprise at all, once the authors see the connection to InfoGAN, which was originally proposed to disentangle the latent codes.\", \"connection_to_cyclegan\": \"CycleGAN consists of four losses: two reconstruction losses and two standard GAN losses. As shown in Section 4 of [3] \\u201cConnecting ALI and CycleGAN\\u201d, one reconstruction loss and one standard GAN loss is sufficient to achieve CycleGAN\\u2019s objective, the other two losses would only help to accelerate. In another word, the proposed method is exactly half of the CycleGAN losses.\\n\\nThe author mention in Abstract that \\u201cthe bidirectional generative models introduce an encoder to establish the inverse path of the generation process. Unfortunately, their inference mapping does not accurately predict the latent vector from the data because the imperfect generator rather interferes the encoder training.\\u201d This is the non-identifiable issue of ALI/BiGAN discovered in [3]. Please clarify. \\n\\nThe proposed method should compare with [1] and [2] in great detail, to demonstrate its own advantages. Given the missing literature, the current experimental comparisons seem not that meaningful, because the baseline methods are not really the competitors. \\n\\nOne interesting contribution of the submission is to consider the reconstruction errors to measure the quality of GANs. 
To my best knowledge, it is original.\", \"references\": \"[1] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, NIPS 2016\\n[2] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, ICCV 2017\\n[3] ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching, NIPS 2017\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Insufficient experimental validation and marginal novelty\", \"review\": \"The paper proposes using the GAN discriminator for inference mapping, mapping an image to a latent code that would be used to generate the image by the encoder, based on the argument that the discriminator can be used as a powerful feature extractor because it has seen both real and fake data during training. The paper compares the proposed approach to several approaches that train an inference model together with a generator.\\n\\nWhile the paper compares its approach to several baselines, they are not the most relevant ones. In fact, the most relevant baseline is not cited and compared. As a result, the novelty of the paper is not justified. Specifically, the baselines the paper compare to are mostly methods that jointly learn an inference model and a generation model, while the proposed approach first learns a generation model and then fits an inference model (it is referred to as the connection network in the paper). In this regard, the paper should compare its approach to methods that first learns a generation model and then learns an inference model. The iGAN work by Zhu et. al. ECCV 2016 is arguably most relevant approach. Especially, they also use the discriminator architecture for the inverse mapping. Unfortunately, the work is neither cited nor compared.\\n\\nIn addition, pretrained networks such as VGG and ResNet have been known to be powerful feature extractor. It would be ideal the paper can compare the proposed approach to that using VGG and ResNet for finding the z for a given image.\\n\\nFinally, the paper seems to lack of comprehensive knowledge on how the inference mapping has been investigated in the GAN literature. For example, the statement that \\\"BEGAN (Berthelot et al., 2017) made the first attempt to solve the inverse mapping from x to z using the non-convex optimization\\\" in the introduction section is incorrect. The scheme is used in at least two 2016 papers (Liu and Tuzel NIPS 2016 and Zhu et. al. ECCV 2016).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"a novel approach to GAN inference mapping\", \"review\": \"This paper describes a novel method to provide inference mapping for GAN networks. The idea is to reuse the discriminator network's feature vector (output of layer before last) and learn a direct mapping to the GAN's latent space. This can be done very efficiently since the dimensionality of both layers are relatively small. Also, the mapping does not interfere with the learning process of the GAN itself and thus can be applied on top of any GAN method without affecting its performance.\\n\\nInference mapping is useful in the GAN context for several reasons that are well described in the paper. First it allows to more efficiently generate \\\"edited\\\" images as the mapping provides a good starting point in the latent space. Second it provides a sound way to evaluate GAN's performance as the reconstruction of a given image through the inference mapping and the generator provides auto-encoder-like capabilities. Comparison of GAN models have been difficult due to a lack of adequate evaluation technique. This paper proposes a novel evaluation scheme that is both fair and technically simple.\\n\\nIn the experimental part, the authors first compare their approach to the 'naive encoder' approach where the last layer of the discriminator is removed after training, a feature layer of the size of the encoder's latent space is added, and the rest of the discriminator's layers are frozen. The proposed approach outperforms the naive encoder approach on the CelebA dataset. The second set of experiments investigates reconstruction accuracy of various GAN models. Figure 2 shows reconstructed images for 7 GANs and 36 examples from 3 datasets. Unfortunately, no subjective comparison can be attempted since the examples are different for each GAN. In Figure 3, editing in performed on the CelebA dataset, but again, subjective comparison among the GAN's is precluded by the fact that different examples are chosen. This oversight does not affect the paper's relevance, since those comparison would be purely subjective, however it would add some visual interpretation to the quantitative comparison given in table 1. I also wish the authors would have provided the inception score for FashMNIST and CelebA and also provide the more recent FID (Frechet Inception Distance). Inception scores are trained on ImageNET and are too commonly applied to CIFAR-10 and CelebA. It would be good to compare them against the proposed method on those datasets to show that there are not good for datasets other than those on which they were trained.\\n\\nThe article is technically sound. The citations are adequate. The English is fine with some extraneous articles being the only issue. The article lacks a graphic for the architecture of the system and many of the figures are too small to interpret when printed out. Also there's a typo on table 1. where the inception score for WGAN-GP on CelebA should be 6.869 and not 0.6869.\\n\\nOverall, I find this paper provides a simple, novel significant method for evaluating GAN models and making better use of their latent space arithmetic editing capabilities. Due to the algorithm's simplicity, most of the paper is devoted to experiments and discussions.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1x2aiRqFX | Differentiable Expected BLEU for Text Generation | [
"Wentao Wang",
"Zhiting Hu",
"Zichao Yang",
"Haoran Shi",
"Eric P. Xing"
] | Neural text generation models such as recurrent networks are typically trained by maximizing data log-likelihood based on cross entropy. Such training objective shows a discrepancy from test criteria like the BLEU metric. Recent work optimizes expected BLEU under the model distribution using policy gradient, while such algorithm can suffer from high variance and become impractical. In this paper, we propose a new Differentiable Expected BLEU (DEBLEU) objective that permits direct optimization of neural generation models with gradient descent. We leverage the decomposability and sparsity of BLEU, and reformulate it with moderate approximations, making the evaluation of the objective and its gradient efficient, comparable to common cross-entropy loss. We further devise a simple training procedure with ground-truth masking and annealing for stable optimization. Experiments on neural machine translation and image captioning show our method significantly improves over both cross-entropy and policy gradient training. | [
"text generation",
"BLEU",
"differentiable",
"gradient descent",
"maximum likelihood learning",
"policy gradient",
"machine translation"
] | https://openreview.net/pdf?id=S1x2aiRqFX | https://openreview.net/forum?id=S1x2aiRqFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1x0O4pelV",
"SyxopJ_537",
"rJluy5-qn7",
"S1lpbM__nX",
"rygg9AB-hm",
"BJe1ZVt3iQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544766581561,
1541205955081,
1541179872430,
1541075461174,
1540607624052,
1540293622590
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper837/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper837/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper837/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper837/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper837/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a differentiable approximation of BLEU score, which can be directly optimized using SGD. The reviewers raised concerns about (1) direct evaluation of the quality of the approximation and (2) the significance of the experimental results. There is also a concern (3) regarding the significance of BLEU score in the first place, and whether BLEU is the right metric that one needs to directly optimize. The authors did not provide a response, and based on the concerns above (especially 1-2) I believe that the paper does not pass the bar for acceptance at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs improvement\"}",
"{\"title\": \"Too many approximations in formulation, few experiments and discussion\", \"review\": \"This paper proposed a differentiable metric for text generation tasks inspired by BLEU and a random training method by the Gumbel-softmax trick to utilize the proposed metric. Experiments showed that the proposed method improves BLEU compared with simple cross entropy training and policy gradient training.\", \"pros\": [\"The new metric provides a direct perspective on how good the conjunction of a generated sentence is, which has not been provided other metric historically used on language generation tasks, such as cross-entropy.\"], \"cons\": \"* Too many approximations that blur the relationship between the original metric (BLEU) and the derived metric.\\n* Not enough experiments and poor discussion. Authors should consume more space in the paper for experiments.\\n\\nThe formulation of the metric consists of many approximations and it looks no longer BLEU, although the new metric shares the same motivation: \\\"introducing accuracy of n-gram conjunction\\\" to evaluate outputs. Selecting BLEU as the starting point of this study seems not a reasonable idea. Most approximations look gratuitously introduced to force to modify BLEU to the final metric, but choosing an appropriate motivation first may conduct more straightforward metric for this purpose.\\n\\nIn experiments on machine translation, its setting looks problematic. The corpus size is relatively smaller than other standard tasks (e.g., WMT) but the size of the network layers is large. This may result in an over-fitting of the model easily, as shown in the results of cross-entropy training in Figure 3. Authors mentioned that this tendency is caused by the \\\"misalignment between cross entropy and BLEU,\\\" however they should first remove other trivial reasons before referring an additional hypothesis.\\nIn addition, the paper proposed a training method based on Gumbel softmax and annealing which affect the training stability through additional hyperparameters and annealing settings. Since the paper provided only one training case of the proposed method, we couldn't discuss if the result can be generalized or just a lucky.\\n\\nIf the lengths of source and target are assumed as same, the BP factor becomes always 1. Why the final metric (Eq. 17) maintains this factor?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper misses important references. It chooses an empirical setup which prevents comparison with related work, and the report results on de-en seem weak. The proposed approach does not bound or estimate how far from BLEU is the proposed approximation. This means that the authors need to justify empirically that it preserves correlation with BLEU, which is not shown in the paper.\", \"review\": \"The paper reads well. It has a few but crucial missing references. The motivation is easy to understand and a relevant problem to work on. The main weaknesses of the work lies in its very loose derivations, and its weak empirical results.\\n\\nFirst on context/missing references: the author ignores approaches optimizing BLEU with log linear models (Franz Och 2003), and the structured prediction literature in general, both for exact (Tsochantaridis et al 2004) and approximate search (Daume and Marcu 2005). This type of approach has been applied to NMT recently (Edunov et al 2018). Your paper also misses important references addressing BLEU optimization with reinforcement strategies (Norouzi et al 2016) or (Bahdanau et al 2017). Although not targeting BLEU directly (Wiseman and Rush 16) is also a reference to cite wrt optimizing search quality directly. \\n\\nOn empirical results, you chose to work IWSLT in the de-en direction while most of the literature worked on en-de. It prevents comparing your results to other papers. I would suggest to switch directions and/or to report results from other methods (Ranzato et al 2015; Wiseman and Rush 2016; Norouzi et al 2016; Edunov et al 2018). De-en is generally easier than en-de (generating German) and your BLEU scores are particularly low < 25 for de-en while other methods ranges in 26-33 BLEU for en-de (Edunov et al 2018).\\n\\nOn the method itself, approximating BLEU with a continuous function is not easy and the approach you take involves swapping function composition and expectation multiple times in a loose way. You acknowledge that (7) is unprincipled but (10) is also problematic since this equation does not acknowledge that successive ngrams overlap and cannot be considered independent. Also, the dependence of successive words is core to NMT/conditional language models and the independence hypothesis from the footnote on page 4 can be true only for a bag of word model. Overall, I feel that given the shortcuts you take, you need to justify that your approximation of BLEU is still correlated with BLEU. I would suggest to sample from a well trained NMT system to collect several hypotheses and to measure how well your BLEU approximation correlate with BLEU. How many times BLEU decides that hypA > hypB but your approximation invert this relation? is it true for large difference, small difference of BLEU score? at low BLEU score, high BLEU score?\\n\\nFinally, you do not mention the distinction between expected BLEU \\\\sum_y P(y|x) BLEU(y, ref) and the BLEU obtained by beam search which only look at (an estimate of) the most likely sequence y* = argmax P(y|x) . Your approach and most reinforcement strategy targets optimizing expected BLEU, but this has no guarantee to make BLEU(y*, ref) any better. Could you report both an estimate of expected BLEU and beam BLEU for different methods? In particular, MERT (), beam optimization (Wiseman and Rush 2016) and structured prediction (Edunov et al 2018) explicitly make this distinction. 
This is not a side issue as this discussion is in tension with your motivations.\", \"paper_summary\": \"Neural translation systems optimizes training data likelihood, not the end metric of interest BLEU. This work proposes to approximate BLEU with a continuous, differentiable function that can be optimized during training.\", \"review_summary\": \"The paper misses important references. It chooses an empirical setup which prevents comparison with related work, and the report results on de-en seem weak. The proposed approach does not bound or estimate how far from BLEU is the proposed approximation. This means that the authors need to justify empirically that it preserves correlation with BLEU, which is not shown in the paper.\\n\\nMissing references\\n\\nAn Actor-Critic Algorithm for Sequence Prediction (ICLR 2017) Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio\\n\\nHal Daume III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. ICML 2005.\\n\\nSergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato\\nClassical Structured Prediction Losses for Sequence to Sequence Learning, NAACL 18\\n\\nMinimum Error Rate Training in Statistical Machine Translation Franz Josef Och. 2003 ACL\\n\\nI. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, Support Vector Machine Learning for Interdependent and Structured Output Spaces, ICML 2004.\\n\\nMohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, Reward Augmented Maximum Likelihood for Neural Structured Prediction, 2016\\n\\nSequence-to-Sequence Learning as Beam-Search Optimization, Sam Wiseman and Alexander M. Rush., EMNLP 2016\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"official review\", \"review\": \"The paper describes a differentiable expected BLEU objective which computes expected n-gram precision values by ignoring the brevity penalty.\", \"clarity\": \"Section 3 of the paper is very technical and hard to follow. Please rewrite this section to be more accessible to a wider audience by including diagrams and more explanation.\\n\\nOriginality/signifiance: the idea of making BLEU differentiable is a much researched topic and this paper provides a nice idea on how to make this work.\", \"evaluation\": \"\", \"the_evaluation_is_not_very_strong_for_the_following_reasons\": \"1) The IWSLT baselines are very weak. For example, current ICLR submissions, report cross-entropy baselines of >33 BLEU, whereas this paper starts from 23 BLEU on IWSTL14 de-en (e.g., https://openreview.net/pdf?id=r1gGpjActQ), even two years ago baselines were stronger: https://arxiv.org/abs/1606.02960\\n\\n2) Why is policy gradient not better? You report a 0.26 BLEU improvement on IWSLT de-en, which is tiny compared to what other papers achieved, e.g., https://arxiv.org/abs/1606.02960, https://arxiv.org/abs/1711.04956\\n\\n3) The experiments are on some of the smallest translation tasks. IWSLT is very small and given that the method is supposed to be lightweight, i.e., not much more costly than cross-entropy, it should be feasibile to run experiments on larger datasets.\\n\\nThis makes me wonder how significant any improvements would be with a good baseline and on a larger datasets.\\n\\nAlso, which test set are you using?\\n\\nFinally, in Figure 3, why is cross-entropy getting worse after only ~2-4K updates? Are you overfitting? \\nPlease reference this figure in the text.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks and response\", \"comment\": [\"Thanks for your valuable comment. We didn\\u2019t notice the latest update of [1] which was released on arXiv only around one month prior to this submission. With the new results in [1], we will update our statement of \\u201ctoy tasks\\u201d accordingly. We appreciate for pointing this out! As in the paper, we\\u2019ve said \\u201cour formulation uses a couple of similar approximations or assumptions\\u201d with [1] and (Casas et al., 2018). Here we emphasize the difference of our work with [1] as below:\", \"Our formulation stems from a different and clear intuition of leveraging the sparsity of BLEU score, and decomposes the goal into multiple derivation steps with clear motivations.\", \"We\\u2019ve developed a mask-and-anneal training process to stabilize the training. We also describe key implementation and analyze the computational complexity which is comparable to common cross-entropy training.\", \"Our formulation naturally leads to Gumbel-softmax decoding for differentiable BLEU training and gradient backpropagation along time steps, while in [1] it\\u2019s unclear to us what decoding strategy is used.\", \"We believe the claim of \\u201clower bound\\u201d in [1] could be problematic. For example, in Eq.(20) in [1], the inequality does not necessarily hold since `min(1, c/x)` is not a convex function of x.\", \"The difference of experimental results on IWSLT\\u201914 can be attributed to different data preprocessing procedures and model configurations (e.g., input to each decoding step, #layers in encoder, etc). We will release the code.\"]}",
"{\"comment\": \"Hi!\\n\\nComparing with [1], derivation of DEBLEU objective looks very similar to \\\"lower bound\\\" (LB) in [1] (it should be noted however, that your derivation is much easier to follow). What is a general difference between DEBLEU and LB of [1]?\\n\\nYou refer [1] as \\\"made preliminary attempts to develop differentiable approximations of BLEU for neural model training, but only studied on toy tasks\\\", however, the latest version of [1] in arXiv (dated August 23) includes experiments on IWSLT'14 and WMT'14 datasets, which show improvement over both cross-entropy and direct BLEU objectives. Moreover, [1] reports significantly higher BLEU scores with a smaller network for all objectives.\\n\\n[1] Vlad Zhukov, Eugene Golikov, Maksim Kretov - Differentiable lower bound for expected BLEU score. arXiv:1712.04708v4\", \"title\": \"Difference between DEBLEU and LB of [1]? Novelty and results of experiments on IWSLT'14?\"}"
]
} |
|
H1xipsA5K7 | Learning Two-layer Neural Networks with Symmetric Inputs | [
"Rong Ge",
"Rohith Kuditipudi",
"Zhize Li",
"Xiang Wang"
] | We give a new algorithm for learning a two-layer neural network under a very general class of input distributions. Assuming there is a ground-truth two-layer network
y = A \sigma(Wx) + \xi,
where A, W are weight matrices, \xi represents noise, and the number of neurons in the hidden layer is no larger than the input or output, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network. The only requirement on the input x is that it is symmetric, which still allows highly complicated and structured input.
Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions. We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks. Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions. | [
"Neural Network",
"Optimization",
"Symmetric Inputs",
"Moment-of-moments"
] | https://openreview.net/pdf?id=H1xipsA5K7 | https://openreview.net/forum?id=H1xipsA5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxkxZJHl4",
"SkgfHt_PRX",
"BygkLw_vAm",
"HJlQpUuwC7",
"H1eC-EI6n7",
"rJxkjtHThX",
"H1xu6gRmhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545036006816,
1543108921904,
1543108422529,
1543108282952,
1541395461696,
1541392791017,
1540772031815
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper836/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper836/Authors"
],
[
"ICLR.cc/2019/Conference/Paper836/Authors"
],
[
"ICLR.cc/2019/Conference/Paper836/Authors"
],
[
"ICLR.cc/2019/Conference/Paper836/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper836/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper836/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Although the paper considers a somewhat limited problem of learning a neural network with a single hidden layer, it achieves a surprisingly strong result that such a network can be learned exactly (or well approximated under sampling) under weaker assumptions than recent work. The reviewers unanimously recommended the paper be accepted. The paper would be more impactful if the authors could clarify the barriers to extending the technique of pure neuron detection to deeper networks, as well as the barriers to incorporating bias to eliminate the symmetry assumption.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A very interesting theoretical contribution on learning 1-hidden-layer neural networks\"}",
"{\"title\": \"Thank you for your review!\", \"comment\": \"Thanks a lot for your efforts in the review process. We really appreciate your valuable suggestions and detailed comments.\\n\\n-generalize technique to shifted input or bias term.\\n\\nOur current technique does not generalize to the case where the input is shifted or there is a bias term in the output. We think this is a very interesting open question and we are now discussing that in the conclusion. Note that many previous works (Goel et al. 2017, Ge et al. 2017) also do not handle bias terms. It has been empirically observed that for many networks fixing the bias to be 0 only makes the performance slightly worse.\\n\\n-generalize purifying idea to general depth neural network.\\n\\nOur idea basically removes the last linear layer given good understanding of what happens in previous layers. If there are results that can learn a p-layer network, it is possible that similar ideas could allow it to learn a p+1-layer network whose last layer is linear. However, there are no general algorithms for learning a neural network under symmetric input for p > 1, so we leave this as an open problem.\\n\\n-sample complexity of the algorithm\\n\\nWe've added a third plot in Figure 2 of the performance of our algorithm as a function of the dimension of A and W. One thing to keep in mind is that the number of parameters also grow quadratically with the dimension of A and W, so we expect the squared error to grow quadratically with the dimension of A and W. To account for this phenomenon, we've plotted the square root of the error normalized by the dimension of A and W so as to more clearly illustrate the extent to which our algorithm's actual performance deteriorates for high-dimensional A and W. \\n\\nAs illustrated by the flatness of the error curves, the performance of our algorithm remains stable as the dimension of A and W grows from 10 to 32. Note that this is much better than what our theory predicts and obtaining tighter sample complexity is an open problem. We believe that truly determining the exponent of our algorithm's asymptotic performance will necessitate evaluating the algorithm with much larger A and W. Such an experiment will require considerable computational resources and is beyond the present scope of the work.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you very much for reviewing our paper. We really appreciate your positive reviews and insightful comments.\\n\\nThanks for pointing out several related papers, we have already added these papers in our updated version. As you mentioned in the review, our technique cannot immediately apply to the case where the dimension of the output is smaller than the number of hidden units. This is a very interesting question, and we leave it as an open problem in the paper.\"}",
"{\"title\": \"Thanks for your review!\", \"comment\": \"Thanks a lot for your positive feedback! It\\u2019s definitely an important question to study general depth neural network, as we also discuss in the conclusions. We hope that our technique could be further improved to help to learn deeper neural networks.\"}",
"{\"title\": \"interesting, technical results on learning one hidden layer NN\", \"review\": \"This paper pushes forward our understanding of learning neural networks. The authors show that they can learn a two-layer (one hidden layer) NN, under the assumption that the input distribution is symmetric. The authors convincingly argue that this is not an excessive limitation, particularly in view of the fact that this is intended to be a theoretical contribution. Specifically, the main result of the paper relies on the concept of smoothed analysis. It states that give data generated from a network, the input distribution can be perturbed so that their algorithm then returns an epsilon solution.\\n\\nThe main machinery of this paper is using a tensor approach (method of moments) that allows them to obtain a system of equations that give them their \\u201cneuron detector.\\u201d The resulting quadratic equations are linearized through the standard lifting approach (making a single variable in the place of products of variables). \\n\\nThis is an interesting paper. As with other papers in this area, it is somewhat difficult to imagine that the results would extend to tell us about guarantees on learning a general depth neural network. Nevertheless, the tools and ideas used are of interest, and while already quite difficult and sophisticated, perhaps do not yet seem stretched to their limits.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Requiring a symmetric distribution and ReLU activation functions seems to be too strong.\", \"review\": \"This paper studies the problem of learning the parameters of a two-layer (or one-hidden layer) ReLU network $y=A\\\\sigma(Wx)$, under the assumption that the distribution of $x$ is symmetric. The main technique here is the \\\"pure neuron detector\\\", which is a high-order moment function of a vector. It can be proved that the pure neuron detector is zero if and only if the vector is equal to the row vector of A^{-1}. Hence, we can \\\"purify\\\" the two layer neural network into independent one layer neural networks, and solve the problem easily.\\n\\nThis paper proposes interesting ideas, supported by mathematical proofs. This paper contains analysis of the algorithm itself, analysis of finding z_i's from span(z_i z_i^T), and analysis of the noisy case. \\nThis paper is reasonably well-written in the sense that the main technical ideas are easy to follow, but there are several grammatical errors, some of which I list below. I list my major comments below:\\n\\n1) [strong assumptions] The result critically depends on the fact that $x$ is symmetric around the origin and the requirement that activation function is a ReLU. Lemma 1, 2, 3 and Lemma 6 in the appendix are based on these two assumptions. For example, the algorithm fails if $x$ is symmetric around a number other than zero or there is a bias term (i.e. $y=A \\\\sigma(Wx+b) + b'$ ). This strong assumptions significantly weaken the general message of this paper. Add a discussion on how to generalize the idea to more general cases, at least when the bias term is present. \\n\\n2) [sample efficiency] Tensor decomposition methods tend to suffer in sample efficiency, requiring a large number of samples. In the proposed algorithm (Algorithm 2), estimation of $E[y \\\\otimes x^{\\\\otimes 3}]$ and $E[y \\\\otimes y \\\\otimes (x \\\\otimes x)]$ are needed. How is the sample complexity with respect to the dimension? The theory in this paper suggests a poly(d, 1/\\\\epsilon) sample efficiency, but the exponent of the poly is not known. In Section 4.1, the authors talk about the sample efficiency and claim that the sample efficiency is 5x the number of parameters, but this does not match the result in Figure 2. In the left of Figure 2, when d=10, we need no more than 500 samples to get error of W and A very small, but in the right, when d=32, 10000 samples can not give very small error of W and A. I suspect that the required number of samples to achieve small error scales quadratically in the number of parameters in the neural network. Some theoretical or experimental investigation to identify the exponent of the polynomial on d is in order. Also, perhaps plotting in log-y is better for Figure 2.\\n\\n3) The idea of \\\"purifying\\\" the neurons has a potential to provide new techniques to analyze deeper neural networks. 
Explain how one might use the \\\"purification\\\" idea for deeper neural networks and what the main challenges are.\", \"minor_comments\": \"\\\"Why can we efficiently learn a neural network even if we assume one exists?\\\" -> \\\"The question of whether we can efficiently learn a neural network still remains generally open, even when the data is drawn from a neural network.\\\"\\n\\n\\\"with simple input distribution\\\" -> \\\"with a simple input distribution\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A Strong Theory Paper\", \"review\": \"This is a strong theory paper and I recommend to accept.\", \"paper_summary\": \"This paper studies the problem of learning a two-layer fully connected neural network where both the output layer and the first layer are unknown. In contrast to previous papers in this line which require the input distribution being standard Gaussian, this paper only requires the input distribution is symmetric. This paper proposes an algorithm which only uses polynomial samples and runs in polynomial time. \\nThe algorithm proposed in this paper is based on the method-of-moments framework and several new techniques that are specially designed to exploit this two-layer architecture and the symmetric input assumption.\\nThis paper also presents experiments to illustrate the effectiveness of the proposed approach (though in experiments, the algorithm is slightly modified).\", \"novelty\": \"1. This paper extends the key observation by Goel et al. 2018 to higher orders (Lemma 6). I believe this is an important generalization as it is very useful in studying multi-neuron neural networks.\\n2. This paper proposes the notation, distinguishing matrix, which is a natural concept to study multi-neuron neural networks in the population level.\\n3. The \\u201cPure Neuron Detector\\u201d procedure is very interesting, as it reduces the problem of learning a group of weights to a much easier problem, learning a single weight vector.\", \"clarity\": \"This paper is well written.\", \"major_comments\": \"My major concern is on the requirement of the output dimension. In the main text, this paper assumes the output dimension is the same as the number of neurons and in the appendix, the authors show this condition can be relaxed to the output dimension being larger than the number of neurons. This is a strong assumption, as in practice, the output dimension is usually 1 for many regression problems or the number of classes for classification problems. \\nFurthermore, this assumption is actually crucial for the algorithm proposed in this paper. If the output dimension is small, then the \\u201cPure Neuron Detection\\u201d step does work. Please clarify if I understand incorrectly. If this is indeed the case, I suggest discussing this strong assumption in the main text and listing the problem of relaxing it as an open problem.\", \"minor_comments\": \"1. I suggest adding the following papers to the related work section in the final version:\", \"https\": \"//arxiv.org/abs/1712.00779\\nThese paper are relatively new but very relevant. \\n\\n2. There are many typos in the references. For example, \\u201crelu\\u201d should be ReLU.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
ryGs6iA5Km | How Powerful are Graph Neural Networks? | [
"Keyulu Xu*",
"Weihua Hu*",
"Jure Leskovec",
"Stefanie Jegelka"
] | Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance. | [
"graph neural networks",
"theory",
"deep learning",
"representational power",
"graph isomorphism",
"deep multisets"
] | https://openreview.net/pdf?id=ryGs6iA5Km | https://openreview.net/forum?id=ryGs6iA5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"-1k_enXfHS",
"SygoZPCxGE",
"Syl605aefE",
"BJe7wWXkGV",
"r1gaJ5CAW4",
"BkxaHGJ1lV",
"SJeYuLH41V",
"B1et5yXJ14",
"H1gRfJQJy4",
"rkxt80KARX",
"rJxY7atRCX",
"S1egpyLCAX",
"H1xW3wVA0X",
"BygALwN0CX",
"B1xYlDERRX",
"S1ljyieA0m",
"BkxHNhu607",
"rJx9PavpA7",
"BkgrFw3iRQ",
"BJeRhEhiRX",
"ryeaD73iRX",
"SJxgqg2N0m",
"Byg6WVL4Rm",
"B1l2k48VCQ",
"H1xLhQUEA7",
"BkxOBQ8VA7",
"SyeZ3MU4AX",
"r1xnFfUVA7",
"SklslP4NRX",
"HJlp9LgVRX",
"Hke3DIJNAQ",
"ByxsKEkV07",
"SklgY8Ky07",
"rkl2Q1Qi6X",
"B1xLcPaKpQ",
"H1gkUYX76Q",
"rkeW9FDnnQ",
"HJgMSgUqhQ",
"BJgIGNTP27",
"BJgd4DhjiQ",
"HJgofotjs7",
"HkeLiClijQ",
"S1xDl1Ovim",
"rklL6jDwjm",
"SJlxQFl4i7",
"Bkg-9GNXi7",
"HJltX0els7",
"HJgxQah1sQ",
"Byx_XQZJjQ",
"H1lpE_ACc7",
"r1xAhRX05X",
"Syl6WEQn9m",
"r1g3L4zh5X",
"BygugHlo9Q",
"H1grV6yjcQ",
"r1giX8SqqQ",
"HJgCBjM5cm",
"HyxT16oFqm",
"HJe9n0eFcQ",
"SyguON1Ocm",
"S1evjSRPc7"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1701158369946,
1546868483095,
1546865364963,
1546756442900,
1546738148695,
1544643140994,
1543947889159,
1543610256914,
1543610133677,
1543573073021,
1543572769452,
1543557047534,
1543550888865,
1543550805652,
1543550705511,
1543535330650,
1543502892803,
1543499105762,
1543387004724,
1543386294001,
1543385956771,
1542926472386,
1542902788639,
1542902755532,
1542902702059,
1542902592083,
1542902441436,
1542902404170,
1542895346864,
1542878869238,
1542874724148,
1542874243033,
1542588023824,
1542299428215,
1542211470477,
1541777734751,
1541335432986,
1541197881800,
1541030926432,
1540241200041,
1540229906883,
1540193950066,
1539960559487,
1539959741603,
1539733783996,
1539682952868,
1539472929119,
1539456279852,
1539408671520,
1539397685211,
1539354293615,
1539220485390,
1539216467762,
1539142895538,
1539140908964,
1539098147013,
1539087174225,
1539058916586,
1539014322113,
1538942064198,
1538938270593
],
"note_signatures": [
[
"~Yangliao_Geng1"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"~Octavian_Eugen_Ganea1"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer2"
],
[
"(anonymous)"
],
[
"~Christopher_Morris1"
],
[
"~Christopher_Morris1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper835/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"~Christopher_Morris1"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper835/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for the excellent work. I have a little doubt about the mean operation of GCN in Eq. (2.3). It seems somewhat different from that in the original paper (Kipf & Welling, 2017). In (Kipf & Welling, 2017), since symmetric normalization is used for the adjacency matrix (or Laplacian matrix), it is actually a weighted average of the node features of a node's neighborhood (different nodes may have different weights). Therefore, using the mean operation to abstract seems unreasonable since the mean operation has permutation invariance, but the weighted average does not in general.\\n\\nFurthermore, the proof in Lemma 2 (\\\"The same input, i.e. neighborhood features, generates the same output\\\") seems to rely on the permutation invariance of the mean operation. If a weighted average is used, will it affect the correctness of Lemma 2's proof?\\n\\n\\nKipf, T. N., & Welling, M. (2016, November). Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations.\", \"title\": \"about the mean operation in Eq. (2.3) for Graph Convolutional Networks (GCN)\"}",
"{\"title\": \"Thanks everyone!\", \"comment\": \"Thank you everyone for reading and liking our paper, as well as giving many good suggestions. Happy holidays!\"}",
"{\"comment\": \"I believe you can restate lemma 5 to hold for any finite or infinite (but countable) multiset with the assumption that each element x appears at most N - 1 times in this multiset. Then, the same function f(x) = N^{-Z(x)} can be used, and the convergent series \\\\sum_{x \\\\in X} f(x) would always be injective. This can be proven by induction and it reduces to the following case: if we have 2 series: S = \\\\sum_{i >= 0} x_i * N^{-i} and T=\\\\sum_{i >= 0} y_i * N^{-i} , with N-1 > x_i,y_i >=0, then, if x_0 < y_0, one can prove that S < T.\", \"title\": \"this assumption in lemma 5 can be relaxed\"}",
"{\"title\": \"Thanks!\", \"comment\": \"We appreciate your great suggestion, and we will clarify our assumption accordingly.\"}",
"{\"comment\": \"In appendix D and E you state and use the fact that \\\"because multisets X are finite, there exists a number N s.t. |X| < N for all (finite) X\\\". This is mathematically incorrect, since arbitrarily large finite sets cannot have an upper bound in size. In fact, the set of all finite sets containing elements in a countable set is uncountable. The authors might want to clearly state the assumption that they deal with finite sets of cardinality at most N.\", \"title\": \"The sizes of all finite multisets cannot be bounded\"}",
"{\"metareview\": \"Graph neural networks are an increasingly popular topic of research in machine learning, and this paper does a good job of studying the representational power of some newly proposed variants. The framing of the problem in terms of the WL test, and the proposal of the GIN architecture is a valuable contribution. Through the reviews and subsequent discussion, it looks like the issues surrounding Theorem 3 have been resolved, and therefore all of the reviewers now agree that this paper should be accepted. There may be some interesting followup work based on studying depth, as pointed out by reviewer 1, but this may not be an issue in GIN and is regardless a topic for future research.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Excellent theoretical contribution to the graph neural network literature\"}",
"{\"title\": \"Response to R1 (updated)\", \"comment\": \"Thank you for the detailed response. Regarding the depth of the networks, GIN does not suffer from the curse of depth, i.e. we can use many layers, because we apply architectures similar to JK-Nets (specifically, JK-Concat) in Xu et al. 2018 for readout as described in Section 4.2. We conducted graph classification experiments using 5-layers GNNs (with JK-net) and they work nicely in our experiments. Moreover, as R1 nicely suggested, the influence distribution expansion phenomenon in Xu et al. 2018 indeed would apply to GraphSAGE, GIN etc, though the transition probabilities may not follow canonical random walks when MLP is applied. That being said, Xu et al. 2018 is a great work and we like it. We just wanted to clarify that Theorem 1 was about influence distribution rather than node features, thus, there would be no issue for GIN in terms of invertibility. We hope you are happy with our clarification.\\n\\nRegarding cross-validation, thanks for letting us know the work. We will mention it in the final version. To clarify, we use the boldface to indicate the best performance in terms of mean accuracy. As we mentioned in the rebuttal, the graph classification benchmark datasets are extremely small compared to other standard deep learning benchmarks for computer vision or NLP, e.g. ImageNet. That\\u2019s why standard deviations are high for all the methods (including the previous methods). We do believe we all should move beyond the conventional evaluation on these small datasets, but that is beyond the scope of this paper.\\n\\nThank you again for your nice suggestions and detailed reviews. We hope our clarification regarding the analysis addresses your concerns.\"}",
"{\"title\": \"Effectiveness of Eqn (4.1) and GINs\", \"comment\": \"Thank you for the clarification. We would like to first clarify that the letters (phi, f) in Corollary 6 and Theorem 3 do not have direct correspondence; but we can easily rearrange Eqn (4.1) to obtain the corresponding (phi, f) in the form of Theorem 3. Intuitively, what Theorem 3 asks for is to injectively represent a pair of a node and its neighbors, so the injective function, g(c, X), corresponds to (phi, f) in Theorem 3.\\n\\nFurthermore, our motivation for designing Eqn (4.1), i.e. GIN-0 and GIN-eps, rather than simply applying concatenation, is for better empirical performance. In our preliminary experiments, we found such concatenation was harder to train compared to our simple GINs (both GIN-0 and GIN-eps) and achieved lower test accuracy than GINs. The simplicity of GINs brings better performance in practice. We leave the extensive investigation and comparison to our future work.\"}",
"{\"title\": \"Response to R2, Part II\", \"comment\": \"Regarding experimental setup, we emphasize again that the graph classification datasets are extremely small compared to standard benchmarks in computer vision and NLP, e.g. ImageNet. Therefore, using a (single) validation dataset to select hyper-parameters is very unstable (for instance, MUTAG only has 180 data points, so each validation set only contains 18 data points). Therefore, following some of the previous deep learning work, we reported the ordinary cross-validation accuracy (the same hyper-parameters, such as number of epochs and minibatch size, were used for the entire folds). That being said, we understand the existing benchmarks and evaluation for graph classification are limited and we should all move on to large datasets as an anonymous reader pointed out in https://openreview.net/forum?id=ryGs6iA5Km¬eId=ryGs6iA5Km¬eId=H1gkUYX76Q. In the final version, we will also state our experimental setup more clearly. Thank you for your nice suggestion.\"}",
"{\"title\": \"Experimental setup, part 2\", \"comment\": \"Even though the same approach was used in a previous paper, it is not convincing. Typically the results vary greatly between the epochs. Picking the one with the best validation accuracy leads to unrealistic results. Also the comparison to the results of the WL kernel is not meaningful since it was obtained with an SVM, where the number of hyperparameters is less. Therefore, you cannot pick the best value from such a large set of values. It is questionable to speak of \\\"generalization\\\" in the discussion of your results.\\n\\nI would like to propose to state the method you used more clearly in the paper and check the experimental setup used to obtain the results you have copied from other papers.\\n\\nSince the main contribution of the paper is theoretical, I will keep my rating, although I think that the experimental setup is a clear weak point.\"}",
"{\"title\": \"Eq 4.1 and g(c, X)\", \"comment\": \"What I meant was, in g(c, X) you have two functions phi and f, which is the form required by Theorem 3. The problem of the counter-example comes in when you used a single function instead of 2 functions, which ignores the difference between the node at the center and all its neighbors.\\n Introducing an epsilon is a technical solution to this problem (in my opinion), I think you actually don't need this because the original form of g(c, X) is enough, and using a single function rather than 2 does not save you much.\", \"note\": \"I think of phi and f as MLPs, \\\",\\\" as concat, and \\\"{}\\\" as some aggregation operator, like sum.\"}",
"{\"comment\": \"Neural network-based graph embedding for cross-platform binary code similarity detection\", \"https\": \"//arxiv.org/pdf/1708.06525.pdf\", \"title\": \"You mean an MLP parameterization like equation (2) of this paper?\"}",
"{\"title\": \"Response to R3\", \"comment\": \"Thank you for the encouraging review! We respond to your further comments below.\\n\\n1) We probably do not fully understand your comment regarding Eqn (4.1) and g(c,X). Especially, could you please clarify your meaning of \\u201csimplify g(c, X)\\u201d? In our GIN in Eqn (4.1), we compose phi and f in Corollary 6.\\n\\n2) We will further edit related work according to your suggestions. Interaction Networks is a great work and we like it.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"Thank you for the response. We address your question regarding experimental setup. First, past work in graph classification report the best cross-validation accuracy as what we did in our experiments [3]. The graph classification dataset sizes are often small, and therefore using a (single) validation dataset to select hyper-parameters is very unstable (for instance, MUTAG only has 180 data points, so each validation set only contains 18 data points. Compare this to standard deep learning benchmark sets like MNIST that has 70000 data points)**. Therefore, in our paper, we reported cross-validation accuracy for fair comparison to the previous methods. Moreover, our GNN variants and the WL kernel all follow the same experimental setups, so the comparison among them is definitely meaningful; consequently, our conclusion regarding the expressive power is also meaningful. We are planning for future work to evaluate our method on larger datasets, e.g. those mentioned in the post by one of our readers Mr. Christopher Morris, in https://openreview.net/forum?id=ryGs6iA5Km¬eId=B1xLcPaKpQ.\\n\\nWe have thoroughly addressed all the concerns of R2. If Reviewer2 still has other questions or concerns regarding our work, we are happy to answer them. \\n\\n**[5] uses a test set, but its experiments focus on the larger datasets.\"}",
"{\"title\": \"Response to R1 (updated)\", \"comment\": \"Thank you for the detailed response. Regarding the depth of the networks, GIN does not suffer from the curse of depth, i.e. we can use many layers, because we apply architectures similar to JK-Nets (specifically, JK-Concat) in Xu et al. 2018 for readout as described in Section 4.2. We conducted graph classification experiments using 5-layers GNNs (with JK-net) and they work nicely in our experiments. Moreover, as R1 nicely suggested, the influence distribution expansion phenomenon in Xu et al. 2018 indeed would apply to GraphSAGE, GIN etc, though the transition probabilities may not follow canonical random walks when MLP is applied. That being said, Xu et al. 2018 is a great work and we like it. We just wanted to clarify that Theorem 1 was about influence distribution rather than node features, thus, there would be no issue for GIN in terms of invertibility. We hope you are happy with our clarification.\\n\\nRegarding cross-validation, thanks for letting us know the work. We will mention it in the final version. To clarify, we use the boldface to indicate the best performance in terms of mean accuracy. As we mentioned in the rebuttal, the graph classification benchmark datasets are extremely small compared to other standard deep learning benchmarks for computer vision or NLP, e.g. ImageNet. That\\u2019s why standard deviations are high for all the methods (including the previous methods). We do believe we all should move beyond the conventional evaluation on these small datasets, but that is beyond the scope of this paper.\\n\\nThank you again for your nice suggestions and detailed reviews. We hope our clarification regarding the analysis addresses your concerns.\"}",
"{\"title\": \"Further corrections\", \"comment\": \"For 10-fold cross validation, it is important to highlight that it tends to underestimate the confidence interval range (see (Bengio and Grandvalet, 2004)). Important to let readers know that there is more uncertainty in the results, which was not quantified.\\n\\nI also find the use of boldface confusing. Summing and subtracting the confidence intervals and a lot more models overlap. \\n\\nBengio, Yoshua, and Yves Grandvalet. \\\"No unbiased estimator of the variance of k-fold cross-validation.\\\" Journal of machine learning research 5, no. Sep (2004): 1089-1105.\"}",
"{\"title\": \"Thanks for the response.\", \"comment\": \"I thank the authors for the revision of the paper and the response. I have readjusted my rating.\\n\\nThe solution to the question raised by the counter example in the new equation (4.1) is a technical one, I would rather prefer not to simplify the function g(c, X) which uses two functions phi and f in this form, as it really doesn't buy us much.\\n\\nW.r.t. related work, the statement \\\"Not surprisingly, some building blocks of GIN, e.g. sum aggregation\\nand MLP encoding, also appeared in other models\\\" (section 6) is not fair and misleading. As it is not the case that \\\"some building blocks\\\" also appear in other models, but rather some other models, like interaction networks, already contains \\\"all\\\" the essential building blocks (sum, MLP, etc.) presented in this paper. This doesn't undermine the theoretical contribution of this paper, but the authors should be fair to previous work.\"}",
"{\"title\": \"Experimental setup\", \"comment\": \"Thanks for your detailed reply. The mentioned weak points 1, 2 and 4 were appropriately addressed by the authors and I have increased my rating accordingly.\\n\\nRegarding point 3.\\n>> We selected an epoch with the highest cross-validation accuracy (averaged over 10 folds) following what previous deep learning papers do, e.g., [3][4].\\n\\nI think there is no common approach to this and the experimental setup in previous papers differs. Many papers use nested cross-validation, others use cross-validation with a fixed validation set, e.g., [5]. Also in [4] a validation seems to be used.\\nIf I understand your method correctly, you report the best accuracy value obtained for any combination of hyperparameters -- instead of applying the classifier with the hyperparameters that work best for a validation set to the test set. In my opinion the approach is problematic. In particular, comparing to accuracy results obtained with a different experimental setup is not meaningful.\\n\\n[5] Hierarchical Graph Representation Learning with Differentiable Pooling\\nRex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec \\nNeurIPS 2019\"}",
"{\"title\": \"No. GIN is different.\", \"comment\": \"Thanks for your interest. GIN is different from the paper you mentioned. Critically, GIN uses MLP while Dai et al. uses perceptron.\\nThere are many GNN variants and we leave the analysis of some of them for the future work. Note that the graph Laplacian normalization can decrease the representational power of GNNs, but it can also induce useful inductive bias for the applications of interest, e.g, semi-supervised learning. Therefore, we can not draw a decisive conclusion about the normalization only from the perspective of representational power. It is our future work to investigate generalization, inductive bias and optimization of different GNN variants.\"}",
"{\"comment\": \"According to the current paper, can one say that all the graph Laplacian normalizations in previous GCN are not essential? Or redundant in some sense?\\nWhat's really essentially in graph neural network is equation (4.1) for GIN, or equation (10) for structure2vec in Dai et al.? \\nAnd a potentially really different representation power will probably come from a different message passing update as in eq (14) & (15) in Dai et al.?\", \"title\": \"All the graph Laplacian normalizations in previous GCN are not essential?\"}",
"{\"comment\": \"GIN is essentially the same as the graph neural network in\\nequation (10) of this paper: \\nDai et al. ICML 2016. Discriminative Embeddings of Latent Variable Models for Structured Data\", \"https\": \"//arxiv.org/pdf/1603.05629.pdf\\n\\nA discussion of this related work, and compare to structure2vec in their datasets will help improve the paper. \\n\\nAlso how about the other message passing version of graph neural network developed in Dai et al. (eq (14) & (15)) ? Will it be more powerful?\", \"title\": \"GIN is essentially the same as structure2vec?\"}",
"{\"title\": \"Concern is addressed above.\", \"comment\": \"We thoroughly addressed the counter-example and the related concern in https://openreview.net/forum?id=ryGs6iA5Km¬eId=SyeZ3MU4AX\\nFurthermore, we revised our paper.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer for the positive review and constructive feedback! We are glad that the reviewer likes our paper.\\n\\nFirst, we completely agree that the ability of GNNs to capture structural similarity of graphs is very important besides their discriminative power, and we believe this is one of the most important benefits of using GNNs over WL kernel. We have now made this point clearer in Section 4. Furthermore, we emphasized that we do consider node features to lie in R^d so that they can capture the similarity. The subtlety is that (as R1 nicely pointed out), we need a common assumption that node features at each layer are from countable set in R^d (not from the entire R^d). This is satisfied if the input node features are from a countable set, because for a graph neural network, countability propagates across all layers in a GNN. We leave uncountable node input features for future work and add a more detailed discussion in Section 4 of the revised paper. \\n\\nIn the following, we respond to R3\\u2019s other helpful comments and suggestions:\\n\\n1. RE: Architecture is similar to, e.g., Interaction Networks\\nThank you for the pointers. Some of our GIN\\u2019s building blocks, e.g. sum and MLP indeed appeared in other architectures. We emphasize that while previous work tend to be somewhat ad-hoc in designing GNN architectures, our main emphasis is on deriving our GIN architecture based on the theoretical motivation. In Section 6 of the revised version, we mention related GNN architectures and discuss the differences. \\n\\n2. RE: Using MLP for mean or max in the initial step is more fair?\", \"we_think_there_might_be_a_slight_misunderstanding_here\": \"as we discussed with concrete examples in Section 5.2, mean or max pooling are inherently incapable of capturing the multiset information regardless of the use of MLP. Especially, in our experiments, we use one-hot encodings as input node features, so the use of MLP on top of them does not increase the discriminative power of mean/max pooling.\\n\\n3. RE: Training set results optimized for test performance?\\nThe results were not actually optimized for test performance. Instead, we used exactly the same configurations for all the datasets: For all the GNNs, the same configurations were used across datasets: 5 GNN layers (including the input layer), hidden units of size 64, minibatch of size 128, and 0.5 dropout ratio. For the WL subtree kernel, we set the number of iterations to 4, which is comparable to the 5 GNN layers. We clarified this in Figure 6 of the revised paper.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for the detailed reviews. In the general post, we have addressed your chief concern regarding our original Eqn (4.1) and part of Theorem 3a). We sincerely hope R2 can revisit the rating in light of our revision and response.\\n\\nAnswers to R2\\u2019s other questions:\\n1. RE: Standard deviations\\nWe added the standard deviations in Table 1. Note that on many datasets, standard deviations are fairly high for the previous methods as well as our methods due to the small training datasets. Our GINs achieved statistically significant improvement on the two REDDIT datasets where the number of graphs are fairly large. We leave the empirical evaluation on larger datasets to future work, but we believe that more expressive GNN models like our GINs can benefit more from larger training data by better capturing important discriminative structural features.\\n\\n2. RE: Discussion on related work\\nFollowing the suggestion, in Section 6 of the revised paper, we discuss the difference of our work to e.g., [1][2]. In short, the important difference is that [1][2] both focus on the specific GNN architectures, while we provide a general framework for analyzing and characterizing the expressive power of a broad class of GNNs in the literature.\\n\\n3. RE: Experimental setup and stopping criteria\\nWe selected an epoch with the highest cross-validation accuracy (averaged over 10 folds) following what previous deep learning papers do, e.g., [3][4]. This is for fair comparison as most previous papers on graph classification only report cross-validation accuracy.\\n\\n4. RE: Network width\\nOur proofs focus on existential analysis, i.e., there exists a way we can represent multisets with unique representations. Thus, the network width necessary for the functions provided in our proofs may only serve as an upper bound. For practical purposes, in our experiments, we found 32 or 64 hidden units are usually sufficient to perfectly fit the training set.\\n\\n[3] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), pp. 2014\\u20132023, 2016.\\n[4] Sergey Ivanov and Evgeny Burnaev. Anonymous walk embeddings. In International Conference on Machine Learning (ICML), pp. 2191\\u20132200, 2018.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for the detailed reviews and constructive feedback! We are glad that the reviewer finds our paper interesting. We apologize for the somewhat delayed response; it took us time to run additional experiments and add more careful analysis so that we can present an improved and more polished paper to everyone. We appreciate your understanding.\\n\\nIn the following, we first address the main concern on equating the WL test and the WL-GNNs by showing its validity under a mild practical assumption. Then, we clarify the misunderstanding regarding the random walk mixing behavior of the WL-GNNs, showing that our GIN architecture does not suffer from such behavior. Finally, we discuss confidence intervals of our experimental results and also address other concerns of the reviewer.\\n\\n1. RE: Validity of equating the WL test operating on countable sets to the WL-GNN operating on uncountable sets.\\nThe reviewer makes a great observation that countability of node features is essential and necessary for our theory, and we acknowledge that our current Theorem 3 and Lemma 5 are built on the common assumption that input node features are from a countable universe. We have now made this clear in our paper. We also filled in a technical gap/detail to address R1\\u2019s concern that after the first iteration, we are in an uncountable universe: this actually does not happen. We can show that for a fixed aggregation function, hidden node features also form a countable universe, because the countability of input node features recursively propagates into deeper layers. We also added a rigorous proof for this (Lemma 4 in our revised paper). As the reviewer nicely suggests, for the uncountable setting, it would be useful to have measure-theoretic analysis, which we leave for future work. Often input node features in graph classification applications (e.g., chemistry, bioinformatics, social) come from a countable (in fact, finite) universe, so our assumption is realistic. In the revised version, we clearly stated our assumptions at the beginning of Section 3 and have added further discussion on the relation between the WL test and WL-GNN after Theorem 3.\\n\\n2. RE: Random walk mixing behavior of the GIN architecture.\", \"we_think_there_might_be_a_slight_misunderstanding_here\": \"(1) Theorem 1 of (Xu et al., 2018) relates the random walk to the influence distribution in Definition 3.1 of (Xu et al., 2018), rather than the precise node representation, and (2) the analysis of Theorem 1 is specific to the GCN architecture (Kipf & Welling, 2017), where 1-layer perceptrons with mean pooling are used for neighbor aggregation. The GIN architecture does not suffer from the problem of random walk mixing because (1) Theorem 1 in (Xu et al., 2018) shows the influence distribution converges to a random walk limit distribution, however, it does not yet tell whether the node representations converge to the random walk limit distribution. Thus, \\u201cthe walker forgetting where it started\\u201d may not happen. (2) The GIN architecture uses MLPs rather than the 1-layer perceptron in (Kipf & Welling, 2017). 
The analysis in (Xu et al., 2018) specifically applies to models using 1-layer perceptrons, and therefore, it is not clear whether this analysis still holds for GIN.\\nFurthermore, the reviewer is concerned with a possibly exploding value due to the sum aggregation, but this can be avoided because we have different learnable neural networks at each layer that can scale down the summed output (also, in practice, we did not observe such explosion).\\n\\n3. RE: Confidence interval in experiments\\nFollowing the suggestion, we added the standard deviations in Table 1. Because of space limit, we only added standard deviation in Table1, and confidence interval can be obtained via the standard deviation. The confidence interval of 95% is mean 0.754*std, and confidence interval of 90% is mean 0.611*std. Note that on many datasets, standard deviations are fairly high for the previous methods as well as our methods due to the small training datasets. Our GINs achieved statistically significant improvement on the two REDDIT datasets where the number of graphs are fairly large. We leave the empirical evaluation on larger datasets to future work, but we believe that more expressive GNN models like our GINs can benefit more from larger training data by better capturing important discriminative structural features.\\n\\n4. Other comments:\\nWe also thank the reviewer for many other comments to strengthen our paper. In the revised paper, we clarified that WL-test cannot distinguish e.g., regular graphs. We discussed in Section 5.5 that the expressive power of other poolings such as LSTM and attention pooling can be analyzed under our framework, but we leave the empirical investigation to future work.\"}",
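The constants 0.754 and 0.611 quoted above are consistent with a Student-t interval over the 10 fold accuracies when std denotes the ddof=0 (numpy-default) standard deviation, so that the standard error of the mean is std / sqrt(n - 1). That interpretation is our assumption, but the arithmetic itself is easy to verify:

```python
# Check of the stated CI factors under the assumption that std is the ddof=0
# standard deviation of the 10 fold accuracies (so SE = std / sqrt(n - 1)).
from scipy.stats import t

n = 10
print(round(t.ppf(0.975, df=n - 1) / (n - 1) ** 0.5, 3))  # 0.754 -> 95% factor
print(round(t.ppf(0.950, df=n - 1) / (n - 1) ** 0.5, 3))  # 0.611 -> 90% factor
```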
"{\"title\": \"Paper update overview\", \"comment\": \"We sincerely appreciate all the reviews, they give positive and high-quality comments on our paper with a lot of constructive feedback. We also thank the many anonymous commenters for their interest and helpful discussion. In the revised paper, we did our best to address the concerns and suggestions to strengthen our paper. We sincerely hope reviewers revisit the rating in light of our revision and response. The following summarizes our revisions. Please see our rebuttal for the detailed discussion.\", \"major_revisions\": \"1. An anonymous reader and Reviewer2 made a clever observation that our original GIN aggregation in Eq. (4.1) and Theorem 3a-Eqn.2) of the initial submission and cannot distinguish certain corner case graphs that the WL test can distinguish. We fixed this issue by 1) making a slight modification to GIN\\u2019s aggregation in Eq. (4.1), and 2) adding Corollary 6 to show Eqn. (4.1) in the revised paper is as powerful as WL, 3) removed Theorem 3a-Eqn.2). The modified GIN aggregation smoothly extrapolates the original one, avoids the corner case, and can be shown to be as powerful as the WL test. We conducted extensive experiments on the modified GIN to further validate our model. (see below https://openreview.net/forum?id=ryGs6iA5Km¬eId=ryGs6iA5Km¬eId=SyeZ3MU4AX for our detailed response.)\\n\\n2. Based on the helpful comments of Reviewer1 on countability of node features, we have now made our setting much clearer: We clarified the common assumption that input node features are from a countable set, and we further added Lemma 4 in the revised paper to prove that the hidden node features are also always from a countable set under this assumption. With the countability assumption, it is meaningful to discuss injectiveness in Theorem 3, and our countability assumption used in Lemma 5 (universal multiset functions) always holds. We also provided detailed discussion on the correspondence between the WL test and WL-GNN under the countability assumption, validating our theory to equate those two.\", \"minor_revisions\": \"1. R3 makes a great point that beyond distinguishing different graphs, it is equally important for GNNs to capture their structural similarity. We have already mentioned this point after Theorem 3. We now made this clearer and added a more detailed discussion in Section 4.\\n2. In response to R3 and R2, we added Section 6 for detailed discussion of related work.\\n3. Following the suggestions by R1 and R2, we added standard deviations in the experiments.\\n4. Based on the great insight by an anonymous reader, we added discussion on the expressive power of Sum-Linear when the bias term is included.\"}",
"{\"title\": \"Modification of GIN aggregation to address the concern (Part 1).\", \"comment\": \"We begin by acknowledging that Eqn (4.1) and Theorem 3a-Eqn.2) in our initial submission (which does not distinguish the center nodes from their neighbors) were indeed insufficient to be as powerful as the WL test. The example provided by the anonymous reader makes a great point about the corner case. That said, we agree that in order to realize the most powerful GNN, its aggregation scheme needs to distinguish the center node from its neighbors.\\n\\nThe good news is that we can resolve this corner case by making a very simple modification to our GIN aggregation scheme in Eq. (4.1) of the initial submission, so that the modified GIN can provably distinguish the root/center node from its neighbors during the aggregation. This implies that our modified GIN handles the counter-example raised by the anonymous reader, and, more importantly, we can prove that the modified GIN is as powerful as the WL test under the common assumption that the input node features are from a countable universe. In the following, we will explain these points in more detail.\\n\\nFirst, we present a simple update to our current GIN aggregation scheme, and show that it now handles the counter-example provided by the anonymous reader. Our simple modification to the original GIN aggregation in Eq. (4.1) of the initial submission is:\\n\\nh_v^{(k)} = MLP ( (1 + \\\\epsilon) h_v^(k-1) + \\\\sum_{u \\\\in neighbor} h_u^(k-1) ), (**), Eq. (4.1) of the revised paper.\\n\\nwhere \\\\epsilon is a fixed or learnable scalar parameter. We will show that there exist infinitely many \\\\epsilon where the modified GIN (as defined above) is as powerful as WL. Note that setting \\\\epsilon = 0 reduces to our original GIN aggregation in Eq. (4.1) of the initial submission. Thus, the above equation (Eq. (**)) smoothly \\u201cextrapolates\\u201d the original GIN architecture, and with the epsilon term, the modified GIN can now distinguish the center node from its neighbors. Before moving to the formal proof, let us first illustrate how modified GIN handles the counter-example by the anonymous reader:\\n\\n R - R R - G\\n| | v.s. | |\\n G - G G - R\\n\\nAssume we use the one-hot encodings for the input node features, i.e., R = [1, 0] and G = [0, 1]. After 1 iteration of aggregation defined by Eq. (**), our modified GIN obtains the following node representations (before applying MLP in (**)); thus, it successfully distinguishes the two graphs with small non-zero eps=\\\\epsilon.\\n\\n[2+eps, 1] -- [2+eps, 1] [1+eps, 2] -- [2, 1+eps]\\n| | vs. | |\\n| | | |\\n[1, 2+eps] -- [1, 2+eps] [2, 1+eps] -- [1+eps, 2]\\n\\nThe key here is that with non-zero (small) eps, [2+eps, 1] and [2, 1+eps] are now different. In other words, adding \\\\epsilon term in Eq. (**) enables the modified GIN to \\u201cidentify\\u201d the center nodes and distinguish them from neighboring nodes. \\n\\nWith the intuition above, we now give a formal proof for the modified GIN architecture. We start with Lemma 5 (universal multiset functions) in our revised paper, and extend it to Corollary 6 in the revised paper that can distinguish center node from the neighboring nodes. Crucially, the function h(c, X) in Corollary 6 is now the injective mapping over the *pair* of a center node c and its neighbor multiset X. This implies that h(c, X) in Corollary 6 can distinguish center nodes from their neighboring nodes.\\n\\nCorollary 6\\nAssume \\\\mathcalcal{X} is countable. 
There exists a function f: \\\\mathcal{X} \\u2192 R^n so that for infinitely many choices of \\\\epsilon, including all irrational numbers, h(c, X) \\\\equiv (1 + \\\\epsilon) f(c) + \\\\sum_{x \\\\in X} f(x) is unique for each pair (c, X), where c \\\\in \\\\mathcal{X}, and X \\\\subset \\\\mathcal{X} is a finite multiset. \\n\\n---Proof sketch (details are provided in Appendix of the revised paper, see Proof of Corollary 6)\\nThe proof builds on Lemma 5 that constructs the function f that maps each finite multiset uniquely to a rational scalar with N-digit-expansion representation. With the same choice of f from Lemma 5, the irrationality of \\\\epsilon enables us to distinguish the center node representation c from any combination of multiset representation, which is always rational. That is, h(c,X) is unique for each unique pair (c,X).\\n----\\n\\nUsing h(c, X) for the aggregation, we can straightforwardly derive our modified GIN aggregation in Eq. (**) (similarly to the MLP-sharing-across-layer trick described after Lemma 5.) We included a detailed derivation in Section 4.1 of the revised paper.\"}",
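To make the counter-example calculation above reproducible, here is a small numpy sketch — our own illustration, not the authors' code — that runs one round of the pre-MLP aggregation (1 + eps) * h_v + sum over neighbors on both 4-cycles and compares the resulting feature multisets. Any nonzero eps separates this particular pair, although the corollary's guarantee is stated for irrational eps:

```python
# Our own numpy check (not the authors' code) of the eps trick on the two
# 4-cycles: one round of the pre-MLP aggregation (1 + eps) * h_v + sum_u h_u.
import numpy as np

def aggregate(A, H, eps):
    return (1.0 + eps) * H + A @ H    # center term plus neighbor sum

A = np.array([[0, 1, 0, 1],           # 4-cycle 0-1-2-3-0
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

R, G = [1.0, 0.0], [0.0, 1.0]         # one-hot node colors
H1 = np.array([R, R, G, G])           # R-R edge on top, G-G edge on the bottom
H2 = np.array([R, G, R, G])           # colors alternate around the cycle

for eps in [0.0, 0.1]:
    m1 = sorted(map(tuple, aggregate(A, H1, eps)))   # multiset of node features
    m2 = sorted(map(tuple, aggregate(A, H2, eps)))
    print(f"eps={eps}: distinguished = {m1 != m2}")
# eps=0.0 -> False (identical multisets); eps=0.1 -> True (the pair is separated)
```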
"{\"title\": \"Modification of GIN aggregation to address the concern (Part 2).\", \"comment\": \"We also conducted extensive experiments on the modified GIN architecture with Eq. (**), where we learn epsilon by gradient descent. We included the additional results in Section 7 of our revised paper. In terms of training accuracy, which is the main focus of our paper, we observed from our new Figure 4 (in the revised paper) that the modified GIN (we call it GIN-eps in our paper) gives the same results as our original GIN (GIN-0) does, showing no improvement on the training accuracy. This is because the original GIN already fits the training data very well, achieving nearly 100% training accuracy on almost all of our datasets. Consequently, the explicit learning of epsilon in the modified GIN (GIN-eps) does not help much. Interestingly, in terms of the test accuracy, we observed from Table 1 (in the revised paper) that for GIN-eps (modified GIN) there is a slight drop in test accuracy (0.5% on average) compared to GIN-0 (original GIN). Since GIN-0 and GIN-eps showed almost no difference in training accuracy, both have sufficient discriminative power on this data, and the slight drop in test accuracy should be explained by generalization rather than expressiveness. We leave the investigation of the effectiveness of GIN-0 for future work. We want to emphasize that the pooling scheme (sum vs. average vs. max) and mapping scheme (MLP vs. linear) does affect the performance w.r.t. training accuracy, and consequently also affects the test accuracy. Thus, our main findings distinguishing the sum-MLP architecture from other aggregation schemes for maximally expressive GNNs is still valid.\\n\\nAs a final remark, as R1 nicely commented, instead of Eq. (**), a node and neighbors can be concatenated, rather than summed, to achieve the same power as the WL test. Interestingly, as R1 cleverly predicted, in our preliminary experiments, we found such concatenation was harder to train compared to our simple GINs (both GIN-0 and GIN-eps) and achieved lower test accuracy. We leave the extensive investigation and comparison to our future work.\\n\\nWe sincerely appreciate the reviewer and commenter for the great suggestions and insights, which enabled us to further strengthen our work and make our paper stronger. We hope our new version resolves the reviewers\\u2019 main concerns.\"}",
"{\"title\": \"Indeed, Theorem 3 is problematic (and the notation is confusing)\", \"comment\": \"I understood \\\\{h_v^{(k-1)}, h_u^{(k-1)} : u \\\\in \\\\mathcal{N}_v\\\\} as a typo for a set of tuples \\\\{(h_v^{(k-1)}, h_u^{(k-1)}) : u \\\\in \\\\mathcal{N}_v\\\\}. Which would have been fine.\\n\\nBut you are right that looking at the proof in the appendix, it states \\\"difficulty in proving this form of aggregation mainly lies in the fact that it does not immediately distinguish the root or central node from its neighbors\\\" ... which is not how WL is supposed to work. Thanks!\\n\\nOn top of these issues, WL requires a countable space while their approach operates over uncountable spaces (which remains my main concern). Even reverting to aggregation will not fix this mismatch.\"}",
"{\"title\": \"The concern will be addressed soon.\", \"comment\": \"We are now working hard for the thorough response and revision to fully address the concern of Reviewer2 and the anonymous reader. Thanks for your patience.\"}",
"{\"title\": \"Relation to Theorem 3\", \"comment\": \"The counterexample appears to be related to a flaw in Theorem 3, see this comment: https://openreview.net/forum?id=ryGs6iA5Km¬eId=ByxsKEkV07\\n\\nIn my opinion, a statement of the authors (and a revision) is absolutely necessary.\"}",
"{\"title\": \"The counterexample applies to Theorem 3\", \"comment\": \"Theorem 3 states (as a sidline) that it makes no difference whether we consider a) (label(v), {label(u) : uv in E}) or b) just the set {label(v)} \\\\cup {label(u) : uv in E}. The set notation used for b) in the paper is a bit unclear, but this appears to be the intended meaning (from the proof and the approach used in section 4.1). For this set, Equation (4.1) yields an injection as claimed. Therefore the error actually affects Theorem 3, the main result of the paper. Clearly, WL is not perfect (otherwise it would solve the graph isomorphism problem), but that does not make the flaw any less serious. In my opinion, a revision of the authors is absolutely necessary.\"}",
"{\"comment\": \"Thanks for pointing out the dataset. But I believe those datasets contain many small graphs. A dataset of many large graphs is still missing.\", \"title\": \"But we still don't have a dataset that contain many large networks\"}",
"{\"comment\": \"I do not think that Equation (4.1) is as powerful as the 1-WL. Consider the two labeled graphs\\n\\nr -- g\\n| |\\ng -- r\\n\\nand \\n\\nr -- g\\n| |\\nr -- g\\n\\nwith node color \\\"g\\\" and \\\"r\\\". Clearly, the 1-WL can distinguish between these two graphs. Howeover, when using (4.1) with an 1-hot encoding of the labels, both graphs will end up with the same two features. The set of node features will always be the same.\", \"title\": \"Problem with Equation (4.1)\"}",
"{\"comment\": \"There are already larger real-world datasets available, see e.g., [1].\\n\\n[1] http://moleculenet.ai/datasets-1\", \"title\": \"Dataset problem\"}",
"{\"comment\": \"A comment on the dataset. I think current dataset is very limited for evaluating different graph learning algorithms. A new paper showed that using very simple degree statistics already can perform on par with the state-of-the-art graph neural networks and graph kernel. Imagenet Like dataset is strongly needed for evaluating different algorithms fairly.\", \"reference\": \"\", \"a_simple_yet_effective_baseline_for_non_attribute_graph_classification_https\": \"//arxiv.org/abs/1811.03508\", \"title\": \"Dataset problem\"}",
"{\"title\": \"One of the better GNN papers; would improve a lot with more careful discussion/analysis\", \"review\": \"This papers presents an interesting take on Weisfeiler-Lehman-type GNNs, where it shows that a WL-GNNs classification power is related to its ability to represent multisets. The authors show a few exemplar networks where the mean and the max aggregators are unable to distinguish different multisets, thus losing classification power. The paper also proposes averaging the node representation with its neighbors (foregoing the \\u201cconcatenate\\u201d function) and using sum pooling rather than mean pooling as aggregator. All these observations are wrapped up in a GNN, called GIN. The experiments on Table 1 are inconclusive, unfortunately, as the average accuracies of the different methods are often close and there are no confidence intervals and statistical tests to help guide the reader to understand the significance of the results.\\n\\nMy chief concern is equating the Weisfeiler-Lehman test (WL-test) with Weisfeiler-Lehman-type GNNs (WL-GNNs). The WL-test relies on countable set inputs and injective hash functions. Here, the paper is oversimplifying the WL-GNN problem. After the first layer, a WL-GNN is operating on uncountable sets. On uncountable sets, saying that a function is injective does not tells us much about it; we need a measure of how closely packed we find the points in the function\\u2019s image (a measure in measure theory, a density in probability). On countable sets, saying a function is injective tells us much about the function. Moreover, the WL-test hash function does not even need to operate over sets with total or even partial orders. As a neural network, the WL-GNN \\u201chash\\u201d ($f$ in the paper) must operate over a totally ordered set (\\\\mathbb{R}^n, n > 0). Porting the WL-test argument of \\u201cconvergence to unique isomorphic fingerprints\\u201d to a WL-GNN requires a measure-theoretic analysis of the output of the WL-GNN layers, and careful analysis if the total order of the set does not create attractors when they are applied recursively. \\n\\nTo illustrate the above *attractor* point, let\\u2019s consider the construct of Theorem 1 of (Xu et al., 2018), where the WL-GNN \\u201chash\\u201d ($f$) is (roughly) described as the transition probability matrix of a random walk on the input graph. Under well-known conditions, the successive application of this operator (\\\"hash\\\" or transition probability matrix P in this case) can go towards an attractor (the steady state). Here, we need a measure-theoretic analysis of the \\u201chash\\u201d even if it is bijective: random walk mixing. The random walk transition operator can be invertible (bijective), but we still say the random walker will mix, i.e., the walker forgets where it started, even if the transition operation can be perfectly undone by inversion (P^{-1}). In a WL-GNN that only uses the last layer for classification, this would manifest itself as poor performance in a WL-GNN with a large number of layers, and vanishing gradients. Of course, since (Xu et al., 2018) argued to revert back to the framework of (Duvenaud et al., 2015) of using the embeddings of all layers, one can argue that this mixing problem is just a problem of \\u201cwasted computation\\u201d.\\n\\nThe matrix analysis of the last paragraph also points to another potential problem with the sum aggregator. GIN needs to be shallow. 
With ReLU activations the reason is simple: for an adjacency matrix $A$, the value of $A^j$ grows very quickly with $j$ (diverges). With sigmoid activations, GIN would experience vanishing gradients in graphs with high variance in node degrees.\\n\\nThe paper should be careful with oversimplifications. Simplifications are useful for insight but can be dangerous if not prefaced by clear warnings and a good understanding of their limitations. I am not asking for a measure-theoretic analysis revision of the paper (it could be left to a follow-up paper). I am asking for a *relatively long* discussion of the limitations of the analysis.\", \"suggestions_to_strengthen_the_paper\": \"\\u2022\\tPlease address the above concerns.\\n\\u2022\\tTable 1 should have confidence intervals (a statistical analysis of significance would be a welcome bonus).\\n\\u2022\\tPlease mention the classes of graphs where the WL-test cannot distinguish two non-isomorphic graphs. See (Douglas, 2011), (Cai et al., 1992) and (Evdokimov and Ponomarenko, 1999) for the examples. It is important for the WL-GNN literature to keep track of the more fundamental limitations of the method.\\n\\u2022\\t(Hamilton et al, 2017) also uses the LSTM aggregator, besides max aggregator and mean aggregator, which outperforms both max and mean in some tasks. Does the LSTM aggregator also outperforms the sum aggregator in the tasks of Table 1? It is important for the community to know if unusual aggregators (such as the asymmetric LSTM) have some yet-to-be-discovered class-distinguishing power.\\n\\n\\n--------- Update -------\\n\\nThe counter-example in\", \"https\": \"//openreview.net/forum?id=ryGs6iA5Km¬eId=rkl2Q1Qi6X\\nis indeed a problem for Theorem 3 if \\\\{h_v^{(k-1)}, h_u^{(k-1)} : u \\\\in \\\\mathcal{N}_v\\\\} is not a typo for a set of tuples \\\\{(h_v^{(k-1)}, h_u^{(k-1)}) : u \\\\in \\\\mathcal{N}_v\\\\}. Unfortunately, in their proof, the submission states \\\"difficulty in proving this form of aggregation mainly lies in the fact that it does not immediately distinguish the root or central node from its neighbors\\\", which means \\\\{h_v^{(k-1)}, h_u^{(k-1)} : u \\\\in \\\\mathcal{N}_v\\\\} is actually \\\\{h_v^{(k-1)}\\\\} \\\\cup \\\\{ h_u^{(k-1)} : u \\\\in \\\\mathcal{N}_v\\\\}, which is not as powerful as WL. Concatenating is more powerful than the summing the node's own embedding, but it results in a simpler model and could be easier to learn in practice. And I am still concerned about the countable x uncountable domain/image issue I raised in my review.\\n\\nStill, the reviewers seem to be doing all the discussion among themselves, with no input from the authors. I am now following Reviewer 2.\\n\\n----\\n\\nReverting my score to my original score. The authors have addressed most of my concerns, thank you. The restricted theorems and propositions better describe the contribution.\\n\\nI would like to note that while the proof of (Xu et al., 2018) is limited that does not mean it is not applicable to GIN or GraphSAGE or similar models. The paper uses 5 GNN layers, which in my experience is the maximum I could ever use with GNNs without seeing a degradation in performance. I don't think this should be a topic for this paper, though.\\n\\n\\nXu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K., & Jegelka, S. (2018). Representation Learning on Graphs with Jumping Knowledge Networks. In ICML.\\n\\nCai, J. Y., F\\u00fcrer, M., & Immerman, N. (1992). 
An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4), 389-410.\\n\\nDouglas, B. L. (2011). The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211.\\n\\nEvdokimov, S., & Ponomarenko, I. (1999). Isomorphism of coloured graphs with slowly increasing multiplicity of Jordan blocks. Combinatorica, 19(3), 321-333.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
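On the reviewer's depth concern, the growth of $A^j$ is easy to see numerically; the following numpy snippet is our own demo on an arbitrary random graph (not from the paper or the review), comparing the largest entry of $A^j$ against $\lambda_{max}^j$:

```python
# Quick numpy check of the reviewer's point about repeated sum aggregation:
# entries of A^j grow roughly like lambda_max^j for a connected graph.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                   # random undirected graph

lam = max(abs(np.linalg.eigvals(A)))             # spectral radius
for j in [1, 3, 5, 7]:
    print(j, np.abs(np.linalg.matrix_power(A, j)).max(), lam ** j)
# The max entry of A^j and lambda_max^j grow at comparable exponential rates.
```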
"{\"title\": \"Nice results on the expressive power of neighborhood aggregation mechanisms used in GNNs\", \"review\": \"The author study the expressive power of neighborhood aggregation mechanisms used in Graph Neural Networks and relates them to the 1-dimensional Weisfeiler-Lehman heuristic (1-WL) for graph isomorphism testing. The authors show that GCNs with injections acting on the neighborhood features can distinguish the same graphs that can be distinguished by 1-WL. Moreover, they propose a simple GNN layer, namely GIN, that satisfies this property. Moreover, less powerful GNN layers are studied, such as GCN or GraphSage. Their advantages and disadvantages are discussed and it is shown which graph structures they can distinguish. Finally, the paper shows that the GIN layer beats SOTA GNN layers on well-known benchmark datasets from the graph kernel literature.\\n\\nStudying the expressive power of neighborhood aggregation mechanisms is an important contribution to the further development of GCNs. The paper is well-written and easy to follow. The experimental results are well explained and the evaluation is convincing.\\n\\nHowever, I have some concerns regarding the main result in Theorem 3. A consequence of the theorem is that it makes no differences (w.r.t. expressive power) whether one distinguishes the features of the node itself from those of its neighbors. This is remarkable and counterintuitive, but not discussed in the article. However, it is discussed in the proof of Theorem 3 (Appendix) which suggests that the number of iterations must be increased for some graphs in order to obtain the same expressive power. Unfortunately, at this point, the proof is a bit vague. I would like to see a discussion of this differences in the article. This should be clarified in a revised version. \\n----\", \"edit\": \"The counter example posted in a comment ( https://openreview.net/forum?id=ryGs6iA5Km¬eId=rkl2Q1Qi6X¬eId=rkl2Q1Qi6X ) actually shows that my concerns regarding Theorem 3 and its proof were perfectly justified. I agree that the two graphs provide a counterexample to the main result of the paper. Therefore, I have adjusted my rating. I will increase my rating again when the problem can be resolved. However, this appears to be non-trivial.\\n----\\nMoreover, the novelty of the results compared to the related work, e.g., mentioned in the comments, should be pointed out.\", \"some_further_questions_and_remarks\": \"(Q1) Did you use a validation set for evaluation? If not, what kind of stopping criteria did was use?\\n\\n(Q2) You use the universal approximation theorem to prove Theorem 3. Could you please say something about the needed width of the networks?\\n\\n(R1) Could you please provide standard deviations for all experiments. I suspect that the accuracies on the these small datasets fluctuates quite a bit.\\n\\n(R2) In the comments it was already mentioned, that some important related work, e.g., [1], [2], are not mentioned. 
You should address how your work is different from theirs.\", \"minor_remarks\": \"- The colors in Figure 1 are difficult to distinguish\\n\\n\\n\\n[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4703190\\n[2] https://people.csail.mit.edu/taolei/papers/icml17.pdf\\n\\n-------------------\", \"update\": \"Most of the weak points were appropriately addressed by the authors and I have increased my rating accordingly.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Reviewer comment\", \"review\": \"This paper presents a very interesting investigation of the expressive capabilities of graph neural networks, in particular focusing on the discriminative power of such GNN models, i.e. the ability to tell that two inputs are different when they are actually different. The analysis is based on the study of injective representation functions on multisets. This perspective in particular allows the authors to distinguish different aggregation methods, sum, mean and max, as well as to distinguish one layer linear transformations from multi-layer MLPs. Based on the analysis the authors proposed a variant of the GNN called Graph Isomorphism Networks (GINs) that use MLPs instead of linear transformations on each layer, and sum instead of mean or max as the aggregation method, which has the most discriminative power following the analysis. Experiments were done on node classification benchmarks to support the claims.\\n\\nOverall I quite liked this paper. The study of the expressive capabilities of GNNs is a very important problem. Given the popularity of this class of models recently, theoretical analysis for these models is largely missing. Previous attempts at studying the capability of GNNs focus on the function approximation perspective (e.g. Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction by Hertiz et al. which is worth discussing). This paper presents a very different angle focusing on discriminative capabilities. Being able to tell two inputs apart when they are different is obviously just one aspect of representation power, but this paper showed that studying this aspect can already give us some interesting insights.\\n\\nI do feel however that the authors should make it clear that discriminative power is not the only thing we care, and in most applications we are not doing graph isomorphism tests. The ability to tell, for example, how far two inputs are, when they are not the same is also very (and maybe more) important, which such isomorphism / injective map based analysis does not capture at all. In fact the assumption that each feature vector can be mapped to a unique label in {a, b, c, ...} (Section 3 first paragraph) is overly simplistic and only makes sense for analyzing injective maps. If we want to reason anything about the continuity of the features and representations, this assumption does not apply, and the real set is not countable so such a mapping cannot exist.\\n\\nIn equation 4.1 describes the GIN update, which is proposed as \\u201cthe most powerful GNN\\u201d. However, such architecture is not really new, for example the Interaction Networks (Battaglia et al. 2016) already uses sum aggregation and MLP as the building blocks. Also, it is said that in the first iteration a simple sum is enough to implement injective map, this is true for sum, but replacing that with mean and max can lose information very early on. Another MLP on the input features at least for mean or max aggregation for the first iteration is therefore necessary. This isn\\u2019t made very clear in the paper.\\n\\nThe training set results presented in section 6.1 is not very clear. The plots show only one run for each model variant, which run was it? As the purpose is to show that some variants fit well, and some others overfit, these runs should be chosen to optimize training set performance, rather than generalization. 
Also the restrictions should be made clear that all models are given the same (small) amount of hidden units per node. I imagine if the amount of hidden units are allowed to be much bigger, mean and max aggregators should also catch up.\\n\\nAs mentioned earlier I quite liked the paper despite some restrictions anc things to clarify. I would vote for accepting this paper for publication at ICLR.\\n\\n--------\\n\\nConsidering the counter-example given above, I'm lowering my scores a bit. The proof of theorem 3 is less than clear. The proof for the first half of theorem 3 (a) is quite obvious, but the proof for the second half is a bit hand-wavy.\\n\\nIn the worst case, the second half of theorem 3 (a) will be invalid. The most general GNN will then have to use an update function in the form of the first half of 3(a), and all the other analysis still holds. The experiments will need to be rerun.\\n\\n--------\", \"update\": \"the new revision resolved the counter-example issue and I'm mostly happy with it, so my rating was adjusted again.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"More powerful GNNs can better capture discriminative substructures of graphs\", \"comment\": \"As we have pointed out in the experiment section, although stronger discriminative power does not directly imply better generalization, it is reasonable to expect that models that can accurately capture graph structures of interest also perform well on test set. In particular, with many existing GNNs, the discriminative power may not be enough to capture graph substructures that are important for classifying graphs. Therefore, we believe strong discriminative power is generally advantageous for graph classification. In our experiments, we empirically demonstrated that our powerful GIN has better generalization as well as better fitting to training datasets compared to other GNN variants. GINs performed the best in general, and achieved state-of-the-art test accuracy. We leave further theoretical investigation of generalization to our future work.\"}",
"{\"comment\": \"I understand that GIN provably has more discriminative power than other variants of GNN. But the ability to differentiate non-isomorphic graphs does not necessarily imply better graph classification accuracy, right? Would it be possible to strong discriminative power will backfire for the graph classification? After all, we don't need to solve graph isomorphism here.\", \"title\": \"The role of discriminative power for graph classification\"}",
"{\"title\": \"Answers\", \"comment\": \"Thanks for your interest. Answers to your inquiries:\\n\\n1. Note that being powerful entails \\u201cbeing able to\\u201d map nodes with different subtrees to different representations. If a model is not capable of achieving this, then it\\u2019s intrinsically less powerful in distinguishing different graphs. In addition, to combat noise, we can simply regularize the mapping function to be locally smooth (e.g., by using Virtual Adversarial Training [1]). Nonetheless, in many graph classification applications including those in our experiments, the node features have specific meanings (e.g. an atom of certain types) and are not noisy. \\n\\n2. Note that our paper focuses on expressive power of GNNs, and there are two main reasons why it is not very interesting for us to conduct node classification experiments to validate our claim.\\nFirst, as we have emphasized in Section 5 and 5.3, in many node classification applications, node features are rich and diverse (e.g. bag-of-words representation of papers in a citation network), so GNN models like GCN and GraphSAGE are often already able to fit the training data well. Second, many node classification tasks assume limited training labels (semi-supervised learning scenario); thus, the inductive bias of GNNs also plays a key role in empirical performance. For example, as we discussed in Section 5.3, the statistical and distributional information of neighborhood features may provide a strong signal for many node classification tasks. \\n\\nOur GINs may potentially perform well on node classification tasks. However, due to our explanations above, the performance on node classification tasks are less directly explained by our theory of representational power, so we leave the experiments for future work. We believe our experiments on graph classification are sufficient and great for validating our theoretical claim on expressive power of GNNs. \\n\\n3. We set the numbers of hidden units and output units of MLP to be same. So the parameter complexity of Sum-MLP is roughly two times as many as that of Sum-Linear. However, note that with more hidden units, the performance of models with 1-layer perceptrons usually decreases. \\n\\n[1] https://arxiv.org/abs/1704.03976\"}",
"{\"title\": \"Thank you everyone.\", \"comment\": \"We thank everyone for interest and many inquiries about our work.\", \"to_anonymous_3\": \"Thanks for bringing up this related work. Graph representation learning is an increasingly popular research topic with a surge of many wonderful works. We will make sure to add all the relevant references in our updated version. To emphasize the difference with the related work, [5] shows their proposed architecture lies in the RKHS of graph kernels, but does not tell anything about which graphs can actually be discriminated by the network. In contrast, we address the question of which graphs can be distinguished, and provide a framework for addressing this representational question in a general way, settling the representational power of a broad class of GNNs.\"}",
"{\"comment\": \"Hi\\uff01I'm writing to ask some questions.\\n\\n1. In Section 3, you said that \\\"Intuitively, the most powerful GNN maps two nodes to the same location only if they have identical subtrees structures with identical features on the corresponding nodes\\\". However, in my opinion, a powerful model should map nodes with different labels into different locations instead of features, since there may be some noise in features. \\n\\n2. In the paper, you said that GIN is the most powerful model. But you only reported experimental results on graph classification. Have you validated the proposed model on node classification tasks? Based on my understanding, it's also important to consider the performance on node classification when judging the power of a GNN model?\\n\\n3. Instead of Mean/Max aggregators in GCN and GraphSAGE, MLP is used as the aggregator in each layer. Have you compared the parameter complexity with other baselines?\\n\\nThank you!\", \"title\": \"Some questions about the paper\"}",
"{\"comment\": \"Thank you so much for providing possible ideas for future directions! The materials you referenced look very helpful and I will take a look at graph minor theory and spectral graph theory.\", \"title\": \"Thank you so much for the reply!\"}",
"{\"comment\": \"I think you also miss other important related work [5], which shows that the features computed by GNNs lie in the same Hilbert space as WL.\\n\\n\\n[5] https://people.csail.mit.edu/taolei/papers/icml17.pdf\", \"title\": \"More important related work\"}",
"{\"title\": \"Our thoughts.\", \"comment\": \"Thank you for your interest in our work!\\n\\nGreat that you found the framework presented in our paper intuitive/natural for understanding graph representations. We think the spectral perspectives [1] [2] also provide a very valuable and important angle. It would be interesting to understand how to connect and relate the different perspectives. Regarding future directions, besides what we have mentioned in our conclusion, we do not have further comments at this moment. Combining and applying techniques from many other communities indeed sounds very interesting and promising. Ideas from graph minor theory [3] and spectral graph theory [4] [5] may be interesting and are not fully explored in the current message passing frameworks, although we do not have detailed suggestions at the moment.\\n\\n[1] Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. Spectral networks and locally connected networks on graphs. International Conference on Learning Representations (ICLR), 2014.\\n[2] Bronstein, M. Bruna, J., Szlam, A., LeCun, Y. and Vandergyst, P. Geometric Deep Learning: going beyond Euclidean Data IEEE Sig. Proc. Magazine, 2017 \\n[3] https://www.birs.ca/workshops/2008/08w5079/report08w5079.pdf\\n[4] http://www.cs.yale.edu/homes/spielman/561/\\n[5] http://courses.csail.mit.edu/6.S978/\"}",
"{\"comment\": \"Thanks for the thoughtful and provocative work! The paper answered some questions I have been thinking about. Graph convolution that many people talk about was motivated by Fourier transform of graph Laplacian and analogy with computer vision, yet I thought it\\u2019s not quite the same as vision. I was curious what are the more natural explanations. The view of \\u201ccapturing graph structures with powerful aggregators\\u201d sounds much more natural to me and also natural to graphs problems. Very provocative!\\n\\nI wonder what possible good future directions look like for graphs? Many great works these years apply theoretical computer science techniques to machine learning, e.g. Prof Sanjeev Arora group from Princeton and Prof. Aleksander Madry group from MIT. Do you see similar directions for graphs?\", \"title\": \"Thoughtful and provocative work! Future directions?\"}",
"{\"title\": \"We provide a general framework for analyzing and rethinking a large amount of Graph NNs in the literature.\", \"comment\": \"We thank both Anonymous 1 and Anonymous 2 for your interest in our work!\", \"to_anonymous_1\": \"Thanks for bringing up this early work! We will comment on the differences below. We would like to refer to Anonymous 2\\u2019s comment first, which made a very good point.\", \"to_anonymous_2\": \"Thank you for the insightful comment! Indeed, [1] analyzes a specific model with recurrent contraction maps, but our analysis framework applies to general GNNs with message passing/neighbor aggregation. Regarding the connection and differences of contraction, recurrent maps and more general aggregators, the talk/paper by Yujia Li et al [3][4] provide some very good explanations and insights! Highly recommended!\", \"more_detailed_explanations_on_the_differences\": \"1) As Anonymous 2 pointed out, the 2009 paper [1] analyzes a specific architecture designed in [2] that uses contraction maps and the same aggregator in all layers. Although [1] proves [2] can capture rooted subtree structures, it has been observed e.g. in [3][4], that it does not perform ideally in practice, thus leading to the surge of a large amount of modern GNN architectures like Gated GNN, GCN, GraphSAGE etc. Our architecture GIN is shown to perform well in practice. To Anonymous 2: in our preliminary experiments, we also tried sharing the same aggregator across all layers of GNN, but the training accuracy was fairly low (usually < 80%), possibly due to optimization or capacity issues.\\n\\n2) While [1] focuses on the specific GNN in [2], we provide a general framework for characterizing the expressive power of many different GNN variants proposed so far in the literature. Our results are not only applicable to [2], GIN etc. but also applicable to almost all modern GNN architectures like GCNs and GraphSAGE.\\n\\n3) We made an explicit comparison of different GNN variants both theoretically and empirically so that we can have better understandings of their theoretical properties. Specifically, we characterized what graph substructures different aggregation schemes can capture, and discussed how that might affect empirical performance. We also made it clear that injectiveness of the aggregation function is the key to achieving high expressive power in GNNs. \\n\\nTherefore, we believe our work plays an important role in rethinking and structuring the 10-year literature of GNNs from the viewpoint of expressive power, despite some similarity to [1] in terms of capturing rooted subtree structures. We will also discuss [1] and [2] in our updated version.\\n\\n\\n[1] Scarselli, Franco, et al. \\\"Computational capabilities of graph neural networks.\\\" IEEE Transactions on Neural Networks 20.1 (2009): 81-102.\\n[2] Scarselli, Franco, et al. \\\"The graph neural network model.\\\" IEEE Transactions on Neural Networks 20.1 (2009): 61-80.\\n[3] https://www.cs.toronto.edu/~yujiali/files/talks/iclr16_ggnn_talk.pdf\\n[4] Li, Yujia, et al. \\\"Gated graph sequence neural networks.\\\" arXiv preprint arXiv:1511.05493 (2015).\"}",
"{\"comment\": \"I am also curious if there is any connections here. From my understanding, one difference is that Scarselli et al. (2009) focus on a specific type of GNN (with a recurrent contraction aggregator), so the analysis probably doesn't apply to mordern GNN architectures like GCN. On the other hand, this paper provides a general framework that gives insight to a number of GNN architectures.\", \"title\": \"Connections\"}",
"{\"comment\": \"There is an article from 2009 [1] which has a similar theoretical contribution. Could you please comment on the differences.\\n\\n[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4703190\", \"title\": \"Related work, difference to older work\"}",
"{\"title\": \"Reply\", \"comment\": \"You are right; we can simply pick sufficiently large N that is bigger than the size of any graphs of interest. Also, all graphs of our interest are of bounded sizes, and we explicitly stated in our Lemma 4 that we dealt with finite multiset*; thus, your second question does not make sense to us.\\n\\n*https://en.m.wikipedia.org/wiki/Finite_set\"}",
"{\"comment\": \"Could you clarify how you can always find an N that works without an upper bound? My understanding is that N should be at least as large as the largest degree you would encounter in the set of all training + testing graphs, for the function to be injective in all of these graphs. Please correct me if I am wrong.\\n\\nIf the set of training + testing graphs are bounded in size, sure I can pick a large constant for N and that should work. But it's possible the distribution of graphs includes graphs of unbounded size (e.g., number of nodes drawn from a geometric distribution). What N should I pick then? \\n\\nIn practice, of course, all graphs have bounded size and it doesn't matter. But I want to understand what is the precise theoretical statement to be made here.\", \"title\": \"clarification\"}",
"{\"title\": \"The node degrees can be arbitrarily large\", \"comment\": \"Thank you for your interest! The finite node degrees |X| can be arbitrarily large, and we can always find an N that works (we do not have to put an upper bound on N). Note that Lemma 4 only shows the existence of injective functions, and in practice, we need our neural networks to learn these functions from data.\"}",
"{\"comment\": \"The proof of Lemma 4 assumes the graphs have a constant degree bound (|X|<N). Is the statement true even in general (i.e., finite |X|, but not bounded by a constant)? E.g., in inductive setting test graphs could have high degree.\", \"title\": \"regarding lemma 4\"}",
"{\"title\": \"We develop theory to turn \\u201cby accident\\u201d into \\u201ccommon practice\\u201d\", \"comment\": \"That\\u2019s a good observation. Indeed, there are great stuff in the nature possibly found by accident, e.g. rare grasses in Chinese medicine. Here, our goal is to study and develop theory to understand the underlying principles, so that we can appreciate the great stuff, and that in the future, with the insight of our theory, we can build even better graph deep learning models!\"}",
"{\"comment\": \"Thank you for the discussion!\\n\\nI'd like to clarify my point 2) further (it is an observation, not criticism):\", \"assuming_we_have_a_combine_operation_like_described_above\": \"\\\\sigma ( W_1*x + W_2*y + b)\\nIf we now stack n layers (NO weight sharing over time) and assume W_1 = 0 for the first n-1 of them, we arrive exactly at the formulation where we have an MLP with n-1 layers, followed by a normal GNN layer.\", \"the_point_i_wanted_to_make\": \"There are architectures in current literature that already achieve injectivity (maybe by \\\"accident\\\") through this construction. Maybe it can be said: As long as there is an individual W for the self-connection, the condition can be fulfilled through stacking.\", \"examples_are\": \"Defferrard et al.: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016 (individual parameter for k=0 neighbourhood)\\nGilmer et al.: Neural Message Passing for Quantum Chemistry, 2017 (depending on implementation, i guess)\", \"title\": \"Clarification\"}",
"{\"title\": \"Thanks for the discussion!\", \"comment\": \"Thank you for your interest and positive comments on our work! Let us try to answer your questions. There are many GNN formulations. So it is always interesting to understand the power of different variants!\\n\\n1) Thanks for this insightful comment! With sufficiently large dimensionality of output units, ReLU with bias might indeed be able to distinguish different multisets (larger output dimensionality is generally needed as we have more multisets to distinguish). In our experiments, we actually had the bias term, and we empirically observed that under-fitting still sometimes occurred for models with 1-layer perceptrons (with bias) (see Figure 4). We think it could be due to the limited number of output units or optimization.\\n\\nWe would like to emphasize that with MLPs, we can enjoy universal approximation of multiset functions. This allows Sum-MLP (GIN) to go beyond just distinguishing different multisets and to learn suitable representations that are useful for applications of interest. In fact, Sum-MLP outperformed Sum-1-layer in 7 out of 9 datasets (comparable in the other 2) in terms of test accuracy!\\n\\nWe will further discuss these points and practical implications in our updated version.\\n\\n2) There can certainly be other GNN architectures with the same discriminative power as GIN (as long as they satisfy conditions in our Theorem 3). Your proposed formulation with COMBINE could potentially also work, although we do not fully understand your description. It would be great future work to investigate other powerful GNN models with potentially better generalization and optimization.\\n\\n3) (2.2) is indeed not exactly the same as the original GCN. Our emphasis here was that MEAN aggregation was used in GCN. We used the formulation (2.2) to share the same framework with GraphSAGE (MAX aggregation) to save space. We will include the exact formulation of GCN in the updated version. Also, we mentioned after (2.2) that GCN does not have a COMBINE step and aggregates a node along with its neighbors.\"}",
"{\"comment\": \"Thank you for this very interesting work which gives a lot of insight into graph neural networks and structures the large amount of related work out there.\\n\\nI have some remarks/opinions regarding the use of non-linearities in this work.\\n\\n1) Regarding section 5.1 and lemma 5: I do not think that more than 1 layer is necessary. The ReLU non-linearity does only show its full potential when used together with a bias. In most literature, the bias term is (unfortunately) omitted in the paper but still used in the implementation. ReLU without bias separates based on a hyperplane which always goes through the origin, which is why the example in the proof of Lemma 5 works. All values lie in one piece-wise linear subspace of the functions range. When using a bias, the non-linear point can be shifted to separate both examples in a non-linear fashion and the example that proves Lemma 5 does not work anymore. I am not sure though if there is another example that works if a bias is present. I suspect though, that one layer with a \\\"working non-linearity\\\", e.g. ReLU with bias, should be enough.\\n\\nTherefore, I guess the insight here is: We need a (working) non-linear mapping before doing the feature aggregation (assuming no one-hot encoding), otherwise, we lose injectivity and therefore, discriminative power. In many current GNN models (including GCNs), this is not the case.\\n\\n2) Further, I suspect that depending on how the COMBINE operation is defined, the discriminative power of WL can also be obtained by stacking 2 layers in the following way: \\nAssuming COMBINE to be \\\\sigma ( W_1*x + W_2*y + b), with x being the result of neighbourhood aggregation and y the last current node feature. Further, in the first layer, let the features from the neighbourhood aggregation get discarded (W_1 = 0), resulting in a node-wise fully connected layer with nonlinearity (or \\\"1x1-convolution\\\" or however it might be called). \\nThen, the second layer receives features which went through a non-linear function before aggregation. Since the network could learn W_1 = 0, those two layers should have the same discriminative power as the WL.\\n\\n3) I think the formulation of GCN in Equation 2.2 is not correct. The original GCN aggregates first and applies the non-linearity afterwards. \\nIt should be noted that since GCN does not have individual W's for the root node and the neighbourhood (W_1 and W_2 in the equation above) the mentioned construction from 2) does not work here.\", \"title\": \"Remarks regarding the use of ReLU as non-linearity\"}",
"{\"title\": \"Because GIN can capture similarity between different subtrees.\", \"comment\": \"Thanks for your questions!\\n\\nAs we mentioned in Section 4 right after Theorem 3, GIN generalizes the WL graph isomorphism test by learning to embed the subtrees to continuous space. This enables GIN to not only discriminate different structures, but also to learn to map similar graph structures to similar embeddings and capture dependencies between graph structures. Such learned embeddings are particularly helpful for generalization when the co-occurrence of subtrees is sparse across different graphs or there are noisy edges (Yanardag & Vishwanathan, 2015).\\n\\nRegarding the dataset, we did not try reddit-12K at this moment.\"}",
"{\"comment\": \"Since the GIN is developed to achieve as strong expressive power as WL graph isomorphism test, why does it still has much better result on reddit-binary and reddit-5K than WL subtree Kernel? Do you also tried on larger dataset such as reddit-12K?\", \"title\": \"So why GIN still outperforms WL kernel on some dataset?\"}"
]
} |
|
HJej6jR5Fm | Meta-Learning to Guide Segmentation | [
"Kate Rakelly*",
"Evan Shelhamer*",
"Trevor Darrell",
"Alexei A. Efros",
"Sergey Levine"
] | There are myriad kinds of segmentation, and ultimately the `"right" segmentation of a given scene is in the eye of the annotator. Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation. As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally (within an image) and non-locally (across images). We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes (categories, instances, etc.) of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning. To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes. To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances. Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision. | [
"meta-learning",
"few-shot learning",
"visual segmentation"
] | https://openreview.net/pdf?id=HJej6jR5Fm | https://openreview.net/forum?id=HJej6jR5Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1e-IYtNeE",
"rJemqdpaRm",
"rkx0CjV96Q",
"HJlEPLVqpQ",
"H1g_Df45aX",
"SygL0b496Q",
"HkxNQtn937",
"HkxrUnYc2Q",
"S1lxeZrq37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545013577490,
1543522443042,
1542241238378,
1542239836046,
1542238815684,
1542238669741,
1541224732433,
1541213260765,
1541193960241
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper834/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper834/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper834/Authors"
],
[
"ICLR.cc/2019/Conference/Paper834/Authors"
],
[
"ICLR.cc/2019/Conference/Paper834/Authors"
],
[
"ICLR.cc/2019/Conference/Paper834/Authors"
],
[
"ICLR.cc/2019/Conference/Paper834/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper834/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper834/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Paper proposes a meta-learning approach to interactive segmentation. After the author response, R2 and R3 recommend rejecting this paper citing concerns of limited novelty and insufficient experimental evaluation (given the popularity of this topic in computer vision). R1 does not seem be familiar with the extensive literature on interactive segmentation and their positive recommendation has been discounted. The AC finds no basis for accepting this paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Experimental Setting\", \"comment\": \"Thank you for the detailed explanation about the experimental setting and clarification.\\nHowever, I'm still not convinced whether proposed model could learn diverse user's intent in case of interactive image segmentation.\\nI think there should be significant amount of ambiguity given few sparse guidance for segmentation.\\nFor example, in case guidance is given on the chest of a person in a image, does it mean every person in the image should be segmented? only one person should be segmented? t-shirt should be segmented? or some region in a chest with similar pattern should be segmented. \\nIn the experimental setting, it is hard to see how the proposed method is dealing with this ambiguity, and whether model's output is biased to class label in case guidance is clearly point some reason of image.\\nFor example, guidance is pointing glasses on a face, does the model correctly segment glasses only? or does it segment whole face, or person.\\nI think these perspectives cannot be analyses just by comparing overall numbers with simple baselines.\\nI would recommend to present more analysis on the characteristics of the model by showing qualitative examples showing that the model correctly handle ambiguity and not bias. \\nOne potential way to quantitatively evaluate this aspect would be using segmentation dataset containing classes with different granularity. \\nFor example, one can repurpose MSCOCO dataset and include additional classes including subset of classes (e.g vegetable, food, fruit , etc) and designing an setting that ambiguity should be resolved. \\nI believe PASCAL VOC is somewhat limited to evaluate model's behaviour on such cases properly.\\n\\nI still believe this type of detailed analysis of the algorithm is essential to give acceptance to this paper.\"}",
"{\"title\": \"contributions, results metric, and interpretation of experiments\", \"comment\": \"Thank you for the review and the attention to our architecture and results. Here we detail how our architectural choices lead to key differences from prior work, clarify the metric in our experiments, and discuss the interpretation of our experiments. We would appreciate if the reviewer can comment on how these points affect their views on the novelty, strength of results, and interpretation of our work and reconsider their rating.\\n\\n> the only differences with the referenced paper (Shaban et al., 2017) is how the support is fused and how multiple guidance could be handled, which can be done by averaging.\\n\\nOur work differs in architecture, optimization, and scope.\\n\\nFor architecture, we factorize the approach into (1) extracting the task representation and (2) guiding inference by the representation: this takes the form of our novel late fusion architecture in contrast to the early fusion of Shaban et al. While this difference might appear minor, it has several important consequences. Late fusion allows for parameter sharing between guide and inference branches that makes optimization converge sooner. Given new support annotations, inference by our model updates an order of magnitude faster because only the late stage is recomputed, unlike the full recomputation of the net required by early fusion. For multi-class segmentation, our method only requires a single pass to compute a guide for each class, while Shaban et al. inefficiently require a forward pass per class since their early fusion is only defined for binary tasks.\\n\\nFor optimization we meta-train on sparse annotations, not dense, and do not require per-branch learning rate tuning (since our parameters are shared). For tasks with sparsely labeled supports, we achieve an an accuracy improvement of ~50% relative over Shaban et al. for only two points per image; see Figure 5 (right).\\n\\nFor scope, we formulate the more general problem of guided segmentation, and we agree with the reviewer that \\\"learning a single segmentation algorithm to solve various segmentation problem is an interesting problem that worth exploring.\\\" However Shaban et al. restrict their scope to one-shot semantic segmentation from densely labeled support. We hope that a unified meta-learning framework for varied types of segmentation leads to further progress on the accuracy of such methods over those that require more specialization.\\n\\n> absolute performance looks bad compared to existing algorithms exploiting prior knowledge for each of the tasks\\n> 0.45 IOU in PASCAL VOC dataset, while many existing algorithms could achieve more than 0.8 mean IOU in this dataset\\n\\nPlease note that we score all methods with positive IU for consistency across tasks (section 5, paragraph 2), which is not equivalent to class-wise mean IU! We will further highlight and explain our choice of metric in the revision to resolve the commented on confusion, thank you.\\n\\nOur reported 0.45 positive IU oracle for few-shot semantic segmentation corresponds to 0.62 mean IU, which is expected for our FCN architecture based on VGG-16. The referred to methods that score more than 0.8 mean IU on PASCAL VOC require outside segmentation data, deeper architectures, longer optimization schedules, aggressive data augmentation, test-time post-processing, and more. 
These extensions are orthogonal to our scientific question comparing our general meta-learning method with the specialized methods for each of the tasks in a common experimental framework with the same base architecture.\\n\\n> I question whether foreground / background baseline is reasonable baseline for all these tasks\\n\\nThe foreground-background baseline is surprisingly strong for video because DAVIS clips are biased towards containing one salient object per frame. To reduce the severity of this issue, our work evalutes on DAVIS'17 (Section 5.1, Figure 6) which includes some multi-object tasks instead of the simpler DAVIS'16.\\n\\n> In 4.3, authors argue that the model is trained with S=1, but could operate with different (S, P)\\n> increasing S does not necessarily increase the performance.\\n\\nWhile the semantic-trained guided segmentor struggles to effectively aggregate larger supports, the instance-trained segmentor performs better with increasing S; see Section 5.2 for a discussion. Likewise our guided segmentor for video object segmentation improves with increasing S; see Section 5.1, Figure 6 (right).\\n\\n> In 5.3, this paper investigated whether the model trained with instances could be used for semantic segmentation.\\n> but this might be just because there are many images with single instance in each image\\n\\nIt\\u2019s true that PASCAL includes many images containing a single class. However, the semantic-guided instance-trained segmentor significantly outperforms the foreground-background baseline, which should do just as well on single-class images and single-instance images, so the accuracy of our guided segmentor cannot be entirely explained away by these kinds of images.\"}",
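To illustrate the metric distinction made in this rebuttal, a small sketch with made-up 2x2 masks; the definitions follow the standard IoU formula, with "positive IU" scoring the foreground class only as the rebuttal describes.

```python
import numpy as np

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1], [0, 0]], dtype=bool)  # made-up prediction mask
gt = np.array([[1, 0], [0, 0]], dtype=bool)    # made-up ground-truth mask

print("positive IU:", iou(pred, gt))                          # foreground only
print("mean IU:", np.mean([iou(pred, gt), iou(~pred, ~gt)]))  # also averages in background
```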
"{\"title\": \"meta-learning setting, method novelty, and comparisons (2/2)\", \"comment\": \"> it is not clear what the problem setting of this paper is, as it seems to have two sets of training data of fully-annotated images (for training) and the combined set of point-wise annotated images and unannotated images (guidance images)\\n\\nOur problem setting is meta-learning for segmentation. Meta-learning seeks to learn a learning algorithm that can learn a new task, often from little supervision. In our case, a task consists of a support set of (sparsely) labeled images and a query set of unlabeled images to be segmented. In the standard terminology of few-shot learning, the \\\"point-wise annotated images\\\" are the labeled supports and the \\\"unannotated images\\\" are the queries to be segmented according to the labeled support. \\n\\nWe divide the set of tasks into sets for meta-training and meta-testing. We optimize the parameters of our model to perform learning on tasks drawn from the meta-training set, and evaluate on tasks drawn from meta-test. For our guided nets, learning a task corresponds to inference in the model, which we call guidance: extracting the task representation from the supports and guided inference to segment the queries. Meta-training optimizes the model parameters to improve guidance, and once meta-training is complete the model parameters are fixed and only the task representation changes as a function of the support. For meta-testing we evaluate on heldout instances, classes, or videos in our interactive, semantic, and video object segmentation results respectively. \\n\\n> It is not clear whether authors generate the second dataset out of the first one, or they have separate datasets for these two.\\n\\nThis kind of dataset division is a common approach to few-shot learning for image classification (e.g., Omniglot from Lake et al. 2015, miniImageNet from Vinyals et al. 2016) that we adapt to pixel-wise tasks.\\n\\nWe generate our sparse meta-learning datasets from the standard, fully-annotated segmentation datasets by sampling different tasks (e.g., segment a particular bear in all the frames of the video) and subsampling the annotations. A task consists of a support set of (sparsely) labeled images and a query set of unlabeled images to be segmented. Tasks are synthesized from a densely labeled dataset such as PASCAL by binarizing and sparsifying dense masks, as illustrated in Section 4.3 Figure 4. During training, the query set is given as input to the model without labels, and the dense ground truth labels for the query set used to define the loss. We are revising section 4.3 to clearly explain this process.\\n\\n> it is not clear how the authors incorporate the unannotated images for training (guidance images)\", \"our_method_is_trained_by_meta_learning_through_episodic_optimization\": \"during meta-training, the unnannotated images are given as queries to be segmented by the model, the model infers an output segmentation, and these are compared against the true segmentation of the queries (known only during meta-training). Please see figure 4 and section 4.3. Are queries what was meant by guidance images?\\n\\n> what is the dataset used for the evaluation of the first paragraph in section 5.1? 
How do you split the Pascal VOC data to exclusive sets?\\n\\nThe dataset used in the first paragraph of Section 5.1 is PASCAL VOC/SBD, as used in Xu et al., which we compare against (we are correcting this omission in a revision of the text\\u2014thank you for noticing it). For few-shot semantic segmentation, we follow the experimental protocol of Shaban et al., as stated in the second to last paragraph of Section 5.1, which tests few-shot performance on held-out classes by dividing the 20 classes of PASCAL into 4 sets of 5, then reports the average performance across these sets for the 5 held-out classes after training on the remaining 15. Images that contain both held-out and training classes are placed in the held-out set.\\n\\n> How do you sample point-wise annotation from dense mask labels? How does the sampling procedure affect the performance? \\n\\nThe dense ground truth labels are sparsified via uniform random sampling. We found random sampling to perform about equal to more complex sampling strategies explored in previous work, such as Xu et al. 2016. We are adding these details to the paper appendix.\\n\\n> The performance of the guided semantic segmentation is also quite low\\n\\nThe performance of our method is in some cases lower than the performance of task-specific methods (video object segmentation and 5-shot semantic segmentation). However, a main contribution of our work is to present a first general meta-learning framework for structured output tasks. A compensating advantage of our proposed late fusion architecture is that it is quicker to update than Shaban et al. and Caelles et al., making it more practical for interactive use.\\n\\n> Please revise the notations in equations.\\n\\nThank you for noticing these typsetting errors! We are correcting them in a revision of the text.\"}",
"{\"title\": \"meta-learning setting, method novelty, and comparisons (1/2)\", \"comment\": \"Thank you for your review, especially the comments regarding the clarity of the method description and experimental setting, which are helping us to revise the text to depend less on familiarity with meta-learning approaches and few-shot learning setups. In the meantime we offer clarifications here, and in particular address the problem setting, architecture and optimization novelty, and experimental comparisons. We will make a follow-up post once the revision is uploaded. Please let us know if the method and experiments are now clear, and how these details impact your evaluation of the submission's originality, significance, and experiments.\\n\\n> This paper proposed a few-shot learning approach for interactive segmentation\\n\\nWe would like to clarify that our work is an extension of interactive segmentation. Our meta-learning learning approach, guided segmentation, generalizes the usual problem statement of interactive segmentation. Given an image with partial annotations, an interactive segmentor fully segments that image, but it cannot segment a new image without any annotations. That is, for an interactive segmentor, annotations on one image do not inform the segmentation of another image. On the other hand, our guided segmentor extracts a latent representation of the pixel-wise annotations and conditions on it to inform the segmentation of all images, and additional annotations on any image affect the segmentation of all of them.\\n\\n> I do not see many novel contributions in terms of both network architecture and learning perspective.\\n\\n\\nPrior work is limited to binary segmentation of a single image (interactive segmentation by Xu et al. 2016), two-class tasks supervised by dense annotations from a single image (one-shot semantic segmentation by Shaban et al. 2017), and slow optimization that fails for sparse annotations (video object segmentation through fine-tuning by Caelles et al. 2017). Our novel choices for architecture and optimization are key to addressing these issues:\\n\\n- Our novel late-fusion architecture (Section 4.1 and Figure 3) is necessary for efficient representation and segmentation from annotations that are multi-shot (multi-image, multi-pixel) and multi-way (multi-class). Xu et al. and Shaban et al., with their early fusion architectures, are limited to one image and two classes at a time. When annotations change, they must re-compute the entire network as the annotations are fused early at the input, while we update in constant time w.r.t. the full network time since only the late stage is re-computed. For multi-class segmentation, our model simply and efficiently fuses shared image features with the annotations for each class (end of Section 4.1), while Xu et al. and Shaban et al. inefficiently have to do a forward pass for each class.\\n- With optimization by meta-learning, our model learns to handle sparse annotations that the Caelles et al. approach of optimization by fine-tuning fails on. While Shaban et al. 
likewise optimize by meta-learning, they require dense annotations, and we show more than 50% relative improvement for accuracy in the sparse regime.\\n- Our novel contributions to meta-learning optimization (Section 4.3) are (1) sampling tasks with different shot (number of labels) and way (number of classes) per episode of optimization for better generalization to different amounts of supervision and (2) investigating transfer learning when meta-learning one kind of task, instances, then meta-testing on a different kind of task, semantics.\\n\\nFor novelty in experiments, our work is the first to show results on this set of tasks with a unified model.\\n\\n> The method is compared against only a few (not popular) interactive segmentation methods, although there exist many recent works addressing the same task (e.g. Xu et al. 2016)\\n\\nFor comparison we chose popular, state-of-the-art at publication methods: DIOS (Xu et al. 2016) for interactive segmentation and OSVOS (Caelles et al.) for video object segmentation. To the best of our knowledge Shaban et al. 2017 is the first and only few-shot semantic segmentor prior to our work. Furthermore, these methods were chosen for fair comparison since their architectures and ours are all derived from a VGG-16 backbone and are free from confounding differences in post-processing, data augmentation, and so forth. Our work shows results on few-shot semantic segmentation, video object segmentation, and interactive instance segmentation (as mentioned above, guided segmentation is not simply interactive segmentation, as evidenced by this set of tasks).\\n\\nWe ask that the reviewer please be specific about alternative comparisons.\"}",
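A minimal sketch of the guide/inference factorization described in this rebuttal, under our own simplifying assumptions (masked average pooling into the task representation z and a dot-product late fusion; the paper's exact operators may differ):

```python
import torch

def guide(feats_s, labels_s):
    # feats_s: (S, C, H, W) support features; labels_s: (S, 1, H, W) sparse 0/1 labels.
    masked = feats_s * labels_s
    z = masked.sum(dim=(0, 2, 3)) / labels_s.sum().clamp(min=1)  # task code z: (C,)
    return z

def segment(feats_q, z):
    # Late fusion: correlate query features with z, then squash to a soft mask.
    score = (feats_q * z.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
    return torch.sigmoid(score)

# When new annotations arrive, only guide() and the cheap fusion above are
# recomputed; the backbone features of the query images are reused as-is.
```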
"{\"title\": \"related work, interactive segmentation, and our latent task representation z\", \"comment\": \"Thank you for the review and your enthusiasm for applying few-shot learning to richer visual tasks like segmentation! We provide a few clarifications and address the questions listed in your review. Given our response here, we would appreciate it if you could comment further regarding\\n\\n- novelty with respect to the one existing few-shot segmentation method we cite\\n- clarity of our figure summarizing interactive segmentation and the other segmentation tasks we address (Figure 2)\\n\\nWe agree that few-shot learning need not be limited to image classification and should address higher-level tasks such as different types of segmentation as we show in this work. We hope that our work inspires more progress on few-shot learning for structured output tasks for which labels are even more costly and scarce than image-level supervision.\\n\\nOur work is not the first to consider few-shot learning for structured output, but we do significantly generalize the problem scope and extend the approach. Shaban et al. (2017) consider one-shot semantic segmentation. We consider a wider range of tasks (instance, semantic, and video object segmentation), experiment with varying shot and way (from one-shot to 1000+ shot and 2-20 way) beyond the prior 1-5 shot and fixed 2-way of Shaban et al., and propose a novel late fusion architecture (that is faster to update during inference).\\n\\n> what is interactive segmentation?\\n\\nInteractive segmentation is the task of inferring dense segmentation masks from sparse pixel-wise labels within the same image (see middle panel of Figure 2 and our references Kass et al. 1998, Boykov and Jolly 2001, and Xu et al. 2016). Guided segmentation is our extension to interactive segmentation that can propagate pixel labels across images and not just within images. Guided segmentation is necessary to (1) cumulatively incorporate labels across inputs to keep improving the segmentation and (2) increase data efficiency by not requiring annotations on every input.\\n\\n> is there any constraint on z? Like Gaussian distributions like what z is like in VAE models\\n\\nz is the latent task encoding extracted by the guide branch g (see Figure 1 and Sections 4 & 4.1). We do not enforce a distribution over z, although this is a possible extension of our work for regularization or sampling diverse segmentations. We are revising the text to make it clear that there is no constraint on the value of z.\"}",
"{\"title\": \"review\", \"review\": \"To my knowledge, this paper is probably the first one to apply few-shot learning concept into high-level computer vision tasks. In this paper's sense, segmentation. It proposes a general framework to few from the very few sample, extract a latent representation z, and apply it to do segmentation on a query. Cases of semantic, interactive and video segmentation are applied. Experiments are very thorough.\\n\\nWe see too many variants of few-shot learning papers on mini-imagenet or omniglot. For the reason of applying to high-level segmentation, the paper already deserves an acceptance for the first work. I believe this work would inspire many follow-ups in related domain (especially for high-level vision tasks)\", \"comments\": [\"what is interactive segmentation? I looked through the related work, it just mentioned some previous work without defining or describing it.\", \"z is the network output of g? is there any constraint on z? Like Gaussian distributions like what z is like in VAE models.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Unclear presentations, limited novelty.\", \"review\": \"Summary:\\nThis paper proposed a few-shot learning approach for interactive segmentation. Given a set of user-annotated points, the proposed model learns to generate dense segmentation masks of objects. To incorporate the point-wise annotation, the guidance network is introduced. The proposed idea is applied to guided image segmentation, semantic segmentation, and video segmentation.\", \"clarity\": \"Overall, the presentation of the paper can be significantly improved. First of all, it is not clear what the problem setting of this paper is, as it seems to have two sets of training data of fully-annotated images (for training) and the combined set of point-wise annotated images and unannotated images (guidance images T in the first equation); It is not clear whether authors generate the second dataset out of the first one, or they have separate datasets for these two. Also, it is not clear how the authors incorporate the unannotated images for training. \\n\\nThe descriptions on model architecture are also not quite clear, as it involves two components (g and f) but start discussing with g without providing a clear overview of the combined model (I would suggest changing the order of Section 4.1 and Section 4.2 to make it clearer). The loss functions are introduced in the last part of the method, which makes it also very difficult to understand.\", \"originality_and_significance\": \"The technical contribution of the paper is very limited. I do not see many novel contributions in terms of both network architecture and learning perspective.\", \"experiment\": \"Overall, I am not quite convinced with the experiment results. The method is compared against only a few (not popular) interactive segmentation methods, although there exist many recent works addressing the same task (e.g. Xu et al. 2016). \\n\\nThe experiment settings are also not clearly presented. For instance, what is the dataset used for the evaluation of the first paragraph in section 5.1? How do you split the Pascal VOC data to exclusive sets? How do you sample point-wise annotation from dense mask labels? How does the sampling procedure affect the performance? \\n\\nThe performance of the guided semantic segmentation is also quite low, limiting the practical usefulness of the method. Finally, the paper does not present qualitative results, which are essential to understanding the performance of the segmentation system.\", \"minor_comments\": \"1. There are a lot of grammar issues. Please revise your draft.\\n2. Please revise the notations in equations. For instance, \\n T = {{(x_1, L_1),...} \\\\cup {\\\\bar{x}_1,...}\\n L_s = {(p_j,l_j):j\\\\in{1,...,P}, l\\\\in{1,...,K}\\\\cup{\\\\emptyset}}\\n Also, in the next equation, j\\\\in\\\\bar{x}_q} -> p_ j\\\\in\\\\bar{x}_q} (j is an index of pixel)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Incremental idea and weak analysis\", \"review\": \"Summary\\nThis paper proposes to formulate diverse segmentation problems as a guided segmentation, whose task is defined by the guiding annotations.\\nThe main idea of this paper is using meta-learning to train a single neural network performing guidance segmentation.\\nSpecifically, they encode S annotated support image into a task representation and use it to perform binary segmentation.\\nBy performing episodic optimisation, the model's guidance to segmentation output is defined by the task distribution.\\n\\nStrength\\nLearning a single segmentation algorithm to solve various segmentation problem is an interesting problem that worth exploring.\\nThis paper tackles this problem and showed results on various segmentation problems.\\n\\nWeakness\\nThe proposed method, including the architecture and training strategy, is relatively simple and very closely related to existing approach. Especially, the only differences with the referenced paper (Shaban et al., 2017) is how the support is fused and how multiple guidance could be handled, which can be done by averaging. These differences are relatively minor, so I question the novelty of this paper.\\n\\nThis paper performs experiments on diverse tasks but the method is compared with relatively weak baselines absolute performance looks bad compared to existing algorithms exploiting prior knowledge for each of the tasks.\\nFor example, the oracle performance in semantic segmentation (fully supervised method) is 0.45 IOU in PASCAL VOC dataset, while many existing algorithms could achieve more than 0.8 mean IOU in this dataset. \\nIn addition, I question whether foreground / background baseline is reasonable baseline for all these tasks, because a little domain knowledge might already give very strong result on various segmentation tasks.\\nFor example, in terms of video segmentation, one trivial baseline might include propagating ground truth labels in the first frame with color and spatial location similarity, which might be already stronger than the foreground / background baseline.\\n\\nThere are some strong arguments that require further justification. \\n- In 4.3, authors argue that the model is trained with S=1, but could operate with different (S, P).\\nHowever, it's suspicious whether this would be really true, because it requires generalisation to out-of-distribution examples, which is very difficult machine learning problem. The performance in Figure 5 (right) might support the difficulty of this generalisation, because increasing S does not necessarily increase the performance.\\n- In 5.3, this paper investigated whether the model trained with instances could be used for semantic segmentation. I think performing semantic segmentation with model trained for instance segmentation in the same dataset might show reasonable performance, but this might be just because there are many images with single instance in each image and because instance annotations in this dataset are based on semantic classes. So the argument that training with instance segmentation lead to semantic segmentation should be more carefully made.\\n\\nOverall comment\\nI believe the method proposed in this paper is rather incremental and analysis is not supporting the main arguments of this paper and strength of the proposed method. 
\\nEspecially, simple performance comparison with weak baselines give no clues about the property of the method and advantage of using this method compared to other existing approaches.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r1espiA9YQ | Towards More Theoretically-Grounded Particle Optimization Sampling for Deep Learning | [
"Jianyi Zhang",
"Ruiyi Zhang",
"Changyou Chen"
] | Many deep-learning based methods such as Bayesian deep learning (DL) and deep reinforcement learning (RL) have heavily relied on the ability of a model being able to efficiently explore via Bayesian sampling. Particle-optimization sampling (POS) is a recently developed technique to generate high-quality samples from a target distribution by iteratively updating a set of interactive particles, with a representative algorithm the Stein variational gradient descent (SVGD). Though obtaining significant empirical success, the {\em non-asymptotic} convergence behavior of SVGD remains unknown. In this paper, we generalize POS to a stochastic setting by injecting random noise in particle updates, called stochastic particle-optimization sampling (SPOS). Notably, for the first time, we develop {\em non-asymptotic convergence theory} for the SPOS framework, characterizing convergence of a sample approximation w.r.t.\! the number of particles and iterations under both convex- and nonconvex-energy-function settings. Interestingly, we provide theoretical understanding of a pitfall of SVGD that can be avoided in the proposed SPOS framework, {\it i.e.}, particles tend to collapse to a local mode in SVGD under some particular conditions. Our theory is based on the analysis of nonlinear stochastic differential equations, which serves as an extension and a complementary development to the asymptotic convergence theory for SVGD such as (Liu, 2017). With such theoretical guarantees, SPOS can be safely and effectively applied on both Bayesian DL and deep RL tasks. Extensive results demonstrate the effectiveness of our proposed framework. | [
"svgd",
"particle optimization",
"sampling",
"pos",
"spos",
"spos framework",
"particles",
"towards",
"deep learning towards",
"deep"
] | https://openreview.net/pdf?id=r1espiA9YQ | https://openreview.net/forum?id=r1espiA9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkgvUTqegV",
"HJgjRK4A07",
"BkxGR-FtpX",
"rygw9m34TX",
"rJxZJF1m6X",
"S1ll281mTm",
"HkevVLJ7aX",
"SygunrymTQ",
"BygxVSkmTm",
"r1eg4Z1X6X",
"HyxLEekQ6Q",
"B1gr88ge6Q",
"ryeRvH30h7",
"HyegQHwhn7",
"SkeZkIz537",
"HyxzqCPFhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544756559408,
1543551443044,
1542193609783,
1541878671380,
1541761241210,
1541760679570,
1541760559451,
1541760431687,
1541760296424,
1541759271794,
1541759021913,
1541568076638,
1541485926091,
1541334296314,
1541182936970,
1541140106213
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper832/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/Authors"
],
[
"ICLR.cc/2019/Conference/Paper832/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper832/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper832/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a combination of SVGD and SLGD and analyzes its non-asymptotic properties based on gradient flow. This is an interesting direction to explore. Unfortunately, two major concerns have been raised regarding this paper: 1) the reviewers identified multiple technical flaws. Authors provided rebuttal and addressed some of the problems. But the reviewers think it requires significantly more improvement and clarification to fully address the issues. 2) the motivation of the combination of SVGD and SLGD, despite of being very interesting, is not very clearly motivated; by combining SVGD and SLGD, one get convergence rate for free from the SLGD part, but not much insight is shed on the SVGD part (meaning if the contribution of SLGD is zero, then the bound because vacuum). This could be misleading given that one of the claimed contribution is non-asymptotic theory of ''SVGD-style algorithms\\\" (rather than SLGD style..). We encourage the authors to addresses the technical questions and clarify the contribution and motivation of the paper in revision for future submissions.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting paper but Improvement and Clarification are needed\"}",
"{\"title\": \"thanks for your reply\", \"comment\": \"Q: what is really the point of Section 6 then?\", \"a\": \"We try to explain the proof a bit here, hopefully can make it more clear to you.\\n\\nIf you look at the long equation between the Equation (20) and Equation (21)\\uff0c \\u201c-\\\\nabla_{\\\\thetab} \\\\cdot (\\\\mu_tF(\\\\thetab_t) + (\\\\mathcal{K}*\\\\mu_{t})\\\\mu_{t}t)\\n\\\\triangleq \\n-\\\\partial_{\\\\thetab}(Gf)(\\\\thetab, t).\\n\\nIt is worth noting that we use \\u201c\\\\triangleq\\u201d here, which means this is a definition. Notice that we also have mentioned \\u201c\\u200bwhere $f(\\\\thetab, t) = \\\\mu_t$\\u201d just behind this long equation, it is obvious to find that $G=F+\\\\mathcal{K}*\\\\mu_{t}$. That is the reason why we said \\u201cG is defined in the equation below eq.20, which is related to LHS of eq.7\\u201d in our first rebuttal.\\n \\nYes, \\u201cthe particles are interacting\\u201d as you mentioned. However, it does not mean our proof is wrong. It is just the reason why $G=F+\\\\mathcal{K}*\\\\mu_{t}$, which means G is not independent of $\\\\mu_{t}$. The fact that \\u201cG is related to $\\\\mu_{t}\\u201d just shows that \\u201cthe particles are interacting\\u201d. \\n \\nMoreover, please notice the result in Theorem 2 is derived through \\u201cparticle approximation\\u201d, which has been mentioned both in the statement of Theorem 2 and the proof. This means the $\\\\mu_{t}$ in G is approximated by $\\\\frac{1}{M}\\\\sum_{i=1}^M\\\\delta_{(\\\\thetab_t^{(i)})}(\\\\thetab)$. By combing the knowledge you think is obvious in your original review, we believe you can reach the result of Theorem 2 with the equation $\\\\mathrm{d}\\\\thetab_t^{(i)} = G(\\\\thetab_t^{(i)}) \\\\mathrm{d}t$ in the end of our proof .\", \"q\": \"The \\\"proof\\\" of Theorem 2 is extremely sloppy...\"}",
"{\"title\": \"Respectfully, we hope to make the following points\", \"comment\": \"Thank you for your response. We apologize for the previous long rebuttal. Nonetheless, we didn\\u2019t mean to write an \\u201cunprofessional\\u201d rebuttal, but hope to provide all the details and to solve possible doubts one might encounter when reading the rebuttal. We respect your decision, but still want to make the following points.\\n\\n1. First of all, as pointed out by our rebuttal, we can revise one line in the presentation of Gronwell Lemma and the minor flaw in our proof of Theorem 3 (by changing \\u201cindependent\\u201d to \\u201cidentical\\u201d) to correct the proof. The only minor \\u201ccorrection\\u201d we can find in the \\u201cstatement\\u201d is changing \\u201c(M,d)\\u201d to \\u201c(M,t)\\u201d, which does not affect the correctness of our result in Theorem 3. By the way, we have admitted the unintended typo and have corrected it in our rebuttal.\\n\\n2. We are glad that you have resolved your previous confusions such as the \\u201cdistribution under the expectation bounding Wasserstein distance\\u201d, and you also agree that our bound on W2 is correct. That is just what our rebuttal, \\u201cmisunderstandings of the Wasserstein metric (Part 2.1 & 2.2)\\u201d, aims at. Respectfully, we do not think our necessary rebuttal is \\u201clargely irrelevant to the review\\u201d since it helps eliminate possible questions in your original review. For example, in your original review you mentioned \\u201cdiscrete measures defined by weights of the atoms, not atom locations\\u201d; you also said you don't understand the meaning of this bound and Theorem is concerned with W_1 distance between two \\u201catomic measures\\u201d. Hence, we didn\\u2019t mean to invent new questions (actually we didn\\u2019t), but tried to provide detailed explanations.\\u00a0\\n\\n\\n3. Actually when we wrote our rebuttal, we knew your potential misunderstandinig which might be summarized by the word \\u201cunnecessary\\u201d in your latest response. That is the reason why we decided to provide our rebuttal, \\u201c misunderstandings of the Wasserstein metric (Part 3)\\u201d. We have emphasized there that choosing W_1 instead of W_2 in the statement Theorem 3 is also due to similarity to the work on the asymptotic convergence analysis of SVGD in Liu, 2017.\\n\\nBesides, we want to emphasize the main propose of our paper aims at developing the first non-asymptotic convergence theory for SVGD-style algorithm, SPOS. We are not inventing new approaches to deal with W_1 distance. Respectfully, we think it is unreasonable to say that our use of W_1 distance in the statement is \\u201cmisleading\\u201d the researchers who are looking for new approaches to deal with W_1. We did not try to invent the new approaches indeed, and have never mentioned that inventing the new approaches is our aim in our paper.\\u00a0\\n\\n4. We always respect your decision. We were just a little disappointed that the reason for rejection is due to misunderstanding or typo and minor flaw (which have been addressed in our rebuttal). \\n\\nAgain, thank you for your latest response which provides us another chance to re-emphasize the importance of our rebuttal. Please let us know if you still have questions. We are ready to answer them respectfully.\"}",
"{\"title\": \"This rebuttal, although contains useful pieces, is unprofessional and largely irrelevant to the review\", \"comment\": \"Authors made mistakes in their proof and even the theorem statement itself needs corrections. Despite admitting to it in the subsequent comments, in the summary statement they say \\\"unfortunately, there appears to be lots of misunderstandings\\\". The only \\\"misunderstanding\\\" as far as I can tell is with regard to the distribution under the expectation bounding Wasserstein distance. I am also surprised to hear that authors think I applied \\\"higher standard\\\" to their paper. Checking proofs of the theorems in a theoretically oriented paper appears as a basic standard to me.\\n\\nAll that was needed from the rebuttal was to admit the mistakes and suggest resolutions when possible. Although authors partially did so, they decided to spend bulk of the rebuttal inventing some new questions I did not ask and answering to themselves. I suggest you delete the irrelevant parts of your rebuttal.\\n\\nRegarding your reading recommendation - If the authors expect the reader to consult Wikipedia about the Gronwell Lemma, perhaps they should include the link in the manuscript instead of presenting this \\\"basic mathematics knowledge\\\" with a typo. And if such typo was made, it is best to simply admit and correct it. In my review I said that \\\"authors forgot to multiply by $t$ and $\\\\lambda_1$\\\". Surely if we include $\\\\exp$ in the Lemma formulation, this missing terms are found as $\\\\exp(-\\\\lambda_1 t) < 1$ in your rebuttal.\\n\\nRegarding \\\"misunderstanding\\\" - I agree that your bound on W2 is correct and follows from the definition of W2 when expectation is with respect to the optimal coupling. Although finding such coupling could be interesting for discrete measures with random atoms (unlike well-known cases of discrete optimal transport), but looks like it is not needed for your proof. Nonetheless I recommend to add a brief explanation to the distribution under expectation under the bound and why (if so) you don't need to construct it.\", \"regarding_w1_and_w2\": \"indeed I decided to check your proof because I wanted to see how you handle W1 (as opposed to W2, which is known to be easier to analyze). Your proof does not bring anything new for someone interested in studying W1 since you are essentially working with W2. Hence I think that using W1 metric in your theorem statements is misleading and unnecessary.\\n\\nTo summarize, I recommend that the mistakes are corrected and manuscript is resubmitted to a journal or a future ML venue as it needs to undergo additional round of review. My recommendation remains to be \\\"reject\\\" at this point.\"}",
"{\"title\": \"Thank you for the detailed review.\", \"comment\": \"Thanks for your detailed review! We can see that you went to some details of our paper and applied a higher standard to it, which we are really grateful. Unfortunately, there appears to be lots of misunderstandings. We hope you would not mind that we decide to explain in detail to address your concerns. Despite the long and detailed explanation, we only need to revise our paper slightly to fully address your comments.\\n\\nWe divide the whole rebuttal into several parts, which we believe will be easier for you to follow. We hope you can read through our rebuttal even if you encounter some new questions during reading. We believe your new questions can also be addressed after finishing reading our rebuttal. We hope you could read them carefully, and we think our rebuttal is also very helpful for other researchers who read this paper. Thank you.\"}",
"{\"title\": \"Typo and minor flaw which have been fully addressed and do not affect the correctness of our theorem\", \"comment\": \"1. There is a typo in the presentation of Gronwell Lemma. The last line of Lemma 12 should be changed to \\u201cv(t) \\\\leq v(a) \\\\exp (\\\\int_{a}^{t}\\\\beta(s)\\\\mathrm{d}s)\\u201d. We missed the $\\\\exp$ here. We recommend you to read this page https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality if you need more information about this basic mathematics knowledge.\\n\\n2. Yes, you are correct. When we set $\\\\theta_0^{(i)}$ to be independent of $\\\\bar{\\\\theta}_0^{(i)}$, the $\\\\gamma_i(t) \\\\triangleq \\\\mathbb{E}\\\\left\\\\|\\\\theta_t^{(i)} - \\\\bar{\\\\theta}_t^{(i)}\\\\right\\\\|^2$ does not equal to zero. \\n\\nHowever, we can address your concern. Actually, we can set $\\\\bar{\\\\theta}_0^{(i)}$ identical to $\\\\theta_0^{(i)}$, which will then make the $\\\\gamma_i(0)$ equal to zero! \\u201c$\\\\bar{\\\\theta}_0^{(i)}$ identical to $\\\\theta_0^{(i)}$\\u201d means that when $\\\\theta_{0}^{(i)}$ equals to some value like $\\\\theta_0^{(i)}=x$, the $\\\\bar{\\\\theta}_0^{(i)}$ will also equals to $x$, where $x$ is some real number. In other words, the statement \\u201cwe can set $\\\\bar{\\\\theta_{0}}^{(i)}$ to be independent of $\\\\theta_{0}^{(i)}$ \\\" should be changed to \\u201dwe can set $\\\\bar{\\\\theta}_{0}^{(i)}$ identical to $\\\\theta_0^{(i)}$\\u201d.\\n\\nWe guess you might immediately have the following question, \\u201cwhy can you set $\\\\bar{\\\\theta}_{0}^{(i)}$ identical to $\\\\theta_0^{(i)}$\\u201d. Yes, we can. Please notice the fact that $\\\\bar{\\\\theta}_t^{(i)}$ is introduced only for our proof convenience, it is user-defined, just as former work on analyzing granular media equations in Malrieu (2003); Cattiaux et al. (2008); Durm us et al. (2018). That said, the $\\\\bar{\\\\theta}_t^{(i)}$ does not exist in our interpretation of our algorithm. Theoretically, we can let $\\\\bar{\\\\theta}_0^{(i)}$ satisfy any distribution. But most of them are not useful for our proof. To prove Theorem 3 (we can change its name to \\u201cLemma\\u201d as your suggestions), we choose to set it identical to $\\\\theta_0^{(i)}$. The introduction of $\\\\bar{\\\\theta}_t^{(i)}$ is broadly used in former literature of both Mathematics and Machine Learning like Malrieu (2003); Cattiaux et al. (2008); Durm us et al. (2018). if you still have concerns, we strongly recommend you to scan those literature, which we have cited.\\n\\n3. Until now, we have addressed the minor flaw. Based on the above explanations, we will show you the correctness of Theorem 3 in detail, which means we will show you how Gronwell Lemma works here.\\n\\nFirst, please locate the following statement in our proof \\u201c\\\\Rightarrow\\t(\\\\sqrt{\\\\gamma(t)}-\\\\frac{(H_{\\\\nabla K}+H_F)/\\\\sqrt{2}}{\\\\sqrt{M}(\\\\beta^{-1}-3H_FL_K-2L_F)})^\\\\prime \\\\leq -\\\\lambda_1 (\\\\sqrt{\\\\gamma(t)}-\\\\frac{(H_{\\\\nabla K}+H_F)/ \\\\sqrt{2}}{\\\\sqrt{M}(\\\\beta^{-1}-3H_FL_K-2L_F)})\\u201d. 
(Finding the rightarrow in our proof will help you!)\\n\\nNow applying the Gronwell Lemma, we can derive that: \\n\\\\begin{align*}\\n\\\\sqrt{\\\\gamma(t)} - \\\\frac {(H_{\\\\nabla K}+H_F)/ \\\\sqrt{2}} {\\\\sqrt{M}(\\\\beta^{-1}-3H_FL_K-2L_F)} \\\\leq \\n\\\\( \\\\sqrt{\\\\gamma(0)} - \\\\frac {(H_{\\\\nabla K}+H_F)/ \\\\sqrt{2}} {\\\\sqrt{M}(\\\\beta^{-1}-3H_FL_K-2L_F)} \\\\) \\\\exp(-\\\\lambda_1 t)\\n\\\\end{align*}\\n\\nNext, it is worth noting that $ \\\\exp(-\\\\lambda_1 t) < 1$ since $\\\\lambda_1> 0$ and $t> 0$. And according to $\\\\gamma(0)}=0$, we get that \\n\\\\begin{align*}\\n\\\\sqrt{\\\\gamma(t)} \\\\leq \\\\frac {(H_{\\\\nabla K}+H_F)/\\\\sqrt{2}} {\\\\sqrt{M}(\\\\beta^{-1}-3H_FL_K-2L_F)}\\n\\\\end{align*}\\n,which is what we need.\\n\\nUntil now we have addressed your concerns about the proof of our theorem 3.\\n\\n4. We appreciate your suggestions on the additional assumptions, we will include them in the assumption statements. And we agree with your statement that c1 depends on d, and we will rephrase it.\"}",
"{\"title\": \"Misunderstandings of the Wasserstein metric (Part 1)\", \"comment\": \"As for your concerns on Wasserstein-1 metric, we are afraid that there might be a lot of misunderstandings.\\n\\n1.Your first concern is about the correctness of our way of bounding W_1. \\n\\nFirst of all, please notice what we want to bound in Theorem 3 is the W_1 distance between $\\\\rho_t$ and $\\\\nu_t$. \\n\\nNext, please notice that, for every t, the particles $\\\\theta_t^{(i)}$ in Equation (8) and the proof of Theorem 3 are *random variables*. In our theoretical analysis, they are not \\u201cfixed atom locations\\u201d as you mentioned due to the Wiener process in Equation (8). And as mentioned at the beginning of Section 4.1, \\u201cdue to the exchangeability of the particle system $\\\\{\\\\theta_t^{(i)}\\\\}_{i=1}^M$ in Equation (8), if we initialize all the particles $\\\\theta_t^{(i)}$ with the same distribution $\\\\rho_0$, they would endow the same distribution for each time $t$. We denote the distribution of each $\\\\theta_t^{(i)}$ as $\\\\rho_t$.\\u201d Hence, the $\\\\rho_t$ in our statement of Theorem 3 is actually the distribution of each $\\\\theta_t^{(i)}$. \\n\\nSimilar results hold for $\\\\nu_t$. According to the definition of $\\\\bar{\\\\theta}_t^{(i)}$, the $\\\\nu_t$ in our statement of Theorem 3 is the distribution of each $\\\\bar{\\\\theta}_t^{(i)}$.\\n\\nNow let\\u2019s explain why we \\u201cbound W_1 with W_2 and then with just an expectation of l2 norm\\u201d as you mentioned. In Equation (26) of the proof for Theorem 3, the expectation \\u201cE || \\\\theta_{t}^{(i)} - \\\\bar{\\\\theta}_t^{(i)} ||^2\\u201d is taken over the distribution of the coupling $( \\\\theta_{t}^{(i)} , \\\\bar{\\\\theta}_t^{(i)} )$. And the distribution of $( \\\\theta_{t}^{(i)} , \\\\bar{\\\\theta}_t^{(i)} )$ is a joint distribution with marginal distributions equal to $\\\\rho_t$ and $\\\\nu_t$. According to the definition of W_2 distance mentioned at the beginning of Section 4, we can bound W_2 \\u201cwith just an expectation of l2 norm\\u201d.\\n\\nTo sum up, there is no problem bounding W_1 by bounding W_2, which is achieved by bounding the expectation term. And we are so sorry to say that we do not need to make changes to the proof since all the definitions used in the proof have been provided in our context. And the order of these definitions has been set carefully for reading.\"}",
"{\"title\": \"Misunderstandings of the Wasserstein metric (Part 2.1)\", \"comment\": \"2.We think we could know your other potential concern from your comments.\\n\\nAlthough we feel sorry that from your comments, it is too hard for us to get exactly whtat you mean, but we still try our best to guess your concern from your wording, like \\u201cdiscrete measures\\u201d and \\u201catom locations\\u201d. We are afraid that your concern might not merely exists in your understanding of Theorem 3, but also makes you confused throughout the whole paper. Hence, we decide to use our paper\\u2019s main target, bounding the W_1 metric between $\\\\mu_k$ and the posterior distribution, mentioned at the beginning of Section 4.1 to resolve your concern (we use $\\\\mu_k$ instead of $\\\\mu_T$ here just for the sake of clarity).\\n\\nBefore we present your concerns, please look at equation (9). Due to the fact that $\\\\xi_{k-1}^(i)$ are random variables, the particle ${\\\\theta}_{k}^{(i)}$ in our SPOS are also random variables. In other words, ${\\\\theta}_{k}^{(i)}$, for any i and k, is a random variable and has its own distribution. Just as what we mentioned in Section 4.1, due to the exchangeability of those particles, if we initialize them with the same distribution $\\\\mu_0$, all the particles ${\\\\theta}_{k}^{(i)}$ will have the same distribution for any k, denoted as $\\\\mu_k$. \\n\\nNow, we will present your concern as follows. We guess you might focus on the issue that in practice, what we get is several \\u201cfixed atom locations\\u201d of the particles after running our SPOS algorithm. And based on those \\u201cfixed atom locations\\u201d, we will get a \\u201cdiscrete distribution\\u201d, not the $\\\\mu_k$ as we mentioned above. What\\u2019s worse, due to the fact that each particle ( ${\\\\theta}_{k}^{(i)}$ ) is a random variable, the \\u201cfixed atom locations\\u201d do not remain the same. This leads to the problem that the \\u201cdiscrete distribution\\u201d we derive becomes stochastic, making our problem much more complicated and the \\u201cexpectation\\u201d extremely hard for you to understand.\\n\\nHowever, please notice our paper do not focus on that \\u201ccomplicated stochastic discrete distribution\\u201d. What we bound in our paper is the W_1 metric between $\\\\mu_k$ and our target distribution (posterior distribution), where $\\\\mu_k$ is the distribution of each ${\\\\theta}_{k}^{(i)}$ instead of that \\u201ccomplicated stochastic discrete distribution\\u201d. This has been mentioned in our beginning of Section 4.1. \\n\\nUntil now, we guess you might have the question why we do not work on that \\u201ccomplicated stochastic discrete distribution\\u201d. We have the following two reasons:\\n\\n1)The W_1 metric between $\\\\mu_k$ and the posterior distribution is adopted due to the goal of Bayesian sampling. This is our first reason.\\n\\nIn our Bayesian sampling algorithm SPOS, the particles ${\\\\theta}_{k}^{(i)}$ (or we can call them atoms) are actually \\u201cparameters\\u201d, which are used to characterize the corresponding statistical models in Bayesian statistics. (Please see the first sentence of our Section 2.1 which provides the basic background of Bayesian sampling). Those \\u201cparameters\\u201d ${\\\\theta}_{k}^{(i)}$ all have the same distribution of $\\\\mu_k$. 
And since the target of Bayesian sampling algorithm is to sample some parameters from the posterior distribution, how $\\\\mu_k$ approximates the posterior distribution is exactly what we need to work on. Therefore, we do not need to care about that \\u201ccomplicated changing discrete distribution\\u201d.\"}",
"{\"title\": \"Misunderstandings of the Wasserstein metric (Part 2.2)\", \"comment\": \"2)The W_1 or W_2 metrics between $\\\\mu_k$ and the posterior distribution are also broadly used in the convergence analysis of SG-MCMC and non-convex optimization. This is the second reason.\\n\\nSimilarly, the ${\\\\theta}_{k}$ in the updates of SG-MCMC is actually also a random variable. And we denote its distribution as $\\\\mu_k$. It is worth noting that in practice, SG-MCMC also collects many \\u201cfixed atom locations\\u201d one by one, and they also form a discrete distribution. But many recent work on SG-MCMC also only care about $\\\\mu_k$ and use W_1 or W_2 metric to make convergence analysis, such as Xu et al. (2018), Raginsky et al. (2017) in our reference and other related papers like \\u201cOn the Theory of Variance Reduction for Stochastic Gradient Monte Carlo\\u201d(ICML 2018), \\u201cFurther and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent\\u201d(COLT 2017) and \\u201cUser-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient\\u201d(arXiv:1710.00095). Hence, we decided to follow what the existing work does.\\n\\nAnother point we want to emphasize is that Bayesian sampling algorithms like SGLD are more and more broadly used in non-convex optimization. In non-convex optimization, the \\u201ccomplicated stochastic discrete distribution\\u201d is much less useful than $\\\\mu_k$. Please refer to Xu et al. (2018), Raginsky et al. (2017) in our reference for more details. And they also adopt W_1 or W_2 metric between $\\\\mu_k$ and their target distribution in their analysis. It is worth noting that we have pointed out in Section 7, the Conclusion, that non-convex optimization is one of the interesting future work for our SPOS. Hence, it is much more valuable to give a bound in terms of the W_1 metric between $\\\\mu_k$ and our target distribution.\"}",
"{\"title\": \"Misunderstandings of the Wasserstein metric (Part 3)\", \"comment\": \"3. Until now, you might have another concern: \\u201cwhy do you choose W_1 instead of W_2?\\u201d Although many work on SG-MCMC adopted W_2 metric, it is worth noting that providing a bound for SPOS is much more complicated than for SGLD. Hence, we think focusing on W_1 metric is quite acceptable. Moreover, W_1 is adopted also because of the work on the asymptotic convergence analysis of SVGD in Liu, 2017.\\n\\nThe asymptotic convergence analysis of SVGD in Liu, 2017 adopts \\u201cBL metric\\u201d. The definition of BL metric is $BL(\\\\mu,\\\\nu) \\\\triangleq sup{E f(\\\\mu) - E f(\\\\nu), || f ||_{\\\\infty} <=1 and || f ||_{Lip}<=1}$. And please notice that W_1 metric has another well-know definition, $W_1(\\\\mu,\\\\nu) \\\\triangleq sup{E f(\\\\mu) - E f(\\\\nu), || f ||_{Lip}<=1}$. Hence, the definitions of these metrics are similar but not identical. (Actually it is easy to verify that $BL(\\\\mu,\\\\nu) <= W_1(\\\\mu,\\\\nu) $. ) Therefore, we decide to adopt W_1 metric here due to its similarity to BL metric in the existing work.\"}",
"{\"title\": \"Other minor issues\", \"comment\": \"1. We will change the \\u201cW\\u201d to \\u201cK\\u201d.\\n\\n2. Yes, we understand your suggestions that Theorems 3-6 could be lemmas and there should be a unifying theorem for the bound. However, it is worth noting that the \\u201cunifying theorems for the bounds\\u201d are provided in the appendix H as mentioned at the end of Section 4.3. And we hope not to move them to the context due to the space limit. Our paper is already 10 pages long and can not fit these two long theorems. Besides, we have no problems to change the \\u201cTheorem\\u201d to \\u201cLemma\\u201d. Unfortunately, as our responses to other reviewers, our paper\\u2019s main contribution lies in the theoretical analysis. We think the techniques and ideas of Theorems 3-6 are also very important, which provide some guides for other researchers in the field. Hence, we think Theorems 3-6 worth the name \\u201cTheorem\\u201d, not just \\u201cLemmas\\u201d (which mean they are only affiliated to the \\u201cunifying theorems for the bounds\\u201d). But if you still disagree with our opinions, we are willing to make the changes since this issue is quite minor and should not be the reason for your rejection.\\n\\n3. We will change the notation of Wasserstein metric from $\\\\mathcal{W}$ to $W$.\\n\\n4. We don\\u2019t think \\u201cExample in Figure 1 is contrived\\u201d. The distribution form used in Figure 1 is given in Appendix A. It is obvious that the distribution are nonzero everywhere (the probabilities are just very small somewhere), thus it does not have the problem of \\u201cdisconnected modes\\u201d. We use this example to show failure case of SVGD, which induces no problem with our SPOS.\\n\\nUsing RMSE and log-likelihood is the gold standard in Bayesian learning of DNNs. Instead of directly showing uncertainty for in/out distribution samples as suggested by you, we test It in the more direct scenario of reinforcement learning. The reason is that it is well-accepted that RL performance directly measures how well the uncertainty is learned, as there is an exploration stage in the learning, requiring uncertainty to explore the environment. As a result, we believe our measure in the experiments are standard.\\n \\nAt last, we really hope you could reconsider your scoring as it seems to be quite unfair based on your comments. In our response, we have fully address the minor problem which you pointed out. Besides, we have resolved your concerns about the W_1 metric excessively. And we have shared our opinions with you on the name of our Theorem and decides to change the notations as you suggested.\\n\\nWe think it is really unfair to reject our paper merely for some minor problems, which has been fully addressed. Although we have explained much to your concerns about W_1 metric, most of them are not needed to add to our paper, which means that we do no need to make much revision. If you like, we are willing to add our explanations for your concerns into the Appendix. Thank you so much for your time and re-consideration!\"}",
"{\"title\": \"Thanks for your comments\", \"comment\": \"To eliminate the confusion of the reviewer, we re-run the experiments for SVGD and SPOS, the same split of data (train, val and test) are used for SVGD and SPOS. The test results are reported on the best model on the validation set. The results are as follows:\", \"boston_housing\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n2.829 \\\\pm 0.126 & 2.961 \\\\pm 0.109 & -2.532 \\\\pm 0.082 & -2.591 \\\\pm 0.029\", \"concrete\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n5.071 \\\\pm 0.1495 & 6.157 \\\\pm 0.082 & -3.062 \\\\pm 0.037 & -3.247 \\\\pm 0.01\", \"energy\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n0.752 \\\\pm 0.0285 & 1.291 \\\\pm 0.029 & -1.158 \\\\pm 0.073 & -1.534 \\\\pm 0.026\", \"kin8nm\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n0.079 \\\\pm 0.001 & 0.075 \\\\pm 0.001 & 1.092 \\\\pm 0.013 & 1.138 \\\\pm 0.004\", \"naval\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n0.004 \\\\pm 0.0 & 0.004 \\\\pm 0.000 & 4.145 \\\\pm 0.02 & 4.032 \\\\pm 0.008\", \"ccpp\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n3.939 \\\\pm 0.0495 & 4.127 \\\\pm 0.027 & -2.794 \\\\pm 0.025 & -2.843 \\\\pm 0.006\", \"winequality\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n0.598 \\\\pm 0.014 & 0.604 \\\\pm 0.007 & -0.911 \\\\pm 0.041 & -0.926 \\\\pm 0.009\", \"yacht\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n0.84 \\\\pm 0.0865 & 1.597 \\\\pm 0.099 & -1.446 \\\\pm 0.121 & -1.818 \\\\pm 0.06\", \"protein\": \"SPOS (MSE) & SVGD (MSE) & SPOS (LL) & SVGD (LL)\\n4.254 \\\\pm 0.005 & 4.392 \\\\pm 0.015 & -2.876 \\\\pm 0.009 & -2.905 \\\\pm 0.010\\n\\n\\nFor the YearPredict data, we follow the literature, and only report one result (the training is quite stable for this dataset). \\n\\nFor RL results, the four benchmarks are the simplest benchmarks for reinforcement learning; thus it is obviously not necessary to use a 400-400 MLP as a policy. Even in much more complex benchmarks, e.g., humanoid and walker, previous methods such as soft-Q learning and SAC used 128-128 MLP or 256-256 MLP as the policy network, TRPO used one-layer MLP. We followed the settings of VIME, and think it is more reasonable to use a simpler policy network. 400-400 MLP as a policy is too complex to be a good choice. We used the released code of SVPG and the same settings for both methods (also same seeds), thus the comparisons are fair for both methods. \\n\\nFor the other environments, Mountain car is a very simple environment compared with CartpoleSwingUp and Double Pendulum, and we encountered errors from the framework when running the algorithms. We will try to fix the problem and incorporate results into our next revision.\\n\\nAs in our response to Reviewer 1, we did not claim a better algorithm than SVGD in theory because there is no nonasymptotic theory for SVGD (though we did observed better empirical performance), but a better way to understand the nonasymptotic convergence behavior of particle optimization algorithms, e.g., SPOS, providing a non-asymptotic bound for an SVGD-style algorithm the first time.\"}",
"{\"title\": \"we don't think the comments are fair\", \"comment\": \"We thank the reviewer for his comments, however, the reviewer seems to miss our key point.\\n\\nFirst, we would like to stress that our paper does not try to show our proposed method is better than either SVGD or SGLD. The motivation of our method is to help better understand the nonasymptotic convergence behavior of SVGD. Since there is no/limited nonasymptotic theory for SVGD, it is hard to understand its convergence behavior. To overcome this difficulty, we combine SGLD with SVGD, and for the first time successfully develop nonasymptotic convergence theory for a SVGD-style algorithm. Because there no nonasymptotic theory for SVGD (except for some restrict results of the recent work [1]), nothing can be said about SVGD and our algorithm in theory. Similarly, it is hard to compare to SGLD as well because our algorithm is particle based.\\n\\nThat said, even though we can perform other experiments on simple toy data, nothing can be expected by comparing our algorithm and SVGD, except the pitfall property of SVGD described in Sec 4.4, which has been shown in Figure 1.\\n\\nFor the proof of Theorem 2, G is defined in the equation below eq.20, which is related to LHS of eq.7. It is unfair to say that \\\"Reading this proof got me very worried and did not motivate me to read the rest of the paper\\\" because the proof techniques of Theorem 2 and other theorems are complete independent.\"}",
"{\"title\": \"a hybrid SGLD - SVGD\", \"review\": \"Two promising methods for scalable sample-based Bayesian inference are:\\n1) SGLD: simply discretize a standard Langevin dynamics to construct a Markov chain that approximate the correct invariant distribution. This reads: \\n\\nx_{t+1} = x_t + \\\\nabla \\\\log \\\\pi(x_t) \\\\delta + \\\\sqrt(2 \\\\delta) \\\\xi\\n\\n2) SVGD: the method can be expressed as a type of gradient descent of an appropriate functional on the space of probability distributions. A cloud of particles {x_i}_{I=1}^M evolves according to:\\n\\nx^i_{t+1} = x^i_t + (some functional of all the particles) \\\\, \\\\delta\\n\\nThe method proposed in the article is not very different from alternating the two above mentioned update, which is indeed quite a natural idea, and can work pretty well I think. The method reads:\\n\\nx^i_{t+1} = x^i_t + [ \\\\nabla \\\\log \\\\pi(x_t) \\\\delta + (some functional of all the particles) \\\\, \\\\delta ] + \\\\sqrt(2 \\\\delta) \\\\xi.\", \"pros\": [\"yes, I think that the method can work quite OK since it may be borrowing the strengths of both SGLD and SVGD.\", \"It seems that the meat of the paper consists in proving some (non-asymptotic) convergence result. Unfortunately, this went above my head and I cannot claim that I have read the details of the proofs.\"], \"cons\": [\"it is (very) difficult to fairly evaluate this type of methods in high-dimensional settings. I thus appreciate that the numerical section starts with a toy very simple Gaussian model. I would have been much more interested in fair and extensive simulations in this type of settings where it is relatively easy to compare the proposed method with SGLD and SGVD. In other words, after reading the paper, I must say that I am not at all convinced that the method does bring something over SGLD or SVGD (although it is very possible that it does). For example, comprehensive and fair comparisons with SGLD and SVGD in Gaussian settings (not necessarily one-dimensional) could have been presented. The delicate tuning of the different methods, the speed of convergence wrt algorithmic time, the speed of comparison wrt the number of particles, etc.. could have been investigated numerically: this would have been, I think, much more convincing.\"], \"minor_comments\": [\"I did check the proof of Theorem 2, which seems hand-wavy and overly complicated. What is the function G? It seems that the proof of Theorem 2 simply consists in establishing that if each particle x_i follows the dynamics dx = F(x)*dt then the associated densities satisfy \\\\partial_t \\\\mu_t = -\\\\partial_x(F(x) * \\\\mu_t(x)) , which is obvious. But the situation in the paper is indeed more delicate since the particles are interacting, etc... Reading this proof got me very worried and did not motivate me to read the rest of the paper.\"], \"summary\": [\"the method is not terribly original -- this is a simple hybrid SVGD / SGLD -- but may work very well.\", \"unfortunately, the numerical experiments are not convincing.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes a particle-based inference algorithm, the optimal update for each particle is the summation of the standard SGLD direction and SVGD velocity. The work further analyzes non-asymptotic properties of SPOS. The results appear theoretically interesting and of potential practical value in designing inference algorithms. I did not go through the proofs in the supplementary.\\n\\n[Experimental results are not convincing] \\n\\n[BNN] I noticed the test RMSE and test LL of SVGD are directly copied from the original SVGD paper. However, the performance critically depends on:\\n1. Running time, or training epochs\\n2. Data partitions\\nTo be a fair comparison, the authors should keep at least the training epochs and random partitions the same. Especially for the dataset Year, for which only one random partition is conducted. It\\u2019s highly likely that the performance gain is due to favored data partition rather than the superiority of the algorithm.\\n\\n[RL] Average rewards are significantly lower than the scores reported in the original SVPG paper?\\n1. From figure 3, SPOS only outperforms SVPG on envs Cartpole Swing Up and Double Pendulum. The best reward for env Cartpole Swing Up reported in this paper is around 200. However, the score is ~400 in the original SVPG paper. For the env Double Pendulum, there\\u2019s also very large performance gap. I am aware the code for SVPG is now publicly available, the authors may consider conducting the experiments with the same settings (e.g. same seed?). Otherwise, it\\u2019s hard to tell whether the performance gain is significant while the baseline is much worse than it should be.\\n2. Only 3 envs are reported, the authors may also consider reporting all the envs are used in the SVPG paper\\n\\n[Figure 1] The authors may consider reporting the exact settings of this case, otherwise, it\\u2019s hard to believe that SVGD would collapse on a simple 1D case.\\n\\nIf the authors can fully address the concerns above, I will consider changing the scores.\", \"other_comments\": \"- Related papers:\\n Stein Variational Message Passing for Continuous Graphical Models, Wang et al., ICML18 (https://arxiv.org/abs/1711.07168)\\n Stein Variational Gradient Descent as Moment Matching, Liu et al., NIPS18 (https://arxiv.org/abs/1810.11693)\\n\\n- Page 30 crashes my browser all the time\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"I have multiple concerns regarding proof of Theorem 3\", \"review\": \"This paper considers the problem of Bayesian inference using particle optimization sampler. Similarly to SGLD, authors propose Stochastic Particle Optimization Sampler (SPOS), augmenting Stein Variational Gradient Descent (SVGD) with diminishing Gaussian noise, replacing the hard-to-compute term of the Chen et al. (2018) formulation. Various theoretical results are given.\\n\\nThis paper was a pleasant read until I decided to check the proof of Theorem 3. I was not able to understand transitions in some of the steps and certain statements in the proof seem wrong.\", \"theorem_3\": \"\\\"Note that $\\\\theta^i_t$ and $\\\\hat \\\\theta^i_t$ are initialized with the same initial distribution \\u00b50 = \\u03bd0 and we can also set $\\\\theta^i_0$ to be independent of $\\\\hat \\\\theta^i_0$, we can have $\\\\gamma(0) = 0$. $\\\\gamma(0) = E \\\\|\\\\theta^i_0 - \\\\hat \\\\theta^i_0 \\\\|^2$.\\\" - this doesn't seem right to me. Expectation of squared difference of two independent and identically distributed random variables is not 0, assuming expectation is with respect to their joint density.\\n\\n\\\"Then according to the Gronwall Lemma, we have\\\" - I don't see how the resulting inequality was obtained. When I tried applying Gronwall Lemma, it seems that authors forgot to multiply by $t$ and $\\\\lambda_1$. Could you please elaborate how exactly Gronwall Lemma was used in this case.\\n\\n\\\"... some positive constants c1 and c2 independent of (M, d)$ - in the proof authors introduce additional assumption \\\"We can tune the bandwidth of the RBF kernel to make \\u2207K \\u2264 H_\\u2207K, which is omitted in the Assumption due to the space limit.\\\" First, there is a missing norm, since \\u2207K is a vector and H_\\u2207K is I believe a scalar constant. Second, c1 = H_\\u2207K + H_F, which both bound norm of d-dimensional vector and hence depend on d. I also suggest that all assumptions are included in the theorem statements, especially since authors have another assumption requiring large bandwidth. Additionally, feasibility of these both assumptions being satisfied should be explored (it seems to me that they can hold together, but it doesn't mean that part of assumptions can be moved to the supplement).\\n\\nI find using Wasserstein-1 metric misleading in the theorem statement . This is not what authors really bound - from the proof it can be seen that they bound W_1 with W_2 and then with just an expectation of l2 norm. Moreover I don't understand the meaning of this bound. Theorem is concerned with W_1 distance between two atomic measures. What is the expectation over? Note that atom locations are supposed to be fixed for the W_1 to make sense in this context (and the expectation is over the coupling of discrete measures defined by weights of the atoms, not atom locations).\\n\\n\\\"Note the first bullet indicates U to be a convex function and W to be ... \\\" I think it should be K, not W.\\n\\nTheorems 3-6 could be lemmas, while there should be a unifying theorem for the bound.\\n\\nFinally, I think notation should be changed - same letter is used for Wasserstein distance and Wiener process.\", \"other_comments\": \"Example in Figure 1 is somewhat contrived - clearly gradient based particle sampler will never escape the mode since all modes are disconnected by regions with 0 density. 
Proposed method on the other hand will eventually jump out due to noise, but it doesn't necessarily mean it produces better posterior estimate. Something more realistic like a mixture of Gaussians, with density bounded away from zero across domain space, will be more informative.\\n\\nIt is not sufficient to report RMSE and test log likelihood for BNNs. One of the key motivating points is posterior uncertainty estimation. Hence important metric, when comparing to other posterior inference techniques, is to show high uncertainty for out of distribution samples and low for training/test data.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bke96sC5tm | SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning | [
"Marvin Zhang*",
"Sharad Vikram*",
"Laura Smith",
"Pieter Abbeel",
"Matthew Johnson",
"Sergey Levine"
] | Model-based reinforcement learning (RL) methods can be broadly categorized as global model methods, which depend on learning models that provide sensible predictions in a wide range of states, or local model methods, which iteratively refit simple models that are used for policy improvement. While predicting future states that will result from the current actions is difficult, local model methods only attempt to understand system dynamics in the neighborhood of the current policy, making it possible to produce local improvements without ever learning to predict accurately far into the future. The main idea in this paper is that we can learn representations that make it easy to retrospectively infer simple dynamics given the data from the current policy, thus enabling local models to be used for policy learning in complex systems. We evaluate our approach against other model-based and model-free RL methods on a suite of robotics tasks, including manipulation tasks on a real Sawyer robotic arm directly from camera images. | [
"model-based reinforcement learning",
"structured representation learning",
"robotics"
] | https://openreview.net/pdf?id=Bke96sC5tm | https://openreview.net/forum?id=Bke96sC5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1g0jpfQxV",
"S1ekABFkgE",
"H1eNN3ma14",
"H1xc-4vsCQ",
"BylfnXvi0m",
"BkeI2zwr0m",
"S1gVBIlrRm",
"HJgPZ8lHRQ",
"BJemlBxrRm",
"B1g3stit6Q",
"HygFzNbFam",
"r1eTCmWFTm",
"SklP9FFR3Q",
"S1gFTfz537"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544920486140,
1544684998593,
1544530988488,
1543365634484,
1543365546362,
1542972078134,
1542944315864,
1542944254805,
1542943979455,
1542203812425,
1542161424770,
1542161364644,
1541474703366,
1541182145184
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper831/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/Authors"
],
[
"ICLR.cc/2019/Conference/Paper831/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper831/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a method to learn representations to infer simple local models that can be used for policy improvement. All the reviewers agree that the paper has interesting ideas, but they found the main contribution to be a bit weak and the experiments to be insufficient.\\n\\nPost rebuttal, the reviewers discussed extensively with each other and agreed that, given more work is done on a clear presentation and improving the experiments, this paper can be accepted. In its current form however, the paper is not ready to be accepted. I have recommended to reject this paper, but I will encourage the authors to resubmit after improving the work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas, but the paper can be improved.\"}",
"{\"title\": \"Thank you for your response.\", \"comment\": \"We appreciate your feedback, however we disagree with several points in your assessment.\\n\\nWe find your point on model-based RL in general to be unreasonable in judging the merit of our work. As our paper addresses model-based RL, our method of course will exhibit some of the limitations of the current state of the art in model-based RL while addressing other limitations. As you mention, it is well-known that model-based RL typically lacks the final performance reached by model-free RL. However, this criticism can be applied to any model-based RL research, and blanket rejection of all model-based RL research is unreasonable. Our aim is to expand the scope of domains that model-based RL can be applied to rather than improve the final performance of model-based RL in general. While the latter is an important line of work, it is not within the scope of our paper, and we believe that our method as is already demonstrates the novel ability to operate directly on image-based domains with orders of magnitude fewer samples than previously reported.\\n\\nThis leads to our objection of your claim that our method is \\u201cnowhere close to solving the task\\u201d. Which video would support this claim? In the nonholonomic car case, the policy is successful at reaching the target at least 80% of the time, and furthermore it cannot be faulted for driving against the wall as this is not directly penalized by the cost function. For the reacher, while we agree that the behavior is noticeably worse than PPO, the final position of the end effector is consistently within a few pixels of the target, not \\u201cnot really close\\u201d as you suggest. For the block stacking, even conservatively speaking, the final policy is successful on four out of five tries, and we note that even a carefully hand-engineered controller would not be 100% successful due to the inherent noise in controlling a real-world robot. Regarding your point on having multiple block positions, we have run this experiment multiple times and the block position has not been fixed between runs. Our method was successful every time at learning a good final policy, and we will formally include multiple block positions in the final version of our paper.\\n\\nAs we have noted, we believe that the key take-away from our work is the development of a method that can operate in domains with complex observations such as images in an extremely data-efficient manner. This method strikes a favorable balance in extending the capabilities of model-based RL while being practical for use in real world tasks, which is generally not true for model-free RL algorithms. We again emphasize that, to the best of our knowledge, no prior work has shown the ability to solve a real world task as complex as block stacking with only half an hour of total interaction time, and we encourage you to either point to prior work that is comparable or reconsider your stance on the significance of this result.\"}",
"{\"title\": \"Thanks for the clarifications\", \"comment\": \"The updated paper definitely goes in the right direction. I like that you updated the videos for all experiments.\\n\\nI think the videos interestingly highlight the current state of the art model based RL, which is - unfortunately - nowhere close to solving the task. This statement is not specific to the SOLAR algorithm and applies to most currently known model-based RL algorithms. In the non-holonomic car experiment PPO drives quite smoothly to the desired location while SOLAR learned that driving against the wall gets the agent closer to the target. Similar for the reacher task, where PPO goes smoothly to the desired location while SOLAR only goes in the direction to the target but not really close. Yes, SOLAR learns to increase the reward with a very good sample complexity - and does this comparable / slightly better than the other methods - but SOLAR is not close to solving the task and converges prematurely. Therefore, I am quite objecting these conclusions:\\n\\n\\\"Not only is our method able to solve this task directly from raw, high-dimensional camera images within 200 episodes, corresponding to about half an hour of interaction time, our method is also successful at handling the complex, contact-rich dynamics of block stacking. As seen in the video on our project website, our method learns a policy that can react to slightly different contacts, due to the bottom block shifting between episodes, and is ultimately successful in stacking the block in most episodes.\\\" => Well, yes the distance to the goal reduces with episodes and the final policy can sometimes stack the blocks. However, there are also episodes where the blocks are not completely stacked. Furthermore, there is no evaluation of different block positions.\\n\\n\\\"The key insights in SOLAR involve learning latent representations where simple models are more accurate and utilizing PGM structure to infer dynamics from data conditioned on entire real-world trajectories. Our experimental results demonstrate that SOLAR is competitive in sample efficiency, while exhibiting superior final policy performance, compared to other model-based methods. \\\" => Well, yes the performance might be slightly better but the paper provides no argument for simple models other than empirically shown on 3 experiments the simple models performed slightly better. The open questions are still, why is this converging prematurely and is this premature convergence easier to fix with global or local models?\", \"so_concluding\": \"The paper is ok written and does not contain major issues. The idea is not rather novel but a solid extension. For the solid extension, the results are not significantly better and lack general conclusions applicable to model-based RL. So I don't have a key take-away other than somebody has tried it and it resulted in mixed performance. So for me it is still borderline, i.e., a rating 5.5.\"}",
"{\"title\": \"Thank you for your response.\", \"comment\": \"We have made further updates based on your comments, and we would appreciate any other feedback you might offer.\\n\\nAs requested by both you and reviewer 1, we have moved details about the cost function learning and model optimization from the appendix to the main paper. We believe that this should clarify the presentation in the main paper, particularly sections 3 and 4, without relying as heavily on the appendix. Also as requested by both you and reviewer 1, the project website ( https://sites.google.com/view/iclr19solar ) now includes longer videos of the final policies from both our method and the comparisons for the simulated car and reacher tasks. We believe that the new video of the reacher policy learned by our method should demonstrate that are indeed learning the task. We perform worse than PPO, which solves the task nearly perfectly with about 40 times more data, however we significantly outperform the ablations and we perform comparably to TRPO. In the video, the policy we learn consistently brings the reacher arm close to the target in all ten episodes, and we would appreciate your re-evaluation of this experiment given the new videos.\\n\\nRegarding your concerns about the metric we use for reacher, we included the standard Gym reward function for this task because it is a standard Gym task, and this is the metric that prior works report. We could of course also report something else, but our evaluation metric is consistent with prior work in this regard. If the standardized reward function for this task (which we did not design) does not exhibit the desired behavior, this is the fault of the reward function, not the algorithm. Of course, we agree with the reviewer that reward doesn't tell the whole story, which is why we added three other tasks and included an extensive qualitative evaluation on the real robot task, which is the hardest task that we evaluate on. However, we do not believe that it is appropriate to criticize the work for using a standardized evaluation protocol, especially when we made an intentional effort to avoid the pitfalls of this evaluation by including other additional experiments, including an experiment with a real robot.\\n\\nWe would like to emphasize the fact that our experimental evaluation includes four tasks, including one task on a real-world physical robot. All tasks involve learning directly from images. Compared to many recent works on model-based reinforcement learning that evaluate on four tasks [Watter \\u201815, Banijamali \\u201817, Nagabandi \\u201818, Chua \\u201818], the scope of the evaluation is comparable, and in contrast to all of these recent works, we evaluate on real-world image-based robotic control. We believe that it would be appropriate to evaluate the experimental evaluation comparatively to other works in the field, as this would seem to be the only fair standard against which to judge the work.\\n\\nWe appreciate your responsiveness in helping us improve our paper, and we would appreciate it if you could take another look at our updates.\"}",
"{\"title\": \"We have made further updates to the paper and project website based on your feedback.\", \"comment\": \"As requested, we have moved details about the cost function learning and model optimization from the appendix to the main paper. We believe that this should clarify the presentation in the main paper without relying as heavily on the appendix. Also as requested, the project website ( https://sites.google.com/view/iclr19solar ) now includes longer videos of the final policies from both our method and the comparisons for the simulated car and reacher tasks. This should help clarify that all methods are fairly consistent in their behavior by the end of training, and our method and the model-free algorithms are qualitatively more successful than the ablations.\\n\\nYou and reviewer 3 have raised similar concerns about the quantitative results that we would like to address. For the reacher task, we included the standard Gym reward function for this task because it is a standard Gym task, and this is the metric that prior works report. We could of course also report something else, but our evaluation metric is consistent with prior work in this regard. If the standardized reward function for this task (which we did not design) does not exhibit the desired behavior, this is the fault of the reward function, not the algorithm. Of course, we agree with the reviewer that reward doesn't tell the whole story, which is why we added three other tasks and included an extensive qualitative evaluation on the real robot task, which is the hardest task that we evaluate on. All tasks involve learning directly from images. Compared to many recent works on model-based reinforcement learning that evaluate on four tasks [Watter \\u201815, Banijamali \\u201817, Nagabandi \\u201818, Chua \\u201818], the scope of the evaluation is comparable, and in contrast to all of these recent works, we evaluate on real-world image-based robotic control. We believe that it would be appropriate to evaluate the experimental evaluation comparatively to other works in the field, as this would seem to be the only fair standard against which to judge the work.\\n\\nWe would appreciate it if you could take another look at our updates and let us know if you would like to revise your score or request further additions and clarifications.\"}",
"{\"title\": \"Effort to the right direction. Still needs improvement though\", \"comment\": \"I appreciate the authors' effort to improve the manuscript.\\n\\nLet me first explain my comment regarding \\\"methodologically we have not learned anything new\\\". From what I understood by thoroughly reading the paper and the supplementary material, the theoretical section does not present any new method/technique. It is rather a collation of other ideas which of course, in order to make it work, requires a heavy understanding of the problem, a lot of engineering and some extensions to prior work (seem to be marginal). I want to clarify that I am perfectly OK with that. However, this issue, in combination with the evaluation on a limited number of tasks and the fact that in 1 out of 4 tasks the model is nowhere close to solving the problem makes the paper weak.\\n\\nRegarding the reacher experiment, specifically, I raised the same concerns as Reviewer 1 and I am not satisfied by the authors' reply. They authors have added the following piece in section 6.2:\\n\\\"On the reacher task, we plot the reward function as defined by Gym since this is the standard metric used to evaluate performance on this task, and as shown by the videos on our project website, achieving high Gym reward correlates strongly with solving the task in terms of distance to the goal\\\".\\nFirst of al, the fact that it is the standard metric does not make it the right one (the fact that you do not report it for the other tasks suggests that the authors do not believe it is the right metric either). It is definitely a metric that can hide many pathologies. As for the video, it is only 1s long and we cannot conclude much from it. However, what we saw is that the proposed approach is nowhere close to solving the task. It is just a free fall of the arm towards the goal, so no \\\"strong correlation\\\" between reward and solving the task, which brings us to my point that plotting the accumulated reward gives no information regarding the performance of the model. Since we see that the model fails on the reacher task, why have the authors not studied other similar gym environments? More importantly, why they do not comment on the reasons why the model cannot solve the task but they rather try to argue that it achieves higher reward? In my opinion, further analysis is required.\\n\\nApart from the experimental evaluation, I agree with Reviewer1 that presentation of the methodology requires more work. I apologise for not bringing this up in my first review but I share the same opinion as Reviewer 1 that \\nthe important details like the cost-functions and the optimisation problem should be present in the main text and not buried in the appendix. I feel like sections 3 and 4 are difficult to follow unless you thoroughly study the appendix.\\n\\nOverall, I appreciate the authors' effort, and based on their response and their additional comparisons I am willing to change my score to a 5 but I still do not believe that it is good enough for publication.\"}",
"{\"title\": \"We have made some further updates to the paper and project website.\", \"comment\": \"We have updated the paper with the requested comparisons, in particular, we have now included the results for PPO for all simulated tasks in Figure 4 and Appendix F. The project website (https://sites.google.com/view/iclr19solar ) also now includes a longer video of the learning process of our method on the Sawyer block stacking task. We would appreciate it if you could take another look at our changes and additional results.\"}",
"{\"title\": \"We have made some further updates to the paper and project website.\", \"comment\": \"We have updated the paper with the requested comparisons, and the project website (https://sites.google.com/view/iclr19solar ) also now includes a longer video of the learning process of our method on the Sawyer block stacking task. We would appreciate it if you could take another look at our changes and additional results.\"}",
"{\"title\": \"Thank you for your detailed feedback.\", \"comment\": \"We have added the requested comparisons and evaluations, and we hope to provide answers to your questions in this comment.\\n\\nAs requested, we have updated the project website (https://sites.google.com/view/iclr19solar) with a longer video of the learning process for the Sawyer block stacking task with more example trajectories during training and from the final policy. Similarly to including image overlays of these trajectories, this should make clear that our method is indeed learning the task successfully. The video now depicts both the image observations we provide to the policy along with our model\\u2019s reconstructions of the images, which are generally fairly accurate. Also, as we discuss in Section 6.3, the video shows that the final policy successfully handles shifts in the position of the bottom block, and this indicates that the policy is reacting to different contacts rather than simply following a trajectory, as this strategy would fail more often. We will also update the website with videos depicting several episodes from all final policies for each method, which should help clarify our method\\u2019s performance compared to each of the baselines.\\n\\nAs requested, we have added trajectories of the latent state for the 2D navigation task to Appendix F. We also clarify in Appendix E that we use velocity control for the Sawyer experiment, which helps ensure that the policy is safe along with an explicit safety constraint that clips velocities that are too high. We have added the exact reward functions for each task also to Appendix E. Finally, we have added all of the PPO comparisons to Appendix F, which should elaborate on the differences between the model-based and model-free policies.\\n\\nWe clarify several points in the paper in response to some of your questions, including clarifying how we define partial observability in Section 2.2, explaining the global model failure on reacher in Section 6.2, and elaborating on the contacts in the block stacking task in Section 6.3. We now address the specific questions you raise:\\n\\n> \\u201cI would like to see more comparisons to other model-based approaches.\\u201d\\n\\nWe are happy to include any comparisons to prior approaches that you can recommend, though we believe that our comparisons in the main paper and Appendix F already cover a substantial subset of the model-based approaches including LQR-FLM, MPC-NN, baselines similar to E2C and deep visual foresight, and several ablations. Note that LQR-FLM and MPC-NN were only successful from states rather than images as in Appendix F, so we subsequently added other comparisons and ablations as suggested by the other reviewers. Most prior model-based methods cannot operate directly on complex image observations in a data-efficient manner as we have demonstrated with our method, and we believe this is a significant result.\\n\\n> \\u201cit would be really interesting to try your approach on breakout.\\u201d\\n\\nNote that Breakout and other Atari games utilize discrete actions, and our model is designed for and tested on continuous action domains as we focus on robotic applications. Extending our model to discrete actions is non-trivial as it would necessitate some type of continuous relaxation or learned action representation, and we believe that this is an interesting direction for future work. 
We have added this point to Section 7, and we are happy to run evaluations on other tasks at your request for the final paper.\\n\\n> \\u201cHow do you explain these jumps [in the reacher plot]?\\u201d\\n\\nThe jumps occur whenever the policy is updated, which happens after we collect a batch of data. These jumps are also present in all of the other tasks, however they are less noticeable because of the greater variance in policy performance on those tasks.\\n\\n> \\u201cWhat is the unit of \\u2018Average Distance to Final Goal\\u2019\\u201d?\\n\\nThis is measured as the ground truth distance in simulation, for which units are not meaningful. For reference, the x-y coordinates for both the 2D navigation and car tasks are between -3 and 3, so the final distances we achieve indicate that the agent essentially reaches the goal.\\n\\nWe would appreciate it if you could take another look at our changes and additional results, and let us know if you would like to either revise your rating of the paper, or request additional changes that would alleviate your concerns.\"}",
"{\"title\": \"Interesting model based RL approach that needs additional evaluation and clearer algorithm description\", \"review\": \"The introduction and experiment section is clearly written but the algorithm description lacks clarity and details, which hinder the understanding of the complete algorithm. One understands the motivation and the main approach but lacks a detailed understanding. For my personal taste the detailed description of learning the embedding is missing. I personally would prefer the statement of the cost-functions and the optimisation problem within the paper and not the appendix. The same holds true for the policy improvement. Therefore, I do not fully understand the approach without extensively studying the appendix or the references. Especially the contribution remains unclear. I am not aware how much the previous work had to be extended.\\n\\nThe experimental evaluation focuses on learning control signal to achieve certain trajectories, where the observations are high-dimensional images rather than low-dimensional representations. I personally think that these tasks are unnecessarily made more complex to incorporate high-dimensional images. Especially, the Sawyer experiment throws away all joint information even though the reward function is solely defined in joint/end-effector position. However, I am aware that this is general practice in the RL community. From the learning curves it seems that the approach is working and achieving good sample complexity compared to model free approaches. However, the improvement over the naive VAE approach remains unclear. I would like to see more comparisons to other model-based approaches. In addition, I am missing qualitative comparisons as the learning curves can be misleading. Especially, the videos on the homepage are really short and do not provide a good overview about the actual performance. Furthermore, you are not providing videos for all models in comparison. The 1s video of a single episode on the reacher task make me wonder what happens in the other episodes. Could you please add longer videos for all comparisons. Furthermore, it would be interesting how the trajectories evolve over time. Could you plot these trajectories? \\n\\nFurthermore, it would be really interesting to try your approach on breakout. And test if your approach is learning the actual game dynamics and does not overfit to the block configuration.\", \"further_minor_comments\": \"- \\\"This shifts our problem setting to that of a partially observed MDP, as we do not observe the latent state\\\"\\nYou are mentioning that you are solving a POMDP. Could you elaborate how you exploit the POMDP formulation and relate your work to POMDP algorithms. In addition, how do you define partial observability? \\n\\n- You claim \\\"our method is also successful at handling the complex, contact-rich dynamics of block stacking, which poses a significant challenge compared to the other contactfree tasks.\\\" I am quite doubt-full about the claim. Is the dynamics model really modelling contacts and is your policy really reacting to these contacts? Or is your policy just tying to follow a trajectory? From your current evaluation and the videos, I personally wouldn't conclude this. Could you elaborate how you come to this conclusion and provide additional evaluations to solidify your argument? \\n\\n- You are not describing the action space for the Sawyer experiment. Are you using torques, velocities or positions? 
Can you guarantee that the control sequence is smooth? If not how do you ensure that the policy does not harm the robot? \\n \\n- Could you please incorporate the exact reward functions for each experiment within the appendix.\\n\\n- Figure 4. Thanks a lot for including the additional model free baselines and adding all learning curves. However, the learning curves raise multiple questions:\\n\\n(1) The Global Model Ablation, i.e. the MPC in latent space, works well in the the navigation and car experiment however fails \\nto achieve a meaning-full policy within the reacher task. Even though the initial performance is significantly better than \\nthe other policies. Do you have an explanation for this failure?\\n\\n(2) The LDS SVAE and VAE Solar version on the reacher task experiences jumps in performance even though the change between policies is bounded by a KL-Bound and the cost function is smooth. How do you explain these jumps? Furthermore, why are these jumps only occurring within the reacher tasks and not the other experiments. \\n\\n(3) You are still missing the PPO baselines for the reacher and car experiment. Could you further explain the qualitative difference between the model-free and model-based policies. The difference in learning curves can be misleading. \\n\\n(4) What is the unit of \\\"Average Distance to Final Goal\\\"? Is this measured in pixel or a different unit? \\n\\n- Figure 5: You are plotting the distance to the goal as performance measure for the Sawyer experiment. The final policy has an approximate error of 2.5 cm. From just the learning curve I cannot conclude that the robot actually learns the task successfully. Is the block really stacked or can it also be wedged? Could you please provide image overlays of the last 10 episodes such that one can evaluate the qualitative performance?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for your detailed review.\", \"comment\": \"To address your concerns, we have added the requested comparisons and clarifications to the paper. We also clarify several points about our methodology in this comment. We believe that these changes and our response address all issues, but we would appreciate any other feedback you might offer.\\n\\nWe have included in Figure 9a a comparison to PPO (purple line) on the 2D navigation task as requested, and we will include the other simulated tasks as well in the final. This method performs well and converges to a better policy than our method and TRPO, though the improvement is marginal and our method is still several orders of magnitude more data efficient. Note that we do not assume access to the underlying state as in GPS (Levine 2016), and we have compared our method to LQR-FLM directly on images which was unable to learn the tasks. If LQR-FLM is unsuccessful, then GPS will be unsuccessful as the neural network policy is supervised by the LQR-FLM policies. We have now emphasized this point in Section 6.2.\\n\\nWe have clarified various parts of the paper as suggested. Regarding the specific point on closed-form vs constrained LQR, we note that both of these procedures utilize a quadratic cost, thus there is not a significant implementation difference in terms of the constrained vs unconstrained procedures, and we have clarified this in Section 4.2. We have added citations for prior work that assumes access to a simple representation, in particular, Levine 2014, Deisenroth 2014, and Nagabandi 2018. We have also cited Deisenroth 2014 to elaborate on modeling bias, which is discussed in detail in that paper.\\n\\nIn response to your concerns about using a constrained policy update, we further clarify and justify our policy update method in Section 4.2. We would appreciate if you can take a look at this section and see if it helps explain our modeling and policy learning choices. Finally, to address your concern about the presentation of the results, we justify why we choose to plot the reward rather than the final distance for the reacher task in Section 6.2.\\n\\nIn response to \\u201cmethodologically we have not learned anything new\\u201d, we believe that the main methodological contribution of this paper is to wed structured representation learning and local model-based RL into a more principled method compared to prior work, such that approximate inference within the model exactly enables the local model method we derive. These individual components are based on prior work, however they are extended to meet the needs of our overall method, and to our knowledge this combination is a novel contribution. This is combined with a demonstration that our method can solve real world robotics tasks such as block stacking from only image observations in about half an hour of interaction time, which again to our knowledge has not been done to the level of performance we demonstrate.\\n\\nWe would appreciate it if you could take another look at our changes and additional results, and let us know if you would like to either revise your rating of the paper, or request additional changes that would alleviate your concerns.\"}",
"{\"title\": \"Thank you for your detailed feedback and insightful comments.\", \"comment\": \"We have addressed the issues you raised by adding the requested comparisons and evaluations. We also clarify the performance of the E2C comparison in this comment. We believe that these changes and our response address all issues, but we would appreciate any other feedback you might offer.\\n\\nAs requested, we now have a comparison to a version of deep visual foresight, which is a pixel space global model, on the real robot block stacking task in Figure 5 (brown line). We will include a full and fair comparison to deep visual foresight on all tasks in the final, however, we note that the main difficulty here is data efficiency -- both deep visual foresight and Agrawal/Nair 2016 utilize weeks of robot data, whereas we demonstrate successful block stacking in about 30 minutes of interaction time. We believe that our method\\u2019s data efficiency in solving image-based tasks on a real robot is a significant result.\\n\\nWe have also now included the other requested baselines on the block stacking task in Figure 5, specifically using an autoencoder representation (VAE ablation, pink line) and using a forward model (global model ablation, blue line). The ablations in this setting are competitive with our method, though our method still achieves a better final policy that is able to more consistently stack the block successfully. We also now compare against these baselines in Figure 4 for all simulated tasks as requested, and though they work well for 2D navigation, they are less successful on the reacher. Finally, as suggested, we include a diagram of the 2D navigation task in Appendix E, Figure 6.\\n\\nAs you noted, E2C performs rather poorly on our version of the 2D navigation task. Our task is harder than that of Watter 2015 and Banijamali 2017, since the policy must reach every target position (which is appended to the state) rather than just a single position in the bottom right. In Appendix F.1, we show that E2C does perform better on the single-target task. Also note that we have used code directly from Banijamali and we have had discussions with these authors who noted similar challenges in reproducing the results of E2C. These works have not made their code publicly available, and to our knowledge no one has been able to reproduce their reported results. However, we are confident that our implementation is correct and that the E2C results accurately reflect the performance of that method.\\n\\nWe would appreciate it if you could take another look at our changes and additional results, and let us know if you would like to either revise your rating of the paper, or request additional changes that would alleviate your concerns.\"}",
"{\"title\": \"Interesting idea, but insufficient comparison to baselines\", \"review\": \"This paper proposes a model-based reinforcement learning approach, called SOLAR,\\nwhich consists of mapping complex, high-dimensional observations to low-dimensional \\nrepresentations where transition dynamics between consecutive states are approximately linear. \\nIn this low-dimensional space, local models can easily be fit in closed form and then used to optimize a policy, using a similar method to Guided Policy Search (GPS). The method is evaluated in 4 different settings (3 simulated, 1 on a real robot). \\n\\n*Quality: the method seems to work well in the experiments. However, there are issues with the experimental evaluation (detailed below) which make it unclear whether the method is better than standard baselines.\\n\\n*Clarity: the paper is well-written and clear overall. \\n\\n*Originality: the paper proposes an extension of GPS, which to my knowledge is novel. \\n\\n*Significance: the idea of learning representations where transitions are linear seems well-founded and potentially useful. However the merits of this method are not yet clear from the experiments.\", \"specific_comments\": \"- Please include an illustration of the 2D navigation task in Figure 3a\\n- I'm confused by the poor performance of E2C in the 2D navigation task. \\nThe previous works of [Watter et. al, 2015] and [Banijalami et. al, 2017] report close to 100% accuracy using similar methods. Is the task formulated differently here? \\n- I would think a global action-conditional forward model (represented as convnet+deconvnet, and trained unrolled on its own predictions to reduce model errors) would perform quite well on the 2D navigation task, and possibly on the reacher task. Even though these are represented as images, they are very simple images with little distracting information, no changes in illumination/perspective, etc. It seems the model essentially just needs to learn a pixel translation for each action for the navigation task, and some rotations for the reacher. It already seems to work quite well for the non-holonomic car, which requires learning similar transformations. This baseline should be included for all the tasks. \\n- Although it does seem that the method performs well on the stacking tasks for the real robot, there are no baselines included. However, there are many works which have explored representation learning and control for robotics using neural networks. A couple examples (+see references within):\\n\\n\\\"Learning to poke by poking: Experiential learning of intuitive physics\\\" Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine. NIPS 2016\\n\\\"Deep Visual Foresight for Planning Robot Motion\\\" Chelsea Finn, Sergey Levine ICRA 2017\\n\\nAt the very least, the method should be compared to pixel-based global models and representations learned with some kind of autoencoder or forward model for the robot task. \\n\\nThe paper proposes what seems to be a good idea, but it is not yet demonstrated by the current experiments.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice work but methodology is not clear enough in some parts. It can also benefit from a broader experimental evaluation.\", \"review\": [\"In this work the authors propose an end to end approach for model based reinforcement learning from images, where the main building blocks are locally-linear dynamical systems and variational auto-encoders (VAE). Specifically, it is assumed that the input features (i.e., the images) are generated from a low dimensional latent representation mapped through parametric random functions; the latter are modeled via neural networks. A recognition model based on convolutional neural networks operates on the reverse way and is responsible for projecting the input features to the latent space, in order to proceed with the reinforcement learning task. The variational framework is employed in order to jointly learn the VAE and the linear dynamics on the latent state. As a final step, once the model is fitted a linear quadratic system (LQS) is solved in order to learn the cost function and the optimal policy.\", \"The paper is well motivated and tries to solve an interesting problem, that of data-efficient reinforcement learning. The experiments are well picked and demonstrate the advantages of the proposed approach towards solving the task, however, the method is only evaluated on few environments and compared against only a couple of other methods. I would expect a broader evaluation and/or comparison against more methods. Since the model is able to reach TRPO\\u2019s performance in much less steps it would be nice to see how it performs against PPO from [Schulman et al. 2017] (at least on the simulated environments). Also, would it make sense to compare against [Levine et al. 2016] that has been evaluated on similar tasks?\", \"[Schulman et al 2017] \\u201cProximal Policy Optimization Algorithms\\u201d.\", \"[Levine et al. 2016] \\u201cEnd-to-End Training of Deep Visuomotor Policies\\u201d.\", \"Methodologically, the paper is sound. The model part (as the authors point out) is based on [Johnson et al. 2016] and is well explained. On the other hand, the policy part, and in particular the policy update in Section 4.2 has some issues regarding readability. There is a strong interplay between Section 4.2, Section 2.1 and Appendix D and the authors did not manage to nicely explain what exactly is happening during the update phase. In the beginning the reader has the impression that we are finding the optimal policy via the closed-form LQS. Later on we switch to constrained optimisation for the cost by accounting for the KL divergence between the policy on two episodes. Finally, in the appendix we are back to the original quadratic cost. The authors need to clarify all the above. Also, they need to explicitly mention why they opt for stochastic optimisation (is it because of minibatching?)\", \"To continue with the policy, in Section 4.2 the authors argue that although the optimal policy can be found in closed form this is not desirable because the policy will overfit the model and will not generalise well in the real environment. I disagree with this statement. If this happens it effectively means that the learned model or the assumption/learning of the linear dynamics is not right. The authors seem to also agree with this since they clearly state in the the experimental section that \\u201c... our method does not heavily rely on an accurate model...\\u201d. 
To my understanding, this means that we need to refine the modelling strategy and not learn a sub-optimal policy. I am really interested in the authors opinion on that.\", \"The above argument is also directly related to the recognition model and learning of the policy in the latent state (I completely agree with that). The recognition network, which in this case is a convolutional neural network, is used as an inference mechanism to project the observations to the latent space. We learn the (variational) parameters of the recognition model by optimising the likelihood\\u2019s lower bound. This means that we are \\u201callowed\\u201d to overfit the variational parameters as long as the bound gets tighter. This can possibly result in degraded performance during the policy update. Furthermore, the variational distribution of the latent state, i.e., q(z_t | s_t) is assumed to be mean field across time (independent z\\u2019s), while clearly this is not the case in the posterior. You somehow mitigate that by augmenting the observed state (feeding consecutive frames to the network), but still this is not ideal. Finally, is there a reason why we only use the mean of the recognition model to fit the cost on the projected latent states? Why are we throwing away the uncertainty? Especially since you do not use an exact solver and follow a stochastic gradient.\", \"In the end of Section 2.1, the authors argue regarding the fact that the prior work assumes access to a compact low-dimensional representation which does not allow them to perform well on images. Reference is needed.\", \"In the related work the authors mention modelling bias as a downside of prior work. Can you please elaborate on that? Where does the bias come from and, more importantly, how does your approach overcome this issue?\", \"In the experiment and specifically in Figure 4 am I right in assuming that the distance to target is measured in actual pixels? Furthermore, why the relevant plot for the reacher task is depicting rewards instead of the distance to target. To me this suggests that the task is not solved. In general what I find very upsetting in the field are plots that only depict accumulated reward for a specific task. There are many situations where the agent learns a weird behaviour that happens to give good rewards (e.g., spinning around the cart-pole), and unfortunately such behaviours are not spotted on the reward plots.\", \"Overall, the paper is nicely presented and definitely an interesting work. However, given the fact that methodologically we have not learned anything new from this paper and in combination with the not satisfying experimental evaluation I warrant for rejection.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
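The KL-bounded policy update debated in this review can be grounded with the closed-form KL divergence between two Gaussian policies, the quantity a constrained update keeps below a step size. This is a generic sketch, not the paper's exact update rule; all names are illustrative.

```python
# Sketch: KL divergence KL(N(m0, S0) || N(m1, S1)) between two Gaussian
# policies; a constrained update would require this to stay below some eps.
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    d = m0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```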
]
} |
|
Bkg5aoAqKm | Fast Binary Functional Search on Graph | [
"Shulong Tan",
"Zhixin Zhou",
"Zhaozhuo Xu",
"Ping Li"
] | Large-scale search is an essential task in modern information systems. Numerous learning-based models have been proposed to capture semantic-level similarity measures for searching or ranking. However, these measures are usually complicated and go beyond metric distances. As Approximate Nearest Neighbor Search (ANNS) techniques are specialized to metric distances, efficient searching by advanced measures is still an open question. In this paper, we formulate large-scale search as a general task, Optimal Binary Functional Search (OBFS), which contains ANNS as a special case. We analyze existing OBFS methods' limitations and explain why they are not applicable to complicated search measures. We propose a flexible graph-based solution for OBFS, Search on L2 Graph (SL2G). SL2G approximates gradient descent in Euclidean space, under accessible conditions. Experiments demonstrate SL2G's efficiency in searching by advanced matching measures (i.e., Neural Network based measures). | [
"Binary Functional Search",
"Large-scale Search",
"Approximate Nearest Neighbor Search"
] | https://openreview.net/pdf?id=Bkg5aoAqKm | https://openreview.net/forum?id=Bkg5aoAqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylfTv3eeV",
"Bkxw766hR7",
"H1giXF6hCX",
"BkebSdD9aX",
"S1gakuPcTX",
"HkehwwDqTX",
"SJxBPmP9pm",
"Byx3R3E03m",
"BklgOiC2jQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544763321957,
1543458079319,
1543457059477,
1542252600743,
1542252516683,
1542252388069,
1542251356843,
1541455060152,
1540316007701
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper830/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper830/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper830/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper830/Authors"
],
[
"ICLR.cc/2019/Conference/Paper830/Authors"
],
[
"ICLR.cc/2019/Conference/Paper830/Authors"
],
[
"ICLR.cc/2019/Conference/Paper830/Authors"
],
[
"ICLR.cc/2019/Conference/Paper830/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper830/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an Optimal Binary Functional Search (OBFS) algorithm for searching with general score functions, which generalizes the standard similarity measures based on Euclidean distances. This yields an extension of the classical approximate nearest neighbor search (ANNS). As observed by the reviewers, this work targets an important research direction. Unfortunately, the reviewers raised several concerns regarding the clarity and significance of the work. The authors provided a good rebuttal and addressed some concerns, but not to the degree that reviewers think it passes the bar of ICLR. We encourage the authors to further improve the work to address the key concerns.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Rejection: good Paper but still require further improvement\"}",
"{\"title\": \"Response to author rebuttal\", \"comment\": \"Thank you for the detailed rebuttal. It has been very helpful for me to better understand the proposed solution and its utility.\\n\\nWhat I meant with \\\"basic search algorithm\\\" was that for any functional f(x_i, q), a graph would be built on S on some distance (or inverse similarity) function d(x_i, x_j) such that small d(x_i, x_j) implies small |f(x_i, q) - f(x_j, q)| and large d(x_i, x_j) implies large |f(x_i, q) - f(x_j, q)|. Then the search would just proceed with f for any query q. But now I see that this is exactly your proposed SL2G where you fix your d(x_i, x_j) to be the \\\\ell_2 distance. So essentially the guarantee for accuracy is dependent on the Lipschitz constant of the function f(\\\\cdot, q) = f_q(\\\\cdot) for a fixed q. \\n\\nOBFS is definitely more general definition than Bregman search and Max-kernel search but it is just a definition. This paper provides a solution SL2G to OBFS (defined on Euclidean spaces) but SL2G has no concrete correctness guarantees to the best of my understanding, especially since it is very very hard to build exact Delaunay graph (even for incomplete \\\"local\\\" ones) and the guarantees in this manuscript do not account for this approximation. On the other hand, Bregman search and max-kernel search provide correctness guarantees (using the structure of the problem). But that may be a shortcoming of graph-based search algorithms in general, not SL2G in particular.\"}",
"{\"title\": \"Not enough for me to change my review.\", \"comment\": \"Thanks for the response. For general distillation, please refer to the original paper:\", \"https\": \"//arxiv.org/abs/1503.02531\\n\\nAs to your specific case, I see that you are using recommendation datasets.\\nPlease consider using factorized model with side information.\"}",
"{\"title\": \"Responses to the Comment 6, 7&8\", \"comment\": \"6. [Significance] Finally, I believe that it would be good to see a connection between the success of SL2G to relationship between |f(x1, q) - f(x2, q)| and |x1 - x2 |_2 since the author emphasize that the proposed scheme can be seen as \\\"gradient descent in Euclidean space\\\" (although the authors would need to also precisely explain what they mean by that statement).\\n\\n[Response] As we mentioned in the paragraph after Theorem 2, the success and accuracy of our algorithm depend on the radius of curvature of the level sets. We believe |f(x_1, q) - f(x_2, q)| and |x_1 - x_2 |_2 together with the density of dataset might affect the speed of convergence, which we have not covered in this paper.\\n\\n7. [Originality] Some related work that the authors should position their proposed problem/solution against\\u2026\\n\\n[Response] Thank you for pointing out these related works. First of all, we would like to emphasize that our work is very different from similarity search, so most of the existing methods in this field does not apply to our problem. \\n``Max-kernel search\\\" is defined on a Hilbert space, so it has to be symmetric. Bregman divergence does not have to be symmetric, but both variables must come from the same convex set. Even if we apply Bregman ball tree to a similarity search problem, we do not think it performs well on finding top-10 nearest neighbors.\\nWe will analyze these works in our updated manuscript. \\n\\n8. The authors should present the precise SL2G algorithm given the graph in the manuscript.\\n\\n[Response] Thanks for your suggestion. To make the manuscript more self-contained, we will list the algorithms (graph construction and greedy search) in the Appendix, although they are common algorithms for search on graph methods.\\n\\nFinally, we really appreciate your time and detailed comments.\"}",
"{\"title\": \"Responses to the Comment 3, 4&5\", \"comment\": \"3. [Significance] While Definition 1 considers topological spaces, SL2G is assuming that X and (maybe) Y are in $R^d$ (for different values of d). So does that mean that SL2G does not solve the general OBFS?\\n\\n[Response] Thanks for pointing this out. The answer is \\u201cno\\u201d, it only solves OBFS when X and Y are subsets of Euclidean spaces. Actually, we are usually interested in Euclidean spaces or its subsets in real applications, as mentioned below Definition 1.\\n\\n4. [Significance/Correctness/Clarity] The assumptions in Theorem 2 (as well as the supporting Proposition 1 in Appendix B) seems quite unreasonable. In moderately high dimensional X, doesn't the curse of dimensionality imply that this condition will not hold in most case? \\u2026\\n\\n[Response]Thank you for reading the proposition and theorem very carefully. \\nWe consider an asymptotic setting that the number of data points growing to infinity and the dimension of X is fixed. When the region E is fixed, \\\\lambda^* is proportional to the number of data points, so it goes to infinity. \\nOf course, one can consider a high dimensional setting when n and d increase simultaneously. However, C_r is still not critical since it is not in the exponent. In the failing probability formula, the volume of r/2 ball depends on d and plays a much more important role than C_r when d increases. If we hope the failing probability still goes to 0, then we should require log \\\\lambda^* is much greater than d log d.\\nAbout the implication from Proposition 1 to condition (b) in Theorem 2, it is a simple geometry property. We recall that F is a set of centers of open r/2-balls whose union covers E. For a fixed open r-ball, say B, its center is covered by at least one open r/2-ball with the center in F. This open r/2-ball is contained in B by triangle inequality. The open r/2-ball contains at least one data point, which also belongs to B. This implies every open r-ball contains a data point, which is the assumption (b) in Theorem 2. We believe our proof is mathematically correct and clear.\\nWe believe this is a nontrivial result. As the number of data points increases, we have more \\u201cbad\\u201d data points. Here, \\u201cbad\\u201d data point means it is far away from the local optimum of $f$, but it is a local optimum in greedy search on the graph. The theorem and proposition show that even if we have more bad data points, the failure probability of the greedy search still goes to 0.\\n\\n5. [Clarity/Significance] I am unable to understand the baseline HNSW-SBFG (or the motivation for it) in the empirical section. It would be good to clarify this. \\n\\n[Answer] HNSW-SBFG is quite similar to the original HNSW. We just replace the metric measure in HNSW, such as l2 or cosine, with the focusing search binary functional f. Beyond that, the graph construction and greedy search approaches of HNSW-SBFG are same as the original HNSW. Note that, to let HNSW-SBFG be applicable, we set X and Y in the same space (both 64-dimensional). In this way, f(x_i,x_j) will output a value no matter f is symmetrical (e.g., MLP-Em-Sum) or asymmetrical (e.g., MLP-Concate). If f is asymmetrical, f(x_i,x_j) is problematic actually. That why HNSW-SBFG works even worse on MLP-Concate datasets. If X and Y have different dimensions, HNSW-SBFG will be not applicable.\"}",
"{\"title\": \"Responses to the Comment1&2\", \"comment\": \"1. While the problem being addressing is extremely important, and the proposed solution seems reasonable, the manuscript is really hard to follow. For example, Definition 3 and Theorem 1 are extremely hard to understand.\\n\\n[Response] Thanks for your comments. We will add more explanations for the theory part and make it easier to access. Specifically, although the problem is in an asymmetric setting, readers can still assume f(x,y) = -|x-y| as a typical example to understand the definitions and theorem. For example, assuming f is the negative l2-norm, then definition 3 means we will connect two data points in the Delaunay graph if the Voronoi cell is \\u201cadjacent\\u201d to each other. Here, adjacency means their boundary has nonempty intersection. Theorem 1 means that, for an arbitrary query, a greedy search on Delaunay graph with any initial point can find the nearest neighbor of the query.\\n\\n2. [Clarity/Significance] Moreover, I feel that the authors should be more precise in pointing out why the current graph-based search algorithms are just not trivially applicable to OBFS. \\u2026\\n\\n[Response] Thank you for this comment. We are going to assume \\\"basic search algorithm on similarity graph\\\" indicates the previous search on graph methods, such as HNSW or Bregman ball tree (you mention in a later comment). These algorithms require f(x,y) defined on the product of two identical spaces. OBFS is much more general and does not have such an assumption. \\n\\nSuppose we still assume x and y are from the same space and plug in f as a \\\"similarity function\\\" in HNSW, which is exactly the baseline, HNSW-SBFG, we used in experiments. Particularly, in the recommendation-system scenario, we embed users and items in the same Euclidean space. As shown in the experimental results on page 8, the performance of HNSW-SBFG is much poorer than HNSW-SL2G. We believe original HNSW or any other existing similarity graph based algorithms require f performs like a similarity function. A well behaved f in recommendation system should not measure the similarity between user and item.\\n\\nIt is also worth to mention that, although we provide guarantees for SBFG, but most of general f's, e.g., neural networks, does not satisfy the condition in Theorem 1.\"}",
"{\"title\": \"Responses to the comments\", \"comment\": \"The authors do not demonstrate sufficient value of performing approximation in this specific fashion. For instance, in Theorem 2, the authors start with the concavity assumption of the scoring function f(). Then it is natural to apply a gradient ascent method on the neighborhood graph. And the authors did not quantitatively or qualitatively justify their specific approach.\\n\\n[Response] Thanks for your comments. Nodes on the neighborhood graph are discrete points in the space. Searching on the neighborhood graph is quite different from gradient descent in the continuous space. That is why we try to figure out the conditions in which the proposed method will work well. To the best of our knowledge, this is the first work discusses this point. We provided the theoretical analysis and empirical experiments for the proposed approach. \\n\\nLately, numerous publications have shown that distilled models can achieve very high quality and render scoring function separable. The authors should at least compare their method against distillation and Maximum Inner Product Search based approaches. \\n\\n[Response] For related distilled models, could you specify the particular papers? Thanks. \\n\\nThe MIPS problem is a special case of the Binary Functional Search problem. Although the proposed method (SL2G) is not designed for the MIPS problem but for more complex searching measures, it can be applied for MIPS, the corresponding empirical study can be found in Appendix D.\"}",
"{\"title\": \"Fast Binary Functional Search on Graph\", \"review\": \"This work extends the approximate nearest neighbor search (ANNS) algorithm to a more general setting. Instead of search with a \\\"separable\\\" similarity measure, the authors propose Optimal Binary Functional Search (OBFS), where the scoring function f() is in general non-separable. The exact construction of the Binary Function Graph wrt f() and X is computationally expensive. The specific approximate algorithm of OBFS proposed in the paper is to:\\n1) First construct an L2 Delaunay graph for based on the dataset X only and;\\n2) Perform greedy search with the L2 Delaunay graph.\\n\\nThe authors also discuss various conditions under which, the approximation method can achieve close to optimal value.\", \"some_of_the_concerns_i_have_with_this_work\": \"1) The authors do not demonstrate sufficient value of performing approximation in this specific fashion. For instance, in Theorem 2, the authors start with the concavity assumption of the scoring function f(). Then it is natural to apply a gradient ascent method on the neighborhood graph. And the authors did not quantitatively or qualitatively justify their specific approach.\\n\\n2) Lately, numerous publications have shown that distilled models can achieve very high quality and render scoring function separable. The authors should at least compare their method against distillation and Maximum Inner Product Search based approaches.\\n\\nOverall, this research direction is interesting, but this specific work falls short for a publication at ICLR.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Promising novel idea; needs further clarification and development\", \"review\": \"Post-rebuttal\\n------------------\\nI have read the rebuttal and I better understand the paper. Given that, I am going to raise my rating by one point for the following reason:\\n- The manuscript presents a novel solution to a general problem and it is a valid solution. However, the solution is somewhat obvious, which is not necessarily a bad thing, which is why I am raising my rating by a point. However, an easy solution like the one proposed in the manuscript means that OBFS considered in this manuscript is not as general as the authors let on -- there is an implicit assumption that f(x_i, q) is close to f(x_j, q) if x_i is close to x_j.\\n- While the authors answered a lot of my clarification questions, the manuscript seems still a little hard to parse and can be significantly improved for easier reading and understanding.\\n\\n=========================================\\nPros\\n-------\\n[Originality/Significance] The manuscript focuses on a very general and important problem and proposes a scheme to solve this general problem. The authors present some theoretical and empirical results to demonstrate the utility of the proposed scheme.\\n\\nLimitations\\n----------------\\n[Clarity] While the problem being addressing is extremely important, and the proposed solution seems reasonable, the manuscript is really hard to follow. For example, Definition 3 and Theorem 1 are extremely hard to understand. \\n\\n[Clarity/Significance] Moreover, I feel that the authors should be more precise in pointing out why current graph based search algorithms are just not trivially applicable to OBFS. The nature of the approximate Delaunay graph is that it can be built for any given similarity function (the level of approximation obviously depends on the similarity function, but that is an existing issue with graph-based methods). Given the graph, I do not understand why the basic search algorithm on this similarity graph would not be an approximate solution to OBFS. Hence I believe the authors need to clarify why the existing graph based algorithms do not directly translate. \\n\\n[Significance] While Definition 1 considers topological spaces, SL2G is assuming that X and (maybe) Y are in R^d (for different values of d). So does that mean that SL2G does not solve the general OBFS?\\n\\n[Significance/Correctness/Clarity] The assumptions in Theorem 2 (as well as the supporting Proposition 1 in Appendix B) seems quite unreasonable. In moderately high dimensional X, doesn't the curse of dimensionality imply that this condition will not hold in most case? In there any reason why/how this would be circumvented? Moreover, in Proposition 1 (in Appendix B), the quantity C_r needs to be precisely defined since it could in general be exponential in the number of dimensions. Also, the assumption in Proposition 1 where \\\\lambda^* > 0 is fairly strong in high dimensional data since data gets really sparse in high dimensions. Finally, the last step in Proposition 1 (where the failure probability obtained from the union bound is connected to condition (b) in Theorem 2) is not clear at all -- it is not apparent how E and F related to S and how p relates to every ball containing a point in S. This is a very important step and needs better exposition. \\n\\n[Clarity/Significance] I am unable to understand the baseline HNSW-SBFG (or the motivation for it) in the empirical section. It would be good to clarify this. 
\\n\\n\\nGeneral comments\\n---------------------------\\n[Significance] Finally, I believe that it would be good to see a connection between the success of SL2G to relationship between |f(x1, q) - f(x2, q)| and ||x1 - x2 ||_2 since the author emphasize that the proposed scheme can be seen as \\\"gradient descent in Euclidean space\\\" (although the authors would need to also precisely explain what they mean by that statement).\\n\\n[Originality] Some related work that the authors should position their proposed problem/solution against:\\n- There is some work on \\\"max-kernel search\\\" which can perform similarity search with general notions of similarity (than just Euclidean metrics).\\n- There is some work on search with Bregman divergences which handle asymmetric similarity functions and also incorporate notions of gradient descent over convex sets.\\n\\nMinor comments/typos\\n---------------------------------\\n- The authors should present the precise SL2G algorithm given the graph in the manuscript.\\n- l^2 --> \\\\ell_2\\n- gradient decent --> gradient descent\\n- Table 1, f(q, x) --> f(x, q)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJzqpj09YQ | Spectral Inference Networks: Unifying Deep and Spectral Learning | [
"David Pfau",
"Stig Petersen",
"Ashish Agarwal",
"David G. T. Barrett",
"Kimberly L. Stachenfeld"
] | We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data. We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner. | [
"spectral learning",
"unsupervised learning",
"manifold learning",
"dimensionality reduction"
] | https://openreview.net/pdf?id=SJzqpj09YQ | https://openreview.net/forum?id=SJzqpj09YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1gy0VoR14",
"r1g65MYbA7",
"SkgJEMFbCm",
"SyxHlzFbRQ",
"SkeVIGb037",
"HJxVNdzj2Q",
"Hke1_V2qnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544627398631,
1542718100539,
1542717991291,
1542717932638,
1541440075587,
1541249068115,
1541223527034
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper829/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper829/Authors"
],
[
"ICLR.cc/2019/Conference/Paper829/Authors"
],
[
"ICLR.cc/2019/Conference/Paper829/Authors"
],
[
"ICLR.cc/2019/Conference/Paper829/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper829/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper829/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a deep learning framework to solve large-scale spectral decomposition.\\n\\nThe reviewers and AC note that the paper is quite weak from presentation. However, technically, the proposed ideas make sense, as Reviewer 1 and Reviewer 2 mentioned. In particular, as Reviewer 1 pointed out, the paper has high practical value as it aims for solving the problem at a scale larger than any existing method. Reviewer 3 pointed out no comparison with existing algorithms, but this is understandable due to the new goal.\\n\\nIn overall, AC thinks this is quite a boarderline paper. But, AC tends to suggest acceptance since the paper can be interested for a broad range of readers if presentation is improved.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Some presentation issues, but practical value for large-scale eigen computations\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Response: We thank the reviewer for their comments. We are glad that they found the technical contribution strong, and hope we can address their issues with the presentation. To the specific points raised:\\n\\n(1) We use the term \\u201cnetwork\\u201d in spectral inference networks in the sense of \\u201cneural network\\u201d, similar to \\u201cgenerative adversarial networks\\u201d or \\u201cHopfield networks\\u201d. The nodes and edges would be exactly the same as for any other neural network architecture, and we describe the exact network architectures used in the supplementary materials in Section C. As these were fairly conventional network architectures, we did not want to use the already tight space in the main paper to describe them. To reiterate the point we made to Reviewer 1 - we are happy to move these details into the main paper, but it will put us over 8 pages. If this is a deciding factor in raising the score, we\\u2019ll do it.\\n\\t\\nAs for what the defining characteristics of a spectral inference network are, as opposed to other neural network architectures or machine learning frameworks, there are three key ingredients:\\n* The loss in Eq 6\\n* The symmetry-broken gradient in Eq 14, which provides a natural ordering to the output of the network\\n* The use of moving averages of the covariance and Jacobian of the covariance (line 7 and 8 of Alg 1) to correct for the bias in the gradients with bilevel optimization\\nWe summarize this training algorithm in Alg 1, but will include it in the text as well.\\n\\n(2) The full derivation of the expressions in Section 3 are too long to fit in the body of an already-tight paper. We provide a step-by-step derivation of every relevant expression in Section 3 in the supplementary material in Section A, and we provide references for all other derivations.\\n\\nAs for constructing an explicitly orthonormal basis without p(x) being known - we feel it is a self-evident statement that one cannot construct an orthonormal basis in closed form with respect to an inner product which is not known. We have rewritten this and the following statement to make it more concrete. If there are any other places in the paper which similarly could be improved, please let us know.\\n\\n(3) We\\u2019re not entirely sure how to respond to this. We feel we\\u2019ve described the technical contribution of the paper quite well. Perhaps you could give a more concrete example of how you feel we could improve or what you think is missing?\", \"and_to_your_specific_question\": \"yes, Omega is the support of x. It can be both continuous (as in the hydrogen atom example, where Omega is R^2) or discrete, as in any case with graph-structured data. The only requirement on Omega is that it is a measurable space. We have clarified this in the paper.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your kind words and comments, and we are gratified that you recognize the high potential for practical impact of our work. To the specific criticisms and suggestions you mention:\", \"accuracy\": \"We believe that any significant inaccuracy in the shape of the learned eigenfunctions for the hydrogen atom would be reflected in the energy. For instance, if the learned solution was not smooth enough, the Laplacian term in the Hamiltonian would be too high, and this would have a noticeable effect on the loss. The fact that the loss converges to close to the known closed form solution gives us good confidence in the accuracy of the method. We have also done follow-up experiments since the initial submission that achieve even higher accuracy on the energy, but feel that these additional experiments are outside the scope of this paper .\", \"clarity\": \"We apologize if any details were unclear. We saved the details of the network architecture for the supplementary materials, but if anything in Section C was unclear or insufficient please let us know and we\\u2019ll correct it. We are also happy to move the network architecture details into the main body of the paper. This would put the paper over 8 pages, but if you feel it would significantly improve the quality we\\u2019ll go ahead and do it. Since the focus of this paper was on the loss function and optimization procedure for spectral inference networks, we put less emphasis on choosing a network architecture, and made mostly conventional choices in our network design.\", \"scaling\": \"You are correct to point out that computing the Jacobian of the covariance of the features is the bottleneck of this approach. We believe that any strong paper should be as honest about the weaknesses of the proposed approach as the strengths, and we are sure to point out at the end of section 3.4 exactly what you mentioned. Out of all the ways we tried to approximate the gradient of the Rayleigh quotient in time for the submission deadline, using a moving average of the Jacobian was the stablest and fastest to converge. Since submission, we have done significant work on alternatives that scale better, and believe we have some promising candidates, but feel that this is best left to a future publication, since it constitutes a significant body of additional material.\", \"local_minima\": \"We would love to be able to achieve a global minimum - but the fact that we are reaching a local rather than global minimum is entirely because we use neural networks as a function approximator. If we could guarantee global convergence of neural networks on any problem it would be a much bigger deal than just improving spectral learning!\", \"ambiguity\": \"You are correct that there is an ambiguity in the eigenfunctions *if there are degenerate eigenfunctions*, that is, if there are two or more eigenfunctions with identical eigenvalues. This is in fact the case in the 2D hydrogen atom. The degenerate solutions take the form of different spherical harmonics, so as long as the solutions we find look recognizably like spherical harmonics (i.e. rotated versions of the solutions found in Fig 1a), and the energies are correct, then we are confident in our results.\", \"smoothness\": \"What is going on in Fig 2a is not a perfect visualization of the eigenfunctions. 
The true underlying state space for the video is 12 dimensional (2 position, 2 momentum, 3 balls) with some symmetry due to the indistinguishability of the balls. We are taking that 12 dimensional state space and projecting it down to 2 dimensions, as well as mixing different dimensions together, because each frame of the video contributes 3 points to the visualization (one for each ball). Plotting the position of all 3 balls on the same 2D space is most likely what gives the figures the \\u201cspeckled\\u201d look. We will include this additional explanation in the paper.\", \"computational_complexity\": \"We already briefly touch on the complexity of computing the Jacobian in section 3.4, and we believe that the convergence results for two-time-scale optimization are not so different from the standard results in stochastic optimization (i.e. 1/sqrt(T) convergence rate). However we can add more detail in the paper explaining this.\\n\\nOnce again, we\\u2019re very glad that you enjoyed the paper, and thank you for the comments on how to improve it even more!\"}",
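For readers tracking the Jacobian bottleneck in this exchange: the objective being differentiated is, schematically, a generalized Rayleigh quotient; the notation below is ours for illustration and may differ from the paper's Eq. 6.

```latex
% Schematic spectral objective (illustrative notation, not necessarily Eq. 6):
% maximize a generalized Rayleigh quotient over network features u_\theta.
\max_\theta \; \operatorname{tr}\left( \Sigma^{-1} \Pi \right), \qquad
\Sigma = \mathbb{E}_x\!\left[ u_\theta(x)\, u_\theta(x)^\top \right], \quad
\Pi = \mathbb{E}_x\!\left[ u_\theta(x)\, (\mathcal{K} u_\theta)(x)^\top \right].
```

The gradient of this quotient involves the Jacobian of the covariance, d(Sigma)/d(theta), which is exactly the term whose moving-average approximation is discussed above.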
"{\"title\": \"Thank you for your comments\", \"comment\": \"We thank the reviewer for their comments. To the first comment, on the distinction between section 3.1 and 3.2 - section 3.1 deals only with the case of functions on finite, discrete spaces (that is, vectors). This is to ease the reader into the discussion of linear operators and eigenfunctions from the more familiar point of view of matrices and eigenvectors. Once the reader understands how eigenvectors can be derived as the solution to an optimization problem, the extension to arbitrary function spaces should be easier.\\n\\nThe discussion in section 3.2 pertains to *all* measurable spaces: discrete, continuous, compact or unbounded. We have updated the paper to clarify this. If we were writing the integral for <f, g> in section 3.2 in more formal measure-theoretic notation, we would express it as an integral over some measure d\\\\mu instead of p(x)dx. This measure could include the uniform measure, which generalizes uniform distributions to spaces with infinite total measure. In this case p(x) would just be a constant. We tried to highlight this point without burdening the reader with too much formal measure theory where we say \\u201cIn theory this could be an improper density, such as the uniform distribution over R^n\\u201d.\\n\\nThe section on the graph Laplacian and Laplace-Beltrami operator is not really specific to section 3.2 or functions on continuous spaces. Rather we wanted to shift the paper to a more general discussion of the types of kernels that might appear in different spectral problems. We can break this off into a separate section 3.3 to avoid confusion, but that will put the paper over 8 pages. If you think such a change is worth the paper going longer and would be the deciding factor in raising your score, we will happily do it.\\n\\nThe Laplacian (either graph or manifold) is really a specific choice of kernel k(x, x\\u2019), which can then be plugged in with any p(x) to define a linear operator. Depending on the application, common choices of p(x) would be the data distribution (for machine learning application) or the uniform distribution (i.e. a constant). When we say the Laplacian in continuous space is a local operator, we mean that the value of K[f](x) depends solely on the value of f and its first and second order derivative at x. Again, we will rewrite this section to make this more clear.\\n\\nTo the point on comparisons against the state of the art - the aim of this paper was to show we could compute meaningful spectral decompositions *at a scale larger than any existing method*. We don't provide quantitative comparisons to other methods because our method scales to solve problems that are orders of magnitude more difficult than the most difficult problems that standard methods can address. As we state in paragraph two of the introduction, using an existing method like the Nystrom approximation for generalization \\u201cis not practical for large datasets, and some form of function approximation is necessary.\\u201d For a sense of the scale at which exact spectral methods become impractical, please take a look at Perozzi, Al-Rfou and Skiena, KDD 2014. There they are unable to run spectral clustering on the YouTube dataset, which consists of a graph with over 1 million nodes and nearly 3 million edges - a scale which SpIN can easily handle. 
Neither their proposed algorithm, nor any of the other scalable baselines, are a true spectral method.\\n\\nSpectral inference networks are especially powerful in the case of large datasets *and high dimensional data*, as any neural network architecture can be applied as an eigenfunction approximator. This was why we chose the example of videos of bouncing balls as the second experiment. Existing spectral methods do not scale to this type of data. We also compare our algorithm against an approach to approximate spectral learning used by Machado et al in ICLR 2017 in section C.3 of the supplementary material. In that paper they claimed to learn eigenfunctions of the successor operator in reinforcement learning environments. However the approximate eigenfunctions they learn have no clear ordering as you would expect from true eigenfunctions. By contrast, the eigenfunctions learned by spectral inference networks are clearly more meaningful and learn features that are more distinguishable, even by the naked eye. It is known that the eigenfunctions of the successor operator can be useful for reinforcement learning tasks.\"}",
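The kernel view of the Laplacian sketched in this reply can be made concrete with a Monte Carlo estimate over graph edges; this is a generic illustration with illustrative names, not the paper's code.

```python
# Sketch: Monte Carlo estimate of the graph-Laplacian quadratic form
# E_{(x, x')}[(f(x) - f(x'))^2] over sampled edges; its minimizers under
# orthogonality constraints are the graph Laplacian eigenvectors.
import numpy as np

def laplacian_quadratic_form(f, edges, n_samples=1024):
    """edges: (E, 2) int array of node pairs; f: vectorized node function."""
    idx = np.random.randint(len(edges), size=n_samples)
    x, x_prime = edges[idx, 0], edges[idx, 1]
    return np.mean((f(x) - f(x_prime)) ** 2)
```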
"{\"title\": \"linear algebra with deep learning framework (Tensorflow)\", \"review\": \"In this paper, the authors propose to use a deep learning framework to solve a problem in linear algebra, namely the computation of the largest eigenvectors.\\n\\nI am not sure tu understand the difference between the framework described in sections 3.1 and 3.2. What makes section 3.2 more general than 3.1?\\nIn particular, the graph example in section 3.2 with the graph Laplacian seems to fit in the framework of section 3.1. What is the probability p(x) in this example? Similarly for the Laplace-Beltrami operator what is the p(x)? I do not understand the sentence: 'Since these are purely local operators, we can replace the double expectation over x and x' with a single expectation.'\\n\\nThe experiments section is clearly not sufficient as no comparison with existing algorithms is provided. The task studied in this paper is a standard task in linear algebra and spectral learning. What is the advantage of the algorithm proposed in this paper compared to existing solutions? The authors provide no theoretical guarantee (like rate of convergence...) and do not compare empirically their algorithm to others.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"large-scale spectral decomposition - high practical value\", \"review\": [\"Spectral Inference Networks, Unifying Deep and Spectral Learning\", \"This paper presents a framework to learn eigenfunctions via a stochastic process. They are exploited in an unsupervised setting to learn representation of video data. Computing eigenfunctions can be computationally challenging in large-scale context. This paper proposes to tackle this challenge b y approximating then using a two-phase stochastic optimization process. The fundamental motivation is to merge approaches from spectral decomposition via stochastic approximation and learning an implicit representation. This is achievement with a clever use of masked gradients, Cholesky decomposition and explicit orthogonalization of resulting eigenvectors. A bilevel optimization process finds local minima as approximate eigenfunction, mimicking Borkar\\u201997. Results are shown to correctly recover known 2d- schrodinger eigenfunctions and interpretable latent representation a video dataset, with a practical promising results using the arcade learning environment.\", \"Positive\", \"Computation of eigenfunctions on very large settings, without relying on Nystrom approximation\", \"Unifying spectral decomposition within a neural net framework\", \"Specific comments\", \"Accuracy issue - Shape of eigenfunctions are said to be correctly recovered, but no words indicates their accuracy. If eigenfunction values are wrong, this may be critical to the generalization of the method.\", \"Clarity could be improved in the neural network implementation, what is exactly done and why, when building the network\", \"Algorithm requires computing the jacobian of the covariance, which can be large and computationally expensive - how to scale it to large settings?\", \"Fundamentally, a local minimum is reached - any future work on tackling a global solution? Perhaps by exploring varying learning rates?\", \"Practically, eigenfunction have an ambiguity to rotation - how is this enforced and checked during validation? (e.g., rotating eigenfunctions in Fig 1c)\", \"Eigenfunction of transition matrix should, if not mistaken, be smooth, whereas Fig 2a shows granularity in the eigenfunctions values (noisy red-blue maps) - Is this regularization issue, and can this be explicitly correctly?\", \"Perhaps a word on computational time/complexity?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good work, but bad presentation\", \"review\": \"In this paper, the authors proposed a unified framework which computes spectral decompositions by stochastic gradient descent. This allows learning eigenfunctions over high-dimensional spaces and generating to new data without Nystrom approximation. From technical perspective, the paper is good. Nevertheless, I feel the paper is quite weak from the perspective of presentation. There are a couple of aspects the presentation can be improved from.\\n\\n(1) I feel the authors should formally define what a Spectral inference network is, especially what the network is composed of, what are the nodes, what are the edges, and the semantics of the network and what's motivation of this type of network.\\n\\n(2) In Section 3, the paper derives a sequence of formulas, and many of the relevant results were given without being proven or a reference. Although I know the results are most likely to be correct, it does not hurt to make them rigorous. There are also places in the paper, the claim or statement is inclusive. For example, in the end of Section 2.3, \\\"if the distribution p(x) is unknown, then constructing an explicitly orthonormal function basis may not be possible\\\". I feel the authors should avoid this type of handwaving claims. \\n\\n(3) The authors may consider summarize all the technical contribution in the paper.\", \"one_specific_question\": \"What's Omega above formula (6)? Is it the support of x? Is it continuous or discrete? Above formula (8), the authors said \\\"If omega is a graph\\\". It is a little bit confusing there.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Hkl5aoR5tm | On Self Modulation for Generative Adversarial Networks | [
"Ting Chen",
"Mario Lucic",
"Neil Houlsby",
"Sylvain Gelly"
] | Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN. | [
"unsupervised learning",
"generative adversarial networks",
"deep generative modelling"
] | https://openreview.net/pdf?id=Hkl5aoR5tm | https://openreview.net/forum?id=Hkl5aoR5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SklsXRB1gV",
"rkef957sRm",
"r1gBxvdwpm",
"Bkgu7f_Dp7",
"rylaP-uP6m",
"HJgjezT1Tm",
"Byl04qxA2X",
"rylkjAtu2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544670754595,
1543350922267,
1542059757423,
1542058527966,
1542058341448,
1541554674930,
1541438006422,
1541082775098
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper828/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper828/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper828/Authors"
],
[
"ICLR.cc/2019/Conference/Paper828/Authors"
],
[
"ICLR.cc/2019/Conference/Paper828/Authors"
],
[
"ICLR.cc/2019/Conference/Paper828/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper828/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper828/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This manuscript proposes an architectural improvement for generative adversarial network that allows the intermediate layers of a generator to be modulated by the input noise vector using conditional batch normalization. The reviewers find the paper simple and well-supported by extensive experimental results. There were some concerns about the impact of such an empirical study. However, the strength and simplicity of the technique means that the method could be of practical interest to the ICLR community.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}",
"{\"title\": \"In disagreement\", \"comment\": \"It appears that Reviewer 2 and I disagree with Reviewer 3 in terms of submission rating. I feel strongly about the submission being publication-worthy, and I would like to challenge Reviewer 2\\u2019s score.\\n\\nThere is ample room in a research conference for empirical contributions, provided the experimentation is carried out rigorously. To me, the bar for acceptance for this type of paper is 1) whether or not the results can be expected to generalize outside of the reported experimental setting, 2) whether the proposed approach has the potential to have an impact in the research community, and 3) whether the approach and results are communicated clearly to the target audience. In this instance, criteria 1) and 3) are easily met in my opinion: the breadth of model architectures, regularization techniques, and datasets used for evaluation makes me confident that the observed performance improvements are not a happy accident, and the paper writing was straightforward and easy to follow. For criterion 2), I am of the opinion that although the proposed self-modulation mechanism isn\\u2019t likely to drastically change the way we train and think of GANs, it is nevertheless a good addition to the set of architectural features that could facilitate GAN training.\\n\\nI feel that asking for a fundamental explanation of how self-modulation helps improve performance is an unreasonable bar to set for acceptance. Plenty of architectural features like dropout or batch normalization were poorly understood at the time they were first presented, yet in retrospect had a significant impact in the research community. Likewise, asking for the proposed approach to show an improvement for more than \\u201conly\\u201d 86% of the evaluation settings is unreasonably strict: I don\\u2019t find it surprising that there are instances in which self-modulation does not improve performance, and given these odds I would certainly try the approach on a new dataset and architecture combination.\"}",
"{\"title\": \"More fundamental understanding can happen asynchronously, while we presented a careful empirical evaluation\", \"comment\": \"We would like to thank the reviewer for the time and useful feedback. Our response is given below.\\n\\n- The paper is mainly empirical, although the authors compute two diagnostic statistics to show the effect of the self-modulation method. It is still not clear why self-modulation stabilizes the generator towards small conditioning values.\", \"we_consider_self_modulation_as_an_architectural_change_in_the_line_of_changes_such_as_residual_connections_or_gating\": \"simple, yet widely applicable and robust. As a first step, we provide a careful empirical evaluation of its benefits. While we have provided some diagnostics statistics, understanding deeply why this method helps will fuel interesting future research. Similar to residual connections, gating, dropout, and many other recent advances, more fundamental understanding will happen asynchronously and should not gate its adoption and usefulness for the community.\\n\\n- It should be pointed out that the D in the hinge loss represents a neural network output without range restriction, while the D in the non-saturating loss represents sigmoid output, limiting to take in [0,1]. It seems that the authors are not aware of this difference.\\n\\nWe are aware of this key difference and we apply the sigmoid function to scale the output of the discriminator to the [0,1] range for the non-saturating loss. Thanks for carefully reading our manuscript and noticing this typo which we will correct. \\n\\n- In addition to report the median scores, standard deviations should be reported.\\n\\nWe omitted standard errors simply to reduce clutter. The standard error of the median is within 3% in the majority of the settings and is presented in both Tables 5 and Table 6.\"}",
"{\"title\": \"Our response\", \"comment\": \"We would like to thank the reviewer for the time and useful feedback. Our response is given below.\\n\\n- Interpretation of self-modulation model performs worse in the combination of spectral normalization and the SNDC architecture.\\n\\nOverall, self-modulation appears to yield the most consistent improvement for the deeper ResNet architecture, than the shallower, more poorly performing, SNDC architecture. Self-modulation doesn\\u2019t help in the SNDC/Spectral Norm setting on the Bedroom data, where the SNDC architecture appears to perform very poorly compared to ResNet. For the other three datasets, self-modulation helps in this setting though.\\n\\n- The ablation study shows that the impact is highest when modulation is applied to the last layer (if only one layer is modulated). It seems modulation on layer 4 comes in as a close second. I am curious about why that might be.\\n\\nFigure 4 in the Appendix contains the equivalent of Figure 2(c) for all datasets. Considering all datasets: (1) Adding self-modulation to all layers performs best. (2) In terms of median performance, adding it to the layer farthest from the input is the most effective. We believe that the apparent significance of layer 4 in Figure 2(c) is statistical noise.\\n\\n- I would like to see some more interpretation on why this method works.\", \"we_consider_self_modulation_as_an_architectural_change_in_the_line_of_changes_such_as_residual_connections_or_gating\": \"simple, yet widely applicable and robust. As a first step, we provide a careful empirical evaluation of its benefits. While we have provided some diagnostics statistics, understanding deeply why this method helps will fuel interesting future research. Similar to residual connections, gating, dropout, and many other recent advances, more fundamental understanding will happen asynchronously and should not gate its adoption and usefulness for the community.\\n\\n- Did the authors inspect generated samples of the baseline and the proposed method? Is there a notable qualitative difference?\\n\\nA 10% change in FID is visually noticeable. However, we note that FID rewards both improvements in sample quality (precision) and mode coverage (recall), as discussed in Sec 5 of [1]. While we can easily assess the former by visual inspection, the latter is extremely challenging. Therefore, an improvement in FID may not always be easily visible, but may indicate a better generative model of the data.\\n\\n[1] https://arxiv.org/abs/1806.00035\\n\\n- Overall, the idea is simple, the explanation is clear and experimentation is extensive. I would like to see more commentary on why this method might have long-term impact (or not).\\n\\nWe view this contribution as a simple yet generic architecture modification which leads to performance improvements. Similarly to residual connections, we would like to see it used in GAN generator architectures, and more generally in decoder architectures in the long term.\"}",
"{\"title\": \"Our response\", \"comment\": \"We would like to thank the reviewer for the time and useful feedback. Our response is given below.\\n\\n- Relationship to z-conditioning strategy in BigGAN.\\n\\nThanks for pointing out the connection to this concurrent submission. We will discuss the connections in the related work section. The main differences are as follows:\\n1. BigGAN performs conditional generation, whilst we primarily focus on unconditional generation. BigGAN splits the latent vector z and concatenates it with the label embedding, whereas we transform z using a small MLP per layer, which is arguably more powerful. In the conditional case, we apply both additive and multiplicative interaction between the label and z, instead of concatenation as in BigGAN. \\n2. Overall BigGAN focusses on scalability to demonstrate that one can train an impressive model for conditional generation. Instead, we focus on a single idea, and show that it can be applied very broadly. We provide a thorough empirical evaluation across critical design decisions in GANs and demonstrate that it is a robust and practically useful contribution.\\n\\n- Propagation of signal and ResNets.\\n\\nIndeed, ResNets provide a skip connection which helps signal propagation. Arguably, self-modulation has a similar effect. However, there are critical differences in these mechanisms which may explain the benefits of self-modulation in a resnet architecture:\\n1. Self-modulation applies a channel-wise additive and multiplicative operation to each layer. In contrast, residual connections perform only an element-wise addition in the same spatial locality. As a result, channel-wise modulation allows trainable re-weighting of all feature maps, which is not the case for classic residual connections. \\n2. The ResNet skip-connection is either an identity function or a learnable 1x1 convolution, both of which are linear. In self-modulation, the connection from z to each layer is a learnable non-linear function (MLP).\\n\\n- Reading Figure 2b, one could be tempted to draw a correlation between the complexity of the dataset and the gains achieved by self-modulation over the baseline (e.g., Bedroom shows less difference between the two approaches than ImageNet). Do the authors agree with that?\\n\\nYes, we notice more improvements on the harder, more diverse datasets. These datasets also have more headroom for improvement.\"}",
"{\"title\": \"Simple idea, shown to work in a large number of settings\", \"review\": \"Summary:\\nThe manuscript proposes a modification of generators in GANs which improves performance under two popular metrics for multiple architectures, loss, benchmarks, regularizers, and hyperparameter settings. Using the conditional batch normalization mechanism, the input noise vector is allowed to modulate layers of the generator. As this modulation only depends on the noise vector, this technique does not require additional annotations. In addition to the extensive experimentation on different settings showing performance improvements, the authors also present an ablation study, that shows the impact of the method when applied to different layers.\", \"strengths\": [\"The idea is simple. The experimentation is extensive and results are convincing in that they show a clear improvement in performance using the method in a large majority of settings.\", \"I also like the ablation study showing the impact of the method applied at different layers.\", \"Requests for clarification/additional information:\", \"I might have missed that, but are the authors offering an interpretation of their observation that the performance of the self-modulation model performs worse in the combination of spectral normalization and the SNDC architecture?\", \"The ablation study shows that the impact is highest when modulation is applied to the last layer (if only one layer is modulated). It seems modulation on layer 4 comes in as a close second. I am curious about why that might be.\", \"I would like to see some more interpretation on why this method works.\", \"Did the authors inspect generated samples of the baseline and the proposed method? Is there a notable qualitative difference?\", \"Overall, the idea is simple, the explanation is clear and experimentation is extensive. I would like to see more commentary on why this method might have long-term impact (or not).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper is mainly empirical\", \"review\": \"This paper proposes a Self-Modulation framework for the generator network in GANs, where middle layers are directly modulated as a function of the generator input z.\\nSpecifically, the method is derived via batch normalization (BN), i.e. the learnable scale and shift parameters in BN are assumed to depend on z, through a small one-hidden layer MLP. This idea is something new, although quite straight-forward.\\nExtensive experiments with varying losses, architectures, hyperparameter settings are conducted to show self-modulation improves baseline GAN performance.\\n\\nThe paper is mainly empirical, although the authors compute two diagnostic statistics to show the effect of the self-modulation method. It is still not clear why self-modulation stabilizes the generator towards small conditioning values.\\n\\nThe paper presents two loss functions at the beginning of section 3.1 - the non-saturating loss and the hinge loss. It should be pointed out that the D in the hinge loss represents a neural network output without range restriction, while the D in the non-saturating loss represents sigmoid output, limiting to take in [0,1]. It seems that the authors are not aware of this difference.\\n\\nIn addition to report the median scores, standard deviations should be reported.\\n\\n=========== comments after reading response ===========\\n\\nI do not see in the updated paper that this typo (in differentiating D in hinge loss and non-saturating loss) is corrected. \\n\\nThough fundamental understanding can happen asynchronously, I reserve my concern that such empirical method is not substantial enough to motivate acceptance in ICLR, especially considering that in (only) 124/144 (86%) of the studied settings, the results are improved. And there is no analysis of the failure settings.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"The paper examines an architectural feature in GAN generators -- self-modulation -- and presents empirical evidence supporting the claim that it helps improve modeling performance. The self-modulation mechanism itself is implemented via FiLM layers applied to all convolutional blocks in the generator and whose scaling and shifting parameters are predicted as a function of the noise vector z. Performance is measured in terms of Fr\\u00e9chet Inception Distance (FID) for models trained with and without self-modulation on a fairly comprehensive range of model architectures (DCGAN-based, ResNet-based), discriminator regularization techniques (gradient penalty, spectral normalization), and datasets (CIFAR10, CelebA-HQ, LSUN-Bedroom, ImageNet). The takeaway is that self-modulation is an architectural feature that helps improve modeling performance by a significant margin in most settings. An ablation study is also performed on the location where self-modulation is applied, showing that it is beneficial across all locations but has more impact towards the later layers of the generator.\", \"i_am_overall_positive_about_the_paper\": [\"the proposed idea is simple, but is well-explained and backed by rigorous evaluation. Here are the questions I would like the authors to discuss further:\", \"The proposed approach is a fairly specific form of self-modulation. In general, I think of self-modulation as a way for the network to interact with itself, which can be a local interaction, like for squeeze-and-excitation blocks. In the case of this paper, the self-interaction allows the noise vector z to interact with various intermediate features across the generation process, which for me appears to be different than allowing intermediate features to interact with themselves. This form of noise injection at various levels of the generator is also close in spirit to what BigGAN employs, except that in the case of BigGAN different parts of the noise vector are used to influence different parts of the generator. Can you clarify how you view the relationship between the approaches mentioned above?\", \"It\\u2019s interesting to me that the ResNet architecture performs better with self-modulation in all settings, considering that one possible explanation for why self-modulation is helpful is that it allows the \\u201cinformation\\u201d contained in the noise vector to better propagate to and influence different parts of the generator. ResNets also have this ability to \\u201cpropagate\\u201d the noise signal more easily, but it appears that having a self-modulation mechanism on top of that is still beneficial. I\\u2019m curious to hear the authors\\u2019 thoughts in this.\", \"Reading Figure 2b, one could be tempted to draw a correlation between the complexity of the dataset and the gains achieved by self-modulation over the baseline (e.g., Bedroom shows less difference between the two approaches than ImageNet). Do the authors agree with that?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJlc6iA5YX | ACE: Artificial Checkerboard Enhancer to Induce and Evade Adversarial Attacks | [
"Jisung Hwang",
"Younghoon Kim",
"Sanghyuk Chun",
"Jaejun Yoo",
"Ji-Hoon Kim",
"Dongyoon Han",
"Jung-Woo Ha"
] | The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field. The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated. In this paper, we revisit the checkerboard artifacts in the gradient space, which turn out to be the weak point of a network architecture. We explore the image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts. We introduce our defense module, dubbed Artificial Checkerboard Enhancer (ACE), which induces adversarial attacks on designated pixels. This enables the model to deflect attacks by shifting only a single pixel in the image with a remarkable defense rate. We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods. Furthermore, we show that ACE is even applicable to large-scale datasets including the ImageNet dataset and can be easily transferred to various pretrained networks. | [
"Adversarial Examples",
"Neural Network Security",
"Deep Neural Network",
"Checkerboard Artifact"
] | https://openreview.net/pdf?id=BJlc6iA5YX | https://openreview.net/forum?id=BJlc6iA5YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJgdAnPBg4",
"SylhtNBFkN",
"rylYb5iqCQ",
"SyePSFiSAQ",
"BkgTgd4Tam",
"HJl5PX0-T7",
"rkeyuETr3Q",
"r1ec3QQz3Q",
"S1gYmGQAsX",
"r1gqDA65jX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review"
],
"note_created": [
1545071823752,
1544275075698,
1543318017112,
1542990143267,
1542436853414,
1541690210452,
1540899942827,
1540662194399,
1540399648842,
1540181601911
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper827/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper827/Authors"
],
[
"ICLR.cc/2019/Conference/Paper827/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper827/Authors"
],
[
"ICLR.cc/2019/Conference/Paper827/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper827/Authors"
],
[
"ICLR.cc/2019/Conference/Paper827/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper827/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper827/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers have agreed this work is not ready for publication at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"Thank you for the suggestion\", \"comment\": \"Finding the constraint on which our model is robust is crucial. Thank you for the suggestion.\\n\\nFor now, from our experiments we find that our model is robust against L0-based attacks. Our method works well for attacks that are not bounded within an epsilon ball, but are bounded in terms of the number of pixels perturbed.\\n\\nWe will come up with a more general measure of robustness to backup the performance of our model.\"}",
"{\"title\": \"Rating unchanged\", \"comment\": \"I thank the authors for their detailed response. Unfortunately my assessment remains unchanged:\\n\\nRegarding robustness versus successful defense. Any attack should obey a set of constraints (which define the sense in which the attack is adversarial). This could be a bounded norm, or the set of rotations of the image, or a number of pixels that can be changed, etc. A meaningful defense should be robust, in that all points which obey the constraints are correctly classified. If the defense is not robust, then its success represents the limitations of the attacks used. (one counter-case would be if you could prove that the remaining adversarial points inside the constraints are computationally hard to find).\\n\\nI would change my assessment, if the authors provided convincing evidence that the defense is robust. I acknowledge that the authors considered an attacker which is aware of the defense, but it is not clear to me that this attacker successfully exploited this information.\"}",
"{\"title\": \"Our response to reviewer4\", \"comment\": \"Thank you for reviewing our paper with valuable comments. We will revise the terminology to be more precise in the paper as you suggested. We will present the definition of key ideas as a formula, and the specifications of experiments will be documented in our code for the ease of reproduction.\\n\\nC1) The meaning of \\u201cthey were reproduced by [the authors themselves]\\u201d\\n-> We intended to show clearly that all the experiments are reproduced by ourselves. This is just to stress that our results are reproducible and to present that we will release our code so that everyone can easily reproduce our results.\\n\\nC2) How shifting the pixel makes it harder to attack in the adaptive case.\\n-> In Appendix H, the ACE module \\u201crandomly\\u201d shifts the pixel so that the adversary knows the distribution of shift, but does not know the exact direction for a single image. For attack algorithms bounded by l_0, l_1, or l_2-norm (i.e., except for the l_inf-norm-based attacks), this random shift averages out the perturbation within the neighborhood of each pixel. Then, the intensity of perturbation towards the decision boundary is reduced, therefore we have the increased probability of classifying the attacked image correctly, which is identical to increasing the defense rate.\\n\\nHowever in the l_inf-norm case, random shift does not necessarily reduce the intensity of perturbation imposed on each pixel. In this case, shifting the pixel may not have any influence on defense, thus the attack should be mitigated by the use of adversarial training. Therefore, we performed the adversarial training by combining our pixel shifting method as shown in Appendix H..\\n\\nC3) Why would ACE enhance the checkerboard pattern?\\n-> This is because the ACE module makes our model learn through the checkerboard pattern with respect to our intensity parameter lambda. Let us assume that the ACE module has the autoencoder structure stated in the second paragraph in Section 5. If \\u03bb=0, the gradient clearly has no checkerboard pattern as shown in Figure 3(c). If \\u03bb=1, the gradient must be distributed in a checkerboard pattern as Figure 3(b). This is because these pixels are the only pixels that are connected to the output. For \\u03bb \\u2208 (0,1), the gradient can be interpreted as the interpolation between the case of \\u03bb=0 and \\u03bb=1. As \\u03bb approaches 1 from 0, the checkerboard pattern becomes more clear. Since the original network without the ACE module is the same as when \\u03bb=0 with the ACE module, using the ACE module with \\u03bb > 0 will enhance the checkerboard pattern in gradient. Please refer to Section 4.1 for details about the structure of ACE.\\n\\nC4) Why wouldn't an adversary remove the padded pixels before generating the attack?\\n-> Yes it would. \\u201cThe adversary in the adaptive case\\u201d in Appendix H does try to remove the padded pixels. We have set the direction of shift to random in order to avoid this.\"}",
"{\"title\": \"Needs further clarifications\", \"review\": \"I have to emphasize first that this is not my area of expertise so I am going to review it as an outsider.\\n\\nThe authors argue that the checkerboard phenomenon can be exploited to make neural networks robust against adversarial attacks. They propose to enhance the checkerboard pattern by first adding a layer, called Artificial Checkerboard Enhancer (ACE), and then evading the attacks by zero-padding the image. The authors\\u2019 argument is that enhancing the checkerboard phenomenon will make attacks more targeted towards certain pixels, which can be evaded by shifting the image. \\n\\nOverall, I think the paper is difficult to read and is not suitable for publication. In terms of clarity, the authors do not use precise terminology that would allow the reader to reproduce their work. They allude to vague statements. For example, they introduce two KEY terminologies that are repeatedly used throughout the paper but are not properly defined (see for instance the \\u201cdefinition\\u201d of \\u201cGradient Overlap\\u201d in Appendix C). \\n\\nIn addition, in terms of the experiements, it certainly does not help to say that they were \\u201creproduced by [the authors themselves]\\u201d. What does this mean?\\n \\nIn terms of originality, I agree with the first reviewer that the defense strategy seems to be easily breakable. The authors propose that they enhance the checkerboard phenomenon so that adversarial attacks become easier to implement by targeting individual pixels (the pixels in the checkerboard artifacts). Then, they pad the image with zero pixels to shift it to the right. I don\\u2019t understand how shifting the pixels would make it harder to attack (especially when the adversary knows the system).\", \"it_would_be_really_appreciated_if_the_authors_elaborate_on_the_following_points_to_help_me_understand_their_contribution\": \"- The entire discussion about ACE in Section 4.1 is ad-hoc and not well-motivated. Why would ACE enhance the checkerboard patterm? Can you please explain why it works? This is not mentioned anywhere in the paper. The experiment in Section 4.2 helps a bit but it does not answer this question. \\n\\n- What wouldn't an adversary remove the padded pixels before generating the attack? In defense strategies, it is often assumed that the adversary knows the system. Can you please explain why that is not possible in this setting? \\n\\n- \\n\\nIn Figure 4, the axes are \\\\bar i and \\\\bar j in the main body, but they are x and y in the figure. Please use the same notation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Our response to reviewer3\", \"comment\": \"We thank the detailed review and valuable comments. First of all, we believe that our defense will not be easy to break even if the adversary knows in advance about our method. We hope that this response can resolve all of your concerns. Plus, we will revise the overall paper organization for better clarity.\\n\\nBefore responding to the comments (from C1 to C6 below), we first point out the relationship between building a \\u201crobust model\\u201d and creating a \\u201csuccessful defense method\\u201d. The Euclidean distance between the fooled images and the originals (in C2 below), and the attack-ability inside an epsilon ball (in C6 below) are good measures for model robustness in metric spaces with the l_2-norm and its equivalent. The attack methods that are formulated within the l_2-equivalent norms, such as Carlini&Wagner and PGD, can be defended if a model is robust concerning the measures above. However, for the attacks including OnePixel and JSMA, which are not formulated within the l_2-equivalent, the model can be fooled even when the model is robust concerning those measures. This is due to the fact that the constraint these attacks utilize cannot be bounded by the l_2-norm. Therefore, a \\u201crobust model\\u201d is robust to l_2-equivalent attacks and can be considered as a subset of \\u201csuccessful defense methods\\u201d. This can be demonstrated by evaluating the classification accuracy of the models adversarially-trained with PGD, when attacking with an l_0-norm-based attack.\\n\\n\\nC1) The attacks considered did not uncover the defense strategy.\\n-> At the beginning of Section 5, we propose three types of threat models for attack algorithms. Among those attack scenarios, we considered the \\u201cwhite-box scenario\\u201d where the attacks uncover our defense strategy. To show that our method is not vulnerable in the scenario, we conducted the experiment against the PGD attack method, which is the strongest attack, combined with adversarial training. As shown in Appendix H, our method can successfully defend the attacks, thus we can guarantee our method can successfully defend in the white-box scenario as well.\\n\\nC2) Evidence to suggest that the method can reduce the probability that a misclassified example lies close to the training or test examples was not presented.\\n-> Using the suggested measure concerning the probability change could be suitable to identify the l_2-equivalent-robustness of a model. However, as mentioned above, since the probability cannot reflect the robustness against attacks that are not formulated within the l_2-equivalent such as OnePixel and JSMA attacks, our method did not aim to reduce this probability.\\n\\nC3) The specific kinds of attacks the method was intended to defend against were unclear.\\n-> Our goal is to propose a method that can be utilized in both black-box and white-box attack scenarios as explicitly mentioned in Section 5. 
Specifically, we targeted three scenarios: 1) the vanilla attack scenario, the adversary can access the target model but not our proposed defense method, 2) the transfer attack scenario, the adversary generates adversarial perturbations from a source model, which is different from the target model, and finally 3) the adaptive attack scenario (white-box attack scenario), the adversary knows every aspect of the model and the defense method so that it can directly exploit our defense.\\n \\nC4) In section 2.2, key definitions are relegated to Appendix C\\n-> We will revise our paper for better readability as you suggested.\\n\\nC5) Could the authors provide a baseline with a random 30% of pixels?\\n-> Yes, we have been conducting an experiment based on the suggested setting. We will post the result as soon as possible. \\n\\nC6) Recommendation about including a measure of the attack-ability under random noise in the epsilon ball. \\n-> Thank you for the suggestion. It seems that we need to verify the robustness of our method against the attacks that do not utilize the gradients. Our method would be robust against such attacks because of the following three reasons. First, for a random noise in the epsilon ball, the probability that the noise can directly affect the labels is very low (according to Section 4 in [1]). Furthermore, the probability that it matches the \\u201crandom\\u201d direction of shift (i.e., in the adaptive scenario) as mentioned in Appendix H is low as well. Finally, the random noise itself can be reduced through the autoencoder structure of the ACE module (according to Section 5 in [2]). We will conduct experiments by following your suggestion to verify these claims and strengthen our proposed defense method. \\n\\n\\n[1] Xiaoyu Chao and Neil Zhenqiang Gong. \\u201cMitigating Evasion Attacks to Deep Neural Networks via Region-based Classification\\u201d. ACSAC 2017.\\n\\n[2] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. \\u201cExtracting and Composing Robust Features with Denoising Autoencoders\\u201d. ICML 2008.\"}",
"{\"title\": \"I worry that this defense will be easy to break\", \"review\": \"The authors propose that the \\\"checkerboard phenomenon\\\", whereby the gradients exhibit a repeating pattern over the pixel space, is a source of vulnerability to adversarial examples. They propose to first enhance this vulnerability for pre-trained models with a pre-conditioning layer, and then to evade it by zero padding the image to offset the pattern.\", \"clarity\": \"I found the work difficult to follow in places, and I felt that some material crucial to the paper was relegated to appendices.\", \"originality\": \"To my knowledge, the idea is original.\", \"quality_and_significance\": \"I feel the significance of this work is likely to be low. While the authors report positive \\\"defense\\\" results, I strongly suspect this is simply because the attacks considered did not uncover the defense strategy. I expect that this defense would be broken relatively quickly if the paper is accepted. The authors did not present evidence to suggest that their method reduces the probability that a misclassified example lies close to the training or test examples. As such, the defense seems to rely on the attacker being \\\"tricked\\\".\", \"specific_comments\": \"1) Throughout the paper, I was unclear what specific kinds of attacks the method was intended to defend against.\\n2) In section 2.2, key definitions are relegated to appendix C.\\n3) Section 3.2-> p = 0.3 is still 30% of the pixels. Could the authors provide a baseline with a random 30% of pixels?\\n4) Adaptive attack scenario: I would recommend that the authors also included a measure of the attack-ability under random noise in the epsilon ball. This would demonstrate whether the defense actually removes adversarial examples or just \\\"attacks the attackers\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Explanation on Figure 8\", \"comment\": \"Thank you for your interest in our work. How Figure 8 was generated has been explained in Section 4.2, but we would like to elaborate more on this. We will explain about the axes in the figure first, and then give some detailed explanation on why the classified label map has such shape.\\n\\nFor a classified label map (Figure 4 and 8), we have an input image x without any perturbation at (0, 0), where X-axis is the gradient direction vector of checkerboard artifacts (C), and Y-axis is the gradient direction vector of non-checkerboard artifacts (X\\\\C). Formally, this is expressed as \\\\hat{e}_C and \\\\hat{e}_{X\\\\C} in Section 4.2. Note that C is the checkerboard pixels with high absolute gradients designed by our ACE module with 1 x1 conv and stride 2 (Figure 3.(b) shows the pixels in C that absolute gradients turn out to be in general greater than those in X\\\\C). Then, each point in the classified label map is generated by perturbing the original image to the direction of (x, y) coordinates from -100 to 100 respectively. We have the classified label map after pixel perturbations using an example image of soup bowl as shown in Figure 4.\\n\\nIn Figure 4, we have empirically demonstrated the effect of this imbalance on gradients by creating a classified label map. This is an empirical backup of showing that our ACE module induces the vulnerable domain to the checkerboard. As \\\\lambda increases, our model becomes more vulnerable on the perturbation on C, while the opposite behavior is observed on the perturbation on X\\\\C. Therefore, the labels easily change on y-axis of classified label map with large lambda and the opposite on x-axis. You can think of this as an extension of Section 3 where we have shown that one-pixel attack success rates have checkerboard shape (Figure 2) due to the difference in the number of associated parameters on each pixel. We designed C to be the vulnerable domain which induces attacks on our known C. Thus, we successfully defended attacks with one-pixel padding.\\n\\nWe hope this explanation is clear enough.\"}",
"{\"comment\": \"Can you explain Figure 8? How are the X and Y axis selected?\\n\\nIt is confusing that traveling +/- 100 along the X axis does not change the class label, but traveling +/- along the Y axis quickly does.\", \"title\": \"Figure 8 question\"}",
"{\"title\": \"I am not right person to review\", \"review\": \"I am a researcher in NLP and know little about vision, so I cannot review this paper. I have contacted general chair about this situation.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}"
]
} |
|
Hklc6oAcFX | Co-manifold learning with missing data | [
"Gal Mishne",
"Eric C. Chi",
"Ronald R. Coifman"
] | Representation learning is typically applied to only one mode of a data matrix, either its rows or columns. Yet in many applications, there is an underlying geometry to both the rows and the columns. We propose utilizing this coupled structure to perform co-manifold learning: uncovering the underlying geometry of both the rows and the columns of a given matrix, where we focus on a missing data setting. Our unsupervised approach consists of three components. We first solve a family of optimization problems to estimate a complete matrix at multiple scales of smoothness. We then use this collection of smooth matrix estimates to compute pairwise distances on the rows and columns based on a new multi-scale metric that implicitly introduces a coupling between the rows and the columns. Finally, we construct row and column representations from these multi-scale metrics. We demonstrate that our approach outperforms competing methods in both data visualization and clustering. | [
"nonlinear dimensionality reduction",
"missing data",
"manifold learning",
"co-clustering",
"optimization"
] | https://openreview.net/pdf?id=Hklc6oAcFX | https://openreview.net/forum?id=Hklc6oAcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lFYThllE",
"ryg0-lr5R7",
"B1lGCkrqCX",
"Hke8xaN9AQ",
"ByxOX2N5CQ",
"HyetndV5R7",
"Skl0LdE5A7",
"r1esFmN50Q",
"BJelEZN5CX",
"Syl58eh927",
"rkgxURsK37",
"HkgYwBCu3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544764801297,
1543290885880,
1543290825792,
1543290094444,
1543289887542,
1543289008623,
1543288918052,
1543287682857,
1543287079530,
1541222482371,
1541156424471,
1541100897417
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper826/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/Authors"
],
[
"ICLR.cc/2019/Conference/Paper826/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper826/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper826/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This manuscript proposes a technique for co-manifold learning that exploits smoothness jointly over the rows and columns of the data. This is an important topic worth further study in the community.\\n\\nThe reviewers and AC opinions were mixed, with reviewers either being unconvinced about the novelty of the proposed work or expressing issues about the clarity of the presentation. Further improvement of the clarity -- particularly clarification of the learning goals, combined with additional convincing experiments would significantly strengthen this submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"response to AnonReviewer1 (part 4)\", \"comment\": \"12. For the Lung data, it does not look like the proposed algorithm is better than the other two. None of the algorithms seem to do great at capturing any of the underlying structure, especially in the rows. It also is not super clear that the normal patients are significantly further from the cancer patients.\\n\\nA. Corrected. We have added clarification as to which color in the plot corresponds to which sample type. in addition we added experimental results for an additional method that jointly takes into account graph structure on the rows and columns. This method similar to ours separates between the normal and the colon cancer patients. \\n\\nWith respect to the organization of the rows, the manifold-like structure revealed by all methods indeed serves to illustrate our motivation for this paper. Our assumption is that for certain types of data, assuming a bi-clustering model is too restrictive and results in breaking up smooth geometries into disjoint clusters that do not match the actual geometry of the data. For the genes, the structure implied by the data is not one of disjoint clusters. (We have witnessed this manifold structure of genes in other gene expression datasets and indeed this one of of the motivations for our approach). \\n\\n13. Additionally are the linkage results from figure 3 from one trial? Without multiple trials it is hard to argue that this not just trial noise.\\n\\nA. Corrected. We now write in the text \\\"where we averaged over 30 realizations of the data and the locations of the missing entries\\\".\\n\\n14. How big are N1 and N2 in the linkage simulations. The Lung dataset is not very large, and it seems like the proposed algorithm has large computation complexity (it is not clear). Will the algorithm work on even medium-large sized matrices ($10^4 x 10^4$)?\\n\\nA. Corrected. We now write in the text for linkage1 that $N1=190, N2=300$ and for linkage 2 $N1=200, N2=300$.\\nRegarding computational complexity, we now write in the conclusions \\\"We also intend to develop efficient solutions to accelerate the optimization in order to address large-scale datasets, in addition to the small-scale regime we demonstrate here. We note that the datasets considered here, while being small-scale in the observation domain are high-dimensional in the feature domain, which is a non-trivial setting, and indeed a challenge for supervised methods such as deep learning due to limited training data.\\\"\"}",
"{\"title\": \"response to AnonReviewer1 (part 3)\", \"comment\": \"9. The authors state the objective in 1 is not convex. Do they mean it is not strictly convex? In which case, by stationary points, they are specifically referring to local minima? Otherwise, what benefits does the MM algorithm have on an indefinite objective i.e. couldn't you end up converging to a saddle point or a local maxima instead of a local minima, as these are all fixed points.\\n\\nA. In a new paragraph on page 4, we have clarified that the objective is not convex when using $\\\\Omega$ that are concave and more importantly we explain why we choose to solve a non-convex optimization problem compared to a convex one. \\n\\nMinimizing a non-convex function, however, is generically hard and converging to a stationary point is typically the best one can hope for when attempting to minimize a non-convex function (There are rare exceptions, notably using the SVD to find the best, in Frobenius-nom, rank-k approximation of a matrix). Thus, not being able to guarantee convergence to a global minimizer is not a drawback that is unique to the MM algorithm. One could employ other algorithms such as proximal gradient or ADMM to minimize the non-convex objective and at best would only be able to guarantee convergence to a stationary point of the original problem. The objective function landscapes of deep learning problems are quite non-convex, and yet the humble stochastic gradient method produces useful answers even though convergence to a global minimizer cannot be guaranteed.\\n\\nMoreover, it is not essential to use an MM algorithm, but we use it to take advantage of existing algorithms for the convex biclustering problem. As noted in the sentence after Eq 5: \\\"Minimizing $g(U \\\\mid \\\\tilde{U} )$ is equivalent to minimizing the objective function of the convex biclustering problem for which efficient algorithms have been introduced (Chi et al., 2017).\\\" \\n\\nDue to space limitation we have decided to keep the detailed discussion on stationary points in Appendix B.2 from the original submission, but to answer the referee's question, this is what we mean by a stationary point. A point $u$ is a stationary point of a function $f$ if all directional derivatives of $f$ at $u$ are non-negative:\\n$$\\n\\\\underset{t \\\\rightarrow 0}{\\\\lim}\\\\; \\\\frac{f(u + tv) - f(u)}{t} \\\\geq 0 \\\\quad\\\\quad \\\\text{for all $v$ such that $u + tv$ is in the domain of $f$}.\\n$$\\nIn other words, taking an infinitesimal step in any direction $v$ cannot decrease the objective function value but is allowed to increase the objective function. Note that the directional derivative of the function in Eq (1) exists everywhere and for all directions $v$.\\n\\n10. It is not clear what the sub/super scripts $l, k$ mean. Maybe with these defined, the proposed multi-scale metric would have obvious advantages, but currently it is not clear what the point of this metric is.\\n\\nA. Corrected. We now explicitly write that \\\"Note that the $l$ and $k$ denote the power of 2 taken for specific row and column cost parameters ($\\\\gamma_c,\\\\gamma_r$) in the solution. 
This is intended as a compact notation that corresponds a pair of parameters $(\\\\gamma_r,\\\\gamma_c)$ to their solution $U^{l,k}$ and filled in estimate $\\\\tilde{X}^{(l,k)}$.\\\"\\n\\nWe also write \\\"Our goal is to aggregate distances between a pair of rows (columns) across multiple scales of the solution, to calculate a metric that better recovers the local and global geometry of the data despite the missing values, thus \\\"fixing\\\" the missing data metric.\\\" \\nAnd furthermore \\\"This metric takes advantage of solving the optimization for multiple pairs of cost parameters and filling in the missing values with increasingly smooth estimates (as $\\\\gamma_r$ and $\\\\gamma_c$ increase). It also alleviates the need to identify the ideal scale at which to fill in the points; it is not clear that a single \\\"optimal\\\" scale actually exists, but rather different points in the matrix may have different optimal scales\\\". We have demonstrated in the experimental results the large variability in results for a competing method for which it is unclear how to select the proper scale of the row and column cost parameters. \\n\\n11. Figure 4 appears before it is mentioned and is displayed as part of the previous section.\\n\\nA. Corrected.\"}",
"{\"title\": \"response to AnonReviewer1 (part 2)\", \"comment\": \"6. Why do the authors want Omega to be concave functions as this makes the objective not convex. Additionally the penalty $\\\\sqrt(|| \\\\cdot ||_2) $is approximately doing a square root twice because the l2-norm already is the square root of the sum of squares. Also what is the point of approximating the square root function instead of just using the square root function? It is overall not clear what the nature of the penalty term g2 is; Appendix A, implies it must be overall a convex function because of the upper bound.\\n\\nA. Corrected. In a new paragraph above Section 2.1, we explain why we choose to solve a non-convex optimization problem compared to a convex one. Using a single square-root would result in a convex $\\\\Omega$. We use a function that results in taking a square root twice to get a concave $\\\\Omega$. We approximate the square root because just using a square root function, i.e. taking $\\\\Omega(z) = \\\\sqrt{z}$, would result in an $\\\\Omega$ that does not satisfy Assumption 2.2 as the derivative $\\\\Omega'(z) = \\\\frac{1}{2\\\\sqrt{z}}$ does not exist at $z=0$. Practically this would mean that the MM algorithm updates would be undefined if for example any pair of rows or column variables were identical (as desired), e.g. if $U_{i \\\\cdot} = U_{j \\\\cdot}$. The penalty term g2 in Appendix A is indeed convex. Our MM algorithm inexactly solves the non-convex optimization problem, whose objective function is given in Eq (1), by solving a sequence of convex optimization problems - the convex biclustering problem, whose objective function is given in the equation above Eq (5).\\n\\n7. Equation 5 is not clear that it is the first order Taylor approximation. Omega' is the derivative of the Omega function? Do the other terms cancel out? Also what is the derivative with respect to; each Ui. for all Uj. ?\\n\\nA. Corrected. $\\\\Omega'$ is the derivative of the $\\\\Omega$ function. We have clarified where the notation is first introduced. The inequality $\\\\Omega(z) \\\\leq \\\\Omega(\\\\tilde{z}) + \\\\Omega'(\\\\tilde{z})(z - \\\\tilde{z})$ holds for all non-negative $z$ and $\\\\tilde{z}$. Therefore, it holds for $z = \\\\lVert U_{i\\\\cdot} - U_{j \\\\cdot} \\\\rVert_2$ and $\\\\tilde{z} = \\\\lVert \\\\tilde{U}_{i\\\\cdot} - \\\\tilde{U}_{j \\\\cdot} \\\\rVert_2$.\\n\\n8. \\\"first-order Taylor approximation of a differentiable concave function provides a tight bound on the function\\\" Tight bound is not an appropriate term and requires being provable. Unless the function is close to linear, a first order Taylor approximation won't be anything close to tight.\\n\\nA. We used \\\"tight\\\" in the sense that there are values of $z$ at which the inequality becomes an equality. Note that the linear approximation $\\\\Omega(\\\\tilde{z}) + \\\\Omega'(\\\\tilde{z}) (z - \\\\tilde{z})$ equals the nonlinear function $\\\\Omega(z)$ when $z = \\\\tilde{z}$. This tangency condition is needed to prove the convergence properties of the MM algorithm. But we agree \\\"tight\\\" was not appropriate and have changed \\\"tight bound\\\"to \\\"global upper bound.\\\"\"}",
"{\"title\": \"response to AnonReviewer1 (part 1)\", \"comment\": \"1. The overall motivation for how they construct the algorithm and the intuition behind how all the pieces of the algorithm work together are not great.\\n\\nA. Corrected, we have provided more motivation intuition and details on the algorithm.\\n\\n2. Smooth is not clearly defined and not an obvious measure for a matrix. Figure 1 shows smooth matrices at various levels, but still doesn't define explicitly what smoothness is. Does smoothness imply all entries are closer to the same value?\\n\\nA. Smoothness can be characterized mathematically using a bi-H\\u00f6lder condition, which is a common assumption in the matrix organization / biclustering literature. Smoothness implies that under the true row and column geometry of the data, neighboring entries are similar.\\n\\n3. \\\"Replacing Jr(U) and Jc(U) by quadratic row and column Laplacian penalties\\\" The sentence is kind of strange as Laplacian penalties is not a thing. Graph Laplacian can be used as an empirical estimate for the Laplace Beltrami operator which gives a measure of smoothness in terms of divergence of the gradient of a function on a manifold; however the penalty is one on a function's complexity in the intrinsic geometry of a manifold. It is not clear how the proposed penalty is an estimator for the intrinsic geometry penalty. It seems like the equation that is listed is just the function map $\\\\Omega(x) = x^2$, which also is not a concave function (it is convex), so it does not fit the requirements of Assumption 2.2.\\n\\nA. Corrected. We have removed the term \\\"Laplacian penalties.\\\" The function $\\\\Omega(x) = x^2$ is convex, but it is not the $\\\\Omega$ function studied in this paper. We included it as part of the literature review as it is a commonly used regularizer that bears some similarity to the one used in this paper, but agree we did not make this clear enough. We have added discussion below Assumption 2.2, clarifying that the penalties used in this paper (ones that satisfy Assumption 2.2) are different from those like commonly used convex quadratic penalties, previously referred to as ``\\\"Laplacian penalties.\\\" Penalties used in this paper can completely eliminate small variations between pairs of similar rows (columns) but less aggressively shrink very different pairs of rows (columns) towards each other.\\n\\n4. Proposition 1 is kind of strangely presented. At first glance, it is not clear where the proof is, and it takes some looking to figure out it is Appendix B because it is reference before, not after the proposition. Or it might be more helpful if it is clearly stated at the beginning of Appendix B that this is the proof for Proposition 1.\\n\\nA. Corrected. We have put the sentence \\\"The proof of Proposition 1 is in Appendix B\\\" after the statement of the proposition. We have also changed the title of Appendix B from \\\"Convergence\\\" to \\\"Proof of Proposition 1.\\\"\\n\\n5. The authors write: \\\"Missing values can sabotage efforts to learn the low dimensional manifold underlying the data. As the number of missing entries grows, the distances between points are increasingly distorted, resulting in poor representation of the data in the low-dimensional space.\\\" However, they use the observed values to build the knn graph used for the row/column penalties, which is counter-intuitive because this knn graph is essentially estimating a property of a manifold and the distances have the same distortion issue.\\n\\nA. 
While we are using a knn graph on the rows and columns, note that our method takes into account both rows and columns geometry jointly. Thus we are leveraging information from both domains to fill in the values of the data. In addition the weights of of our knn graph are continuously updated throughout the optimization based on the current smooth estimate $U$. Thus the weights are pulling rows and columns together at increasingly coarse scales, going from local geometry to global geometry.\"}",
"{\"title\": \"response to AnonReviewer3 (Part 2/2)\", \"comment\": \"2) The purpose of the learning is unclear. The title does not give any hint about the learning goal. The objective function reads like filling missing values. The subsequent text claims that minimizing such a objective can achieve biclustering. However, in the experiment, the comparison is done via visualization and normal clustering (k-means).\\n\\nA. Manifold learning is a class of unsupervised methods aiming at uncovering a low-dimensional representation for data. We have generalized the typical manifold learning problem to addressing a coupled structure along both rows and columns, revealing manifolds for those. The purposes for such learning, as we present in the introduction, are plentiful and include exploratory data analysis, data visualization, and precursors for clustering and classification.\\n\\nRegarding biclustering, following our reply above to 1a), we integrate multiple biclustering solutions in order to perform co-manifold learning, i.e. simultaneously identify row and column space manifolds organizing the entries of the data matrix, as opposed to seeking a single bi-clustering. Our assumption is that for different types of data, assuming a bi-clustering model is too restrictive and results in breaking up smooth geometries into disjoint clusters that do not match the actual geometry of the data.\\n\\nOur empirical results are intended to address these cases, demonstrated for both gene expression and a synthetic example. In both cases the domain of the columns is clustered whereas the domain of the rows is a manifold. Indeed the visualization of the low-dimensional embedding of the genes for all methods demonstrates that there is no clear clustered structure in this domain (we have witnessed this manifold structure of genes in other gene expression datasets and indeed this one of of the motivations for our approach). \\nFollowing the reviewer's comment we now clarify in the beginning of the experimental results section that \\n\\\"The model we consider in the paper is such that the data is not represented by a biclustering model but rather at least one of the modes (rows/columns) lies on a low-dimensional manifold. In our experiments we consider three such examples. In the first a manifold structure exists along both rows and columns, and for the second and third the columns belong to disjoint clusters while the rows lie on a manifold...\\\"\\n\\n3) The empirical results are not convincing. Two data sets are synthetic. The only real-world data set is very small. Why k-means was used? How to choose k in k-means?\\n\\nA. Corrected.\\n k-means was used as it a common technique to extract clusters from low-dimensional embeddings (e.g. spectral clustering). \\n We now write that \\\"We apply $k$-means to the column embeddings of each method, with $k$ set to the correct number of clusters in the data, as we want to evaluate the ability of the methods to properly represent the data without being sensitive to empirical estimation of the number of clusters in the data.\\\"\\nRegarding the size of the datasets, we now write in the conclusions:\\n\\\"We also intend to develop efficient solutions to accelerate the optimization in order to address large-scale datasets, in addition to the small-scale regime we demonstrate here. 
We note that the datasets considered here, while being small-scale in the observation domain are high-dimensional in the feature domain, which is a non-trivial setting, and indeed a challenge for supervised methods such as deep learning due to limited training data.\\\" \\n\\n4. The choice Omega function after Proposition 1 needs to be elaborated. A function curve plot could also help.\\n\\nA. Corrected. We have added discussion below Assumption 2.2 to clarify the key feature of the choice of the $\\\\Omega$ function (previously after Proposition 1, but now in Eq. 3). We have also added a function curve plot in Appendix A when discussing the construction of the majorization.\\n\\n5. What is Omega' in Eq. 2?\\n\\nA. Corrected. $\\\\Omega'(z)$ is the first derivative of the function $\\\\Omega(z)$. We have added an explanation of this where the notation is first introduced.\"}",
"{\"title\": \"response to AnonReviewer3 (Part 1/2)\", \"comment\": \"1a) Filling missing values is not new. Even co-clustering with missing values also exists.\\n\\nA. Corrected. \\nNote that our end-goal is not to fill in the missing data, but rather to reveal low dimensional embeddings for rows and columns of a data matrix in a missing data scenario. \\nIn addition, we are not addressing a co-clustering scenario but rather a more general problem in which the rows and columns are not necessarily clustered, but rather lie on a manifold structure.\\nTaking the reviewers comments into consideration we have updated the introduction, now writing that\\n\\\"...In certain settings, however, assuming a bi-clustering model is too restrictive and results in breaking up smooth geometries into artificial disjoint clusters that do not match the actual structure of the data. This can occur when the true geometry is one of overlapping rather than disjoint clusters, for example in word-document analysis (Ahn et al., 2010), or when the underlying structure is not one of clusters at all but rather a smooth manifold (Gavish & Coifman, 2012). Thus, we consider a more general viewpoint: data matrices possess geometric relationships between their rows (features) and columns (observations) such that both modes lie on low-dimensional manifolds.\\\"\\n\\nWe have also added more details on how our formulation differs from related work, now writing that \\n\\\"Our formulation (1) is distinct from related problem formulations in the following ways:\\n1. Rows and columns of U are simultaneously shrunk towards each other as the parameters $\\\\gamma_r$ and $\\\\gamma_c$ increase. Note that this shrinkage procedure is fundamentally different from methods like the clustered dendrogram, which independently cluster the rows and columns as well as alternating partition tree construction procedures (Gavish & Coifman, 2012; Mishne et al., 2016).\\n2. Our ultimate goal is not to perform matrix completion (though this is a by-product of our approach) but rather to perform joint row and column dimension reduction.\\n3. Our work generalizes both Shahid et al. (2016) and Chi et al. (2017) in that we seek the flexibility of performing non-linear dimension reduction on the rows and columns of the data matrix, i.e. a more general manifold organization than a co-clustered structure.\\n4. Instead of determining an optimal single scale of the solution as in Shahid et al.(2016);Chi et al. (2017), we recognize that the multiple scales of the different solutions can be aggregated to better estimate the underlying geometry, similar to the tree-based Earth mover's distance proposed in Ankenman (2014); Mishne et al. (2017).\\\"\\n\\n1b) It is insufficient to defeat two methods which are older than ten years. More extensive comparison is needed but lacking here. Why not first use a dedicated method such as MICE or collaborative filtering, and then run embedding method on rows and columns?\\n\\nA. Corrected. We have added both qualitative and quantitative comparisons to Fast robust PCA on Graphs [Shahid2016] in the experimental results. \\nWe note that we tried to impute the data using MICE as the reviewer suggested but the algorithm failed to converge in reasonable time on either dataset. 
From our understanding MICE is also unsuitable for filling in the data for high percentage of missing values as we considered in our paper.\\n\\nComparison to Diffusion Maps is intended to demonstrate the degradation that occurs for both visualization and clustering of data when values are missing. Since we use diffusion maps in our framework, this is a natural comparison. In addition, comparing to Diffusion Maps when the values have been filled in with the mean of all data is s equivalent to applying our approach for only a single scale of the cost parameters: $\\\\gamma_r,\\\\gamma_c \\\\rightarrow \\\\infty$.\"}",
"{\"title\": \"response to AnonReviewer2\", \"comment\": \"Q. In detail, the method just simply combines a loss for competing missing values, which is not new, and Laplacian losses for rows and columns, which are also not new.\\n\\nA. Corrected. We have added a clarification on how our work is different from related work. Specifically, we clarify below Assumption 2.2, that our penalties are different from commonly used Laplacian losses and the advantage our penalties have over commonly used Laplacian penalties for rows and columns.\", \"we_also_now_write_that\": \"\\\"Our formulation (1) is distinct from related problem formulations in the following ways:\\n1. Rows and columns of U are simultaneously shrunk towards each other as the parameters $\\\\gamma_r$ and $\\\\gamma_c$ increase. Note that this shrinkage procedure is fundamentally different from methods like the clustered dendrogram, which independently cluster the rows and columns as well as alternating partition tree construction procedures (Gavish & Coifman, 2012; Mishne et al., 2016).\\n2. Our ultimate goal is not to perform matrix completion (though this is a by-product of our approach) but rather to perform joint row and column dimension reduction.\\n3. Our work generalizes both Shahid et al. (2016) and Chi et al. (2017) in that we seek the flexibility of performing non-linear dimension reduction on the rows and columns of the data matrix, i.e. a more general manifold organization than a co-clustered structure.\\n4. Instead of determining an optimal single scale of the solution as in Shahid et al.(2016);Chi et al. (2017), we recognize that the multiple scales of the different solutions can be aggregated to better estimate the underlying geometry, similar to the tree-based Earth mover's distance proposed in Ankenman (2014); Mishne et al. (2017).\\\"\"}",
"{\"title\": \"Revised paper\", \"comment\": \"We thank the reviewers for their time and constructive feedback. We have uploaded a revised version of the paper.\\nWe have added comparisons with an additional recently published method in the experimental results section, and have provided more details on the motivation to our approach, its novelty with respect to related work and the choice of row and column penalties used in our problem formulation. Detailed individual replies to reviewers are added below.\"}",
"{\"title\": \"lack of novelty\", \"review\": \"The manuscript proposes a co-manifold learning approach for missing data. The problem is important, but the method is lack of novelty.\", \"pros\": \"important problem setting, Good experimental results.\", \"cons\": \"the method is lack of novelty.\\n\\nIn detail, the method just simply combines a loss for competing missing values, which is not new, and Laplacian losses for rows and columns, which are also not new. I don't see much novelty in the model.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A joint method for filling missing values and emdedding rows & columns, but not convincing\", \"review\": \"This paper presents a joint learning method for filling missing value and bi-clustering. The method extends (Chi et al. 2017), using a penalized matrix approximation. The proposed method is tested on three data sets, where two are synthetic and one small real-world data matrix. The presented method is claimed to be better than two classical approaches Nonlinear PCA and Diffusion Maps.\\n\\n1) Filling missing values is not new. Even co-clustering with missing values also exists. It is insufficient to defeat two methods which are older than ten years. More extensive comparison is needed but lacking here. Why not first use a dedicated method such as MICE or collaborative filtering, and then run embedding method on rows and columns?\\n\\n2) The purpose of the learning is unclear. The title does not give any hint about the learning goal. The objective function reads like filling missing values. The subsequent text claims that minimizing such a objective can achieve biclustering. However, in the experiment, the comparison is done via visualization and normal clustering (k-means).\\n\\n3) The empirical results are not convincing. Two data sets are synthetic. The only real-world data set is very small. Why k-means was used? How to choose k in k-means?\\n\\n4) The choice Omega function after Proposition 1 needs to be elaborated. A function curve plot could also help.\\n\\n5) What is Omega' in Eq. 2?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Sufficient Novelty but Significant Clarity Issues\", \"review\": \"Review for CO-MANIFOLD LEARNING WITH MISSING DATA\", \"summary\": \"This paper proposes a two-stage method to recovering the underlying structure of a data manifold using both the rows and columns of an incomplete data matrix. In the first stage they impute the missing values using their proposed co-clustering algorithm and in the second stage they propose a new metric for dimension reduction.\\nThe overall motivation for how they construct the algorithm and the intuition behind how all the pieces of the algorithm work together are not great. The paper also has significant specific clarity issues (listed below). Currently these issues seem to imply the proposed algorithm has significant logic issues (mainly on the convex/concave confusions); however depending on how they are addressed, this may end up not being an issue. The experimental results for the two simulated datasets look very good. However for the lung dataset, the results are less promising and it is less clear of the advantage of the proposed algorithm to the two competing ones. \\nNovelty/Significance:\\nThe overall idea of the algorithm is sufficiently novel. It is very interesting to consider both rows and column correlations. Each piece of the algorithm seems to draw heavily on previous work; bi-clustering, diffusion maps, but overall the idea is novel enough. The algorithm is significant in that it addresses a relatively open problem that currently doesn\\u2019t have a well established solution.\\nQuestions/Clarity:\\nSmooth is not clearly defined and not an obvious measure for a matrix. Figure 1 shows smooth matrices at various levels, but still doesn\\u2019t define explicitly what smoothness is. Does smoothness imply all entries are closer to the same value? \\n \\u201cReplacing Jr(U) and Jc(U) by quadratic row and column Laplacian penalties\\u201d \\u2013 The sentence is kind of strange as Laplacian penalties is not a thing. Graph Laplacian can be used as an empirical estimate for the Laplace Beltrami operator which gives a measure of smoothness in terms of divergence of the gradient of a function on a manifold; however the penalty is one on a function\\u2019s complexity in the intrinsic geometry of a manifold. It is not clear how the proposed penalty is an estimator for the intrinsic geometry penalty. It seems like the equation that is listed is just the function map Omega(x) = x^2, which also is not a concave function (it is convex), so it does not fit the requirements of Assumption 2.2.\\nProposition 1 is kind of strangely presented. At first glance, it is not clear where the proof is, and it takes some looking to figure out it is Appendix B because it is reference before, not after the proposition. Or it might be more helpful if it is clearly stated at the beginning of Appendix B that this is the proof for Proposition 1.\", \"the_authors_write\": \"\\u201cMissing values can sabotage efforts to learn the low dimensional manifold underlying the data. 
\\u2026 As the number of missing entries grows, the distances between points are increasingly distorted, resulting in poor representation of the data in the low-dimensional space.\\u201d However, they use the observed values to build the knn graph used for the row/column penalties, which is counter-intuitive because this knn graph is essentially estimating a property of a manifold and the distances have the same distortion issue.\\nWhy do the author\\u2019s want Omega to be concave functions as this makes the objective not convex. Additionally the penalty sqrt(|| ||_2) is approximately doing a square root twice because the l2-norm already is the square root of the sum of squares. Also what is the point of approximating the square root function instead of just using the square root function? It is overall not clear what the nature of the penalty term g2 is; Appendix A, implies it must be overall a convex function because of the upper bound.\\nEquation 5 is not clear that it is the first order taylor approximation. Omega\\u2019 is the derivative of the Omega function? Do the other terms cancel out? Also what is the derivative with respect to; each Ui. for all Uj. ?\\n \\u201cfirst-order Taylor approximation of a differentiable concave function provides a tight bound on the function\\u201d \\u2013 Tight bound is not an appropriate term and requires being provable. Unless the function is close to linear, a first order Taylor approximation won\\u2019t be anything close to tight.\\nThe authors state the objective in 1 is not convex. Do they mean it is not strictly convex? In which case, by stationary points, they are specifically referring to local minima? Otherwise, what benefits does the MM algorithm have on an indefinite objective i.e. couldn\\u2019t you end up converging to a saddle point or a local maxima instead of a local minima, as these are all fixed points. \\nIt is not clear what the sub/super scripts l, k mean. Maybe with these defined, the proposed multi-scale metric would have obvious advantages, but currently it is not clear what the point of this metric is.\\nFigure 4 appears before it is mentioned and is displayed as part of the previous section.\\nFor the Lung data, it does not look like the proposed algorithm is better than the other two. None of the algorithms seem to do great at capturing any of the underlying structure, especially in the rows. It also is not super clear that the normal patients are significantly further from the cancer patients. Additionally are the linkage results from figure 3 from one trial? Without multiple trials it is hard to argue that this not just trial noise.\\nHow big are N1 and N2 in the linkage simulations. The Lung dataset is not very large, and it seems like the proposed algorithm has large computation complexity (it is not clear). Will the algorithm work on even medium-large sized matrices (10^4 x 10^4)?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJfFTjA5KQ | Unification of Recurrent Neural Network Architectures and Quantum Inspired Stable Design | [
"Murphy Yuezhen Niu",
"Lior Horesh",
"Michael O'Keeffe",
"Isaac Chuang"
] | Various architectural advancements in the design of recurrent neural networks~(RNN) have been focusing on improving the empirical stability and representability by sacrificing the complexity of the architecture. However, more remains to be done to fully understand the fundamental trade-off between these conflicting requirements. Towards answering this question, we forsake the purely bottom-up approach of data-driven machine learning to understand, instead, the physical origin and dynamical properties of existing RNN architectures. This facilitates designing new RNNs with smaller complexity overhead and provable stability guarantee. First, we define a family of deep recurrent neural networks, $n$-$t$-ORNN, according to the order of nonlinearity $n$ and the range of temporal memory scale $t$ in their underlying dynamics embodied in the form of discretized ordinary differential equations. We show that most of the existing proposals of RNN architectures belong to different orders of $n$-$t$-ORNNs. We then propose a new RNN ansatz, namely the Quantum-inspired Universal computing Neural Network~(QUNN), to leverage the reversibility, stability, and universality of quantum computation for stable and universal RNN. QUNN provides a complexity reduction in the number of training parameters from being polynomial in both data and correlation time to only linear in correlation time. Compared to Long-Short-Term Memory (LSTM), QUNN of the same number of hidden layers facilitates higher nonlinearity and longer memory span with provable stability. Our work opens new directions in designing minimal RNNs based on additional knowledge about the dynamical nature of both the data and different training architectures. | [
"theory and analysis of RNNs architectures",
"reversibe evolution",
"stability of deep neural network",
"learning representations of outputs or states",
"quantum inspired embedding"
] | https://openreview.net/pdf?id=SJfFTjA5KQ | https://openreview.net/forum?id=SJfFTjA5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJg4Lm5lxV",
"Syxt_zF4pm",
"B1xSrPE7pQ",
"HJgqWQUq2m",
"BJxJkz4cn7"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544753996262,
1541866096750,
1541781309281,
1541198593747,
1541190103214
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper825/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper825/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper825/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper825/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper825/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"although the way in which the authors characterize existing rnn variants and how they derive a new type of rnn are interesting, the submission lacks justification (either empirical or theoretical) that supports whether and how the proposed rnn's behave in a \\\"learning\\\" setting different from the existing rnn variants.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"rejection\"}",
"{\"title\": \"Interesting idea, but difficult to read\", \"review\": \"This paper presents a new framework to describe and understand the dynamics of RNNs inspired by quantum physics. The authors also propose a novel RNN architecture derived by their analysis.\\n \\nAlthough I found the idea quite interesting, my main concern is that the jargon used in the paper makes it hard to understand. I suggest that the authors to add an in-depth \\\"background\\\" section, so the reader becomes more familiar with the terms that will be introduced later. \\n \\nDespite this paper is mainly a theory paper, it would have a lot more strength if the authors provide some experiments to demonstrate the strength of the proposed architecture over LSTMs. \\n \\nAs a minor suggestion, the term \\\"universal\\\" should be removed from \\\"UNIVERSAL COMPUTING NEURAL NETWORK\\\" as all recurrent neural networks are, in theory, universal.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Possibly interesting, but unclear exposition and lack of concrete evidence\", \"review\": \"This paper attempts to do three things:\\n\\t1) introduce a generalization / formalism for describing RNN architectures\\n\\t2) demonstrate how various popular RNN architectures fit within the proposed framework\\n\\t3) propose a new RNN architecture within the proposed framework that overcomes some limitations in LSTMs\", \"the_ultimate_goal_of_this_work_is_to_develop_an_architecture_that\": \"1) is better able to model long-term dependencies\\n\\t2) is stable and efficient to train\\n\\nSome strengths, concerns, and questions loosely ordered by section:\\n\\nStable RNNs\\n\\t- it's not clear to me where equations (2) and (3) come from; what is the motivation? Is it somehow derived from this Runge-Kutta method (I'm not familiar with it)? \\n\\t- I don't understand what this t^th order time-derivative amounts to in practice. A major claim (in Table 4) is that LSTMs are time-order 2 whereas QUNNs are time-order L and the implication is that this means LSTMs are worse at modeling long term structure than QUNNs ; but how does that actually relate to practical ability to model long-term dependencies? It certainly doesn't seem correct to me to say that LSTMs can only memorize sequences of length 2, so I don't know why we should care about this time-derivative order.\\n\\t- I thought this section was poorly written. The notation was poorly chosen at times, e.g. the t_k notation and the fact that some $l$ have subscripts and some don't. There were also some severe typos, e.g. I think Claim 1 should be \\\"L-2-ORNN\\\". Furthermore, there were crucially omitted definitions: what is reversibility and why should we care? Relatedly, the \\\"proofs\\\" are extremely hand-wavy and just cite unexplained methods with no further information.\\n\\nQUNNs\\n\\t- The practical difference between QUNNs and LSTMs seems to be that the weights of the QUNN are dynamic within a single forward prop of the network, whereas LSTM weights are fixed given a single run (although the author does admit that the forget gates adds some element of dynamism, but there's no concrete evidence to draw conclusions about differences in practice).\\n\\t- I don't understand the repeated claim that LSTMs don't depend on the data; aren't the weights learned from data?\\n\\t\\nThere may be something interesting in this paper, but it's not clear to me in its current incarnation and I'm not convinced that an eight-page conference paper is the right venue for such a work. There's a substantial of amount of exposition that needs to be there and is currently missing. I suspect the author knows this, but due to space constraints had to omit a lot of definitions and explanations.\\n\\nI don't think all papers need experiments, but this paper I think would have greatly benefited from one. Community knowledge of LSTMs has reached a point where they are in practice easy to train and fairly stable (though admittedly with a lot of tricks). It would have been much more convincing to have simple examples where LSTMs fail due to instability and QUNNs succeed. Similarly, regarding long-term dependencies, my sense is that LSTMs are able to model some long-term dependencies. 
Experimental evidence of the gains offered by QUNNs would have also been very convincing.\", \"note\": \"It looks like there's some funkiness in the tables on page 8 to fit into the page limit.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting ideas but unclear exposition\", \"review\": \"The authors make connections between RNN dynamics and those of a class of ODEs similar to RNNs (ORNN) that has different orders of nonlinearity and order of gradients in time. They show that typical RNN architectures can be described as members of the ORNN family. They then make the connection that quantum mechanical systems can be described as following the Schrodinger equation which can be cast as a series of coupled 1st order ODEs of the evolution of wavefunctions under an influencing Hamiltonian. They then claim that these discretized equations can be represented by a RNN similar to a unitary RNN. They go on to outline a RNN structure inspired by this insight that has time-dependent activations to increase the scale of temporal dependence.\\n\\nThe main challenge of this paper is that it does not present or support its arguments in a clear fashion, making it difficult to judge the merit of the claims. Given the nuance required for their arguments, a more robust Background section in the front that contextualizes the current work in terms of machine learning nomenclature and prior work could dramatically improve reader comprehension. Also, while the parallels to quantum mechanics are intriguing, given that the paper is arguing for their relevance to machine learning, using standard linear algebra notation would improve over the unnecessary obfuscation of Dirac notation for this audience. While I'm not an expert in quantum mechanics, I am somewhat proficient with it and very familiar with RNNs, and despite this, I found the arguments in this paper very hard to decipher. I don't think this is a necessity of the material, as the URNN paper (http://proceedings.mlr.press/v48/arjovsky16.pdf) describes very similar concepts with a much clearer presentation and background. \\n\\nFurther, despite claims of practical benefits of their proposed RNN structure, (reduced parameter counts required to achieve a given temporal correlation), no investigations or analyses (even basic ones) are performed to try and support the claim. For example, the proposed scheme requires a time varying weight matrix, which naively implemented would dramatically grow the parameter count over a standard LSTM. I can understand if the authors prefer to keep the paper strictly a theory paper, but even the main proof in Theorem 4 is not developed in detail and is simply stated with reference to the URNN paper. \\n\\nThere are some minor mistakes as well including a reference to a missing Appendix A in Theorem 3, \\\"Update rule of Eq. (15)-(15)\\\", \\\"stble regime\\\". Finally, as a nit, the claim of \\\"Universal computing\\\" in the name, while technically true like other neural networks asymptotically, does not seem particularly unique to the proposed RNN over others, and doesn't provide much information about the actual proposed network structure, vs. say \\\"Quantum inspired Time-dependent RNN\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting viewpoint, but could use more examples\", \"review\": \"In this paper, the authors relate the architectures of recurrent neural\\nnetworks with ODEs and defines a way to categorize the RNN architectures by\\nlooking at non-linearity order and temporal memory scale. They further\\npropose QUNN, a RNN architecture that is more stable and has less complexity\\noverhead in terms of input dimension while comparing with LSTM. \\n\\nAlthough this paper provides a new view point of RNN architectures and relates\\nRNNs with ODEs, it fails to provide useful insight using this view point.\\nAlso, it is not clear what advantage the new proposed architecture QUNN has\\nover existing models like LSTM or GRU. \\n\\nThe paper is well presented and the categorization method is well defined.\\nHowever, how the order of non-linearity or the length of temporal memory\\naffect the behavior and performance of RNN architectures are not studied.\\n\\nIt is proved that QUNN is guaranteed existence and its Jacobian eigen values\\nwill always have zero real part. It would be easier to understand if the\\nauthors could construct a simple example of QUNN and conduct at least some \\nsynthetic experiments.\\n\\nIn general I think this paper is interesting but could be extended in various\\nways.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.