forum_id (string, 9–20 chars) | forum_title (string, 3–179 chars) | forum_authors (sequence, 0–82 items) | forum_abstract (string, 1–3.52k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence)
---|---|---|---|---|---|---|---|---|---|---
SkgCV205tQ | Accelerating first order optimization algorithms | [
"Ange tato",
"Roger nkambou"
] | There exist several stochastic optimization algorithms. However, in most cases it is difficult to tell for a particular problem which optimizer will be the best choice, as each of them performs well. Thus, we present a simple and intuitive technique that, when applied to first order optimization algorithms, improves the speed of convergence and reaches a better minimum of the loss function than the original algorithms. The proposed solution modifies the update rule based on the variation of the direction of the gradient during training. We conducted several tests with Adam and AMSGrad on two different datasets. The preliminary results show that the proposed technique improves the performance of existing optimization algorithms and works well in practice. | [
"Optimization",
"Optimizer",
"Adam",
"Gradient Descent"
] | https://openreview.net/pdf?id=SkgCV205tQ | https://openreview.net/forum?id=SkgCV205tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJxP9921eV",
"B1lJIzckam",
"Bklc29M5nX",
"rJg32aeL2X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544698510721,
1541542471139,
1541184178003,
1540914612475
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1498/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1498/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1498/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1498/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Dear authors,\\n\\nAll reviewers commented that the paper had issues with the presentations and the results, making it unsuitable for publication to ICLR. Please address these comments should you decide to resubmit this work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Issues with the presentation\"}",
"{\"title\": \"Cannot understand the paper\", \"review\": \"The paper considers a simplistic extension of first order methods typically used for neural network training. Apart from the basic idea the paper's actual algorithm is hard to read because it is full of lacking definitions. I have tried to piece together whatever I could by reading the proof. The algorithm box is very unclear. For instance the * operator is undefined.\\n\\nTo the best of my understanding which the paper changes the update by first checking whether the gradient has the same direction as the previous gradient if yes it uses the component wise maximum of the new gradient and the previous gradient in the update and otherwise it uses the new gradient. Now whether this if condition is checked component wise or an angle between the two vectors is completely unclear. \\n\\nI will really suggest the authors to at least write their algorithm with clarity. Further while stating the theorem there are undefined parameter and even the objective Regret has not been defined anywhere. Further the theorem which I could not verify due to similar unclarity shows I believe the same convergence result as AMSGrad and hence there is no theoretical advantage for the proposed algorithm. In terms of practice further I do not see a significant advantage and it could result be a step size issue . The authors do not say that they do a search over the hyper parameters. \\n\\nOn a philosophical level it is unclear what the motivation behind this particular change to any algorithm is. It would be good to discuss what additional advantage is added on top of acceleration. Note that the method feels very much like acceleration.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Paper is confusing\", \"review\": \"The paper proposes an acceleration method that slightly changes the AMSGrad algorithm when successive stochastic gradients point in different directions. I found the paper confusing to read because the critical points of Algorithm 1 are very unclear. For instance the \\\\phi function defined by Reddi et al. takes as argument all the past gradients g1...gt (see paper at the bottom of page 3) but is used inside Algorithm 1 with only the current gradient --\\\\phi_t(g_t)-- or an enigmatic \\\"max\\\" of two vectors --\\\\phi_t(max(g_t,pg_t))-- I have no idea what the actual calculation is supposed to be. The proof of the theorem (equation 6 in the appendix) suggests that this is a componentwise maximum and that the other gradients are still in. But a componentwise maximum is a surprisingly assymetric construction. What if we reparametrize by changing the sign of one particular weight? We get a different maximum?\\n\\nI finally looked into the empirical evaluation. I am not sure that the purported effect cannot be ascribed to other factors such as the choice of stepsize --they do not seem to have been looking for the best stepsize for each algorithm. The MNIST experiments are performed with a bizarre variant of CNN that seems to perform substantially worse than comparable system. They show the test loss but not the test accuracy though.\\n\\nIn conclusion I remain confused and unconvinced.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Theoretical contribution is limited.\", \"review\": \"Prons:\\nThis paper provides a simple and economic technique to accelerate adaptive stochastic algorithms. The idea is novel and preliminary experiments are encouraging.\", \"cons\": \"1.\\tThe theoretical analysis for AAMSGrad is standard and inherits from AMSGrad directly. Meanwhile, the convergence rate of AAMSGrad merely holds for strongly convex online optimization, which does not match the presented experiments. Hence, the theoretical contribution is limited. \\n2.\\tThe current experiments are too weak to validate the efficacy of the proposed accelerated technique. We recommend the authors to conduct more experiments on various deep neural networks.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
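The update rule debated in the row above is never written out explicitly in the abstract or the reviews. As a reading aid, here is a minimal Python sketch of the rule as AnonReviewer3 describes it; the component-wise interpretation, the function name `direction_aware_step`, the plain gradient step, and the learning rate are all assumptions, not the paper's verified algorithm.

```python
import numpy as np

def direction_aware_step(x, grad, prev_grad, lr=1e-3):
    """One hypothetical update of the rule the reviews describe: where the
    new gradient keeps the direction (sign) of the previous one, use the
    component-wise maximum of the two gradients; otherwise fall back to the
    new gradient. Whether the direction check is component-wise, and whether
    "max" is signed or by magnitude, is exactly the ambiguity the reviewers
    flag, so treat this as one possible reading, not the paper's algorithm."""
    same_direction = np.sign(grad) == np.sign(prev_grad)
    effective_grad = np.where(same_direction,
                              np.maximum(grad, prev_grad),  # signed max
                              grad)
    return x - lr * effective_grad
```

In the paper's notation, this effective gradient would presumably be fed to Adam or AMSGrad in place of g_t, which appears to be what the \phi_t(max(g_t, pg_t)) expression quoted by AnonReviewer1 intends.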
HkxCEhAqtQ | Accelerated Gradient Flow for Probability Distributions | [
"Amirhossein Taghvaei",
"Prashant G. Mehta"
] | This paper presents a methodology and numerical algorithms for constructing accelerated gradient flows on the space of probability distributions. In particular, we extend the recent variational formulation of accelerated gradient methods in Wibisono et al. (2016) from vector-valued variables to probability distributions. The variational problem is modeled as a mean-field optimal control problem. The maximum principle of optimal control theory is used to derive Hamilton's equations for the optimal gradient flow. The Hamilton's equations are shown to achieve the accelerated form of density transport from any initial probability distribution to a target probability distribution. A quantitative estimate on the asymptotic convergence rate is provided based on a Lyapunov function construction, when the objective functional is displacement convex. Two numerical approximations are presented to implement the Hamilton's equations as a system of N interacting particles. The continuous limit of Nesterov's algorithm is shown to be a special case with N=1. The algorithm is illustrated with numerical examples. | [
"Optimal transportation",
"Mean-field optimal control",
"Wasserstein gradient flow",
"Markov-chain Monte-Carlo"
] | https://openreview.net/pdf?id=HkxCEhAqtQ | https://openreview.net/forum?id=HkxCEhAqtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byg7H11Wx4",
"H1lvDj2FCX",
"SJe91SyLAQ",
"BkxMU9rA6m",
"Skx0X5H0TX",
"B1grUCbp67",
"BJgSTkJp6Q",
"BJemw1J6a7",
"H1gUpCRham",
"r1evDrYhnQ",
"SyxTTel53Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544773434809,
1543256926676,
1543005409577,
1542507081783,
1542507046032,
1542426188953,
1542414269227,
1542414170968,
1542414014033,
1541342558947,
1541173444801
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1497/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1497/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1497/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper developed an accelerated gradient flow in the space of probability measures. Unfortunately, the reviewers think the practical usefulness of the proposed approach is not sufficiently supported by realistic experiments, and the clarity of the paper need to be significantly improved. The authors' rebuttal resolved some of the confusion the reviewers had, but we believe further substantial improvement will make this work a much stronger contribution.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas but method is not practical\"}",
"{\"title\": \"Good discussion\", \"comment\": \"We thank the reviewer for reading our response and the revised version of the paper. We think this discussion is helpful and important in understanding the paper.\\n\\n\\u201c\\u2026 I still don't quite get how to go from an ODE/PDE to the Lagrangian. What is the relation between these two as well as the Lyapuno function ...\\u201d\\n\\nIt is actually the other way around. First, the Lagrangian is formulated. Then, ODE/PDEs are obtained from the Lagrangian. \\n\\nThe Lagrangian function in Eq. (2) is motivated by Lagrangian mechanics. In Lagrangian mechanics, the Lagrangian function is equal to the difference between kinetic energy and the potential energy. The Lagrangian in Eq. (2) has similar form when one think of X_t as position, U_t as velocity, and the objective function f(x) as potential energy. The time-varying scaling parameters are employed to obtain the convergence to the minimum. \\n\\nLyaponov functions are commonly understood as functions that capture the \\u201cenergy\\u201d of the system. They should be positive everywhere, except at the equilibrium, where they should equal to zero. The Lyapunov function Eq. (6) that appears in the paper has almost the form of sum of the kinetic energy and the potential energy.\\n\\nPlease note that we point out the role of \\u201ckinetic energy\\u201d and \\u201cpotential energy\\u201d in the Lagrangian in Eq. (2). For detailed discussion, we refer the interested reader to Wibinoso et. al. (2016).\\n\\n \\u201c\\u2026 Although the paper considers a different PDE (part of it is the first order Langevin PDE), I think same technique can be applied to the 2nd order PDE, which can also be defined by a Wasserstein gradient flow on the joint space.\\u201c\\n\\nThe accelerated flow proposed in our paper can NOT be defined by a Wasserstein gradient flow on the joint space. It does not belong to the family of Wasserstein gradient flows that appear in the paper that you mentioned [A blob method for diffusion] for any choice of functionals. \\n\\nThe situation is similar to the vector case. The accelerated flow is not a gradient flow. It can NOT be obtained from gradient of any function on the joint space. It is actually a Hamiltonian flow as it was shown in Wibinoso et. al. (2016), also reviewed in Sec. 2 of our paper. \\n\\nSimilarly, the accelerated flow for probability distribution that is presented in our paper is NOT a Wasserstein gradient flow with respect to any functional. It does not belong to the family of the Wasserstein gradient flows for any choice of the functionals. Actually, the accelerated flow presented in our paper is a Hamiltonian flow on the space of probability distributions, as it is shown in Theorem 1. \\n\\nIn fact, if it was possible, showing that the second order Langevin equation is a Wassertein gradient flow is a great contribution, and we are very interested to know how this derivation is done exactly. \\n\\n\\u201cthis paper seems to proposes a different way to get the 2nd order Langevin PDE, but the final numerical solution seems to be a numerical solution directly from some specific Wasserstein gradient flow\\u201d\", \"we_disagree_with_the_conclusion_for_the_following_reasons\": \"1) The accelerated flow presented in our paper is different from the 2nd order Langevin equation both in PDE form and in ODE form (please see our previous response and Appendix D). 
\\n\\n2) This paper is proposing a variational formulation to construct accelerated flow for probability distributions. We want to emphasize the important role of variational formulation (please see our previous response). Note that the same criticism also holds for Lagrangian mechanics. It can also be said that Lagrangian mechanics is \\u201ca different way\\u201d to obtain Newton\\u2019s second law. However, such evaluation is undermining the importance of Lagrangian mechanics. \\n\\n3) The numerical algorithm is obtained from discretizing the accelerated flow which itself is obtained from the variational formulation. The numerical algorithm is not obtained from any Wasserstein gradient flow because the accelerated flow proposed in our paper is not a Wasserstein gradient flow.\"}",
"{\"title\": \"thanks for the clarification\", \"comment\": \"I think the revision is getting better. However, I still don't quite get how to go from an ODE/PDE to the Lagrangian. What is the relation between these two as well as the Lyapuno function. I think some write of this part is necessary.\\n\\n\\u201c \\u2026 One can get the same formula by deriving it from Wasserstein gradient flows. \\u201d\\nOne straightforward way to directly solve the Wasserstein gradient flow with particle approximation, like what was done in this paper: A blob method for diffusion.\\n\\nAlthough the paper considers a different PDE (part of it is the first order Langevin PDE), I think same technique can be applied to the 2nd order PDE, which can also be defined by a Wasserstein gradient flow on the joint space. \\n\\nSo what I said is this paper seems to proposes a different way to get the 2nd order Langevin PDE, but the final numerical solution seems to be a numerical solution directly from some specific Wasserstein gradient flow. I tend to keep my decision.\"}",
"{\"title\": \"Response to the reviewer 4 (part 2)\", \"comment\": \"The reviewer also suggested several improvements as part of an enumerated list 1-8. These suggestions have been incorporated in the revised version of the paper:\\n\\n1) The definition of the divergence is indeed standard but now appears as part of Notation (on page 2). \\n\\n2) We have added a new section Appendix C as part of the Supplementary material. The definition of the Wasserstein gradient and Gateaux derivative appears as part of this section.\\n\\n3) The sentence has now been rephrased to avoid confusion.\\n\\n4) We do not completely understand the reviewer\\u2019s concern. The variational formulation in the finite-dimensional Euclidean setting is due to Wibisono et al. (2016). The motivation for the same appears in the Introduction. \\n\\n5) The Lyapunov function is useful to obtain convergence results. \\n\\n6) The definition of the Lagrangian (10) is a core contribution of this paper. The proposed definition represents a generalization of the Lagrangian (2) proposed by Wibisono et. al. The relationship between the two is summarized in Table I, discussed in Introduction. Additional relationship appears in Prop. 1 where it is shown that we recover the continuous limit of Nesterov ode in the Gaussian setting. Furthermore, the result of Theorem.1-(ii) shows that one also obtains the same convergence rate as in Wibisono, et. al. (2017). \\n\\n7) The text now reads \\u201cthe stochastic process (X_t,Y_t) is a Gaussian process\\u201d. The definition of a Gaussian process is standard.\\n\\n8) The typo has been fixed in the revised version of the paper.\"}",
"{\"title\": \"Response to the reviewer 4\", \"comment\": \"We thank the reviewer for reviewing the paper and for providing several helpful comments.\\n\\n\\u201c.. the resulting PDE seems to be a known result, which is the Fokker-Planck equation for the 2nd order Langevin dynamic.\\u201d\\n\\nIt appears that the main concern of the reviewer is that the accelerated gradient flow proposed in our paper is the same as the second order Langevin equation or SGHMC? We would like to clarify that this is not the case for the second order equation considered in this paper. \\n\\nFor a first order Langevin equation, it is indeed true that the Brownian motion and $\\\\nabla log(p)$ yield the same distribution. In other words, if one replaces the Brownian motion with $\\\\nabla log(p)$ in the first order Langevin equation, the resulting Fokker-Planck equation and thus the distribution remains the same. \\n\\nHowever, this property does not hold for the second order Langevin equation considered in this paper. In the second order system, we are dealing with the joint distribution on position and momentum. If one replaces the Brownian motion (in the momentum update) with $\\\\nabla log(p)$ where $p$ is the marginal on the position, the resulting Fokker-Planck equations are different. Consequently, the distributions are also different. \\n\\nSince this is an important point, we have included a new section (Appendix D) as part of the supplementary material in the paper to show the difference between the first order and the second order cases. \\n\\n\\u201c.. Actually, I think the derivation of accelerated gradient flow formula from the view of optimal control formalism does not seem necessary ..\\u201d \\n\\nVariational formulation of fundamental equations is a cornerstone of Mathematics. \\n 1. Lagrangian mechanics is a variational formulation of Newtonian mechanics;\\n 2. Feynman\\u2019s path integral formulation of quantum mechanics;\\n 3. For the Fokker-Planck equation, the celebrated gradient flow construction of the Jordan- Kinderlehrer-Otto;\\n 4. Finally, Wibisono et. al. is itself a variational formulation of the Nesterov ode (which has been known since 1980-s).\\n` 5. As noted in the introduction, the objective of this paper is to generalize Wibisono el. al. (2016). So a variational construction is natural. \\n \\nIn all these cases 1-4, variational formulations have been worthy of study not only for numerical reasons but also because of their rich mathematical structure, geometric aspects which makes the derivation of models and algorithms independent of the choice of coordinates, first integrals and Lyapunov function which provides insights into conserved quantities and convergence analysis etc. Variational formulations have also been useful for numerics, e.g., development of symplectic integrators. \\n\\n\\u201c \\u2026 One can get the same formula by deriving it from Wasserstein gradient flows. \\u201d\\n\\n We disagree that the proposed accelerated algorithm can be derived using Wasserstein gradient flows. Or at least, we are not aware of how to do that. \\n\\n\\\" ... though the derivation of accelerated gradient flow formula seems interesting, the resulting algorithm does not seem benefit from this derivation\\\"\\n\\nThe concern of the reviewer is justified. The numerical algorithm is obtained from discretizing the Hamilton\\u2019s equations (16). These equations are directly derived from the variational formulation. 
One may try to obtain the numerical algorithm directly from the variational formulation by discretizing (in both time and space) the Lagrangian directly. For example, the symplectic integration is the result of such a time discretization. We believe that it is possible express the variational problem in terms of particles in the Gaussian setting with the solution given by the proposed numerical algorithm in the Gaussian settings. However, doing so in more general setting is beyond the scope of this paper. \\n\\n\\u201cThe authors then shows empirically that the proposed method is better than SGHMC, which I think only comes from the numerical methods.\\u201d\\n\\nRegarding the numerical comparison, the revised version of the paper includes comparison to MCMC, HMCMC (which is the same as the second order Langevin equation), and a method based on the density estimation. Please note that, as clearly described in the Introduction, the main contribution of the paper is the variational formulation and the generalization of the Wibisono et. al. and not the numerical algorithm in of itself. The numerical experiments are included to illustrate the theoretical results (e.g., accelerated convergence rates), show the potential and limitations of the proposed algorithm (e.g., bias-variance tradeoff depicted in Fig. 3 (d) and computational complexity depicted in Fig. 3 (c), and provide some preliminary comparisons with MCMC and HMCMC (Fig. 3). \\n\\nWe do not claim that the proposed algorithm is better than all the existing algorithms. Such a claim will require extensive numerical experiments which are outside the scope of this paper.\"}",
"{\"title\": \"interesting derivation of 2nd gradient flows but with limited practical usefulness\", \"review\": \"This paper derives accelerated gradient flow formula in the space of probability measures from the view of optimal control formalism. The generalization of variational formulation from finite space to the space of probability measures seems new, but the resulting PDE seems to be a known result, which is the Fokker-Planck equation (with some minor modifications) for the 2nd order Langevin dynamic. From this point of view, the resulting algorithm from the derived PDE seems not having much practical advantage over SGHMC (a stochastic version of 2nd order Langevin dynamics).\\n\\nActually, I think the derivation of accelerated gradient flow formula from the view of optimal control formalism does not seem necessary. One can get the same formula by deriving it from Wasserstein gradient flows. When considering the functional as relative entropy, one can derive the formula simply from the Fokker-Planck equation of 2nd order Langevin dynamics. As a result, the proposed methods seems to be a new way to derive the Wasserstein gradient flow (or Fokker-Planck equation), which does not make impact the algorithm, e.g., both ways result in the same algorithm.\\n\\nBesides, I found the writing needs to be improved. There are a lot of background missing, or the descriptions are not clear enough. For example:\\n1. Page 2: the divergence operator is not defined, though I think it is a standard concept, but would be better to define it.\\n2. Page 2: the Wasserstein gradient and Gateaux derivative are not defined, what are the specific meanings of \\\\nabla_\\\\rho F(\\\\rho) and \\\\partial F / \\\\partial \\\\rho?\\n3. 1st line in Section 2: convex function f of d real variables seems odd, I guess the author means argument of f is d-dimensional variable.\\n4. Section 2, the authors directly start with the variational problem (3) without introducing the problem. Why do we need to variational problem? It would be hard to follow for some one who does not have such background.\\n5. Similarly, what is the role of Lyapunov function here in (6)? Why do we need it?\\n6. Why do you define the Lagrangian L in the form of (10)? What is the relation between (10) and (2)?\\n7. It is not clear what \\\"The stochastic process (X_t, Y_t) is Gaussian\\\" means in Proposition 1? It might need to be rephrased.\\n8. Second last line in page 5: I guess \\\\nabla \\\\log(\\\\rho) should be \\\\nabla\\\\log(\\\\rho_t).\\n\\nFor the theory, I think eq.15 only applies when the PDE, e.g. (13), is solved exactly, thus there is not too much practical impact, as it is well known from the Wasserstein gradient theory that the PDE decays exponentially, as stated in the theorem. When considering numerical solutions, I think this results is useless.\\n\\nFor the relation with SGHMC, let's look at eq.16. Actually, the derivative of the log term \\\\nabla \\\\log \\\\rho_t(X_t)) is equivalent to a brownian motion term. This can be seen by considering the Fokker-Planck equation for Brownian motion, which is exactly d \\\\rho_t = \\\\Delta \\\\rho_t. Consequently, instead of using the numerical approximations proposed later, one cane simply replacing this term with a Brownian motion term, which reduces to SGHMC (with some constant multipliers in front). 
\\n\\nThe authors then shows empirically that the proposed method is better than SGHMC, which I think only comes from the numerical methods.\\n\\nFor the kernel approximation, it makes the particles in the algorithm interactive. This resembles other particle optimization based algorithms such as SVGD, or the latest particle interactive SGLD proposed in [1] or [2[. I think these methods need to be compared.\\n\\n[1] Chen et al (2018), A Unified Particle-Optimization Framework for Scalable Bayesian Sampling.\\n[2] Liu et al (2018), https://arxiv.org/pdf/1807.01750.pdf\\n\\nTo sum up, though the derivation of accelerated gradient flow formula seems interesting, the resulting algorithm does not seem benefit from this derivation. The algorithm seems to be able to derived from a more direct way of using Wasserstein gradient flows, which results in a Wasserstein gradient flow for 2nd order Langevin dynamics, and is thus well known. The experiments are not convincing, and fail to show the advantage of the proposed method. The proposed method needs to be compared with other related methods.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank the reviewer for reviewing the paper and for providing insightful comments.\\n\\n \\u201cThis algorithm appears naively to have an O(n^2) complexity per iteration, which is very expensive in terms of the number of particles.\\u201d \\n\\nThis is an important criticism. As part of comparison with MCMC, we have included figure-3-(c) which highlights the O(n^2) complexity of the proposed algorithm compared to O(n) complexity of MCMC. In the revised version of the paper, we have now included text on the algorithm complexity and some approaches to ameliorate it: (i) exploiting the sparsity structure of the NxN matrix ; (ii) sub-sampling the particles in computing the empirical averages; (iii) adaptively updating the NxN matrix according to a certain error criteria. \\n \\nThe notational issues in Table-1 have been fixed in the revised version of the paper.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for reviewing the paper and for providing insightful comments.\\n\\n\\u201cNo comparison with other existing method is provided.\\u201d \\n \\nThe paper has been revised to now include a comparison with the MCMC and Hamiltonian MCMC algorithms. The comparison is described in Sec 4.3 and the results of the comparison (accuracy and computational time) are depicted in Figure 3. \\n\\n\\u201c \\u2026 the proposed methods either rely on strong Gaussian assumptions or density estimation.\\u201d \\n\\nWe would like to clarify that the proposed kernel algorithm does not involve explicit estimation of the density as an intermediate step. \\n1. We have included Remark 3 which clarifies the difference between the proposed kernel algorithm and an algorithm based on an explicit density estimation. \\n2. We have included results of numerical experiments comparing the kernel algorithm and the density estimation-based algorithm. Results appear in Figure-3-(a)-(d) in Sec. 4.3.\\n3. In order to avoid the confusion with the density estimation, we now refer to the kernel approximation as the diffusion-map approximation. \\n\\nThe algorithm based on Gaussian approximation is included because of its relationship to the Nesterov ode (see Remark 2). Also, the algorithm may be useful in the cases where the density is unimodal (see the discussion following equation (18) in the paper). \\n\\nFinally, we note that the proposed form of the interaction term arises as a solution of the variational problem (which is the main contribution of our paper). The theoretical results together with the positive preliminary numerical comparisons are likely to spur future work to develop more computationally efficient algorithms to approximate the interaction term.\"}",
"{\"title\": \"Summary of responses\", \"comment\": \"We thank the reviewers for carefully reading the paper and for providing helpful comments. Both the reviewers agreed that the problem is important, the contributions are original, and the paper is well written. Broadly, the reviewers raised two concerns on the numerical aspects of the paper:\", \"concern_1\": \"Lack of comparison with existing methods.\", \"concern_2\": \"Complexity/practicality of the proposed algorithm.\", \"our_answers_to_these_top_level_concerns_are_as_follows\": \"\", \"answer_to_concern_1\": \"The paper has been revised to now include also a comparison with the state-of-the-art Markov Chain Monte-Carlo (MCMC) and Hamiltonian MCMC algorithms.\", \"answer_to_concern_2\": \"The main contribution of this paper is theoretical. The preliminary numerical results demonstrate that, using the same number of samples, the proposed numerical algorithm achieves better accuracy compared to the state-of-the-art. The theoretical contributions together with these preliminary results are likely to fuel future study to develop more practical lower-complexity algorithms. Additional details appear in our response to the reviewers.\"}",
"{\"title\": \"theoretically interesting\", \"review\": \"The articles adapt the framework developed in Wibisono & al to the (infinite dimensional) setting consisting in carrying out gradient descent in the space of probability distributions.\", \"pros\": [\"the text is well written, with clear references to the literature and a high-level description of the current state-of-the-art.\", \"there is a good balance between mathematical details and high-level descriptions of the methods\", \"although I have not been able to check all the details of the proofs, the results appear to be correct.\"], \"cons\": [\"while I think that this type of article is interesting, I was really frustrated to discover at the end that the proposed methods either rely on strong Gaussian assumptions, or \\\"density estimations\\\". In other words, no \\\"practical\\\" method is really proposed.\", \"no comparison with other existing method is provided.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting extension of the Bregman Lagrangian framework, but quite expensive\", \"review\": \"Summary: This paper introduces a functional extension of the Bregman Lagrangian framework of Wibisono et al. 2016. The basic idea is to define accelerated gradient flows on the space of probability distribution. Because the defined flows include a term depending on the current distribution of the system, which is difficult to compute in general, the authors introduce an interacting particle approximation as a practical numerical approximation. The experiments are a proof-of-concept on simple illustrative toy examples.\", \"quality\": [\"The ideas are generally of high quality, but I think there might some typos (or at least some notation I did not understand). In particular\", \"tilde{F} is not defined for Table 1\", \"the lyapunov function for the vector column of table one includes a term referring to the functional over rho. I think this is a typo and should be f(x) - f(xmin) instead.\"], \"clarity\": \"The paper is generally clear throughout.\\n\\nOriginality & Significance: The paper is original to my knowledge, and a valuable extension to the interesting literature on the Bregman Lagrangian. The problem of simulating from probability distributions is an important one and this is an interesting connection between that problem and optimization.\", \"pros\": [\"An interesting extension that may fuel future study.\"], \"cons\": [\"This algorithm appears naively to have an O(n^2) complexity per iteration, which is very expensive in terms of the number of particles. Most MCMC algorithms would have only O(n) complexity in the number of particles. This limits its applicability.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
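The Hamilton's equations (16) discussed in the row above are not reproduced in the dump. Below is a schematic interacting-particle sketch assembled from the review thread: a second-order system of positions and momenta whose momentum update uses either a kernel (diffusion-map-style) estimate of grad log rho, or, following AnonReviewer4's observation, a Brownian-motion term that yields an SGHMC-like variant. The damping coefficient, step sizes, kernel form, and function names are assumptions, not the paper's equations.

```python
import numpy as np

def kernel_grad_log_density(X, h=0.5):
    """Gaussian-kernel estimate of grad log rho at each particle, standing in
    for the paper's interaction term (assumed form). The pairwise differences
    make this O(N^2) per step, the cost AnonReviewer3 objects to."""
    diffs = X[:, None, :] - X[None, :, :]                 # (N, N, d)
    w = np.exp(-np.sum(diffs**2, axis=-1) / (2 * h**2))   # (N, N) kernel weights
    w = w / w.sum(axis=1, keepdims=True)
    return -np.sum(w[:, :, None] * diffs, axis=1) / h**2  # (N, d)

def particle_step(X, Y, grad_f, dt=1e-2, gamma=1.0, stochastic=False):
    """One Euler step for N particles with positions X and momenta Y (assumed
    dynamics, not the paper's equations (16)). stochastic=True swaps the
    interaction term for Brownian noise, giving the SGHMC-like scheme
    mentioned in the review thread."""
    if stochastic:
        interaction = np.sqrt(2.0 * gamma / dt) * np.random.randn(*X.shape)
    else:
        interaction = gamma * kernel_grad_log_density(X)
    Y = Y + dt * (-gamma * Y - grad_f(X) - interaction)
    X = X + dt * Y
    return X, Y
```

As the authors stress in their responses, the deterministic and stochastic branches are not equivalent in this second-order setting: the resulting joint distributions over position and momentum differ, which is the distinction elaborated in their Appendix D.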
BygANhA9tQ | Cost-Sensitive Robustness against Adversarial Examples | [
"Xiao Zhang",
"David Evans"
] | Several recent works have developed methods for training classifiers that are certifiably robust against norm-bounded adversarial perturbations. These methods assume that all the adversarial transformations are equally important, which is seldom the case in real-world applications. We advocate for cost-sensitive robustness as the criterion for measuring the classifier's performance for tasks where some adversarial transformations are more important than others. We encode the potential harm of each adversarial transformation in a cost matrix, and propose a general objective function to adapt the robust training method of Wong & Kolter (2018) to optimize for cost-sensitive robustness. Our experiments on simple MNIST and CIFAR10 models with a variety of cost matrices show that the proposed approach can produce models with substantially reduced cost-sensitive robust error, while maintaining classification accuracy. | [
"Certified robustness",
"Adversarial examples",
"Cost-sensitive learning"
] | https://openreview.net/pdf?id=BygANhA9tQ | https://openreview.net/forum?id=BygANhA9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJe7cKRWeN",
"Byx0aQAY0X",
"ByxW5QAYCX",
"B1xUE7Rt0m",
"HJxSIK7VRX",
"rJgLQKrMRX",
"ByeXwZZzCm",
"HJlIr-JG0X",
"SygPPR_l0X",
"B1e-WxMi6Q",
"HklCJkMs6X",
"SklzwA-oaQ",
"H1llJV-z6X",
"HJg7nLR0n7",
"B1lNPh9c3Q",
"ryxEPIvKhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544837514778,
1543263173682,
1543263112699,
1543263021547,
1542891853343,
1542768926434,
1542750554662,
1542742334325,
1542651486713,
1542295544909,
1542295269958,
1542295129712,
1541702616454,
1541494442649,
1541217372102,
1541138012065
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1496/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1496/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies the notion of certified cost-sensitive robustness against adversarial examples, by building from the recent [Wong & Koller'18]. Its main contribution is to adapt the robust classification objective to a 'cost-sensitive' objective, that weights labelling errors according to their potential damage.\\nThis paper received mixed reviews, with a clear champion and two skeptical reviewers. On the one hand, they all highlighted the clarity of the presentation and the relevance of the topic as strengths; on the other hand, they noted the relatively little novelty of the paper relative [W & K'18]. Reviewers also acknowledged the diligence of authors during the response phase. The AC mostly agrees with these assessments, and taking them all into consideration, he/she concludes that the potential practical benefits of cost-sensitive certified robustness outweight the limited scientific novelty. Therefore, he recommends acceptance as a poster.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"practical important variant of certifiably robust classification\"}",
"{\"title\": \"Thank you for your suggestion.\", \"comment\": \"Thank you for your consideration, we have included our discussions on the choices of cost matrices in Appendix D.\"}",
"{\"title\": \"Thank you for suggestions, but there is a misunderstanding on Madry et al. (2018) being a certified robust learning model.\", \"comment\": \"Madry et al., (2018) is based on robust training against adversarially generated images devised via PGD attacks, which is not targeted for certifiable robustness. Thus, investigation on how to make PGD-based robust training cost-sensitive is beyond the scope of our work.\\n\\nTo the best of our knowledge, before the submission of our paper there are only two proposed certifiable robust training methods: one is Wong & Kolter (2018) and the other one is [1]. Compared with Wong & Kolter (2018), [1] is only applicable to neural networks with two layers, thus we focus our experiments on Wong & Kolter (2018) that is more general. Recently, [2] extends the method of [1] to arbitrary number of neural network layers, thus it would be interesting to study whether our approach is applicable to the robust model developed in [2].\", \"reference\": \"[1] Raghunathan, et al., Certified Defenses against Adversarial Examples. https://arxiv.org/abs/1801.09344\\n[2] Raghunathan, et al., Semidefinite relaxations for certifying robustness to adversarial examples. https://arxiv.org/abs/1811.01057\"}",
"{\"title\": \"Please read Appendix B.3 and Appendix C in the revised pdf\", \"comment\": \"1. Comparison with other alternatives\\n(a) We have added an equivalent form of the cost-sensitive CE loss for standard classification as in (B.1) in Appendix. The derivation of (B.1) simply follows the definition of the cross entropy loss and the modified softmax outputs y_n as defined in (11) of [1]. Our robust classifier is basically applying the same techniques on the guaranteed robust bound to induce cost-sensitivity for the adversarial setting.\\n\\n(b) Indeed, [1] introduces other cost-sensitive loss including MSE loss and SVM hinge loss besides the cost-sensitive CE loss. However, they only evaluated the cost-sensitive CE loss in their experiments, as argued in [1] that CE loss usually performs best among the three loss functions for multiclass image classification. Thus, we consider cost-sensitive robust optimization based on CE loss as the most promising approach.\\n\\n(c) We have cited [1] in Section 3.2 of the main paper in the revised pdf.\\n\\n(d) Please refer to Appendix C for the discussions of existing related work on cost-sensitive learning with neural networks, and explanations on why choosing to incorporate cost information into the cross entropy loss, instead of other loss functions.\\n\\n(e) The proposed robust training objective adapts cost-sensitive CE loss to the adversarial setting, and the cost-sensitive CE loss is aligned with the idea of minimizing the Bayes risks (see equation (1) in MetaCost). More specifically, it is proved in Lemma 10 of [1] that the cost-sensitive CE loss is c-calibrated, or more concretely, there exists an inverse relationship between the optimal CNN output and the Bayes cost of the t-th class.\\nTherefore, minimization of the cost-sensitive CE loss will lead to classifier that has risks closer to the optimal Bayes risks.\\n\\n2. As requested, we have added in Appendix C to survey related works on cost-sensitive learning for non-adversarial settings and explain the reasoning behind the techniques we choose.\\n\\n3. Quoting from the reviewer, \\u201cWhat happens if the original examples are also evaluated/optimized cost-sensitively\\u201d, if you are referring to robustness of standard cost-sensitive learning method, this is what the experiment in Appendix B.3 tests. The results show naive cost-sensitive learning does not lead to cost-sensitive robustness.\\n\\nReference\\n[1]. Khan, et al., Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data. https://arxiv.org/abs/1508.03422\"}",
"{\"title\": \"Review update\", \"comment\": \"Thank you for providing the feedback to clarify my concerns on novelty and experiment. There is no denying that cost-sensitive adversarial learning is an interesting topic worth exploring. I appreciate the efforts the authors put to introducing and evaluating a cost-sensitive extension of a robust DL model by Wong & Kolter (2018). However, what I found most frustrating about this work is its strong A-plus-B flavor which I still don\\u2019t think can stand for a novel scientific paper. Moreover, the adversarial learning part of the current method builds largely on (Wong & Kolter 2018). This looks somewhat narrow given the fairly broad rage of paper title and claims. I suggest including one or two additional certified robust learning models (e.g., PGD by Madry et al., 2018) into the proposed framework to better justify the importance of cost-sensitive robust learning.\"}",
"{\"title\": \"thanks for clarifying\", \"comment\": \"Thanks to the authors for clarifying.\\n\\nFor point 1, I understand why the authors believe that their (3.1) is not ad hoc. Nevertheless, the authors' answers actually justify that (3.1) is perhaps not well-compared with other alternatives. \\n(a) (3.1) does not look the same from the cost-sensitive CE loss in [1], so why using (3.1) instead of [1] is a question mark.\\n(b) [1] contains more loss than the cost-sensitive CE loss, and why using a variant from cost-sensitive CE loss is another question mark.\\n(c) Even if (3.1) is a variant of cost-sensitive CE loss in [1], it hasn't been cited in this paper anyway?\\n(d) Quoting the authors, \\\"transformations that induce larger cost will receive larger penalization by minimizing the cost-sensitive CE loss\\\", but there are many different functions that achieve the property, including many discussed in other papers. Why or why not choosing (3.1)?\\n(e) Maybe it is because the derivation in Section 3 is way too short. But I fail to see how the authors follow MetaCost to \\\"multiply the probability estimates by the cost, but the result vector has to be normalized before plugging into the cross entropy loss\\\" and get (3.1). More detailed derivations are needed.\\n\\nI respectfully disagree with the authors' point on 2, as (IMHO) the authors are not using a state-of-art cost-sensitive objective (MetaCost is clearly outdated as evidenced by dozens of papers, and even [1] is just for the imbalanced setting, not for general cost-sensitivity that the authors want to achieve). So the current paper is like \\\"adversarial learning + some cost-sensitive objective\\\" gets better performance in the cost-sensitive setting. But cost-sensitive learning is a field that has been studied for more than 20 years. Why should we stick to \\\"some cost-sensitive objective\\\" but not \\\"good/state-of-art cost-sensitive objective\\\" when introducing cost-sensitivity to the adversarial setting? At least I demand to see a complete literature review on the cost-sensitive side (for the non-adversarial setting) and see the reasoning of the authors on the techniques that they choose to introduce to the adversarial learning field.\\n\\nI can accept the authors explanations on points 3 and 4, but still feel that it can be good to see what happens if the original examples are also evaluated/optimized cost-sensitively.\"}",
"{\"title\": \"re: Adversarial incentives and transformation hardness\", \"comment\": \"Thank you, the reduced effectiveness of the approach in settings where adversarial incentives do not align with class-pair hardness is the limitation I was concerned about in my previous comment. It would be great to add some elements of this discussion to the paper because your comments would help readers better understand cost matrices, as well as their applicability.\\n\\nI will increase my review score by one to take into account the outcome of this discussion.\"}",
"{\"title\": \"Adversarial incentives and transformation hardness\", \"comment\": \"We don\\u2019t see any intrinsic reason why the class transformation difficulty is correlated with adversarial value, but the actual value and difficulty should depend on the application. The results in Table 1 show that the cost-sensitive robustness can harden both \\u201ceasy\\u201d (4->9, robust error reduces from 10.08% to 1.02%) and \\u201chard\\u201d (0->2, robust error reduces from 0.92% to 0.38%) - the improvement is bigger for the \\u201ceasy\\u201d transformation, but even after the cost-sensitive robustness hardening, it remains slightly \\u201ceasier\\u201d than the \\u201chard\\u201d transformation in the overall robust model.\\n\\nFor the MNIST classes, there is no correlation between the adversarial value (in the toy check fraud motivation) and transformation difficulty, since adversarial value is directional and semantically different digits can look more similar than far apart ones. For a more realistic security application, it would be desirable to define the classes in such a way that the valuable adversarial transformations are also the hardest ones to achieve.\"}",
"{\"title\": \"re: explanations\", \"comment\": \"Thank you for taking the time to write a response to my review.\\n\\nRegarding 1., the explanation does provide some useful context for the choice of cost matrices. Do you have an intuition as to whether adversarial incentives will always correlate with transformation hardness? In other words, could there exist settings where the adversary would benefit more from a change in class that is relatively easy to make (and hard to defend against) compared to other class pairs? \\n\\nThank you for providing additional experimental results regarding 2.\"}",
"{\"title\": \"Additional experiments regarding cost-sensitive learning\", \"comment\": \"We\\u2019ve added an Appendix B.3 to the revised paper that addresses the question you raised about whether standard cost-sensitive loss trained on original examples would improve cost-sensitive robustness. The results from our experiments show that standard cost-sensitive loss does not result in a classifier with cost-sensitive robustness.\"}",
"{\"title\": \"Thank you for your positive and constructive comments\", \"comment\": \"We hope the following explanations address your questions:\\n\\n1. Regarding the choice of the cost matrices\\nOur goal in the experiments was to evaluate how well a variety of different types of cost matrices can be supported. MNIST and CIFAR-10 are toy datasets, thus defining cost matrices corresponding to meaningful security applications for these datasets is difficult. Instead, we selected representative tasks and designed cost matrices to capture them. Our experimental results show the promise of the cost-sensitive training method works across a variety of different types of cost matrices, so we believe it can be generalized to other cost matrix scenarios that would be found in realistic applications.\\n\\nIt is a good point that the cost matrices that were selected based on the robust error rates in Fig 1B are somewhat cyclical, but it does not invalidate our evaluation. We use the \\u201chardness\\u201d of adversarial transformation between classes only for choosing representative cost matrices, and the robust error results on the overall-robustness trained model as a measure for transformation hardness. Further, the transformation hardness implied by the robust error heatmap is generally consistent with intuitions about the MNIST digit classes (e.g., \\u201c9\\u201d and \\u201c4\\u201d look similar so are harder to make robust to transformation), as well as with the visualization results produced by dimensional reduction techniques, such as t-SNE [1]. \\n\\n2. Regarding the choice of epsilon for CIFAR-10\\nIn our CIFAR-10 experiments, we set epsilon=2/255, the same experimental setup as in [2]. Our proposed cost-sensitive robust classifier can be applied to larger epsilon for CIFAR-10 dataset, and similar improvements have been observed for different epsilon settings. In particular, we have run experiments on CIFAR-10 with epsilon varying from {2/255, 4/255, 6/255} for the single seed task. The comparison results are reported in Figure 5(b), added to the revised PDF. These results support the generalizability of our method to larger epsilon settings.\\n\\n[1] Maaten and Hinton, Visualizing Data using t-SNE. http://www.jmlr.org/papers/v9/vandermaaten08a.html\\n[2] Wong, et al., Scaling Provable Adversarial Defenses. https://arxiv.org/abs/1805.12514\"}",
"{\"title\": \"Novelty is cost-sensitive robustness\", \"comment\": \"Thank you for your review. Please see our responses below.\\n\\n1. Concern regarding the novelty\\nThe review correctly notes that the method we use to achieve cost-sensitive robustness is a straightforward extension to the training procedure in Wong & Kolter (2018). The novelty of our paper lies in the introduction of cost-sensitive robustness as a more appropriate criteria to measure classifier\\u2019s performance, and in showing experimentally that the cost-sensitive robust training procedure is effective. Previous robustness training methods were designed for overall robustness, which does not capture well the goals of adversaries in most realistic scenarios. We consider it an advantage that our method enables cost-sensitive robustness to be achieved with straightforward modifications to overall robustness training.\\n\\n2. Limitation in data scale\\nWe agree with the reviewer that certified robustness methods, including our work, are a long way from scaling to interesting models. All previous work on certified adversarial defenses has been limited to simple models on small or medium sized datasets (e.g., [1-3] below), but there is growing awareness that non-certified defenses are unlikely to resist adaptive adversaries and strong interest in scaling these methods. The method we propose and evaluate for incorporating cost-sensitivity in robustness training is generic enough that we expect it will also work with most improvements to certifiable robustness training. So, even though our implementation is not immediately practical today, we believe our results are of scientific interest, and the methods we propose are likely to become practical as rapid progress continues in scaling certifiable defenses. \\n\\n\\n[1] Wong and Kolter, Provable defenses against adversarial examples via the convex outer adversarial polytope. https://arxiv.org/abs/1711.00851\\n[2] Raghunathan, et al., Certified Defenses against Adversarial Examples. https://arxiv.org/abs/1801.09344\\n[3] Wong, et al., Scaling Provable Adversarial Defenses. https://arxiv.org/abs/1805.12514\"}",
"{\"title\": \"Objective justification\", \"comment\": \"Thank you for your review. Your comments about the model being ad hoc stem from a few misunderstandings, which we hope to clarify:\\n\\n1. Justification of training objective (3.1)\\nThe design of (3.1) is not ad hoc, but follows from previous cost-sensitive learning work such as MetaCost, and is inspired by the cost-sensitive CE loss (see equation (10) of [1] for a detailed definition). To be specific, class probabilities for cost-sensitive CE loss are computed by multiplying the corresponding cost and then normalizing the result vector. As a result, transformations that induce larger cost will receive larger penalization by minimizing the cost-sensitive CE loss. We neglected to include this explanation in the paper, and will revise it to make this clear. \\n\\nFor the first question, moving the sum of cost in front of \\u201clog\\u201d is unreasonable because the loss for each seed example will not be a negative log-likelihood term as in the case of cross-entropy. We can check the sanity of the objective by examining whether it reduces to standard CE loss if we set C = 1*1^\\\\top-I. For the second question, we indeed multiply the probability estimates by the cost, but the result vector has to be normalized before plugging into the cross entropy loss. Thus, the sum of cost will appear in front of the \\u201cexp\\u201d term.\\n\\n2. Comparison with other alternative designs\\nThe cost-sensitive neural network models you mentioned are only demonstrated to be effective in the non-adversarial settings, whereas we show that our proposed classifier is effective in the adversarial setting. Thus, comparing our method with theirs is not appropriate, since it is unclear whether such alternative cost-sensitive models can be adapted and remain effective in the adversarial setting. Even if they can be adapted, it is still not the main focus of our paper, as our main goal is to show that our proposed classifier achieves significant improvements in cost-sensitive robustness in comparison with models trained for overall robustness.\\n\\n3. Why are the original examples are not in cost-sensitive form?\\nThe training objective (3.1) is constructed for maximizing both cost-sensitive robustness and standard classification accuracy, and allows us to use the alpha hyperparameter to control the weighting between these goals. Thus, the first term in (3.1) doesn\\u2019t involve cost-sensitivity. We regard the standard classification accuracy as an important criteria for measuring classifier performance. Besides, the cost matrix for misclassification of original examples might be different from the cost matrix of adversarial transformations. For instance, misclassifying a benign program as malicious may still induce some cost in the non-adversarial setting, whereas the adversary may only benefit from transforming a malicious program into a benign one. In a scenario where the model is cost-sensitive regardless of adversaries, it could make sense to incorporate a cost-sensitive loss function as the first term also, but we have not explored this and are focused on the adversarial setting where cost-sensitivity is with respect to adversarial goals.\\n\\n4. 
What if we only optimize the original examples by cost-sensitive loss\\nGiven the vulnerability of deep learning classifiers against adversarial examples, we highly doubt that if we only optimize the original training by the cost-sensitive loss it would achieve significant cost-sensitive robustness (this expectation is based on how poorly models trained with the goal of overall accuracy do at achieving overall robustness). To be more convincing, we are running an experiment to test the robustness of a standard cost-sensitive classifier and will post the results soon.\\n\\nReference\\n[1]. Khan, et al., Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data. https://arxiv.org/abs/1508.03422\"}",
"{\"title\": \"interesting initiative, ad-hoc model\", \"review\": \"The authors define the notion of cost-sensitive robustness, which measures the seriousness of adversarial attack with a cost matrix. The authors then plug the costs of adversarial attack into the objective of optimization to get a model that is (cost-sensitively) robust against adversarial attacks.\\n\\nThe initiative is novel and interesting. Considering the long history of cost-sensitive learning, the proposed model is rather ad-hoc for two reasons:\\n\\n(1) It is not clear why the objective should take the form of (3.1). In particular, if using the logistic function as a surrogate for 0-1 loss, shouldn't the sum of cost be in front of \\\"log\\\"? If using the probability estimated from the network in a Meta-Cost guided sense, shouldn't the cost be multiplied by the probability estimate (like 1/(1+exp(...))) instead of the exp itself? The mysterious design of (3.1) makes no physical sense to me, or at least other designs used in previous cost-sensitive neural network models like\\n\\nChung et al., Cost-aware pre-training for multiclass cost-sensitive deep learning, IJCAI 2016\\nZhou and Liu, Training cost-sensitive neural networks with methods addressing the class imbalance problem, TKDE 2006 (which is cited by the authors)\\n\\nare not discussed nor compared.\", \"update\": \"I thank the authors for providing additional experiments on this part.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An incremental paper that straightforwardly applies cost-sensitive loss to robust adversarial learning.\", \"review\": \"The paper introduces a new concept of certified cost-sensitive robustness against adversarial attacks. A cost-sensitive robust optimization formulation is then proposed for deep adversarial learning. Experimental results on two benchmark datasets (MNIST, CIFAR-10) are reported to show the superiority of the proposed method to overall robustness method, both with binary and real-value cost matrices.\\n\\nThe idea of cost-sensitive adversarial deep learning is well motivated. The proposed method is clearly presented and the results are easy to access. My main concern is about the novelty of the approach which looks mostly incremental as a rather direct extension of the robust model (Wong & Kolter 2018) to cost-sensitive setting. Particularly, the duality lower-bound based loss function and its related training procedure are almost identical to those from (Wong & Kolter 2018), up to certain trivial modification to respect the pre-specified misclassification costs. The numerical results show some promise. However, as a practical paper, the current empirical study appears limited in data scale: I believe additional evaluation on more challenging data sets can be useful to better support the importance of approach.\", \"pros\": [\"The concept of certified cost-sensitive robustness is well motivated and clearly presented.\"], \"cons\": [\"The novelty of method is mostly incremental given the prior work of (Wong & Kolter 2018).\", \"Numerical results show some promise of cost-sensitive adversarial learning in the considered settings, but still not supportive enough to the importance of approach.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"** review score incremented following discussion below **\", \"strengths\": \"Well written and clear paper\", \"intuition_is_strong\": \"not all source-target class pairs are as beneficial to find adversarial examples for\", \"weaknesses\": \"Cost matrices choices feel a bit arbitrary in experiments\\nCIFAR experiments still use very small norm-balls\\n\\nThe submission builds on seminal work by Dalvi et al. (2004), which studied cost-sensitive adversaries in the context of spam detection. In particular, it extends the approach to certifiable robustness introduced by Wong and Kolter with a cost matrix that specifies for each pair of source-target classes whether the model should be robust to adversarial examples that are able to take an input from the source class to the target (or conversely whether these adversarial examples are of interest to an adversary).\\n\\nWhile the presentation of the paper is overall of great quality, some elements from the certified robustness literature could be reminded in order to ensure that the paper is self-contained. For instance, it is unclear how the guaranteed lower bound is derived without reading prior work. Adding this information in the present submission would make it easier for the reader to follow not only Sections 3.1 and 3.2 but also the computations behind Figure 1.b. \\n\\nThe experiments results are clearly presented but some of the details of the experimental setup are not always justified. If you are able to clarify the following choices in your rebuttal, this would help revise my review. First, the choice of cost matrices feels a bit arbitrary and somewhat cyclical. For instance, binary cost matrices for MNIST are chosen according to results found in Figure 1.b, but then later the same bounds are used to evaluate the performance of the approach. Yet, adversarial incentives may not be directly correlated with the \\u201chardness\\u201d of a source-target class pair as measured in Figure 1.b. The real-valued cost matrices are better justified in that respect. Second, would you be able to provide additional justification or analysis of the choice of the epsilon parameter for CIFAR-10? For MNIST, you were able to improve the epsilon parameter from epsilon=0.1 to epsilon=0.2 but for CIFAR-10 the epsilon parameter is identical to Wong et al. Does that indicate that the results presented in this paper do not scale beyond simple datasets like MNIST?\", \"minor_comments\": \"\", \"p2\": \"The definition of adversarial examples given in Section 2.2 is a bit too restrictive, and in particular only applies to the vision domain. Adversarial examples are usually described as any test input manipulated by an adversary to force a model to mispredict.\", \"p3\": \"typo in \\u201coptimzation\\u201d\", \"p5\": \"trade off -> trade-off\", \"p8\": \"the font used in Figure 2 is small and hard to read when printed.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1lTEh09FQ | Combinatorial Attacks on Binarized Neural Networks | [
"Elias B Khalil",
"Amrita Gupta",
"Bistra Dilkina"
] | Binarized Neural Networks (BNNs) have recently attracted significant interest due to their computational efficiency. Concurrently, it has been shown that neural networks may be overly sensitive to ``attacks" -- tiny adversarial changes in the input -- which may be detrimental to their use in safety-critical domains. Designing attack algorithms that effectively fool trained models is a key step towards learning robust neural networks.
The discrete, non-differentiable nature of BNNs, which distinguishes them from their full-precision counterparts, poses a challenge to gradient-based attacks. In this work, we study the problem of attacking a BNN through the lens of combinatorial and integer optimization. We propose a Mixed Integer Linear Programming (MILP) formulation of the problem. While exact and flexible, the MILP quickly becomes intractable as the network and perturbation space grow. To address this issue, we propose IProp, a decomposition-based algorithm that solves a sequence of much smaller MILP problems. Experimentally, we evaluate both proposed methods against the standard gradient-based attack (PGD) on MNIST and Fashion-MNIST, and show that IProp performs favorably compared to PGD, while scaling beyond the limits of the MILP. | [
"binarized neural networks",
"combinatorial optimization",
"integer programming"
] | https://openreview.net/pdf?id=S1lTEh09FQ | https://openreview.net/forum?id=S1lTEh09FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1lIONLrxN",
"HJgVhR3R1V",
"SJxUbuV5yN",
"HJeKfyftCQ",
"rJejIzN5Tm",
"H1l0z6X9a7",
"S1eiXngK6Q",
"SyxipoxF6m",
"Bklt8KeKpm",
"B1lKEYgtaX",
"r1lwasqC37",
"HkxFLc32nX",
"H1xCvGx3hX",
"H1gJQ_S53m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545065581565,
1544634028493,
1544337405723,
1543212817419,
1542238803386,
1542237462372,
1542159394842,
1542159298712,
1542158673246,
1542158640816,
1541479358956,
1541356112548,
1541304933914,
1541195799278
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1495/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1495/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"~Nicholas_Carlini1"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1495/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1495/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1495/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1495/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper provides a novel attack method and contributes to evaluating the robustness of neural networks with recently proposed defenses. The evaluation is convincing overall and the authors have answered most questions from the reviewers. We recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, accept.\"}",
"{\"title\": \"Thank you - A response\", \"comment\": \"Dear reviewer, thanks for taking the time to read our revised paper.\", \"regarding_1\": \"The sample of 1,000 test points that we used shows clear trends. We report standard deviation/quantiles whenever possible to give a full view of the results. Given that we lay out the results clearly and discuss regimes where our methods perform well or not, we believe we should not be at a disadvantage due to limited computational resources. Additionally, given that we are presenting optimization methods, a sample of 1,000 instances is very consistent with the size of the benchmarks used in the optimization literature. For instance, in Mixed Integer Programming, the common benchmark of MIPLIB2010 has 361 instances (http://miplib2010.zib.de/) that are all *very different*, i.e. coming from various applications/generators. In our setting, our optimization problems are near identical: same mathematical formulation (variables, constraints) and very similar data (input images, epsilon). As such, results on 1,000 test images are amply representative of the behavior of the algorithms we analyze.\\n\\nAdditionally, we would like to give a sense of the computational requirements for running experiments on the full 60,000 MNIST test points: for a single test point, method, network and epsilon, the method is run for 3 minutes. Considering the 6 values of epsilon, 8 network architectures and 4 methods (MIP, IPROP, PGD, SPSA), we would need 60K x 6 x 8 x 4 x 3 min > 34 million CPU minutes. Even with our cluster of 200 CPUs running simultaneously non-stop, we would need *120 days* to obtain the full results. We do agree that our initial 100 test points were too small a sample, and that is why we increased the sample size by one order of magnitude.\", \"regarding_4\": \"The practical relevance of binarized networks is studied at length in many papers. The recent set of contemporary papers on the topic by Courbariaux et al. and Hubara et al. (2016, 2017) are cited more than 1,000 times according to Google Scholar. The XNOR-Net paper on binarized convolutional networks is cited more than 800 times since ECML-PKDD 2016. As such, some researchers are now taking binarized networks to hardware implementations. Whether the binarized network was trained with -1/+1 weights from scratched or quantized from a full-precision network is irrelevant to our paper, as we take a trained network as input and attack it.\"}",
"{\"title\": \"thanks for the revision\", \"comment\": \"I thank the authors for the revision.\\n\\nRegarding 1, I think until the results on all the test images are published, I cannot recommend acceptance of the paper, because in my experience, the results can change significantly when testing on the entire test set versus a small subset of them.\\n\\nRegarding 4, I still find it difficult to understand the significance of BNNs when non binarized networks of much higher performance can be trained. I can certainly see ways to quantize non-binarized networks to facilitate hardware implementations.\\n\\nGiven these shortcomings, I feel I am unable to change my rating for this paper.I would suggest that the authors revise and resubmit this paper with complete experimental results and a careful evaluation against non-binarized networks.\"}",
"{\"title\": \"Revised version of the paper\", \"comment\": [\"We thank the reviewers for their comments and suggestions. We hope that the revised version of the paper and our direct replies to the reviews address all the issues that were raised.\", \"In particular, we note the following changes in the revised version:\", \"PGD: We now refer to the competing gradient-based attack as Projected Gradient Descent (PGD), rather than FGSM, in all figures and the text. We would like to emphasize that all the results reported in the original submission are indeed for PGD, but we were using the name FGSM to refer to it. The reviewers have correctly suggested that PGD is the right name for the method we are using, given that it is iterative (as opposed to the one-step FGSM).\", \"Additional baselines: On Reviewer1's recommendation, we have compared against the \\\"simultaneous perturbation stochastic approximation\\\" (SPSA) method used in [*]. The comparison with IProp is in the Appendix. SPSA performs significantly worse than IProp on MNIST, as can be seen in Figure 7.\", \"Larger test subset: On Reviewer1's recommendation, we have run additional experiments that use 1,000 instead of 100 MNIST test images to strengthen the results. All MNIST figures in the revised version now use 1,000 test images. The results are qualitatively consistent with the original results we reported.\", \"Larger epsilon: On Reviewer1's recommendation, we have run both our IProp method and PGD with larger attack radii, namely epsilon={0.05, 0.1, 0.2}; Figure 2 shows the prediction flip rates for MNIST. For these large radii, fooling the neural network is relatively easy, as manifested by the high bars. PGD can outperform IProp in this easy regime since IProp is more computationally expensive.\", \"Notation for h variables: On Reviewer3\\u2019s suggestion, the h variables are now always in {-1,1}, including in the MIP formulation.\", \"[*] Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. https://arxiv.org/pdf/1802.05666.pdf.\"]}",
"{\"title\": \"Will do\", \"comment\": \"Thanks for your comment.\", \"we_agree\": \"since the name Projected Gradient Descent (PGD) has been widely adopted to refer to the iterative version (as popularized in https://arxiv.org/abs/1706.06083, page 4), we will update the paper to use PGD throughout.\"}",
"{\"comment\": \"FGSM is a specific attack defined by Goodfellow et al. This attack takes one step in the direction of the gradient.\\n\\nIf you are using a different attack---the \\\"Basic Iterative Method\\\" from Kurakin et al., say---then you should call it by that attack name, and don't call it FGSM. This is misleading.\", \"title\": \"Please use standard terminology\"}",
"{\"title\": \"Our FGSM is indeed PGD\", \"comment\": \"Thank you for taking the time to read our paper!\", \"we_just_responded_to_the_reviews_with_the_following\": \"\\\"In our paper, FGSM refers to \\u201citerated FGSM\\u201d or \\u201cmulti-step FGSM\\u201d or PGD (these are all referring to the same method, e.g. see page 4 of https://arxiv.org/abs/1706.06083). We make that clear in section 2: \\u201cSoon thereafter, an iterative variant of FGSM was shown to produce much more effective attacks (Kurakin et al., 2016); it is this version of FGSM that we will compare against in this work.\\u201d. In fact, we run iterated FGSM/PGD for 3 minutes (same as MIP and IProp) with random restarts every 100 iterations. This provides FGSM with the same computational budget as IProp. We will update the paper to clarify this point in the experiments section.\\\"\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for taking the time to review our paper. Our answers to your questions are numbered in the same order as your review:\\n\\n1. Yes, IProp does work for pooling layers, as the layer-to-layer satisfaction problem (section 4.1) can be modified to compute a pooling transformation by adding constraints appropriately. For instance, max/mean pooling are easily implemented with linear inequalities and/or binary variables.\\n\\n2. we discuss this point at length in section 5.2, page 7. The MIP solver fails to scale to the wider/deeper networks, and thus times out at the 3-minute cutoff. The final solution returned by MIP may thus be suboptimal, which results in green bars being smaller than red bars.\\n\\n3. (same reply as to reviewer 1) Thanks for raising this point - we already use PGD and will clarify this in writing. In our paper, FGSM refers to \\u201citerated FGSM\\u201d or \\u201cmulti-step FGSM\\u201d or PGD (these are all referring to the same method, e.g. see page 4 of https://arxiv.org/abs/1706.06083). We make that clear in section 2: \\u201cSoon thereafter, an iterative variant of FGSM was shown to produce much more effective attacks (Kurakin et al., 2016); it is this version of FGSM that we will compare against in this work.\\u201d. In fact, we run iterated FGSM/PGD for 3 minutes (same as MIP and IProp) with random restarts every 100 iterations. This provides FGSM with the same computational budget as IProp. We will update the paper to clarify this point in the experiments section.\\n\\n4. the big-M values are computed by simply bounding the a_{1,j} variables at the first hidden layer, since the input image is in an epsilon-box. Then, those bounds are passed on to the h_{1,j} variables, i.e. if the lower and upper bounds on a given a_{1,j} are negative, then h_{1,j} must be -1. Those bounds on h_{1,j} are then propagated to the a_{2,j} variables, and so on and so forth. This procedure is simple and runs in time linear in the size of the network. We are happy to describe it in the paper, if the reviewer thinks that would be useful.\\nOur formulation differs from that of Tjeng in that our constraints (4), (5) and (7) encode the discrete sign activation function and the binary weights.\\n\\n5. in Narodytska et al. (2018), the goal is to prove that an input to a network cannot be fooled with epsilon perturbations, or provide a counter-example to that. As such, they do not care about maximizing the difference between the incorrect class and the true class as we do. In other words, the verification problem in Narodytska et al. (2018) is a feasibility problem rather than an optimization problem, and so it does not have an explicit objective function.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for the positive comments and suggestions!\", \"regarding_the_set_s\": \"indeed, your suggestion is valid and we have tried it early on. We sampled neurons closer to the threshold (zero) with higher probability than others. We did not observe much improvement over uniform sampling at the time, and thus decided to stick with simple uniform sampling.\", \"regarding_warmstart_results\": \"that\\u2019s a great point; we will do so in the final version of the paper.\", \"regarding_notation\": \"thanks for catching that; we will make the notation consistent throughout.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for the detailed comments - we believe most of your concerns are clarified below. In particular, our FGSM is the same as the PGD you refer to, as we explain below.\\n\\n1. We are currently running the same experiments reported in the paper on a much larger set of test images, and will report the updated results as soon as they become available.\\n\\n2. \\n- Regarding PGD: Thanks for raising this point - we already use PGD and will clarify this in writing. In our paper, FGSM refers to \\u201citerated FGSM\\u201d or \\u201cmulti-step FGSM\\u201d or PGD (these are all referring to the same method, e.g. see page 4 of https://arxiv.org/abs/1706.06083). We make that clear in section 2: \\u201cSoon thereafter, an iterative variant of FGSM was shown to produce much more effective attacks (Kurakin et al., 2016); it is this version of FGSM that we will compare against in this work.\\u201d. In fact, we run iterated FGSM/PGD for 3 minutes (same as MIP and IProp) with random restarts every 100 iterations. This provides FGSM with the same computational budget as IProp. We will update the paper to clarify this point in the experiments section.\\n\\n- Regarding gradient-free attacks: Thanks for bringing those papers to our attention. The first paper (https://arxiv.org/abs/1802.00420) proposes a method that uses the straight-through estimator to approximate gradients of a non-differentiable network; this is indeed the same trick used for FGSM/PGD on BNNs, and so our comparison with PGD already covers the method BPDA proposed in the paper. Regarding the second paper (https://arxiv.org/pdf/1802.05666.pdf), we are now implementing it and will report on results as soon as they become available.\\n\\n- Regarding bound propagation: indeed, we already do perform bound propagation since the input images are bounded in a small epsilon-box; the reported MIP results already use bound propagation. We will explicitly mention this in the updated paper.\\n\\n3. Thank you for the reference to this recent paper. We will consider these additional experiments.\\n\\n4. The point you raise relates to BNNs in general, rather than to our particular work. BNNs are amenable to fast hardware implementations as in the papers [a-c], which are much harder to achieve for non-binarized networks. As such, we believe it is important to study the robustness of BNNs to attacks, regardless of whether there exists robust non-binarized counterparts of similar size.\\n\\n[a] Liang, Shuang, et al. \\\"FP-BNN: Binarized neural network on FPGA.\\\" Neurocomputing 275 (2018): 1072-1086.\\n[b] McDanel, Bradley, Surat Teerapittayanon, and H. T. Kung. \\\"Embedded binarized neural networks.\\\" arXiv preprint arXiv:1709.02260 (2017).\\n[c] Yang, Li, Zhezhi He, and Deliang Fan. \\\"A Fully Onchip Binarized Convolutional Neural Network FPGA Impelmentation with Accurate Inference.\\\" Proceedings of the International Symposium on Low Power Electronics and Design. ACM, 2018.\"}",
"{\"comment\": \"While the motivation for studying attacks on binarized is not quite clear to me, I would like to point out that there are much stronger baselines than FGSM for attacking discrete, non-differentiable networks. In particular, several prior works have attempted to suggest binarization as a plausible defense and have evaluated their proposal by coming up with various attacks, all of which were subsequently broken because their attack method was weak compared to PGD [1] (and BPDA) [2]. So it is not sufficient to just compare against FGSM (as some reviewers have also pointed out).\\n\\n[1] https://arxiv.org/abs/1706.06083\\n[2] https://arxiv.org/abs/1802.00420\", \"title\": \"Weak baselines\"}",
"{\"title\": \"Interesting and novel idea, needs more experimental validation\", \"review\": \"The authors study the problem of generating strong adversarial attacks on binarized neural networks (networks whose weights are binary valued and have a sign function nonlinearity). Since these networks are not continuous (due to the sign function nonlinearity), it is possible that standard gradient-based attack algorithms are not effective at producing adversarial examples. While this problem can be encoded as a mixed integer linear program, off-the-shelf MILP solvers are not scalable to larger/deeper networks. Thus, the authors propose a new target propagation style algorithm that attempts to infer desired activations at each layer (from the perspective of maximizing the adversary's objective) starting at the final layer and moving towards the input. The propagation at each layer requires solving another MILP (albeit a much smaller one). Further, in order to prevent the target propagation from discovering assignments at upper layers that are unachievable given the constraints at lower layers, the authors propose two heuristics (making small moves and penalizing deviations from the previous target values) to obtain an effective attack algorithm. The authors validate their approach experimentally on MNIST/Fashion MNIST image classifiers.\", \"quality\": \"The paper is reasonably well written and the key ideas are communicated well. However, the experimental section needs to be improved significantly.\", \"clarity\": \"The paper is easy to understand and organized well.\", \"originality\": \"The application of target propagation in the context of adversarial examples is certainly novel and so are the specific enhancements proposed in the context of adversarial example generation. The\", \"significance\": \"The study of adversarial examples for binarized networks is novel and important and effective attack generation algorithms are a significant first step towards training robust models of this type - this could enable deployment of robust and compact binarized classifiers in on-device settings (where model size is important).\\n\\nCons\\nMy main concerns with this paper are regarding the experimental evaluation - I do not feel these are sufficient to justify the strength of the attack method proposed. Here are my broad concerns:\\n1. Even though the datasets used are small (MNIST/Fashion MNIST), the experimental validation of adversarial attacks is only performed on 100 test examples. This is not sufficiently representative (given experimental evidence with adversarial attacks on non-binarized models) and this needs to be addressed for the results to be considered conclusive.\\n\\n2. The attack method is only compared to FSGM, which is known to be a rather poor attack even on non-binarized networks. The authors should compare to stronger gradient based attacks (like PGD) and gradient free attacks which have been used to break adversarial defenses that are nondifferentiable in prior work - https://arxiv.org/abs/1802.00420 and https://arxiv.org/abs/1802.05666). Further, the MILP approach used can be strengthened by doing better bound propagation (like in https://arxiv.org/pdf/1711.00455.pdf)\\n\\n3. The attack radii used are very small compared to what has been used in non-binarized networks, where networks have been trained to even be verifiably robust to adversarial pertrubations of much larger radii (see for example https://arxiv.org/pdf/1805.12514.pdf). 
Given the existence of this work, it is important to evaluate the algorithms proposed on larger radii (since it is possible to construct non-binarized networks that are indeed robust to perburbations of eps=.1-.3 on MNIST).\\n\\n4. Motivation for binarization: I assume that motivation for binarized models arising from faster training/inference times and smaller model sizes. However, to justify this, the authors need to compare their BNNs to comparable non-binarized neural networks (for example,ones that are similar in terms of number of bits used to represent the model) on training time, inference time and adversarial robustness. Otherwise, it seems hard to see why binarized networks are valuable from a robustness.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"an interesting paper\", \"review\": \"This paper proposed a new attack algorithm based on MILP on binary neural networks. In addition to the full MILP formulation, the authors proposed an integer target propagation algorithm (IProp) to find adversarial examples by solving a smaller (instead of the full) MILP.\\n\\nThe topic is important but the clarity should be improved. It is less clear when describing the Iprop algorithm.\", \"questions\": \"1. Can IProp work for other architectures? It looks like the propagation steps work on only fully connected layers (or conv layers) with activation functions. Does it work for pooling layers?\\n2. The results in Figure 2 look weird and might be wrong:\\nsince MIP is the exact solution (green bar), how is it possible that the prediction flip rate of IProp larger than MIP? See top row figures where some red bars are larger than green bars. \\n3. Also, is the FGSM method comparing in Figure 2 operating on the approximate BNN as described in the related work? How does the performance of PGD (Madry etal) compared to IProp? \\n4. How are the big M parameters in equation 4 and 5 computed? Is the formulation eq (1) to (8) the same as that in Tjeng 2018? Since BNN is a special case of general neural networks. Please elaborate. \\n5. In Sec 2 related work, why \\\"there's no objective function\\\" for verification method?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reviewer comments\", \"review\": \"This paper presents an algorithm to find adversarial attacks to binary neural networks. Binary neural networks uses sign functions as nonlinearities, making the network essentially discrete. Previous attempts at finding adversarial attacks for binary neural networks either rely on relaxation which cannot find very good adversarial examples, or calling a mixed integer linear programming (MILP) solver which doesn\\u2019t scale. This paper proposes to decompose the problem and iteratively find desired representations layer by layer from the top to the input. This so called Integer Propagation (IProp) algorithm is more efficient than solving the full MILP as it solves much smaller MILP problems, one for each layer, thus each step can be solved relatively quickly. The authors then proposed a few more improvements to the IProp algorithm, including ways to do local adjustments to the solutions, and warming starting from an existing solution. Experiments on binary neural nets trained for MNIST and Fashion MNIST show the superiority of the proposed method over MILP and relaxation based algorithms.\\n\\nOverall I found the paper to be very clear and the proposed method is sound. I think combining ideas from discrete / combinatorial optimization with deep learning is an important research direction and can shed light on training and verifying models with discrete components, like the hard nonlinearities in the binary neural nets studied in this paper.\\n\\nIn terms of the particular proposed approach, it is hard for me to imagine the blind IProp that does not take the input into account until the last layer is ever going to work. The small step size modifications make a lot more sense. Regarding the selection of the set S, in the paper the authors simply sampled elements to be in S uniformly, but it seems possible to make use of the information from the forward pass, and choose the hidden units that are the closed to reaching the desired activations. Would that be any better?\", \"a_few_minor_comments\": [\"when reporting warm start results, it would be good to also show the performance of the FGSM solution used for warm starting, in addition to the other two results shown in Figure 6 to have a more complete comparison\", \"the hidden units h_{l,j} were formulated to be in {0, 1} in equation (7), but everywhere else in the paper they are assumed to be in {-1, +1}, which is not consistent and slightly confusing.\", \"Overall I think this is a solid paper and support accepting it for publication.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyxpNnRcFX | Modulating transfer between tasks in gradient-based meta-learning | [
"Erin Grant",
"Ghassen Jerfel",
"Katherine Heller",
"Thomas L. Griffiths"
] | Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time. Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network. Generalizing the model-agnostic meta-learning (MAML) algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as well as a latent assignment of tasks to initializations. This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters. Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification. We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time as demonstrated on a set of few-shot regression tasks. | [
"meta-learning",
"clustering",
"learning-to-learn",
"mixture",
"hierarchical Bayes",
"hierarchical model",
"gradient-based meta-learning"
] | https://openreview.net/pdf?id=HyxpNnRcFX | https://openreview.net/forum?id=HyxpNnRcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygTZubNeN",
"rJgk5vxH1E",
"HkxkCllryN",
"HJxYmA1rJ4",
"rkxmRTySkE",
"rkl1V9kry4",
"H1lwg9yHkV",
"ryg_PJdYCm",
"rkeZMawKAm",
"Bkx2s5vFRm",
"BJgTUwVtCQ",
"rkgvengN0m",
"rJgBMFg4Rm",
"rkl_0aJV0Q",
"SkehYq1V0m",
"H1xS9OkVRQ",
"Hkeu__y4AQ",
"BJxnHUyEAX",
"ryeDpxpX0m",
"B1xyDeTQRQ",
"H1eS7lpQ0X",
"BkxRN1aQCQ",
"S1l6hQFp3Q",
"Skl0gf6On7",
"BJeOO--12Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544980485329,
1543993222946,
1543991494930,
1543990816759,
1543990730823,
1543989799391,
1543989743223,
1543237471749,
1543236873180,
1543236259920,
1543223125009,
1542880239065,
1542879501006,
1542876623792,
1542875779990,
1542875277498,
1542875248025,
1542874691723,
1542865087099,
1542864982766,
1542864924589,
1542864694484,
1541407668882,
1541095925981,
1540456816476
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1493/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1493/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper is extending the meta-learning MAML method to the mixture case. Specifically, the global parameters of the method are now modeled as a mixture. The authors also derive the elaborate associated inference for this approach.\\n\\nThe paper is well written although Rev2 raises some presentation issues that can surely improve the quality of the paper, if addressed in depth. \\n\\nThe results do not convince any of the three reviewers. Rev3 asks for a clearer exposition of the results to increase convincingness. Rev2 and Rev1 also make similar comments. \\n\\nRev1 also questions the motivation of the approach, although the other two reviewers seem to find the approach well motivated. Although it certainly helps to prove the motivation within a very tailored to the method application, the AC weighted the opinion of all reviewers and did not consider the paper to lack in the motivation aspect. \\n\\nThe reviewers were overall not very impressed with this paper and that does not seem to stem from lack of novelty or technical correctness. Instead, it seems that this work is rather inconclusive (or at least it is presented in an inconclusive manner): Rev1 says that the important questions (like trade-offs and other practical issues) are not answered, Rev2 suggests that maybe this paper is trying to address too much, and all three reviewers are not convinced by the experiments and derived insights. \\n\\nFinally, Rev2 points out some inherent caveats of the method; although they do not seem to be severe enough to undermine the overall quality of the approach, it would be instructive to have them investigated more thoroughly (even if not completely solving them).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Timely idea... but paper lacks in results and conclusive insights.\"}",
"{\"title\": \"Details on the revised submission\", \"comment\": [\"We sincerely thank the reviewers for feedback on making the submission better. Below, we describe how we have updated the submission to incorporate the suggested revisions.\", \"To address the concerns of R1 on technical correctness, we have revised the nonparametric mixture section to add more background on the classical inference procedures and traditional trade-offs that motivate our approach. Furthermore, we elaborated on the derivation of our point-estimation procedure to clarify the origin of each objective function in the pseudo-code (Algorithm 3 & Subroutines 4/5).\", \"We understand that the inclusion of the standard miniImageNet benchmark result in our paper was framed in a manner that does not make our goals transparent, as identified by R2. Accordingly, we have revised our paper to clarify how our results present the state-of-the-art on comparable architectures, and that our focus is on the task-agnostic, continual learning setting.\", \"We thank R2 for suggesting ablations studies that would improve understanding of the proposed method. Unfortunately, since these specific suggestions were made the morning of the revision deadline, we were unable to include them in the updated version, but will include the following in a future revision:\", \"An ablation study for the number of components in the mixture for the standard miniImageNet benchmark.\", \"An ablation study for an additional entropy regularizer that encourages cluster differentiation (analogous to the repulsion term in BMAML).\", \"Subject to the corresponding authors' release of code, a comparison against BMAML. We would, however, like to point out that R2's claim that the repulsion term will \\\"certainly \\u2026 do more than having no repulsion\\\" is too strong; this is an empirical question that has not yet been evaluated.\", \"An ablation study that better disentangles the effect of model capacity from the mixture approach (by e.g., keeping the number of parameters/filters the same as the mixture). We note that BMAML does not investigate such a baseline, and so the work is subject to R2's criticism: There is a \\\"confounding factor in the increased capacity of the meta-learner\\\".\", \"As a baseline such as MAML or BMAML is likely to struggle in the evolutionary setting and suffer from catastrophic forgetting, we will develop a more appropriate baseline to include in this setting. We note that no such method that applies to the task-agnostic continual meta-learning setting with neural networks currently exists. As well, we maintain that the demonstration of a baseline such as MAML failing on an important class of tasks is not \\\"trivial\\\", and we view such a demonstration as an empirical result worthy of note in the meta-learning community.\", \"We recognize and have fixed, in the updated version of the paper, the mismatch between the experiments section, the figures, and their corresponding captions. 
(The errata included that an introduction to the standard miniImageNet benchmark was missing, the synthetic experiments description referred to the validation loss as the responsibilities and identified a task change at 2100 instead of 1400, the caption discussed responsibilities not present in the figure, and stated that it reported losses for each cluster but the figure presented the average across clusters.)\", \"We acknowledge that it was hard to identify the desired differentiation in the previous results for the synthetic regression experiment figure, as it focused on the quantitative improvement of the validation losses. We have thus added a plot for the evolution of cluster assignments for each task over the 3 training phases corresponding to the three underlying task distributions.\", \"The new figure for regression tasks demonstrates:\", \"cluster spawning automatically when the task distribution is changed (in a task-agnostic manner)\", \"cluster differentiation across tasks where the 3rd cluster is clearly specialized whereas the 2nd cluster has the highest responsibility for the 2nd task distribution, and so on.\", \"We moved the evolving miniImageNet experiment figure to the appendix. R2 expressed concerns about mode collapse in this experiment, which we elaborate on in the specific response to R2. We have achieved better differentiation in recent runs of our algorithm by better tuning of the hyperparameters and choice of task-specific image transformation. Nonetheless, we could not satisfactorily assess the effect of these hyperparameters and datasets, and recreate the new figures in time for the revision deadline, and so leave these results to a future version of the paper.\", \"We have emphasized that our method is the first to address the generalized setting of non-stationary meta-/few-shot learning. In particular, prior work for continual learning does not present suitable datasets or benchmarks for few-shot learning, and current meta-learning work does not avoid catastrophic forgetting.\"]}",
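One plausible Python sketch of the entropy regularizer mentioned in the ablation list above follows; its exact shape is an assumption on our part, not the authors' formulation:

import numpy as np

def assignment_entropy(responsibilities, eps=1e-12):
    # Shannon entropy of the task-to-cluster responsibilities; adding a
    # positive multiple of this term to the (minimized) meta-objective
    # pushes assignments toward confident, differentiated clusters.
    r = np.clip(responsibilities, eps, 1.0)
    return -np.sum(r * np.log(r))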
"{\"title\": \"Thank you for your response. [3/3]\", \"comment\": \"> \\u201cMore importantly, the mixture model fails to retain performance on the two previous tasks ... indicating a catastrophic loss of performance.\\u201d\\nAs there is no standard threshold for what qualifies as \\u201ccatastrophic forgetting\\u201d, we quantitatively compare to MAML. MAML indicates a more significant loss of performance. It is quite likely that the different loss scales across task distributions is the main cause of the decay in performance for our method (as discussed in detail above in this response). However, the final validation loss on each task is not far from the best loss (at the end of each corresponding training phase), and, importantly, is superior to MAML. If the reviewer has any suggestions for a more appropriate baseline than MAML in this setting, we would very much appreciate the suggestion.\\n\\n> \\u201cwhen a new task is introduced, *three* clusters are immediately created \\u2026 suggesting one of them is redundant despite there now being three tasks.\\u201d\\nThe unintended behavior seen in this figure is the unnecessary spawning of a 3rd cluster right after the 2nd one. As such, *two* (not three) clusters were spawned *almost at the same time* (within 200 iterations of each other, not immediately). Understandably, there was not much differentiation between these two clusters (making one of them redundant) for the rest of this training phase. We have, however, found better cluster differentiation in more recent runs of our algorithm (with a different set of hyperparameter values) that we will add to a later revision of this work. We have moved the current miniImageNet figure to the appendix. \\n\\n> \\u201cWithout plotting the actual assignment distribution, it is hard to say anything...\\\"\\nWe agree with the reviewer that the unnormalized plot of validation negative log likelihoods would not strongly demonstrate our point regarding task differentiation. However, we would like to refer the reviewer to a new figure (Fig. 6) for the regression setting where we demonstrate sufficient differentiation across tasks and clusters by plotting the assignment distribution. We would, however, like to clarify that the change of task distribution between the training phases does not guarantee that the distributions have zero overlap. Accordingly, we cannot expect the clusters to be perfectly specialized.\\n\\n> \\u201csmall batch sizes (not reported in the manuscript)\\u201d\\n> \\u201cdoesn't explain the sudden change in the middle of training on a given task\\u201d\\nWe refer the reviewer to the \\u201chyperparameters\\u201d paragraph of the corresponding experiment, where the batch size is reported. Given the potentially high variance of the gradient of the validation loss across episodes due to the small batch of training data (1 example per class) as well as the relatively small meta-batch size (4), it is not surprising at all to see noisy fluctuations in parameter values. However, an additional source of noise in these diagrams is the small batch size we have used when evaluating on the meta-validation data. 
We will correct this by reporting the average validation loss on the entire meta-validation dataset for parameters at each iteration interval of training; this is costly but gives a much more stable estimate of generalization.\\n\\n> \\u201cA more general point on both these experiments is that a sequence of three task is a very short sequence...\\u201d\\nWe based our sequence of 3 on the permuted MNIST experiments in the Elastic Weight Consolidation paper [Kir2017]. However, we agree that we could better demonstrate our point with a longer sequence, and will explore such experiments in a future version of this work.\\n\\n> \\u201cFurther, introducing new experiments...without a relevant baseline makes your results hard to relate to.\\u201d\\nWe would like to note that, at the time of submission, there had been *no prior work on continual meta-learning*; the fact that MAML will trivially forget catastrophically is only further justification for the urgent need for a method that can tackle the non-stationary meta-learning setting. To dismiss this work\\u2019s contribution on the grounds that a standard method for meta-learning cannot handle an evolving setting is analogous to dismissing the whole of the continual learning literature because SGD trivially forgets catastrophically.\\n\\nWe have considered different datasets to demonstrate our method; however, we could not find datasets or sets of tasks suitable for *both \\u201cevolving\\u201d and \\u201cfew-shot\\u201d* settings. Note that the continual learning datasets the reviewer mentioned (permuted MNIST/CIFAR-10) are not immediately suitable for few-shot learning or meta-learning as there is no standardized batching into task episodes, each with a train and validation batch. Moreover, stylized miniImageNet is much more complex than these datasets (in terms of both pixel density and image content). Previous approaches to continual learning have never, to the best of our knowledge, been applied to data of this complexity.\"}",
"{\"title\": \"Thank you for your response. [2/3]\", \"comment\": \"> \\\"I\\u2019m not sure that just because random initialization breaks symmetry in the standard use case, it will do so when applied over a distribution of gradient fields ... your parameters are initialized around origin (presumably); from that point, the variation in initialization may very well be too small to prevent all gradients from pointing in the same direction and does so only if the loss surface is hyper-sensitive. \\\"\\n\\nWe note that we are not explicitly maintaining a distribution over gradient fields. However, the reviewer is correct in identifying another component on which we could perform an ablation: The random per-component initialization. We will perform the corresponding ablation that sets a new mixture component to the same value as another component.\\n\\nWe note, however, that too small a variance in initialization is not a concern of our approach. In particular, the spawning of a new cluster makes use of a variance hyperparameter (a parameter of the global prior) that we tune to ensure initial differentiation without having useless clusters. High-dimensionality may or may not play a role in differentiation at the initialization point, but this is a matter of empirical verification that is not within the scope of our paper.\\n\\nThere is, however, another component that breaks symmetry later on in training, as more data has been observed: the richer-get-richer property of predictive distribution for a cluster identity in Bayesian nonparametric clustering. As a consequence, the probability of joining an existing cluster is proportional to the size of that cluster. Over the course of training, the rich-get-richer property ensures that the meta-parameters do *not* follow the same optimization trajectory, and is thus another component driving specialization.\\n\\n> regarding synthetic experiments:\\nThe way in which we initialize a new cluster plays a role in the capacity of the mixture to succeed on a new task as compared to the baseline MAML. The new cluster is initialized with a sample from the global prior (i.e., it is perturbed). This perturbation may aid adaptation to a novel task. In contrast, the MAML parameters are applied without change to the new task data.\\n\\n> Comparison to particle methods with repulsion terms:\\nWe re-iterate that particle methods such as B-MAML, while they have the potential to capture multiple modes, are not optimal for the evolving setting. Furthermore, our interest is in clustering similar tasks together for optimal transfer. Particle methods do not perform that function, as they cannot constrain the meta-gradient update based on the cluster assignments of the tasks. In particular, note the lack of weighting (e.g., by a factor related to the repulsion term) in the meta-loss in Eq. (5) of BMAML [Kim18].\\n\\n> \\\"[the authors of BMAML] show that going from 1 (equivalent to MAML) to 3 or 5 particles yields a 3-percentage point increase in performance. This is a relative measure on a comparable architecture, and as such does provide a relevant baseline.\\\"\\nWe reported a 2% improvement with 5 components, in contrast to the 3% improvement of BMAML with 5 particles. However, given that the architectures are **not** comparable (BMAML: 5 layers of 64 filters; ours: 4 layers of 32 filters), this contrast is inconclusive, as more parameters might ease differentiation. 
As we stated in the main revision report, we are happy to run a more standardized comparison, but cannot straightforwardly do so as the authors of BMAML have not, to our knowledge, released code.\\n\\n> \\u201cFirst note that only the third panel \\u2026 differentiation between panels\\u201d:\\nWe apologize for the confusion caused by the mismatch between the figure and the main text due to a mixup at submission time. We have fixed that issue in addition to fixing the outdated caption in the submitted revision. \\n\\n> \\u201cAfter that, task 2 is introduced \\u2026 might be poorly calibrated. at ~1500 \\u2026 appears to behave oddly\\u201d\\nHere, we believe that MAML, at the end of the first training phase, converged to a local minimum with respect to the second task distribution's loss. Accordingly, MAML updates did not successfully learn the slightly different distribution of tasks (odd vs. even polynomial regression). In contrast, our algorithm spawns a new cluster that is further from such a stationary point (due to the perturbation induced by the global prior G) and is thus able to better learn the new tasks. However, we understand that the current figure might appear to report a subpar choice of a MAML run; in a future revision of the paper, we will rerun this experiment a larger number of times and plot the mean and variance/CI of the validation loss at each step of the training process to investigate this phenomenon.\"}",
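A minimal Python sketch of the rich-get-richer E-step discussed in this thread, combining a Chinese-restaurant-process prior over cluster identities with a per-cluster likelihood term; the exact form used in the paper may differ, and the names here are illustrative:

import numpy as np

def crp_responsibilities(task_losses, cluster_sizes, alpha):
    # task_losses has length K+1: the post-adaptation loss of each of
    # the K existing initializations on the task, plus that of a fresh
    # component sampled from the global prior. Existing clusters are
    # weighted by their size (rich-get-richer); the new component is
    # weighted by the concentration parameter alpha.
    prior = np.append(np.asarray(cluster_sizes, dtype=float), alpha)
    log_post = np.log(prior / prior.sum()) - np.asarray(task_losses)
    log_post -= log_post.max()        # numerical stabilization
    post = np.exp(log_post)
    return post / post.sum()          # responsibilities (E-step)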
"{\"title\": \"Thank you for your response. [1/3]\", \"comment\": \"We thank the reviewer for the response. We respond to specific points below.\\n\\n> \\\"...with a known task distribution you can leverage more information in your objective and could possibly encourage greater simultaneous exploration of initialization space.\\\"\\n\\nWe reiterate that the task-agnostic setting is the core problem that our approach attempts to address, and therefore respectfully disagree that the setting is beyond scope. It does indeed present additional challenges but represents a more setting that is appropriate for applications in which the underlying task distribution has an uncontrollable, non-stationary component.\\n\\n> \\\"Notably, MAML, and as such a mixture thereof, is not scale invariant. If one task has gradients an order of magnitude larger than any other, all mixture components will be dragged towards that (or some) task-specific minima.\\\"\\n> \\\"My suspicion is that this happens because the loss with respect to the third task is many orders of magnitude larger.\\\"\", \"the_reviewer_is_correct\": \"The regression loss is unbounded and differences in magnitude between types of tasks (e.g., sinusoidal vs. polynomial) may be significant. Continual learning is difficult for any method that makes use of the loss on each type of task as a measure of catastrophic forgetting. A possible, but not immediate, solution is the normalization of loss scale between tasks. However, in our task-agnostic setting, such normalization would require prior information about the task distribution, which we do not assume access to.\\n\\nWe note that other established methods for continual learning, such as elastic weight consolidation (EWC) [Kir2017], synaptic intelligence (SI) [Zen17], and variational continual learning (VCL) [Ngu17], are also subject to the concern of incomparable task losses, as they make use of the loss for each task in order to overcome catastrophic forgetting. Moreover, although these methods are task-aware, we are not aware of an intrinsic algorithmic component that achieves normalized loss scales. We believe that this is because it is difficult to disentangle catastrophic forgetting from the inherent difficulty of a task simply by looking at the model's performance on the task (as measured by the task loss). Such an algorithmic component would be an interesting contribution, but is one that is not addressed in these works, nor is it one that we intend to address here, although we believe an approach such as gradient normalization (https://arxiv.org/abs/1711.02257) may be promising.\\n\\nWe acknowledge that we did not clarify these details surrounding loss scaling in the submission, and, as such, the lack of robustness to differences in loss scale between tasks could seem unique to our method. We also note that this issue is not prevalent in the standard benchmarks for continual learning (e.g., permuted MNIST), since these tasks are dealt with using the cross-entropy error, and thus the loss scaling is not significantly different across tasks. We thank the reviewer for pointing out this unrepresentative choice of tasks for the synthetic regression benchmark; however, we will defer an appropriate revision to a future draft of this work, as we could not revise this section in the day between the reviewer's comments and the revision deadline.\"}",
"{\"title\": \"Thank you for your response. [2/2]\", \"comment\": [\"> Comparison to VERSA:\", \"We were not aware of the per-iteration speed comparison since such results were only added recently (10 days before the revision deadline of Nov. 23) to the concurrent ICLR submission of VERSA [\\\"Versatility\\\" in Section 5.2, pg. 8]. However, seeing that the comparison merely adapted the public GitHub version of MAML, we would like to note that:\", \"It is difficult to disentangle algorithmic and implementational speedups. Our own implementation of MAML is significantly faster than the standard version from the original author's GitHub repo, which is not heavily parallelized.\", \"It is unclear if VERSA takes the same number of iterations during training (and thus the 5x speedup might not translate to 5x overall convergence speedup). This is related to a point that we make in the first author response: Approaches using amortization or hyper-networks trade off an increase in overall training time (due to extra parameters in the inference / hyper-network) for fast test-time inference. To fully disentangle these two components would be an orthogonal contribution. It is not the focus of our paper, as we focus on making the novel contribution of task-agnostic, continual meta-learning.\", \"Our task-specific weights are a high-dimensional parameter set, unlike VERSA\\u2019s restriction to the last layer.\", \"VERSA is *not* a comparable architecture to ours, as it uses 5 convolutional layers [see Table D.2, p. 19 of Gor18], as compared to 4 for the standard architecture, in addition to the extra parameters of the hyper-network.\", \"Our model achieves SOTA with the standard architecture of 4 convolutional layers [Vin16]. Amongst all reported approaches with various architectures, VERSA\\u2019s 53.4% 5-way 1-shot accuracy is nowhere near the state-of-the-art, which uses residual networks [Gid18, Mun18a, Mun18b, Ore18, Rus18].\", \"> \\u201cit is not clear to me what their goal is in the first place\\u201d\", \"We believe we made it clear in the submission and rebuttal that our main goal is addressing the continual setting of meta-learning that focuses on the question of learning-to-learn in a non-stationary environment. Towards that goal, our regression tasks are in line with prior work and fully demonstrate:\", \"a considerable amount of task differentiation (from the assignments plot);\", \"adaptive model capacity (clusters were indeed spawned); and\", \"improved generalization and less catastrophic forgetting (improved loss values for older tasks while training on newer tasks as compared to MAML).\", \"Lastly, we would like to point the reviewer to the ICLR 2019 reviewer guidelines (https://iclr.cc/Conferences/2019/Reviewer_Guidelines) in response to the following statement of R1: \\\"And where [sic] you like it or not, the Deep NN field (key audience for ICLR) is very much focused on competitive results.\\\" In particular, note the first paragraph:\", \"\\\"Does the paper present substantively new ideas or explore an underexplored or highly novel question? Papers that take risks and study a less explored area are likely to have less polished results, papers that study a highly explored topic are likely to have more polished results. This phenomenon often results in reviewers excessively penalizing papers that explore underexplored topics, and it is worth accounting for this.\\\"\"], \"we_study_an_unexplored_area\": \"task-agnostic, continual meta-learning. 
We sincerely believe that \\\"a substantial fraction of the ICLR attendees [would] be interested in reading this paper\\\" because of this particular problem setting, and our approach to addressing it. Regrettably, the reviewer did not refer to our contribution either in the original review or in the follow-up response. We encourage the reviewer to more carefully consider the potential contributions of a work when reviewing in the future.\\n\\nReferences\\n\\n[Gid18] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2018. URL https://arxiv.org/abs/1804.09458.\\n\\n[Mun18a] Tsendsuren Munkhdalai and Adam Trischler. Metalearning with Hebbian fast weights. arXiv preprint arXiv:1807.05076, 2018.\\n\\n[Mun18b] T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler. Rapid adaptation with conditionally shifted neurons. In ICML, 2018.\\n\\n[Ore18] Boris N Oreshkin, Alexandre Lacoste, and Pau Rodriguez. TADAM: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems (NIPS), 2018. URL https://arxiv.org/abs/1805.10123.\\n\\n[Rus18] Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.\"]}",
"{\"title\": \"Thank you for your response. [1/2]\", \"comment\": \"We thank the reviewer to the response. We respond to specific points below.\\n\\n> A \\u201cstraightforward mixturization\\u201d is an understatement of the contribution of this work which does not recognize the difficulties of gradient-based clustering in both the stochastic, fast-adaptation, few-shot setting. We highlight some central challenges:\\n\\n_Stochastic setting_: Clustering in the stochastic setting with gradient information is not as straightforward as metric- or likelihood-based clustering that iteratively passes over the whole dataset. This is particularly a concern in the non-parametric setting which normally requires access to the set of all prior assignments (atoms) for the *entire* dataset in order to refine all assignments at each iteration (M-step). Even in the parametric setting, there is little work on *stochastic* gradient-based clustering (we can think of Neural EM [Gre17], which we cite in our paper).\\n\\n_Joint learning_: The fact that the reviewer refers to our mixture as \\u201con top of MAML\\u201d indicates that our presentation might have not been clear enough: Our mixture learning approach is adaptive and takes place within gradient-based fast adaptation, not as an external step or loop over individual MAML components. In particular, the weighting of mixture components (E-step) takes into account the per-task loss and therefore allows appropriate information transfer between clusters for the meta-gradient step (M-step). We emphasize that this is not a simple categorization of the tasks after transfer has taken place, but an online procedure with dependencies.\\n\\nOn the other hand, a \\\"straightforward mixturization\\\" could cluster the meta-parameters (\\\\theta, not the task-specific weights \\\\phi) after each component is trained independently. This does not address the issue of heterogeneity as it does not limit negative transfer or amplify positive transfer during training.\\nTherefore, we can see the unique capacity of our EM-learning procedure is to cluster related tasks together to inform fast weight adaptation in a scalable manner.\\n\\n_No geometric assumptions/clustering based on optimal transfer instead of enforcing some notion of task similarity_: A \\\"straightforward mixturization\\\" could rely on the L2 distance among the cluster components (after both inner and update steps are taken). Not only would this approach encounter some scaling issues in the high-dimensional setting, but the L2 distance also does not account for the permutation of weights in a neural network, and could therefore erroneously differentiate functionally similar networks. This is also a limitation of likelihood-based clustering with standard (e.g., multivariate Normal) distributional assumptions. In contrast, our approach leverages the same computation used for gradient-based fast-adaptation to cluster based on the improved training loss, and to determine which tasks most benefit from mutual transfer.\\n\\nWe do not view our approach as a \\\"straightforward mixturization of MAML\\\", nor do we think that \\\"mixturizing\\\" probabilistic models is \\\"incremental\\\". We view our submission as continuing a line of work that began with the identification of gradient-based meta-learning as parameter estimation in a hierarchical probabilistic model. 
We detail how we view this investigation in the author response entitled \\\"Response to all reviewers: Alternate meta-learning algorithms correspond to the use of different inference techniques\\\".\\n\\n> Notes on higher capacity / increased complexity:\\nA higher capacity is unavoidable to address parameter saturation in an evolving setting. This is the main motivation for our approach. No prior work has addressed ways to optimally add or tune the capacity of a meta-learner, either in the stationary or non-stationary setting. Our approach allows adaptation of model capacity as needed based on the observed data. This cannot be achieved by cross-validation in the more difficult online setting.\"}",
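For readers who want the E-step/M-step interleaving described above in concrete form, here is a minimal sketch of one stochastic EM step for a mixture of gradient-based meta-learners. It is our illustrative reading, not the submission's actual code: the task interface, the softmax responsibilities, and the single inner-loop step are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def em_meta_step(components, tasks, inner_lr, meta_opt):
    """One stochastic EM step for a mixture of MAML-style meta-learners.

    components: list of dicts mapping parameter names to tensors
                (each dict is one cluster's initialization, theta_l).
    tasks:      mini-batch of task objects exposing train_loss(params)
                and val_loss(params); this interface is hypothetical.
    """
    meta_loss = 0.0
    for task in tasks:
        adapted_train, adapted_val = [], []
        for theta in components:
            # Inner loop (fast adaptation): one gradient step from this
            # component's initialization, kept differentiable for the M-step.
            loss = task.train_loss(theta)
            grads = torch.autograd.grad(loss, list(theta.values()),
                                        create_graph=True)
            phi = {name: p - inner_lr * g
                   for (name, p), g in zip(theta.items(), grads)}
            adapted_train.append(task.train_loss(phi))
            adapted_val.append(task.val_loss(phi))
        # E-step: soft responsibilities from the post-adaptation training
        # loss, which plays the role of a per-component log-likelihood.
        resp = F.softmax(-torch.stack(adapted_train), dim=0).detach()
        # M-step contribution: responsibility-weighted validation loss,
        # so the meta-gradient flows mostly to the best-matching cluster.
        meta_loss = meta_loss + (resp * torch.stack(adapted_val)).sum()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```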
"{\"title\": \"Concern review-specific rebuttal (2/2)\", \"comment\": \"> Note that the total cluster responsibility reported in Figure 6 is the sum of cluster responsibilities across the different tasks in a single mini-batch. Accordingly, at each moment, one or two clusters are assigned tasks (from the minibatch of 4 tasks) with a non-zero probability.\\n\\nAdmittedly, I found this figure extremely difficult to interpret and am still confused by it. It appears to me that after 20K iterations, when a new task is introduced, *three* clusters are immediately created, though only two should be needed. Moreover, the cluster assignment seems to fluctuate more than it should: the green and red cluster essentially take turns being assigned, which makes little sense to me. Moreover, after 25K steps (still two tasks), all three clusters seem to be active always with roughly equal responsibilities, suggesting the allocation may be largely random or at least not diversified as one would expect. Without plotting the actual assignment distribution, it is hard to say anything about how what these clusters are doing and how consistent they are with respect to the tasks. For instance, when the third task is introduced, something equally confusing happens at 35K steps. All of a sudden, green and red clusters again starts taking turns being assigned, suggesting one of them is redundant despite there now being three tasks. This may be an artifact of very small batch sizes (not reported in the manuscript), but that doesn't explain the sudden change in the middle of training on a given task. \\n\\nI am certainly open to the possibility of having misinterpreted this figure (and the former), if so please do correct me. \\n\\nA more general point on both these experiments is that a sequence of three task is a very short sequence to demonstrate an ability to modulate over an evolving task distribution. Further, introducing new experiments (there are several benchmarks on versions of MNIST or evolving Cifar10 (Zenke et al., 2017), why not use any of those?) without a relevant baseline makes your results hard to relate to. For instance, MAML is trivially going to incur catastrophic forgetting in an evolutionary setting. I do take your point though that the evolving miniImagenet experiment is mainly about exploring cluster differentiation.\"}",
"{\"title\": \"Concerning review-specific rebuttal (1/2)\", \"comment\": \"Dear authors,\\n\\nThese comments elaborate on concerns I raised in my initial review. Let me first emphasize that my evaluation is not primarily driven by the miniImagenet experiment, but what I maintain is evidence of either mode collapse or catastrophic forgetting. Below, I respond to your specific comments. \\n\\n> tl;dr: Auxiliary mode collapse penalties (analogous to the repulsion term in BMAML [Kim18]) might not be appropriate for clustering in the stochastic setting. \\n\\nI agree that auxiliary penalties are a non-trivial problem and any approach comes with limitations. You are right that the repulsive force does not guarantee the avoidance of mode collapse, but certainly, it will do more than having no repulsion. I\\u2019m not sure that just because random initialization breaks symmetry in the standard use case, it will do so when applied over a distribution of gradient fields. To understand my concern, note that all your parameters are initialized around origin (presumably); from that point, the variation in initialization may very well be too small to prevent all gradients from pointing in the same direction and does so only if the loss surface is hyper-sensitive. This may be a valid assumption in high-dimensional space\\u2013or it may not. Notably, MAML, and as such a mixture thereof, is not scale invariant. If one task has gradients an order of magnitude larger than any other, all mixture components will be dragged towards that (or some) task-specific minima. \\n\\nThe current manuscript is an attempt to tackle perhaps too much; with a known task distribution you can leverage more information in your objective and could possibly encourage greater simultaneous exploration of initialization space. Alternatively, focusing on the non-stationary continuous learning setup could motivate certain mechanisms for avoiding catastrophic forgetting. In the current manuscript my main concerned is that I see no strong evidence suggesting you achieve either.\\n\\n> We apologize for the confusion caused by the original version of this figure. Notably, what we represent in Figure 5 is the validation loss values for each task, on a logarithmic scale.\\n\\nFirst note that only the third panel is on a logarithmic scale. Further note that the caption does match the figure, and the description of the results in the main text agree with neither. As such, I may very well have misinterpreted these results. Thus, let me be very specific. \\n\\n- First, I note that the caption states \\u201cWe plot the negative log likelihood of the data across all tasks under each cluster\\u201d. This makes no sense for the baseline MAML, which only has one \\u201ccluster\\u201d. Since the curves for MAML differs between panels, and reduction in loss generally agree with when new tasks are being introduced, my interpretation is that each panel plots the validation loss for a specific task. Your rebuttal confirms this interpretation. I surmise the top panel is task 1, the middle panel task 2, and the bottom panel task 3. In the case of MAML, there is no cluster differentiation between panels. \\n\\n- Second, I note that both models do equally well on task 1 until step ~700 (after all, they are equivalent at this point). After that, task 2 is introduced and the baseline\\u2019s performance on task 1 deteriorates. It also fails to learn the closely related task 2, suggesting it might be poorly calibrated. 
In contrast, the mixture model retains performance on task 1 and learns task 2.\\n\\n- At ~1500 (not 2100, as claimed), the validation loss on task 3 goes down for both the baseline and the mixture. The mixture model learns the new task faster, but MAML again appears to behave oddly. More importantly, the mixture model fails to retain performance on the two previous tasks, and from step 1500 to 2500, the validation losses on tasks 1 and 2 increase to a level indicating a catastrophic loss of performance. \\n\\nMy suspicion is that this happens because the loss with respect to the third task is many orders of magnitude larger. Thus, my concern with respect to mode collapse kicks in; every component in the mixture will receive very large gradient updates pulling them towards a local minimum on task 3, effectively collapsing into, if not a single mode, modes useful for task 3 only. In other words, the mixture model is suffering catastrophic forgetting over a sequence of three tasks. This makes me worry about what would happen over a longer sequence, as is usually seen in continual learning, and in a more complex environment.\"}",
"{\"title\": \"Concerning miniImagenet\", \"comment\": \"Dear authors,\\n\\nThank you for an extensive rebuttal. Your frustration is palpable, but I do not believe my position on the miniImagenet experiment, which overlaps largely with R1, is unfair. \\n\\nYou dedicate considerable effort to defend you results on miniImagenet, primarily on the grounds that (a) the purpose is not to match state of the art (b) your architecture is less powerful and should not be compared against SOTA, and (c) differences in implementation invalidates direct comparisons. These are reasonable points, but you miss my key concern, elaborated below.\\n\\nI agree that there are several challenges with miniImagenet benchmarks. That places an additional burden on any paper aiming to include such benchmarks, especially in terms of reporting experimental setup, discussing what comparisons can be made, and which results are incompatible. Your current manuscript mentions miniImagenet twice; in the abstract and table 1. There are no experimental details, no motivation for including this experiment, nor any discussion of the results. You claim this is due to page count, but you have 1.5 pages to the max limit and a largely empty appendix. A consequence of this choice is that a reader is forced to be maximally conservative (i.e., compare against SOTA).\\n\\nGiven the authors rebuttal, I wonder what the purpose of this experiment is. If it is to evaluate the parametric mixture model, I am missing a carefully controlled experiment highlighting how performance varies with the number of components, or at the very least a suggestion for a relevant baseline that results can be compared against.\\n\\nIn particular, comparing against MAML alone is problematic: since your approach is a mixture of MAMLs it is essentially guaranteed to do at least as well. The only interesting question is *how much* of an improvement you can realize, and what is driving it. Being a mixture of MAMLs, you have a confounding factor in the increased capacity of the meta learner. It is therefore impossible for a reader to discern from table 1 whether the performance gain is due to your mixture model or simply due to increased capacity. One option would have been to control for confounders through an ablation study. In the absence of that, relevant comparisons must be found in competing methods attempting to capture heterogeneity in the data. Again, this is not a requirement to outperform SOTA, but failing to compare favourably against similar methods does not provide evidence in favor of yours. \\n\\nOne such benchmark that the authors discuss in the rebuttal is BMAML (Kim et. al, 2018). I agree that BMAML attempts to solve a slightly different problem; even so, it is trying to capture heterogeneity and has strong analogues to your method in that it maintains a set of parameterizations to capture multi-modality. At the very least, it provides an interesting comparison that would have been beneficial to highlight in the paper. As you point out, Kim et al. (2018) use a baseline MAML with higher performance than reported in the original paper (Finn et al., 2017) (likely driven the use of 5 convolutional layers, as opposed to 4). Even so, they carefully control for the effect of increasing the number of particles and show that going from 1 (equivalent to MAML) to 3 or 5 particles yields a 3-percentage point increase in performance. This is a relative measure on a comparable architecture, and as such does provide a relevant baseline. 
In contrast, the current manuscript reports a 2-percentage-point increase in performance over a baseline MAML without controlling for (or even reporting) the number of components in the mixture. BMAML is just one possible baseline; others that aim to capture heterogeneity using a comparable architecture (e.g., Gidaris & Komodakis, 2018) realize even greater performance gains. \\n\\nTaken together, the authors conduct an experiment without context or discussion, and as such fail to provide any insights into the workings of the method or to validate the claims made. The fact that the method does not hold up against similar (in some respect) methods must be interpreted as evidence, if not against, at least not in favour of the proposed method.\"}",
"{\"title\": \"Reaction to author comments\", \"comment\": \"I have read the (verbose) feedback of the authors, and hereby acknowledge that.\\n\\nMy vote does not change. This paper provides a rather straightforward mixturization of MAML. This procedure makes the method substantially more expensive to run. On benchmarks the authors compared (very few), the payoff is minor, and the results are far from the state of art. The state of the art, such as VERSA etc, is also faster to run than even MAML, not to speak of the extension provided here. Results being equal, simplicity and ease of use of a method is the decisive factor.\\n\\nThe authors claim their goal is not to outperform state of the art (or get close to it). But then, it is not clear to me what their goal is in the first place. I do not see why putting a mixture on top of MAML has a lot of potential for future improvements. The authors are free to convince the readers otherwise, by coming up with a relevant use case and demonstrate it. Their toy examples are not convincing. Putting so much complexity together should really be motivated by a convincing use case.\\n\\nMixturizing probabilistic models is a pretty common step, I'd call it incremental. And where you like it or not, the Deep NN field (key audience for ICLR) is very much focused on competitive results.\"}",
"{\"title\": \"Response to all reviewers: Alternate meta-learning algorithms correspond to the use of different inference techniques\", \"comment\": \"tldr; Our intent in this paper is not to pit against each other techniques for inference of task-specific parameters (e.g., MAML vs. BMAML vs. VERSA), but to investigate how changes to the underlying probabilistic model (the hierarchical Bayesian model) can be realized as changes to procedures executed in a meta-learning algorithm. Our approach thus provides guidance in algorithm design for meta-learning. We believe that such explorations are timely and of great interest to the ICLR community.\\n\\nWe would like to emphasize what we view as the differences between our approach and recent methods that tackle the meta-learning problem using probabilistic methods (e.g., BMAML [Kim18], VERSA [Gor18], neural processes [Gar18]). Such approaches share an assumption about the structure of the underlying probabilistic model, namely the hierarchical Bayesian model with exchangeability across tasks as well as exchangeability across data within a task, depicted in our paper in Figure 1(a). What differs among these methods, then, is not a different assumption of dependence relationships between latent variables (the task-specific parameters, \\\\phi), but the inference procedure implemented to infer their values. (More detail given below.)\\n\\nIn contrast, we propose a set of different structural assumptions that are appropriate for certain practical settings: When there is known to be a latent clustering structure in a dataset of tasks. This corresponds to the graphical model visualized in Figure 1(b-c). The change to the underlying probabilistic model induces some benefits that do not result from changing the inference procedure: the ability to detect changing tasks and how tasks can be grouped via inspection of the latent variables, and, importantly, a principled way to adapt the complexity of a model in response to an evolving dataset.\\n\\nAs there has been a recent resurgence of interest in algorithmic developments inspired by a probabilistic approach to meta-learning, we hope that our proposal for exploring the underlying structure of the assumed probabilistic model will give rise to further interesting algorithmic developments. We expect that these developments will make use of inference techniques such as amortized inference, MCMC, classic variational inference (black-box or mean-field), and expectation propagation. For this paper, we resorted to the most tractable approximate inference procedure that is compatible with stochastic gradient descent: maximum-a-posteriori estimation via gradient descent on a loss function taken as the negative log-likelihood (as explained in Section 2).\\n\\n\\nWe now discuss the inference procedure of other methods in greater detail.\\n\\nVERSA [Gor18] makes use of the underlying hierarchical Bayesian model common to recent work in probabilistic meta-learning [Fin17, Gra18, Kim18]. The main difference is the inference procedure, not the model itself. While our approach makes use of traditional statistical inference procedures (i.e., direct optimization of the log-likelihood), VERSA leverages a neural network as a hyper network to learn a mapping from task-specific training data to task-specific parameters. 
The use of a hyper-network to compute the task-specific parameters introduces a trade-off with respect to a purely gradient-based approach: The hyper-network introduces additional training overhead as well as the requirement for appropriate architecture design. We do not investigate this trade-off in this work, and note that it is more straightforward to adapt MAML-style inference for use in a mixture framework than it is to adapt the hyper-network setup presented in VERSA. Therefore, we do not view it as a detraction of our method that we do not make use of VERSA-style, feedforward computation of task-specific parameters, nor do we view our approach as a direct competitor.\"}",
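To make the contrast concrete, the two inference styles can each be summarized in a single expression. The first follows the MAP-as-truncated-gradient-descent reading of [Gra18]; the second is a generic amortized mapping in the style of VERSA. The notation is ours and deliberately simplified (one inner gradient step; q_w denotes an inference network with weights w):

```latex
% Gradient-based (MAML-style) point estimation: truncated gradient ascent on
% the task log-likelihood, initialized at the meta-parameters \theta, acts as
% MAP estimation under an implicit prior p(\phi \mid \theta) [Gra18]:
\hat{\phi}_t \approx \theta + \alpha \, \nabla_{\theta} \log p(Y_t \mid X_t, \theta)

% Amortized (VERSA-style) inference: a learned network maps the task's
% training set directly to the task-specific parameters in one forward pass:
\hat{\phi}_t = q_w(X_t, Y_t)
```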
"{\"title\": \"Response to all reviewers: Variations in model architecture and experimental setup in recent works on meta-learning result in non-standardized benchmarking\", \"comment\": \"As the reviewers have pointed out, there has been a flurry of recent papers on meta-learning and few-shot learning that perform benchmarking on the miniImageNet few-shot classification dataset. We agree that it is important to track these trends, as standardization via benchmarking is integral to validation in an empirically-driven field. However, we have found the evaluation on miniImageNet to be nonstandard, making benchmarking difficult.\\n\\nThe reported accuracies on miniImageNet appear to be not only the result of algorithmic improvements but also changes in model architecture and experimental setup. In particular, we note that it is difficult to pinpoint the implementation differences in approaches that give rise to reported improvements. As an illustrative example, the BMAML method reports better performance than MAML [Fin17] for 1 particle, despite the fact that in this special case the method reduces to exactly MAML. Another potential complication is that the generation of the miniImageNet few-shot episodes is nonstandard, resulting in differences in the training, validation and testing datasets between papers, which certainly has an effect on the reported test accuracies.\\n\\nTo combat these potential confounds, we have used the same data-generation procedure (seeded with the same random seed) as MAML [Fin17] to generate training, validation and testing episodes, and kept all parameters that are common to both our methods constant (including, for example, the inner loop learning rate and the network architecture). We subsequently report an improvement while adhering to these constraints, which we request the reviewer to not take lightly.\\n\\nBelow, we give a more detailed comparison with related methods that focus on a probabilistic approach to meta-learning (which we are happy to add to if the reviewers would find it useful).\\n\\n\\n** Bayesian model-agnostic meta-learning (BMAML) [Kim18]\\n\\nThis method makes use of Stein variational gradient descent (SVGD) to maintain a Monte Carlo estimate of the posterior over task-specific parameters \\\\phi given the task-specific training data X[1:N], Y[1:N]. This is, therefore, an alternative to a MAML-like gradient-based optimization technique, which would employ a point estimate for \\\\phi.\\n\\nHowever, a confusing result in this paper is that the baseline of 1 particle, which should perform exactly in line with the standard MAML implementation (\\\"Because SVGD with a single particle, i.e., M = 1, is equal to gradient ascent, Algorithm 2 reduces to MAML when M = 1\\\" pg. 4) achieves 50.60% \\u00b1 1.42% on the standard miniImageNet benchmark (cf. 48.7% \\u00b1 1.84% [Fin17]). This is an improvement of almost 2% due to an undisclosed change in the experimental setup. 
Because of this, as well as the fact that the code for BMAML has not been released at this time, we are unable to perform a direct comparison with BMAML, and would caution against interpreting the raw difference in accuracies as conclusive evidence that our method is unfit for publication.\\n\\n\\nWe wish to emphasize again that our motivation for the proposed method is not simply to achieve the highest possible performance on the standard benchmark, and as such, we have tightly restricted variations in hyperparameters, such as the inner loop learning rate and neural network architecture, that could potentially improve performance. Instead, we aim to explore the analogy between gradient-based meta-learning and probabilistic modelling, which has served to inspire many of the papers that the reviewer cites, among others. Our exploration focuses on the structure of the underlying probabilistic model and how inference and parameter estimation can be performed efficiently using a MAML-like gradient-based optimization technique. \\n\\nMoreover, we do show an improvement on techniques in the same lineage (i.e., gradient-based meta-learning with the standard convnet architecture of [Vin16]). The methods that show further improvements consider alternative methods for inferring task-specific parameters (which is somewhat orthogonal to the structure of the underlying probabilistic model), but also, in many cases, changes to the experimental setup and the model architecture, as described in detail above.\"}",
"{\"title\": \"Response to all reviewers: References\", \"comment\": \"References\\n---------------\\n\\n[Fin17] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" ICML, 2017.\\n\\n[Fin18] Finn, Chelsea, Kelvin Xu, and Sergey Levine. \\\"Probabilistic Model-Agnostic Meta-Learning.\\\" In NeurIPS, 2018.\\n\\n[Gar18] Garnelo, Marta, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Eslami, and Yee Whye Teh. \\\"Neural processes.\\\" arXiv preprint arXiv:1807.01622 (2018).\\n\\n[Gor18] Gordon, J., Bronskill, J., Bauer, M., Nowozin, S. and Turner, R.E., 2018. \\\"Decision-Theoretic Meta-Learning: Versatile and Efficient Amortization of Few-Shot Learning.\\\" arXiv preprint arXiv:1805.09921.\\n\\n[Gra18] Grant, Erin, et al. \\\"Recasting gradient-based meta-learning as hierarchical Bayes.\\\" ICLR, 2018.\\n\\n[Kim18] Kim, Taesup, et al. \\\"Bayesian Model-Agnostic Meta-Learning.\\\" NeurIPS, 2018.\\n\\n[Koir2017] Kirkpatrick, James et al. \\\"Overcoming catastrophic forgetting in neural networks.\\\" In Proceedings of the national academy of sciences (2017).\\n\\n[Ngu17] Nguyen, Cuong V. et al. \\\"Variational continual learning.\\\" ICLR, 2018.\\n\\n[Sne17] Snell, J., Swersky, K. and Zemel, R. \\\"Prototypical networks for few-shot learning.\\\" NeurIPS, 2017.\\n\\n[Zen17] Zenke, Friedemann, Ben Poole, and Surya Ganguli. \\\"Continual learning through synaptic intelligence.\\\" ICML, 2017.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their constructive comments. Below, we clarify some common points of concern regarding significance, originality and clarity, and hope that this response can facilitate ongoing discussion during the rebuttal period. We will subsequently follow up in an updated PDF submission with improvements in writing and presentation quality, as well as a clearer experimental comparison as some of the reviewers suggested. We would be happy to discuss any remaining concerns in the intervening time period.\\n\\nWe share the enthusiasm of reviewers towards tackling the problem of adaptively determining how much to transfer from previous tasks in a meta-learning setting (which we refer to in the paper as \\\"transfer modulation\\\") thus avoiding negative transfer and promoting positive transfer. We note that no recent prior method (including Prototypical Networks [Sne17], VERSA [Gor18], and MAML [Fin17]) explicitly proposes a solution to this challenge, nor has there been recent prior work in meta-learning that empirically investigates this phenomenon.\\n\\nWe also share the reviewers' enthusiasm towards developing meta-learning methods that possess adaptive complexity, as a natural continuation of recent progress in meta-learning with static datasets. As Reviewer 2 identified, current methods for meta-learning (including Prototypical Networks [Sne17], VERSA [Gor18], MAML [Fin17] and variants thereof [Kim18, Fin18]) inevitably saturate model parameters in the evolving dataset regime. Moreover, techniques developed specifically to address the catastrophic forgetting problem, such as elastic weight consolidation (EWC) [Kir2017], synaptic intelligence (SI) [Zen17], and variational continual learning (VCL) [Ngu17], require access to an explicit delineation between tasks that acts as a catalyst to grow model size. (We may refer to such methods as \\\"task-aware.\\\")\\n\\nIn contrast, our nonparametric algorithm tackles the \\\"task-agnostic\\\" setting of continual learning, where the meta-learner does not receive information about task changes but instead learns to recognize a shift in the task distribution and adapt accordingly. The task-agnostic setting is more realistic and inherently more difficult. Addressing this setting is a contribution of our work that we under-emphasized in the first submission but that justifies the complexity of the proposed nonparametric algorithm. We will revise the paper to make this contribution more central.\\n\\nBefore we address specific concerns, we would like to emphasize that our *primary* goal is not to make significant gains in the state of the art in traditional meta-learning tasks. The standard benchmarks for meta-learning, such as Omniglot and miniImageNet, were designed with a uniform distribution of tasks in mind. This assumption falls short of many scenarios in the real world, where the task distribution is significantly heterogeneous or nonstationary. A simple example would be the difficulties imposed by changes in the agent\\u2019s environment (e.g., the change from day to night for computer vision tasks, or terrain changes in the context of a locomotive controller in reinforcement learning) leading to a novel set of tasks which might be quite different from those previously encountered. 
\\n\\nRather, our intent is to investigate how the heterogeneity or non-stationarity of a dataset affects the performance of existing meta-learning algorithms, as well as to propose a tailored solution to these challenges. As an empirical investigation into such settings, we opted for an inventory of stylization effects in our design of a new evolving dataset derived from miniImageNet. Consequently, our results suggest that recognizing such underlying discrete structure via hierarchical modelling can improve performance via robustness to such abrupt changes. As such, we do not believe the modest improvement on the homogeneous miniImageNet task to be grounds for rejection. Moreover, as we detail below, the state of benchmarking in this domain is challenging due to nonstandard practices.\"}",
"{\"title\": \"[2/2] Thank you for your feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"> \\\"In fact, figure 5 and 6 suggest mode collapse occurs even in the non-parametric case.\\\"\\n> \\u201cUltimately, it performs on par with MAML, despite having three times the capacity\\u201d.\\n\\nWe apologize for the confusion caused by the original version of this figure. Notably, what we represent in Figure 5 is the validation loss values for each task, on a logarithmic scale. Accordingly, Figure 5 confirms that our model presents a substantial improvement over MAML that justifies the added complexity. Moreover, there is no mode collapse in the synthetic regression experiments, since Figure 5 shows that the spawned clusters were sufficiently differentiated (and at most one type of task was assigned per component).\\n\\nWe will add a table with the final loss values for a clearer comparison in an updated PDF submission.\\n\\n\\n> \\u201cExperiments on evolving tasks suggest the method is not able to capture task diversity... Similarly, on the evolving miniImagenet dataset, figure 6 indicates there is no cluster differentiation across tasks\\u201d\\n\\nWe would like to emphasize that Figure 6, as well as Figure 5, do demonstrate task differentiation to a reasonable degree. Note that the total cluster responsibility reported in Figure 6 is the sum of cluster responsibilities across the different tasks in a single mini-batch. Accordingly, at each moment, one or two clusters are assigned tasks (from the minibatch of 4 tasks) with a non-zero probability.\\n\\nIn a later version, we will present two figures for each experiment, one for the losses and one for the cluster responsibilities, to avoid further confusion. We are also working on a more visually informative and less overwhelming presentation of the cluster assignment probabilities per task to emphasize the capability of our approach to differentiate between tasks and spawn new clusters when needed, in a task-agnostic setting.\\n\\n\\n> \\\"The paper needs major polishing.\\\" \\n\\nThank you for bringing this up. We have devoted significant effort to increase clarity in the revised version.\\n\\n\\nReferences\\n------------\\n\\n[Kim18] Kim, Taesup, et al. \\\"Bayesian Model-Agnostic Meta-Learning.\\\" NeurIPS, 2018.\\n\\n[Pen99] Pena, Jos\\u00e9 M., Jose Antonio Lozano, and Pedro Larranaga. \\\"An empirical comparison of four initialization methods for the k-means algorithm.\\\" Pattern recognition letters 20.10 (1999): 1027-1040.\"}",
"{\"title\": \"[1/2] Thank you for your feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"We thank the reviewer for their comments. We respond below to specific comments below but please also see the general \\\"response to all reviewers\\\" above.\\n\\n\\n> \\\"Results on miniImagenet are not encouraging; the gains on MAML are small and similar methods that generalize MAML (Kim et al., 2018, Rusu et al., 2018) achieve significantly better performance.\\\"\\n\\nFor the standard homogeneous miniImageNet benchmark, we would first like to refer the reviewer to the \\u201cresponse to all reviewers\\u201d where we emphasize that our primary goal is not necessarily to achieve state-of-the-art results on these traditional datasets, and, moreover, benchmarking on this dataset is difficult due to nonstandard practices.\\n\\nHowever, as reported in the paper at submission time, our model does achieve the highest 1-shot accuracy for comparable architectures. The reported higher accuracies in the lower half of Table 2 use different and significantly more powerful architectures. \\n\\n\\n> \\\"There is nothing in the algorithm that prevents mode collapse, and the only thing breaking symmetry is random initialization\\u2026 A closely related paper that may be of interest ( Kim et al., 2018, https://arxiv.org/abs/1806.03836 ) address this issue by using Stein Variational SGD.\\\"\\n\\ntl;dr: Auxiliary mode collapse penalties (analogous to the repulsion term in BMAML [Kim18]) might not be appropriate for clustering in the stochastic setting.\\n\\nRegarding the use of Stein Variational Gradient Descent in Kim et al. [Kim2018]: The second term in Eq. (1) represents a repulsive force which might deter mode collapse to some degree. However, their approach does not necessarily handle multimodality in the case of heterogeneous tasks better than our proposed approach with a similar number of particles (to our number of components), as a small number of particles could still concentrate around one large mode and ignore the narrower ones. In particular, their repulsion term does not guarantee differentiation, nor do they investigate whether the phenomenon of mode collapse occurs in their experiments (either with the repulsion term or with an ablation of the repulsion term).\", \"regarding_our_method\": \"We confirm the reviewer's assessment that symmetry-breaking in the method described in the submission is only due to the random seeding of the cluster initializations. Using random initialization alone to break symmetry is a common practice in the clustering and latent mixture modelling literature due to its simplicity [e.g., Pen99]; more sophisticated approaches, such as data-dependent initialization schemes, would be orthogonal to our approach.\\n\\nA complication regarding imposing auxiliary regularization during training to break symmetries is that we are working in the stochastic setting, where task assignments from previous batches are not kept in memory; therefore, any such regularization terms must be evaluated per batch. However, it is difficult to impose a principled, batch-wise regularization term that encourages mode differentiation without making assumptions about the task distribution within a mini-batch. 
Since our meta-learning training formulation assumes tasks are sampled uniformly with replacement from a potentially non-stationary task distribution, it would be disadvantageous in the general case to make such assumptions.\\n\\nIn particular, an artificial penalty to enforce differentiation of assignments within a batch could hinder information transfer between two (somewhat similar) tasks by forcing their assignments into two different clusters. This assumption also crucially falls apart in the case of evolutionary miniImagenet (Figure 6), where the tasks in a batch do share the same stylization, and therefore may benefit from being assigned to the same cluster. For these reasons, although it is straightforward to include a batch-wise regularization term (one that, for example, penalizes the entropy of a categorical distribution, which would be analogous to the batch-wise repulsion term used in BMAML), we do not believe that this is appropriate for the general problem setting that we consider. Ideally, we want only to enforce/diminish transfer between tasks that share/lack similar properties, which is realized via our underlying probabilistic model.\"}",
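For concreteness, the batch-wise term alluded to above might be realized as follows. This is our illustrative reading (a diversity penalty on the batch-averaged assignment distribution), and, as argued in the response, the submission deliberately does not adopt it.

```python
import torch

def diversity_penalty(responsibilities: torch.Tensor) -> torch.Tensor:
    """Negative entropy of the batch-averaged cluster-assignment
    distribution. Adding lam * diversity_penalty(resp) to the meta-loss
    would push the tasks in a batch to spread over different clusters,
    which is exactly the behaviour argued against above.

    responsibilities: (num_tasks, num_clusters) tensor, rows sum to one.
    Illustrative sketch only; the submission omits such a term.
    """
    eps = 1e-12
    marginal = responsibilities.mean(dim=0)  # average assignment in batch
    marginal_entropy = -(marginal * (marginal + eps).log()).sum()
    # Low marginal entropy (all tasks in one cluster) yields a high penalty.
    return -marginal_entropy
```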
"{\"title\": \"Thank you for your feedback! Can you elaborate on your comments?\", \"comment\": \"We thank the reviewer for their comments. We are in agreement with the reviewer that both modulating transfer and ensuring robustness to a changing task distribution are important and timely problems in meta-learning.\\n\\nWe respond below to specific comments but please also see the general \\\"response to all reviewers\\\" above.\\n\\n\\n> \\\"The performance of few-shot classification on MiniImageNet is not comparable to the state of the art (Table 2, Table 1)... More discussions and explanations on this experiment are clearly required.\\\"\\n\\nFor the standard homogeneous miniImageNet benchmark, we would first like to refer the reviewer to the \\u201cresponse to all reviewers\\u201d where we emphasize that our primary goal is not necessarily to achieve state-of-the-art results on these traditional datasets, and, moreover, benchmarking on this dataset is difficult due to nonstandard practices.\\n\\nHowever, as reported in the paper at submission time, our model does achieve the highest 1-shot accuracy for comparable architectures. The reported higher accuracies in the lower half of Table 2 use different and significantly more powerful architectures.\\n\\n\\n> \\\"A more systematic and realistic evaluation is necessary to justify the proposed method. As a method that aims to cope with heterogeneous or even evolving task distributions, it is expected to work well in practice and outperform those baselines that are designed for a single task distribution.\\\"\", \"we_apologize_for_the_confusion_caused_by_the_original_version_of_this_figure\": \"Notably, what we represent in Figure 5 is the validation loss values for each task, on a logarithmic scale. Accordingly, Figure 5 confirms that our model presents a substantial improvement over MAML that justifies the added complexity of our method.\\n\\nWe would also like to emphasize that Figure 6, as well as Figure 5, demonstrate task differentiation to a reasonable degree. Note that the total cluster responsibility reported in Figure 5 and Figure 6 is the sum of cluster responsibilities across the different tasks in a single mini-batch. Figure 5 shows that the spawned clusters were sufficiently differentiated (and at most one type of task was assigned per component). In Figure 6, at each moment, one or two clusters are assigned tasks (from the minibatch of 4 tasks).\\n\\nIn a later version, we will add a table with the final loss values, and we will present two figures for each experiment, one for the losses and one for the cluster responsibilities, to avoid further confusion. We are also working on a more visually informative and less overwhelming presentation of the cluster assignment probabilities per task to emphasize the capability of our approach to differentiate between tasks and spawn new clusters when needed, in a task-agnostic setting.\\n\\nWe would welcome more specific comments from the reviewer on what would constitute a more systematic and realistic evaluation.\"}",
"{\"title\": \"[4/4] Thank you for your detailed feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"> \\\"Results reported in Section 5 are potentially interesting, but entirely lack a reference point. The first is artificial, and surely does not need an algorithm of this complexity.\\\"\\n\\nWe concur--the toy regression experiment was meant to be explanatory rather than to serve as an extensive experimental benchmark (see similar explanatory figures in [Fin17, Lee18] that we have found to be helpful in understanding the corresponding methods). In particular, we included this section so the reader may observe, in a simplified setting, the qualitative difference between the types of meta-learning benchmarks studied in the past and the more heterogeneous and non-stationary variants that we focus on. We expect that the toy regression setting makes the task shift especially clear, as learning to regress to the output of a convex function (such as a parabola) is quite different from learning to regress to the output of a periodic one (such as a sinusoid).\\n\\nWe would also like to refer the reviewer to the \\u201cresponse to all reviewers\\u201d where we explain that our evolving dataset setting adds a previously unaddressed dimension to the standard benchmarks. It additionally proposes a crucial challenge that has been overlooked by the continual learning literature: task-agnostic continual learning. Accordingly, an algorithm of this complexity is necessary to detect shifts in the task distribution and adjust the model capacity accordingly, in contrast to current continual learning algorithms, which rely on external information about the start and end of each task as well as the number of tasks.\\n\\n\\n> \\\"The setup in Section 5.2 is potentially interesting, but needs more work, in particular a proper comparison to related work. This type of effort is needed to motivate an extension of MAML which makes everything quite a bit more expensive, and lacks behind the state-of-art, which uses amortized inference networks (Versa, neural processes) rather than gradient-based.\\\"\\n\\nWe have provided a more thorough explanatory comparison with related methods on the benchmark in Section 5.2 with related work, detailed in the \\\"response to all reviewers\\\" above. Additionally, we will subsequently follow up with more quantitative results, but ask for the reviewer's patience since we are restricted in terms of available computational resources at an academic lab.\\n\\n\\nReferences\\n---------------\\n\\n[Bes86] Besag, Julian. \\\"On the statistical analysis of dirty pictures.\\\" Journal of the Royal Statistical Society. Series B (Methodological) (1986): 259-302.\\n\\n[Bro13] Broderick, Tamara, et al. \\\"Streaming variational Bayes.\\\" NeuRIPS, 2013.\\n\\n[Fin17] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" ICML, 2017\\n\\n[Gom08] Gomes, Ryan, Max Welling, and Pietro Perona. \\\"Incremental learning of nonparametric Bayesian mixture models.\\\" In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1-8. IEEE, 2008.\\n\\n[Hug13] Hughes, Michael C., and Erik Sudderth. \\\"Memoized online variational inference for Dirichlet process mixture models.\\\" NeurIPS, 2013.\\n\\n[Kul12] Kulis and Jordan, \\\"Revisiting k-means: New Algorithms via Bayesian Nonparametrics.\\\" ICML 2012.\\n\\n[Lee18] Lee, Y. & Choi, S.. (2018). 
\\\"Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace.\\\" ICML, 2018.\\n\\n[Ray16] Raykov, Yordan P., et al. \\\"What to do when K-means clustering fails: a simple yet principled alternative algorithm.\\\" PloS one 11.9 (2016).\\n\\n[Roy13] Roychowdhury, Anirban, Ke Jiang, and Brian Kulis. \\\"Small-variance asymptotics for hidden Markov models.\\\" NeurIPS, 2013.\\n\\n[Vin16] Vinyals, Oriol, et al. \\\"Matching networks for one shot learning.\\\" NeurIPS, 2016.\\n\\n[Wan15] Wang, Yining, and Jun Zhu. \\\"DP-space: Bayesian nonparametric subspace clustering with small-variance asymptotics.\\\" ICML, 2015.\\n\\n[Wel06] Welling, Max, and Kenichi Kurihara. \\\"Bayesian K-means as a maximization-expectation algorithm.\\\" ICDM, 2006.\\n\\n[Zha01] Zhang, Yongyue, Michael Brady, and Stephen Smith. \\\"Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm.\\\" IEEE transactions on medical imaging 20.1 (2001): 45-57.\"}",
"{\"title\": \"[3/4] Thank you for your detailed feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"> \\\"In the case of infinite mixtures, it is not clear what is done in the end in the experiments.\\\"\\n> \\\"You use a set of size N+M per task update. In your 5-way, 1-shot experiments, what is N and M? I'd guess N=5 (1 shot per class), but what is M? If N+M > 5, then I wonder why results are branded as 5-way, 1-shot, which to mean means that each update can use exactly 5 labeled points. Please just be exact in the main paper about what you do, and what main competitors do, in particular about the number of points to use in each task update.\\\"\", \"we_apologize_for_lack_of_clarity\": \"Due to space constraints, we omitted the details about the task setup as we abided by the standard setup established by prior meta-learning literature [Vin16, Fin17]. We followed Vinyals et al. [Vin16] for the miniImageNet experiments (N=1 training datapoint and M=15 validation data points) with a meta-batch of 4 tasks. As for the regression experiments, we included in our experimental setup, at submission time, a breakdown of the mini-batch size (25) with an equal number of shots for training and validation (10) in a similar fashion to [Fin17]. We will take care to fit these domain-specific details into the next draft of the submission.\\n\\n\\n> \\u201cWhat is given in Algorithm 2, is not compatible with Section 4. How do you merge your Section 4 algorithm with stochastic EM? In Algorithm 2, how do you avoid that there is always one more (L -> L+1) components? Some threshold must be applied somewhere.\\\" \\n\\nWe did indeed use the suggested threshold in our Algorithm 2 as further detailed in Subroutine 4 in the paper at submission time. Due to space constraints, we opted to consolidate our parametric and nonparametric algorithms into a single algorithm block. This might have caused some confusion and distracted from the use of subroutines to solve crucial parts of Algorithm 2 separately for of the parametric and nonparametric variants.\\n\\nIn the non-parametric case, the E-step (where the decision on adding a new component is made) is detailed in Subroutine 4 where a threshold (similar to the one suggested by the reviewer) is employed. In Section 5, we justify the use of such a threshold and refer to prior work that uses a similar approach. We thank the reviewer for pointing out these points of ill clarity and will revise the paper accordingly.\\n\\n\\n> \\u201cAn alternative would be to use split&merge heuristics for EM.\\u201d\", \"we_would_first_like_to_clarify_a_misconception_about_our_experimental_setup\": \"We consider the stochastic setting, where task assignments from previous batches are not kept in memory. Our justification for this problem setup is that it is most aligned with (and therefore easily comparable to) previous methods that make use of mini-batch optimization (including MAML, Prototypical Networks, etc.) and it is most straightforwardly adapted to the online setting in which previous data may never be revisited. An additional practical reason for the stochastic setting is that it would be extremely expensive to store the assignments for an entire dataset of the size of miniImageNet in memory. 
Preserving task assignments is also potentially harmful due to stale parameter values since the task assignments in our framework are meant to be easily reconstructed on-the-fly using the E-step with updated parameters \\\\theta.\\n\\nAs a consequence of our treatment of the stochastic setting, split-and-merge heuristics are not compatible with our approach. We cannot, for example, split a component into two, as that implies re-assigning the atoms corresponding to past tasks (which requires access to the data itself) and inferring the new component parameters based on those assignments. Merging is also not fully realizable in this setting, as it also requires re-assigning atoms. \\n\\nWe did explore a naive approach to merging in which we computed a weighted average of the parameters of the merged components with weights proportional to the cluster sizes (realized as a moving average of data assigned to each component). However, we found the network parameter initializations resulting from this simplified approach to achieve worse validation loss. While it would certainly be interesting to consider more sophisticated heuristics for a moving summary of the task assignments (e.g., [Gom08], or a memoized approach similar to [Hug13]), we do not explore them here, since we believe our work already has sufficient novelty as the first foray into task-agnostic online meta-learning.\"}",
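To illustrate the thresholded, stochastic nonparametric E-step described above, here is a minimal sketch in the spirit of DP-means [Kul12]. The spawn threshold and the initialization of the new component as a copy of the best existing one are our assumptions; Subroutine 4 in the paper may differ.

```python
import copy
import torch
import torch.nn.functional as F

def nonparametric_e_step(components, task_losses, spawn_threshold):
    """Soft E-step that can grow the mixture: if no existing component
    explains the task well (all post-adaptation losses exceed a threshold),
    spawn a new component (L -> L + 1). Sketch only.

    task_losses: 1-D tensor of post-adaptation losses, one per component.
    """
    if torch.min(task_losses) > spawn_threshold:
        # No component fits the task: add a new cluster, here initialized
        # as a copy of the best-fitting existing component.
        best = int(torch.argmin(task_losses))
        components.append(copy.deepcopy(components[best]))
        task_losses = torch.cat([task_losses, task_losses[best:best + 1]])
    # Responsibilities over the (possibly grown) set of components.
    return F.softmax(-task_losses, dim=0)
```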
"{\"title\": \"[2/4] Thank you for your detailed feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"> \\\"There is also a nonparametric version, based on Dirichlet process mixtures, but a large number of approximations render this somewhat heuristic.\\\"\\n> \\\"Nonparametric extension via Dirichlet process mixture. This is quite elaborate, and uses further approximations (ICM, instead of Gibbs sampling). Can be seen as a heuristic to evolve the number of components.\\u201d\\n\\ntl;dr: Our approach is not a heuristic, but represents an alternative inference procedure that has an established history in latent variable modelling. Each method (including ICM and Gibbs) has its own trade-offs, so we do not view our selection of ICM as a detraction, nor do we claim to resolve the trade-offs between these inference techniques.\\n\\nWhile we do not estimate the full Bayesian posterior as is done with Gibbs sampling, our approach is in line with recent literature on approximate inference [Bro13, Roy13, Wan15]. In our paper, iterated conditional modes (ICM) [Bes86, Zha01, Wel06] is an established greedy strategy for iterative local maximization with guaranteed convergence (the same convergence guarantee that expectation maximization (EM) gives). In particular, ICM iteratively maximizes the full conditional distribution for each variable, instead of sampling from the conditional as is done in Gibbs sampling [Bes86]. Intuitively, it can be viewed as a special case of the framework of variational Bayes (VB) where the expectation over hidden variables (in our case, the task-specific parameters) is replaced with maximization (which can be realized as VB by taking the variational distribution as the Dirac delta distribution induced by maximization) [Wel06]. \\n\\nAccordingly, ICM for DPMM is simply a deterministic point-estimation approximation to the same inference problem typically solved by Gibbs sampling [Wel06, Ray16]. This is line with (but slightly different from) the more recent small-variance asymptotics (SVA) derivation of Bayesian nonparametrics introduced by Kulis & Jordan [Kul12], who also derive a deterministic alternative to Gibbs. Our gradient-based optimization approach to ICM crucially alleviates requirements for conjugacy (which is difficult to uphold in our general setting of placing a hierarchical prior over a black-box estimator such as a neural network model), or the requirement to fit a variational distribution (which has its own drawbacks in terms of bias, design of the variational family, etc.).\\n\\nWe thank the reviewer for bringing it to our attention that the tradeoffs inherent to the use of some of these methods are not clear in the current draft of the paper. For example, ICM is the most computationally efficient, while VB and Gibbs are the least efficient. Gibbs estimates are unbiased but of high variance, while all other methods potentially produce biased estimates. We do not intend to resolve these known, inherent tradeoffs, and so do not view it as a detraction of our method that we selected ICM over Gibbs sampling. Moreover, we made this selection primarily due to the computational efficiency of ICM as well as the previous interpretation of MAML as hierarchical Bayes (HB) [Gra18], which can be formalized as an ICM procedure. 
We will update the paper to make these tradeoffs clearer, and ask the reviewer if there remain any additional concerns over the use of ICM.\\n\\n\\n> \\\"These results are not near the state-of-the-art anymore, and some of the state-of-art methods are simpler and faster than even MAML.\\\"\\n\\nThe reviewer did not provide a complete list of references for state-of-the-art methods that are \\\"simpler and faster\\\" so we have done our best to compile a list of possible methods. We list them in the \\\"response to all reviewers\\\" and detail a direct comparison with our method with attention paid to simplicity and efficiency. Please let us know if there are other methods that we should attend to.\"}",
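For reference, the ICM-versus-Gibbs distinction described above amounts to swapping a sampling operation for a maximization (generic notation; z_i denotes a latent assignment):

```latex
% Gibbs sampling: draw each latent variable from its full conditional,
% yielding unbiased but high-variance posterior samples:
z_i^{(t+1)} \sim p\big(z_i \mid z_{-i}^{(t)}, \mathbf{x}, \theta\big)

% Iterated conditional modes (ICM): replace the draw with a maximization,
% yielding a deterministic, greedy ascent on the joint posterior:
z_i^{(t+1)} = \operatorname*{arg\,max}_{z_i} \; p\big(z_i \mid z_{-i}^{(t)}, \mathbf{x}, \theta\big)
```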
"{\"title\": \"[1/4] Thank you for your detailed feedback! Please let us know if there is more we can do to address your concerns.\", \"comment\": \"We thank the reviewer for an extensive and detailed review. We respond below to some specific comments but please also refer to the general \\\"response to all reviewers\\\" above. We encourage the reviewer to follow up with any other points that would improve the paper.\\n\\n\\n> \\\"Stochastic EM is used for end-to-end learning, an algorithm that is L times more expensive than MAML, where L is the number of mixture components.\\\"\\n> \\u201cImportant questions, such as how to make this faster, are not addressed\\u201d\\n> \\\"[VERSA] uses a simpler model (logistic regression head models) and is quite a bit faster than MAML, so much faster than what is proposed here\\u2026 [BMAML] is also quite complex and expensive, compared to Versa, but provides good results.\\\"\\n\\nWe will clarify in a revised version of the paper that this approach is easily parallelizable by assigning the computation of the MAP estimate \\\\hat{\\\\phi}, as well as the computation of the gradient with respect to the hyperparameter \\\\theta, to independent workers. Moreover, keeping the structure of the underlying probabilistic model fixed, our maximum a posteriori (MAP) procedure (which directly optimizes the negative log posterior via gradient descent) is the most straightforward approach to point estimation.\\n\\nIn contrast, approaches like VERSA [Gor18] that use a hyper network to compute task-specific parameters or approaches that make use of amortized inference require a heavily parameterized hyper/inference network in order to compute the task-specific parameter values. The training of this hyper/inference network imposes an additional computational cost, even though test-time computation/inference of task-specific parameters can be performed via a single feedforward pas. As such, these different approaches present alternative trade-offs in speed at training versus test time.\\n\\nOne modelling change we did experiment with on the homogeneous miniImageNet benchmark was learning the mixture model only on the last layer of the neural network initialization. In this case, we achieved significant speedups: The runtime for L = 2, \\u2026, 5 components was not much more than the original MAML runtime; this variant of our method is therefore quite simple and fast. The corresponding drop in generalization performance from clustering all the layers was not substantial. However, in the submission, we focused on the extensible (non)parametric mixture modeling aspect for the full set of neural network weights to demonstrate the generality and scalability of our method. Therefore, we did not prioritize reporting such results within the space constraints but will add them in an updated version of the paper.\\n\\nWe welcome further clarification on drawbacks related to the runtime of our method with respect to alternative approaches. Are there further details we can provide?\"}",
"{\"title\": \"A more systematic evaluation is necessary\", \"review\": \"This paper presents a mixture of hierarchical Bayesian models for meta-learning to modulate transfer between various tasks to be learned. A non-parametric variant is also developed to capture the evolution of a task distribution over time. These are very fundamental and important problems for meta-learning. However, while the proposed model appears to be interesting, the evaluation is less convincing.\\n\\n1. The performance of few-shot classification on MiniImageNet is not comparable to the state of the art (Table 2, Table 1). Especially, by Table, the proposed model performs much worse than existing methods (50% vs 60%). More discussions and explanations on this experiment are clearly required.\\n\\n2. A more systematic and realistic evaluation is necessary to justify the proposed method. As a method that aims to cope with heterogeneous or even evolving task distributions, it is expected to work well in practice and outperform those baselines that are designed for a single task distribution.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Promising, but more work needed\", \"review\": [\"This paper proposes a mixture of MAMLs (Finn et al., 2017) by exploiting the interpretation of MAML as a hierarchical Bayesian model (Grant et al. 2018). They propose an EM algorithm for joint training of parameter initializations and assignment of tasks to initializations. They further propose a non-parametric approach to dynamically increase the capacity of the meta learner in continual learning problems. The proposed method is tested in a few-shot learning setup on miniImagenet, on a synthetic continual learning problem, and an evolutionary version of miniImagenet.\", \"[Strengths]\", \"Modeling the initialization space is an open research question and the authors make a sound proposal to tackle this.\", \"The extension to continual learning is particularly interesting, as current methods for avoiding catastrophic forgetting. inevitably saturate model parameters. By dynamically increasing the meta-learner's capacity, this approach can in principle bypass catastrophic forgetting.\", \"[Weaknesses]\", \"There is nothing in the algorithm that prevents mode collapse, and the only thing breaking symmetry is random initialization. In fact, figure 5 and 6 suggest mode collapse occurs even in the non-parametric case. A closely related paper that may be of interest ( Kim et al., 2018, https://arxiv.org/abs/1806.03836 ) address this issue by using Stein Variational SGD.\", \"Results on miniImagenet are not encouraging; the gains on MAML are small and similar methods that generalize MAML (Kim et al., 2018, Rusu et al., 2018) achieve significantly better performance.\", \"Experiments on evolving tasks suggest the method is not able to capture task diversity. In the synthetic experiment (figure 5), the model suffers mode collapse when a sufficiently difficult task is introduced. Ultimately, it performs on par with MAML, despite having three times the capacity. Similarly, on the evolving miniImagenet dataset, figure 6 indicates there is no cluster differentiation across tasks.\", \"The paper needs major polishing.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Gradient-base few-shot learning. Extends MAML to a mixture distribution, to allow for internal task clustering. Falls short of recent state-of-art results, while being even a lot slower than MAML\", \"review\": \"Summary:\\n\\nThis work tackles few-shot (or meta) learning, providing an extension of the gradient-based MAML method to using a mixture over global hyperparameters. Each task stochastically picks a mixture component, giving rise to task clustering. Stochastic EM is used for end-to-end learning, an algorithm that is L times more expensive than MAML, where L is the number of mixture components. There is also a nonparametric version, based on Dirichlet process mixtures, but a large number of approximations render this somewhat heuristic.\\n\\nComparative results are presented on miniImageNet (5-way, 1-shot). These results are not near the state-of-the art anymore, and some of the state-of-art methods are simpler and faster than even MAML. If expensive gradient-based meta-learning methods are to be consider in the future, the authors have to provide compelling arguments why the additional computations pay off.\\n\\n- Quality: Paper is technically complex, but based on simple ideas. In the case of\\n infinite mixtures, it is not clear what is done in the end in the experiments.\\n Experimental results are rather poor, given state-of-the-art.\\n- Clarity: The paper is not hard to understand. What is done, is done cleanly.\\n- Originality: The idea of putting a mixture model on the global parameters is not\\n surprising. Important questions, such as how to make this faster, are not\\n addressed.\\n- Significance: The only comparative results on miniImageNet are worse than the\\n state-of-the-art by quite a margin (admittedly, the field moves fast here, but it\\n is also likely these benchmarks are not all that hard). This is even though better\\n performing methods, like Versa, are much cheaper to run\\n\\nWhile the idea of task clustering is potentially useful, and may be important in practical use cases, I feel the proposed method is simply just too expensive to run in order to justify mild gains. The experiments do not show benefits of the idea.\\n\\nState of the art results on miniImageNet 5-way, 1-shot, the only experiments here which compare to others, show accuracies better than 53:\\n- Versa: https://arxiv.org/abs/1805.09921.\\n Importantly, this method uses a simpler model (logistic regression head models)\\n and is quite a bit faster than MAML, so much faster than what is proposed here\\n- BMAML: https://arxiv.org/abs/1806.03836.\\n This is also quite complex and expensive, compared to Versa, but provides good\\n results.\", \"other_points\": \"- You use a set of size N+M per task update. In your 5-way, 1-shot experiments,\\n what is N and M? I'd guess N=5 (1 shot per class), but what is M? If N+M > 5,\\n then I wonder why results are branded as 5-way, 1-shot, which to mean means\\n that each update can use exactly 5 labeled points.\\n Please just be exact in the main paper about what you do, and what main\\n competitors do, in particular about the number of points to use in each task\\n update.\\n- Nonparametric extension via Dirichlet process mixture. This is quite elaborate, and\\n uses further approximations (ICM, instead of Gibbs sampling).\\n Can be seen as a heuristic to evolve the number of components.\\n What is given in Algorithm 2, is not compatible with Section 4. How do you merge\\n your Section 4 algorithm with stochastic EM? 
In Algorithm 2, how do you avoid\\n that there is always one more (L -> L+1) components? Some threshold must be\\n applied somewhere.\\n An alternative would be to use split&merge heuristics for EM.\\n- Results reported in Section 5 are potentially interesting, but entirely lack a\\n reference point. The first is artificial, and surely does not need an algorithm of this\\n complexity. The setup in Section 5.2 is potentially interesting, but needs more\\n work, in particular a proper comparison to related work.\\n This type of effort is needed to motivate an extension of MAML which makes\\n everything quite a bit more expensive, and lacks behind the state-of-art, which\\n uses amortized inference networks (Versa, neural processes) rather than\\n gradient-based.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HygTE309t7 | Outlier Detection from Image Data | [
"Lei Cao",
"Yizhou Yan",
"Samuel Madden",
"Elke Rundensteiner"
] | Modern applications from Autonomous Vehicles to Video Surveillance generate massive amounts of image data. In this work we propose a novel image outlier detection approach (IOD for short) that leverages cutting-edge image classifiers to discover outliers without using any labeled outliers. We observe that although intuitively the confidence that a convolutional neural network (CNN) has that an image belongs to a particular class could serve as an outlierness measure for each image, directly applying this confidence to detect outliers does not work well. This is because a CNN often has high confidence on an outlier image that does not belong to any target class, due to the generalization ability that ensures its high classification accuracy. To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting outlier images. Our experiments using several benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN demonstrate the effectiveness of our IOD approach for outlier detection, capturing more than 90% of outliers generated by injecting one image dataset into another, while still preserving the classification accuracy of the multi-class classification problem. | [
"Image outlier",
"CNN",
"Deep Neural Forest"
] | https://openreview.net/pdf?id=HygTE309t7 | https://openreview.net/forum?id=HygTE309t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lVCIn0yE",
"SkldSLn_JV",
"B1xl7InOkE",
"SyePOrh_k4",
"SJeuP-1pA7",
"ryl5vJ3tRQ",
"rJlHfCjK0Q",
"rJl6r2sKCQ",
"rkxh6jsFCX",
"HkxbofoKCQ",
"Byx4wMFKCQ",
"SylwHAutCQ",
"HylM6a_W6Q",
"SJg-77cYnQ",
"rkgzpVtf3X",
"r1lxyDYj5X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544632011657,
1544238656041,
1544238615677,
1544238446568,
1543463263624,
1543253858384,
1543253517342,
1543253060952,
1543252931889,
1543250585481,
1543242332320,
1543241278992,
1541668281531,
1541149465119,
1540687033823,
1539180247749
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1492/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1492/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1492/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1492/AnonReviewer2"
],
[
"~Andrey_Malinin1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a decision forest based method for outlier detection.\\n\\nThe reviewers and AC note the improvement over the existing method is incremental.\\n\\nAlthough the problem is of significant practical importance, AC decided that the authors should do more works to attract the attention of a broader range of ICLR audience.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Incremental contribution\"}",
"{\"title\": \"Response to the new review from Reviewer 2 (Part 3)\", \"comment\": \"[REVIEWER: New experiments of varying k]: The proposal was to use the features right before the softmax layer as input to Isolation Forest, while the presented experiments in the appendix appear to use the output of the convolution layers. It looks like the deep neural forest works on features from the FC layer. So is there a reason why the isolation forest is only used on the convolution features of the trained model?\", \"response\": \"We had understood the suggestion to be to consider using the input of the softmax layer as input features of Isolation Forest. In fact we also tested a network design where the Isolation Forest used the input of the softmax layer as feature. The results were extremely poor -- lower than 1% in all cases. Hence, we did not present these results in our previous response. The reason for this poor performance is that the input of the softmax layer corresponds to the weighed sum computed by the final FC layer. As described in our paper at the beginning of Sec. 2, given one image, a weighed sum is computed with respect to each target class. It is then transformed by the softmax function to a probability from 0 -- 1. The image is then assigned to the class with the largest probability. So this weighed sum can be interpreted as relative measurement of how likely it is that the image falls into one target class. It is however not necessarily effective in representing the key features of an image.\\n\\nIn the Isolation Forest related experimental study, we reported the results produced by using the output of the convolution layers as input features to Isolation Forest, since to the best of our understanding, in deep neural networks typically the intermediate states at the convolutional layers are considered as features [1]. This works much better than using the input of the softmax layer as features to the Isolation Forest.\\n\\nWe would like to clarify that the deep neural forest (either the original deep neural forest as well as our modified version) in fact also use the output of the convolution layer as input features to the decision forest. However, since the number of nodes at the convolutional layer (the number of features) may not match the number of nodes in the decision forest, the deep neural forest needs a single FC layer to connect the neural network to the decision forest. We apologize, as this confusion may have been caused by our Figure 2. We will thus modify this figure in the future revision of the manuscript to make this point more explicit.\\n\\n[1] Visualizing and Understanding Convolutional Networks, ECCV2014\"}",
"{\"title\": \"Response to the new comments from Reviewer 2 (Part 2)\", \"comment\": \"[REVIEW: Empirical evaluation scheme]: The problem scenario used as motivating example implies that the testing image is still probably from the same domain, but the image is from a class not present in the training set. Then an appropriate experiment would be something like training MNIST on digits 1-9 and testing digit 0 for outlier detection and testing digits 1-9 for testing accuracy (and counting images spuriously flagged as unknown class as mistakes).\", \"response\": \"We either cropped or enhanced the testing images to match the size of the training images. More specifically, when testing CIFAR-10, CIFAR-100, and SVHN datasets on the model trained for MNIST, we change the image to gray scale and take central crops of the images. On the other hand, when testing MNIST on models trained for other datasets, we add zero padding on each border of the image and increase its color channel from 1 to 3 by copying the original gray image 3 times.\\n\\nThese have also been described in our experimental section (Parameter Settings, Sec. 5.1).\"}",
"{\"title\": \"Response to the new comments from Reviewer 2 (Part 1)\", \"comment\": \"We thank the reviewer for these additional questions and suggestions; and below we briefly summarize the questions we respond to along with our response.\\n\\n[REVIEW: Choice of parameter k]: It seems the assumption is that all training samples are inliers. Outlier detection is needed during testing/inference to flag images from potentially unknown classes. In that case, the threshold parameter k should really be k=0. Any k > 0 seems unjustified. You are essentially saying that k of the inliers in training set are now outliers (for no good reason to the best of my understanding).\", \"response\": \"We thank the reviewer for this good question. It is true that using our IOD approach, some images in the CIFAR-10/MNIST testing set will be flagged as outliers by the model trained on the CIFAR-10/MNIST training images even if k is set much smaller than 5000, while the original image classifier will assign them to one of the target classes. Our results on the testing (classification) accuracy reported in Tables 1 and 2 indeed have counted these images as testing (classification) errors.\\n\\nCounting these images as classification errors only affects the classification accuracy of our approach to a limited degree. The reason is that many of these testing images tend to be either mislabeled or indeed look quite different from the majority of the images in their corresponding labeled classes. Hence they tend to be mis-classified by the classical CNN classifier that does not have the reject function. Based on our evaluation, our IOD method flags 51 MNIST testing images and 871 CIFAR-10 testing images as outliers, out of which 29 MNIST testing images and 657 CIFAR-10 testing images are mis-classified by the classical CNN classifier. Therefore, the increase in classification error on MNIST/CIFAR-10 goes up by only 0.22%/2.14% due to the reject function.\"}",
"{\"title\": \"Response to the rebuttal\", \"comment\": \"Thank you for the detailed rebuttal. This has been extremely helpful in better understanding the problem setup and the proposed scheme.\\n\\nMy question regarding \\\"supervised setting\\\" needs clarification -- I was asking whether the proposed scheme can be used in an unsupervised setting where there are no labels what so ever and the goal is to find outliers, not a scenario where the model/scheme is presented with labeled outliers. Upon better understanding the scope of this paper, the problem being considered (and subsequently the scheme proposed to solve this problem) is closely tied to general supervised learning -- the problem is to identify/flag test points that do not belong to any of the classes present in the training set (and hence the trained model probably tries to shove the test point into one of the known classes).\\n\\nBut this problem setup further complicates my understanding regarding the choice of k. The assumption is that all training samples are inliers. Outlier detection is needed during testing/inference to identify/flag images from potentially unknown classes. In that case, the threshold parameter k should really be k=0. Any k > 0 seems unjustified. We are essentially saying that k of the inliers are now outliers (for no good reason to the best of my understanding). Moreover, making 5000 inlier training points outliers implies that (potentially) a non-trivial number of test images might get spuriously flagged. Does the testing accuracy reported in the Tables 1 & 2 consider the spuriously flagged test points as mistakes? Or is the testing accuracy being computed without regard to the outlier flagging process?\\n\\nThe problem setup also makes me feel that the empirical evaluation scheme (training on one dataset, testing for outliers on a completely different dataset) seems somewhat weird -- the problem scenario used as motivating example implies that the testing image is still probably from the same domain, but the image is from a class not present in the training set. Then an appropriate experiment would be something like training MNIST on digits 1-9 and testing digit 0 for outlier detection and testing digits 1-9 for testing accuracy (and counting images spuriously flagged as unknown class as mistakes).\\n\\nOn a related note, for models trained on a particular dataset (say MNIST), how are the images from different datasets (say CIFAR-10) with different shape/dimensions input to the model for inference? After the convolution layers, wouldn't there be a mismatch between the number of features created and the expected input size to the FC layer? Are the images just cropped to match the shape of the training images?\\n\\nThank you for the new experiments regarding choice of k and use of isolation forest. These are very informative. My proposal was to use the features right before the softmax layer, while the presented experiments in the appendix appear to use the output of the convolution layers. It looks like the deep neural forest works on features from the FC layer. So is there a reason why the isolation forest is only used on the convolution features of the trained model? Or have I misunderstood the experimental setups?\"}",
"{\"title\": \"Response to Reviewer 3 (Part 3): other minor comments\", \"comment\": \"[MINOR COMMENTS ]: auto-encoder based outlier detection.\", \"response\": \"We thank the reviewer for this great question. While we have not targeted this more general scenario of multiple labels per instance, we describe below our reasoning for why we consider our IOD framework to also be applicable to this more general case.\\n\\nTheoretically, our image outlier detection (IOD) framework would still be applicable in multi-label classification scenarios. In principle, our proposed IOD framework decides on whether a testing image is an outlier or not based on the confidence of the classifier for this testing image. If the classifier has a small confidence about a given image, then IOD rejects making an assignment to that class. This confidence is measured based the probability that the classifier ``believes'' the testing image belongs to a particular class. In multi-label scenarios, each testing image is also assigned a probability with respect to each class typically by a sigmoid layer as opposed to the softmax layer used in a single-label classification. Therefore, IOD could still work. \\n\\nA simple solution would be to continue to establish a probability cutoff threshold based on the object that has the xth smallest probability in the training set, and then classify a testing image as an outlier if its largest probability produced by sigmoid is smaller than this cutoff threshold. This solution could potentially be extended by establishing different cutoff thresholds with respect to different classes. Correspondingly, at the inference phase, given a testing image, if its probability with respect to any class is smaller than the corresponding cutoff threshold, it then would be considered to be an outlier.\\n\\nClearly, interesting future work, but beyond the scope of this current project. We will thus point at this idea as future work in Section 7.\"}",
"{\"title\": \"Response to Reviewer 2 (Part 2): comments on empirical evaluation -- comparing against Isolation Forest\", \"comment\": \"[COMMENT ON EMPIRICAL EVAL.] Why the use of something like Isolation Forest (IF) on the learned representations is not sufficient?\", \"response\": \"To address this comment, we tested the Isolation Forest-based method as you suggest above on its ability of detecting image outliers. The Isolation Forest method first builds an ensemble of isolation trees for a given data set. Then the average path lengths on the isolation trees is used as the outlierness measurement. The shorter the average path length is, the more likely the instance is to be an outlier.\\n\\nFirst, per the suggestion of the reviewer, we used a CNN to extract features from the raw image data and then built an Isolation Forest on these extracted features to detect outliers. As one input parameter of Isolation Forest, the number of outliers in CIFAR-10 is set as 5000 -- identical to the parameter k used in our IOD-based methods. More specifically, similar to our IOD-based approach, we use the 5,000th smallest average path length in CIFAR-10 as the cutoff threshold. If the average path length of a testing image is smaller than the cutoff threshold, it is considered an outlier. When the model is trained on CIFAR-10, we expect all images in CIFAR-100, MNIST and SVHN to be detected as outliers. However, in fact, the outlier detection accuracy is poor -- less than 2% in all cases, although we have carefully tuned the size of the network from producing 2042 dimensional feature vector to 512 dimensional feature vector and tuned other parameters in Isolation Forest such as the number of the trees and max_sample.\\n\\nWe also tried applying dimensionality reduction techniques to reduce the dimension of the extracted feature vectors and then applying Isolation Forests on the lower dimension space. The results were slightly better, although the detection rate for outliers is still lower than 2%. \\n\\nFinally, we directly applied Isolation Forests on the raw image. The outlier detection accuracy with respect to CIFAR-100, SVHN, and MNIST was 9.73%, 13.14% and 8.3% respectively. Although these results are much better than running against the extracted features from the CNN, they are still much worse than our maximum weighted sum baseline. Based on our evaluation, reducing the dimension of the raw features does not improve the outlier detection accuracy in this case. \\n\\nWe also tested Isolation Forests on the simpler MNIST data. Identical to the evaluation of our IOD-based methods, the number of outliers is set as 200. Again, building the Isolation Forest on raw image data achieves the highest outlier detection accuracy, namely 66.2%, 59.78% and 58.47% with respect to CIFAR-10, CIFAR-100 and SVHN respectively. However, the results are still much worse than any method we have evaluated in Appendix C.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 3): the extensibility of the proposed method\", \"comment\": \"[COMMMENT FROM REVIEWER]: The extensibility of the proposed method.\", \"response\": \"We are not entirely certain what the reviewer means by extensibility. We are guessing here that the reviewer may be asking us about applicability to other applications and data sets. To address this, we would like to point out that our approach is broadly applicable in a rich variety of real world applications for two reasons:\\n\\n(1) It resolves a significant limitation of traditional image classifiers. Given one testing image, an existing CNN image classifier will assign this image to one of the classes observed in the training set, even if it does not belong to any known class in the training data set. For example, given a cat image, if we test it on a CNN model trained using MNIST, this cat image will be erroneously assigned to one of the digit classes. In the real applications, it is common for images supplied at inference time to not belong to any class known in the training data -- for example, consider an autonomous vehicle trained mostly on urban imagery taken to the desert, where it sees sand, cacti, and tumbleweed for the first time. Our approach thus enhances any of the existing CNN-based classifiers with this powerful \\\"rejection\\\" ability. That is, it no longer blindly assigns a testing image to one of the known classes. Instead, an image will be rejected as being an outlier if it does not \\\"sufficiently\\\" belong to any of the existing classes.\\n\\n(2) Real applications tend to have a sufficient amount of normal data, and thus are able to more easily provide us with a large amount of labeled normal data for training the classification model, while they lack access to labeled outliers due to the rarity of outliers. Thus, an approach, such as ours, that uses only labeled inliers, and does NOT rely on the availability of outlier labels is a preferred situation in practice.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 2): the comparison to the Dropout method\", \"comment\": \"[COMMMENT FROM REVIEWER]: Compare with the method that uses dropout during testing [1]\", \"response\": \"We thank the reviewer for pointing us to [1] as an additional baseline. We have now followed your suggestion and also compare against this method. The results are worse than the previous baseline used in our paper. The results are detailed below and also incorporated into our revised paper (Appendix D ).\\n\\nThis dropout paper [1] proposed a new theoretical framework casting dropout used in training deep neural networks as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us a tool to model uncertainty using dropout in NNs. As discussed in paper [1], dropout is applied during inference. Given an image, we can get a range of softmax input values for each class by measuring 100 stochastic forward passes of the softmax input. If the range of the predicted class intersects that of other classes, then even though the softmax output is arbitrarily high (as much as 1 if the mean is far from the means of the other classes), the uncertainty of the softmax output can be as large as the entire space. In other words, the size of intersection signifies the model\\u2019s uncertainty in its softmax output value -- i.e., in the prediction. The larger the intersection is, the more uncertain the model is in its prediction.\\n\\nSo we agree that this uncertainty could be used as a score to measure the outlierness of an image. Therefore, as suggested by the reviewer, we evaluated this dropout method proposed in [1] as a new baseline method. \\n\\nIn the added experiments, we use the identical network architecture suggested in the github repository for [1] (https://github.com/yaringal/DropoutUncertaintyCaffeModels/tree/master/cifar10_uncertainty). When the method is applied on the CIFAR-10 training data, for each CIFAR-10 image, we forward it to the model 100 times and record the softmax input values for each class. The \\\"outlierness\\\" (uncertainty) of the image is defined as the intersection between predicted class and all other classes. We use the 5000-th largest value in the training as the cutoff threshold. Specifically, if an image has an uncertainty larger than the threshold, it is considered to be as an outlier. Then we forward each CIFAR-100 image 100 times in the model and record the outlier score for each CIFAR-100 images. Unfortunately as shown in Appendix D of our revised paper, the accuracy of this outlier detection scheme is only ~53%. That is, this is even worse than our maximum weighted sum baseline which has an accuracy of ~70%.\\n\\nWe also applied the Dropout method on MNIST training images. Again we use the network architecture suggested by the authors in their repository. We forward each MNIST image 100 times in the model and compute the outlierness score. However, when we use the 200-th largest outlierness score in MNIST training as the outlierness cutoff threshold (the same parameter setting to our MNIST experiment in Appendix C), the accuracy in detecting CIFAR-10 images (also forwarded 100 times in the model) as outliers is lower than 10%. When we increase the parameter from 200 to 5000, its accuracy in detecting outliers increases to 48.13%, which is much much lower than our proposed method (above 90%), although clearly in this case the parameter setting biases towards the Dropout method.\"}",
"{\"title\": \"Response to Reviewer 2 (Part 1): comments on significance -- new evaluation with varying outlier threshold and the applicability of the proposed method\", \"comment\": \"[COMMNENT FROM REVIEWER]: [Significance] A threshold that considers 5000 images as outliers seem unreasonable. Usually number of outliers intended for manual inspection is low.\", \"response\": \"Our approach does not rely on the labeled outliers to train an outlier classifier, although it needs labeled inliers to produce an outlierness score for each image and establish an outlierness cutoff threshold to detect outliers. As also noted in our response to reviewer 3 (The extensibility of the proposed method), we believe that our approach is broadly applicable in a rich variety of real world applications for two reasons:\\n\\n(1) It resolves a significant limitation of traditional image classifiers. Given one testing image, an existing CNN image classifier will assign this image to one of the classes observed in the training set, even if it does not belong to any known class in the training data set. For example, given a cat image, if we test it on a CNN model trained using MNIST, this cat image will be erroneously assigned to one of the digit classes. In the real applications, it is common for images supplied at inference time to not belong to any class known in the training data -- for example, consider an autonomous vehicle trained mostly on urban imagery taken to the desert, where it sees sand, cacti, and tumbleweed for the first time. Our approach thus enhances any of the existing CNN-based classifiers with this powerful ``rejection'' ability. That is, it no longer blindly assigns a testing image to one of the known classes. Instead, an image will be rejected as being an outlier if it does not ``sufficiently'' belong to any of the existing classes.\\n\\n(2) Real applications tend to have a sufficient amount of normal data, and thus are able to more easily provide us with a large amount of labeled normal data for training the classification model, while they lack access to labeled outliers due to the rarity of outliers. Thus, an approach, such as ours, that uses only labeled inliers, and does NOT rely on the availability of outlier labels is a preferred situation in practice.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 1): the comments on details\", \"comment\": \"[COMMMENT FROM REVIEWER]: (1) the definition of maximum weighted sum; (2) Why not using maximum probability in Figure 1? are they equivalent? (3) What the 8.1701 threshold refer to; (4) the architectures used for the experiment in Section 2\", \"response\": \"Thank you for the detailed review. We have carefully revised Section 2 of our paper to include our responses to your questions, as explained below. The code and models used in this work will be made public after the double blind review process is completed.\\n\\n(1) The maximum weighted sum corresponds to the input of the softmax layer. For each class $C_i$, the final fully connected layer before the softmax layer produces a weighted sum score $s_i$. The score is computed by sum(F_j w_{ji} | 0<j<n+1) where $F_j$ is a feature and $w_{ji}$ is the learned weight of the FC layer that connects $F_j$ and class $C_i$. The maximum weighted sum is the largest weighted sum score among all classes, defined as max(s_1,s_2, ..., s_m). \\n\\n(2) The maximum weighted sum score is not equal to the maximum probability. Probabilities can be computed by applying a softmax layer to the weighted sum scores. That is softmax(s_1,s_2,...s_m) = (p_1,p_2,...p_m). The maximum probability then corresponds to the largest probability defined as max(p_1,p_2,...,p_m). In other words, the maximum weighted sum score corresponds to the maximum score before the softmax layer, while the maximum probability is the maximum score after the softmax layer. \\n\\nWe also worked with this maximum probability as the outlierness measure of each image. However, we discovered that using maximum probability performed worse than using maximum weighted sum. Therefore, we selected the better of the two, namely, the maximum weighted sum as our baseline method. \\n\\n(3) As explained in Sec. 2, the constant \\\"8.1701\\\" shown in Figure 1 is the cutoff threshold used in detecting outliers. It corresponds to the 5000-th smallest maximum weighted sum among the images in the training set CIFAR-10. Then at the inference time we consider images with maximum weighted sum smaller than 8.1701 as outliers. This cutoff threshold is variable. \\n\\n(4) The architecture we used for the experiments in Section 2 is identical to the ones used in our experimental section (Section 5. 2). The architecture is similar to the VGG-13 model with Batch Normalization. Specifically, the number of channels for the convolutional layers are [32, 32, \\u2019M\\u2019, 64, 64, \\u2019M\\u2019, 128, 128, \\u2019M\\u2019, 256, 256, \\u2019M\\u2019, 128, 128, \\u2019M\\u2019], where \\u2018M\\u2019 is the max-pooling layer with kernel size=2, stride size=2. The kernel size for each convolutional layer is 3. Batch normalization and Relu functions are applied after each convolutional layer. We previously had already described the training process we used in detail in our experimental section.\"}",
"{\"title\": \"Response to Reviewer 1: an effective solution to an extremely important problem\", \"comment\": \"[COMMMENT FROM REVIEWER]: The improvement over the existing method (DNDF) is incremental\", \"response\": \"Our approach represents an effective solution to an extremely important problem. In fact, our approach significantly outperforms the state-of-the-art in the accuracy of outlier detection as shown in our experimental study. While our approach leverages some of the DNDF principles, we introduce several critical insights to render it effective at detecting outliers.\\n\\nFirst, given that the DNDF approach was designed for improving classification accuracy, it was not obvious it would be applicable to tackling the outlier detection problem. Indeed, we are the first to leverage DNDF for addressing the outlier detection problem. We do this by unifying the best practice of unsupervised outlier detection with our observation that the max route of each tree in DNDF effectively captures the outlierness of each image.\\n\\nSecond, we refine the core method with several technical innovations to assure effective outlier detection, while concurrently also yielding high accuracy for image classification. By this, the existing CNN-based image classifiers are enhanced to have the ability to reject a testing image as being an outlier if it does not ``sufficiently'' belong to any of the existing classes known in the training data.\\n\\nIn particular, we proposed two new techniques, namely, an information-theoretic regularization strategy based on routing decisions and a new network architecture that ensures that each tree in the forest is completely independent. Further, as additional innovation, we designed a new joint learning method that optimizes the parameters for the decision node and for the prediction nodes in one integrated step through back-propagation, abandoning the two-step optimization strategy used in DNDF.\\n\\nBased on the above described innovations, our approach is not only technically novel but also useful, as it is highly effective at detecting outliers. \\n\\nWe have revised the Proposed Approach and Contributions of the Introduction section (Section 1) to reflect the above discussion.\\n\\n[COMMMENT FROM REVIEWER]: The regularization on routing decision may not really be necessary as, in DNDF, the soft splits start as uniform and gradually converge to something close to hard splits.\", \"our_response\": \"The datasets we used in our experiments are indeed commonly used in state-of-the-art image outlier detection papers such as [1] and [2]. In our experiments we focused on these datasets because this ensures a fair comparison of our proposed outlier detection approach against the state-of-the-art.\\n\\nIn the future, we will be happy to extend our evaluation to surveillance or street view data sets -- especially if we can gain access to data sets labeled with outliers.\\n\\nWe thus thank you for this suggestion. \\n\\n[1] Ruff, Lukas, et al. \\\"Deep one-class classification.\\\" International Conference on Machine Learning. 2018 \\n\\n[2] Zhou, Chong, and Randy C. Paffenroth. \\\"Anomaly detection with robust deep autoencoders.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.\"}",
"{\"title\": \"Review of submission 1492\", \"review\": \"Summary: This paper modifies an existing technique designed for image classification to make it applicable to outlier detection.\", \"strengths\": \"The outlined problem is of significant practical importance.\", \"weaknesses\": [\"The improvement over the existing method is incremental;\", \"The regularization on routing decision may not really be necessary as, in DNDF, the soft splits start as uniform and gradually converge to something close to hard splits; this is discussed in the supplementary material of the DNDF paper;\", \"the datasets tested are standard image datasets, not even captured from vehicles or video surveillance. The SVHN (street view numbers) dataset is the closest the experiments get to the motivating application.\"], \"overall_assessment\": \"reject\", \"recommendations_for_the_authors\": \"Test on a surveillance or street view benchmark. Even then, it's questionable whether the paper is suitable for ICLR due to lack of methodological novelty.\", \"note\": \"I'd like to apologize to the authors for the delay in submitting this review. It was due to a technical error on my part (I thought the reviews had posted, but they had not). In the spirit of independent evaluation, this review was not influenced by the other comments on this paper. I will follow-up with a response which will take into account the existing dialogue.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"outlier detection using decision forest\", \"review\": \"The paper proposes a decision forest based method for outlier detection and claims that it is better than current methods.\", \"a_few_questions\": \"What is the precise definition of maximum weighted sum? Why not using maximum probability instead in Figure 1? Are they equivalent? What does this 8.1701 threshold refer to? What architectures you use for the experiment in Section 2?\", \"comments\": \"The observation that simple methods for outlier detection are not good enough is interesting, and deserves deeper understanding. \\nHowever, directly calculating max. prob. may be a weak baseline. A stronger method to compare with would be using dropout during testing, see [1], which is easy to calculate and very practical (can easily be deployed to other tasks such as sequence tagging). \\nThe extensibility of the proposed method is not clear to me. \\n\\nAlso, the reason that the observed failure of detection happens may due to the optimization procedure, i.e., how you train the model matters. The authors should provide the details of the training methods and architectures, along with the observation. \\n\\nThe baseline compared in the experiments are methods that do not use the classification feature. It would be necessary to compare with stronger baselines, such as using dropout.\", \"typo\": \"'a sample x $\\\\in$ based on its features'\", \"reference\": \"[1] Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, by Yarin Gal, Zoubin Ghahramani\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting promising solution to outlier detection; application of proposed scheme to general outlier detection seems limited\", \"review\": \"Pros\\n----\\n\\n[Originality/Clarity]\\nThe manuscript presents a novel technique for outlier detection in a supervised learning setting where something is considered an outlier if it is not a member of any of the \\\"known\\\" classes in the supervised learning problem at hand. The proposed solution builds upon an existing technique (deep neural forests). The authors clearly explain the enhancements proposed and the manuscript is quite easy to follow.\\n\\n[Clarity/Significance]\\nThe enhancements proposed are empirically evaluated in a manner that clearly shows the impact of the proposed schemes over the existing technique. For the data sets considered, the proposed schemes have demonstrated significant improvements for this scoped version of outlier detection.\\n\\n[Significance]\\nThe proposed scheme for improving the performance of the ensemble of the neural decision trees could be of independent interest in the supervised learning setting.\\n\\nLimitations\\n-----------\\n\\n[Significance]\\nBased on my familiarity with the traditional literature on outlier detection in an unsupervised setting, it would be helpful for me to have some motivation for this problem of outlier detection in a supervised setting. For example, the authors mention that this outlier detection problem might allow us to identify images which are incorrectly labelled as one of the \\\"known\\\" classes even though the image is not a true member of any of the known classes, and might subsequently require (manual) inspection. However, if this technique would actually be used in such a scenario, the parameters of the empirical evaluation, such as a threshold for outliers that considers 5000 images as outliers, seem unreasonable. Usually number of outliers (intended for manual inspection) are fairly low. Empirical evaluations with a smaller number of outliers is more meaningful and representative of a real application in my opinion.\\n\\n[Significance]\\nAnother somewhat related question I have is the applicability of this proposed outlier detection scheme in the unsupervised scheme where there are no labels and no classification task in the first place. Is the proposed scheme narrowly scoped to the supervised setting?\\n\\n[Comments on empirical evaluations]\\n- While the proposed schemes of novel inlier-ness score (weighted sum vs. max route), novel regularization scheme and ensemble of less correlated neural decision trees are extremely interesting and do show great improvements over the considered existing schemes, it is not clear to me why the use of something like Isolation Forest (or other more traditional unsupervised outlier detection schemes such as nearest/farthest neighbour based) on the learned representations just before the softmax is not sufficient. This way, the classification performance of the network remains the same and the outlier detection is performed on the learned features (since the learned features are assumed to be a better representation of the images than the raw image features). 
The current results do not completely convince me that the proposed involved scheme is absolutely necessary for the considered task of outlier detection in a supervised setting.\\n- [minor] Along these lines, considering existing simple baselines such as auto-encoder based outlier detection should be considered to demonstrate the true utility of the proposed scheme. Reconstruction error is a fairly useful notion of outlier-ness. I acknowledge that I have considered the authors' argument that auto-encoders were formulated for dimensionality reduction.\\n\\n[Minor questions]\\n- In Equation 10, it is not clear to me why (x,y) \\\\in \\\\mathcal{T}. I thought \\\\mathcal{T} is the set of trees and (x,y) was the sample-label pair. \\n- It would be good understand if this proposed scheme is limited to the multiclass classification problem or is it also applicable to the multilabel classification problem (where each sample can have multiple labels).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Hello! :) Very interesting work. You may find our work on predictive uncertainty estimation to be relevant relevant.\", \"https\": \"//arxiv.org/pdf/1802.10501.pdf\", \"title\": \"Related work\"}"
]
} |
|
r1laEnA5Ym | A Variational Inequality Perspective on Generative Adversarial Networks | [
"Gauthier Gidel",
"Hugo Berard",
"Gaëtan Vignoud",
"Pascal Vincent",
"Simon Lacoste-Julien"
] | Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs. We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam. | [
"optimization",
"variational inequality",
"games",
"saddle point",
"extrapolation",
"averaging",
"extragradient",
"generative modeling",
"generative adversarial network"
] | https://openreview.net/pdf?id=r1laEnA5Ym | https://openreview.net/forum?id=r1laEnA5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygHUpH_g4",
"r1g_lMezgN",
"ryT6uWAa7",
"HJe-3rW0pQ",
"SJeigrZCaX",
"BkgcGClCa7",
"rygmGXa_p7",
"SJlRNFEa3Q",
"Bye1MVvq2m",
"HkekxnDu3m"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545260364591,
1544843760110,
1542490308517,
1542489512648,
1542489331336,
1542487569966,
1542144778677,
1541388598178,
1541202951488,
1541073895001
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1491/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1491/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1491/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1491/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1491/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1491/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1491/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1491/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"I have some concerns about this paper:\\n\\n(1) In the theory side, this paper assumes that the loss function F is \\\\mu-strongly monotone (or equivalently convex-concave), which seems to be a very unrealistic assumption as GANs are highly non-convex. Besides, many prior works on GANs have analyzed the convergence (w.r.t. regret bound) based on the convex-concave assumption, such as [1,2], which seems to make the theoretical contribution of this work even more limited. Finally, the authors argued that higher variance in the regret bounds implies worse convergence behavior, which is not quite convincing to me, as their actual convergence rates are the same though. \\n\\n(2) In the algorithm side, this paper proposed to use extrapolation from the past, which is exactly the same with [3] in the unconstrained case. The authors argued that their proposed technique also works in the *constrained* case, but from the nonexpansive property of projection, it seems to be very straightforward (or trivial) to extend the unconstrained case to the constrained one.\\n\\n(3) In the experimental side, it seems that the proposed \\\"extrapolation from the past\\\" algorithm does not improve the performance in the most time, by comparing \\\"PastExtraAdam\\\" with \\\"SimAdam\\\" or \\\"AltAdam\\\" in Table 1, Figure 5 and Figure 6. Does it contradict to your theory that \\\"extrapolation from the past\\\" has smaller variance term in the regret bound, which implies a better convergence behavior?\\n\\n\\n[1] https://openreview.net/pdf?id=Skj8Kag0Z (Stabilizing Adversarial Nets with Prediction Methods)\\n[2] https://arxiv.org/pdf/1705.07215.pdf (On Convergence and Stability of GANs)\\n[3] https://openreview.net/forum?id=SJJySbbAZ (Training GANs with Optimism)\", \"title\": \"Some concerns\"}",
"{\"metareview\": \"The paper presents a variational inequality perspective on the optimization problem arising in GANs. Convergence of stochastic gradient descent methods (averaging and extragradient variants) is given under monotonicity (or convex) assumptions. In particular, binlinear saddle point problem is carefully studied with batch and stochastic algorithms. Experiments on CIFAR10 with WGAN etc. show that the proposed averaging and extrapolation techniques improve the GAN training in such a nonconvex optimization practices.\\n\\nGeneral convergence results in the context of general non-monotone VIPs is still an open problem for future exploration. The questions raised by the reviewers are well answered. The reviewers unanimously accept the paper for ICLR publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Unanimous Accept by Reviewers.\"}",
"{\"title\": \"Many thanks for the constructive comments. **New experimental results**\", \"comment\": \"The authors first would like to thank AnonReviewer2 for his thorough evaluation and interesting remarks. In the following, we try to address as clearly as possible the points risen by AnonReviewer2.\\n\\n\\u201cI'm a bit skeptical about the experiments on GANs.:\\u201d\\nAs suggested, we tried to train a WGAN-GP with a ResNet architecture. The new results have been included in the updated version of the paper (see the experimental section 7.2, table 1 and figure 4 for the new results). After few days of experiments, we were able to obtain state-of-the art results on this architecture using ExtraAdam with averaging. As developed in our new experimental section (see the revised paper), we are not claiming that our principled methods are the solution to the many challenges of practical GAN optimization ( it is possible that a very fine-grained hyperparameter tuning on a standard method may give similar results) but after spending a similar limited time budget on optimizing the hyperparameters of each algorithm it looks clear to us that for this task the ExtraAdam method is much more robust to hyperparameter tuning, i.e, it yields reasonably good results for a large range of step-sizes.\\nThe code in pytorch containing all the algorithms presented in the paper as well as the exact experimental setup is ready and will be released after the anonymity period due to the reviewing process.\\n\\n\\u201cProposition 2 is a bit misleading.\\u201d\\nOur goal was not to weaken the value of implicit methods. When a closed form for the implicit updates is known, this method is very effective, but unfortunately for neural network optimization we are not aware of any practical way to implement the implicit steps. More precisely, an implicit step is equivalent to computing a minimization step of the original objective with a l2 regularization (see [1] for more details on implicit SGD and its applications), this subproblem is supposed to be simpler because of the strong convexity of the l2 regularization. Unfortunately, for neural networks, the optimization problem remains non-convex for any small step-size (which are the step-sizes of interest). Hence, we considered that implicit steps were prohibitively expensive for our applications of interest.\\n\\n\\u201cThe theory is presented for variational inequalities with monotone operators.\\u201d\\nAs we mentioned it in the second paragraph of Sec. 2.2 \\u201cStandard GAN objectives are non-convex (i.e. each cost function is non-convex),\\u201d, meaning that they are non-monotone since as we explain it right after the definition of monotonicity, \\u201cIf F can be written as (6), it implies that the cost functions are convex.\\u201c. We added a clarification in the paper right after the definition of monotonicity (page 7) stating that \\u201cGANs parametrized with neural networks lead to non-monotone VIPs\\u201d to clarify this. For further discussion about the extension of the VI to non-monotone operators we refer the reviewer to App. C.3.\\n\\n\\u201cA provably convergent algorithm for that setting is still an open problem, no?*\\u201d\\nTo our knowledge, general convergence results in the context of general non-monotone VIPs is still an open question. The only partial results we are aware of are mentioned in our related work section: \\n-\\u201cfor a new notion of regret minimization, by Hazan et al. (2017) and in the context of GANs by Grnarova et al. 
(2018)\\u201c\\n-\\u201c Mertikopoulos et al. (2018) also independently explored extrapolation providing asymptotic convergence results (i.e. without any rate of convergence) in the context of coherent saddle point. The coherence assumption is slightly weaker than monotonicity\\u201d.\\n\\n\\n\\n[1] TOULIS, Panagiotis, AIROLDI, Edoardo, et RENNIE, Jason. Statistical analysis of stochastic gradient methods for generalized linear models. In : International Conference on Machine Learning. 2014.\"}",
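To make the update schemes under discussion concrete, here is a small numpy sketch of extragradient and of extrapolation from the past on the unconstrained bilinear problem min_x max_y xy, whose joint vector field is F(x, y) = (y, -x); the step size and iteration count are arbitrary, and this is an illustration rather than the paper's implementation:

```python
import numpy as np

def F(w):
    x, y = w
    return np.array([y, -x])   # vector field of min_x max_y x*y

eta = 0.1

# Extragradient: extrapolate with a fresh gradient, then update.
w = np.array([1.0, 1.0])
for _ in range(1000):
    w_half = w - eta * F(w)    # extrapolation step (extra gradient call)
    w = w - eta * F(w_half)    # update step

# Extrapolation from the past: reuse the gradient computed at the
# previous extrapolated point, saving one gradient evaluation per step.
w, g = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(1000):
    w_half = w - eta * g       # extrapolate using the *past* gradient
    g = F(w_half)
    w = w - eta * g

# Both last iterates converge to the saddle point (0, 0), whereas the
# plain simultaneous step w <- w - eta * F(w) diverges on this problem.
```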
"{\"title\": \"Thank you for the knowledgeable review\", \"comment\": \"The authors first would like to thank AnonReviewer1 for his meticulous analysis and his insightful comments.\\n\\n\\u201cI think this behaviour is limited to only linear discriminator/generator and might not extend beyond the linear case.\\u201d\\nThis behavior is a local behavior (and then can be true for a globally non-monotone operator), i.e, if the objective is bilinear in a neighborhood of an (local) equilibrium this behavior is true in that neighborhood. It means that the iterates of the simultaneous method will be expelled from this neighborhood geometrically, the ones of the alternated method will stay in the neighborhood but will not converge to the equilibrium. On the contrary the averaged iterates and the iterate of the extragradient method will converge to the equilibrium. We also think that these results could be generalized to any game which is locally Hamiltonian (see [1]) around the (local) equilibrium. \\n\\n\\u201cExperiments are shown on the DCGAN architecture. \\u201c\\nAs suggested by AnonReviewer2, we tried our methods to train a ResNet architecture with the WGAN-GP objective (see the experimental section 7.2, table 1 and figure 4 for the new results). After few days of experiments, we were able to match current the state-of-the art results of ~8.2 on this architecture by using ExtraAdam with averaging (and without using spectral normalization). Contrary to the previous experiments of the paper, the hyperparameter search was less exhaustive (due to time reason) but a similar time budget was spent for fine tuning each algorithm. We observed that with quite few hyperparameter tuning it was possible to match the state of the art with ExtraAdam. We also observed that ExtraAdam is less sensitive to the choice of learning rate, making the hyperparameter tuning easier and enabling the use of higher learning rate.\\n\\n[1] Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K., & Graepel, T. The Mechanics of n-Player Differentiable Games. ICML 2018\"}",
"{\"title\": \"Thank you for the positive comments\", \"comment\": \"The authors first would like to thank AnonReviewer3 for his careful evaluation and his detailed comments.\\nWe would like to point out that we addressed the points raised by AnonReviewer2 in our updated version, particularly, we tried our methods to train a ResNet architecture with the WGAN-GP objective (see the experimental section 7.2, table 1 and figure 4 for the new results). After few days of experiments, we were able to match the current state-of-the art results of 8.2 on this architecture by using ExtraAdam with averaging (and without using spectral normalization). A similar time budget was spent for fine tuning each algorithm (SimAdam, AltAdam1, AltAdam5, ExtraAdam). We observed that with quite few hyperparameter tuning it was possible to match the state of the art with ExtraAdam. We also observed that ExtraAdam is less sensitive to the choice of learning rate, making the hyperparameter tuning easier and enabling the use of higher learning rate.\"}",
"{\"title\": \"Response to \\\"an interesting perspective and missing important references\\\"\", \"comment\": \"Hello,\\nfirst of all we would like to thank this anonymous reader for his interest on the paper. We agree that [Chiang et al. 2012] is a relevant and we will consider to incorporate it in the revision.\\n\\nHowever, note that we already mentioned in our paper a more general or a more seminal related work:\\n\\n1.\", \"we_are_aware_of_existing_convergence_proof_for_strongly_monotone_vi\": \"We actually mention in Section 3 a seminal work on strongly monotone VIPs: \\u201cThese iterates are known to converge linearly under an additional assumption on the operator\\\\footnote{ Strong monotonicity, a generalization of strong convexity. See \\u00a7A.} (Chen and Rockafellar, 1997)\\u201d.\\n\\nAs you pointed out, Nesterov and Scrimali (2011) consider another algorithm. The Forward-Backward algorithm presented in (Chen and Rockafellar, 1997) is another denomination (a bit more general though) for what we called \\u201cgradient method\\u201d (in Section 3). The Forward-Backward algorithm is more related to our work than Nesterov and Scrimali\\u2019s method is. Moreover, the proof of linear convergence of what we called \\u201cextrapolation from the past\\u201d algorithm (Theorem 1) is non trivial and, to our knowledge, does not directly extend from any existing work.\\n\\n2.\\nWe mention right after (21), \\u201c This update scheme can be related to the optimistic mirror descent (Rakhlin and Sridharan, 2013)\\u201d. Rakhlin and Sridharan (2013) explain in the beginning of Section 2 that \\u201c[they] exhibit a Mirror Descent type method which can be seen as a generalization of the recent algorithm of [9]\\u201d [9] being (Chiang et al. 2012). \\n\\nAs developed in our paper right after the definition of \\u201cextrapolation from the past\\u201d (and pointed out by the anonymous reviewers) we are bringing a new perspective on this method: \\u201cHowever our technique comes from a different perspective, it was motivated by VIP and inspired from the extragradient method\\u201d and \\u201c Using the VIP point of view we are able to prove a linear convergence rate for a projected version of the extrapolation from the past (see details and proof of Theorem 1 in \\u00a7B.3). We also extend these results to the stochastic operator setting in \\u00a74\\u201d.\"}",
"{\"comment\": \"It is an interesting perspective for training GAN.\\n\\nI would like to point out several important references that the paper is missing regarding the theoretical contributions of this work. \\n\\n1. the linear convergence for strongly monotone VI has been proved by Nesterov in 2011, though for a different algorithm. \\nYurii Nesterov and Laura Scrimali. Solving strongly monotone variational and quasi-variational inequalities. Discrete and Continuous Dynamical Systems - A, 2011.\\n\\n2. the idea of using one gradient in the extragradient method has been used in online optimization algorithms, e.g., \\n Chiang, C.K., Yang, T., Lee, C.J., Mahdavi, M., Lu, C.J., Jin, R., Zhu, S.: Online optimization with gradual variations. In: COLT 2012.\", \"title\": \"an interesting perspective and missing important references\"}",
"{\"title\": \"Principled optimization for GANS\", \"review\": \"Summary:\\nThe authors take a variational inequality perspective to the study of the saddle point problem that defines a GAN. By doing so, they are able to profit from the corresponding literature and propose a few methods that are variants of SGD. The authors show in a simple example (a bilinear function) these exhibit better performance than Adam and a basic gradient method. After showing theoretical guarantees of these methods (linear convergence) the authors propose to combine them with existing techniques, and show in fact this leads to better results.\\n\\nEvaluation\", \"this_is_a_very_good_paper_and_i_cannot_but_recommend_its_acceptance\": \"It is clear and well written. \\nIt has the right level of balance between theory and experiments. \\nTheoretical results are far from trivial. \\nI haven't seen something similar.\\nThe authors's do not make overstatements: they do not claim to have solved the GAN problem, but they do report improvements which are due to a thorough analysis (see above points). These results are much appreciated.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A new perspective on optimization problems arising in GANs which helps provide insights into why averaging helps, why certain type of updates are bad, and how extrapolation can be used to obtain even better solvers.\", \"review\": \"This paper looks at solving optimization problems that arise in GANs, via a variational inequality perspective (VIP). VIP entails solving an optimization problem that is related to the first order condition of the optimization problem that we wish to solve. VIP have been very successful in solving min-max style problems. Given that, GAN formulations tend to be min-max style problems (though not necessarily 0 sum) the VIP perspective is very natural, though under-explored in machine learning. Two techniques that have been widely used to solve VIP problems are averaging and extragradient methods. The authors look at a simple GAN setup where both the generator and the discriminator are linear models. In this case two kinds of gradient updates can be derived. First are simultaneous updates, and the other is alternated updates. The authors show that simultaneous updates are not even bounded and diverge to infinity, whereas alternated updates are more stable and stay bounded, but need not necessarily converge. However, I think this behaviour is limited to only linear discriminator/generator and might not extend beyond the linear case. The second key idea is the use of extra-gradient updates. Extra-gradient updates perform an \\\"extra\\\" or fake gradient step to get to a new point, and then kind of retracks back and perform a gradient step using the gradient step obtained from the \\\"extra step\\\". This extra-gradient method is a close approximation to Euler's method, though far more computationally efficient. However, the extragradient step requires one to calculate gradient twice, which can be expensive in large models. For this reason, the authors suggest using gradients from past as the \\\"extragradient\\\" in the extragradient method.\\n\\nFor strongly-monotone operators (a generalization of strongly-convex functions) extrapolation updates are shown to have linear convergence. Furthermore, the authors show that using extrapolation and averaging under the assumption that the operator is monotonic, and using constant step size SGD the rates of convergence are better than the rates obtained using plain SGD with averaging but without extrapolation. Authors also show how one can use these ideas using other first order methods such as ADAM instead of SGD. Experiments are shown on the DCGAN architecture. \\n\\nOn the whole this is a really nice paper, that shows how standard ideas from VIP can be useful for training GANs. I recommend acceptance\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good review on algorithms for VIs in the context of GANs\", \"review\": \"Overall, the paper is well-written and of high quality, therefore I recommend acceptance.\", \"pros\": [\"The work gives an accessible but still rigorous introduction to the literature on VIs which I find highly valuable, as it creates a bridge between the classical mathematical programming literature and applications in AI.\", \"The theory for optimization of VIs with stochastic gradients (though only in monotone setting) was very interesting to me and contains some novel results (Theorem 2, Theorem 4)\"], \"cons\": [\"I'm a bit skeptical about the experiments on GANs. They indicate that for the specific choice of architectures and hyper-parameters \\\"ExtraAdam\\\" works better, but the chosen architectures are not state-of-the art. What would convince me if the algorithm can be used to improve a current best inception score of 8.2 reached with SNGANs. Also with WGAN-GP, scores of ~7.8 are reported which are much higher than the 6.4 reported in the paper. But I understand that producing state-of-the-art inception scores is not the focus of the paper, therefore I would suggest that the authors release an implementation of the proposed new optimizers (ExtraAdam) for a popular DL framework (e.g. pytorch) such that practitioners working with GANs can quickly try them out in a \\\"plug-and-play\\\" fashion.\", \"Proposition 2 is a bit misleading. While for \\\\eta \\\\in (0, 1) implicit and extrapolation are similar, adding the remark that implicit method is stable for any \\\\eta > 0 (and therefore can lead to an arbitrary fast convergence) would give a more balanced view. Right now, only the advantages of extrapolation method and disadvantages of implicit method are mentioned which I find unfair for the implicit method.\", \"The theory is presented for variational inequalities with monotone operators. For clarity it should be mentioned that GANs parametrized with neural nets lead to non-monotone VIs. A provably convergent algorithm for that setting is still an open problem, no?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Hye64hA9tm | Measuring Density and Similarity of Task Relevant Information in Neural Representations | [
"Danish Pruthi",
"Mansi Gupta",
"Nitish Kumar Kulkarni",
"Graham Neubig",
"Eduard Hovy"
] | Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks. However, our understanding of how this task-relevant information is included in these networks is still incomplete. In this paper, we examine two questions (1) how densely is information included in extracted representations, and (2) how similar is the encoding of relevant information between related tasks. We propose metrics to measure information density and cross-task similarity, and perform an extensive analysis in the domain of natural language processing, using four varieties of sentence representation and 13 tasks. We also demonstrate how the proposed analysis tools can find immediate use in choosing tasks for transfer learning. | [
"Neural Networks",
"Representation",
"Information density",
"Transfer Learning"
] | https://openreview.net/pdf?id=Hye64hA9tm | https://openreview.net/forum?id=Hye64hA9tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1l0xiKlxE",
"SyxZ_mXK0m",
"r1gQHcar0X",
"BJgm9FTSCm",
"BkgH9OTSAQ",
"Skgy6Qpr07",
"rklXOqkaaQ",
"SJecygkZp7",
"BklIRmpxa7",
"rklOwd45hm",
"SyefBu5O3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544751861627,
1543218024583,
1542998586817,
1542998410743,
1542998157038,
1542996919064,
1542417002752,
1541627873937,
1541620685961,
1541191776263,
1541085241728
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1490/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1490/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1490/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1490/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1490/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1490/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1490/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1490/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1490/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1490/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper addresses important general questions about how linear classifiers use features, and about the transferability of those features across tasks. The paper presents a specific new analysis method, and demonstrates it on a family of NLP tasks.\\n\\nAll four reviewers (counting the emergency fourth review) found the general direction of research to be interesting and worthwhile, but all four shared several serious concerns about the impact and soundness of the proposed method. \\n\\nThe impact concerns mostly dealt with the observation that the method is specific to linear classifiers, and that it's only applicable to tasks for which a substantial amount of training data is available. \\n\\nAs the AC, I'm willing to accept that it should still be possible to conduct an informative analysis under these conditions, but I'm more concerned about the soundness issues: The reviewers were not convinced that a method based on the counting of specific features was appropriate for the proposed setting (due to rotation sensitivity, among other issues), and did not find that the experiments were sufficiently extensive to overcome these doubts.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting direction, but no compelling new method yet\"}",
"{\"title\": \"Thank you for elaborating and clarifying\", \"comment\": \"I appreciate the time you took to explain your reasoning about simpler methods, and I look forward to the comparisons you mentioned.\\n\\nIt also does sound like you you've already thought about how to adapt these ideas to other settings, which I think will be a good next test for these methods.\"}",
"{\"title\": \"Thank you for your review. Please find a few clarifications below.\", \"comment\": \"Thank you for your feedback. We are glad to know that you find the problem inherently interesting and important.\", \"re\": \"motivation for the restriction to linear models?\\n> Our motivation to use linear models is to keep the setup simple and fast. As the classifiers are able to extract task-specific information and reliably estimate transfer potential; changing to a different classifier like MLP, we believe, shouldn\\u2019t affect our results in a significant way. However, we will empirically verify this, and discuss this in the camera-ready/future versions of the paper.\"}",
"{\"title\": \"Thanks for the review! A few clarifications below\", \"comment\": \"We thank you for your thoughtful review. We are happy to learn that you believe it is an interesting direction that holds potential for high impact.\", \"re\": \"thoughts on how this could be applied outside the context of sentence representations and classification\\n> It is easy to adopt our approach to study the information encoded in the encoders for other problems involving structured prediction (say POS Tagging). Instead of using a decoder that takes in all the dimensions of the encoded input token, one could iteratively select dimensions that provide the highest gains in decoding the right target sequence (say POS tags). Our formulation is very general, and it could potentially also be applied to other modalities like images for tasks like image classification and captioning.\"}",
"{\"title\": \"Thanks for the review! A few clarifications below\", \"comment\": \"We thank the reviewer for their insightful and constructive feedback.\", \"re\": \"(W8) The proposed CLF weight difference method has some concerning aspects as well. For example say we had two task with exact opposite labels. They would have a very low weight difference score though they are ideal representations for each other\\n> You are right. For the very same reason, we take the inverse of the difference of normalized absolute classifier weights (Section 4.2).\", \"regarding_missing_values\": \"As we explain in the paper, classifier weight difference metric is only applicable in cases\\nwhere the number of features between the tasks are of the same size. Thus, 2 sentence input tasks and 1 sentence input tasks cannot be compared using the metric.\", \"references\": \"1.\\u201cWhy Neural Translations are the Right Length\\u201d :http://www.aclweb.org/anthology/D16-1248.pdf\\n2. On the Practical Computational Power of Finite Precision RNNs for Language Recognition: https://arxiv.org/abs/1805.04908\\n3. Learning to Generate Reviews and Discovering Sentiment: https://arxiv.org/abs/1704.01444\\n4. Visualizing and Understanding Recurrent Networks : https://arxiv.org/abs/1506.02078\"}",
"{\"title\": \"Thanks for the review! A few clarifications below.\", \"comment\": \"We thank the reviewer for the detailed and thorough reviews (that too, likely, on a short notice). We wish to clarify the following:\", \"re\": \"explanation of how hyperparameters were chosen, especially the \\\\alpha parameter\\n\\n> We discuss the motivation for the selection of \\\\alpha parameter in section 7.1; sorry for not mentioning it clearly. To determine the parameter, we used the elbow method (used to find an appropriate number of clusters for clustering) and observed that the \\u2018elbow\\u2019 in the relative accuracy vs dimensions plot was around the 80% accuracy mark for most tasks (which can be inferred from in Figure 2).\"}",
"{\"title\": \"Thanks for posting this anyway!\", \"comment\": [\"Your AC\"]}",
"{\"comment\": \"This is an emergency fill-in review that was originally asked for but now is unnecessary as the missing review was posted. Here it is anyways.\", \"review\": \"\", \"this_paper_attempts_to_answer_two_questions\": \"how densely is information included in sentence representations and how similar are encodings from encoders learned from different tasks?\", \"pros\": \"1) This paper analyzes representation from a perspective that seems distinct from previous work. It is somewhat in-line with work stating that NNs are heavily overparameterized, and this work might be considered how overparameterized the representations are for NLP tasks.\\n\\t2) They present a fairly new method for trying to predict what tasks might be useful pretraining for other tasks.\\n\\t3) Their motivation, thought process, and formalism for their method is well-written and very clear, if almost too long.\", \"cons\": \"Viewing this paper as making a methods contribution, I think the proposed approach is somewhat limited: \\n\\t1) the method is only applicable to linear classifiers. I understand practically the decision to use only linear classifiers, but this decision limits the set of representations that can be fairly studied with this method to only representations just before the final linear layer, as using other parts of the model's internal representation are confounded by the fact that they are optimized for use in non-linear models.\\n\\t2) the method is not comparable across tasks with the different input/output format (slightly mitigated by the fact that you can recast tasks, but it's hard to overcome the fundamental limitation of one input vs two input tasks without introducing some weirdness)\\n\\t3) the method seems limited to sentence-to-vector models\\nAlso, it'd be nice to give the upper bound on the quality of the approximation for the proposed greedy algorithm (I imagine it's something like (1 - 1/e) and the runtime.\\n\\t\\nAs an analysis paper, which I think is more compelling than as a methods paper, the results are fairly interesting, but there isn't enough discussion of the results and I have some concerns regarding the experiments:\\n\\t4) it would have been nice to have more quick experiments to sanity check the method for predicting transfer learning. Using the predicted transfer between SST2 and SST5 is a good starting point, but there could, and I think should, have been more, e.g.: between random subsets of the same task, between different genres within MNLI, between MNLI and SNLI. \\n\\t5) without a description of how the transfer learning is done, it's really hard to say how accurate these \\\"gold\\\" rankings of transfer learning are or what confounders are potentially introduced in their transfer learning approach\\n\\t6) I think there needed to be some explanation, even just a one sentence explanation of how hyperparameters were chosen, especially the \\\\alpha parameter. How quickly does the algorithm pick the entire set as \\\\alpha approach 1?\\nThe discussion of results is very short relative to the density of the experiments and plots. 
The early exposition explaining everything mathematically and intuitively is nice, but I think the notation was somewhat superfluous and could have been condensed to include more analysis/discussion of the results.\\n\\n\\nStyle / Presentation\\n\\t1) It'd be nice if each task were the same color across plots in Figure 2\\n\\t2) typos: section 3, p2: \\\"...we first define accuracy score of the best classifier...\\\"; section 1, last p: \\\"...transferring the knowledge acquired therefrom to improve performance...\\\"\\n\\t3) There's some related work analyzing contextual representations (outside sentence-to-vector) that would be worthwhile to mention, e.g., http://aclweb.org/anthology/D18-1179\", \"rating\": \"5\", \"confidence\": \"4\", \"title\": \"Now unneeded emergency fill-in review\"}",
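For reference, the greedy algorithm this review discusses can be sketched in a few lines (hypothetical function and variable names; the paper's exact training/evaluation protocol may differ):

```python
from sklearn.linear_model import LogisticRegression

def greedy_selection(X_tr, y_tr, X_va, y_va, alpha=0.8):
    # Greedy forward selection: repeatedly add the single dimension that most
    # improves validation accuracy, stopping once a fraction `alpha` of the
    # full-representation accuracy is retained.
    def acc(dims):
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:, dims], y_tr)
        return clf.score(X_va[:, dims], y_va)

    n = X_tr.shape[1]
    target = alpha * acc(list(range(n)))
    selected, current = [], 0.0
    while current < target and len(selected) < n:
        current, best = max((acc(selected + [d]), d)
                            for d in range(n) if d not in selected)
        selected.append(best)
    return selected  # len(selected) is the density estimate for the task
```

Note the quadratic number of classifier fits, which is the runtime concern raised above; a (1 - 1/e)-style guarantee would additionally require the accuracy gain to be (approximately) submodular.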
"{\"title\": \"Some nice pieces and ideas but have concerns about methodology and shallow analysis\", \"review\": \"This paper tries to quantify how \\\"dense\\\" representations we need for a specific task -- more specifically, how many dimensions are needed from a given representation (for a given task) to achieve a percentage of the performance of the entire representation. The second thing the paper tries to quantify is how well representations learned for one task can be fine tuned for another. Experiments are conducted with 4 different representation technique on a dozen or so tasks.\", \"quick_summary\": \"While I liked aspects of this -- including the motivation of having a lightweight way of understanding how well representations transfer across tasks, overall my concerns surrounding the methodology and some missing analysis leads me to believe this needs more work before it is ready for publication.\", \"quality\": \"Below average\\nI believe the proposed techniques have some flaws which hurt the eventual method. There are also concerns about the motivations behind parts of the technique.\", \"clarity\": \"Fair\\nThere were some experimental details that were poorly explained but in general the paper was readable.\", \"originality\": \"Fair\\nThere were some nice ideas in the work but I remain concerned about aspects of it.\", \"significance\": \"Below average\\nMy concern is that the flaws in the method do not make it conducive to use as is.\\n\\n\\nStrengths / Things I liked:\\n\\n+ I really liked the motivating problem of being able to (hopefully cheaply / efficiently) estimate transfer potential to understand how well representations will perform on a different task.\\n\\n+ Multiple representations and tasks experimented with\\n\\nWeaknesses / Things that concerned me:\\n(In no specific order)\\n\\n- (W1) Adversely affected by rotations: One of my big concerns with the work is the way the CFS is computed. While it seems ok to estimate these different metrics using only linear models, my concern with this is that the linear models are only given a subset of the **exact** dimensions of the original representations. This is very much unlike the learning objectives of most of these representation learning methods and hence is highly biased and dependent on the actual methods and the random seeds used and the rotations it performs. (In many cases the representations are used starting with a fully connected layer bottom layer on top of the representations and hence rotations of the representations do not affect performance)\\n\\nLet's take an example: Say there is a single dimension of the representation that is a perfect predictor of a task. Suppose we rotated these representations. Now the signal from the original dimension is split across multiple dimensions and hence the CFS may be deceivingly high.\\n\\nTo me this is a big concern as different runs of the same representation technique can likely have very different CFS scores based on initializations and random seeds.\\n\\n- (W2) Related to the last line: I did not see any experiments / analysis showing how stable these different numbers are across different runs of the representation technique. Nor did I see any error bars in the experiments. This again greatly concerned me as I am not certain how stable these metrics are.\\n\\n- (W3) Baselines for transfer learning: I felt this was another notable oversight. 
I would have liked to see results for both trivial baselines like random ranking as well as more informed baselines where we can estimate transfer potential using, say, k representation techniques, and then use that to help us understand how well it would do on the other representations. This latter baseline is a zero-cost baseline as it is not even dependent on the method.\\n\\n- (W4) Metrics for ranking of transfer don't make sense (and some are missing): I also don't understand how \\\"precision\\\" and NDCG are used as metrics. Based on my understanding the authors rank (which itself is questionable) the different tasks in order of potential for transfer and then call this the \\\"gold\\\" set. How are precision and NDCG calculated from this?\\n\\nMore importantly I don't believe looking at rank alone is sufficient since that completely obscures the actual performance numbers obtained via transfer. In most cases I would care about how well my model would perform on transfer, not just which tasks I should transfer from. I would have wanted to understand something like the correlation of these produced scores with the actual ground truth performance numbers.\\n\\n- (W5) Multi-task learning: I did not see any mention or experiments of what can be expected when the representations are themselves trained on multiple tasks. (This seems like something that could easily be done in the empirical analysis and would provide richer empirical signals.)\\n\\n- (W6) Motivation for CFS: I still don't fully understand the need to understand the density of the representation (especially in the manner proposed in the paper). Why is this an important problem? Perhaps expanding on this would be helpful\\n\\n- (W7) Alternatives to CFS / Computational concerns: A big concern I had was the computational expense of the proposed approach. Unfortunately I did not see any discussion about this in the paper or empirically.\\n\\nI find this striking because I can easily come up with cheaper alternatives to get at this \\\"density\\\". For example, using LASSO / LARS-like methods, you can perhaps figure out a good reduced dimension set more efficiently.\\n\\nIf I were to go through this computation anyway, why not just train a smaller version of that representation technique instead and **directly** see how well it can encode data in k dimensions via that technique / for that task?\\n\\nAlternatively why not try using a factorization technique to reduce the rank and then see how well the method does for different ranks?\\n\\n- (W7b) Likewise I wonder if we could just measure transfer more directly as well and why we need to go via these CFS sets\\n\\n- (W8) The proposed CLF weight difference method has some concerning aspects as well. For example, say we had two tasks with exactly opposite labels. They would have a very low weight difference score though they are ideal representations for each other. Likewise looking at a difference of weight vectors seems arbitrary in other ways as well.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting approach to an important problem; but limited in scope and relevant comparisons\", \"review\": \"MEASURING DENSITY AND SIMILARITY OF TASK RELEVANT INFORMATION IN NEURAL REPRESENTATIONS\", \"summary\": \"This work attempts to define two kinds of metrics (metrics for information density and for information similarity) for the sake of automatically detecting similarity between tasks so that transfer learning can be done more efficiently. The concepts are clearly explained, and the metric for information density seems to match up with intuitions coming out of forward selections approaches. The metric for information transfer seems to be the commonplace metric that other works default to when they show that pre-trained representations are effective on downstream tasks. It is not clear that the notion of similarity through classifier weights makes sense, but see below for clarification questions. The problem addressed (automatic similarity scoring of tasks) is important for transfer learning, and thus the results have potential to be very impactful if they generalize to other kinds of tasks; as is, they seem to apply only to classification tasks, but that is a good step.\", \"pros\": \"Clearly written; experiments on the datasets chosen do seem to suggest that the proposed methods have potential. Brings in nice intuition from forward feature selection. An important problem with potential for high impact.\", \"cons\": \"It is not clear to me that the classifier difference metric is well-defined. Is there a constraint on the CFS and classifiers that ensure the difference between the weights really captures what is suggested? Is it not the case that classifier weights could come out quite different despite the tasks being quite similar if the linear classifiers learned to capitalize on dissimilar, yet equally fruitful patterns in the input features?\\n\\nDo you have thoughts on how this could be applied outside the context of sentence representations and further outside the context of classification? Those seem to be quite limiting features of these methods, which is not to say that they are not useful in that realm, but only to clarify my understanding of their possible scope of application.\\n\\nThese classification datasets are often so close, that I do wonder whether even simpler methods would work just as well. For example, clustering on bags-of-words might also show that SST, SST-fine, and IMDb are close/similar/transferable. The same could be said for SICK and SNLI. It would be nice to see a comparison to such baselines in order to get a sense of how the proposed methods give insights that other unsupervised or supervised methods might give just as well. Otherwise, it is hard to tell how significant these correlations are. Since the end goal is to determine transferability of tasks and not the methods, it does seem like there are simpler baselines that you could compare against.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting direction, but limited apparent utility\", \"review\": [\"This paper proposes simple metrics for measuring the \\\"information density\\\" in learned representations. Overall, this is an interesting direction. However there are a few key weaknesses in my view, not least that the practical utility of these metrics is not obvious, since they require supervision in the target domain. And while there is an argument to be made for the inherent interestingness of exploring these questions, this angle would be more compelling if multiple encoder architectures were explored and compared.\", \"The overarching questions that the authors set out to answer: How task-specific information is stored and to what extent this transfers, is inherently interesting and important.\", \"The proposed metrics and simple and intuitive.\", \"It is interesting that a few units seem to capture most task specific information.\", \"The envisioned scenario (and hence utility) of these metrics is a bit unclear to me here. As noted by the authors, transfer is most attractive in low-supervision regimes, w.r.t. the target task. Yet the metrics proposed depend on supervision in the target domain. If we already have this, then -- as the authors themselves note -- it is trivial to simply try out different source datasets empirically on a target dev set. It is argued that this is an issue because it requires training 2n networks, where n is the number of source tasks. I am unconvinced that one frequently enough has access to a sufficiently large set of candidate source tasks for this to be a real practical issue.\", \"The metrics are tightly coupled to the encoder used, and no exploration of encoder architectures is performed. The LSTM architecture used is reasonable, but it would be nice to see how much results change (if at all) with alternative architectures.\", \"The CFS metric depends on a hyperparameter (the \\\"retention ratio\\\"), which here is arbitrarily set to 80% without any justification.\", \"What is the motivation for the restriction to linear models? In the referenced probing paper, for example, MLPs were also used to explore whether attributes were coded for 'non-linearly'.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1g2NhC5KQ | Multiple-Attribute Text Rewriting | [
"Guillaume Lample",
"Sandeep Subramanian",
"Eric Smith",
"Ludovic Denoyer",
"Marc'Aurelio Ranzato",
"Y-Lan Boureau"
] | The dominant approach to unsupervised "style transfer" in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style". In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations. We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation. Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space. Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes. | [
"controllable text generation",
"generative models",
"conditional generative models",
"style transfer"
] | https://openreview.net/pdf?id=H1g2NhC5KQ | https://openreview.net/forum?id=H1g2NhC5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hye3dRXleN",
"HklLDIyapm",
"HygNdVy66m",
"r1eBLQ1pT7",
"Skxizm1aaQ",
"rylhcb16TX",
"BkekG-1TaQ",
"H1xk7NH9pX",
"SkgGgYEt6m",
"Hkgb4vEFpQ",
"BkxKA_vRnX",
"rylWfjH0hQ",
"B1la-jxRhQ",
"S1ll1KQ5hX",
"H1lXb9okiX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544728180081,
1542415966124,
1542415467634,
1542415180736,
1542415122766,
1542414740314,
1542414599096,
1542243351329,
1542174954137,
1542174504864,
1541466321425,
1541458697005,
1541438212540,
1541187799957,
1539451386550
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1489/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1489/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1489/AnonReviewer3"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1489/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1489/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1489/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper shows how techniques introduced in the context of unsupervised machine translation can be used to build a style transfer methods.\", \"pros\": [\"The approach is simple and questions assumptions made by previous style transfer methods (specifically, they show that we do not need to specifically enforce disentanglement).\", \"The evaluation is thorough and shows benefits of the proposed method\", \"Multi-attribute style transfer is introduced and benchmarks are created\", \"Given the success of unsupervised NMT, it makes a lot of sense to see if it can be applied to the style transfer problem\"], \"cons\": [\"Technical novelty is limited\", \"Some findings may be somewhat trivial (e.g., we already know that offline classifiers are stronger than the adversarials, e.g., see Elazar and Goldberg, EMNLP 2018).\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A simple and effective approach to style transfer based on recent developments in unsupervised NMT\"}",
"{\"title\": \"Regarding the convergence of the model\", \"comment\": \"Thank you for your question. As AnonReviewer3 mentioned, simply copying input sentences wouldn\\u2019t satisfy the auto-encoding part of equation (1), as noise has been added to sentences. However, it would indeed satisfy the back-translation loss.\\n\\nThe idea of denoising here is that by removing random words from a sentence, we hope to remove words that are required to infer the style.\\nFor instance, if the input sentence is: \\u201cthis place is awful\\u201d\", \"and_that_the_noised_sentence_becomes\": \"\\u201cthis place is <BLANK>\\u201d,\\nthe model will be trained to recover \\u201cthis place is awful\\u201d\", \"from\": \"(\\u201cthis place is <BLANK>\\u201d, ATTRIBUTE=NEGATIVE)\\n\\nSince there might be a lot of occurrences of \\u201cthis place is amazing\\u201d in the dataset, the model will have to learn to consider the provided attribute in order to give a high probability to \\u201cawful\\u201d without penalizing the perplexity on the positive reviews.\\n\\nThe general argument is that the decoder needs to learn to use the attribute information whenever the input to the system is very noisy. This applies as well when inputs come from the back-translation process. Noisy inputs are produced in the back-translation process at the beginning of training when the model is insufficiently trained and does not generate well, and when generations are produced at high temperature. When using high softmax temperatures, the model tends to exhibit lower content preservation and higher attribute transfer since the generated samples are very noisy and it is therefore more difficult to recover the original input in the back-translation process while the decoder is forced to better leverage the attribute information.\"}",
"{\"title\": \"Results of the comparison\", \"comment\": \"Thank you for your comment. We\\u2019ve added a comparison with Hu et al., 2017 in the revised paper using the code you mentioned. We found that this model obtained a good accuracy / BLEU score, but with pretty high perplexity. We\\u2019ve also added a reference to Yang et al in the related work section, thank you for pointing this out.\"}",
"{\"title\": \"Thank you for your review and raising interesting questions about this work. (part 2)\", \"comment\": \"\\u201cI think the last and most critical question is what the expected style-transferred rewriting look like. What level or kind of \\\"content-preserving\\\" do we look for?\\u201d - This is a great question, and a fundamental open research problem which, as far as we know, does not have a clear answer in existing literature. In our paper, we view this line of research as looking for better ways to generate rewrites of text along certain directions, and exactly the \\u201ckind\\u201d of what content is being preserved would ideally be one of the \\u201cknobs\\u201d that a system can control. The phrase \\u201cstyle transfer\\u201d is useful to refer to previous work that have adopted it from the image domain, but its framing is a bit narrow for the scope of rewriting types our work addresses. We believe that the trade-off between attribute control and content preservation should depend on two factors 1) the eventual use case of such a system (and style transfer is one use case, but another one would be to obtain more \\u201cinteresting\\u201d and varied generations by augmenting a retrieval system with rewriting capabilities in a controllable way, and 2) the nature of attributes being controlled. Firstly, in contrast to previous work, we present means to control this inherent trade-off in the form of a latent-space pooling operator which can adapted to a particular use case. Secondly, the proposed method is fundamentally one that learns an unsupervised mapping between two or more domains of text, and the nature of the learned mapping will certainly depend on the nature of the domains. For example, it is often possible to map between the positive and negative domains by replacing a few words or small phrases and as a result, we can expect our models to preserve a lot of the input. By contrast, attributes such as one\\u2019s age aren\\u2019t as \\u201clocal\\u201d and might require rewriting more content to successfully be altered. In that case, the content that is being preserved might be the general structure of the text, its sentiment, etc. To make the trade-off clearer, we have added a figure to the manuscript showing how it varies across training (Fig. 1 in the appendix); we also include illustrations of rewrites at different trade-off levels in Table 13.\\n\\n\\u201cTowards the end of Section 3, it says that \\\"without back-propagating through the back-translation generation process\\\". Can you elaborate on this and the reason behind this choice?\\u201d - Back-propagating through the back-translation process would require computing gradients through a sequence of discrete actions since generations are sampled from the decoder. While this may be achieved via policy-gradient methods such as REINFORCE or other approximations like the Gumbel-softmax trick, these have been known to perform very poorly in high dimensional action spaces due to high variance of the gradient estimates. This approach also has the disadvantage of biasing the model towards the degenerate solution of copying the input while ignoring attribute information entirely to satisfy the cycle-consistency objective, since the gradients flow through the entire cycle, which is what we observed in practice.\\n\\n\\u201cWhat does it mean by \\\"unknown words\\\" in \\\"... 
with 60k BPE codes, eliminating the presence of unknown words\\\" from Section 4?\\u201d - We meant that by using BPE, we can operate without replacing infrequent words with an <unk> token -- we do not have unknown words because these are decomposed into subword units that belong to the BPE dictionary.\\n\\n\\u201cwhat is the difference among the three \\\"Ours\\\" model?\\u201d - These models differ in the choice of hyperparameters (pooling kernel width and back-translation temperature) to demonstrate our model\\u2019s ability to control the content preservation vs attribute control trade-off. We have clarified this in the table caption.\\n\\n\\u201cthe perplexity of \\\"Input Copy\\\" is very high compared with generated sentences.\\u201d - This is true and we believe that this is a consequence of the fact that there is more diversity in the input reviews than in typical generations from ours and other systems. This lack of diversity is typical for models decoding with beam search, which leads to \\\"mode seeking behavior\\\" wherein the output generations contain fragments that occur most frequently in the training set. This results in the pre-trained LM assigning high likelihoods to these samples.\\n\\n\\u201cwhat does the \\\"attention\\\" refer to?\\u201d - The row in Table 7 that corresponds to \\\"-attention\\\" refers to a model that was trained without an attention mechanism in a vanilla sequence-to-sequence fashion, using the last hidden state of the encoder by concatenating it to the word embeddings at every time step of the decoder.\\n\\n\\u201cIn the supplementary material, there are lambda_BT and lambda_AE. But there is only one lambda in the loss function (1).\\u201d -Thank you for spotting this typo. We fixed this in the revised version of the paper.\"}",
"{\"title\": \"Thank you for your review and raising interesting questions about this work. (part 1)\", \"comment\": \"\\u201cIs there any difference between the two discriminators/classifiers?\\u201d - The discriminator and classifier have completely identical architectures - a 3 layer MLP with 128 dimensional hidden layers and LeakyReLU activations (now clarified in the model architecture paragraph in Section 3.3). We used two different terms to describe them since the classifier is fit post-hoc and doesn\\u2019t adapt to the encoder representations in a min-max fashion while the discriminator does. Moreover, the classifier is fully trained on the final encoder representations, while the discriminator is \\u201cchasing\\u201d them without fully training after each and every update of the encoder representations. This is indeed a bit confusing, and we have clarified this in the paper. While a discriminator trained more thoroughly at each iteration might disentangle representations more, our goal was not to look at whether disentangled representations can result in better performance, but whether current training practices actually result in disentangled representations (see responses below as well).\\n\\n\\u201cthere should be enough signal from the discriminator to adapt the encoder in order to learn a more disentangled representation.\\u201d - This is a valid concern, but the experiments we ran suggest that this does not change the main observation. For instance, we also experimented with larger coefficients of adversarial training of 1.0 and 10.0 (as well as no adversarial training on the other end of the spectrum). While the attribute recovery accuracy drops a little at higher coefficients, it is still much higher than the discriminator accuracy during training. Also, models trained with high adversarial training coefficients have extremely high reconstruction and back-translation losses. Results are presented below, for better formatting please refer to the revised version of our paper.\\n\\n Coef Disc(acc) Clf(acc)\\n 0 &\\t89.45% & 93.8%\\n 0.001 &\\t85.04% & 92.6%\\n 0.01 &\\t75.47% & 91.3%\\n 0.03 &\\t61.16% & 93.5%\\n 0.1 &\\t57.63% & 94.5%\\n 1.0 &\\t52.75% & 86.1%\\n 10 &\\t51.89% & 85.2%\\n\\n\\u201cOn the other hand, this does not answer the question if a \\\"true\\\" disentangled representation would give better performance. The inferior performance from the adversarially learned models could be because of the \\\"entangled\\\" representations.\\u201d - We agree completely. Our point is not that disentangled representations would not lead to good performance, but simply that disentanglement doesn't happen in practice with the kind of adversarially trained models typically used for this problem. We have made changes to the writing to make our stance clearer.\\n\\n\\u201cRequest for ablation study on pooling and other architectural design choices.\\u201d - In addition to the averaged attribute embeddings, we also explored using a separate embedding for each attribute combination in the cross-product of all possible attribute values. We found this to have similar performance to our averaging method. We decided against concatenating embeddings because we use the attribute embedding as the first input token to the decoder, and using a concatenation would mean dividing the embedding size for each attribute value by the number of attributes, to maintain to overall embedding size. This wouldn\\u2019t scale as well to settings with many possible attributes. 
We settled on the attribute embedding averages because of its simplicity.\\n\\nWe have included a plot (Figure 1) that shows the evolution of attribute control (accuracy) and content preservation (BLEU) over the course of training as a function of the pooling kernel width. This demonstrates the latent space pooling operator\\u2019s ability to trade off self-BLEU and accuracy - larger kernel widths favor attribute control while smaller ones favor content preservation.\\n\\n\\u201cAs long as the \\\"back-translation\\\" gives expected result, it seems not necessary to have \\\"meaningful\\\" or hard \\\"content-preserving\\\" latent representations when the generator is powerful enough.\\u201d\\nWe observed that operating without a DAE objective didn\\u2019t work since the model needs to be bootstrapped to be capable of producing outputs that are at least somewhat close to the original input before the back-translation process can take over. At the beginning of training, it is nearly impossible for the model to be able to recover the original input starting from a nearly random sequence of words. But it\\u2019s indeed true that later on the back-translation loss is enough: in practice, we in fact removed the DAE objective by progressively decreasing lambda_AE from 1 to 0 over the first 300,000 iterations (c.f. Appendix section), even though we didn\\u2019t observe a significant difference compared to simply fixing lambda_AE to 1.\"}",
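A minimal sketch of the temporal max-pooling referred to in this reply (the padding choice and function name are illustrative assumptions; the paper applies this to the encoder's latent states):

```python
import numpy as np

def temporal_max_pool(states, width):
    # Max-pool encoder hidden states over non-overlapping windows of `width`
    # time steps. Wider windows discard more positional detail, trading
    # content preservation (self-BLEU) for stronger attribute control.
    T, d = states.shape
    pad = (-T) % width
    if pad:
        states = np.vstack([states, np.full((pad, d), -np.inf)])
    return states.reshape(-1, width, d).max(axis=1)
```

With `width=1` this is the identity (favoring content preservation, and the "copy mode" failure discussed elsewhere in this thread), while `width=8` corresponds to the window size reported to give a good trade-off.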
"{\"title\": \"Thank you for the review & comments\", \"comment\": \"To make the architecture clearer, we updated the paper and added a paragraph, describing the architecture of the model in the \\u201cImplementation\\u201d section. That paragraph was previously in the appendix -- we hope inserting it into the main body makes the paper easier to follow.\\n\\nAs for our additions to the model, the methodology we used is similar to previous approaches in unsupervised machine translation, but with two key differences.\\n\\nFirst, our approach can handle multiple attributes, while previous approaches usually only consider two different domains (one for the positive reviews, and one for the negative reviews, for instance) and cannot be easily extended to multiple domains as they typically require one encoder and one decoder per domain. Our approach can handle multiple attributes at the same time, including categorical attributes (e.g. Table 9 in the Appendix).\\n\\nAlso, we introduced a pooling operator and we found it to be critical in our experiments. The problem we observed is that without it, the model has a tendency to converge to the \\u201ccopy mode\\u201d, where it simply copy words one by one, without taking the attribute input into consideration. We included a plot in the ablation study (Figure 1) that shows the evolution of the attribute transfer accuracy and the content preservation over training, for different pooling layer configurations. We can see that without the pooling operator, the model directly converges to the \\u201ccopy mode\\u201d, with a self-BLEU close to 90 after only a few epochs. A pooling operator with a window of size 8 not only alleviates this issue, but it also provides intermediate models during training with different trade-offs between content-preservation and attribute transfer.\"}",
"{\"title\": \"Thank you for the review!\", \"comment\": \"Thank you for your review. We are glad to see that you liked the paper and it's contributions.\"}",
"{\"comment\": \"Thanks for the comment! Yes, the denoising auto-encoder part could prevent the directly copying. However, it still couldn't guarantee the generated styles. If the auto-encoder totally ignores the style embedding and only learns to reconstruct the input sentences (even with noises), the equation. (1) is still converged. Hope the authors would discuss this issue.\", \"title\": \"The denoising auto-encoder part still couldn't guarantee the styles\"}",
"{\"title\": \"the denoising matters here\", \"comment\": \"The first part of the loss function (1), the denoising auto-encoder part, would help prevent the situation described. Since the input would have noise added, the simple copy operation can not be learned directly. But I still prefer the authors would give some discussions and maybe quantitative results regarding this.\"}",
"{\"comment\": \"Thanks for the interesting work. The results look amazing. I have a question about the loss function (Equation. 1). The loss function only consists of the reconstruction loss and another type reconstruction loss related to the back-translation, and there is no adversarial loss or classification loss to regularize the generated styles. How do you guarantee the generated sentences have correct styles?\\n\\nI can imagine that there is a local minimum of Equation. 1, where the decoders completely ignores the input style embedding and directly copy the input sentence. In this case, no matter which style you used, the input and output are the same, and the loss is zero. I'm wondering how do you prevent this situation happens?\\n\\nLooking forward to seeing the answers!\", \"title\": \"Convergence question in equation. (1)\"}",
"{\"comment\": \"There are also some relevant works that are missing in the references such as:\\nUnsupervised Text Style Transfer using Language Models as Discriminators by Yang etc al.\", \"title\": \"missing references\"}",
"{\"comment\": \"Thanks for the interesting work. It'd be nice to see an empirical comparison of this work to (Hu el al., 2017) which has released code here: https://github.com/asyml/texar/tree/master/examples/text_style_transfer. Based on my experience, (Hu el al., 2017) is usually a strong baseline on many datasets.\", \"title\": \"Empirical comparison to (Hu el al., 2017)\"}",
"{\"title\": \"This paper presents a model for text rewriting for multiple attributes.\", \"review\": \"This paper presents a model for text rewriting for multiple attributes, for example gender and sentiment, or age and sentiment. The contributions and strengths of the paper are as follows.\\n\\n* Problem Definition\\nAn important contribution is the new problem definition of multiple attributes for style transfer. While previous research has looked at single attributes for rewriting, \\\"sentiment\\\" for example, one could imagine controlling more than one attribute at a time. \\n\\n* Dataset Augmentation\\nTo do the multiple attribute style transfer, they needed a dataset with multiple attributes. They augmented the Yelp review dataset from previous related paper to add gender and restaurant category. They also worked with microblog dataset labeled with gender, age group, and annoyed/relaxed. In addition to these attributes, they modified to dataset to include longer reviews and allow a larger vocabulary size. In all, this fuller dataset is more realistic than the previously release dataset.\\n\\n* Model\\nThe model is basically a denoising autoencoder, a well-known, relatively simple model. However, instead of using an adversarial loss term as done in previous style transfer research, they use a back-translation term in the loss. A justification for this modeling choice is explained in detail, arguing that disentanglement (which is a target of adversarial loss) does not really happen and is not really needed. The results show that the new loss term results in improvements.\\n\\n* Human Evaluation\\nIn addition to automatic evaluation for fluency (perplexity), content preservation (BLEU score), and attribute control (classification), they ask humans to judge the output for the three criteria. This seems standard for this type of task, but it is still a good contribution.\\n\\nOverall, this paper presents a simple approach to multi-attribute text rewriting. The positive contributions include a new task definition of controlling multiple attributes, an augmented dataset that is more appropriate for the new task, and a simple but effective model which produces improved results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good work but better presentation needed\", \"review\": \"This work proposes a new model that controls several factors of variation in textual data where the condition on disentanglement is replaced with a simpler mechanism based on back-translation. It allows control over multiple attributes, and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space.\\n\\nOne of the major arguments is it is unnecessary to have attribute-disentangled latent representations in order to have good style-transferring rewriting. In Table 2, the authors showed that \\\"a classifier that is separately trained on the resulting encoder representations has an easy time recovering the sentiment\\\" when the discriminator during training has been fooled. Is there any difference between the two discriminators/classifiers? If the post-fit classifier on top of the encoder representation can easily predict the correct sentiment, there should be enough signal from the discriminator to adapt the encoder in order to learn a more disentangled representation. On the other hand, this does not answer the question if a \\\"true\\\" disentangled representation would give better performance. The inferior performance from the adversarially learned models could be because of the \\\"entangled\\\" representations.\\n\\nAs the author pointed out, the technical contributions are the pooling operator and the support for multiple attributes since the loss function is the same as that in (Lample et. al 2018). These deserve more elaborated explanation and quantitative comparisons. After all, the title of this work is \\\"multiple-attribute text rewriting\\\". For example, the performance comparison between the proposed how averaged attribute embeddings and simple concatenation, and the effect of the introduced trade-off using temporal max-pooling.\\n\\nHow important is the denoising autoencoder loss in the loss function (1)? From the training details in the supplementary material, it seems like the autoencoder loss is used as \\\"initialization\\\" to some degree. As pointed out by the authors, the main task is to get fluent, attribute-targeted, and content-preserving rewriting. As long as the \\\"back-translation\\\" gives expected result, it seems not necessary to have \\\"meaningful\\\" or hard \\\"content-preserving\\\" latent representations when the generator is powerful enough.\\n\\nI think the last and most critical question is what the expected style-transferred rewriting look like. What level or kind of \\\"content-preserving\\\" do we look for? In Table 4, it shows that the BLEU between the input and the referenced human rewriting is only 30.6 which suggest many contents have been modified besides the positive/negative attribute. This can also be seen from the transferred examples. In Table 8, one of the Male example: \\\"good food. my wife and i always enjoy coming here for dinner. i recommend india garden.\\\" and the Female transferred rewriting goes as \\\"good food. my husband and i always stop by here for lunch. i recommend the veggie burrito\\\". It's understandable that men and women prefer different types of food even though it is imagination without providing context. But the transfer from \\\"dinner\\\" to \\\"lunch\\\" is kind of questionable. 
Is it necessary to change the content which is irrelevant to the attributes?\", \"other_issues\": [\"Towards the end of Section 3, it says that \\\"without back-propagating through the back-translation generation process\\\". Can you elaborate on this and the reason behind this choice?\", \"What does it mean by \\\"unknown words\\\" in \\\"... with 60k BPE codes, eliminating the presence of unknown words\\\" from Section 4?\", \"There is no comparison with (Zhang et. al. 2018), which is the \\\"most relevant work\\\".\", \"In Table 4, what is the difference among the three \\\"Ours\\\" model?\", \"In Table 4, the perplexity of \\\"Input Copy\\\" is very high compared with generated sentences.\", \"In Table 7, what does the \\\"attention\\\" refer to?\", \"In the supplementary material, there are lambda_BT and lambda_AE. But there is only one lambda in the loss function (1).\", \"Please unify the citation style.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Impressive experiments, but hard to determine how much is methodologically new here\", \"review\": \"The paper proposes \\\"style transfer\\\" approaches for text rewriting that allow for controllable attributes. For example, given one piece of text (and the conditional attributes associated with the user who generated it, such as their age and gender), these attributes can be changed so as to generate equivalent text in a different style.\\n\\nThis is an interesting application, and somewhat different from \\\"style transfer\\\" approaches that I've seen elsewhere. That being said I'm not particularly expert in the use of such techniques for text data.\\n\\nThe architectural details provided in the paper are quite thin. Other than the starting point, which as I understand adapts machine translation techniques based on denoising autoencoders, the modifications used to apply the technique to the specific datasets used here were hard to follow: basically just a few sentences described at a high level. Maybe to somebody more familiar with these techniques will understand these modifications fully, but to me it was hard to follow whether something methodologically significant had been added to the model, or whether the technique was just a few straightforward modifications to an existing method to adapt it to the task. I'll defer to others for comments on this aspect.\\n\\nOther than that the example results shown are quite compelling (both qualitatively and quantitatively), and the experiments are fairly detailed.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1g2V3Cct7 | Experience replay for continual learning | [
"David Rolnick",
"Arun Ahuja",
"Jonathan Schwarz",
"Timothy P. Lillicrap",
"Greg Wayne"
] | Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution - that of using experience replay buffers for all past events - with a mixture of on- and off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one. | [
"continual learning",
"catastrophic forgetting",
"lifelong learning",
"behavioral cloning",
"reinforcement learning",
"interference",
"stability-plasticity"
] | https://openreview.net/pdf?id=S1g2V3Cct7 | https://openreview.net/forum?id=S1g2V3Cct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlxA3vJx4",
"S1e7Vx12JV",
"rklWJVN5Rm",
"B1WC7GV9C7",
"BJxnm07qC7",
"Skg0ier92Q",
"Hyg6qmSY27",
"Hklq4yMYn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544678599990,
1544445994776,
1543287769132,
1543287334411,
1543286308345,
1541193894445,
1541129108843,
1541115698211
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1488/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1488/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1488/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1488/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper and revisions have some interesting insights into using ER for catastrophic forgetting, and comparisons to other methods for reducing catastrophic forgetting. However, the paper is currently pitched as the first to notice that ER can be used for this purpose, whereas it was well explored in the cited paper \\\"Selective Experience Replay for Lifelong Learning\\\", 2018. For example, the abstract says \\\"While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution \\u2013 that of using experience replay buffers for all past events\\\". It seems unnecessary to claim this as a main contribution in this work. Rather, the main contributions seem to be to include behavioural cloning, and do provide further empirical evidence that selective ER can be effective for catastrophic forgetting.\\n\\nFurther, to make the paper even stronger, it would be interesting to better understand even smaller replay buffers. A buffer size of 5 million is still quite large. What is a realistic size for continual learning? Hypothesizing how ER can be part of a real continual learning solution, which will likely have more than 3 tasks, is important to understand how to properly restrict the buffer size.\\n\\nFinally, it is recommended to reconsider the strong stance on catastrophic interference and forgetting. Catastrophic interference has been considered for incremental training, where recent updates can interfere with estimates for older (or other values). This definition does not precisely match the provided definition in the paper. Further, it is true that forgetting has often been used explicitly for multiple tasks, trained in sequence; however, the issues are similar (new learning overriding older learning). These two definitions need not be so separate, and further it is not clear that the provided definitions are congruent with older literature on interference. \\n\\nOverall, there is most definitely useful ideas and experiments in this paper, but it is as yet a bit preliminary. Improvements on placement, motivation and experimental choices would make this work much stronger, and provide needed clarity on the use of ER for forgetting.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Some insights into using ER to reduce catastrophic forgetting, but requires a bit better placement\"}",
"{\"title\": \"Summary of changes made in revision\", \"comment\": \"We are grateful to all reviewers for their time and helpful suggestions. Our goal in this paper was to demonstrate that a surprisingly simple replay-based approach (CLEAR) dramatically reduces catastrophic forgetting in reinforcement learning tasks, improving upon existing methods while not requiring information about task boundaries. The reviewers agree that the paper is \\\"easy to follow\\\" (R1) and that the method \\\"achieves good performance\\\" (R2) and is \\\"simple while delivering interesting results\\\" (R3).\\n\\nR2 asks for clarification regarding the novelty of CLEAR over existing methods. We hope that the additional supplementary figures emphasize the improvement in performance offered by CLEAR, and in our response to R2 below we detail how each of the ingredients of CLEAR (on-policy learning, off-policy learning, and behavioral cloning) inform stability and/or plasticity. We also emphasize that replay has been thoroughly investigated in other contexts but (remarkably) has been largely ignored in the context of catastrophic forgetting, where we show that it is extremely powerful.\\n\\nR1 asks for clarification on several questions regarding our experimental setup, and suggests tables for numerical comparison of CLEAR to other methods, which we have added in our revision. We have endeavored to answer the specific questions at length in our response to R1 below.\\n\\nAs we understand, R3's main concern is that the reinforcement learning experiments involve synthetic tasks, which we have motivated in our response to R3 below.\", \"changes_made_in_our_revision\": [\"We have included in Appendix B the results of a new experiment, similar to that of the probe task (Figure 5). In the new experiment, we show that when using pure off-policy learning (instead of a mixture of on- and off-policy learning, as in CLEAR), the probe task does indeed decrease in performance when other tasks are learned before it. CLEAR avoids this failure mode by blending new experience with replay.\", \"We have added in Appendix C several new visualizations of our main results. In these figures, we plot the cumulative reward, which captures both performance and resistance to catastrophic forgetting.\", \"We have also added in Appendix C tables that quantitatively summarize the considerable benefit obtained by using CLEAR.\"]}",
"{\"title\": \"Response to review, and notes on revision\", \"comment\": \"We thank the reviewer for a thoughtful reading and for generally positive comments. As we understand, the reviewer's main concern is that the environments and tasks are synthetic. We would like to justify why we chose these particular environments and tasks. Our interest has been in reinforcement learning (RL) because, in the supervised learning case, catastrophic forgetting is generally resolved merely by storing a dataset. In RL, we view the synthetic problem setting as an absolutely mandatory first step to take before working on an application domain like robotics. Our experiments are in keeping with the standards in this field: work on continual learning to date in RL has all been on synthetic tasks, and we compare to this work, which allows us to provide effective benchmarks. The fact that our environments are simulated does not however imply that they are simple: they are state-of-the-art 3D environments, with DMLab introduced earlier this year and currently representing an advanced benchmark for RL systems.\\n\\nWith respect to potential biological connections, we have devoted only a paragraph to this matter, with the purpose of emphasizing that we think such connections are NOT present, as the reviewer rightly states.\\n\\nIn the revision, we have included in Appendix B the results of a new experiment, similar to that of the probe task (Figure 5). In the new experiment, we show that when using pure off-policy learning (instead of a mixture of on- and off-policy learning, as in CLEAR), the probe task does indeed decrease in performance when other tasks are learned before it. CLEAR avoids this failure mode by blending new experience with replay.\\n\\nWe have also, in our revision, added in Appendix C several new visualizations of our main results. In these figures, we plot the cumulative reward, which captures both performance and resistance to catastrophic forgetting, and we include tables that show the considerable benefit obtained by using CLEAR.\"}",
"{\"title\": \"Response to review, and notes on revision\", \"comment\": \"We thank the reviewer for these comments and have made additional changes to the paper to address them, as we describe below.\", \"q1\": \"We presented tasks cyclically in sequence for several reasons. Presenting all tasks before returning to any one of them to represents a \\u201cworst-case\\u201d scenario for catastrophic forgetting and tests our method in the hardest situation. Our experiment is designed to address exactly the scenario the reviewer describes -- spending a lot of time on the other tasks before returning to a specific one. The time spent on each task in the cycle is actually quite long, and if one imagines cutting off each figure after the first iteration of the cycle, one would end up with the figures the reviewer suggests. These figures would already provide ample support for all of our conclusions regarding CLEAR.\\n\\nFurther, presenting tasks cyclically is a natural model of learning in which similar experiences are revisited over and over. Early researchers of human memory, e.g. Ebbinghaus, considered memorization tasks in which memorized items were recurrent and revisited several days in a row or over longer inter-experiment intervals. Recurrent study experiments permit the evaluation of several effects, including the phenomenon of savings, in which forgotten memories are rapidly re-acquired with marginal subsequent study. Here, we are also interested in demonstrating that repeated exposure to a task can be used to train the behavior of an agent.\", \"q2\": \"This is an interesting phenomenon! It is a demonstration of genuine constructive interference or positive transfer in which learning other tasks promotes coherent exploratory behavior in natlab_varying_map_randomize. This interference does not detract from the conclusions of the figure that catastrophic interference is present in this as in other tasks, that CLEAR fixes the problem, and that the ability of CLEAR to learn from new experience is unaffected by the amount of information already in the replay buffer.\", \"q3\": \"We understand the motivation behind this question, but the specific memory requirements depend on implementation, including the use of compression and caching techniques, which are engineering-level questions, and beyond the scope of what we can present in the paper, which is focused on the benefits that a mixture of on- and off-policy learning with behavioral cloning provides with respect to learning and forgetting. Notably, the buffer can almost certainly be compressed considerably given the commonalities between experiences. What memory requirements are unavoidable can leverage hard drive storage, with minimal RAM needed.\", \"re_figure_5\": \"The collection of results in the other figures are designed to show how a newly introduced task affects learning on other tasks. In this experiment, we were specifically interested in one question: Does having a full replay buffer from past experiences on other tasks slow learning on a new task? 
In Figure 5, note that the final performance obtained on the probe task doesn\\u2019t depend on whether it comes after another task, implying that learning the task is largely independent of the other tasks, except for the initial positive transfer.\\n\\nIn the revision, we include a variation of the probe task experiment in Appendix B, in which we show that when using pure off-policy learning (instead of a mixture of on- and off-policy learning, as in CLEAR), the probe task does indeed decrease in performance when other tasks are learned before it. CLEAR avoids this failure mode by blending new experience with replay.\", \"re_numerical_comparison\": \"Absolutely, and we are very grateful for the suggestion. We have added tabulations of the cumulative sum of performance at the end of training for most experiments in Appendix C. We feel this measure captures both how quickly learning occurs and how much performance is maintained over the course of training on multiple tasks.\"}",
"{\"title\": \"Response to review, and notes on revision\", \"comment\": \"We thank the reviewer for a careful reading and for thoughtful comments. Our purposes in this work were to develop methods that are capable of mitigating catastrophic forgetting without using task identity or boundaries, while maintaining plasticity for learning from new experience. The reviewer has noted two excellent, related works, namely EWC and LwF, which we highlight in our literature review. However, both of these approaches require information about task identities and boundaries. Furthermore, as we demonstrate in Figures 6 and 14, even with task boundaries, CLEAR performs considerably better than EWC. We also show similar performance to Progress & Compress, which itself represents a state-of-the-art improvement upon EWC (and also requires task boundaries, unlike our method). In our revision, we have provided in Appendix C a more detailed quantitative comparison of CLEAR against baseline methods.\\n\\nAs the reviewer rightly points out, we are combining existing tools; the innovation is putting them together to make a highly effective, simple continual learning method. While in some respects, this application may be natural, it has evidently not been explored before, as evidenced by the fact that it outperforms state-of-the-art, highly-engineered systems.\\n\\nThe reviewer rightly calls attention to the stability-plasticity dilemma. To restate the ingredients of CLEAR, we are combining: (1) learning from on-policy experience, which yields plasticity and adaptiveness on new tasks, (2) off-policy replay to learn from past experience, and (3) behavioral cloning to maintain past behavior. Our experiments show that each of these ingredients is essential to our performance. \\n\\n1 - On-policy learning. In a new experiment within the revision (Appendix B), we demonstrate that removing the on-policy learning component from our algorithm (so that only off-policy replay and behavioral cloning are used) significantly damages the plasticity of the method. Our results show that when a new \\u2018probe\\u2019 task is introduced after learning several tasks, the probe task cannot be learned quickly without on-policy learning (note the difference between Figure 7 and Figure 5).\\n\\n2 - Off-policy learning. In Figures 3 and 11, we show that CLEAR is able to learn reasonably well from pure replay experience, demonstrating the importance of the off-policy learning component. (Clearly, behavioral cloning alone cannot increase performance on replay.)\\n\\n3 - Behavioral cloning. Finally, in Figures 2 and 10, we show that leaving out behavioral cloning significantly damages the ability of CLEAR to maintain past performance.\\n\\nThere is both literature on sequential task presentations meant to induce catastrophic forgetting and literature on replay. We bring these two worlds together, and show that the mixture of off- and on-policy learning, together with behavioral cloning, confers dramatic improvements for continual learning settings.\"}",
"{\"title\": \"Solid Paper, but Novelty over Experience Replay Needs Better Motivation\", \"review\": \"This paper proposes a particular variant of experience replay with behavior cloning as a method for continual learning. The approach achieves good performance while not requiring a task label. This paper makes the point that I definitely agree with that all of the approaches being considered should compare to experience replay and that in reality many of them rarely do better. However, I am not totally convinced when it comes to the value of the actual novel aspects of this paper.\\n\\nMuch of the empirical analysis of experience replay (i.e. the buffer size, the ratio of past and novel experiences, etc\\u2026) was not surprising or particular novel in my eyes. The idea of using behavior cloning is motivated fully through the lens of catastrophic forgetting and promoting stability and does not at all address achieving plasticity. This was interesting to me as the authors do mention the stability-plasticity dilemma, but a more theoretical analysis of why behavior cloning is somehow the right method among various choices to promote stability while not sacrificing or improving plasticity was definitely missing for me. Other options can certainly be considered as well if your aim is just to add stability to experience replay such a notion of weight importance for the past like in EwC (Kirkpatric et al., 2017) and many other papers or using knowledge distillation like LwF (Li and Hoeim, 2016). LwF in particular seems quite related. I wonder how LwF + experience replay compares to the approach proposed here. In general the discourse could become a lot strong in my eyes if it really considered various alternatives and explained why behavior cloning provides theoretical value. \\n\\nOverall, behavior cloning seems to help a little bit based on the experiments provided, but this finding is very likely indicative of the particular problem setting and seemingly not really a game changer. In the paper, they explore settings with fairly prolonged periods of training in each RL domain one at a time. If the problem was to become more non-stationary with more frequent switching (i.e. more in line with the motivation of lifelong learning), I would imagine that increasing stability is not necessarily a good thing and may slow down future learning.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"review\", \"review\": \"The paper proposes a novel trial to alleviate the catastrophic forgetting for continual learning which is kind a mixture model of on and off-policy. The core concept of the method is utilizing experience replay buffer for all past events with new experience. They mainly worked on their method in the setting of reinforcement learning. In the experiments, they show that the model successfully mitigate the catastrophic forgetting with this behavioral cloning, and has the performance comparable to recent continual learning approaches.\\n\\nThe paper is easy to follow, and the methodology is quite intuitive and straight forward. In this paper, I have several questions.\\n\\nQ1. I wonder the reason that every tasks are trained cyclically in sequence. And is there any trial to learn each task just once and observe the catastrophic forgetting of them when they have to detain the learned knowledge in a long time without training them again, as does most of visual domain experiments of the other continual learning research.\\n\\nQ2. In figure 5, I wonder why the natlab_varying_map_ramdomize(probe task) can perform well even they didn\\u2019t learn yet. The score of brown line increases nearly 60~70% of final score(after trained) during training the first task. Because the tasks are deeply correlated? or it is just common property of probe task?\\n\\nQ3. Using reservoir(buffer) to prevent catastrophic forgetting is natural and reasonable. Is there some of quantitative comparison in the sense of memory requirement and runtime? I feel that 5 or 50 million experiences at each task are huge enough to memorize and manage.\\n\\nAdditionally, in the experiment of figure 5, I think it could be much clear with a verification that the probe task is semantically independent (no interference) over all the other tasks. \\n\\nAlso, it is quite hard to compare the performance of the models just with plots. I expect that it could be much better to show some of quantitative results(as number).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reduction in Catastrophic Forgetting by Augmented Experience Replay\", \"review\": \"The authors propose an approach to augment experience replay buffers with properties that can alleviate issues with catastrophic forgetting. The buffers are augmented by storing both new and historical experiences, along with the desired historical policy & value distribution. The AC learning now couples two additional losses that ensures the new policy does not drift away from old actor distribution (via KL) and new value does not drift away from old critic distribution (via L2 loss).\\n\\nThe authors provided clear experimental evidence that shows how an RL agent that does not use CLEAR will observe catastrophic when we sequentially train different tasks (and it is not due to destructive interference using the simultaneous and separate training/evaluation experiments). Author also showed how different replay make ups can change the result of CLEAR (and it's a matter of empirical tuning).\\n\\nThe formulation of CLEAR also is simple while delivering interesting results. It would have been nice to see how this is used in a practical setting as all these are synthetic environments / tasks. The discussion on relationship with biological mechanism also seems unnecessary as it's unclear whether the mechanism proposed is actually what's in the CLS.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkghN205KQ | Search-Guided, Lightly-supervised Training of Structured Prediction Energy Networks | [
"Amirmohammad Rooshenas",
"Dongxu Zhang",
"Gopal Sharma",
"Andrew McCallum"
] | In structured output prediction tasks, labeling ground-truth training output is often expensive. However, for many tasks, even when the true output is unknown, we can evaluate predictions using a scalar reward function, which may be easily assembled from human knowledge or non-differentiable pipelines. But searching through the entire output space to find the best output with respect to this reward function is typically intractable. In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs), which provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction. In particular, this truncated randomized search in the reward function yields previously unknown local improvements, providing effective supervision to SPENs, avoiding their traditional need for labeled training data. | [
"structured prediction energy networks",
"indirect supervision",
"search-guided training",
"reward functions"
] | https://openreview.net/pdf?id=SkghN205KQ | https://openreview.net/forum?id=SkghN205KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HyxKHNk8xN",
"SkxM8XSfRm",
"Hygke-HfAX",
"BJxCJPkzCQ",
"rkePgshh3Q",
"HyeoPm4t2Q",
"rkxbqGkLh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545102401287,
1542767434293,
1542766822805,
1542743781839,
1541356271452,
1541124963248,
1540907657083
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1487/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1487/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1487/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1487/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes search-guided training for structured prediction energy networks (SPENs).\\n\\nThe reviewers found some interest in this approach, though were somewhat underwhelmed by the experimental comparison and the details provided about the method.\\n\\nR1 was positive and recommends acceptance; R2 and R3 thought the paper was on the incremental side and recommend rejection. Given the space restriction to this year's conference, we have to reject some borderline papers. The AC thus recommends the authors to take the reviewers comments in consideration for a \\\"revise and resubmit\\\".\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Incremental improvement over rank-based training of SPENs\"}",
"{\"title\": \"Discussion on reward value of the predicted outputs\", \"comment\": \"Thank you for your comments.\\n\\\"The main detail that I didn\\u2019t notice anywhere was a sentence or two describing the random search procedure used - adding this would further clarify your approach.\\\"\\n\\nPlease see the response to Reviewer3. We are also going to add that clarification to the revised version. \\n\\n\\\"I think it would have been interesting to see how the model performs in a semi-supervised task but perhaps this is better suited for future work.\\\"\\n\\nYes, that is interesting future work. \\n\\n\\\"Discussion on average reward value.\\\"\\nThe reward function of the citation field extraction task is constructed based on human knowledge, so it is noisy and incomprehensive; thus there is no guarantee that the structured output with the highest reward value is the best solution. As Table 1 indicates, the reward values for the iterative beam search is better than the reward values of both R-SPEN and SG-SPEN training methods, showing that R-SPEN and SG-SPEN training help SPENs to generalize the reward function using the unlabeled data.\"}",
"{\"title\": \"RL-based structured prediction vs SG-SPEN\", \"comment\": \"Thank you for your comments.\\nThere exists a body of works for using reward functions to train structured prediction models with reward function defines as task-loss[1,2,3], in which they suppose that the access to ground-truth labels to compute the task loss, pretraining the policy network, or training the critic. These approaches benefit from mixing strong supervision with the supervision from the reward function (task-loss), which is not comparable to our setting. \\nIn general, the main advantage of SPENs over RL-based training of structured prediction tasks (such as [4]) relies on the joint inference provided by SPEN. This joint inference relinquishes the need for reward shaping when we don't have partial rewards for incomplete structured outputs, which is a common problem in RL-based training. \\nMoreover, when the action space is very large and the reward function includes plateaus, training policy networks without pretraining with supervised data is very difficult. \\n[5] addresses the issue of sparse rewards by learning a decomposition of the reward signal, however, they still assume access to reference policy pretrained on supervised data for the structured prediction problems. In [5], the reward function is also the task-loss.\\nThe SG-SPEN addresses these problems differently, first it effectively trains SPENs that provides joint-inference, thus it does not require partial rewards. Second, the randomized search in reward function can easily avoid the plateaus in the reward function, which is essential for learning at the early stages. \\nWe believe that our policy gradients baselines are a strong representative of the reinforcement learning algorithms for structured prediction problems without any assumption about the ground-truth labels. \\n\\n\\\"the experiments on multi-label classification do not provide any comparison with SoTA methods while the two other use-cases provide some comparisons.\\\" \\n\\nDeep value network (Gygli et al, 2017), which we compared against, is the SOTA algorithm for multi-label classification. However, the reward function in the multi-label classification is an oracle that has access to the ground-truth label, so the light supervision from the reward function has no merits over the methods that benefit from strong supervision, while also has to explore the reward function for a better structured output and uses the capacity of neural network for learning the representation of these intermediate structured outputs.\\n\\n\\\"Moreover, as far as I understand, the different use-cases could be fully supervised, and different reward functions could be defined. So investigating more deeply the consequences of the nature of the supervision/reward on these use-cases could be interesting and strengthen the paper. \\\"\\n\\nThe message of the paper is not to use light supervision as an alternative to using ground-truth labels, but we are assuming that the expensive-to-collect ground-truth labels are not provided. \\nFor the task of multi-label classification, we assume an oracle that has access to ground-truth labels to measure how much we can learn when relying on search to explore the reward function. 
\\n\\n\\\"Note that in Section 3, the reward function R is never properly defined which would be nice.\\\" \\nWe are going to add that to the revised version.\\n\\n\\\"The fact that it is based on a margin could be discussed a little bit more since the effect of the margin is not clear in the paper (the value of alpha).\\\"\\nPlease see our response to Reviewer3.\\n\\n\\n[1]. Norouzi, M., Bengio, S., Jaitly, N., Schuster, M., Wu, Y., and Schuurmans, D., 2016. Reward augmented maximum likelihood for neural structured prediction. NIPS'16.\\n[2] Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., Courville, A. and Bengio, Y., 2017. An actor-critic algorithm for sequence prediction. ICLR'17.\\n[3] Ranzato, M.A., Chopra, S., Auli, M. and Zaremba, W., 2016. Sequence level training with recurrent neural networks. ICLR'16.\\n[4] Maes, F., Denoyer, L., and Gallinari, P., 2009. Structured prediction with reinforcement learning, Machine Learning. \\n[5] Daum\\u00e9 III, H., Langford, J., and Sharaf, A., 2018. Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback. ICLR'18.\"}",
"{\"title\": \"SG-SPEN improves R-SPEN by addressing a fundamental problem in R-SPEN\", \"comment\": \"Thank you for your comments.\\nWe should first clarify that the R-SPEN training algorithm collects samples of structured outputs by performing gradient-descent inference over the energy function of SPEN not over the reward function as the reward function is not differentiable in most cases.\\nThe major contribution of our work is to improve R-SPEN regarding the selection of the pairs and provide new violations (optimization constraints) that better guide the test-time inference of SPEN to find the structured output with high reward value. \\n\\nR-SPEN attains F1 score of 40.1 on the multi-label classification with the same reward function used by SG-SPEN (SG-SPEN achieves F1 score of 44.0). We observe that R-SPEN has difficulty finding violations as training progresses. This is attributable to the fact that R-SPEN only explores the regions of the reward function based on the samples from the gradient-descent trajectory on the energy function, so if the gradient-descent inference is confined within local regions, R-SPEN cannot generate informative constraints. In contrast, SG-SPEN directly searches for violations in order to better learn from the reward function (violation is all you need [1]). We believe that this is an important improvement over R-SPEN training algorithm, which makes it possible to train SPENs using a variety of reward functions where R-SPENs may not be capable of learning (as shown in our shape parsing task).\\n\\nOur vanilla randomized search uniformly selects among the possible states of each output variable and output variables are ordered randomly. However, we can inject domain-knowledge to better explore the reward function, which is the target of our future work. In the search procedure, \\\\delta is a task-specific margin. For example, if your reward function is based on Chamfer distance (we used intersection over union, not Chamfer distance) for comparing two objects, the reward value is very small at the early stages of training, so setting \\\\delta = 0.1 basically requires a significant search budget to explore the reward function. \\nFor the shape parsing task, using the domain knowledge about the task, we can conclude that the reward function should have huge plateau (inconsistent parsing that evaluates to black images), so even small improvement that results in a valid parsing is preferred, thus selecting very small \\\\delta can accelerate the training at the early stages. However, in all of our experiments, we used fixed delta=0.1 for simplicity but we can dynamically select \\\\delta based on our search budget. \\n\\\\alpha has a similar effect but between the energy value and reward value as we need to magnify the differences between two reward values, so we can better rank the values on the energy function with respect to the values on the reward function.\\n\\n[1] Huang, L., Fayong, S., and Guo, Y., 2012, June. Structured perceptron with inexact search. NAACL'12.\"}",
"{\"title\": \"Incremental improvement over rank-based training of SPENs\", \"review\": \"# Summary\\n\\nThis paper proposes search-guided training for structured prediction energy networks (SPENs). SPENs are structured predictors that learn an input-dependent, non-linear energy function that scores candidate output structures. Many methods have recently been proposed for training SPENs. One in particular, rank-based training, has the advantage of supporting training from weak supervision in the form of a reward function. By performing gradient descent on this reward function, rank-based training generates output, improved output pairs that become margin-based constraints on the learning objective. Each constraint specifies a pair of outputs for a given input, and penalizes the current weights if the improved output is not scored higher than the other output by a certain margin.\\n\\nThis paper addresses a limitation of rank-based training, that this gradient descent procedure for finding output pairs may get stuck in plateaus. In search-guided training,\\u00a0truncated randomized searches are performed starting at an initial output to find an improved output. The paper says that the random search procedure is informed by the reward function, but it is not specific. Are steps in the search space performed uniformly at random? The paper only says that the returned improved example must score higher in the reward function by some margin \\\\delta that is \\\"based on the features of the reward function (range, plateaus, jumps)\\\" but it is not discussed how to identify these features of the reward function or how to set \\\\delta accordingly.\\n\\nExperiments are conducted multi-label classification, citation field extraction, and shape parsing. On multi-label classification search-guided SPENs (SG-SPENs) outperform structural SVM training of SPENs. Why is it not compared with rank-based training (R-SPENs)? On citation field extraction, SG-SPENs improves accuracy by two percentage points over R-SPENs. On shape parsing, R-SPENs fail because it cannot produce valid parsing programs as improved outputs. SG-SPENs perform well relative to other methods like iterative beam search and neural shape parsing.\\n\\n# Strengths\\n\\nSG-SPENs are better across the experiments than other SPEN training methods, though I do not know why they are not compared against R-SPENs on multi-label classification.\\n\\n# Weaknesses\\n\\nThe work seems incremental without any major new insights beyond the work on R-SPENs. The idea seems to reduce to doing random search instead of gradient descent on a reward function in order to produce output pairs.\\n\\nAs mentioned above, the paper is also light on details about how the experiments were conducted, such as setting \\\\delta and creating the space of operators to use when searching for improved outputs.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Useful extension of prior weakly-supervised SPEN work\", \"review\": \"Summary:\\nThis paper discusses a method to train SPENs when strong supervision is not provided. Instead, training feedback comes in the form of a scalar-valued scoring function for a provided input as well as a prediction. The approach taken here is similar to that described in [1] in that score-violating pairs are found using some procedure, which are then used to update the parameters of the model. The primary difference here is that a random search procedure is used to find score violations rather than the test-time inference procedure; this is justified by noting that the gradient descent procedure may become stuck in flat areas of the optimization surface and thus not encounter high-reward areas. Experiments are run on multilabel classification, citation field extraction, and shape parsing tasks to demonstrate the validity of this approach.\", \"comments\": \"Overall, this paper is very nicely written and presents its ideas very clearly. The base approach is the same as presented in [1], but the changes to the learning procedure are adequately justified (and the experiments corroborate this). Furthermore, everything is explained in sufficient detail to be easy to follow. The main detail that I didn\\u2019t notice anywhere was a sentence or two describing the random search procedure used - adding this would further clarify your approach.\\n\\nThe tasks chosen to evaluate these methods are diverse and indicate that this approach is broadly useful in situations where strong supervision may be hard to come by. I think it would have been interesting to see how the model performs in a semi-supervised task (i.e. where some small fraction of the data has labels), but perhaps this is better suited for future work. The one question I have regarding your results is the following: you include the average reward for the citation-field extraction task in your results table, but don\\u2019t seem to comment on this anywhere. Are there any conclusions that you think these results imply?\\n\\nThis paper is an excellent addition to the field of structured prediction, and thus I think it should be accepted.\\n\\n[1] Rooshenas, A., Kamath, A., & McCallum, A. (2018). Training Structured Prediction Energy Networks with Indirect Supervision. NAACL HLT 2018\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper proposes to use a reward function to guide the learning of energy-based models for structured prediction. The idea is to update the energy function based on a random search algorithm guided by a reward function. At each iteration, the SPEN proposes a solution, then a better one is found by the search algorithm, and the energy function is updated accordingly. Experiments are made on three use-cases and show that this method is able to outperform other training algorithms for SPENs.\\n\\nIn term of model, the proposed algorithm is interesting since it can allow us to learn from weakly supervised datasets (i.e a reward function is enough). Note that in Section 3, the reward function R is never properly defined which would be nice. The algorithm is quite simple and well presented in the paper. The fact that it is based on a margin could be discussed a little bit more since the effect of the margin is not clear in the paper (the value of alpha). Moreover, the structured prediction problem has already been handled as the maximization of a reward function using RL techniques (see works by H. Daume, and works by F. Maes) and the interest of this approach w.r.t these papers is not clear to me. A clear discussion on that point (and experimental comparison) would be nice. \\n\\nThe experimental section could be improved. First, the experiments on multi-label classification do not provide any comparison with SoTA methods while the two other use-cases provide some comparisons. Moreover, as far as I understand, the different use-cases could be fully supervised, and different reward functions could be defined. So investigating more deeply the consequences of the nature of the supervision/reward on these use-cases could be interesting and strengthen the paper. Moreover, training sets are very small and it is difficult to know if this method can work on large-scale problems.\", \"pro\": [\"interesting algorithm for structured prediction (base on reward)\", \"interesting results on some (toy) use-cases\"], \"cons\": [\"Lack of discussion on the positive/negative point of the approach w.r.t SoTA, and on the influence of the reward function\", \"Lack of experimental comparisons\", \"Only toy (but complicated) problems with limited training sets\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyGhN2A5tm | Multi-Agent Dual Learning | [
"Yiren Wang",
"Yingce Xia",
"Tianyu He",
"Fei Tian",
"Tao Qin",
"ChengXiang Zhai",
"Tie-Yan Liu"
] | Dual learning has attracted much attention in machine learning, computer vision and natural language processing communities. The core idea of dual learning is to leverage the duality between the primal task (mapping from domain X to domain Y) and dual task (mapping from domain Y to X) to boost the performances of both tasks. Existing dual learning framework forms a system with two agents (one primal model and one dual model) to utilize such duality. In this paper, we extend this framework by introducing multiple primal and dual models, and propose the multi-agent dual learning framework. Experiments on neural machine translation and image translation tasks demonstrate the effectiveness of the new framework.
In particular, we set a new record on IWSLT 2014 German-to-English translation with a 35.44 BLEU score, achieve a 31.03 BLEU score on WMT 2014 English-to-German translation with over 2.6 BLEU improvement over the strong Transformer baseline, and set a new record of 49.61 BLEU score on the recent WMT 2018 English-to-German translation. | [
"Dual Learning",
"Machine Learning",
"Neural Machine Translation"
] | https://openreview.net/pdf?id=HyGhN2A5tm | https://openreview.net/forum?id=HyGhN2A5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkeV9DKgl4",
"HJg-jI7a1N",
"ryxsHLHn0X",
"BklUtMfnCX",
"ryePF6wKRm",
"BJgSyP6fC7",
"HklU6LTzRm",
"rJxe6Bpf07",
"HJgc0z6z07",
"H1l6suNjhX",
"HklXxIdqn7",
"HkeVjIX9hX",
"BJxU4lAYh7",
"Hkxwf3VKnm",
"Hkxo4B28n7",
"Hylw94tIh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544750987573,
1544529560663,
1543423554850,
1543410302013,
1543236991102,
1542801117198,
1542801085699,
1542800823898,
1542800082284,
1541257381089,
1541207531267,
1541187227772,
1541165102253,
1541127183216,
1540961587183,
1540949135396
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1486/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1486/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1486/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1486/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1486/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"A paper that studies two tasks: machine translation and image translation. The authors propose a new multi-agent dual learning technique that takes advantage of the symmetry of the problem. The empirical gains over a competitive baseline are quite solid. The reviewers consistently liked the paper but have in some cases fairly low confidence in their assessment.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Further response to AnonReviewer1\", \"comment\": \"Dear AnonReviewer1,\\n\\nBefore the final decision concludes, do you have further questions regarding our rebuttal and updated paper? Our paper revision includes reorganization of the introduction to our framework (Section 3.1), the additional experiments on WMT18 English->German translation challenge (Section 3.4), the additional study on diversity of agents (Appendix A), and quantitative evaluation on image-to-image translations (Section 4.3 and 4.4) following your suggestions.\\n\\nIn particular, we would like to highlight that: \\n(1) The calibration of BLEU score: We would like to point out that our improvement over the previous state-of-the-art baselines is substantial. For example, on the WMT2014 En->De translation task, the performance of the transformer baseline is 28.4 BLEU score [1] (our baseline matches this performance). The improvement over this baseline is 0.61 in [2], 0.8 in [3] (1.3 BLEU improvement over the re-implemented 27.9 baseline in [3]) and 0.9 in [4], while ours is 1.65 BLEU score. \\n(2) The baselines: As we explained in the previous response, we are using the state-of-the-art transformer as our backbone model, and comparing against all the relevant algorithms including KD, BT and the traditional 2-agent dual learning (Dual-1). Moreover, we also show on WMT18 En->De challenge that our method can further improve the state-of-the-art model trained with extensive resources (Section 3.4 of our updated paper).\\n\\nWe hope our rebuttal and paper revision could address your concerns. We welcome further discussion and are willing to answer any further questions.\\n\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] He, Tianyu, et al. \\\"Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation\\\". Advances in Neural Information Processing Systems. 2018. \\n[3] Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. \\\"Self-Attention with Relative Position Representations.\\\" In Proc. of NAACL, 2018.\\n[4] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\"}",
"{\"title\": \"Calibration of score\", \"comment\": \"Dear Authors,\\n\\nThank you for pointing out the extensive relevant literature. I had indeed underestimated the improvement in BLUE score and will update my score to a 6.\"}",
"{\"title\": \"Further response to AnonReviewer3\", \"comment\": \"Dear AnonReviewer3:\\n\\nThanks for your response to our rebuttal. However, it is unclear to us why you believe that the general contribution of the paper remains too small for ICLR because of the subjectivity of your criticism. What does \\\"too small\\\" mean exactly? \\n\\nOur best interpretation of your concern is \\\"the increase in performance is minimal and the increased computational cost/complexity substantial\\\". While this is a legitimate concern, we do not believe the concern is sufficiently substantial to justify a rating of the paper below the acceptance threshold for the following reasons: \\n\\n1. \\\"The increase in performance is minimal \\\": \\nWhile the performance improvement may appear to be small, it is known that the improvement of BLEU score is difficult, and the magnitude of improvement from our methods is better than or at least comparable to the reported improvement on this task by recent papers published in major venues such as NeurIPS. For example, on the WMT2014 En->De translation task, the performance of the transformer baseline is 28.4 BLEU score [1] (our baseline matches this performance). The improvement over this baseline is 0.61 in [2], 0.8 in [3] (1.3 BLEU improvement over the re-implemented 27.9 baseline in [3]) and 0.9 in [4], while ours is 1.65 BLEU score. We perform paired bootstrap sampling [5] for significance test using the script in Moses [6]. Our improvement over the baselines are statistically significant with p < 0.01 across all machine translation tasks.\\nMoreover, as we pointed out in the previous response, our method has achieved the best performance so far on IWSLT 2014 De->En and WMT 2018 En->De. Our main point here is that our experimental results have provided solid evidence that the proposed new method has clearly advanced the state of the art on multiple tasks. \\n\\n2. \\\"The increased computational cost/complexity substantial\\\": \\nAs we already explained in our previous response, the computational complexity can be further reduced (there are potentially other ways to further improve efficiency), so this is not an *inherent* deficiency of the proposed new approach, but rather interesting new research questions that can be further investigated in the future. Thus in this sense, our work has also opened up some new interesting research directions. \\n\\nWe welcome further discussion and are willing to answer any further questions. \\n\\n\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] He, Tianyu, et al. \\\"Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation\\\". Advances in Neural Information Processing Systems. 2018. \\n[3] Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. \\\"Self-Attention with Relative Position Representations.\\\" In Proc. of NAACL, 2018.\\n[4] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\\n[5] Koehn, Philipp. \\\"Statistical significance tests for machine translation evaluation.\\\" Proceedings of the 2004 conference on empirical methods in natural language processing. 2004.\\n[6] https://github.com/moses-smt/mosesdecoder/blob/master/scripts/analysis/bootstrap-hypothesis-difference-significance.pl\"}",
"{\"title\": \"Paper revision and summary of contributions\", \"comment\": \"\", \"dear_reviewers\": \"Thanks for the valuable comments and discussion. \\n\\nOur paper revision seeks to clarify the introduction to our framework and strengthen the experiment results, which includes: (1) reorganization and clarification in Section 3.1; (2) the additional study on diversity of agents (Appendix A); (3) the additional experiment results on WMT18 English->German translation challenge (Section 3.4); and (4) quantitative evaluation on image-to-image translations (Section 4.3 - 4.4).\\n\\nIn particular, we would like to highlight our contribution in this work. We are the first to incorporate multiple agents into the dual learning framework, extending traditional dual learning to a much more general concept. The multi-agent dual learning framework, which is generally applicable to many different tasks, has significantly pushed the frontier towards dual learning. In particular, we show how the proposed general framework can be adapted to the machine translation and image translation tasks. The method is non-trivial yet very easy to apply and has been proved to be very powerful across many different translation tasks with our extensive empirical studies:\\n \\n1) Our proposed framework has achieved broad success: we have evaluated our method on five image-to-image translation tasks, and six machine translation tasks across different language pairs, different dataset scale (small dataset like IWSLT and large dataset like WMT) and different machine learning setting (supervised and unsupervised). Our method demonstrates consistent and substantial improvements over the standard baseline and traditional (two-agent) dual learning method. \\n \\n2) The multi-agent dual learning framework also pushes forward the state-of-the-art performances. On IWSLT 2014 German->English translation, we set a new record of a 35.44 BLEU score. On the recent WMT 2018 English ->German translation, we achieve the state-of-the-art performance of a 49.61 BLEU score, outperforming the challenge champion by over 1.3 BLEU score.\\n\\nWe believe we have made decent contributions in this paper based on all these above. We welcome further discussion and are willing to answer any further questions. \\n\\nThanks,\\nThe Authors\"}",
"{\"title\": \"Response to AnonReviewer3 [1/2]\", \"comment\": \"Summary: our response includes (1) Clarification on Equation 8 and its descriptions; (2) Explanations on computational cost; (3) Clarification on contribution and (4) Discussion on controlling complexity.\\n \\n** Equation 8 and its descriptions **\\nWe apologize for the confusions with equation 8. We have reorganized Section 3.1 in the update paper. To answer your questions:\\n1. Space Y: Space \\\\mathcal{Y} refers to the collection of all possible sentences of the Y domain language, instead of just the dataset (denoted by D_y, where we have D_y \\\\in \\\\mathcal{Y}). That's why it could be exponentially large.\\n2. Offline sampling: We do offline sampling by sampling all the x_hat and y_hat with f_i and g_i respectively in advance (for i>=1). We reorganized Section 3.1 and Algorithm 1 to more clearly explain how to estimate the gradients and do the offline sampling. \\n \\n** Explanations on Computational Cost **\\nThe computational cost refers to GPU time for training. Although pre-training can be parallelized, the total GPU time will not be reduced. For example, on WMT14 En<->De task, it takes 40 GPU days (5 days on 8 GPU) to train one model (agent). Pre-training more agents takes more GPU time with either more GPUs to train in parallel or longer training time. This is what we mean by \\\"increased computational cost\\\" with more agents. \\nHowever, as is shown from our experiments, we can obtain significant improvements over the strong baseline models with multiple but not too much agents (e.g. with n=3, which brings tolerable increase in computational cost yet substantial gain). Note that we do not increase the computational cost during inference.\\n \\n** Contribution & Improvement **\\nWe propose a new multi-agent dual learning framework that leverages more than one primal models and dual models in the learning system. Our framework has demonstrated its effectiveness on multiple machine translation and image translation tasks:\\n1. We work on six NMT tasks to evaluate our algorithm (see Section 3). Our improvement over the strong baselines with the state-of-the-art transformer model is not minimal. As can be seen from the recent literature in NMT [2][3], transformer is a powerful and robust model, and improving BLEU by 1 point over such strong baseline is generally considered as a non-trivial progress. Our method yields consistent and substantial improvement across all the benchmark datasets.\\n2. Our method is capable of further improving the state-of-the-art model. We work on WMT18 English-to-German translation tasks, and achieve a 49.61 BLEU score, which outperforms the champion system by 1.31 point and sets a new record on this task (see Table 4 in Section 3.4 of our updated paper).\\n3. Our method also works for unsupervised image generation. We achieve consistent improvements over CycleGAN quantitatively and qualitatively (See Section 4).\"}",
"{\"title\": \"Response to AnonReviewer3 [2/2]\", \"comment\": \"** Controlling Complexity **\\nIn this paper, we focus on demonstrating the effectiveness of our proposed method, while the issue of efficiency is not yet well explored. We agree with you that training efficiency is indeed also a very important issue. Setting a reasonable number of agents as we did in the paper is one way to control the complexity within a tolerable level while obtaining substantial gain. \\n\\nAccording to your comments, we further present a simple yet effective strategy to minimize the training complexity without too much loss in performance -- by generating different agents from a single run with warm restart. Specifically, we work with the following two settings:\\n\\n1. Warm restart by learning rate schedule. \\n(a) Setting: We employ the warm restart strategy in [1], where the warm restart is emulated by increasing the learning rate. Specifically, learning rate starts from an initial value L, then decays with a cosine annealing. Once a cyclic iteration is reached, the learning rate is increased back to the initial value and then followed with cosine decay. At the end of each cycle where the learning rate is of the minimal value, the model is approximately a local optimal. Thus, we can use multiple different such local optima as our agents.\\n (b) Pre-training Cost: Training one agent on IWSLT takes 3 days on 1 GPU (i.e. 3 GPU days). Thus, for Dual-5 model which involves 4 additional pairs of agents, the total pre-training cost in our original way through independent runs is 4 (pairs) * 2 (directions: De->En and En->De) * 3, in total 24 GPU days. With the new learning rate schedule, we can obtain the 4 pairs of agents with a single run which takes 2 (directions: De->En and En->De) * 3, in total 6 GPU days. Such a method is three times more efficient than the original way.\\n(c) Performance: With this strategy, we are able to achieve 35.07 and 29.40 BLEU with Dual-5 on IWSLT De->En and En->De respectively. Although not as good as our original method with higher complexity (e.g., 35.44 BLEU in De->En and 29.52 BLEU in En->De), such light-weighted version of our method is still able to outperform the baselines with large margin for over 1 BLEU score with minimal increase in training cost.\\n\\n2. Warm restart with different random seeds and training subsets. \\n(a) Setting: We first train a model to a stage that the model is not converged but has relatively good performance. We then use this model as warm start, and train different agents with different iteration over the dataset and different subsets. This strategy intuitively works better with larger dataset. We present results in WMT En<->Fr translation. \\n(b) Pre-training Cost: Training one agent on WMT En<->Fr dataset takes 7 days on 8 GPUs, in total 56 GPU days. For Dual-3 with 2 additional pairs of agents, the total pre-training cost is 2 * 2 * 56 = 224 GPU days. With the above strategy, we managed to decrease the cost into 2 * 56 + 2 * 8 = 128 GPU days.\\n(c) Performance: We are able to achieve 43.87 BLEU and 40.14 with Dual-3 on WMT En-Fr and Fr-En respectively, which improves 1.37 and 1.74 points over the baselines (42.5 for En->Fr and 38.4 for Fr->En).\\n\\nWith the above two strategies, we demonstrate that our framework is also capable of improving performance with large margin while introducing minimal computational cost. 
We will definitely further study the best strategy to minimize the training complexity while maintaining the improvements in our future work.\\n\\n** Textual Notes **\\nThanks for pointing it out. We edit the writing in our updated paper. \\nAlthough with the same term, the \\\"multi-agent\\\" in this paper has no relationship with multi-agent reinforcement learning. To avoid further confusion in the discussion period, currently we decide not to change the paper title during rebuttal.\\n\\nWe hope the above explanations could address your concerns. Please kindly check our updated paper with clarification and new experimental results.\\nThanks for your time and valuable feedbacks.\\n\\n[1] Loshchilov, Ilya, and Frank Hutter. \\\"Sgdr: Stochastic gradient descent with warm restarts.\\\" In Proc. of ICLR, 2017.\\n[2] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proc. of NAACL, 2018.\\n[3] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\"}",
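For concreteness, the SGDR schedule from [1] that the first setting relies on can be written as a small helper. This is a generic sketch of cosine annealing with warm restarts, with illustrative learning-rate bounds and cycle lengths, not the authors' exact training configuration.

```python
import math

def sgdr_lr(step, cycle_len, lr_max=1e-3, lr_min=1e-5, t_mult=2):
    """Cosine-annealed learning rate with warm restarts (Loshchilov & Hutter, 2017).

    step: global training step; cycle_len: length of the first cycle;
    t_mult: growth factor of successive cycles. An agent can be snapshotted
    at the end of each cycle, where the model sits near a local optimum.
    """
    t_cur, t_i = step, cycle_len
    while t_cur >= t_i:  # locate the position inside the current cycle
        t_cur -= t_i
        t_i *= t_mult
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t_cur / t_i))
```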
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your review and valuable comments!\", \"summary\": \"our response includes: (1) Clarification on language translation baselines; (2) Discussion on image translation evaluation; (3) Reference and clarification.\\n \\n** Language Translation Baselines **\\n1. For the baseline models reported: \\n1.1) We use the transformer model with \\\"transformer_big\\\" setting [1], which is a strong baseline that outperforms almost all previously popular NMT models based on CNN [2] and LSTM [3]. Transformer is the state-of-the-art NMT architecture. Our numbers of the baseline transformer model match the results reported in [1].\\n1.2) In addition to the standard baseline models, we also compare our method against all the relevant algorithms including knowledge distillation (KD) and back translation (BT).\\n1.3) As can be seen in many well-known and recent NMT works ([4], [5]), it is a common practice to use transformer as the robust baseline model. Furthermore, it is also shown from these works that it is hard to improve over the transformer baseline, and 0.5-1 BLEU score improvement is already considered substantial.\\n\\n2. We further add newly obtained results on the WMT18 challenge. We compare our method with both the champion translation system MS-Marian (WMT18 En->De challenge champion). Our method achieves the state-of-the-art result on this task. \\n---------------------------------------------------------------------------\\n WMT En->De 2016 2017 2018\\n---------------------------------------------------------------------------\\nMS-Marian (ensemble) 39.6 31.9 48.3\\nOurs (single) 40.68 33.47 48.89\\nOurs (ensemble) 41.23 34.01 49.61\\n---------------------------------------------------------------------------\\nPlease refer to Section 3.4 \\\"Study on generality of the algorithm\\\" for more details and Table 4 for full results in our updated paper.\\n \\n** Image Translation Evaluation **\\nFor image-to-image translation tasks, we further add two quantitative measures: (1) We use the Fr\\u00e9chet Inception Distance (FID) [6], which measures the distance between generated images and real images to evaluate the painting to photos translation. (2) We use \\\"FCN-score\\\" evaluation on the cityscape dataset following [7]. The results are reported in Table 6 and Table 7 respectively. Multi-agent dual learning framework can achieve better quantitative results than the baselines.\\n\\nWe are not sure what you meant by \\u201cHow does their ensemble method compare to just their single-agent dual method?\\u201d. The standard CycleGAN model (baseline) already leverages both primal and dual mappings, which is equivalent to our \\u201cDual-1\\u201d model in NMT experiments, i.e., the dual method with only one pair of agents f_0 and g_0. Our model involves two additional pairs of agents (f_1 and g_1, f_2 and g_2) during training. 
Unlike ensemble learning, only one agent (f_0 for forward direction, or g_0 for backward direction) is used during inference.\\n\\n** Reference **\\nThanks for pointing a reference paper \\\"Multi-Column Deep Neural Networks for Image Classification\\\" (briefly, MCDNN) and we have added reference to it (Section 4).\\nAlthough MCDNN also uses multiple agents (i.e., several columns of deep neural networks), it differs from our model in two aspects: (1) Our work leverages the duality of a pair of dual tasks while this paper does not; (2) In an MCDNN framework, during the training phase, all the columns are updated by winner-take-all rule; and during inference, all columns work like an ensemble model through weighted average. In comparison, we only update one primal and one dual agent during training, and use one agent for inference.\\n\\n** Clarity **\\nThanks for pointing out that our original introduction to the names of baselines and models is not very clear. Please kindly refer to first paragraph in Section 3.3.\\n \\nYou may check our updated paper with clarification and new experimental results.\\nThanks for your time and feedbacks.\\n\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" In NIPS. 2017.\\n[2] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional Sequence to Sequence Learning. In Proc. of ICML, 2017.\\n[3] Wu, Yonghui, et al. \\\"Google's neural machine translation system: Bridging the gap between human and machine translation.\\\" arXiv preprint arXiv:1609.08144 (2016).\\n[4] Chen, Mia Xu, et al. \\\"The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation.\\\" In Proc. of the ACL, 2018.\\n[5] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proc. of NAACL, 2018.\\n[6] Heusel, Martin, et al. \\\"Gans trained by a two time-scale update rule converge to a local nash equilibrium.\\\" In NIPS, 2017.\\n[7] Isola, Phillip, et al. \\\"Image-to-image translation with conditional adversarial networks.\\\" In CVPR, 2017\"}",
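Since FID [6] recurs in several of these threads, it may help to spell out what it computes: the Fréchet distance between Gaussians fitted to Inception activations of real and generated images. The sketch below assumes the activation matrices are precomputed (rows = images); it is a generic implementation of the metric, not code from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(act_real, act_fake):
    """Frechet Inception Distance between two activation sets of shape (n, d)."""
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts from numerical noise
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```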
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your comments and suggestions!\", \"summary\": \"our response includes (1) Clarification on math equations; (2) Analysis on diversity of additional agents; (3) Quantitative analysis for image translation.\\n\\n** Clarification on Mathematics in Section 3.1 **\\nWe apologize for the confusions in Section 3.1. We have reorganized this section, as shown in our updated paper. For your questions:\\n1. About equation 8, indeed there is a typo and should be a \\\"partial\\\" sign in front of the \\\"\\\\delta\\\" function in the numerator. Thanks for pointing this out.\\n2. The details of derivative estimation can be found in Section 3.1 (especially equation 9 and 10 in our updated version.\\n \\n** Study on diversity of agents **\\n1. You are right. We obtained distinct \\\"agents\\\" f_i and g_i through multiple independent runs with different random seeds and different input orders of the training samples. As far as we know, there's no common quantitative metric to measure the diversity among models in NMT. But we agree with you that intuitively, more diversity among agents leads to greater improvements. \\n\\n2. Following your suggestions, we add a study on the diversity of agents (presented in Appendix A of the updated paper). We design three group of agents with different levels of diversity: (E1) Agents with the same network structure trained by independent runs, i.e., what we use in Section 3.3; (E2) Agents with different architectures and independent runs; (E3) Homogeneous agents of different iteration, i.e., the checkpoints obtained at different (but close) iterations from the same run. We evaluate the above three settings on IWSLT2014 De<->En dataset. The diversity of the above three settings would intuitively be (E2)>(E1)>(E3). We present full results in Figure 4 (Appendix A), where the BLEU scores with Dual-5 model are: \\n\\n--------------------------------------------------------\\n E1 E2 E3\\n--------------------------------------------------------\\nEn -> De 35.44 35.56 34.97\\nDe -> En 29.52 29.58 29.28\\n--------------------------------------------------------\\n\\nFrom the above results, we can see that diversity among agents indeed plays an important role in our method. There are, of course, many other ways to introduce more diversity, including using different optimization strategies, or training with different subsets as you suggested. All of these can potentially bring further improvements to our framework, yet are not the focus of this work. From the current studies, we show that our algorithm is able to achieve substantial improvement with a reasonable level of diversity. We leave more comprehensive studies on diversity to future work.\\n\\nPlease kindly refer to Appendix A for more detailed results.\\n\\n** Quantitative analysis for image translation **\\nThanks for your suggestions. We add two quantitative measures on image translation tasks: (1) We use the Fr\\u00e9chet Inception Distance (FID score) [1], which measures the distance between generated images and real images to evaluate the painting to photos translation. (2) We use \\\"FCN-score\\\" evaluation on the cityscape dataset following [2]. The results are reported in Table 6 and Table 7 respectively. 
Multi-agent dual learning framework can achieve better quantitative results than the baselines.\\n\\n** Term usage of \\\"multi-agents\\\" **\\nAlthough with the same term, the \\\"multi-agent\\\" or \\\"agent\\\" in this paper has no relationship with multi-agent reinforcement learning. You are right in that the term \\\"agent\\\" in our context refers to \\\"mapping\\\" or \\\"network\\\". To avoid further confusion in the discussion period, currently we decide not to change the term usage throughout the paper during rebuttal; instead, we will change the term after the acceptance/rejection decision.\\n\\nYou can check our updated paper with clarification and new experimental results.\\nThanks for your time and valuable feedbacks.\\n\\n[1] Heusel, Martin, et al. \\\"Gans trained by a two time-scale update rule converge to a local nash equilibrium.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] Isola, Phillip, et al. \\\"Image-to-image translation with conditional adversarial networks.\\\" In CVPR, 2017\"}",
"{\"title\": \"Straightforward Idea, pretty good results, some things should be clarified (potential issue with the maths).\", \"review\": \"Summary\\n\\nThe paper proposes to modify the \\\"Dual Learning\\\" approach to supervised (and unsupervised) translation problems by making use of additional pretrained mappings for both directions (i.e. primal and dual). These pre-trained mappings (\\\"agents\\\") generate targets from the primal to the dual domain, which need to be mapped back to the original input. It is shown that having >=1 additional agents improves training of the BLEU score in standard MT and unsupervised MT tasks. The method is also applied to unsupervised image-to-image \\\"translation\\\" tasks.\\n\\nPositives and Negatives\\n+1 Simple and straightforward method with pretty good results on language translation.\\n+2 Does not require additional computation during inference, unlike ensembling.\\n-1 The mathematics in section 3.1 is unclear and potentially flawed (more below).\\n-2 Diversity of additional \\\"agents\\\" not analyzed (more below).\\n-3 For image-to-image translation experiments, no quantitative analysis whatsoever is offered so the reader can't really conclude anything about the effect of the proposed method in this domain.\\n-4 Talking about \\\"agents\\\" and \\\"Multi-Agent\\\" is a somewhat confusing given the slightly different use of the same term in the reinforcement literature. Why not just \\\"mapping\\\" or \\\"network\\\"?\\n\\n-1: Potential Issues with the Maths.\\n\\nThe maths is not clear, in particular the gradient derivation in equation (8). Let's just consider the distortion objective on x (of course it also applies to y without loss of generality). At the very least we need another \\\"partial\\\" sign in front of the \\\"\\\\delta\\\" function in the numerator. But again, it's not super clear how the paper estimates this derivative. Intuitively the objective wants f_0 to generate samples which, when mapped back to the X domain, have high log-probability under G, but its samples cannot be differentiated in the case of discrete data. So is the REINFORCE estimator used or something? Not that the importance sampling matter is orthogonal. In the case of continuous data x, is the reparameterization trick used? This should at the very least be explained more clearly.\\n\\nNote that the importance sampling does not affect this issue.\\n\\n-2: Diversity of Agents.\\n\\nAs with ensembles, clearly it only helps to have multiple agents (N>2) if the additional agents are distinct from f_1 (again without loss of generality this applies to g as well). The paper proposes to use different random seeds and iterate over the dataset in a different order for distinct pretrained f_i. The paper should quantify that this leads to diverse \\\"agents\\\". I suppose the proof is in the pudding; as we have argued, multiple agents can only improve performance if they are distinct, and Figure 1 shows some improvement as the number of agents are increase (no error bars though). The biggest jump seems to come from N=1 -> N=2 (although N=4 -> N=5 does see a jump as well). Presumably if you get a more diverse pool of agents, that should improve things. Have you considered training different agents on different subsets of the data, or trying different learning algorithms/architectures to learn them? 
More experiments on the diversity would help make the paper more convincing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Applying ensembles to machine translation appears to result in good performance on language and image translation\", \"review\": \"The author's present a dual learning framework that, instead of using a single mapping for each mapping task between two respective domains, the authors learn multiple diverse mappings. These diverse mappings are learned before the two main mappings are trained and are kept constant during the training of the two main mappings. Though I am not familiar with BLEU scores and though I didn't grasp some of the details in 3.1, the algorithm yielded consistent improvement over the given baselines. The author's included many different experiments to show this.\\n\\nThe idea that multiple mappings will produce better results than a single mapping is reasonable given previous results on ensemble methods. \\n\\nFor the language translation results, were there any other state-of-the-art methods that the authors could compare against? It seems they are only comparing against their own implementations.\\n\\nObjectively saying that the author's method is better than CycleGAN is difficult. How does their ensemble method compare to just their single-agent dual method? Is there a noticeable difference there?\", \"minor_comments\": \"Dual-1 and Dual-5 are introduced without explanation.\\n\\nPerhaps I missed it, but I believe Dan Ciresan's paper \\\"Multi-Column Deep Neural Networks for Image Classification\\\" should be cited.\\n\\n### After reading author feedback\\nThank you for the feedback. After reading the updated paper I still believe that 6 is the right score for this paper. The method produces better results using ensemble learning. While the results seem impressive, the method to obtain them is not very novel; nonetheless, I would not have a problem with it being accepted, but I don't think it would be a loss if it were not accepted.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Extensive experiments and results, but not enough contribution\", \"review\": \"The paper shows extensive experimentation and improves the previous result in all cases. The proposed method is a straightforward extension and can be readily implemented and used.\\n\\nI have difficulty understanding equation 8 and the paragraph below. It seems like the authors use an equal weighting for the additional agents, however they mention using Monte Carlo to \\u201ctackle the intractability resulting form the summation over the exponentially large space y\\u201d. According to the paper the size of y is the dataset, is it exponentially large? Do the authors describe stochastic gradient descent? Also what do the authors mean by offline sampling? Do they compute the targets for f_0 and g_0 beforehand using f_1\\u2026n and g_1\\u2026n?\\n\\nThe results mention computational cost a few times, I was wondering if the authors could comment on the increase in computational cost? e.g. how long does \\u201cpre-training\\u201d take versus training the dual? Can the training of the pre-trained agents be parallelised? Would it be possible to use dropout to more computationally efficient obtain the result of an ensemble?\\n\\nIn general I think the authors did an excellent job validating their method on various different datasets. I also think the above confusions can be cleared up with some editing. However the general contribution of the paper is not enough, the increase in performance is minimal and the increased computational cost/complexity substantial. I do think this is a promising direction and encourage the authors to explore further directions of multi-agent dual learning.\", \"textual_notes\": [\"Pg2, middle of paragraph 1: \\u201cwhich are pre-trained with parameters fixed along the whole process\\u201d. This is unclear, do you mean trained before optimising f_0 and g_0 and subsequently held constant?\", \"Pg2, middle last paragraph: \\u201ctypical way of training ML models\\u201d. While the cross entropy loss is a popular loss, it is not \\u201ctypical\\u201d.\", \"Pg 3, equation 4, what does \\u201cbriefly\\u201d mean above the equal sign?\", \"Perhaps a title referring to ensemble dual learning would be more appropriate, given the possible confusion with multi agent reinforcement learning.\", \"################\"], \"revision\": \"I would like to thank the authors for the extensive revision, additional explanations/experiments, and pointing out extensive relevant literature on BLUE scores. The revision and comments are much appreciated. I have increased my score from 4 to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"IWSLT De->En experiment details\", \"comment\": \"Thanks for the information. Here are our settings and some initial observations:\\n\\u00a0\\n** Settings **\\n\\u00a0 -\\u00a0 Hyperparameters: We set 'hparams_set=transformer_base', and experiment with the batch size of 4096 (default), 6400 (to approximate 320 sentences) and 320 tokens, and dropout rate of 0.1 (default) and 0.4 (since severe overfitting observed). The rest hyperparameters use the default value in 'transformer_base'.\\n\\u00a0 -\\u00a0 Optimization: We use the Adam optimizer with the same setting described in the paper (section 3.2 Optimization and Evaluation). \\n\\u00a0\\u00a0-\\u00a0 Evaluation: We use beam search with a beam size of 6 (paper section 3.2) in inference and\\u00a0use multi-bleu.pl to evaluate the tokenized BLEU.\\n\\nWe run the baseline and our algorithm with 5 agents (Dual-5) with the above settings. For our multi-agent model, we still use the same agents as the paper (transformer_small with 4 blocks) for sampling. \\nThe models are implemented with tensor2tensor v1.2.9 and trained on one M40 GPU. \\u00a0\\n\\u00a0\\n** Results **\\nBelow are the initial results. We are still working on the experiments.\\n\\u00a0\\n\\tTable 1. With dropout rate of 0.1 (default)\\n\\t-------------------------------------------------------------------------------\\n\\tBatch Size\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 4096\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 6400\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 320\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\t-------------------------------------------------------------------------------\\n\\tBaseline\\u00a0\\u00a0\\u00a0 \\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a032.24\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 32.22\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 2.17\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\tOurs (Dual-5)\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 34.59\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 34.58\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 3.65\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\t-------------------------------------------------------------------------------\\n\\t\\u00a0\\n\\tTable 2. 
With dropout rate of 0.4 \\n\\t-------------------------------------------------------------------------------\\n\\tBatch Size\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 4096\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 6400\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 320\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\t-------------------------------------------------------------------------------\\n\\tBaseline\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 34.40\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 34.43\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 2.37\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\tOurs (Dual-5)\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 35.12\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 35.45\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 3.91\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0 \\n\\t-------------------------------------------------------------------------------\", \"we_have_the_following_observations\": \"1) The default 'transformer_base' setting\\u00a0appears to suffer from severe overfitting (Table 1). We tune the dropout ratio and present results with dropout=0.4 in Table 2, where we indeed obtain better baseline results than our baselines with 'small' setting reported in the paper. The stronger baseline achieves a 34.43 BLEU score (with batch size 6400).\\n\\t2) We notice that with a batch size of 320 tokens (as the setting you suggested), the model is not well optimized with either dropout ratio. We are curious whether you are also using other different hyperparameters or optimization settings. We would be happy to re-evaluate our approach under the stronger baseline setting.\\n\\t3) From the results we have so far, our algorithm can still outperform the stronger baseline with a large margin, achieving 35.45 BLEU score (with batch size 6400).\\n\\u00a0\\nWe will keep working on experiments of IWSLT De-En under the 'base' settings and update our findings.\"}",
"{\"comment\": \"Thanks for your reply, I used 320 tokens to obtain a better result compared to the default settings.\", \"title\": \"batch size\"}",
"{\"title\": \"Reply to some details\", \"comment\": \"Thanks for your comments.\\n\\nFor IWSLT De-En, we use the 'transformer_small' setting (in paper section 3.2), in which the batch size is set to be 4096 tokens. We use multi-bleu.pl to evaluate the tokenized BLEU. \\n\\nThanks for providing a stronger baseline and we are working on it. To confirm, by 'batch size=320', are you referring to 320 tokens or sentences?\"}",
"{\"comment\": \"What's the batch size of your baseline system for IWSLT De-En? And which evaluation script do you use to measure the BLEU score?\\n\\nI run the T2T with transform_base parameters(batch size = 320), and achieve a BLEU score of 34.38, which is higher than your baseline (33.42). I use the multi_bleu.pl and tokenize the English and German using Moses toolkit.\", \"title\": \"some details\"}"
]
} |
|
rJl2E3AcF7 | Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference | [
"Shun Liao",
"Ting Chen",
"Tian Lin",
"Chong Wang",
"Dengyong Zhou"
] | Computations for the softmax function in neural network models are expensive when the number of output classes is large. This can become a significant issue in both training and inference for such models. In this paper, we present Doubly Sparse Softmax (DS-Softmax), a sparse mixture of sparse experts, to improve the efficiency of softmax inference. During training, our method learns a two-level class hierarchy by dividing the entire output class space into several partially overlapping experts. Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts. During inference, our method quickly locates the most probable expert to compute a small-scale softmax. Our method is learning-based and requires no knowledge of the output class partition space a priori. We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of performance. | [
"hierarchical softmax",
"model compression"
] | https://openreview.net/pdf?id=rJl2E3AcF7 | https://openreview.net/forum?id=rJl2E3AcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkewqaD1lV",
"HklSWHq3p7",
"B1e8djk9Tm",
"HJxF95J5aX",
"HJgvmMlBTQ",
"B1euHOqi37",
"SklHkeMohX",
"rJxPUFHc3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544678798646,
1542395132849,
1542220654014,
1542220433300,
1541894687507,
1541281856151,
1541246940753,
1541196110949
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1485/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1485/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1485/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1485/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1485/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1485/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1485/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1485/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This work proposes a new approximation method for softmax layers with large number of classes. The idea is to use a sparse two-layer mixture of experts. This approach successfully reduces the computation requires on the PTB and Wiki-2 datasets which have up to 32k classes. However, the reviewers argue that the work lacks relevant baselines such as D-softmax and adaptive-softmax. The authors argue that they focus on training and not inference and should do worse, but this should be substantiated in the paper by actual experimental results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"title\": \"Summary of the revision and key points\", \"comment\": \"We thank reviewers for their time and valuable comments. We have revised our article based on reviewers' suggestions.\", \"we_want_to_summarize_the_key_points_of_this_work_as_follows\": [\"Our work focuses on speeding up softmax inference given large output dimension and achieved good empirical results on both synthetic and real dataset. For top-k language modeling task on Wiki-2, we can achieve more than 23x without any loss of performance.\", \"Our method is novel in terms of constructing the two-level overlapping hierarchy of output classes. The hierarchy is captured through the mixture model and group lasso technique. The inference speedup is achieved by such a hierarchy.\", \"The key difference between our work and existing methods is that our speedup is achieved by learning a new output embedding while most existing methods relied on approximating the trained/fixed embedding. This means our method is orthogonal with them in principle. One key advantage of our method is speedup without any loss while approximation based methods usually suffer the loss of performance.\"]}",
"{\"title\": \"Our work focuses on inference speedup, and compares to the best approach (NIPS'17) we were aware of\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable comments. We have revised our writing in the revision, and will further improve its clarity. Please find our response as follows.\\n\\n- Algorithm 1 does not include mitosis, which may have an effect on the resulting approximation. \\n\\nMitosis training can be considered as executing Algorithm 1 for multiple times with an increasing number of experts and inherited initialization from last round by changing W^e and W^g. Also, training with mitosis achieves similar performance as training without it shown in Appendix B, Figure (a). \\n\\n- How are the lambda and threshold parameters tuned? The authors mention a validation set, are they just exhaustively explored on a 3D grid on the validation set? \\n\\nThe hyper-parameters related to DS-softmax (such as lambda) are tuned according to the performance on a validation dataset. Also, as we mentioned in the paper, only one hyper-parameter (group lasso lambda) needs to be tuned. The heuristic we use to tune group lasso lambda is to increase lambda, starting from a small value, until it hurts the performance. Also threshold and balancing lambda variables are kept fixed as (0.01 and 10). \\n\\n- Why would it be expected to be faster than all the other alternatives? Wouldn't similar alternatives like the sparsely gated MoE, D-softmax and adaptive-softmax have chances of being faster? \\n\\nIn terms of baselines, SVD-softmax (NIPS\\u201917) was chosen since it is a recent method that provides a significant inference speedup for softmax. Other alternatives, such as D-softmax and adaptive-softmax, focus on training instead of inference speedup. Furthermore, as claimed in their papers, they achieve limited speedup (around 5x) in language modeling, which is much worse than ours. With regards to Sparsely Gated MoE, it cannot speed up inference, since they select expert with full softmax.\\n\\nWe would like to emphasize that most existing methods for inference speedup focus on approximating trained softmax layer, which usually suffers a loss on performance. Our model allows the adaptive adjustment of the softmax layer, achieves speedup through capturing the two-level overlapped hierarchy during training, which is novel and does not suffer from the performance loss.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Dear Reviewer:\\n\\nThank you for your valuable comments. We have addressed typos in the revision accordingly. And please find our response as follows.\\n\\n- Can you be more specific about the gains in training versus inference time?\\n\\nWe would like to emphasize that the our goal is to speed up the inference time for softmax, so we do not include any comparisons in terms of training time. According to our experiments, most speedup can be achieved in few epochs (given all other layers are pre-trained) so that the training time increase is not significant compared to the original one.\\n\\n- You motivate some of the work by the fact that the experts have overlapping outputs. Maybe in section 3.7 you can address how often that occurs as well? \\n\\nThanks for the suggestion. We demonstrate that ambiguous words are often overlapped between clusters as illustrated in Figure 3(b). We added one more Figure in Appendix B, Figure (b), to demonstrate the distribution of overlapping. \\n\\n- It wasn't clear how the sparsity percentage on page 3 was defined? \\n\\nSorry for the possible confusion. The sparsity in page 3 means the percentage of pruned words. We have added more clarifications in the revised version. \\n\\n- Can you motivate why you are not using perplexity in section 3.2?\\n\\nWe use top-k accuracy, instead of perplexity, because approximating top-k is required for most inference tasks in practice (see [1]). Perplexity captures the normalized log-likelihood of all possible words, while top-k accuracy is a better measure for inference speedup for top-k retrieval. For example, in some extreme cases, if a word only has a very small probability which makes it unpredictable at all (i.e. couldn\\u2019t be retrieved by top-k for any reasonably small k), it could still have a huge impact in terms of perplexity, but has a much smaller impact on top-k accuracy, which seems more reasonable given the goal of top-k retrieval.\\n\\n[1] Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS), NIPS 2014\"}",
"{\"title\": \"Clarifications: Sparsely-Gated MoE (Shazeer et al. 2017) cannot speed-up softmax inference\", \"comment\": \"Dear reviewer:\\n\\nWe appreciate your comments but it appears that there is some misunderstanding regarding our contribution in this work. \\n\\nOur work is for softmax inference speedup while Sparse-Gated MoE (MoE) was not designed to do so. It was designed to increase the model expressiveness. It cannot achieve speedup because each expert still contains full softmax space as we mentioned in the background section (page 2 line 21st) and method section (page 2 last 4th line). And since it is slower than the standard softmax by definition, we chose not to compare with it in the paper.\\n\\nOur algorithm addresses speed up in softmax inference. This is fundamentally different from Sparse-gated MoE. We divide the output space into multiple overlapped subsets. To find top-k predictions, we only search a few subsets. While in full softmax or MoE, the complexity is linear with output dimension. Therefore, we did not include a comparison with Sparsely-Gated MoE in our article and only compare with full softmax. \\n\\nJust for additional reference, we tested Sparsely-Gated MoE with different experts in PTB dataset; we compared the results to DS-Softmax. As expected, the Sparsely-Gated MoE does not achieve speedup in terms of softmax inference. \\n\\n______________________________________________\\nMethod | Top 1 | Top 5 |Top 10| FLOPs| \\nDS-8 | 0.257 | 0.448 | 0.530 | 2.84x |\\nMoE-8 | 0.258 | 0.448 | 0.530 | 1x |\\nDS-16 | 0.258 | 0.450 | 0.529 | 5.13x |\\nMoE-16 | 0.258 | 0.449 | 0.530 | 1x |\\nDS-32 | 0.259 | 0.449 | 0.529 | 9.43x |\\nMoE-32 | 0.259 | 0.450 | 0.531 | 1x |\\nDS-64 | 0.258 | 0.450 | 0.529 |15.99x|\\nMoE-64 | 0.260 | 0.451 | 0.531 | 1x |\\n______________________________________________\\n\\n* FLOPs means FLOPs reduction (i.e. baseline's FLOPs / target method's FLOPs).\"}",
"{\"title\": \"Good empirical results, but only one baseline and poor writing.\", \"review\": \"The present paper proposes a fast approximation to the softmax computation when the number of classes is very large. This is typically a bottleneck in deep learning architectures. The approximation is a sparse two-layer mixture of experts.\\n\\nThe paper lacks rigor and the writing is of low quality, both in its clarity and its grammar. See a list of typos below.\\n\\nAn example of lack of mathematical rigor is equation 4 in which the same variable name is used to describe the weights before and after pruning, as if it was computer code instead of an equation. Also pervasive is the use of the asterisk to denote multiplication, again as if it was code and not math.\\n\\nAlgorithm 1 does not include mitosis, which may have an effect on the resulting approximation.\\n\\nHow are the lambda and threshold parameters tuned? The authors mention a validation set, are they just exhaustively explored on a 3D grid on the validation set?\\n\\nThe results only compare with Shim et al. Why only this method? Why would it be expected to be faster than all the other alternatives? Wouldn't similar alternatives like the sparsely gated MoE, D-softmax and adaptive-softmax have chances of being faster?\\n\\nThe column \\\"FLOPS\\\" in the result seems to measure the speedup, whereas the actual FLOPS should be less when the speed increases. Also, a \\\"1x\\\" label seems to be missing in for the full softmax, so that the reference is clearly specified.\\n\\nAll in all, the results show that the proposed method provides a significant speedup with respect to Shim et al., but it lacks comparison with other methods in the literature.\", \"a_brief_list_of_typos\": \"\\\"Sparse Mixture of Sparse of Sparse Experts\\\"\\n\\\"if we only search right answer\\\"\\n\\\"it might also like appear\\\"\\n\\\"which is to design to choose the right\\\"\\nsparsly\\n\\\"will only consists partial\\\"\\n\\\"with \\u03b3 is a lasso threshold\\\"\\n\\\"an arbitrarily distance function\\\"\\n\\\"each 10 sub classes are belonged to one\\\"\\n\\\"is also needed to tune to achieve\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Need to discuss more about how Doubly Sparse is superior to Sparsely-Gated MoE\", \"review\": \"The paper proposes doubly sparse, which is a sparse mixture of sparse experts and learns a two-level class hierarchy, for efficient softmax inference.\\n\\n[+] It reduces computational cost compared to full softmax.\\n[+] Ablation study is done for group lasso, expert lasso and load balancing, which help understand the effect of different components of the proposed\\n[-] It seems to me the motivation is similar to that of Sparsely-Gated MoE (Shazeer et al. 2017), but it is not clear how the proposed two-hierarchy method is superior to the Sparsely-Gated MoE. It would be helpful the paper discuss more about this. Besides, in evaluation, the paper only compares Doubly Sparse with full softmax. Why not compare with Sparsely-Gated MoE?\\n\\nOverall, I think this paper is below the borderline of acceptance due to insufficient comparison with Sparsely-Gated MoE.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"New method for large scale softmax inference\", \"review\": \"In this paper the authors introduce a new technique for softmax inference. In a multiclass setting, the idea is to take the output of a NN and turn it into a gating function to choose one expert. Then, given the expert, output a particular category. The first level of sparsity comes from the first expert. The second level of sparsity comes from every expert only outputting a limited set of output categories.\\n\\nThe paper is easy to understand but several sections (starting from section 2) could use an english language review (e.g. \\\"search right\\\" -> \\\"search for the right\\\", \\\"predict next word\\\" -> \\\"predict the next word\\\", ...) In section 3, can you be more specific about the gains in training versus inference time? I believe the results all relate to inference but it would be good to get an overview of the impact of training time as well. You motivate some of the work by the fact that the experts have overlapping outputs. Maybe in section 3.7 you can address how often that occurs as well?\", \"nits\": [\"it wasn't clear how the sparsity percentage on page 3 was defined?\", \"can you motivate why you are not using perplexity in section 3.2?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HkghV209tm | Optimistic Acceleration for Optimization | [
"Jun-Kun Wang",
"Xiaoyun Li",
"Ping Li"
] | We consider new variants of optimization algorithms. Our algorithms are based on the observation that mini-batch stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable. Inspired by a similar setting in the online learning literature, called Optimistic Online Learning, we propose two new optimistic algorithms, for AMSGrad and Adam respectively, that exploit the predictability of gradients. The new algorithms combine the idea of the momentum method, the adaptive gradient method, and algorithms from Optimistic Online Learning, which leads to speedups in training deep neural nets in practice. | [
"optimization",
"Adam",
"AMSGrad"
] | https://openreview.net/pdf?id=HkghV209tm | https://openreview.net/forum?id=HkghV209tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xyzFyIg4",
"BkxG7ywE1E",
"HJlG728ZJE",
"HkgiB4shAX",
"S1eV7r_9Cm",
"rklDgYP907",
"SJg5TdwqAm",
"rJxli_wcAQ",
"H1eizHCITm",
"BJehLc1R3m",
"HJe5D4oj27",
"Hye3jHqf2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545103623361,
1543954201826,
1543756825655,
1543447619102,
1543304475619,
1543301359076,
1543301314126,
1543301271653,
1542018323341,
1541433939703,
1541284962491,
1540691364176
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1484/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1484/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers expressed some interest in this paper, but overall were lukewarm about its contributions. R4 raises a fundamental issue with the presentation of the analysis (see the D_infty assumption). The AC thus goes for a \\\"revise and resubmit\\\".\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper\"}",
"{\"title\": \"On $D_{\\\\infty}$ assumption\", \"comment\": \"I think that assuming that $D_\\\\infty$ is bounded is a lack of rigor. It is not a property of the problem but a property of the algorithm you use. The goal is to show the appealing properties of your algorithm.\\n\\nLet me give you an example. if we consider the simple sequence $x_{t+1} = x_t - \\\\eta x_t$ (which is basically gradient descent on the one dimensional objective $f(x) = x^2$. ) Then if we assume that $D_{\\\\infty} = sum_{t} \\\\|x_t\\\\|$ is bounded we have: \\n\\n$ \\\\eta x_{t} = x_t - x_{t+1} => \\\\frac{\\\\eta}{T}\\u00a0\\\\sum_{t=0}^T x_t = (x_0 - x_{T+1})/T \\n => \\\\eta \\\\|\\\\frac{1}{T} \\\\sum_{t=0}^T x_t \\\\| \\\\leq 2 D_\\\\infty/T$\\n\\nNow with $\\\\eta = T$ we get that the average of $x_t$ converge at a rate $O(1/T^2)$.\\n$\\\\|\\\\frac{1}{T} \\\\sum_{t=0}^T x_t \\\\| \\\\leq 2 D_\\\\infty/T^2$\\n\\nThat is not true because for $\\\\eta > 1$ it is easy to show that this sequence actually diverge !\\n\\nThis example underlines that $D_{\\\\infty}$ may actually depend on $\\\\eta$. \\n\\nI consider that it is paradoxical to assume that a sequence is bounded in order to eventually show that it's average actually converge.\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for taking your time to respond. Unfortunately, I don't find the responses to be very satisfying.\", \"re_boundedness_of_weights\": \"This is actually a very strong assumption as it implicitly requires a certain kind of stability that ensures that the weights output by the algorithm never diverge. This is already not trivial to ensure when the objective is convex, and certainly much harder if it is non-convex. So, projections are absolutely necessary for this part of the analysis to work out.\", \"re_extrapolation_method\": \"I do understand that you want to approximate the gradient as the fixed point of (4), I am just having trouble understanding why this would be a reasonable approximation. Specifically, in what cases do you expect this approximation to be accurate? Is there a natural case where the gradients actually follow such a linear recursion? (Maybe linear regression?) What are cases where this method leads to a bad approximation?\\n\\nPlease consider discussing these issues in more detail in a future version.\"}",
"{\"title\": \"Response to author response\", \"comment\": \"Thanks for the response.\\n\\nI am aware that the algorithm doesn't reduce to AMSGrad when m_t=0, and I think that this makes it more important to carefully highlight when the new algorithm will outperform existing methods (such as AMSGrad). It's probably not reasonable to do this theoretically on real-world problems (although it would be interesting to show how the terms in question behave in your experiments), but even demonstrating cases where the proposed algorithm is better on toy objectives would be helpful for the reader.\"}",
"{\"title\": \"Response to AnonReviewer 1\", \"comment\": \"Thank you for the valuable comments. We have fixed most of the issued you raised under \\\"Detailed Comments\\\". For now, we leave the small capital letter typesettings for now but we will be happy to remove the capitalization later.\\n\\n== Regarding to $D_{\\\\infty}$: \\nWe assume that it is finite. If $w^*$ is finite, then $D_{\\\\infty}$ should be finite. We think it is a reasonable assumption. We will consider the constraint case later.\\n\\n== Regarding to the extrapolation method: \\nWe want to use the past few gradient vectors to predict $g_{t}$. So, $x^*$ is the fixed point of the (4).\\nIf $x_{t-1}= x^{*}$, then $x_{t} = x^{*}$. By using the extrapolation method, we basically assumes that the relation of gradient vectors satisfies (4). \\nWe admit that it is not true in general but the method helps to get a faster convergence in the experiments. \\n\\n== Experiment == \\nWe think that our results illustrate an obvious acceleration, especially in first few epochs. Since our work focuses on a \\\"optimistic\\\" modification to the AMSGrad algorithm, we choose to compare the improvement brought by different optimistic term based on tuned AMSGrad optimizer, which might be more convincing for the \\\"extra effect\\\".\\n\\nAgain, thanks for the detailed comments.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the valuable comments and identifying the typos. We have fixed them. Please find as follows our response.\\nWe've adjusted the names of algorithms and updated a new version accordingly.\\n\\n== Regarding to Theorem 1:\\nThe proposed algorithm does not reduce to AMSGrad when $m_{t}=0$. So, the regret bound is different.\\nIf one remove line 9, or set $h_{t+1}=0$, then the last two terms of the regret bound would disappear,\\nwhich becomes the bound of AMSGrad (namely, (9)).\\n\\nAs AnonReviewer 4 points out, we should also compare the sum of the last two terms in Theorem 1 (namely, (8)) with the last term on (9). We are not claiming that the regret bound of Theorem 1 is always better than that of AMSGrad (namely, (9)).\\nOur original discussion actually means that if $m_{t}$ and $g_{t}$ is close, then the last term of (8) would dominate. We then try to upper-bound it in a very loose sense (i.e. (10)). We treat each $\\\\frac{(g_t[i]^2- h_t[i]^2)}{\\\\sqrt{\\\\hat{v}_{t-1}[i]}}$ as a constant and get a $O(\\\\sqrt{T})$ bound. However, in practice, $\\\\sqrt{\\\\hat{v}_{t-1}[i]}$ might grow over time,\\nand the growth rate is different for different dimension $i$. If $\\\\sqrt{\\\\hat{v}_{t-1}[i]}$ grows like $O(\\\\sqrt{t})$ then the last term of (8) is just $O(\\\\log T)$, which might be better than that of the last term on (9). One can also get a data dependent bound like the last term on (9). We just wanted to say that the regret bound might be better than that of AMSGrad under certain conditions.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for the comments and identifying the typos. We have fixed them. Please find as follows our response.\\nWe've updated the new version accordingly.\\n\\n== The extrapolation algorithm == \\nWe choose the particular algorithm by (Scieur et al. 2016) because it has good empirical performance\\nin practice. (Scieur et al. 2016) shows that using the last few updates of an optimization algorithm,\\nthe method can predict a point that is much closer to the optimum than the last update of the optimization algorithm.\\n\\n== Scaling the next direction ==\\nThis may be a good idea. We leave it as a future work.\\n\\n== \\\"... the gradient vectors at a specific time span is assumed to be captured by (5).\\\" ==\\nWe elaborate it in the new version accordingly.\\nWe want to have a good prediction $m_{t}$ of $g_{t}$ by using the past few gradients.\\nIf the past few gradients can be modeled by the equation approximately, then\\nthe method should predict the gradient well.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for the valuable comments. Please find as follows our response.\\nWe've updated the new version accordingly.\\n\\n== Iteration Cost == \\nYes, like optimistic algorithms in general, each iteration would be slightly more expensive. Using fewer iterations means that it has fewer access to training samples than what required by AMSGrad. The sample efficiency might have some advantages on certain applications. One can also design some algorithms or update rules to reduce the time of predicting $m_t$. \\n\\n== Theorem 1 == \\nYes, we should also compare the sum of the last two terms in Theorem 1 (namely, (8)) with the last term on (9). We are not claiming that the regret bound of Theorem 1 is always better than that of AMSGrad (namely, (9)). Our original discussion actually means that if $m_{t}$ and $g_{t}$ is close, then the last term of (8) would dominate. We then try to upper-bound it in a very loose sense (i.e. (10)). We treat each $\\\\frac{(g_t[i]^2- h_t[i]^2)}{\\\\sqrt{\\\\hat{v}_{t-1}[i]}}$ as a constant and get a $O(\\\\sqrt{T})$ bound. However, in practice, $\\\\sqrt{\\\\hat{v}_{t-1}[i]}$ might grow over time,\\nand the growth rate is different for different dimension $i$. If $\\\\sqrt{\\\\hat{v}_{t-1}[i]}$ grows like $O(\\\\sqrt{t})$ then the last term of (8) is just $O(\\\\log T)$, which might be better than that of the last term on (9). One can also get a data dependent bound like the last term on (9). We just wanted to say that the regret bound might be better than that of AMSGrad under certain conditions.\\n\\n== $D_{\\\\infty}$ == \\nWe assume that it is finite. If $w^*$ is finite, then $D_{\\\\infty}$ should be finite. We think it is a reasonable assumption. Thanks for your suggestion, we will consider the constraint case later.\\n\\n== Discussion/Description of Algorithm 2 == \\nThanks for the suggestion. We updated it accordingly.\\n\\n== Comparison with the update of ADAM-DISZ on Page 6 == \\nThanks for the suggestion. We updated the discussion accordingly.\\nIn short, combining (8) and (9) in Algorithm 2, we get that\\n$$ w_{t+1} = w_{t-1/2} - \\\\eta_t \\\\frac{\\\\theta_t}{\\\\sqrt{\\\\hat{v}_t}}\\n- \\\\eta_{t+1} \\\\frac{4}{1-\\\\beta_1} \\\\frac{h_{t+1}}{\\\\sqrt{\\\\hat{v}_t}}\\n$$ Notice that, $w_{t+1}$ is updated from $w_{{t-1/2}}$ instead of $w_{{t}}$. So, ADAM-DISZ is not really similar to the proposed algorithm here.\\n\\n== Experiments == \\nOn the left we want to show the whole training path, and we followed the choice of axis units as in previous related works (e.g Reddi et al. 2018). On the right we want to focus on the acceleration in relatively early stage, so we choose to plot against number of epochs to highlight this point. We have added some explanation in the caption of figure 1. Thanks for your suggestion.\"}",
"{\"title\": \"An interesting way to combine regularized approximate minimal polynomial extrapolation and optimistic online learning.\", \"review\": [\"This paper provides an interesting way to combine regularized approximate minimal polynomial extrapolation and optimistic methods. I like the idea and the experimental results look promising. However, I have some concerns:\", \"I'm wondering if the comparison with the baseline is fair. Actually, one iteration of Optimistic-AMSGrad is more expensive than one iteration of AMSGrad since it requires to compute m_{t+1}. The authors should explain to what extend this computation is significantly cheaper that a backprop (if it actually is).\", \"The discussion after Theorem 1 is not clear. To me it is not clear whether or not Optimistic-AMSGrad has a better Regret that AMSGrad: you did not compare the *sum* of the two additional term with the term with a $\\\\log(T)$ (second line of (8) with second lien of (9)). Do you get a better regret that O(\\\\sqrt{T}), a better constant ? Moreover you did not justify why it is reasonable to assume that each $g_t[i]^2-h_t[i]^2/\\\\sqrt{v_{t-1}[i]}$ are bounded.\", \"I'm also concerned about the definition of $D_{\\\\infty}$. Did you prove that this constant is not infinite ? (Reddi et al. 2018) proposed a projected version of their algorithm and did the analysis assuming that the constraints set was bounded. In your Algorithm 2 would you project in Line 8 and 9 or only on line 9 ? I think that the easiest fix would be to provide a projected version of your algorithm and to do your analysis with the standard assumption that the constraints set is bounded.\", \"The description of Alg 2 is not clear. \\\"Notice that the gradient vector is computed at w_t instead of w_{t\\u22121/2}\\\" why would it be $w_{t-1/2}$ ? in comparison to what ? \\\"Also, w_{t+ 1/2} is updated from {w_{t\\u2212 1/2} instead of w_t.\\\" Same. The comments are made without any description of the algorithm, fact is, if the reader is not familiar with the algorithm (which is introduced in the following page) the whole paragraph is hard to catch.\", \"Actually, Page 6 you explain how the optimistic step of Daskalikis et al. (2018) is unclear but you can merge the updates Lines 8\", \"and 9 to $w_{t+1} = w_{t} - \\\\eta_{t+1} \\\\frac{4 h_{t+1}}{(1-\\\\beta_1) \\\\sqrt{v_t}} - \\\\eta_t \\\\\\u00b1rac{\\\\theta_t}{\\\\sqrt{v_t}} + \\\\eta_t \\\\frac{4 h_{t}}{(1-\\\\beta_1) \\\\sqrt{v_{t-1}}}$ (Plug line 8 in line 9 and then plug Line 9 at time t) to get a very similar update. If you look more closely at Daskalakis et al. 2018 their guess $m_{t+1}$ is $g_t$. Finally you Theorem 2 is stated in a bit unfair way since you also require $\\\\beta_1 <\\\\sqrt{\\\\beta_2}$, moreover it seems that Theorem 2 is no longer true anymore if, as you says, you impose that the second moment of ADAM-DISZ is monotone adding the maximization step.\", \"About the experiments, I do not understand why there is the number of iterations in the left plots and the number of epochs on the right plots. It makes the plots hard to compare.\", \"You should compare your method to extragradient methods.\", \"To sum up, this paper introduce interesting results. The combination of (Scieur et al. 2016) and optimistic online learning is really promising and solid theoretical results are claimed. However, some points should be clarified (see my comments above). 
Especially, I think that the authors focused too much (sometimes being unfair in their discussion and propositions) on showing how their algorithm is better than (Daskalakis et al. 2018) whereas as they mentioned it \\\"The goals are different.\\\" ADAM-DISZ is designed to optimize games and is similar to extragradient. It is know that extragradient methods are slower than gradient methods for single objective minimization because of the extrapolation step using a too conservative signal for single objective minimization.\"], \"some_minor_remarks\": [\"Page One \\\"NESTEROV'SMETHOD\\\"\", \"\\\"which can be much smaller than \\\\sqrt{T} of FTRL if one has a good guess.\\\" You could refer to Section 3 or something else because otherwise this sentence remains mysterious. What is a good guess (OR maybe you could say that standard \\\"good\\\" guesses are either the previous gradient or the average of the previous gradients)\", \"\\\"It combines the idea of ADAGRAD (Duchi et al. (2011)), which has individual learning rate for different dimensions.\\\" what is the other thing combined ?\", \"Beginning of Sec 3.1 $\\\\psi_t$ represent $diag(v_t)^{1/2}$.\", \"===== After Authors Response =====\", \"As developed in my response \\\"On $D_{\\\\infty}$ assumption \\\" to the reviewers, I think that the assumption that $D_\\\\infty$ bounded is a critical issue.\", \"That is why I am moving down my grade.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple but nice extension with existing ideas from the literature\", \"review\": \"In this manuscript, the authors borrow the idea of \\\"optimism\\\" from the online learning literature and apply it to two frequently used methods for neural network training (AMSGrad and ADAM). More or less, replicating the theory known in the literature, they give a regret analysis. The manuscript ends with a comparison of the optimistic methods against their plain counterparts on a set of test problems.\\n\\nThis is a well-written paper filling a gap in the literature. Through the contribution does not seem significant, the results do support that such extensions should be out there. In addition to a few typos, some clarification on several points could be quite useful:\\n\\n1) It is not clear why the authors use this particular extrapolation algorithm?\\n\\n2) If we have the past r+1 gradients, can we put them into use for scaling the next direction like in quasi-Newton methods?\\n\\n3) The following part of the sentence is not clear: \\\"... the gradient vectors at a specific time span is assumed to be captured by (5).\\\"\\n\\n4) \\\\nabla is missing at the end of the line right after equation (6).\\n\\n5) The second line after Lemma 2 should be \\\"... it does not matter how...\\\" (The word 'not' is missing.)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"reasonable algorithms, no surprises\", \"review\": [\"The paper proposes new online optimization algorithms by adding the idea of optimistic updates to the already popular components of adaptive preconditioning and momentum (as used in AMSGrad and ADAM). Such optimistic schemes attempt to guess the yet-unseen gradients before each update, which can lead to better regret guarantees when the guesses are accurate in a certain sense. This in turn can lead to faster convergence when the resulting algorithm is used in an optimization framework. The specific contribution of the present paper is proving formally that optimistic updates can indeed be combined with advanced methods like ADAM and AMSGrad, also providing a regret analysis of the former algorithm. On the practical front, the authors also propose a method closely resembling Anderson acceleration for guessing the next gradient, and the eventual scheme is shown to work well empirically in training deep neural networks.\", \"The idea of optimistic updates has been popular in recent years within the online-learning literature, and has been used with particularly great success for achieving improved convergence guarantees for learning equilibria in games. More recently, optimistic updates have also appeared in more \\\"practical\\\" settings such as training GANs, where they were shown to improve stability of training. The present paper argues that the idea of optimism can be useful for large-scale optimization as well, if the gradient guesses are chosen appropriately.\", \"I have lukewarm feelings about the paper. On the positive side, the proposed method is a natural and sensible combination of solid technical ideas, and its theoretical analysis appears to be correct. As the authors point out, their algorithm incorporates the idea of optimism in a much more natural way than the related optimistic ADAM algorithm previously proposed by Daskalakis et al. (2018) does. The experiments also indicate some advantage of optimism in the studied optimization problems.\", \"On the other hand, the theoretical contribution is marginal: the algorithm and its analysis is a straightforward combination of previous ideas and the result itself doesn't strike me as surprising at all. Then again, perhaps this is more of a presentation issue, as it may be the case that the authors did not manage to highlight clearly enough the technical challenges they needed to overcome to prove their theoretical results. Furthermore, I find the method for guessing the gradients to be rather arbitrary and poorly explained---at least I'm not sure if anyone unfamiliar with the mentioned gradient extrapolation methods would find this approach to be sensible at all.\", \"I am not fully convinced by the experimental results either, since I have an impression that the gradient-guessing method only introduces yet another knob to turn when tuning the hyperparameters, and it's not clear at all that this new dimension would indeed unlock levels of performance that were not attainable before. Indeed, the authors seem to fix all hyperparameters across all experiments and only switch around the optimistic components, rather than finding the best tuning for each individual algorithm and comparing the respective results. 
Also, I don't really see any qualitative improvements in the learning curves due to the new components---but maybe I just can't read these graphs properly since I have more of a \\\"theorist\\\" background.\", \"The writing is mostly OK, although there is room for improvement in terms of English use (especially on the front of articles which seem to be off in almost every sentence).\", \"Overall, I don't feel very comfortable about suggesting acceptance, mostly because I find the results to be rather unsurprising. I suggest that the authors try to convince me of the nontrivial challenges arising in the analysis, or about the definite practical advantage that optimism can buy for large-scale optimization.\", \"Detailed comments\", \"=================\", \"pp.1, abstract: \\\"We consider new variants of optimization algorithms.\\\"---This sentence is rather vague and generic. I guess you wanted to refer to *convex* optimization algorithms, which is actually what you consider in the paper. No need to be embarrassed about assuming convexity...\", \"pp.1: A general nuisance with the typesetting that already shows on the first page is that italic and small capital fonts are used excessively and without any clearly identifiable logic. Please simplify.\", \"pp.1: \\\"AdaGrad [...] exploits the geometry of the data and performs informative update\\\"---this makes it sound like other algorithms make non-informative updates.\", \"pp.1: Regret was not defined even informally in the introduction, yet already some regret bounds are compared, highlighting that one \\\"can be much smaller than O(\\\\sqrt{T})\\\". This is not very friendly for readers with no prior experience in online learning.\", \"pp.1: \\\"Their regret analysis are the regret analysis in online learning.\\\"---What is this sentence trying to say?\", \"pp.2: For this discussion of FTRL, it would be useful to remark that this algorithm really only makes sense if the loss function is convex. Also related to this discussion: you mention that the bound for optimistic FTRL can be much smaller than \\\\sqrt{T}, but never actually say that \\\\sqrt{T} is minimax optimal---without this piece of context, this statement has little value.\", \"pp.3: \\\"ADAM [...] does not converge to some specific convex functions.\\\"---I guess I understand what this sentence is trying to say, but it certainly doesn't say it right. (Why would an *algorithm* converge to a *function*?)\", \"pp.3, bottom: This description of \\\"extrapolation methods\\\" is utterly cryptic. What is x_t here? What is the \\\"fixed point x^*\\\"? Why is this scheme applicable at all here? (Why would one believe the errors to be near-linear in this case? Would this argument work at all for non-convex objectives?)\", \"pp.5, Lemma 1: Why would one expect D_\\\\infty to be finite? In order to ensure this, one would need to project the iterates to a compact set.\", \"pp.5, right after lemma 1: \\\"it does matter how g_t is generated\\\" -> \\\"it does *not* matter how g_t is generated\\\"\", \"pp.6, top: \\\"we claimed that it is smaller than O(\\\\sqrt{T}) so that we are good here\\\"---where exactly did this claim appear, and in what sense \\\"are we good here\\\"? Also, the norms in this paragraph should be squared.\", \"pp.6, Sec 3.2: While this section makes some interesting points, its tone feels a bit too apologetic. 
E.g., saying that \\\"[you] are aware of\\\" a previous algorithm that's similar to yours and doubling down on the claim that \\\"the goals are different\\\" makes the text feel like you're taking a defensive stance even though I can't see a clear reason for this. In my book, the approach you propose is clearly different and more natural for the purpose of your study.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Main idea is not sufficiently novel and technical and empirical results are not convincing enough\", \"review\": \"This paper combines recent results in online learning and convex optimization, specifically adaptivity, momentum, and optimism. The authors add an optimistic gradient prediction step into the AMSGrad algorithm proposed by Reddi et al, 2018. Moreover, they propose using the RMPE algorithm of Scieur et al, 2016 to come up with the gradient prediction step. The new method that they introduce is called Optimistic AMSGrad, and the authors present both theoretical guarantees as well as numerical experiments justifying this new method.\\n\\nThe paper is relatively well-written, and the authors do a good job of explaining recent work on adaptivity, momentum, and optimism in online learning and convex optimization to motivate their algorithm. The algorithm is also presented clearly, and the fact that the method is accompanied by both a regret bound as well as numerical experiments is appreciated.\\n\\nAt the same time, I found the presentation of this work to be a little misleading. The idea of applying optimism to Adam was already presented in Daskalakis et al, 2018. The algorithm in that paper is, in fact, called \\\"Optimistic Adam\\\". I found it very strange that the authors chose to rename that algorithm in this paper. There are two main differences between Optimistic Adam in Daskalakis et al, 2018 and Optimistic AMSGrad. The first is the extension from Adam to AMSGrad, which involves an extra maximization step (line 7 in Algorithm 2) that is immediate. The second is the choice of gradient prediction method. Since Daskalakis et al, 2018 were concerned with equilibrium convergence, they opted to use the most recent gradient as the prediction. On the other hand, the authors in this work are concerned with general online optimization, so they use a linear combination of past gradients as the prediction, based on a method introduced by Scieur et al, 2016. On its own, I do not find this extensions to be sufficiently novel or significant to merit publication. \\n\\nThe fact that this paper includes theoretical guarantees for Optimistic AMSGrad that were missing in Daskalakis et al, 2018 for Optimistic Adam does make it a little more compelling. However, I found the bound in Theorem 1 to be a little strange in that\\n(1) it doesn't reduce to the AMSGrad bound when the gradient predictions are 0 and (2) it doesn't seem better than the AMSGrad or optimistic FTRL bounds. The authors claim to justify (2) by saying that the extra g_t - h_t term is O(\\\\sqrt{T}), but the whole appeal of adaptive algorithms is that the \\\\sqrt{T} terms are data-dependent. The empirical results also do not include error bars, which makes it hard to judge their significance. \\n\\nThere were also many grammatical errors and typos in the paper.\", \"other_comments_and_questions\": \"1) Page 1: \\\"Their theoretical analysis are the regret analysis in online learning.\\\" Grammatical error.\\n2) Page 2: \\\"The concern is that how to get good m_t\\\". Grammatical error.\\n3) Page 3: \\\"as the future work\\\". Grammatical error.\\n4) Page 3: \\\"Nestrov's method\\\". Typo. \\n5) Page 4: \\\"with input consists of\\\". Grammatical error\\n6) Page 4: \\\"Call Algorithm 3 with...\\\" What is the computational cost of this step? One of the main benefits of algorithms like AMSGrad is that they run in O(d) time with very mild constants. 
\\n7) Page 4: \\\"For this extrapolation method to well well..., the gradient vectors at a specific time span is assumed to be captured by (5). If the gradient does not change significantly, this will be a mild condition.\\\" If the gradient doesn't change significantly, then choosing m_t = g_t would also work well, wouldn't it? Can you come up with examples of objectives for which this method makes sense? Even toy ones would strengthen this paper.\\n8) Page 5: Equation (8). As discussed above, this bound doesn't appear to reduce to the AMSGrad bound for m_t = 0, which makes it a little unsatisfying. The fact that there is an extra expression that isn't in terms of the \\\"gradient prediction error\\\" that one has for optimistic FTRL also makes the bound a little strange.\\n9) Page 7: \\\"The conduct Optimistic-AMSGrad with different values of r and observe similar performance\\\". You should mention that you show the performance for some of these different values in the appendix.\\n10) Page 7: \\\"multi-classification problems\\\". Typo.\\n11) Page 7: Figure 1. Without error bars, it's impossible to tell whether these results are meaningful. Moreover, it's strange to evaluate algorithms with online convex optimization guarantees on off-line non-convex problems.\\n12) Page 7: \\\"widely studied and playing\\\". Grammatical error.\\n13) Page 8: \\\"A potential directions\\\". Grammatical error.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rygjN3C9F7 | The Variational Deficiency Bottleneck | [
"Pradeep Kr. Banerjee",
"Guido Montufar"
] | We introduce a bottleneck method for learning data representations based on channel deficiency, rather than the more traditional information sufficiency. A variational upper bound allows us to implement this method efficiently. The bound itself is bounded above by the variational information bottleneck objective, and the two methods coincide in the regime of single-shot Monte Carlo approximations. The notion of deficiency provides a principled way of approximating complicated channels by relatively simpler ones. The deficiency of one channel w.r.t. another has an operational interpretation in terms of the optimal risk gap of decision problems, capturing classification as a special case. Unsupervised generalizations are possible, such as the deficiency autoencoder, which can also be formulated in a variational form. Experiments demonstrate that the deficiency bottleneck can provide advantages in terms of minimal sufficiency as measured by information bottleneck curves, while retaining a good test performance in classification and reconstruction tasks. | [
"Variational Information Bottleneck",
"Blackwell Sufficiency",
"Le Cam Deficiency",
"Information Channel"
] | https://openreview.net/pdf?id=rygjN3C9F7 | https://openreview.net/forum?id=rygjN3C9F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeLgP9egN",
"BkgLu-miR7",
"Syxm7-moAm",
"ryxlCAGjAm",
"SkeI70MiAm",
"H1l-kpRKC7",
"Byx7lzhRhQ",
"SJgxZjtRnX",
"BkerznG53Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544754925556,
1543348590038,
1543348507246,
1543347911978,
1543347742124,
1543265496913,
1541485035246,
1541475064040,
1541184524588
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1483/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1483/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1483/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1483/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1483/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: The paper presents an alternative regularized training objective for supervised learning that has a reasonable theoretical justification. It also has a simple computational formula.\", \"weaknesses\": \"The experiments are minimal proofs of concept on MNIST and fashion MNIST, and the authors didn't find an example where this formulation makes a large difference. The resulting formula is very close to existing methods. Finally the paper is a bit dense and the intuitions we should gain from this theory aren't made clear.\", \"points_of_contention\": \"One reviewer pointed out the close connection of the new objective to IWAE, and the authors added a discussion of the relation and showed that they're not mathematically equivalent. However, as far as I can tell they're almost identical in purpose: As k -> \\\\infty in IWAE, the encoder ceases to matter. And as M -> \\\\infty in VDB, we take the max over all encoders. Could the method proposed in this paper lead to an alternative to IWAE in the VAE setting?\", \"consensus\": \"Consensus wasn't reached, but the \\\"7\\\" reviewer did not appear to have put much though into their review.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Valid theory, but quite close to existing work\"}",
"{\"title\": \"Reference added\", \"comment\": \"Thank you for the comment!\\n\\nWe have now included the said reference.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": [\"Thank you for your comments!\", \"We have enhanced Appendix E with a discussion on some variants of the VAE that generalize the standard evidence lower bound (ELBO) by incorporating different bottleneck constraints to learn better representations. In particular, we discuss how our unsupervised objective (p. 18, equation 38) relates to the beta-VAE and the importance weighted autoencoder (Appendix E.1, p. 17,18).\", \"Please note that our unsupervised objective (p. 18, equation 38) contains the beta-VAE as a special case when we use only one sample from the encoding distribution (M=1). This means that we are naturally comparing with that method.\", \"Appendix E.2 (p. 18) now discusses implementation details of the unsupervised objective. Finally, we have included some visualizations in Appendix E.3 for the MNIST and FMNIST datasets for different values of M and beta.\", \"We agree that more comparisons will be beneficial in investigating the properties of the proposed method. This is something we are actively working on.\"]}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your comments!\\n\\n* In our method, \\\"``more informative\\\" means \\\"``less deficient\\\". \\nWe have added a figure tracing the mutual information between representation and output I(Z;Y) vs. the minimality term I(Z;X) for different values of beta (see Figure 2, lower right panel), when training with our loss function. This is the usual information bottleneck curve. The deficiency bottleneck curve (Figure 2, upper right panel) traces the corresponding sufficiency term J(Z;Y) (which is just the entropy of the labels minus our loss) vs. I(Z;X) for different values of beta. The text now makes this more explicit (see p.7, first paragraph). Note that for M=1, J(Z;Y) = I(Z;Y). We can see that when training with our loss, we achieve approximately the same level of sufficiency (measured in terms of I(Z;Y)) while consistently achieving more compression (note the log ordinate for I(Z;X) in the lower left panel in Fig. 2) for a wide range of beta values.\\n\\n* We included two new figures plotting the representation for MNIST (p. 19, Figure 7) and Fashion-MNIST (p. 19, Figure 8) in Appendix E.3 for an unsupervised version of the VDB objective (p. 18, equation 38).\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your comments!\\n\\n* We included a table showing accuracy numbers for different values of beta and M (see p. 6, Table 1) for the latent bottleneck sizes K=256 (Figure 2) and K=2 (Figure 3). \\n\\n* In relation to the figures, we have improved these in the revision. We are added a figure tracing the mutual information between representation and output I(Z;Y) vs. the minimality term I(Z;X) for different values of beta (see Figure 2, lower right panel), when training with our loss function. This is the usual information bottleneck curve. This contrasts with the deficiency bottleneck curve (Figure 2, upper right panel) which traces the corresponding sufficiency term J(Z;Y) (which is just the entropy of the labels minus our loss) vs. I(Z;X) for different values of beta. Note that for M=1, J(Z;Y) = I(Z;Y). We apologize for the confusion. The text now makes this more explicit (see p.7, first paragraph).\\n\\n* In response to your question about how we estimate the mutual information. Yes, we minimize an upper bound on both the deficiency and the rate term (see p.3, equation 3 and discussion leading up to the VDB objective in equation 4). The estimation of this upper bound is simplified by our choice of the prior and the encoding distribution which are diagonal Gaussians. The KL term can be computed and differentiated without estimation. We estimate the expected loss term using Monte Carlo sampling. We draw samples from the encoder using the reparameterization trick and leverage automatic differentiation (in Tensorflow) to compute the gradients. Since the expectation is inside the log, gradient updates may have higher variance for larger values of M. \\n\\nOur model is a classifier and our loss term is a tighter bound on the misclassification error (bias) than the usual cross-entropy loss as in the VIB (see p. 12, equation 13). Trading bias for variance has been investigated in some recent works (see, e.g., Bamler, Robert, et al. \\\"Perturbative black box variational inference.\\\" NIPS 2017). See last paragraph in p. 18 for the related discussion in the unsupervised setting.\\n\\n* In relation to the connection to IWAE, we have included a detailed discussion in Appendix E.1. \\n\\nThe method is different from ours, except in the limiting case where M = 1 and beta =1, in which case it coincides with the beta-VAE and also with our method. After taking a close look, we make the following observations: \\n\\nFor M > 1, the IWAE bound does not admit a decomposition like the standard ELBO (see equation 29 and 36) into a reconstruction loss term and a regularization term. In particular, this implies we cannot trade-off reconstruction fidelity for learning more meaningful representations by incorporating bottleneck constraints. See ensuing discussion in p.18 following equation 36. \\nIn contrast, our method has a tuning parameter beta. \\n\\nThe IWAE bound is known to be equivalent to the ELBO in expectation with a more complex approximate posterior qIW (see p.17, equation 34 and 35 and references therein in Appendix E.1). For beta values other than 1, a naive trick would be to plant qIW in liue of qphi in equation 37 (p. 18) to get a beta-IWAE of sorts. It is not entirely clear however, why we would want to do so when modulating beta already suffices to tune the VAE towards autoencoding (low beta) or autodecoding behavior (high beta) depending on the requirement at hand. 
A similar argument goes in the direction of an \\\"Importance weighted Variational Information Bottleneck\\\". We have not explored if and how using more expressive posteriors such as the qIW (p. 17, equation 35) can help the supervised bottleneck formulations in VDB or VIB. This remains a scope for future study. \\n\\nWe are now also citing the paper Yuri Burda, Roger Grosse, Ruslan Salakhutdinov. Importance Weighted Autoencoders. ICLR 2016.\"}",
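To make this estimation procedure concrete, here is a minimal TensorFlow 1.x-style sketch of the Monte Carlo term with the expectation inside the log; all names and shapes are ours, and we assume a decoder that broadcasts over a leading sample axis — this is an illustration, not the paper's released code:

```python
import tensorflow as tf

def vdb_loss(mu, sigma, y_onehot, decoder, M=4, beta=1e-3):
    """Sketch of the variational deficiency bottleneck objective:
    average the decoder likelihood over M reparameterized samples
    *before* taking the log (for M=1 this reduces to the VIB loss)."""
    shape = tf.concat([[M], tf.shape(mu)], axis=0)
    eps = tf.random_normal(shape)                 # M x batch x K noise
    z = mu[None] + sigma[None] * eps              # reparameterization trick
    p_y = tf.reduce_mean(decoder(z), axis=0)      # E_q[p(y|z)], inside the log
    deficiency = -tf.log(tf.reduce_sum(p_y * y_onehot, axis=-1) + 1e-12)
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder, in closed form
    kl = 0.5 * tf.reduce_sum(mu**2 + sigma**2 - 2.0 * tf.log(sigma) - 1.0, -1)
    return tf.reduce_mean(deficiency + beta * kl)
```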
"{\"comment\": \"The connection between the information bottleneck, compression, and deep neural networks is also described in Shwartz-Ziv & Tishby 2017 [https://arxiv.org/abs/1703.00810].\\nThat work should be referenced.\", \"title\": \"Related work\"}",
"{\"title\": \"New representation learning objective using Chanel Deficiency\", \"review\": \"This paper used the concept based on channel deficiency to derive a variational bound similar to variational information bottleneck. Theoretical analysis shows that this bound is an lower bound on the VIB objective. The empirical analysis shows it outperforms VIB in some sense.\\n\\nI think this paper's contribution is rather theoretical than practical. The experiments section can be improved in the following aspect:\\n- Figure 2 are hard to read for different M's. It would be better if the authors can show the exact accuracy numbers rather than the overlapped lines\\n- I(Z;Y) vs I(Z;X) graph is typically used in a VIB setting. In the paper's variational deficiency setting, although plotting I(Z;Y) vs I(Z;X) is necessary, it would be also helpful for the authors' to plot Deficiency vs I(Z;X), because this is what new objective is trading-off. \\n- Again, Figure 3, it is hard to see the benefits for increasing M from the visualizations for different clusterings. \\n- How do the paper estimate I(Z;Y) and I(Z;X) for plotting these figures? Does the paper use lower bound or some estimators? It should be made clear in the paper since these are non-trivial estimations.\\n\\nLast comment is that, although the concept of `deficiency` in a bottleneck setting is novel, the similar idea for tighter bound of log likelihood has already been pursed in the following paper:\\n\\n- Yuri Burda, Roger Grosse, Ruslan Salakhutdinov. Importance Weighted Autoencoders. ICLR 2016\\n\\nIt was kind of surprising that the authors did not cite this paper given the results are pretty much the same. It would also be helpful for the authors to do a comparison or connection section with this paper. \\n\\nI like the paper in general, but given it still has some space for improvement, I would keep my decision as boarder line for now.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper presents a method of learning representations that is based on minimizing \\\"deficiency\\\" rather than optimizing for information sufficiency.\", \"review\": \"The paper presents a method of learning representations that is based on minimizing \\\"deficiency\\\" rather than optimizing for information sufficiency. While perfect optimization of the sufficiency term in IB is equivalent to minimizing deficiency, the thesis of the paper is that the variational upper bound on deficiency is easier to optimize, and when optimized produces\\nbetter (more compressed representations), while performing equally on test accuracy.\\n\\n\\n\\nThe paper is well written and easy to read. The idea behind the paper (optimizing for minimizing deficiency instead of sufficiency in IB) is interesting, especially because the variational formulation of DB is a generalization of VIB (in that VIB reduces to VDB for M=1). What takes away from the paper is that while perfect optimization of IB/sufficiency is equivalent to perfect optimization of DB, it is not clear what happens when perfection is not achieved. Further, the authors claim that DB is able to obtain more compressed representations (But is the goal a compressed representation, or an informative one?). The paper would also benefit from evaluation of the representation itself, and comparison to other non-information bottleneck based algorithms.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Good Writing, Comparisons Needed.\", \"review\": \"This paper introduces deficiency bottleneck for learning a data representation and represent complicated channels using simpler ones. This problem has a natural variational form that can be easily implemented from VIB. Experiments show good performance comparing to VIB.\\n\\nThis paper is well-written and easy to read. The idea using KL divergence creating a deficiency channel to learn data representation is very natural. It is interesting that this formulation could be understood as minimizing a regularized risk gap of statistical decision problems, which justifies the usage of deficiency bottleneck (eq.9). \\n\\nMy biggest concern is the lack of comparison with other representation learning methods, which is a very well studied problem. However, it looks like authors only compared with VIB which is similar to the proposed method in terms of the objective function. For example, how does the method compare with (variants of) Variational Autoencoder? A discussion on this or some empirical evaluations would be nice.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
BJgsN3R9Km | AntMan: Sparse Low-Rank Compression To Accelerate RNN Inference | [
"Samyam Rajbhandari",
"Harsh Shrivastava",
"Yuxiong He"
] | Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements. To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy. AntMan extends knowledge distillation based training to learn the compressed models efficiently. Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models. Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state-of-the-art. Lastly, we show that AntMan offers super-linear speed gains compared to the theoretical speedup, demonstrating its practical value on commodity hardware. | [
"model compression",
"RNN",
"perforamnce optimization",
"langugage model",
"machine reading comprehension",
"knowledge distillation",
"teacher-student"
] | https://openreview.net/pdf?id=BJgsN3R9Km | https://openreview.net/forum?id=BJgsN3R9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkgmkr1NxN",
"S1gAXzz7RQ",
"Byg72bzX07",
"Hyx1FWMQAm",
"ryguS-zXAm",
"HyeuzZGX0X",
"rJxe6mWXCX",
"HkloA--QRQ",
"SyeVNxFq27",
"rkxi2FMq2m",
"B1e1FpE42m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544971482955,
1542820390264,
1542820266807,
1542820214636,
1542820159989,
1542820111570,
1542816695775,
1542816210908,
1541210155903,
1541183923274,
1540799863260
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1482/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1482/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1482/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1482/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The submission proposes a method that combines sparsification and low rank projections to compress a neural network. This is in line with nearly all state-of-the-art methods. The specific combination proposed in this instance are SVD for low-rank and localized group projections (LGP) for sparsity.\\n\\nThe main concern about the paper is the lack of stronger comparison to sota compression techniques. The authors justify their choice in the rebuttal, but ultimately only compare to relatively straightforward baselines. The additional comparison with e.g. Table 6 of the appendix does not give sufficient information to replicate or to know how the reduction in parameters was achieved.\\n\\nThe scores for this paper were borderline, and the reviewers were largely in consensus with their scores and the points raised in the reviews. Given the highly selective nature of ICLR, the overall evaluations and remaining questions about the paper and comparison to baselines indicates that it does not pass the threshold for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Area chair recommendation\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thank you for the constructive feedback. We respond to the 4 comments separately, as OpenReview does not allow us to post a single long response.\"}",
"{\"title\": \"Response to Comment 1\", \"comment\": \"1. Balancing different parts of Knowledge Distillation is trivial.\\n\\nWe find the two techniques ((a) using MSE, KL divergence and label loss together, (b) efficient hyperparameter search) quite helpful when applying KD. They are intuitive but we did not find any related work talking about them. Therefore, we hope to share the good practice with the readers of the paper. If they are discussed in any related work (we are not aware), kindly point out and we will be glad to cite and adjust the claims accordingly.\"}",
"{\"title\": \"Response to Comment 2\", \"comment\": \"2. Missing experimental details.\\n\\nWe added the missing details in the paper (Appendix C)\\n a. Optimization method used: ADAM\\n b. Block Sizes are described in 5.1.1 and 5.1.2\\n c. Block diagonal structures were used in both input and hidden weights.\\n d. We did not alter any other hyper-parameters for the PTB and BiDAF models. The full set of \\n hyper-parameters can be found here (\", \"ptb\": \"https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb\", \"bidaf\": \"https://github.com/allenai/bi-att-flow\\n e. We will also release the codes of our implementations once the paper is accepted so all the \\n results are easily reproducible.\"}",
"{\"title\": \"Response to Comment 3\", \"comment\": \"3. Choice of baseline\\n\\nWe understand the reviewer's concern regarding the choice of baseline. In fact, we considered several compression techniques to identify the strongest baselines to compare with AntMan. Eventually, we chose ISS as the baseline, because for RNNs ISS satisfies two important criteria that we believe are most critical for model compression techniques, i) Good theoretical reduction in compute and memory achievable without sacrificing accuracy, ii) Good computation efficiency of the compressed model to fully exploit the theoretical reduction in computation. Most compression techniques do not satisfy both of these criteria. \\n\\nBased on the reviewer's feedback, we discuss various compression techniques in regards to the aforementioned criteria and how they compare to AntMan. We also provide experimental results when applicable. We have added this discussion and results to the appendix A of the paper.\\n\\na. Quantization: 16 and 8-bit quantization (original 32-bit) can be supported fairly easily on commodity hardware, resulting in a maximum compression of 4x. Even more aggressive quantization (e.g., 2-7 bit) hardly provides additional computational benefit because commodity hardware does not support those in their instruction set, while 1-bit quantization does not offer comparable accuracy. \\n\\nIn comparison, we demonstrate that AntMan achieves up to 100x reduction in computation without loss in accuracy. Moreover, quantization can be applied to AntMan to further reduce the computation, and vice versa, as quantization and AntMan are complementary techniques.\\n\\nb. Pruning: Unstructured sparsity resulting from most pruning techniques is not computationally efficient. For both PTB and BiDAF, our experiments show that pruning can achieve over 10x in computation reduction without even using knowledge distillation. However, due to poor computational efficiency, 10x theoretical reduction translates to less than 4x reduction in wall-clock time, while 10x reduction in Ant-Man translates up to 30x reduction in wall-clock time as demonstrated in the paper. Due to this inefficiency in computation, we do not consider pruning as a competitive baseline to compare against. (Table 2)\\n\\nPruning can also be used to produce structured sparsity such as blocked sparsity. However, structured sparsity requires implementing specialized kernels to take advantage of the computation reduction. Its efficiency greatly depends on the implementation, and in general is far from the theoretical computation reduction.\\n\\nc. Direct Design: Reviewer 2 suggested comparing with AntMan with smaller RNN models trained using the larger teacher model. Our results show that for the same level of compression AntMan achieves much higher accuracy. (Table 3)\\n\\nd. SVD : We constructed compressed models by replacing matrix-multiplication with SVD, and trained the SVD based models using knowledge distillation. Once again, we find that for the same level of compression, AntMan achieves much higher accuracy than SVD. (Table 3)\\n\\ne. Block Tensor Decomposition (BTD): BTD is designed to compress RNNs whose inputs are produced by convolution based models, and contain certain redundancies. AntMan, on the other hand, is generic to all RNN based models. Also, BTD is designed to compress only the input vector and not the hidden vectors. 
This hinders the performance of BTD over a range of RNNs, where the hidden vectors are also large and hence we didn\\u2019t consider BTD as a strong baseline to compare against.\\n\\nIn summary, we considered several compression techniques when choosing our baseline. We found that most techniques either did not achieve good computation reduction with comparable accuracy or were computationally inefficient to execute. ISS is a strong baseline, providing both good theoretical reduction and good computational efficiency, which we chose to compare with thoroughly.\", \"table_2\": \"Theoretical vs Actual Performance gain on PTB using unstructured Pruning vs AntMan\\n\\nTheoretical Compression Actual Performance Gain\\n Pruning AntMan\\n10x 4x 30x\", \"table_3\": \"Test Perplexity for PTB model for various levels of computation compression using different compression techniques\\n\\nCompute Reduction Small RNN Perplexity SVD RNN Perplexity AntMan Perplexity\\n10x 80.06 78.63 74.6\\n50x 99.96 81.4 77.4\\n100x 129.3 88.59 78.6\"}",
"{\"title\": \"Response to Comment 4\", \"comment\": \"4. Unfair comparison with ISS\\n\\nWhile we agree that ISS with KD would be interesting to try out, we do consider our current comparison fair as we are comparing with the full set of techniques described in the ISS paper. The authors of the ISS paper do not claim that their techniques should be used together with Knowledge Distillation. On the contrary, KD is part of our training process. Therefore, we are comparing the entirety of our techniques to the entirety of the ISS techniques.\\n\\nTraining with ISS would also require finding solutions to additional problems that are not described in the ISS paper. For example, ISS induces sparsity using a group lasso term in the loss function. As KD also introduces additional teacher loss to the loss function, it becomes necessary to balance the different loss terms that are targeted at achieving different goals (sparsity vs accuracy). It is not clear how these terms can be balanced systematically. Further, ISS requires tuning various hyper-parameters such as dropout, weight decay, and learning rate. In the presence of additional terms the loss function, it becomes necessary to do a full hyper-parameter sweep to identify the best set of parameters. \\n\\nWe like the reviewer's suggestion on using ISS with KD and are working actively on getting these results, as our end goal is to find the best compression techniques for our production models. However, we consider it to be non-trivial, and well beyond what is described in the ISS paper.\\n\\nWe also appreciate that the reviewer took an in-depth look at the results to point out that in the absence of KD, AntMan does worse than ISS for PTB. While this is true for PTB, we also want to point out that this is not always the case. For BiDAF, AntMan has better F1 score than ISS for the same level of compression in the absence of KD. For example, using LowRank LGP, in the absence of KD, AntMan can achieve average compute reduction of 8x for BiDAF while achieving F1 score over 76. On the other hand for F1 scores less than 76, ISS provides a compression of less than 5x on average. We have added these results in Appendix B.\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thank you for the constructive feedback.\", \"1\": \"Is the training done on CPU or GPUs? How is the training time?\", \"cpu_or_gpu\": \"While we trained the AntMan models on GPU, our tensorflow implementation is architecture agnostic. It can be trained on either.\", \"training_time_and_efficient_implementation\": \"AntMan implementation is in fact extremely simple. Each AntMan based RNN module can be implemented with less than a 100 lines of tensorflow code, and efficiently trained on either CPU or GPU.\\n\\nWe simply replace the matrix multiplications in the RNN with AntMan modules which are themselves composed of smaller matrix-multiplications. As the basic building block is still a matrix-vector multiplication, they can be computed efficiently on both CPU and GPU.\\n\\nPTB takes less than 6 hours to train on a single Titan XP GPU, while the BIDAF model takes about 20-40 hours to train. The training time of the compressed model is comparable as of the original model.\", \"2\": \"Why is AntMan applied just to RNNs?\\n\\nIt is definitely true that AntMan can be applied to any Matrix-vector product in a neural network. We do not claim that AntMan only works for RNN. We focus on RNNs because matrix-vector multiplies are the primary source of performance bottleneck in RNNs. We plan to try out AntMan on other networks as part of our future work.\", \"3\": \"Another baseline is needed for comparison. The reviewer suggests a directly designed small\\nRNN trained using Knowledge distillation.\\n\\n\\tWe understand the reviewer's concern regarding the choice of baseline. In fact, we considered several compression techniques to identify the strongest baselines to compare with AntMan. Eventually, we chose ISS as the baseline, because for RNNs ISS satisfies two important criteria that we believe are most critical for model compression techniques, i) Good theoretical reduction in compute and memory, ii) Good computation efficiency of the compressed model that results in an actual reduction in the wall-clock time. Most compression techniques do not satisfy both of these criteria. \\n\\nHowever, we do think that alternate baseline suggested by the reviewer makes good sense. We have run the suggested experiments and added the results comparing AntMan with directly designed small RNN models in the appendix. To summarize, we find that for a given level of compression, directly designed small RNN model trained with the original teacher model has much higher perplexity than AntMan compressed model trained with the same teacher, demonstrating that AntMan indeed provides new value. The results are presented in the table below. We have also added them to the appendix of our paper.\", \"table_1\": \"Test Perplexity for PTB model for various levels of computation compression\\n\\nCompute Reduction Small RNN Perplexity AntMan Perplexity\\n10x 80.06 74.6\\n50x 99.96 77.4\\n100x 129.3 78.6\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thank you for the constructive feedback.\\n\\n1. Comparing against quantization and pruning\\n\\nWe understand the reviewer's concern on the choice of baseline, specifically the lack of comparison with quantization and pruning. In fact, we considered several compression techniques including quantization and pruning to identify the strongest baselines to compare with AntMan. Eventually, we chose ISS as the baseline, because for RNNs ISS satisfies two important criteria that we believe are most critical for model compression techniques, i) Good theoretical reduction in compute and memory, ii) Good computation efficiency of the compressed model that results in an actual reduction in the wall-clock time. Most model compression techniques lack one of these criteria.\\n\\nWhile Quantization and pruning are effective techniques for model compression, the former provides limited compression, while the latter is not computationally efficient. More specifically, we did not consider quantization and pruning as strong baselines for the following reasons.\\n\\n1a. Quantization: Even without running any experiments, we already know the maximum model compression that is achievable using quantization. Commodity hardware can support computation on up to 8-bit integers. Even more aggressive quantization (e.g., 2-7 bit) hardly provides additional computational benefit because commodity hardware does not support those in their instruction set, while 1-bit quantization does not offer comparable accuracy. So theoretically, the maximum level of compression achievable through quantization alone on commodity hardware is from 32-bit to 8-bit resulting in a maximum compression of 4x, while AntMan can achieve up to 100x with no accuracy loss. \\n\\nFurthermore, quantization and AntMan are complimentary techniques, which can be used together on a model.\\n\\n1b. Pruning: It is difficult to translate the computation reduction from pruning based techniques into actual performance gains.\\n\\nWhile we did not present pruning results in the paper, we did try out techniques on both PTB and BiDAF models to generate random sparsity as well as blocked sparsity. In both cases, we were able to get more than 10x reduction in computation even in the absence of Knowledge distillation. Therefore pruning provides excellent computation reduction. \\n\\nHowever, as discussed in the paper, those theoretical computational reductions cannot be efficiently converted into practical performance gains: Unstructured sparsity resulting from pruning suffers from poor computation efficiency; a 10x theoretical reduction leads to less than 4x improvement in performance while AntMan achieves 30x performance gain with 10x computation reduction for PTB like models.\\n\\nIt is possible to achieve structured sparsity such as block sparsity through pruning. However, structured sparsity requires implementing specialized kernels to take advantage of the computation reduction. \\n\\nDue to these reasons, we felt that Pruning techniques are weaker baselines to compare against. On the contrary, both ISS and AntMan achieve good computation reduction and can be efficiently executed using readily available BLAS libraries such as Intel MKL resulting in superlinear speedups. This is why we chose ISS as a primary baseline. We have added a section in the appendix discussing the limitation of various compression techniques including quantization and pruning in comparison to AntMan.\\n\\n\\n2. 
Why just RNNs?\\n\\nIt is definitely true that AntMan can be applied to any Matrix-vector product in a neural network. We do not claim that AntMan only works for RNNs. We focus on RNNs because matrix-vector multiplies are the primary source of performance bottleneck in RNNs. We plan to try out AntMan on other networks as part of our future work.\\n\\n\\n3. Cost of Finding P and Q?\\n\\nThe cost of finding P and Q can be neglected because we do not explicitly calculate them. Lets use SVD (PQ = A) for clarification. Instead of first training to find the weights of matrix A, and then decomposing into P and Q, we replace A as PQ directly in the neural network, allowing the back-propagation to find the appropriate values of P and Q that minimizes the loss function.\\n\\n4. What about combining SVD and LGP in Table 2? What about LGP Dense?\\n\\tIn Table 2 we presented only those results that yielded maximum computation reduction with minimum loss in accuracy. For PTB, LowRank LGP incurs more accuracy loss than just LGP. For example, for 10x compression, LGP achieves a perplexity of 74.69 while the best LowRank LGP achieves 75.99.\"}",
"{\"title\": \"Good paper, but needs stronger baseline.\", \"review\": \"Model Compression is used to reduce the computational and memory complexity of DL models without significantly affecting accuracy. Existing works focused on pruning and regularization based approaches where as this paper explores structured sparsity on RNNs, using predefined compact structures.\\n\\nThey replace matrix-vector multiplications which is the building computational block part of RNNs, with localized group projections (LGP). where LGP divides the input and output vectors into groups where the elements of the output group is computed as a linear combination of those from the corresponding input group. Moreover, they use a permutation matrix or a dense-square matrix to combine outputs across groups. They also combine LGP with low-rank matrix decomposition in order to further reduce the computations.\", \"strong_points\": \"Paper shows how combining the SVD and LGP can reduce computation. In particular in matrix-vector multiplications Ax, low rank reduces the computation by factorizing A into smaller matrices P and Q, while LGP reduces computation by sparsifying these matrices without changing their dimensions.\\n\\nThe paper discussed that their model target labels alone does not generalize well on test data and they showed teacher-student training helps greatly on retaining accuracy. They use the original uncompressed model as the teacher, and train the compressed model(student) to imitate the output distribution of the teacher, in addition to training on the target labels.\\n\\nPaper is well written and easy to follow. \\n\\nThis paper would be much stronger if it compared against quantization and latest pruning techniques. \\n\\nThis paper replace matrix-vector multiplications with the lowRank-LGP, but they only consider RNN networks. I am wondering how it affects other models given the fact that matrix-vector multiplications is the core of many deep learning models. It is not clear why their approach should only work for RNNs.\\n\\nTable 1 shows the reduction in computation and model size over the original matrix-vector multiplications Ax. I think in this analysis the computation of the those approaches are neglected. For example running the SVD alone on A (n by m matrix) takes O(m^2 n+n^3). That is true that if P and Q are given, then the cost would be n(m+n)/r. However, finding P and Q takes O(m^2 n+n^3) that could be very expensive when matrices are large.\\n\\nTable 2 only shows the LGB-shuffle resuts. What about the combined SVD and LGP? Similarly in Table 4, what is the performance of the LGB-Dense?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good design\", \"review\": \"This paper proposed to use sparse low-ranking compression modules to reduce both computation and memory complexity of RNN models. And the model is trained using knowledge distillation.\", \"clarity\": \"I think Fig1a can be improved. Initially I don't understand how the shuffle part works. It will be more clear if the mx1 vectors have the same length and the two (m x1) labels are in the same height.\", \"originality\": \"The method is quite interesting and should be interesting to many people.\", \"pros\": \"1) The method reduces computation and memory complexity at the same time.\\n2) The result looks impressive.\", \"cons\": \"1) Is the training of AntMan models done on GPU or CPU? How is the training time. It seems efficient implementation of the model on GPU can be challenging. \\n2) It seems the modules can be used to replace any dense matrix in the neural networks. I'm not sure why it is applied on RNN only.\\n3) I think another baseline is needed for comparison, a directly designed small RNN model trained using knowledge distillation. In this way, we can see if the sparse low-rank compression provides new values.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Nice idea but the experiments are quite insufficient\", \"review\": \"This paper presents a network compression method based on block-diagonal sparse structure for RNN. Two kinds of group mixing methods are discussed. Experiments on PTB and SQUAD have shown its superiority over ISS.\\nThe idea present is interesting, and this paper is easy to follow. However, this paper can be improved from the following perspectives.\\n1.\\tThe method of balancing the quantity of different parts in knowledge distillation is trivial. It is quite general trick.\\n2.\\tDetails of experimental setup were unclear. For example, the optimization method used, the block size, and the hyper-parameters were unclear. In addition, it is also unclear how the block diagonal structure was used for the input-to-hidden weight matrix only or all weights. \\n3.\\tIn addition, the proposed method was compared with ISS only. Since there are many methods of compressing RNNs, comparison with other competitors (e.g., those presented in Related work) are necessary. Moreover, more experiments with other tasks in addition to NLP will be better. \\n4.\\tIn Table 2, the comparison with ISS seems be unfair. The proposed methods, i.e., LGP-shuffle was obtained based on the distillation. However, ISS was trained without distillation. From Table 3, when Cmse and Ckl were set to zero, the result was much worse. The reviewer was wondering that how does ISS with distillation perform.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJxsV2R5FQ | Learning sparse relational transition models | [
"Victoria Xia",
"Zi Wang",
"Kelsey Allen",
"Tom Silver",
"Leslie Pack Kaelbling"
] | We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table. | [
"Deictic reference",
"relational model",
"rule-based transition model"
] | https://openreview.net/pdf?id=SJxsV2R5FQ | https://openreview.net/forum?id=SJxsV2R5FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byluqeapam",
"HJlW_1KC3m",
"ByePXT9anX",
"HJgA6WL5nm",
"SyelB6J93m"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1542471824481,
1541472104828,
1541414175214,
1541198278405,
1541172535681
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1481/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1481/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1481/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1481/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Author rebuttal\", \"comment\": \"We thank the reviewers for their constructive feedback and address individual questions below.\", \"ar2\": \"\", \"q\": \"scalability of the approach\", \"a\": \"We emphasize that the rule learning EM approach can be done offline. The online prediction only requires a forward pass on the learned predictor for each applicable rule.\", \"ar1\": \"\"}",
"{\"metareview\": [\"pros:\", \"the paper is well-written and precise\", \"the proposed method is novel\", \"valuable for real-world problems\"], \"cons\": [\"Reviewer 2 expresses some concern about the organization of the paper and over-generality in the exposition\", \"There could be more discussion of scalability\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper and valuable direction\"}",
"{\"title\": \"Learning sparse relational transition models\", \"review\": \"In the manuscript \\\"Learning sparse relational transition models\\\", the authors combine neural nets with relational models, using ideas from linguistics. They apply this to learning the representations of the space in which a simulated robot operates in a reinforcement learning ML paradigm. This work is of interest to the AI community and ICLR is a good venue for this work.\\n\\nThe authors apply their model in particular to a problem in which the simulated robot must rearrange objects in space, and they achieve reasonable accuracy.\", \"major_points\": [\"Organisationally, I thought that the authors could have gotten to the loss function sooner, as much of the development of the theory is lacking in motivation until specific tasks are defined.\", \"The application domain seemed to lose some of the power of the linguistic analysis they were doing to develop the representation through \\\"properties\\\" and \\\"action templates\\\". These definitions were quite general, but it was unclear if more than a few (with few parameters) were used in the actual application, and so it's unclear that so much generality was required by the application.\", \"The authors could have compared with more modern deep learning techniques for reinforcement learning such as DeepMimic (Peng et al 2018).\"], \"minor_points\": [\"Typesetting periods \\\"Pasula et al. and\\\" -> \\\"Pasula et al.\\\\ and\\\"\", \"Page 2: \\\"value of a note\\\" -> \\\"value of a node\\\"\", \"3.1 was hard to follow.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An approach to learn lifted transition rules using neural networks that take advantage of relational structure\", \"review\": \"An approach is proposed that learns transition rules in terms of local contexts. Specifically, transition rules make predictions as a distribution over the set of possible states based on local context of objects. A learning algorithm is described that learns the transition rules by maximizing the conditional likelihood. To learn the rules jointly with selecting the right samples for the transition rule, and EM algorithm is proposed.\\n\\nThe paper is well-written. The contribution seems significant considering that relational structure is integrated with neural networks in a systematic manner. Though written from the perspective of learning transition rules for tasks such as robotic manipulation, I think similar ideas can be for general tasks that can benefit from both relational structure and neural network representation. Learning lifted rules has also been studied in domains such as ILP and Statistical Relational Learning (Getoor and Taskar 07)(lifted rules with uncertainty). I think including their perspective and commenting on their relationship with the proposed work will be useful.\\n\\nExperiments are performed on a robotic manipulation task involving pushing a stack of blocks in a cluttered environment. A method that does not take object relations into account and simply predicts the state transition is used as baseline for comparison. The proposed approach shows the benefits of exploiting the structure between objects. There is not too much discussion on scalability. Does the propose method scale up for learning transition rules in real tasks? Are there any tradeoffs involved, etc. would be good to know.\\nIn summary, this seems to be a well-written and novel contribution.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"L EARNING SPARSE RELATIONAL TRANSITION MODELS\", \"review\": \"\", \"summary\": \"This work is about learning state-transition models in complex domains represented as sets of objects, their properties, ``\\\"deictic\\\" reference functions between sets of objects, and possible actions (or action templates). A parametric model for the actions is assumed, and these parameters act on a neural net that learns the transition model (probabilistic rule) from the current state to the next one. It is basically this nonlinear transition model implemented by a network which makes this work different from previous models described in the literature. The relational transition model proposed is sparse, based on the assumption that actions have only ``local effects on related objects. The prediction model itself is basically a Gaussian distribution whose mean and variances are represented by neural nets. For jointly learning multiple rules, a clustering strategy is presented which assigns experience samples to transition rules. The method is applied to simulated data in the context of predicting pushing stacks of blocks on a cluttered table top.\", \"evaluation\": \"The type of problems addressed in this paper is challenging and highly relevant for solving problems in the ``real'' world. Although the method proposed is in some sense a direct generalization of the work in [Pasula et al.], it still contains many novel and interesting aspects.Any single part of the model (like the use of Gaussians parametrized by functions implemented via neural nets) is somehow ``standard in deep latent variable models, but in complex real-world rule-learning problems the whole system presented defines certainly a big improvement over the state-of-the-art, which in my opinion has the potential to indeed advance this field of research.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryljV2A5KX | IB-GAN: Disentangled Representation Learning with Information Bottleneck GAN | [
"Insu Jeon",
"Wonkwang Lee",
"Gunhee Kim"
] | We present a novel architecture of GAN for a disentangled representation learning. The new model architecture is inspired by Information Bottleneck (IB) theory thereby named IB-GAN. IB-GAN objective is similar to that of InfoGAN but has a crucial difference; a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in disentangled and interpretable manner. To facilitate the optimization of IB-GAN in practice, a new variational upper-bound is derived. With experiments on CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than those by β-VAEs. Moreover, IB-GAN achieves much higher disentanglement metrics score than β-VAEs or InfoGAN on the dSprites dataset. | [
"Unsupervised disentangled representation learning",
"GAN",
"Information Bottleneck",
"Variational Inference"
] | https://openreview.net/pdf?id=ryljV2A5KX | https://openreview.net/forum?id=ryljV2A5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJxRGkjpgV",
"B1gSkApel4",
"HyxtBKBJgV",
"rJlUMwlC1N",
"HkgL54gpRX",
"Bke74c_cA7",
"rkgsWcu9AX",
"HJeR0Du50X",
"SygaMvd5AQ",
"H1xIo_iea7",
"S1lLJvolTX",
"SklaxQY0nm"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545608982133,
1544768989167,
1544669505309,
1544582925906,
1543468174432,
1543305771500,
1543305730599,
1543305173985,
1543304981469,
1541613726443,
1541613278424,
1541473013212
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1480/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1480/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1480/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Dear ArearChair1\", \"comment\": \"We appreciate AreaChair1 and every reviewers for pointing out weak point as well as strong point of our paper. We will keep your advice in mind and try to update or change some of the settings in our experiments to make out our paper much valuable. Thank you.\"}",
"{\"metareview\": \"Strengths: This paper introduces a clever construction to build a more principled disentanglement objective for GANs than the InfoGAN. The paper is relatively clearly written. This method provides the possibility of combining the merits of GANs with the useful information-theoretic quantities that can be used to regularize VAEs.\", \"weaknesses\": \"The quantitative experiments are based entirely around the toy dSprites dataset, on which they perform comparably to other methods. Additionally, the qualitative results look pretty bad (in my subjective opinion). They may still be better than a naive VAE, but the authors could have demonstrated the ability of their model by comparing their models against other models both qualitatively and quantitatively on problems hard enough to make the VAEs fail.\", \"points_of_contention\": \"The quantitative baselines are taken from another paper which did zero hyperparameter search. However the authors provided an updated results table based on numbers from other papers in a comment.\", \"consensus\": \"Everyone agreed that the idea was good and the experiments were lacking. Some of the comments about experiments were addressed in the updated version but not all.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Nice idea, experiments lacking\"}",
"{\"title\": \"Dear AreaChair1\", \"comment\": \"We understand your concern that the hyperparameters for the baselines are not thoroughly explored in [1]. Per your suggestion, we will modify the final draft as follows.\\n(1) We will update the baseline scores in Table 1 with the scores in the original papers [2,3,4], including Kim & Mnih\\u2019s model. We believe these values are obtained based on their optimal settings on the d-sprite dataset. \\n(2) Due to delicacy of hyperparameter setting, we will tone down to \\u201cour method is comparable\\u201d instead of \\u201cour method is better than those models.\\u201d\\n(3) We will make public the source code of IB-GAN for fair comparison in the following research. \\n\\nPlease remind that our model is a new GAN-based model that inherits the merit of GANs but also achieves comparable results to other existing VAE-based models [1,2,3,4]. We also presented a new way of constraining the generative mutual information and discovered an interesting connection between GAN and Information Bottleneck (or rate distortion) theory. \\n\\nFor your information, we quickly summarize the best scores (i.e. disentanglement metric values of [2]) from the original papers on the dSprites dataset.\\n\\n \\u2502 [1] [2] [3] [4] \\n\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u256a\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\u2550\\n VAE \\u2502 0.63(0.06), beta=1 - - - \\n Beta-VAE \\u2502 0.63(0.10), beta=4 0.72*, beta=4 0.78*, beta=2 0.695*, beta=16 \\n FactorVAE \\u2502 - 0.83*, gamma=35 0.73*, gamma=5 0.79*, gamma=30 \\n Beta-TCVAE\\u2502 0.62(0.07), beta=4 - - 0.78*, beta=4 \\n HFVAE \\u2502 0.63(0.08), beta=4 - - - \\n CHyVAE \\u2502 - - 0.77*, v=200 - \\n \\u2502 Table 3 Figure 4 Figure 4 Figure 8 \\n(* denotes the best average scores deduced from the figures in the original papers, where the best average scores are not literally exhibited.) \\n\\nOur model achieves 0.80 \\u00b1 0.07 on average with 32 trials at beta=0.14. According to the table, our IB-GAN shows better scores than most of the reported VAE methods except Kim & Mnih\\u2019s model [2]: 0.83. Thus, we can conclude that our IB-GAN is comparable with the state-of-the-art models and the small standard deviation (0.07) implicates that our training is stable. \\n\\nThank you for great comments!\\nFrom authors. \\n\\n[1] Esmaeili et al., Structured Disentangled Representations, arXiv:1804.02086, 2018\\n[2] Kim & Mnih, Disentangling by Factorising, ICML, 2018\\n[3] Ansari & Soh, Hyperprior Induced Unsupervised Disentanglement of Latent Representations, arXiv:1809.04497, 2018\\n[4] Locatello et al., Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, arXiv:1811.12359, 2018\"}",
"{\"title\": \"Question about the baseline results\", \"comment\": \"I'm worried that the paper the authors copied their baseline quantitative results from didn't conduct a thorough (or any?) hyperparameter tuning in its experiments.\\n\\nIn their rebuttal, the authors state that \\\"All the beta-VAE based baselines [3,4,5,6] in Section 4.1 are the state-of-the-art models in disentanglement representation learning.\\\" However, the baseline results are all taken from \\\"Structured Disentangled Representations\\\" [https://arxiv.org/pdf/1804.02086.pdf], which doesn't contain any discussion of hyperparameter tuning. The baselines are also missing the method of Kim & Mnih.\\n\\nIt seems strange that the follow-ups to the beta-VAE were measured to have mostly identical or worse performance. It also seems strange that they all use identical values of beta, when for instance the beta-TCVAE is weighting a different term and Chen et al. reported optimal beta values that were an order of magnitude different from the beta-VAE.\"}",
"{\"title\": \"To Reviewer3.\", \"comment\": \"We thank Reviewer3 for your encouraging and constructive comments. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. The tightness of the bounds\\n========================================\\nWe have updated Figure 4. in Appendix B. to show the changes of the upper-bound and the lower-bound of MI over the training iterations. The gap between the upper and lower bound of MI over the different beta is summarized in Figure 5.(c). Both the upper and lower-bound of MI tend to decrease as the beta increases, which demonstrates that the upper-bound (with the beta) can constrain the lower-bound well.\\n\\n2. The effect of beta (or gamma) on lower/upper bounds and disentangle metric score\\n========================================\\nThe effects of various betas on the variational lower/upper bounds of IB-GAN objective and the disentanglement score are presented in Figure 5. When beta = 0.212, IB-GAN achieved the best disentanglement score of 0.91.\\n\\n3. Adding one or two dataset\\n========================================\\nWe believe it would be meaningful if we could present our model on additional MNIST/FMNIST dataset with the categorical distributions. However, due to the time constraints, we have focused on the experiment on dSprites to see the effect of beta (and of the upper-bound). These experiments are reported in the Appendix. \\n\\n4. Why lambda=150, beta=1? Why \\u201clambda > beta\\u201d implies maximizing I(X,Z)?\\n========================================\\nAccording to the rate-distortion theory, the beta is the Lagrange multiplier of an optimization problem, and the ratio between lambda and beta can determine its theoretical optimality [1]. Therefore, we fixed the lambda to 1 in the newly updated paper (in fact we moved lambda to the left term of Eq.(6). And the effect of beta in the range of [0, 1.2] on dSprites dataset is summarized in Appendix B. \\n\\nTo answer the question of why \\u201clambda > beta implies maximizing I(X,Z)?\\u201d \\nIn Figure 5.(b), if beta >= 0.7 and lambda =1 then the upper bound of MI shrink down closer to the zero. In fact, the theoretical optimality of the IB-GAN objective can be decided by the ratio between lambda and beta (or just by the beta when lambda is fixe to 1) [1]. \\n\\n5. What if lambda = beta. Is this the same as not using I(X, Z)?\\n========================================\\nYes. If we set \\u201cbeta = lambda\\u201d, IB-GAN\\u2019s behaviors are similar to those of the normal GAN (although our model is slightly different due to the introduction of stochastic layers before the generator). The convergence behavior when lambda = beta = 1 is exhibited in Figure 4.(d): both MI bounds are very close to zero.\"}",
"{\"title\": \"To Reviewer1.\", \"comment\": \"We are sincerely grateful for Reviewer1\\u2019s thoughtful review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. Comparison with GILBO paper\\n========================================\\nThe lower-bound of I_g(X,Z) in our paper is described in GILBO paper as a \\u201cmetric\\u201d for the learned capacity of any generative models. Specifically, they discovered the correlation between I_g(X,Z) and FID score, implying the higher variational lower bound of MI could indicate that the learned generator is producing more distinctive samples without mode collapses. Thank you for the helpful reference which deepens the understanding of our work. We have referenced this work as you recommended. Still, the novelty of this work lies in the variational upper-bound on I_g(X,Z) and its effect on disentangled representation learning.\\n\\n2. VAEs do not have to utilize Gaussian observation models and can use powerful autoregressive decoders (e.g. arxiv:1611.02731).\\n========================================\\nCould you checkout that why \\u201cPowerful autoregressive decoders are not good for disentanglement learning?\\u201d (https://towardsdatascience.com/with-great-power-comes-poor-latent-codes-representation-learning-in-vaes-pt-2-57403690e92b )\\n\\nRecently, as you mentioned, Alemi et al. [1] demonstrate the possibility of achieving the minimum rate with small distortion as well by adopting a complex auto-regressive decoder in the beta-VAE model by setting beta < 1. But, their experiment is conducted on a relative simple dataset (e.g. MNIST) and not designed for measuring the disentanglement scores explicitly.\\n\\nIn contrast, we achieves the best disentanglement score as exhibited in Section 4.1. IB-GAN also generates images of high quality without any autoregressive decoder. Hence, we believe the visual quality of the generated image could be improved further if a stronger decoder is adapted.\\n\\nWe would like to emphasize the difference between beta-VAE and IB-GAN. The beta-VAE learns to directly generate images from the code z. Instead, the generator in IB-GAN learns to minimize the rate (or the divergence between G(z) and data distribution p(x)) while minimizing the reconstruction error of noise z from its coding x. Given a fixed beta-VAE model, a large beta could consequence in a large distortion. In IB-GAN, however, the generator can learn to generate images of high quality even though the generator is statistically independent of the representation encoder. More discussion is presented in Section 3.3 and Appendix A.\\n\\n3. What prevents e(r|z) from being a near identity in this situation, for which there could be a large generative mutual information?\\n========================================\\nThank you for correction our confusion on this. Regardless of its dimension, as the KL(e(r|z)||m(r)) is close to zero, r contains no information about z [1]. Hence, as beta becomes larger IB-GAN reduces to normal GAN. If beta = 0, there is no constraint on the representation r, then IB-GAN reduces to InfoGAN (although bottleneck architecture remains). In Figure 6. some independent dimensions of r vectors do not affect the changes in the generated image, while their KL independence divergence is closed to zero in Figure 1.(b). More discussion can be found in Section 3.3 and Appendix A in our newly updated paper.\"}",
"{\"title\": \"To Reviewer1.\", \"comment\": \"(cont'd)\\n\\n4. Batch normalization\\n========================================\\nApparently, we agree that the batch normalization could affect the output of the deep neural network. \\nAnother hypothesis is that the batch normalization in a deterministic model makes the capacity of the model infinite since it introduces learnable scaling parameters that are not penalized [2].\\n\\nHowever, we think the introduction of MI constraint term in our paper gives us way to limit the model capacity (which offsets the effect of batch normalization giving a infinite capacity). In our study, the bottleneck representation encoder e(r|z) is a stochastic model, which means that the deep learning model here is a block box to estimate the mu(z) and var(z) of the distribution of e(r|z) (or the mu(x) and var(x) of the variational reconstructor q(z|x)). Hence, the bias from the batch normalization in IB-GAN architecture converges into the estimation of these parameters. Also, the capacity of both the encoder e(r|z) and the variational reconstructor q(z|x) is bounded by KL(e(r|z)||m(r)) as well. Lastly, other VAE baselines [3,4] in Section 4.1 used batch normalization in their model. \\n\\nWe have conducted a mini test on our model to check if we can see this behavior a bit. We changed the mean of the input Gaussian noise p(z) to some constant C (i.e. 100) while training IB-GAN. It seems that the convergence behavior of IB-GAN objective stumbles a while but very quickly becomes stable, meaning the variational reconstruction q(z|x) starts to predict the input source z well, and overall model capacity was constrained due to the upper-bound. Therefore, we think that our model is not too sensitive to the potential bias effect of batch normalization. However, we believe more investigation on this issue will be very important. We welcome further informations or discussions on this issue. \\n\\n5. The tightness of upper and lower bound\\n========================================\\nWe have updated Figure 4. in Appendix B. to show the changes of the upper-bound and the lower-bound of MI over the training iterations. The gap between the upper and lower bound of MI over the different beta is summarized in Figure 5.(c). Both the upper and lower-bound of MI tend to decrease as the beta increases, which indicates that the upper-bound with beta can constrain the lower-bound well. The gap between the lower-bound and upper-bound also decreases as the beta increases.\\n\\n6. The effect of lambda and beta\\n========================================\\nThe effects of various betas on the variational lower/upper bounds of IB-GAN objective and the disentanglement score are presented in Figure 5. When beta = 0.212, IB-GAN achieved the best disentanglement score of 0.91 in our experiments.\\n\\n8. The effect of constraining the MI between X and Z?\\n========================================\\nIn IB-GAN The generator can learn disentangled representation by constraining the MI between X and Z. The effect of beta on the disentanglement representation learning is updated in Section 3.3.\\n\\nWe have also tried to describe the disentangling promoting behavior of IB-GAN in terms of the rate-distortion theory [1]. To summarize it, maximizing the lower bound I(z, G(r(z))) in IB-GAN increases I(r, G(r)) since I(z, G(r(z))) <= I(r, G(r)) due to its causal relationship: z->r->x->z\\u2019. 
Maximizing I(r, G(r)) promotes the statistical dependency between r and G(r), while r and x have to be efficient coding which minimizes their excess rate. Therefore, KL divergence with factored Gaussian on r promotes the statistical independence of representation encoder, and statistically distinctive factors or features among the image distribution p(x) must be coordinated with the independent factor of r to maximize the statistical dependency between the source codings: I(z, G(r(z))). For more explanation, please refer to Section 3.3 of updated paper. \\n\\nThank you again for the thoughtful insights and comments.\\n[1] A. A. Alemi et al. Fixing a Broken ELBO. arXiv:1711.00464 2017.\\n[2] A Rate-Distortion Theory of Adversarial Examples, under-review, 2019\\n[3] Chen et al., Isolating Sources of Disentanglement in Variational Autoencoders, NIPS 2018.\\n[4] Esmaeili et al., Structured Disentangled Representations, arXiv:1804.02086, 2018\"}",
"{\"title\": \"To Reviewer2.\", \"comment\": \"We thank Reviewer2 for positive and constructive reviews. Below, we respond to each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n1. The effect of beta and gamma on the degree of disentanglement\\n========================================\\nThe effects of various betas (i.e. [0,1.2] ) with fixed gamma on the disentanglement score is presented in Figure 5. in Appendix B. When, beta = 0.212, IB-GAN achieved a peak disentanglement score of 0.91.\\n\\n2. Why IB creates the success in disentangling?\\n========================================\\nWe have updated our new founding of why our Generator can learn disentangled representation in Section 3.3. Additionally, the effect of beta on the disentanglement metric scores is presented in Figure 5.(d) in Appendix B. We tried to explain the disentangling promoting behavior of IB-GAN with the concepts from the rate-distortion theory in [2].\\n\\nTo summarize it, maximizing the lower bound I(z, G(r(z))) in IB-GAN increases I(r, G(r)) since I(z, G(r(z))) <= I(r, G(r)) due to its causal relationship: z->r->x->z\\u2019. Maximizing I(r, G(r)) promotes the statistical dependency between r and G(r), while r and x have to be efficient coding which minimizes their excess rate. Therefore, KL divergence with factored Gaussian on r promotes the statistical independence of representation encoder, and statistically distinctive factors or features among the image distribution p(x) must be coordinated with the independent factor of r to maximize the statistical dependency between the source codings: I(z, G(r(z))). For more explanation, please refer to Section 3.3 of updated paper.\\n\\n3. Comparison with stronger beta-VAE variants\\n========================================\\nAll the beta-VAE based baselines [3,4,5,6] in Section 4.1 are the state-of-the-art models in disentanglement representation learning.\\n\\nRecently, Alemi et al. [2] demonstrate the possibility of achieving the minimum rate with small distortion by adopting a complex auto-regressive decoder in the beta-VAE model by setting beta < 1. However, their experiment is conducted on a relatively simple dataset (e.g. MNIST) and not designed for measuring the disentanglement scores explicitly. \\n\\nIn contrast, IB-GAN achieved the best disentanglement score, which is summarized in Table 1 in Section 4.1. Moreover, the quality of generated image samples is visually-pleasing without using any complex auto-regressive decoder in IB-GAN. We believe that if we use any stronger autoregressive decoder in our model, the visual quality could be further improved.\\n\\nWe would like to emphasize the difference between beta-VAE and IB-GAN. The beta-VAE learns to directly generate images from the code z. Instead, the generator in IB-GAN learns to minimize the rate (or the divergence between G(z) and data distribution p(x)) while minimizing the reconstruction error of noise z from its coding x. Given a fixed beta-VAE model, a large beta could consequence in a large distortion. In IB-GAN, however, the generator can learn to generate images of high quality even though the generator is statistically independent of the representation encoder. More discussion in the context of rate-distortion theory [2] is presented in Section 3.3 and Appendix A.\\n\\nThank you again for the thoughtful insights and comments.\\n[1] C. P. Burgess et al. Understanding disentangling in \\u03b2-VAE. NIPSw 2017.\\n[2] A. A. Alemi et al. 
Fixing a Broken ELBO. arXiv:1711.00464 2017.\\n[3] Higgins et al., beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, ICLR 2017\\n[4] Kim & Mnih, Disentangling by Factorising, ICML 2018\\n[5] Chen et al., Isolating Sources of Disentanglement in Variational Autoencoders, NIPS 2018.\\n[6] Esmaeili et al., Structured Disentangled Representations, arXiv:1804.02086, 2018\"}",
"{\"title\": \"To Reviewer3.\", \"comment\": \"(cont'd)\\n\\n6. Why maximizing I(X,Z) includes a disentangled representation?\\n========================================\\nWe have updated our new founding of why our Generator can learn disentangled representation in Section 3.3. Additionally, the effect of beta on the disentanglement metric scores is presented in Figure 5.(d) in Appendix B. We tried to explain the disentangling promoting behavior of IB-GAN with the concepts from the rate-distortion theory in [1]. \\n\\nTo summarize it, maximizing the lower bound I(z, G(r(z))) in IB-GAN increases I(r, G(r)) since I(z, G(r(z))) <= I(r, G(r)) due to its causal relationship: z->r->x->z\\u2019. Maximizing I(r, G(r)) promotes the statistical dependency between r and G(r), while r and x have to be efficient coding which minimizes their excess rate. Therefore, KL divergence with factored Gaussian on r promotes the statistical independence of representation encoder, and statistically distinctive factors or features among the image distribution p(x) must be coordinated with the independent factor of r to maximize the statistical dependency between the source codings: I(z, G(r(z))). For more explanation, please refer to Section 3.3 of updated paper.\\n\\n7. Is it the case that a more disentangled generator is also capable of producing more distinct samples? \\n========================================\\nRecently, the lower bound I_g(z, G(z)), named GILBO, is also proposed as a \\u201cuniversal measure\\u201d for the learned capacity of any given generative models in the paper [3]. They discovered some correlation between the variational lower-bound and FID score, implying the higher variational lower-bound of generative MI indicates the generator produces more distinctive samples without mode collapses or collisions. \\n\\nAlthough we did not use the 'exp' function in the equation you suggested, instead, we collected Ig[x_i, z_i] with 20000 samples and averaged it. We believe the lower bound scores in Figure 5.(b) can be seen as the generalization ability of the generator according to [3]. However, the lower bound score is not linearly correlated with FID in Figure 1.(c) in [3]. In fact, the FID score \\\"can\\\" reach good scores at the proper lower bound.\\n\\nIn our studies, we observed at the lower bound scores with beta around [0.141-0.282] in Figure 5, the good disentanglement metric scores are achieved. Similar to GILBO, we could not find a linear relationship between the disentanglement and generalization ability of Generator. Our hypothesis is if we compress the generator too much with the large beta to get disentangled effect, the generalization ability decreases. Therefore, we believe an optimal balance point exists for both good disentanglement and generalization ability. And, we can assume that maximum disentanglement performance a model can achieve depends on its architecture (or its capacity). \\n\\nThank you again for the thoughtful insights and comments.\\n[1] A. A. Alemi et al. Fixing a Broken ELBO. arXiv:1711.00464 2017.\\n[2] A. A. Alemi et al. Deep Variational Information Bottleneck. ICLR 2017.\\n[3] A, A. Alemi et al. GILBO: One Metric to Measure Them All. ICLR 2018.\"}",
"{\"title\": \"A paper with several interesting ideas; Experimental evaluation could do with extra work\", \"review\": \"(Apologies for this belated review)\\n\\nSummary \\n\\nThe authors propose a GAN-based approach to learning disentangled representations that combines elements InfoGAN with recent Information-Bottleneck (IB) perspectives on variational auto-encoders. In addition to minimizing the normal GAN loss, the authors propose to maximize a lower bound on the mutual information under the generative model, whilst minimizing an upper bound\\n\\n\\tIg[X,Z] = E_p(X,Z)[log p(X,Z) - log p(X) - log p(Z)]\\n\\nIn order to optimize this objective whilst retaining the likelihood-free property of GANs, the authors propose to define a generative model with an intermediate representation r, which allows them to define a likelihood-free generator x = G(r) whilst defining a parametric distribution p(r,z) = e\\u03c8(r|z) p(z). This enables the authors to define model architectures that jointly train an encoder q\\u03c6(z | x) and a GAN-style generator using an objective that incorporates inductive biases for learning disentangled representations\\n\\nQuantitative evaluation is performed on d-Sprites (where metrics for disentanglement are evaluated), and qualitative results are shown for Celeb-A and the Chairs dataset. \\n\\n\\nComments\\n\\nI think this is a paper that presents several interesting ideas. Integrating IB-based ideas into the InfoGAN framework is a useful contribution. Moreover, I think that the way they authors integrate a likelihood-free generative model with an inference model is something of a contribution in its own right. I particularly like the idea of the intermediate representation. \\n\\nHaving done some work in this space, I would say that the results on d-Sprites are quite good. Aside from the numerical scores in the table aside, the latent traversals in Figure 2 show a good degree of disentanglement. There is a reason that many of the recent papers don\\u2019t show these traversals; it turns out to quite difficult to disentangle shape from the other variables, and even rotation tends to correlate with some of the other latents in many cases.\\n\\nThat said, I would say that the experiments could do with some additional work. I would like to see some discussion of how tight/loose the upper and lower bounds are (some convergence plots would be helpful in this regard). I would also like to see some experiments that evaluate different choices for \\u03bb and \\u03b2 (along with some discussion of how these values were chosen \\u2013 see below). Finally, could the authors find one or two additional datasets? I generally find it difficult to evaluate results on Celeb-A (other than the qualitative evaluation \\u201cthe images look sharper than those produced by VAEs\\u201d). Even something like MNIST/FMNIST would be OK for purposes of evaluating inclusion of Discrete/Concrete variables and/or extrapolation to unseen combinations of factors (as in the Esmaeli et al. paper). \\n\\nOverall, I would say that this is a potentially strong paper, but that experimental evaluation does need work. I\\u2019d be willing to look at an updated version of the paper and adjust my score accordingly if the authors can provide one. \\n\\nQuestions \\n\\n- Could the authors comment on why they need to set \\u03bb=150, \\u03b2=1? 
On a quick read, it is not immediately obvious to me why \\u03bb > \\u03b2 implies that we will maximize Ig[X,Z] is this simply because maximizing the lower bound will win out over minimizing the upper bound? When we set \\u03bb=\\u03b2, since this would yield a zero loss when the bounds are tight, is that correct? In this case we presumably not necessarily expect to maximize Ig[X,Z] w.r.t. \\u03b8?\\n\\n- What is perhaps missing from this paper is a discussion of *why* maximizing Ig[X,Z] induces a disentangled representation. One hypothesis could be that be that, for a given number of uncorrelated latent variables, a disentangled generator is simply more efficient in terms of the number of distinct samples X it can construct. In this context, it would be interesting if the the authors could report their Ig[X,Z] bounds. In particular, could they compute\\n\\n\\texp[Ig[X,Z]] / N\\n \\nIntuitively, this number indicates how many examples the generator can produce relative to the number of training examples N. Is it the case that a more disentangled generator is also capable of producing more distinct samples? \\n\\n\\nMinor \\n\\n- This is a bit of a pet peeve of mine: Is it really true that GANs lean a representation? A representation is generally a mapping from data to features. A GAN is a mapping from features to data. The authors in this paper do train an encoder to invert the generative model, which learns representation, and certainly a disentangled GAN arguably is useful for more controllable forms of generation in its own right, it just seems that we should not conflate the two. \\n\\n- Fix: General consent -> General consensus\\n- Fix: good representation -> good representations\\n- Fix: such disentangled representation -> disentangled representations\\n- Fix: (?Higgins et al., 2017b; 2018)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Elegant approach, well presented, more experimental validation of the core intuition would have been nice\", \"review\": \"This work addresses the problem of unsupervised disentangled representation learning, and leverages insights and intuitions about utilizing an information bottleneck (IB) approach to encourage disentangling. In particular, building upon insights of how beta-VAE can be understood (and improved upon) by understanding it in terms of IB, the authors propose to modify GANs to include an IB, so as to leverage similar disentangling benefits. The promise is that this approach could utilise the strengths of the GAN framework over VAEs, such as the often sharper reconstructions and the ease of including discrete latents in addition to continuous ones.\\n\\nTo implement their proposal in practice, the authors introduce a neat trick to control an upper bound for the additional mutual information term that the new approach -- termed IB-GAN -- requires. This adds just one layer of complexity to the GAN setup via adding a stochastic representation model between the latent representation and the generator, and has elegant limiting cases that recover both the standard GAN and the InfoGAN approach.\\n\\nThe paper is clearly presented and the intuitive arguments can be readily followed, even though the resulting loss formulation is a bit tricky to justify without expanding upon the underlying motivation. \\n\\nThe approach is tested on three standard datasets and two different metrics that have previously been used for benchmarking unsupervised disentangling, and the results look convincing enough to demonstrate the improvement over existing GAN approaches. \\n\\nStill, the experimental section is arguably the weakest part of the paper, as there are now stronger beta-VAE variants as baselines available, so I am taking the numbers for VAE-based methods in the quantitative assessment with a grain of salt. More importantly, though, as the motivation of the work is that introducing an information bottleneck is what creates the success in disentangling, it would have been nice to see this effect more clearly broken out in experiments directly demonstrating the effect of beta and gamma on the degree of disentanglement. \\n\\nOverall, though, this is an interesting contribution to the rapidly developing subfield of unsupervised disentangling, and I would expect the introduction of IB ideas into GAN setups to drive further advances in representation learning techniques. \\n\\n===\", \"update\": \"I am happy with the clarifications and the changes to the manuscript, and have increased my rating accordingly from 6 to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice idea. Experiments lacking.\", \"review\": \"Please have your submission proof-read for English style and grammar issues.\\n\\nThis paper introduces the IB-GAN and information bottleneck inspired GAN variant. The ordinary GAN objective is modified to include a variational lower and upper bound on the generative mutual information. This should allow one to control the amount of information in the representation of the GAN, in contrast to the InfoGAN which simply maximizes the mutual information. While lower bounding the generative mutual information is straight forward and only requires a variational inverting network (some q(z|x)) upper bounding the generative mutual information is trickier. Here the paper offers a very nice solution. Formally they realize a modified Markov chain Z -> R -> X where R is made explicitly stochastic. By Data Processing Inequality I(Z;X) <= I(Z;R) and with a tractable e(r|z), only a variational marginal m(r) is needed to obtain a variational upper bound on the mutual information in the GAN. This then gives a GAN objective that looks like the information bottleneck interpretation of the VAE.\\n\\nWhile the idea for obtaining a variational upper bound on the generative mutual information is novel and clever, the experiments in the paper are lacking.\\n\\nIt should be noted that the variational lower bound on the generative mutual information has already been introduced as the GILBO (generative information lower bound) (arxiv:1802.04874) \\n\\nI take issue with the discussion in the \\\"Reconstruction of input noise z\\\" section. It is claimed that beta-VAE \\\"applies the MSE loss to x and uses beta > 1\\\". VAEs do not have to utilize gaussian observation models and can use powerful autoregressive decoders (e.g. arxiv:1611.02731). \\n\\nLater down the page it is claimed that when m(r) and p(z) have the same distributional form and dimensionality the R will become independent of Z. I do not believe this. What prevents e(r|z) from being a near identity in this situation, for which there could be a large generative mutual information?\\n\\nThe experiments used batch normalization, itself a stochastic procedure that would make their tractable densities incorrect. There is no discussion of the effect batch norm would have on their bounds.\\n\\nMy principal complaint is the general lack of experimental evidence. The paper suggests what appears to be a nice framework and simple procedure for controlling the information flow in a GAN. To do so they introduce two Lagrange multipliers, beta and lambda in their notation (Equation 11) but there are no experiments showing the effect of these two hyperparameters. They have what should be both an upper and lower bound on the same quantity, the generative mutual information, but these are not shown separately for any of their experiments. There is no discussion of how tight the bounds are and if they approach each other. There is no discussion of how the beta and lambda might influence them either individually or jointly. There is no evidence to demonstrate the effect of constraining the mutual information between X and Z.\\n\\nIn short, the paper offers what appears to be a very clever idea, but does very little to experimentally explore its effects.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkxoNnC5FQ | SPIGAN: Privileged Adversarial Learning from Simulation | [
"Kuan-Hui Lee",
"German Ros",
"Jie Li",
"Adrien Gaidon"
] | Deep Learning for Computer Vision depends mainly on the source of supervision. Photo-realistic simulators can generate large-scale automatically labeled synthetic data, but introduce a domain gap negatively impacting performance. We propose a new unsupervised domain adaptation algorithm, called SPIGAN, relying on Simulator Privileged Information (PI) and Generative Adversarial Networks (GAN). We use internal data from the simulator as PI during the training of a target task network. We experimentally evaluate our approach on semantic segmentation. We train the networks on real-world Cityscapes and Vistas datasets, using only unlabeled real-world images and synthetic labeled data with z-buffer (depth) PI from the SYNTHIA dataset. Our method improves over no adaptation and state-of-the-art unsupervised domain adaptation techniques. | [
"domain adaptation",
"GAN",
"semantic segmentation",
"simulation",
"privileged information"
] | https://openreview.net/pdf?id=rkxoNnC5FQ | https://openreview.net/forum?id=rkxoNnC5FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gAoe9TyN",
"rkgid_JICQ",
"ByeIUukIAQ",
"SyglnrZ_6m",
"rylJefZd67",
"rJxTZW-dTm",
"ByeHihmQ67",
"rJe47he8hX",
"ryedXdc4hm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544556710142,
1543006322619,
1543006285699,
1542096295754,
1542095335300,
1542095108912,
1541778589412,
1540914203930,
1540823072336
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1479/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1479/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1479/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1479/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an unsupervised domain adaptation solution applied for semantic segmentation from simulated to real world driving scenes. The main contribution consists of introducing an auxiliary loss based on depth information from the simulator. All reviewers agree that the solution offers a new idea and contribution to the adaptation literature. The ablations provided effectively address the concern that the privileged information does in fact aid in transfer. The additional ablation on the perceptual loss done during rebuttal is also valuable and should be included in the final version.\\n\\nThe work would benefit from application of the method across other sim2real dataset tasks so as to be compared to the recent approaches mentioned by the reviewers, but the current evaluation is sufficient to demonstrate the effectiveness of the approach over baseline solutions.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A new approach to learning from simulated data with privileged information\"}",
"{\"title\": \"A revised version with additional experiments (ablation study) is available.\", \"comment\": \"We have added the experimental results for SPIGAN-no-PI without perceptual loss, named SPIGAN-base in our latest revised version. As shown in Table 2, we observe that SPIGAN-no-PI (with perceptual loss) outperforms SPIGAN-base (without perceptual loss) in both datasets. This implies that perceptual regularization indeed helps stabilizing the adaptation during training, as suggested in Shrivastava et al., (2016) and Bousmalis et al., (2017). Furthermore, SPIGAN-base only improves over the source model by +1.6% mIoU on Cityscapes and has slightly worse negative transfer than SPIGAN-no-PI on Vistas, whereas the full SPIGAN significantly improves performance as previously discussed. This provides further evidence to back our claim: regularization, including our PI-based term, is indeed a key component to improve generalization performance in domain adaptation.\"}",
"{\"title\": \"A revised version with additional experiments (ablation study) is available.\", \"comment\": \"We have added the experimental results for SPIGAN-no-PI without perceptual loss, named SPIGAN-base in our latest revised version. As shown in Table 2, we observe that SPIGAN-no-PI (with perceptual loss) outperforms SPIGAN-base (without perceptual loss) in both datasets. This implies that perceptual regularization indeed helps stabilizing the adaptation during training, as suggested in Shrivastava et al., (2016) and Bousmalis et al., (2017). Furthermore, SPIGAN-base only improves over the source model by +1.6% mIoU on Cityscapes and has slightly worse negative transfer than SPIGAN-no-PI on Vistas, whereas the full SPIGAN significantly improves performance as previously discussed. This provides further evidence to back our claim: regularization, including our PI-based term, is indeed a key component to improve generalization performance in domain adaptation.\"}",
"{\"title\": \"Authors' response to Reviewer 2\", \"comment\": \"Thank you very much for your precise comments and suggestions. We are delighted you found our idea smart, valuable, and our results promising. We have revised the paper by answering your comments as described below.\", \"q1\": \"SPIGAN-no-PI better than FCN on Cityscapes, worse on Vistas + basic structures of SPIGAN and FCN\\nThank you for your detailed questions. We have clarified the following in the updated submission.\\n\\nSPIGAN's task network (T in Fig.2) is exactly the FCN network used as baseline in Table 2. At test time, we run only this task network T, which differs only by its weights from the FCN baseline. In the case of the FCN baseline, these weights are trained in a supervised fashion in simulation (source only). In the case of SPIGAN, the task network's weights are obtained via our unsupervised domain adaptation algorithm (using sim and unlabeled target data), with the goal of improving generalization performance over the domain gap. This explains why SPIGAN's results are better than FCN's.\\n\\nSPIGAN-no-PI also performs domain adaptation, but does not use Privileged Information (PI), which we postulate is helpful. SPIGAN-no-PI improves generalization performance over FCN (no adaptation) on Cityscapes, but not on Vistas. We measured this phenomenon, called negative transfer in the Domain Adaptation literature, in Table 2 (last column) and qualitatively visualized it in Figures 5-8. These results confirm that PI indeed helps, as SPIGAN improves generalization performance overall (better mIoU) and reduces individual negative transfer cases. The root cause of the difference in behavior of SPIGAN-no-PI between Cityscapes and Vistas is discussed in more details in section 4.3, and in the response to Reviewer 1. It is due to a larger visual variety in Vistas than in Cityscapes.\", \"q2\": \"Ablation study on perceptual loss\\nThis is an interesting additional experiment to run. We did not initially run it, because the focus of the analysis is on measuring the relative importance of our contribution (PI), which is why we discussed only SPIGAN-no-PI vs SPIGAN, both using the perceptual loss. We will add results for SPIGAN-no-PI without perceptual loss in the next revised version (we are currently running these additional experiments).\", \"q3\": \"Validation set + hyper-parameters\\nThank you for pointing out a part of our main text that can be clarified. We follow the common protocol in unsupervised domain adaptation [Shrivastava et al., 2016, Zhu et al., 2017, Bousmalis et al., 2017]: we tune hyper-parameters using grid search on a small validation set different than the target set. For Cityscapes, we use a subset of the validation set of Vistas, and vice-versa. Note that the values found are the same across datasets and experiments, which shows they have a certain degree of robustness and generalization. We have added a clarification in section 4.1.\\n\\nMoreover, our hyper-parameters described in section 4.1 confirmed that the two most important factors in the objective are the GAN and task losses (\\\\alpha=1, \\\\beta=0.5). This is intuitive, as the goal is to improve the generalization performance of the task network (the task loss being an empirical proxy) across a potentially large domain gap (addressed first and foremost by the GAN loss). 
At a secondary level of importance come the regularization terms in the objective: 1) the perceptual loss (for stabilizing the GAN training), and 2) our PI loss, which is an additional constraint on the adaptation. This is again intuitive, as the regularizers are not the main learning objective. We have added details and loss curves in section 4.1 in the revised version.\", \"q4\": \"Better discuss negative transfer rate\\nThank you for your suggestion. We believe Table 2 and Figures 5-8 quantitatively and qualitatively describe an important causal explanation for our mean IoU results: the relative importance of instances with negative transfer, an important failure mode of domain adaptation methods in general (cf. Csurka, G. (2017): Domain adaptation for visual applications: A comprehensive survey). Previous related works we compare to do not measure this phenomenon or discuss it in depth, hence why we proposed this new complementary measure and only limited the discussion to our ablative analysis. We expanded on this point in section 4.3, and hope our negative transfer metric will encourage other researchers to discuss negative transfer in more depth and compare to our results.\", \"q5\": \"Two missing related papers:\\nThank you for pointing out these missing citations. We discuss these two works in section 2 in our revised version. For fair comparison, we only listed the second paper's results in the updated Table 1, because the Synthia-to-Cityscapes results in the first paper are based on a reduced ontology, while all other methods (including ours) report results on 16 classes. Our method outperforms Zhou et al when using the same resolution and FCN8s task network.\"}",
"{\"title\": \"Authors' response to Reviewer 3\", \"comment\": \"Thank you for your very detailed review and generous feedback towards making our submission even stronger. We are happy you found our work on this challenging problem valuable and novel. We have revised the paper by following your comments, as described in more details below. All the changes are visible using the \\\"Show Revisions\\\" tool on OpenReview.\", \"q1\": \"Missing citations\\nThank you for pointing out the missing citations, which are relevant indeed. We have added and discussed briefly the provided references in the revised version of the related work.\", \"q2\": \"Ablation study on perceptual loss\\nWe agree this is an interesting additional experiment to run to have a completely thorough ablative analysis. We did not initially run it, because the focus of the analysis is on measuring the relative importance of our contribution (PI), which is why we discussed only SPIGAN-no-PI vs SPIGAN, both using the perceptual loss. We will add results for SPIGAN-no-PI without perceptual loss in the next revised version (we are currently running these additional experiments).\", \"q3\": \"Clarity of the paper\\nThank you for the detailed suggestions. We have added related background material in section 2 in the current revised version. We have also added the target-only results to both Table 1 and Table 2.\", \"q4\": \"Convergence plots and training analysis\\nWe have added the loss curves and a related discussion in section 4.1, confirming the stability of our training regime.\", \"q5\": \"validation set + hyper-parameters\\nThank you for pointing out a part of our main text that needs to be clarified. Setting hyper-parameters in a fully unsupervised setting is challenging indeed. As you mention, we ensured we do not use any labels from the target dataset. We follow the common protocol in unsupervised domain adaptation [Shrivastava et al., 2016, Zhu et al., 2017, Bousmalis et al., 2017, Sankaranarayanan et al., 2018]: we tune hyper-parameters using grid search on a small validation set different than the target set. For Cityscapes, we use a subset of the validation set of Vistas, and vice-versa. Note that the values found are the same across datasets and experiments, which shows they have a certain degree of robustness and generalization. We have added a clarification in section 4.1.\\n\\nMoreover, our hyper-parameters described in section 4.1 confirmed that the two most important factors in the objective are the GAN and task losses (\\\\alpha=1, \\\\beta=0.5). This is intuitive, as the goal is to improve the generalization performance of the task network (the task loss being an empirical proxy) across a potentially large domain gap (addressed first and foremost by the GAN loss). At a secondary level of importance come the regularization terms of the loss, which in our case are \\u201ccontents-preserving\\u201d related: 1) the perceptual loss, which accounts for the semantics of the scene (is used for stabilizing the GAN training as mentioned in Shrivastava et al.), and 2) our PI loss, which accounts for the geometry of the scene and is an additional constraint on the adaptation. This is again intuitive, as the regularizers are not the main learning objective. The right balance of these two type of \\u201ccontent\\u201d-preserving factors was found via grid search as described above. We have added the details and loss curves in section 4.1 in the revised version.\"}",
"{\"title\": \"Authors' response to Reviewer 1\", \"comment\": \"Thank you very much for your feedback and valuable comments. We are happy you found our submission to be a valuable contribution to the community. We have revised the paper by following your comments, as explained below. All the changes are visible using the \\\"Show Revisions\\\" tool on OpenReview.\", \"q1\": \"The use of 360x640 as resolution\", \"this_resolution_was_used_for_two_main_reasons\": \"1) fair comparison (this resolution is part of the standard protocol used by the related works we compare to), 2) faster exploration. Nonetheless, we agree that higher resolution experiments would be interesting. Therefore, following your comments, we ran additional experiments using a much higher resolution (512 x 1024), and got results competitive with the state-of-the-art, reinforcing our previous experimental conclusions. The details are updated in Table 1 in the latest revised version of our manuscript.\", \"q2\": \"The use of FCN8s\\nWe agree that using bigger and better backbones than FCN8s is likely to result in significant accuracy improvements. We choose to use FCN8s to have a fair comparison with previous domain adaptation works in the literature (cf. Hoffman et al., 2016, Zhang et al., 2017, Sankaranarayanan et al., 2018, and Zou et al. 2018), where FCN8s is widely used. Moreover, we were seeking to simplify our pipeline by reducing the size of the different models that are part of SPIGAN and the time taken to train these models in order to stay within a constrained computational budget for training. In this regard, FCN8s provides us with a simple architecture of low memory footprint and fast to train, which makes exploration easier and faster, thus enabling our ablative analysis. Furthermore, we believe this improves the reproducibility of the paper.\", \"q3\": \"More details on what is happening with Vistas and SPIGAN-no-PI.\\nFollowing your remarks, we have investigated further the difference between our Cityscapes and Vistas results. We could not find any outstanding problem in the training of our baselines or methods: we use the same code, experimental protocol, and parameter tuning in all cases (discussed in more details in the updated Section 4.1). The only difference between SPIGAN-no-PI and SPIGAN is the addition of the PI-based term (Eq.5) in the learning objective (Eq.1). This added term acts as a regularizer, aiming to constrain the optimization to preserve the PI, which is depth information in this case. Our assumption is that this added term improves generalization performance. We in fact run our SPIGAN-no-PI experiments by just setting the PI-regularization hyperparameter \\\\gamma in Eq.1 to 0 and not running the corresponding P network.\\n\\nConsequently, we believe the difference between Cityscapes and Vistas is indeed explained by the difference between the datasets themselves. Cityscapes is a more visually uniform benchmark than Vistas: Cityscapes was recorded in a few German cities in nice weather, Vistas contains crowdsourced data from all over the world with varying cameras, environments, and weathers. This makes Cityscapes more amenable to image translation methods (including SPIGAN-no-PI), as can be seen in Figure 5 where a lot of the visual adaptation happens at the color and texture levels. Furthermore, a larger domain gap is known to increase the risk of negative transfer (cf. Csurka, G. (2017). Domain adaptation for visual applications: A comprehensive survey. 
arXiv preprint arXiv:1702.05374.). This is indeed what we quantitatively measured in Table 2 and qualitatively confirmed in Figure 6. SPIGAN-no-PI suffers more from this issue than SPIGAN, which in our view validates our hypothesis: PI improves generalization performance. Note, however, that SPIGAN still suffers from similar but less severe artifacts in Figure 6. They are just more consistent with the depth of the scene, which helps addressing the domain gap and avoids the catastrophic failures visible in SPIGAN-no-PI.\\n\\nWe have clarified the previous points in the revised submission.\"}",
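The response above leans on the negative-transfer measurement in Table 2; since the exact definition is not reproduced in this thread, the small sketch below encodes one natural reading of it. Counting an image as a negative-transfer case when its adapted per-image mIoU drops below the source-only score is our assumption; the paper may aggregate differently.

```python
import numpy as np

def negative_transfer_rate(miou_adapted, miou_source):
    """Share of target images on which adaptation hurts (a sketch).

    `miou_adapted` and `miou_source` are per-image mIoU scores of the
    adapted model and the source-only baseline on the same images.
    """
    adapted = np.asarray(miou_adapted, dtype=float)
    source = np.asarray(miou_source, dtype=float)
    return float(np.mean(adapted < source))
```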
"{\"title\": \"interesting use of depth information from simulators as priviledged information for unsupervised domain adaptive segmentation\", \"review\": \"The paper focuses on the problem of semantic segmentation across domains. The most standard setting for this task involves real world street images as target and synthetic domains as sources with images produced by simulators of photo-realistic hurban scenes. This work proposes to leverage further depth information which is actually produced by the simulator together with the source images but which is in general not taken into consideration.\", \"the_used_deep_architecture_is_a_gan_where_the_generator_learning_is_guided_by_three_components\": \"(1) the standard discriminator loss (2) the cross entropy loss for image segmentation that evaluates the correct label assignment to each image pixel (3) an l1-based loss which evaluates the correct prediction of the depth values in the original and generated image. A further perceptual regularizer is introduced to support the learning.\\n\\n+ overall the paper is well organized and easy to read\\n+ the proposed idea is smart: when starting from a synthetic domain there may be several hidden extra information that are generally neglected but that can instead support the learning task\\n+ the experimental results seem promising \\n\\nStill, I have some concerns\\n\\n- if the main advantage of the proposed approach is in the introduction of the priviledged information, I would expect that disactivating the related PI loss we should get back to results analogous of those obtained by other competing methods. However from Table 2 it seems that SPIGAN-no-PI is already much better than the FCN Source baseline in the Cityscape case and much worse in the Vistas case. This should be better clarified -- are the basic structure of SPIGAN and FCN analogous? \\n\\n- the ablation does not cover an analysis on the role of the perceptual regularizer. This is also related to the point above: the use of a perceptual loss may introduce a basic difference with respect to competing methods. It should be better discussed.\\n\\n- section 4.1 mentions the use of a validation set. More details should be provided about it and on how the hyperparameters were chosen.\\nA possible analysis on the robustness of the method to those parameters could provide some further intuition about the network stability.\\nIt might be also interesting to check if the the loss weights provide some intuition about the relative importance of the losses in the learning process.\\n\\n- the negative transfer rate is another way to measure the advantage of using the PI with respect to not using it. However, since it is not evaluated for the competing methods its value does not add much information and indeed it is only quickly mentioned in the text. 
It should be better discussed.\\n\\n- some recent papers have shown better results than those considered here as baseline:\\n[Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018]\\n[Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training, ECCV 2018]\\nthey should be included as related work and considered as reference for the experimental results.\\n\\nOverall I think that the proposed idea is valuable but the paper should better clarify the points mentioned above.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"[REVISED REVIEW] Interesting way of using depth data from a simulator as Privileged Information.\", \"review\": \"This papers presents an unsupervised domain adaptation algorithm for semantic segmentation. A generative adversarial network is envisaged to carry out synthetic-to-real image translation. In doing so, depth information extracted from a simulator is used as privileged information (PI) to boost the transfer on the target domain, regularizing the model and ensuring a better generalization.\\n\\n*Quality*\\nThe paper addresses a relevant problem, which is the adaptation of methods from simulated data to real ones. The authors devise a convincing method which takes advantage of state-of-the art generative adversarial architectures and privileged information. \\n\\n*Clarity*\\nThe paper is sufficiently well written. In general, the main idea and proposed method are clear and easy to follow. The only problem is that some background concepts (such as privileged information or unsupervised domain adaptation) are given for granted, compromising the readability for someone not familiar with those topics. \\nOn a more technical side, for reproducibility purposes, the following aspects have to be clarified:\\n1.\\tDetails about the validation set used for grid search. Is the validation set extracted from the target domain? (In principled labels from the target domain should not be used during learning).\\n2.\\tNumber of iterations before convergence: is the training of the network numerically stable? Are there issues in convergence of some of the sub-modules? Which one is leading the learning?\\n3.\\tComments about the relative magnitudes of losses. This will maybe give some intuitions about the values used for the hyper-parameters (e.g., the L_PI is only weighted by 0.1).\\n\\n*Originality*\\nThe way authors take advantage of depth information extracted from a simulator as privileged information is novel in the sense that, with respect to the original student-teacher paradigm of the paper by Vapnik & Vashist, here the idea of privileged information is interpreted as a regularizer to boost the training stage. \\n\\n*Significance*\\nThe application of semantic segmentation in urban scenes for navigation tasks is relevant. The scored results are on pair with/ superior to state-of-the-art in unsupervised domain adaptation. \\nHowever, the ablation study could be more extensive in order to understand the contribution of the several components, besides the PI network. In fact, it would be interesting to analyze the contribution of the perceptual loss (and others). Also, one could include the target-only result (as done in original LSD paper) to provide an upper bound on the best accuracy that is achievable.\\n\\n*Pros*\\n1. The applicative setting of semantic segmentation in urban scenes for navigation is relevant. \\n2. Using privileged information from simulators seems novel and well presented in this paper.\\n3. Strong experimental results achieved in challenging benchmarks.\\n\\n*Cons*\\n1. The regularization effect of the PI network could be supported by a more extensive ablation study of the model, for example by ablating the several losses used (in particular, the perceptual loss).\\n2. A quite relevant amount of hyper-parameters need to be cross-validated. Is the method robust against different parameters\\u2019 configuration?\\n3. Missing citations [1, 2]: there are works in the literature that can hallucinate a missing modality during testing. 
Although such works approach a different problem, authors should cite them.\\n\\n[1] Judy Hoffman, Saurabh Gupta, Trevor Darrell - Learning with Side Information through Modality Hallucination \\u2013 CVPR 2016\\n[2] Nuno Garcia, Pietro Morerio, Vittorio Murino - Modality Distillation with Multiple Stream Networks for Action Recognition \\u2013 ECCV 2018\\n\\n*Final Evaluation*\\nThe authors face the challenging synthetic-to-real adaptation setup, with an interesting usage of z-buffer from a simulator as privileged information. Overall, the work is fine, apart from the following points.\\n1.\\tIn addition to a few missing citations [1, 2], an ablation study on the perceptual loss is necessary to dissect the impact of each component of the pipeline. \\n2.\\tThe clarity of the paper can be improved by adding some background material on unsupervised domain adaptation and learning with privileged information (PI), as to better highlight the technical novelty of using PI within a L1 regularizer. \\n3.\\tThe training stage of all submodules could have better investigated, for instance, by providing some convergence plots of the loss functions across iterations.\\n4.\\tHow to do grid search for parameters in a domain adaptation setting is always a delicate aspect and authors seem elusive on that respect. \\n5.\\tAgain about hyper-parameters. Due to their high number, some sensitivity analysis should have provided.\\nAs it is, the paper\\u2019s strengths slightly outperform the weaknesses, leading to an overall borderline-accept. If authors implement the suggested modification, a full acceptance will be feasible.\\n\\n[COMMENTS AFTER AUTHORS' RESPONSE]\\nAfter the rebuttal provided by authors, all raised questions and criticisms have been fully solved. Therefore, I recommend for a full acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Privileged information for domain adaptation\", \"review\": \"This article addresses the problem of domain adaptation of semantic segmentation methods from autonomous vehicle simulators to the real world. The key contribution of this paper is the use of privileged information for performing the adaptation. The method is of those called unsupervised domain adaptation as no labels from the target domain are used for the adaptation. The method is based on a GAN with: a) A generator that transforms the simulation images to real appearance; b) A discriminator that distinguish between real and fake images; c) a privileged network that learns to perform depth estimation; and d) the task networks that learns to perform semantic segmentation. Privileged information is very few exploited in simulations and I consider it an important way of further exploit these simulators.\\n\\nThe article is clear, short, well written and very easy to understand. The method is effective as it is able to perform domain adaptation and improve over the compared methods. There is also an ablation study to evaluate the contribution of each module. This ablation study shows that the privileged information used helps to better perform the adaptation. The state of the art is comprehensive and the formulation seams correct. The datasets used for the experiments (Synthia, Cityscapes and Vistas) is very adequate as they are the standard ones.\", \"some_minor_concerns\": [\"The use of 360x640 as resolution\", \"The use of FCN8 instead of something based on Resnet or densenet\", \"I would like some more details on what is happening with Vistas dataset. SPIGAN-no-PI underperforms the source model. By looking at Figure 4 we can observe that the transformation of the images is not working properly as many artifacts appear. In SPIGAN those artifacts does not appear and then the adaptation works better. Could it be a problem in the training?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJlcV2Actm | MahiNet: A Neural Network for Many-Class Few-Shot Learning with Class Hierarchy | [
"Lu Liu",
"Tianyi Zhou",
"Guodong Long",
"Jing Jiang",
"Chengqi Zhang"
] | We study many-class few-shot (MCFS) problem in both supervised learning and meta-learning scenarios. Compared to the well-studied many-class many-shot and few-class few-shot problems, MCFS problem commonly occurs in practical applications but is rarely studied. MCFS brings new challenges because it needs to distinguish between many classes, but only a few samples per class are available for training. In this paper, we propose ``memory-augmented hierarchical-classification network (MahiNet)'' for MCFS learning. It addresses the ``many-class'' problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain. MahiNet uses a convolutional neural network (CNN) to extract features, and integrates a memory-augmented attention module with a multi-layer perceptron (MLP) to produce the probabilities over coarse and fine classes. While the MLP extends the linear classifier, the attention module extends a KNN classifier, both together targeting the ''`few-shot'' problem. We design different training strategies of MahiNet for supervised learning and meta-learning. Moreover, we propose two novel benchmark datasets ''mcfsImageNet'' (as a subset of ImageNet) and ''mcfsOmniglot'' (re-splitted Omniglot) specifically for MCFS problem. In experiments, we show that MahiNet outperforms several state-of-the-art models on MCFS classification tasks in both supervised learning and meta-learning scenarios. | [
"deep learning",
"many-class few-shot",
"class hierarchy",
"meta learning"
] | https://openreview.net/pdf?id=rJlcV2Actm | https://openreview.net/forum?id=rJlcV2Actm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1gf-LXYe4",
"Hyg6ZDZYRm",
"rJxHQ4-KAQ",
"BJxFU-ltA7",
"BJlDageK0Q",
"BJeoaTJYRQ",
"rklk3pJt0X",
"B1xSrU1YC7",
"BJgsHqI8pm",
"HyleW0Hj3Q",
"BJgxSoFu3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545315834051,
1543210756822,
1543210012993,
1543205201257,
1543205054606,
1543204291341,
1543204263402,
1543202365501,
1541986883465,
1541262839715,
1541081911768
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1478/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1478/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1478/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1478/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers raised a number of concerns including low readability/ clarity of the presented work and methodology, insufficient and at times unconvincing experimental evaluation of the proposed, and lack of discussion on pros and cons of the presented. The authors\\u2019 rebuttal addressed some of the reviewers\\u2019 comments but failed to address all concerns and reconfirmed that relatively large changes are still needed for the paper to be useful to the readers. Hence, although I believe this could be a very interesting paper, I cannot suggest it at this stage for presentation at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"New baselines with and without class hierarchy; t-SNE visualization reflects memory\\u2019s representativeness; Memory update is cheap (1)\", \"comment\": \"Thanks for your comment! The following lists your questions and the corresponding replies:\", \"q1\": \"It is better to offer some discussion about the learned memory slots in the view of \\u201cdiverse and representative feature\\u201d.\", \"r1\": \"This is a good idea. In Figure 4 of Appendix-D, we visualize the feature vectors for the memory slots and the feature vectors of the images using T-SNE, where different colors represent different fine classes. It shows that for each class, the small number of feature vectors in memory are diverse and sufficiently representative of the whole class.\", \"q2\": \"It may be more interesting if its effectiveness can be proved or justified theoretically.\", \"r2\": \"It is interesting but also challenging to provide a thorough theoretical analysis of the proposed memory augmented neural nets, considering that the theoretical properties of some basic neural networks without memory update are unclear. Intuitively, there is a trade-off: more prototypes lead to more representativeness, but too many will weaken generalization and efficiency. We will keep studying its theoretical properties in the future.\", \"q3\": \"It is better to compare MahiNet with other state-of-the-art works, such as Relation Network whose performance is higher than Prototypical Net. In addition, if more challenging datasets can be further evaluated in the experiments, the paper might be more convincing.\", \"r3\": \"Thanks for the suggestion! We add relation network [1] as a new baseline in Table 4. Our method outperforms it in several different settings in the table. We briefly list the experiment results as follows:\\n\\nComparison w.r.t. the accuracy (%) of different approaches in meta-learning scenario on mcfsImageNet. \\u201cn-k\\u201d represents n-way (class) k-shot. \\n----------------------------------------------------------------------------------------------------------\\nModel Hierarchy 5-10 20-5 20-10 \\nPrototypical network N 78.48\\u00b10.66 67.78\\u00b10.37 70.11\\u00b10.38 \\nRelation net N 74.12\\u00b10.78 52.66\\u00b10.43 55.45\\u00b10.46\\n----------------------------------------------------------------------------------------------------------\\nMahiNet Y 80.74\\u00b10.66 70.11\\u00b10.41 73.50\\u00b10.36 \\n----------------------------------------------------------------------------------------------------------\\n\\nIn our experiments, relation network fails when the number of classes (ways) is large: it usually stops at a suboptimal solution after the first several iterations (under different learning rates). We show this phenomenon in Figure 3 of Appendix-D by plotting the training loss and accuracy for the first 100 iterations. The loss and accuracy stay almost the same for iterations afterwards.\\n\\nOne primary reason to build new datasets in this paper specifically for many-class few-shot learning problem is that the existing datasets are not challenging enough or do not fulfill the requirement of this problem. As shown by the comparison in Table 2, mcfsImageNet is the most challenging one for this problem since it has much more classes than others and the total number of images is large.\"}",
"{\"title\": \"New baselines with and without class hierarchy; t-SNE visualization reflects memory\\u2019s representativeness; Memory update is cheap (2)\", \"comment\": \"Q4: The hierarchy information provides the guidance to fine-grained classification, which not only can be added to MahiNet but also the other models. Therefore, to prove its effectiveness, it is better to add hierarchy information to other models for comparison.\", \"r4\": \"In the new experiments shown in Table 6 in Appendix D, we add the class hierarchy to relation network. As shown in the table, the class hierarchy improves the accuracy by more than 1%. We briefly list the experiment results as follows:\", \"the_improvement_of_class_hierarchy_on_relation_network_on_mcfs_imagenet\": \"---------------------------------------------------------------------------------------------------\\nModel Hierarchy 5 way 5 shot 5 way 10 shot\\nRelation network N 63.02\\u00b10.87 74.12\\u00b10.78 \\nRelation network Y 66.82\\u00b10.86 75.31\\u00b10.90\\n---------------------------------------------------------------------------------------------------\\nMahiNet Y 74.98\\u00b10.75 80.74\\u00b10.66\\n---------------------------------------------------------------------------------------------------\", \"q5\": \"Regarding the results on the column of 50-5 and 50-10 in Table 4, when the number of class increases to 50, the results are just slightly higher than prototypical network.\", \"r5\": \"When the number of classes increases, the complexity of the task dramatically increases, and improving the performance of few-shot learning becomes much harder. Hence, comparing to >2-3% improvements on 20-way, the improvements on 50-way (>1% for 5-shot and >0.7% for 10-shot) are still significant considering the number of classes increases from 20 to 50. Note the reported performance of 50-way can be further improved by specifically tuning hyperparameters for 50-way, because the current hyperparameters are the same for 50-way, 20-way, and 5-way and achieved by tuning on 20-way experiments. We did not do a heavy tuning specifically for each setting due to our limited computational resources.\", \"q6\": \"Considering that the memory update mechanism is of the high resource consumption and complexity, it is better to provide more details about clustering, and training and testing time.\", \"r6\": \"The time costs of memory update and clustering are negligible comparing to the total time costs, because we only keep and update a very small memory. For each epoch of supervised learning, the average clustering time is 30s and is only 7.6% of the total epoch time (393s). Within an epoch, the memory update time (0.02s) is only 9% of the total iteration time (0.22s). In meta-learning, we use a fixed memory storing all training samples and do not update it. So it has the same time cost as prototypical network [2].\\n\\n[1] Flood Sung Yang, Li Yongxin, Tao Xiang, Zhang, Philip HS Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.\\n[2] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NIPS), 2017.\"}",
"{\"title\": \"First success of applying class hierarchy to few-shot learning; More clarifications added; New discussions of several theoretical properties (1)\", \"comment\": \"Thanks for your comments and suggestions! You can find our response to each of your questions/comments in the following.\", \"q1\": \"The model figure is useful but could be refined to add additional clarity -- particularly in the case of the KNN learning procedure.\", \"r1\": \"We have refined the figure to make it more clear. The learning of KNN classifier aims to optimize 1) the similarity metric parameterized by the attention (detailed in Section 2.3); and 2) a small support set of feature vectors per class stored in memory (detailed in Section 2.4). For a given sample, we compute the similarity scores of its feature vector to all the feature vectors from the support set per class by Eq. (8), and aggregate the scores to obtain a similarity score per class by Eq. (9). Then, a softmax is applied to compute probability over all classes from the similarity scores on all classes. During training, the attention block is updated by standard backpropagation, while the memory update procedure is detailed in line 5-9 and line 11-15 of Algorithm 1.\", \"q2\": \"Is it really true that class-hierarchies have never been used to perform coarse-to-fine inference in past work?\", \"r2\": \"In this paper, class hierarchy is specifically used to make many-class few-shot learning possible. To the best of our knowledge, we are the first to successfully employ the class hierarchy information to improve few-shot learning. We tried the published code with class hierarchy of [2] in both supervised few-shot learning and semi-supervised few-shot learning scenarios. However, all experiments failed to bring improvement in performance. We have updated the manuscript to clarify this point.\", \"q3\": \"It is unclear to me which model setup was used in experiments.\", \"r3\": \"As explicitly shown in Figure 2 and explained in the first paragraph of Section 2.2, in supervised learning we use MLP for coarse classification and MLP+KNN for fine classification, while in meta-learning we use MLP+KNN for coarse classification and MLP for fine classification. Intuitively, MLP performs better when data is sufficient (supervised learning), while the KNN classifier is more stable in few-shot scenario (meta-learning). Hence, we always apply MLP to coarse prediction, and apply KNN to fine prediction. In addition, we use KNN to assist MLP for coarse classification in meta-learning (when data might be insufficient even for coarse classes), and use MLP to assist KNN for fine classification in supervised learning (when data per fine-class is much more than the meta-learning case).\"}",
"{\"title\": \"First success of applying class hierarchy to few-shot learning; More clarifications added; New discussions of several theoretical properties (2)\", \"comment\": \"Q4: Can anything theoretical be shown about the class hierarchy based classification technique?...can something more precise be said about how it restricts the hypothesis space?\", \"r4\": \"The high-level idea of this paper follows Neural Turing Machine [1], which is a well-known memory-based system. We follow NTM to create our memory-augmented mechanism, and make some specific modification to adapt the many-class few-shot problem.\\n\\nThe hypothesis space is significantly reduced after applying class hierarchy. This can be shown by comparing Eq.(5) and Eq.(6), where Eq.(6) is derived from Eq.(5) after applying the class hierarchy. In particular, without class hierarchy, Eq.(5) will assign each fine class a nonzero probability, i.e., the hypothesis space is $C*F$ ($C$ is the number of coarse classes and $F$ is the number of fine classes in each coarse class), while Eq.(6) only assign nonzero probabilities to the fine classes within the coarse classes and rule out all the other fine classes, i.e., the hypothesis space is $F$.\", \"q5\": \"Simple theoretical analysis should be provided. For example, in the limit of infinite data, does the memory-based KNN learning procedure actually produce the right classifier?...The procedure for updating the KNN memory is intuitive, but can anything more be said about it? In isolation, is the KNN learning procedure at least consistent -- i.e. in the limit of large data does it converge to the correct classifier?\", \"r5\": \"The memory updating procedure presented in Section 2.4 is a modified version of the coreset algorithm for streaming k-center clustering (Section 3 of [3]), where the distance metric for the clustering is the hamming distance between prediction and ground truth (0 if correct and 1 if wrong). In particular, if the prediction is correct, the new sample will be assigned to a feature vector in memory (i.e., a cluster centroid) and update it (Eq.(11)), otherwise the new sample will be written to the cache (Eq.(12)) as a new feature vector (to form new clusters). At the end of each epoch, according to the utility scores, we replace $r$ feature vectors in memory with the $r$ new cluster centroids computed from the cache, so the size of memory (the number of cluster centroids) keeps the same.\\n\\nIf we assume that the correct classifier is the optimal (smallest error rate) KNN classifier with support set of limited size k (an NP-hard problem as k-center clustering), the theoretical properties of coreset guarantee that the support set achieved by our memory update procedure is a factor 8 approximation to the optimal KNN classifier on the hamming distance. However, since our training simultaneously updates the similarity matric (attention) and the input features to the KNN, the theoretical analysis becomes too complicated and the above result is not rigorous. So we are not sure if it is a good idea to include it in the paper.\\n\\n\\n[1] Alex Graves, Greg Wayne, Ivo Danihelka. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014. \\n[2] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle and Richard S. Zemel. Meta-Learning for Semi-Supervised Few-Shot Classification. 
In International Conference on Learning Representations (ICLR), 2018.\\n[3] Moses Charikar, Chandra Chekuri, Tom\\u00e1s Feder, and Rajeev Motwani. 1997. Incremental clustering and dynamic information retrieval. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing (STOC '97).\"}",
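R5 relates the memory update to streaming k-center clustering; a compact pseudo-implementation of that reading follows. The moving-average slot update, the use of scikit-learn's KMeans in place of the cache clustering, and the lowest-utility replacement rule are assumptions filling in details that Algorithm 1 (not reproduced in this thread) would pin down.

```python
import numpy as np
from sklearn.cluster import KMeans

class SlotMemory:
    """Per-class support memory with the cache-and-replace update of R5 (a sketch)."""

    def __init__(self, slots, rho=0.5):
        self.slots = np.asarray(slots, dtype=float)  # (K, D) prototype slots
        self.utility = np.zeros(len(self.slots))     # usage credit per slot
        self.cache = []                              # features of mispredictions
        self.rho = rho                               # assumed moving-average rate

    def update(self, feature, correct):
        if correct:
            # Eq. (11) in spirit: fold the sample into its nearest slot.
            i = int(np.argmin(np.linalg.norm(self.slots - feature, axis=1)))
            self.slots[i] = self.rho * self.slots[i] + (1 - self.rho) * feature
            self.utility[i] += 1
        else:
            # Eq. (12) in spirit: mispredicted samples seed new prototypes.
            self.cache.append(np.asarray(feature, dtype=float))

    def end_of_epoch(self, r):
        # Swap the r lowest-utility slots for r centroids of the cache,
        # keeping the memory size constant (streaming k-center flavour).
        if len(self.cache) >= r > 0:
            km = KMeans(n_clusters=r, n_init=10).fit(np.stack(self.cache))
            worst = np.argsort(self.utility)[:r]
            self.slots[worst] = km.cluster_centers_
            self.utility[worst] = 0
        self.cache.clear()
```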
"{\"title\": \"Exploring class hierarchy in few-shot learning is non-trivial; Clarity improved; More detailed explanations (1)\", \"comment\": \"Thanks for your comments and suggestions! First of all, we want to emphasize that solving many-class few-shot problem with the class hierarchy is not trivial. It is not only about the design of the loss defined on the hierarchy, but also the model structure, i.e., which model should be used for coarse classification and which for fine classification, how to combine them in a unified model and how to let them cooperate with each other during training and inference. These are challenging problems with many possible options, but not every option leads to good performance. We tried the published code with class hierarchy of [1] in both supervised few-shot learning and semi-supervised few-shot learning scenarios. However, all experiments failed to bring improvement in performance. In this paper, we carefully design new loss functions and network structures, which aim to maximize the assistance of the class hierarchy, and use the extra information provided by it to precisely solve the many-class few-shot problem.\\n\\nWe are very pleased to answer other questions/comments from you one by one as follows.\", \"q1\": \"How the class hierarchy is got, manually set or automatically generate? Whether the ideas still work if some coarse classes share the same fine class?\", \"r1\": \"In this paper, we use the class hierarchy provided by the original ImageNet and Omniglot datasets (details can be found in the 2nd and 3rd paragraphs of Appendix-A). In practice, the class hierarchy (the coarse class labels in specific) we required is usually available or cheap to achieve.\\n\\nIt is interesting to study the case when some coarse classes share the same fine classes. The idea of this paper can be applied to this case. In particular, we only need to modify the right-hand side of Eq.(6) by summing over all possible coarse classes $z_i$, which is a result of applying Eq.(5) to the multiple coarse label case. This leads to a modified loss function. Most of the other parts of MahiNet can be kept the same.\", \"q2\": \"The so-called attention module is just classic KNN operations, please don't name it attention just because the concept \\\"attention\\\" is hot.\", \"r2\": \"Attention module in MahiNet (Eq.(8)) is not classic KNN operations: it provides a learnable distance/similarity metric ($g()$ and $h()$ are learnable) for the KNN classifier (while classic KNN uses a fixed metric); and the produced similarities of each sample to its K nearest neighbors are used as weights to compute the final prediction of the sample. It plays exactly the same role as the attention modules used in other meta-learning models such as matching networks. Since learnable similarity and similarity weighted averaging are the two critical features of attention mechanism, it is more accurate to call it attention instead of KNN here.\"}",
"{\"title\": \"Exploring class hierarchy in few-shot learning is non-trivial; Clarity improved; More detailed explanations (2)\", \"comment\": \"Q3: Why different \\\"attention\\\" operations are used for supervised learning and meta-learning?\", \"r3\": \"We tried both options in Eq.(8) for the two learning scenarios in our experiments. According to our experience, while dot-product attention works better for supervised learning, Euclidean distance based attention is preferred in meta-learning (this is consistent with the observations in prototypical networks [2]).\", \"q4\": \"How to get the pre-trained models for supervised learning?\", \"r4\": \"We train the CNN (ResNet)+MLP model by using the standard backpropagation to minimize the sum of cross entropy loss on coarse-classes and fine-classes.\", \"q5\": \"What will happen if alternatively apply supervised learning and meta-learning?\", \"r5\": \"Theoretically, supervised learning and meta-learning have different optimization objectives, so alternatively applying the two might even increase the objective value of each (assuming each solves a minimization). In addition, our model is slightly different in these two settings (meta-learning uses a fixed memory and does not have memory update module), so alternatively applying the two is not even optimizing the same model structure.\\n\\nIntuitively, if the training/test sets of the tasks in meta-learning are sampled from the training/test set of supervised learning (which does not always hold in practice), training the model in supervised learning mode is helpful to improve the performance of meta-learning. However, this is a cheating and is not legal, because supervised learning exposes all the classes, which contain the classes of test tasks for meta-learning, but these classes should be unseen during training for meta-learning.\", \"q6\": \"In Table 4, why MahiNet(Mem-2) w/o Attention and Hierarchy performs better than the one w/o attention?\", \"r6\": \"Since Table 4 does not include \\u201cMahiNet (Mem-2) w/o attention\\u201d, we assume that the reviewer refers to \\u201cMahiNet (Mem-2) w/o Hierarchy\\u201d. The main reason is that the many-class few-shot data without class hierarchy is not sufficient to learn a trustworthy attention (similarity metric), and such attention without sufficient training might be harmful to the final performance. In contrast, with class hierarchy, attention only needs to distinguish much fewer fine classes within each coarse class, and the learned attention (even on few-shot data) can faithfully reflect the local similarity within each coarse class.\", \"q7\": \"The authors just compare storing the average features and all features, I think results of different prototype number should be given, since one of their claims to apply KNN is to maintain a small memory.\", \"r7\": \"In our experiments of supervised learning, the memory size is only 754*12(the number of memory features)/125321(the number of image features) = 7.2% of the dataset size. We also tried to increase the memory size to about 10%, but the improvement on performance is neglectable. In each task of meta-learning, since every class only has few-shot samples, the memory required to store all the data is not large. For example, in the 20-way-1-shot setting, the memory size is only 10.2KB, storing 20 features.\\n\\n\\n[1] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle and Richard S. Zemel. 
Meta-Learning for Semi-Supervised Few-Shot Classification. In International Conference on Learning Representations (ICLR), 2018.\\n[2] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NIPS), 2017.\"}",
"{\"title\": \"Summary of Changes in Updated Draft\", \"comment\": \"We appreciate all the reviewers for their detailed comments and constructive suggestions. We also noticed all reviewers agreed that the problem and ideas presented in this paper are novel and interesting. We carefully updated the manuscript based on all the reviewers\\u2019 comments and highlighted all the modifications in the manuscript. Here is a summarization of our modifications :\\n\\n1.\\tIn order to improve clarity, we added more explanations and rewrote some parts of the paper where the reviewers asked for clarification. For example, we revised Figure 2, added the statement that we are the first to successfully apply the class hierarchy in few-shot learning, compared the time costs of memory update and clustering with the total time costs, and added more details about pre-training and analysis to experimental results.\\n\\n2.\\tWe did more experiments on a new baseline method. In particular, we compared to the original relation networks and relation networks with class hierarchy (in the same way as MahiNet). All the new results are reported in Table 4 and Appendix-D.\\n\\n3.\\tWe provide more explanation about memory update and KNN learning. In particular, we visualize the feature vectors selected into memory by T-SNE, and compare them with the rest feature vectors they are trying to represent. The 2D plot in Figure 4 of Appendix-E shows their diversity and representativeness. We also provide an analysis of memory usage at the end of the second paragraph in Section 4.1 and Appendix-E.\"}",
"{\"title\": \"Class hierarchy and cache-based nearest neighbors for many class / few shot\", \"review\": \"This paper presents methods for (1) adding inductive bias to a classifier through coarse-to-fine prediction along a class hierarchy and (2) learning a memory-based KNN classifier through an intuitive procedure that keeps track of mislabeled instances during learning. Further, the paper motivates focused work on the many class / few shot classification scenario and creates new benchmark datasets from subsets of imagenet and omniglot that match this scenario. Experimental results show gains over popular competing methods on these benchmarks.\\nOverall, I like the motivation that this paper provides for many class / few shot and find some of the methods proposed interesting. Yet there are issues with clarity of presentation that made it somewhat difficult to fully understand the exact procedures that were implemented. The model figure is useful, but could be refined to add additional clarity -- particularly in the case of the KNN learning procedure. \\nI'm not entirely familiar with recent work in this sub-field, so it is difficult for me to judge the novelty of the proposed procedure. Is it really true that class-hierarchies have never been used to perform coarse-to-fine inference in past work? If so, this should be state clearly. If not, related work should be mentioned and compared against. Finally, while the procedures are intuitive -- the takeaway of this paper could be substantially improved if even simple theoretical analysis were provided. For example, in the limit of infinite data, does the memory-based KNN learning procedure actually produce the right classifier?\", \"misc_comments_questions\": \"-The paper says at least twice that coarse classification will be performed with an MLP, while fine classification will use a KNN -- yet, the model section also state that both coarse and fine use both MLP and KNN. It is unclear to me which model setup was used in experiments. \\n-Can anything theoretical be shown about the class hierarchy based classification technique? Intuitively, it does add inductive bias through a manually defined taxonomy, but can something more precise be said about how it restricts the hypothesis space? This procedure is simple enough that I would be surprised if similar techniques had not be studied thoroughly in the statistical learning theory literature. \\n-The procedure for updating the KNN memory is intuitive, but can anything more be said about it? In isolation, is the KNN learning procedure at least consistent -- i.e. in the limit of large data does it converge to the correct classifier? Maybe this is trivial to prove, but is worth including.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Need to clarify some procedures\", \"review\": \"This paper try to formulate many-class-few-shot classification problem from 2 perspectives: supervised learning and meta-learning. Although solving this problem with class hierarchy is trivial, combing MLP and KNN in these two ways seems interesting to me. I still have several questions:\\n\\n1) How the class hierarchy is got , manually set or automatically generate? Whether the ideas still work if some coarse classes share same fine class?\\n2) The so-called attention module is just classic KNN operations, please don't naming it attention just because the concept \\\"attention\\\" is hot.\\n3) Why different \\\"attention\\\" operations are used for supervised learning and meta-learning?\\n4) How to get the pre-trained models for supervised learning?\\n5) What will happen if alternatively apply supervised learning and meta-learning?\\n6) In Table 4, why MahiNet(Mem-2) w/o Attention and Hierarchy performs better than the one w/o attention?\\n7) The authors just compare storing the average features and all features, I think results of different prototype number should be given, since one of their claim to apply KNN is to maintain a small memory.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting paper which may need further enhancement\", \"review\": \"This study explores the class hierarchy to solve many-class few-short learning problem in both traditional supervised learning and meta-learning. The model integrates both the coarse-class and fine-class label as the supervision information to train the DNN, which aims to leverage coarse-class label to assist fine-class prediction. The core part in the DNN is memory-augmented attention model that includes at KNN classifier and Memory Update mechanism. The re-writable memory slots in KNN classifier aim to maintain multiple prototypes used to describe the data sub-distribution within a class, which is insured by designing the memory utility rate, cache and clustering component in Memory Update mechanism. This study presents a relatively complex system that combines the idea of matching networks and prototypical networks.\\n\\nOne of the contributions is that the study puts forward a concept of the many-class few-short learning problem in both supervised learning and meta-learning scenarios, and uses a dataset to describe this problem.\\n\\nUsing the memory-augmented mechanism to maintain multiple prototypes is a good idea. It may be more interesting if its effectiveness can be proved or justified theoretically. Furthermore, it is better to offer some discussion about the learned memory slots in the view of \\u201cdiverse and representative feature\\u201d.\\n\\nThe experiment results in Table 4 and Table 5 compare the MahiNet with Prototypical Net on the mcfsImageNet and mcfsOmniglot dataset. It is better to compare MahiNet with other state-of-the-art works, such as the Relation Network whose performance is higher than Prototypical Net. In addition, if more challenging datasets can be further evaluated in the experiments, the paper might be more convincing.\\n\\nIn my opinion, the hierarchy information provides the guidance to fine-gained classification, which not only can be added to MahiNet but also the other models. Therefore, to prove its effectiveness, it is better to add hierarchy information to other models for comparison. In addition, regarding the results on the column of 50-5 and 50-10 in Table 4, when the number of class increase to 50, the results are just slightly higher than prototypical network. Considering that the memory update mechanism is of the high resource consumption and complexity, it is better to provide more details about clustering, and training and testing time.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1g9N2A5FX | Interpretable Continual Learning | [
"Tameem Adel",
"Cuong V. Nguyen",
"Richard E. Turner",
"Zoubin Ghahramani",
"Adrian Weller"
] | We present a framework for interpretable continual learning (ICL). We show that explanations of previously performed tasks can be used to improve performance on future tasks. ICL generates a good explanation of a finished task, then uses this to focus attention on what is important when facing a new task. The ICL idea is general and may be applied to many continual learning approaches. Here we focus on the variational continual learning framework to take advantage of its flexibility and efficacy in overcoming catastrophic forgetting. We use saliency maps to provide explanations of performed tasks and propose a new metric to assess their quality. Experiments show that ICL achieves state-of-the-art results in terms of overall continual learning performance as measured by average classification accuracy, and also in terms of its explanations, which are assessed qualitatively and quantitatively using the proposed metric. | [
"Interpretability",
"Continual Learning"
] | https://openreview.net/pdf?id=S1g9N2A5FX | https://openreview.net/forum?id=S1g9N2A5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xkPBTtlN",
"rJxm-HpaJ4",
"B1xXNM8qpX",
"Bkggfq0VTm",
"HkeYvOCVTm",
"HJeXDu9h2X",
"ryg4Fgr9hX"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545356631074,
1544570107257,
1542246954727,
1541888520426,
1541888096896,
1541347419102,
1541193851738
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1476/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1476/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1476/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1476/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1476/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1476/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1476/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Regarding R2\", \"comment\": \"Thanks for the assessment. Regarding R2, this review arrived two weeks late. Thanks again.\"}",
"{\"metareview\": \"The presented method proposes to use saliency maps as a component for an additional metric of forgetting in continual learning, and as a tool as additional information to improve learning on new tasks.\", \"pros\": [\"R2 & R3: Clearly written and easy to follow.\", \"R3: New metric to compare saliency masks\", \"R3: Interesting idea to utilize previously learned saliency masks to augment learning new tasks.\", \"R1: Performance improvements observed.\"], \"cons\": [\"R1 & R2: Novelty is limited in the context of prior works in this field. Unanswered by authors.\", \"R2: Concerns around method's ability to use salient but disconnected components. Unanswered by authors.\", \"R2: Experiments needed on more realistic datasets, such as ImageNet. Unanswered by authors.\", \"R3: Performance gains are small.\", \"R1 & R2: Literature review is insufficient.\", \"Reviewers are leaning reject, and R2's concerns have not been answered by the authors at all. Idea seems interesting, authors are encouraged to take into careful consideration the feedback from authors and continue their research.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Saliency maps utilized for continual learning, but concerns around novelty and performance improvements.\"}",
"{\"title\": \"Interpretable Continual Learning\", \"review\": [\"Authors propose an incremental continual learning framework which is based on saliency maps on the learned tasks(i.e., explanations) with the ultimate goal of learning new tasks, while avoiding catastrophic forgetting. To this end, authors employ an attention mechanism based on average saliency masks computed on the predictions of the earlier task. In addition, a new metric, Flexible Saliency Metric (FSM) is proposed to evaluate the generated saliency maps. Authors employ three public, well-known datasets to evaluate the performance of their proposed framework.\", \"The paper is well written and easy to follow. The methodology is sound and the results demonstrate that the proposed framework outperforms very recent conditional learning approaches. Nevertheless I have some major concerns with the methodology, proposed evaluation metric and experiments. Please find below my comments.\", \"Technical novelty is rather limited. Contribution is incremental with respect to previous works on CL, as they use the variational CL (VCL) framework of Nguyen at al, 2018 and the weight of evidence (WE), as used in Zintgraf et al., 2017, to compute the saliency maps. From these saliency maps, a mask is computed to focus the attention in subsequent tasks, by averaging the explanations. This, however, limits the applicability of the proposed framework to \\u2018similar\\u2019 images (as pointed out by the authors). Another limitation of this technique is that explanations on learned tasks should correspond, spatially, to meaningful/discriminative areas for new tasks. Otherwise, the use of explanations on this CL approach would not work.\", \"According to the authors, one of the limitations of known metrics to evaluate CL approaches is that \\u2018the area of the saliency regions should be all connected, wasting opportunity to identify salient but possibly non-connected areas, such as the two eyes of an animal\\u2019. Nevertheless, I do not see how this can be alleviated in the proposed FSM. The first term of eq (8), i.e., log(d_sal) will be large in the case of, for example, the two eyes of an animal, favouring again for connected saliency regions. How d_sal is computed? Is it a dense matrix between all pair of points?\", \"Being the FSM one of the main contributions of this work, experiments to assess its usability are insufficient. Authors should correlate the values obtained across the different CL frameworks with FSM to the actual performance in terms of precision/accuracy. Results demonstrate that the proposed ICL approach achieves the lowest values, in terms of FSM, but any interpretation can be done if it is not correlated with well established evaluation metrics.\", \"Furthermore, it would be interesting to see how this method performs in more complex datasets, such as ImageNet, where tasks within the continual learning process may differ a lot.\", \"I also feel the literature on CL is scarce and it does not motivate the choices of the manuscript. Authors should include a more detailed literature on this problem.\", \"Minor comments\", \"In page 3, which is the difference between benchmarks and medical data, as datasets? Public medical data are also benchmarks.\", \"How the z value in eq (6) is found? 
An ablation study to see the impact on the final results would be interesting.\", \"In page 5, when describing the limitations of current methods for saliency map evaluation (\\u2018It remains tricky how to identify,\\u2026.,etc),what does etc mean? Please be more concise on the limitations.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for their time and welcome feedback, which we are incorporating into the revised version.\", \"r\": [\"Yellow color in plots\"], \"a\": \"-- We have changed the yellow colour in Figures 2, 3 and 4 to black. Yellow is now no longer used in any plots.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for their time and welcome feedback, which we are incorporating into the revised version.\", \"r\": [\"Clarity\"], \"a\": \"-- We have fixed the typos in the revised version, thank you: i) The first sentence of the third paragraph in Section 4 now reads: \\u201cFor input images of ..., the averaged weight of evidence matrix is referred to as $\\\\text{WE}_{\\\\bm{i}}(\\\\bm{x}) \\\\in \\\\RR^{\\\\bm{r} \\\\times \\\\bm{c}}$.\\u201d ii) In page 6: \\u201cThe size of the surrounding square \\u2026 is 16 $\\\\times$ 16 pixels.\"}",
"{\"title\": \"A paper with a relevant and interesting contribution that lacks clarity and motivation\", \"review\": \"Summary:\\nIn this paper, the authors propose a framework for continual learning based on explanations for performed classifications of previously learned tasks. In this framework, an average saliency map is computed for all images in the test set of a previous task to identify image regions, which are important for that task. When learning the next task, this average saliency map is used in an attention mechanism to help learning the new task and to prevent catastrophic forgetting of previously learned tasks. Furthermore, the authors propose a new metric for the goodness of a saliency map by taking into account the number of pixels in the map, the average distance between pixels in the map, as well as the prediction probability given only the salient pixels.\\nThe authors report that their approach achieves the best average classification accuracy for 3 out of 4 benchmark datasets compared to other state-of-the-art approaches.\", \"relevance\": \"This work is relevant to researchers in the field of continual/life-long learning, since it proposes a framework, which should be possible to integrate into different approaches in this field.\", \"significance\": \"The proposed work is significant, since it explores a new direction of using learner generated, interpretable explanations of the currently learned task as help for learning new tasks. Furthermore, it proposes a new metric for the goodness of saliency maps.\", \"soundness\": \"In general, the proposed approach of using the average saliency map as attention mask for learning appears to be reasonable. However, the following implicit assumptions/limitations of the approach should be made more clear:\\n\\t- important features for the new task should be in similar locations as important features of the old task (for example, one would expect that the proposed approach would negatively affect learning the new task if the important features of the old task were all located in the bottom of the image, while all important features for the new task are in the top)\\n\\t- the locations for important features should be comparatively stable (for example, one would expect the average saliency map to become fairly meaningless if important features, such as the face of a dog, can appear anywhere in the image. Therefore, an interesting baseline for the evaluation of the ICL approach would be a predefined, fixed attention map consisting of concentric circles with the image center as their center, to show that the proposed approach does more than just deemphasizing the corners of the image)\\n\\nFurthermore, the authors appear to imply that increased FSM values for an old task after training on a new task indicate catastrophic forgetting. While this is a reasonable assumption, it does not necessarily seem to be the case that a larger, more disconnected saliency map indicates worse classification performance. Comparatively small changes in FSM may not affect the classification performance at all, while larger changes may not necessarily lead to worse classifications either. For example, by increasing the amount or size of image regions to be considered, the classifier may accidentally become more robust on an old task. 
Therefore, it may be a good idea for the authors to analyze the correlation between FSM changes and accuracy changes.\", \"evaluation\": \"The evaluation of the proposed approach on the four used datasets appears to be reasonable and well done. However, given that the achieved performance gains over the state-of-the-art are fairly small, it would be good to assess if the obtained improvements are statistically significant. \\nFurthermore, it may be informative to show the saliency maps in Figure 5 not only for cases in which the learner classified the image correctly in both time steps, but also cases in which the learner classified the image correctly the first time and incorrectly the second time. Additionally, the previously mentioned evaluation steps, i.e., using a fixed attention map as baseline for the evaluation and evaluating the correlation between FSM and accuracy may be informative to illustrate the advantages of the proposed approach.\", \"clarity\": \"The paper is clearly written and easy to follow. One minor issue is that the first sentence of the third paragraph in Section 4 is not a full sentence and therefore difficult to understand.\\nFurthermore, on page 6, it is stated that the surrounding square $\\\\hat{x}_i$ is 15 x 15 pixels, while the size of the square $x_i$ is 10 x 10. This appears strange, since it would mean that $x_i$ cannot be in the center of $\\\\hat{x}_i$.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Reasonable approach with good results; incremental novelty\", \"review\": \"This paper proposes an extension to the continual learning framework using existing variational continual learning (VCL) as the base method. In particular, it proposes to use the weight of evidence (WE) (from Zintgraf et al 2017) for each task. Firstly, this WE can be used to visualize the learned model (as used in Zintgraf et. al. 2017). The novelty of this paper is:\\n1. to use this WE from the current task to generate a silence map (by smoothing the WE) for the next task. This is interpreted the learned the learned attention region. Such an approach is named Interpretable COntinual Learning (ICL) \\n2. The paper proposes a metric for the saliency map naming FSM which is an extension of existing metric SSR. The extension is to take pixel count to compute the area instead of using rectangular region area, as well as taking the distance between pixels into account. This metric can be used to evaluate the level of catastrophic forgetting.\", \"pro\": \"In general, the idea is very intuitive and make sense. The paper also demonstrates superior performance with the proposed method on continual learning on all classic tasks comparing with VCL and EWC. \\nThe presentation is very easy to follow. \\nIt seems like a valid and flexible extension that can be used in other continual learning frameworks.\", \"cons\": \"The theoretical contribution is very limited. The work is rather incremental from current state-of-the-art methods. \\nThere should be a better discussion of related work on the topic. The paper currently only mentions the most related work for the proposed method, using the whole section 2 to describe VCL and use section 3 to describe FSM and half of section 5 to describe SSR. A general overview of related work in these directions are needed.\", \"other\": \"1. The paper should also consider more recently proposed evaluation metrics such as discussed in https://arxiv.org/pdf/1805.09733.pdf \\n2. The author should try to avoid using yellow color in plots.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Syx5V2CcFm | Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions | [
"Zaiyi Chen",
"Zhuoning Yuan",
"Jinfeng Yi",
"Bowen Zhou",
"Enhong Chen",
"Tianbao Yang"
] | Although the stochastic gradient descent (SGD) method and its variants (e.g., stochastic momentum methods, AdaGrad) are algorithms of choice for solving non-convex problems (especially deep learning), big gaps still remain between the theory and the practice with many questions unresolved. For example, there is still a lack of convergence theory for SGD and its variants that use a stagewise step size and return an averaged solution in practice. In addition, theoretical insight into why the adaptive step size of AdaGrad could improve on the non-adaptive step size of SGD is still missing for non-convex optimization. This paper aims to address these questions and fill the gap between theory and practice. We propose a universal stagewise optimization framework for a broad family of non-smooth non-convex problems with the following key features: (i) at each stage, any suitable stochastic convex optimization algorithm (e.g., SGD or AdaGrad) that returns an averaged solution can be employed for minimizing a regularized convex problem; (ii) the step size is decreased in a stagewise manner; (iii) an averaged solution is returned as the final solution. Our theoretical results for stagewise AdaGrad exhibit its adaptive convergence and therefore shed insight on its faster convergence than stagewise SGD for problems with slowly growing cumulative stochastic gradients. To the best of our knowledge, these new results are the first of their kind to address the unresolved issues of existing theories mentioned earlier. Besides the theoretical contributions, our empirical studies show that our stagewise variants of SGD and AdaGrad improve the generalization performance of existing variants/implementations of SGD and AdaGrad. | [
"optimization",
"sgd",
"adagrad"
] | https://openreview.net/pdf?id=Syx5V2CcFm | https://openreview.net/forum?id=Syx5V2CcFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xqT4_AyN",
"HJxhO0mfR7",
"H1gh7N1GRX",
"BJxi_yj_67",
"S1e5VAq_Tm",
"SJgxuy1OpQ",
"HJe3T0ADa7",
"r1eJ7CRDaX",
"BJx2sTRDp7",
"rJlhyXZ5nX",
"S1gZiSu_3m",
"SJlOL-OBj7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544615105654,
1542762099947,
1542743075698,
1542135667212,
1542135346374,
1542086503857,
1542086339757,
1542086167088,
1542086052470,
1541178083556,
1541076377210,
1539830095620
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1475/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1475/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1475/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1475/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1475/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1475/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper develops a stagewise optimization framework for solving non smooth and non convex problems. The idea is to use standard convex solvers to iteratively optimize a regularized objective with penalty centered at previous iterates - which is standard in many proximal methods. The paper combines this with the analysis for non-smooth functions giving a more general convergence results. Reviewers agree on the usefulness and novelty of the contribution. Initially there were concerns about lack of comparison with current results, but updated version have addressed this issue. The main weakness is that the results only holds for \\\\mu weekly convex functions and the algorithm depends on the knowledge of \\\\mu. Despite this limitations, reviewers believe that the paper has enough new material and I suggest for publication. I suggest authors to address these issues in the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Thanks for the detailed response\", \"comment\": \"The authors' response clarify the difference between this work and the Natasha paper. My concern is addressed.\"}",
"{\"title\": \"Thanks for authors' clarification\", \"comment\": \"Dear authors,\\n\\nI have read your rebuttals, and the following is my feedback:\\n1) The inequality is correct.\\n2) My concern is resolved.\\n3) Although you tune the \\\\gamma as per the validation error, I still don't think the algorithm and analysis in the paper match the experiment. In practice, people do not make deep learning problem convex at each stage. In spite of this, it is still an interesting paper that tries to analyze the stage-wise learning rate scheme. \\n\\nOverall, I will upgrade my score to 6.\"}",
"{\"title\": \"Thanks R3 for the review\", \"comment\": \"Dear Reviewer 3,\\n\\nThanks for reviewing our paper. \\n\\nWe would like to request you to read our responses that clarify your concerns. To summarize our responses: (i) the inequality (3) is indeed correct; (ii) we did analyze the algorithms employing SGD, and momentum-based SGD; (iii) the value \\\\gamma is selected based on the validation performance, which is a standard approach for setting parameters. \\n\\nPlease take them into account when making the final recommendation. Great Thanks! \\n\\nRegards\\nAuthors\"}",
"{\"title\": \"regarding the choice of \\\\gamma\", \"comment\": \"The underlying true value of \\\\mu is hard to estimate. Instead, we manually tune the value of \\\\gamma according to the validation error. Please also note that we indeed provide an example showing that deep neural network can be a weakly convex function when the activation function is smooth (e.g., sigmoid, exponential linear units, please see Ex. 2 on the page 5).\\n\\nIn our experiments, we use rectified linear function as the activation function because (i) it is widely used; (ii) we would like to demonstrate that our theory is not only good by itself but also provides guidance for the practice. Please note that such an approach (i.e., analyzing a well-behaved problem in theory and doing experiments for more challenging problems in practice) is commonly adopted in the community of deep learning, e.g., Adam (Kingma and Ba, 2015 ICLR), AMSGrad (Reddit et al., 2018 ICLR), and Sinha et al. (2018 ICLR). In contrast, our theory is much more general for non-smooth and non-convex problems. \\n\\nDiederik P. Kingma, Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR 2015. \\n\\nSashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In ICLR 2018. \\n\\nAman Sinha, Hongseok Namkoong, John Duchi. Certifying Some Distributional Robustness with Principled Adversarial Training. In ICLR 2018.\"}",
"{\"title\": \"regarding the generic framework\", \"comment\": \"We did analyze different methods in our framework including SGD (section 4.1), stochastic heavy-ball method (also known as stochastic momentum method) (corresponding to \\\\rho =0 in Algorithm 4, see Section 4.3), and stochastic Nesterov accelerated gradient method (corresponding to \\\\rho=1 in Algorithm 4, see Section 4.3). Indeed, we provide a general convergence theory in Theorem 1 such that any suitable stochastic convex optimization algorithms can be analyzed. For example, for AMSGRAD (a variant of Adam with theoretical guarantee for convex optimization), we can derive a similar convergence result (i.e., 1/epsilon^4 iteration complexity) for our framework employing AMSGRAD as the subroutine. Other methods can be also analyzed in our framework (e.g., RMSProp (Mukkamala & Hei 2018)). We are not clear what does the reviewer mean by acceleration of SGD or momentum SGD. Indeed, this is first work that establishes non-asymptotic convergence of stagewise momentum SGD (similar to algorithms used in practice for deep learning) for non-smooth non-convex problems.\\n\\nMahesh Chandra Mukkamala, Matthias Hein. Variants of RMSProp and Adagrad with Logarithmic Regret Bounds. ICML 2018.\"}",
"{\"title\": \"clarification of (3)\", \"comment\": \"It is indeed correct. Please note that when x=argmin_x f(x), we have \\\\hat x= x and f(\\\\hat x) = f(x). Then the inequality f(\\\\hat x)\\\\leq f(x) is still correct. We have provided a proof of (3) in the Appendix of the revision (Appendix H on page 22), though it has been proved in earlier works. Please take a look.\"}",
"{\"title\": \"Clarification of difference from Natasha and the convergence measure\", \"comment\": \"Thanks for the valuable comments.\", \"q1\": \"Missing reference (Natasha).\", \"a\": \"We have included the discussion about Natasha in the revision (the end of Related Work on page 3). We agree that both papers use the idea of adding a strongly convex regularizer to the objective function. However, this is a commonly used technique. It dates back to the proximal point method proposed in 1970s (e.g., Rockafellar (1970)). The recent works that use this idea for non-convex optimization include Carmon et al. (2016), Allen-Zhu (2017), Lan & Yang (2018) for smooth problems, and Davis & Grimmer (2017) for non-smooth problems. We have discussed the later work in the original submission. In the revision, we add the discussion about other works that add strongly convex regularizer to the objective. The key differences between our paper and the Natasha paper of Allen-Zhu is summarized below:\\na.\\tFirst, Allen-Zhu considers finite-sum problems, and assume the objective function has a smooth component. In contrast, we consider more general stochastic problems without assuming the function is smooth. Please check the Ex. 2 on page 5 for an example of non-smooth and non-convex functions, for which our algorithm is applicable but Natasha is not applicable. \\nb.\\tDue to the strong condition (i.e., finite-sum structure and smoothness) made in the Natasha paper, they are able to get better complexity in terms of epsilon. However, in this paper we focus on how to explain the success of heuristic used in practice for solving deep learning problems, including stagewise step size, averaging and adaptive step size. Our theory covers most commonly used stochastic algorithms used in practice. \\n\\nRockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.\\nYair Carmon, John C. Duchi, Oliver Hinder, and Aaron Sidford. Accelerated methods for non-convex optimization, arXiv, 2016. \\nGuanghui Lan and Yu Yang. Accelerated stochastic algorithms for nonconvex finite-sum and multi-block optimization. CoRR, abs/1805.05411, 2018.\", \"q2\": \"About the choice of the convergence measure.\\nA. First, please note that when the objective function is non-smooth (that is considered in this paper), it is challenging for an iterative algorithm to find a solution x_t such that dist(0, \\\\partial f(x_t))\\\\leq \\\\epsilon. We have given one example in the paper (see the paragraph after eq. (3)). Consider min |x|, for an iterative algorithm that produces a non-optimal solution x_t (that is not zero), then dist(0, \\\\partial f(x_t)) is always 1. Indeed, this observation has been reported in several previous papers for non-smooth and non-convex optimization (Davis & Drusvyatskiy, 2018a; Drusvyatskiy & Paquette, 2018; Davis & Grimmer, 2017). To address this issue, the convergence measure based on the Moreau envelope\\u2019s gradient is used following these papers, which ensures that the found solution x_t is very close to a solution that is epsilon stationary. \\n\\nSecond, the good news. Actually, when the objective function is smooth, the upper bound of the the Moreau envelope\\u2019s gradient\\u2019s norm can be translated to an upper bound of the (projected) gradient\\u2019 norm that is commonly used as a convergence for smooth functions. Please see eqn. (4) and (5) and texts around them in the revision. 
It means that in the smooth case (which is a special case of weakly convex), the convergence of the |\\\\nabla f_\\\\gamma(x_\\\\tau)| indeed transfers to a convergence of |\\\\nabla f(x_\\\\tau)|.\"}",
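For reference, the standard facts behind this answer (as in, e.g., Davis & Drusvyatskiy) can be summarized as follows; notation is generic and assumes f is \mu-weakly convex with \gamma < 1/\mu so that the envelope is well defined:

```latex
% Moreau envelope and proximal point:
f_{\gamma}(x) = \min_{z}\Big\{ f(z) + \tfrac{1}{2\gamma}\|z - x\|^{2} \Big\},
\qquad
\nabla f_{\gamma}(x) = \gamma^{-1}\big(x - \hat{x}\big),
\quad \hat{x} = \mathrm{prox}_{\gamma f}(x).
% A small envelope gradient certifies near-stationarity of a nearby point:
\|\hat{x} - x\| = \gamma\,\|\nabla f_{\gamma}(x)\|,
\qquad
\mathrm{dist}\big(0, \partial f(\hat{x})\big) \le \|\nabla f_{\gamma}(x)\|.
```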
"{\"title\": \"Thanks for liking our paper\", \"comment\": \"Thanks for your interest and the valuable comments on our work.\", \"q1\": \"It will be nice to have more experiments on the ImageNet data set.\", \"a\": \"Thanks for the suggestions. We have corrected the typos in the revision. We will look into the referred papers of stochastic momentum methods carefully and include them in appropriate places in the final version.\", \"q2\": \"Another possible nice experiment will be a comparison of the four stagewise methods.\", \"q3\": \"Missing reference and typos\"}",
"{\"title\": \"Novel idea, Like the paper\", \"review\": \"Summary:\\nThe paper presents an analysis and numerical evaluation of stagewise SGD, ADAGRAD and Stochastic momentum methods for solving stochastic non-smooth non-convex optimization problems.\", \"comments\": \"I find the ideas presented in this paper very interesting. The convergence analysis seems correct and the paper is reasonably well written, and tackles an important problem. \\n\\nThe analysis holds for \\u03bc-weekly convex functions. This assumption is really important for the development of the algorithm and the proposed analysis. I like the fact that the authors provide two examples showing that popular objective functions in machine learning satisfy this assumption.\\n\\nThe numerical evaluation is adequate showing the effectiveness of the proposed stagewise algorithms. However i have the follow suggestions/minor comments:\\n\\n1) It will be nice to have also some plots showing the performance of the proposed method on the ImageNet dataset. \\n2) Another possible nice experiment will be a comparison of the four stagewise methods (SGD,ADAGRAD,SHB,SNAG) on the same dataset. Which one behaves better?\", \"minor_comments\": \"1) The captions of the figures can be more informative (mention also the division by column). First column is SGD, Second column Adagrad, etc.\\n2) Typos: \\nSection 1, last bullet point, second line: \\\"stagwise\\\"\\nSection 5, second paragraph , first line :\\\"their their\\\"\\npage 8, 3 line from the bottom: \\\"seems, indicate\\\"\\n\\n2) Missing reference.\", \"in_the_area_of_stochastic_gradient_methods_with_momentum_many_papers_have_been_proposed_recently_for_the_case_of_convex_optimization_that_worth_to_be_mentioned\": \"Gadat, S\\u00e9bastien, Fabien Panloup, and Sofiane Saadane. \\\"Stochastic heavy ball.\\\" Electronic Journal of Statistics 12.1 (2018): 461-529.\\nLoizou, Nicolas, and Peter Richt\\u00e1rik. \\\"Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods.\\\" arXiv preprint arXiv:1712.09677 (2017).\\nLan, Guanghui, and Yi Zhou. \\\"An optimal randomized incremental gradient method.\\\" Mathematical programming (2017): 1-49.\\n\\nOverall, I suggest to accept this paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Comparison over related work should be clarified. Measure of convergence rate should be justified\", \"review\": \"Non-convex optimization is a hot topic since many machine learning problems can be formulated as non-convex problems. In this paper, the authors propose a universal stage-wise algorithm for weakly convex optimization problems. The idea is to add a strongly convex regularizer centered at an iterate of previous stage to the objective function. This builds a convex function which can be optimized by any standard methods in the convex optimization setting. The authors developed convergence rates in expectation in terms of the gradient of envelope. Empirical results are also reported to show the effectiveness of the method.\", \"comments\": \"(1) The weakly-convex concept considered in this paper is very similar to the bounded non-convexity considered in the paper (Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter) (not cited). In particular, the Natasha paper also developed a multi-stage algorithm for bounded non-convexity optimization problems by adding strongly-convex regularizers centered at iterates of previous stages. The authors should discuss more extensively the related work to clarify their novelty.\\n\\n(2) The convergence rate is measured by $\\\\nabla\\\\phi_\\\\gamma(x_\\\\tau)$. However, according to (3) , this only guarantees an upper bound on $\\\\text{dist}(0,\\\\partial\\\\phi_\\\\gamma(\\\\text{prox}_{\\\\gamma\\\\phi_\\\\gamma}(x_\\\\tau)))$. The output of the algorithm is $x_\\\\tau$ instead of $\\\\text{prox}_{\\\\gamma\\\\phi_\\\\gamma}(x_\\\\tau)$. Is it possible to derive an upper bound on $\\\\text{dist}(0,\\\\partial\\\\phi_\\\\gamma(x_\\\\tau))$?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting attempt trying to analyze the practical learning rate setting of SGD\", \"review\": \"In the paper, the authors try to analyze the convergence of stochastic gradient descent based method with stagewise learning rate and average solution in practice. The paper is very easy to follow, and the experimental results are clear. The following are my concerns:\\n\\n1. In function (3), for any x in R^d, if \\\\hat x = prox_\\\\gamma f (x), then f(\\\\hat x ) <= f(x). This inequality looks not correct to me. If x = argmin_x f(x), the above inequality is obviously wrong. It looks like that function (3) is a very important basis for the whole paper.\\n \\n2. By using the weakly convex assumption and solving f_s, the authors transform a nonconvex nonsmooth problem to a convex problem. However, the paper didn't mention how to select \\\\gamma in the algorithm. This parameter is nontrivial, if you set a small value, the problem is not convex and the analysis does not hold. In the experiment, the authors tune \\\\gamma from 1 to 2000, which means that u < 1 or u < 1/2000. Given neural network is a u-weakly convex problem or u-smooth problem, the theory does not match the experiment. \\n\\n3. The authors propose a universal stagewise optimization framework and mention that the stagewise ADAGRAD obtains faster convergence than other analysis. My question is that, if it is a generic framework, how about the convergence rate for other methods? is there also acceleration for SGD or momentum SGD?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJx9EhC9tQ | Reasoning About Physical Interactions with Object-Oriented Prediction and Planning | [
"Michael Janner",
"Sergey Levine",
"William T. Freeman",
"Joshua B. Tenenbaum",
"Chelsea Finn",
"Jiajun Wu"
] | Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difficult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding without direct supervision of object properties. Our model, Object-Oriented Prediction and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training. | [
"structured scene representation",
"predictive models",
"intuitive physics",
"self-supervised learning"
] | https://openreview.net/pdf?id=HJx9EhC9tQ | https://openreview.net/forum?id=HJx9EhC9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BygDo0-ggE",
"BJxwg4pjAX",
"H1eh77VKR7",
"H1x_6tMgAm",
"rygjPOfgRX",
"HJgCWEMxCm",
"rkglZ8xo2Q",
"Bkgyox9DnQ",
"SyxLdj-IhX",
"H1e1DWaMqm",
"B1exCohM97",
"SkeuI3gb5m",
"SJl7bdRy5Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1544720030563,
1543390191368,
1543222051806,
1542625728452,
1542625379483,
1542624262086,
1541240311999,
1541017750863,
1540918126440,
1538605399406,
1538603975997,
1538489423941,
1538414587296
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1474/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1474/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1474/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1474/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1474/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1474/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- The problem is interesting and challenging\\n- The proposed approach is novel and performs well.\\n\\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- The clarity could be improved\\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nMany concerns were clarified during the discussion period. One major concern had been the experimental evaluation. In particular, some reviewers felt that experiments on real images (rather than in simulation) was needed.\\nTo strengthen this aspect, the authors added new qualitative and quantitative results on a real-world experiment with a robot arm, under 10 different scenarios, showing good performance on this challenging task. Still, one reviewer was left unconvinced that the experimental evaluation was sufficient.\\n\\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nConsensus was not reached. The final decision is aligned with the positive reviews as the AC believes that the evaluation was adequate.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel approach with good performance on interesting and challenging problem; clarity could be improved\"}",
"{\"title\": \"Draft update\", \"comment\": \"We would like to thank the reviewers and commenters for their feedback on our submission. Our revised draft incorporates many of their suggestions. Most importantly:\\n\\n1. We have run our model and planning procedure on a Sawyer robotic arm using real goal images. Results can be found at the following website: https://sites.google.com/view/object-models \\nas well as in the new Section 3.4 and Appendix B of the revision. Our results, robot stacking of up to 9 shapes directly from real images, has not been demonstrated in prior work, regardless of the complexity of those shapes.\\n\\n2. We have given a more precise description of the planning procedure in Algorithm 1 on page 4.\\n\\nOther changes are discussed in the individual responses below.\"}",
"{\"title\": \"Unchanged\", \"comment\": \"I thank the authors for the changes made to the document, which clarify some of my questions.\\nI still think that the experimental part of the paper is too weak for a publication at ICLR at this point.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thorough feedback. We have uploaded a revision to make the model and planning procedure clearer. We will upload a second revision this coming week to include results on a Sawyer robot using real image inputs (qualitative results are given below in a video link).\\n\\n-- Use of segments in lieu of object property supervision\\nWe have made more explicit the use of object segmentation in the Section 2 notation (where we describe the model and planning procedure). The segmentations correspond to only the visible parts of objects. We have clarified this in Section 3.1, where we describe data collection.\\n\\nWe have evaluated our approach on a Sawyer arm using physical blocks to demonstrate applicability to the real world (goo.gl/151BT1). Here we used simple color cues to segment the image observations.\\n\\n-- Planning procedure\\nWe have added an algorithmic description on page 4 (Algorithm 1: Planning Procedure) to make this section clearer. To answer your question about comparing scenes with different number of objects: we match a proposed object to the goal object which minimizes L2 distance in the learned object representations. Some goal objects will be unaccounted for until the last step of the planning algorithm, when there is an action for each object in the goal image. \\n\\nWe have added details of the cross-entropy method (CEM) to Step 4 of Section 2.5. We sampled actions beginning from a uniform distribution and used CEM to update the sampling distribution. We used 5 CEM iterations with 1000 samples per iteration. Because all of the samples could be evaluated in batch-mode, there was little overhead to evaluating a large number of samples. \\n\\n-- Training procedure\\nPer your suggestion, we have moved the training procedure to come after the model description and before the planning algorithm.\\n\\n-- Clarification on rendering module\\nWe use a weighted average for composing individual object images so that the rendering process is fully differentiable. This design decision makes end-to-end training of the perception, physics, and rendering modules easier.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your thorough feedback. To address your comment about experiments in a real-world environment, we have tested our model on a Sawyer robot with real camera images. A representative video can be found here:\\ngoo.gl/151BT1\\nWe will update the paper to include these results this coming week. Additionally, we have already updated the paper to make the model and planning procedure clearer. Below, we describe some of these changes. \\n\\n-- Clarification on object encodings\\nWe have explained more thoroughly that the object encodings are not supervised directly to have semantically meaningful components like position or orientation. As compared to most prior work on object-factorized representations, we do not assume access to ground truth properties for the objects. This is why the perception module cannot be trained independently; we have no supervision for its outputs. Instead, we train the perception, graphics, and physics modules jointly to reconstruct the current observations and predict the subsequent observation (Figure 2c). In this way, the object representations come to encode these attributes without direct supervision of such properties. Of course, learning representations via a reconstruction objective is not unique to our paper; what we show is that these representations can be sufficient for planning in physical understanding tasks.\\n\\n-- Relation to prior work\\nThe most relevant works about learning physics and rendering you might be referring to are Neural Scene De-rendering (NSD) and Visual Scene De-animation (VDA). These works learn object encodings by direct supervision of properties like position and orientation. As discussed in the previous section, we weaken the requirement for ground-truth object properties, instead requiring only segments instead of attribute annotations. We previously cited VDA and have now added NSD along with a short description of this supervision difference. \\n\\nSE3-Nets used point cloud shape representations, whereas we use learned representations driven by an image prediction task. \\n\\n-- Generalization to real physical interactions\\nWe have now demonstrated our model in the physical world (see video link given above). \\n\\n-- No-physics baseline, SAVP\\nYes, it is not surprising that a model which did not predict physics did not perform well. We included this model as an ablation because we can better understand how our full model makes decisions by comparing it to the physics-ablated version, as in Figure 6. The SAVP baseline takes in a previous frame in the form of an object sample, similar to how our model views a sample by rendering an object mid-air, allowing for a head-to-head comparison to a black-box frame prediction approach.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your feedback and suggestions. We have updated the paper to make the planning algorithm clearer, give short descriptions of CEM and perceptual losses, and incorporate your terminology suggestions (\\u2018rectangular cuboid\\u2019, \\u2018unary\\u2019, \\u2018binary\\u2019, etc). At the request of other reviewers, we have also tested our approach on a physical Sawyer robot. The following video gives a qualitative result analogous to Figure 4:\\ngoo.gl/151BT1\\nThese results will be included in the paper in a second revision this week. Below, we give more details about the current changes.\\n\\n-- Evaluation on downstream tasks \\nDownstream task results were in the original submission (all Figures after 3 and Table 1); we have updated the paper to better differentiate between image prediction results in isolation and the use of our model\\u2019s predictions in a planning procedure to build towers. \\n\\nFigure 4 shows qualitative results on this building task, and Table 1 gives quantitative results. Figures 5 and 6 give some analysis of the procedure by which our model selects actions. Figure 7 briefly shows how our model can be adapted to other physics-based tasks: stacking to maximize height, and building a tower to make a particular block stable. \\n\\n-- Planning algorithm\\nWe have added a more precise algorithmic description on page 4 to make the tower-building procedure clearer (Algorithm 1: Planning Procedure).\\n\\n-- Oracle models\\nWe have added a sentence to the Table 1 caption to explain why O2P2 outperforms Oracle (pixels). The Oracle (pixels) model has access to the true physics simulator which generated the data, but not an object-factorized cost function. Instead, it uses pixel-wise L2 over the entire image (Section 3.2). The top row of Figure 4 is illustrative here: the first action taken by Oracle (pixels) was to drop the blue rectangular cuboid in the bottom left to account for both of the blue cubes in the target. Our model, despite having a worse physics predictor, performs better by virtue of its object factorization. \\n\\n-- Figure 4 clarification\\nWe have updated the caption of Figure 4 and changed some text in the graphic. Figure 4 shows qualitative results on the tower building task described above. We show four goal images (outlined in green), and the towers built by each of five methods. This figure has a few utilities:\\n 1. It illustrates what our model\\u2019s representations capture well for planning and what they do not. For example, most mistakes made by our model concern object colors. This suggests that object positions are more prominently represented by our model\\u2019s representations than color. \\n 2. It shows why an object-factorization is still useful even if one has access to the \\u201ctrue\\u201d physics simulator (as discussed in the previous question).\\n 3. It shows that the types of towers being built in the downstream task are not represented in the training set of the perception, graphics, and physics modules (depicted in Figure 3, where we show reconstruction and prediction results). The object-factorized predictions allow our model to generalize out of distribution more effectively than an object-agnostic video prediction model (Table 1). \\n\\n-- Reinforcement learning baseline\\nWe have found that a PPO agent works poorly on this task, possibly due to the high dimensionality of the observation space (raw images). 
We will continue to try to get this baseline to work for the next revision, and would be happy to try out any other RL algorithms that the reviewer might suggest.\"}",
"{\"title\": \"Very good idea, lacks in presentation/formalization and in experimental evaluation\", \"review\": \"A method is proposed, which learns to reason on physical interactions of different objects (solids like cuboids, tetrahedrons etc.). Traditionally in related work the goal is to predict/forecast future observations, correctly predicting (and thus learning) physics. This is also the case in this paper, but the authors explicitly state that the target is to evaluate the learned model on downstream tasks requiring a physical understanding of the modelled environment.\\n\\nThe main contribution here lies in the fact that no supervision is used for object properties. Instead, a mask predictor is trained without supervision, directly connected to the rest of the model, ie. to the physics predictor and the output renderer. The method involves a planning phase, were different objects are dropped on the scene in the right order, targeting bottom objects first and top objects later. The premise here is that predicting the right order of the planning actions requires understanding the physics of the underlying scene.\\n\\nI particularly appreciated the fact, that object instance renderers are combined with a global renderer, which puts individual images together using predicted heatmaps for each object. With a particular parametrization, these heatmaps could be related to depth maps allowing correct depth ordering, but depth information has not been explicitly provided during training.\", \"important_issues\": \"One of the biggest concerns is the presentation of the planning algorithm, and more importantly, a proper formalization of what is calculated, and thus a proper justification of this part. The whole algorithm is very vaguely described in a series of 4 items on page 4. It is intuitively almost clear how these steps are performed, but the exact details are vague. At several steps, calculated entities are \\u201ccompared\\u201d to other entities, but it is never said what this comparison really results in. The procedure is reminiscent of particle filtering, in that states (here: actions) are sampled from a distribution and then evaluated through a likelihood function, resulting in resampling. However, whereas in particle filtering there is clear probabilistic formalization of all key quantities, in this paper we only have a couple of phrases which describe sampling and \\u201ccomparisons\\u201d in a vague manner.\\n\\nSince the procedure performs planning by predicting a sequence of actions whose output at the end can be evaluated, thus translated into a reward, I would have also liked a discussion (or at least a remark) why reinforcement learning has not been considered here.\\n\\nI am also concerned by an overclaim of the paper. As opposed to what the paper states in various places, the authors really only evaluate the model on video prediction and not on other downstream tasks. A single downstream task is very briefly mentioned in the experimental section, but it is only very vaguely described, it is unclear what experiments have been performed and there is no evaluation whatsoever.\", \"open_questions\": \"Why is the proposed method better than one of the oracles?\", \"minor_remarks\": \"It is unclear what we see in image 4, as there is only a single image for each case (=row) and method (=column). \\n\\nThe paper is not fully self-contained. Several important aspects are only referred to by citing work, e.g. CEM sampling and perceptual loss. 
These are concepts which are easy to explain and which do not take much space. They should be added to the paper.\\n\\nA threshold is mentioned in the evaluation section. A plot should be given showing the criterion as a function of this threshold, as is standard in, for instance, the pose estimation literature.\\n\\nI encourage the authors to use the technical terms \\u201cunary terms\\u201d and \\u201cbinary terms\\u201d in the equation in section 2.2. This is how the community referred to interactions in graphical models for relational reasoning long before deep learning showed up on the horizon; let\\u2019s be consistent with the past.\\n\\nI do not think that the physics module can reasonably be called a \\u201cphysics simulator\\u201d, as has been done throughout the paper. It does not simulate physics; it predicts physics after learning, which is not a simulation.\\n\\nA cube has not been confused with a rectangle, as mentioned in the paper, but with a rectangular cuboid. A rectangle is a 2D shape; a rectangular cuboid is a 3D polyhedron.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Intriguing idea, but the paper is not sufficiently clear, and the experimental evaluation is weak.\", \"review\": \"Summary:\\nThe paper presents a platform for predicting images of objects interacting with each other under the effect of gravitational forces. Given an image describing the initial arrangement of the objects in a scene, the proposed architecture first detects the objects and encode them using a perception module. A physics module then predicts the final arrangement of the object after moving under the effects of gravity. A rendering module takes as input the predicted final positions of objects and returns an image. The proposed architecture is trained by using pixel labels only, by reducing the gaps between the predicted rendered images and the images returned by the MuJuCo physics engine. This error's gradient is back-propagated to the physics and perception modules. The proposed platform is also used for planning object placements by sampling a large number of object shapes, orientations and colors, predicting the final configurations, and selecting initial placements that lead to final configurations that are as close as possible to given goal configurations using the L2 norm in the VGG features. Experiments performed in a simple blocks world show that the proposed approach is not only useful for prediction, but can also be used for planning object placements.\", \"clarity\": \"The paper is not very well written. The description of the architecture should be much more precise. Some details are given right before the conclusion, but they are still just numbers and leave a lot of questions unanswered. For instance, the perception module is explained in only a few line in subsection 2.1. Some concrete examples could help here. How are the object proposals defined? How are the objects encoded? What exactly is being encoded here? Is it the position and orientation?\", \"originality\": \"The proposed architecture seems novel, but there are many closely related works that are based on the same idea of decomposing the system into a perception, a physics simulation, and a rendering module. Just from the top of my head, I can think of the SE3-Nets. There is also a large body of work from the group of Josh Tanenbaum on similar problems of learning physics and rendering. I think this concept is not novel anymore and the expectations should be raised to real applications.\", \"significance\": \"The simplicity of the training process that is fully based on pixel labeling makes this work interesting. There are however some issues related to the experimental evaluation that remains unsatisfactory. First, all the experiments are performed on a single benchmark, we cannot easily draw conclusions about a given algorithm based on a single benchmark. Second, this is a toy benchmark that with physical interactions that are way less complex than interactions that happen between real objects. The objects are also not diverse enough in their appearances and textures. I wonder why the authors avoided collecting a dataset of real images of objects and using it to evaluate their algorithm instead of the toy artificial data. I also suspect that with 60k training images, you can easily overfit this task. How can this work generalize to real physical interactions? How can you capture mass and friction, for example?\\nPlanning is based on sampling objects of different shapes and colors, do you assume the existence of such library in advance? 
\\nThe baselines that are compared to are also not very appropriate. For instance, comparing to no physics does not add much information. We know that the objects will fall after they are dropped, so the \\\"no physics\\\" baseline will certainly perform badly. Comparisons to SAVP are also unfair because it requires previous frames, which are not provided here, and SAVP is typically used for predicting the very next frames and not the final arrangements of objects, as done here.\", \"in_summary\": \"I think the authors are onto something here and the idea is great. However, the paper needs to be made much clearer and more precise, and the experimental evaluation should be improved by performing experiments in a real-world environment. Otherwise, this paper will not have much impact.\", \"post_rebuttal_update\": \"The paper was substantially improved. New experiments using real objects have been included; this clearly demonstrates the merits of the proposed method in robotic object manipulation.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper; some assumptions should be stated more clearly\", \"review\": \"edit: the authors nicely revised the submission, I think it is a very good paper. I increased my rating.\\n\\n-----\\n\\nThis paper presents a method that learns to reproduce 'block towers' from a given image. A perception model, a physics engine model, and a rendering engine are first trained together on pairs of images.\\nThe perception model predicts a representation of the scene decomposed into objects; the physics engine predicts the object representation of a scene from an initial object representation; the rendering engine predicts an image given an object representation.\\n\\nEach training pair of images is made of the first image of a sequence when introducing an object into a scene, and of the last image of the sequence, after simulating the object's motion with a physics engine. The 3 parts of the pipeline (perception, physics, rendering) are trained together on this data.\\n\\nTo validate the learned pipeline, it is used to recreate scenes from reference images, by trying to introduce objects in an empty scene until the given scene can be reproduced. It outperforms a related pipeline that lacks a scene representation based on objects.\\n\\nThis is a very interesting paper, with new ideas:\\n- The object-based scene representation makes a lot of sense, compared to the abstract representation used in recent work. \\n- The training procedure, based on observing the result of an action, is interesting as the examples are easy to collect (except for the fact that the ground truth segmentation of the images is used as input, see below).\\n\\nHowever, there are several things that are swept 'under the carpet' in my opinion, and this should be fixed if the paper is accepted.\\n\\n* the input images are given in the form of a set of images, one image corresponding to the object segmentation. This is mentioned only once (briefly) in the middle of the paragraph for Section 2.1, while this should be mentioned in the introduction, as this makes the perception part easier. There is actually a comment in the discussion section and the authors promised to clarify this aspect, which should indeed be more detailed. For example, do the segments correspond to the full objects, or only the visible parts?\\n\\n* The training procedure is explained only in Section 4.1. Before reaching this part, the method remained very mysterious to me. The text in Section 4.1 should be moved much earlier in the paper, probably between current sections 2.3 and 2.4, and briefly explained in the introduction as well.\", \"this_training_procedure_is_in_fact_fully_supervised___which_is_fine_with_me\": [\"Supervision makes learning 'safer'. 
What is nice here is that the training examples can be collected easily - even if the system was not running in a simulation.\", \"If I understand the planning procedure correctly, it proceeds as follows:\", \"sampling 'actions' that introduce 1 object at a time (?)\", \"for each sampled action, predicting the scene representation after the action is performed, by simulating it with the learned pipeline,\", \"keeping the action that generates a scene representation close to the scene representation computed for the goal image of the scene.\", \"performing the selected action in a simulator, and iterating until the number of performed actions is the same as the number of objects (which is assumed to be known).\", \"-> how do you compare the scene representation of the goal image and the predicted one before the scene is complete? Don't you need some robust distance instead of the MSE?\", \"-> are the actions really sampled randomly? How many actions do you need to sample for the examples given in the paper?\"], \"i_also_have_one_question_about_the_rendering_engine\": \"Why use the weighted average of the object images? Why not use the intensity of the object with the smallest predicted depth? It should generate sharper images. Does using the weighted average make the convergence easier?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
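A hedged sketch of the sampling-based placement planner as summarized in the review above; every helper passed in here (sample_action, simulate_pipeline, execute, distance) is a hypothetical stand-in for the learned perception/physics/rendering pipeline and the simulator, and the number of objects is assumed known, as the review notes.

    # Hypothetical sketch of the planning loop; not the authors' code.
    def plan(goal_repr, num_objects, sample_action, simulate_pipeline,
             execute, distance, num_samples=100):
        performed = []                                    # actions executed so far
        for _ in range(num_objects):
            candidates = [sample_action() for _ in range(num_samples)]
            # score each candidate by rolling it out with the learned pipeline
            best = min(candidates,
                       key=lambda a: distance(simulate_pipeline(performed + [a]),
                                              goal_repr))
            performed.append(best)
            execute(best)                                 # perform it in the simulator
        return performed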
"{\"title\": \"Implementation details\", \"comment\": \"Thank you for your questions. We will include an appendix with more implementation details (currently in Section 5) in the next version. In the meantime, here we describe the reconstruction process in more depth.\\n\\n1. The perception network has four convolutional layers (32, 64, 128, 256 channels) with ReLU nonlinearities followed by a fully connected layer. It predicts a set of object representations given an image at t=0:\\n\\n o_0 = f_percept(I_0)\\n\\n2. The physics engine consists of a pairwise interaction MLP and single-object transition MLP, each with two hidden layers. It predicts object representations at the next timestep given an initial configuration: \\n\\n o_1 = f_physics(o_0)\\n\\n(To see f_physics broken down into separate terms for the two MLPs, see Section 2.2)\\n\\n3. The rendering engine has two networks, which we will call f_image and f_heatmap. For each object o_{t,i} in a set of objects o_t at timestep t, f_image predicts a three-channel image and f_heatmap predicts a single-channel heatmap. We render each object separately with f_image and then combine these images by a weighted averaging over objects, where the weights come from the negatives of the heatmaps (passed through a nonlinearity). More precisely, denoting the heatmaps at time t for all objects as\\n\\n\\tH_t = softmax( -f_heatmap(o_t) ),\\n\\nthe j^th pixel of the predicted composite image is then:\\n\\n \\\\hat{I}_{t,j} = \\\\sum_i f_image( o_{t,i} )_j * H_{t,i,j},\\n\\nwhere H_{t,i,j} is the j^th pixel of the heatmap for the i^th object at time t. \\n\\nBoth networks have a single fully-connected layer followed by four deconvolutional layers with ReLU nonlinearities. f_image has (128, 64, 32, 3) channels and f_heatmap has (128, 64, 32, 1) channels. From here on, we will use f_render to describe this entire process:\\n\\n \\\\hat{I}_t = f_render(o_t)\\n\\nThe equations here risk making all of this seem more complicated than it really is. The high-level picture is that we need a way to produce a single image from a set of objects, so we render each object separately and then take a weighted average over the individual images in something that could be thought of as a soft depth pass. \\n\\n4. Reconstructing an image at the observed timestep then looks a lot like an auto-encoder:\\n\\n\\t\\\\hat{I}_0 = f_render( f_percept(I_0) )\", \"reconstructing_an_image_at_the_next_timestep_uses_the_physics_engine_in_between\": \"\\\\hat{I}_1 = f_render( f_physics( f_percept(I_0) ) )\\n\\t\\nThese equations are reflected in the loss functions on page 6. (For example, the physics engine is only trained via the loss from reconstructing I_1, since it is not used in reconstructing I_0.) We used ground truth object segments in our experiments, which we discuss in the answer to the question on 10/02/2018.\"}",
"{\"title\": \"Comparisons and training clarifications\", \"comment\": \"Thank you for your detailed feedback.\\n\\n1. This is a good point. We cite Neural Expectation Maximization (N-EM; Greff et al, 2018) when discussing disentangled object representations, but Relational NEM (R-NEM) is indeed more relevant because it incorporates physical interactions into the model. It is our understanding that the R-NEM code works only on binary images of 2D objects, whereas we consider color images of 3D objects. R-NEM focuses on disentangling objects completely unsupervised, so does not use object segments but is evaluated on simpler inputs. In comparison, we focus on using object representations for downstream tasks, so assume an accurate preprocessing step to give segments but use our object representations in contexts other than prediction (like block stacking). \\n\\nThese works tackle complementary pieces of the same larger problem, and one could imagine a full pipeline using something like R-NEM to discover segments to feed into our method for planning action sequences with learned object representations. We will add this discussion to the next version of the paper. \\n\\n2 Yes, we outline the segmented images in orange in Figure 2c because we are using ground truth object segments. We assume we have access to this preprocessing at both train and test time. We will make this more clear in the main text.\\n\\n3. The rendered scene has a few forward-facing lights at about two-thirds of the image\\u2019s height, so most objects appear a bit brighter before they are dropped. You can also see this happening in Figure 6. \\n\\n4. We train the model to reconstruct images at both t=0 and t=1 given the observation at t=0. The loss for the image at t=0 is equation (1) on page 6:\\n\\n L_2(\\\\hat{I}_0, I_0) + L_vgg(\\\\hat{I}_0, I_0),\\n\\nwhere L_vgg is a perceptual loss in the feature space of the VGG network. The analogous loss for t=1 is equation (2). As you mention, reconstructing the t=0 image essentially amounts to bypassing the physics engine. A more complete description is given in the last paragraph on page 5.\\n\\nPlease let us know if you have any follow-up questions.\"}",
"{\"comment\": \"Dear authors, I think conceptually that this is a very nice paper and I like the choice of experiments. I have just a few comments:\\n\\n(1) The authors say that: \\u201cExisting works that have investigated the benefit of using objects have either assumed that an interface to an idealized object space already exists or that supervision is available to learn a mapping between raw inputs and relevant object properties (for instance, category, position, and orientation).\\u201d\\n\\nThe following paper is very relevant and they don\\u2019t make either of the assumptions that the authors state in their paper (quoted above). RELATIONAL NEURAL EXPECTATION MAXIMIZATION: UNSUPERVISED DISCOVERY OF OBJECTS AND THEIR INTERACTIONS - Steenkiste et al. ICLR 2018. Steenkiste et al. automatically learn to segment objects and predict the physics across multiple time steps. A detailed comparison between the authors' model and that of Steenkiste et al. would make the authors contributions more clear.\\n\\n(2) Could you please clarify if *ground truth* object segments are fed into the Perception model? If *ground truth* object segments are used, this should be made more clear. (The last line in the caption of Figure 2 is not sufficient to make this clear in the main text).\\n\\n(3) Very minor, but in Figure 2, the yellow triangle appears to change colour, to green.\\n\\n(4) How exactly is the model is Figure 2 trained? Is it trained to predict t=1 given t=0? If so how are reconstructions in Figure 3, for t=0 obtained? Is the physics engine bypassed to obtain a reconstruction for t=0? This is not clear. Is the reconstruction (error) for t=0 used to train the model? It is not clear what loss functions are used for training?\\n\\n(5) Figure 5 and section 4.3 are really nice!\", \"title\": \"Nice paper, just a few comments\"}",
"{\"comment\": \"This is a very nice paper! However, I wish it included some more details of the implementation (perhaps a future revision could include an appendix?) For example, how did you get the region proposals/segmentation for each video frame? What exactly are the equations involved in the reconstruction process?\", \"title\": \"More implementation details\"}"
]
} |
|
BkltNhC9FX | Posterior Attention Models for Sequence to Sequence Learning | [
"Shiv Shankar",
"Sunita Sarawagi"
] | Modern neural architectures critically rely on attention for mapping structured inputs to sequences. In this paper we show that prevalent attention architectures do not adequately model the dependence among the attention and output tokens across a predicted sequence.
We present an alternative architecture called Posterior Attention Models that, after a principled factorization of the full joint distribution of the attention and output variables, proposes two major changes. First, the position where attention is marginalized is changed from the input to the output. Second, the attention propagated to the next decoding stage is a posterior attention distribution conditioned on the output. Empirically, on five translation and two morphological inflection tasks, the proposed posterior attention models yield better BLEU scores and alignment accuracy than existing attention models. | [
"posterior inference",
"attention",
"seq2seq learning",
"translation"
] | https://openreview.net/pdf?id=BkltNhC9FX | https://openreview.net/forum?id=BkltNhC9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkl0IAVxgN",
"HkeNhpWNyE",
"H1lZgtGmyN",
"r1lWSVVj0Q",
"H1lBWBQcCQ",
"HkeaY7QqR7",
"r1xb9bQ9AX",
"B1l0ox750X",
"ByebxyX9C7",
"SylcXSSt6m",
"HyezVpjwpm",
"rJlU-8qJTX",
"B1xBIw_93X",
"BJe8NreK2m",
"HklfYhUe2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544732245527,
1543933356116,
1543870696739,
1543353400950,
1543283964795,
1543283588565,
1543283080812,
1543282854207,
1543282408586,
1542178082109,
1542073641897,
1541543421809,
1541207885173,
1541109037960,
1540545658467
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1473/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1473/AnonReviewer1"
],
[
"~Yoon_Kim1"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1473/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"~Yoon_Kim1"
],
[
"ICLR.cc/2019/Conference/Paper1473/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1473/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1473/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers of this paper agreed that it has done a stellar job of presenting a novel and principled approach to attention as a latent variable, providing a new and sound set of inference techniques to this end. This builds on top of a discussion of the limitations of existing deterministic approaches to attention, and frames the contribution well in relation to other recurrent and stochastic approaches to attention. While there are a few issues with clarity surrounding some aspects of the proposed method, which the authors are encouraged to fine-tune in their final version, paying careful attention to the review comments, this paper is more or less ready for publication with a few tweaks. It makes a clear, significant, and well-evaluate contribution to the field of attention models in sequence to sequence architectures, and will be of great interest to many attendees at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"One of the better papers at the conference\"}",
"{\"title\": \"\", \"comment\": \"Thanks for the suggestion. We will take this into account and contextualize better in the next draft.\"}",
"{\"title\": \"Thanks for your response\", \"comment\": \"I thank the authors for improving the clarity of the model derivation and updating the paper to mention related work and alternative derivations. I agree that the author's formulation provides novel and interesting insights. However, I would just like the final version of the paper to be more explicit - preferable in both the introduction and model derivation - about the relation of their models to the latent/hard attention models that have been discussed here. Just mentioning these papers in the related work section is not sufficient to fully contextualize this work (as was asked for by the other reviewers and commenters as well). Mentioning that these models are essentially neural generalizations of the classical IBM alignment models (Brown et al., 1993) is also helpful for contextualization.\"}",
"{\"comment\": \"Thanks for the detailed response!\", \"title\": \"thanks!\"}",
"{\"title\": \"\", \"comment\": \"We have rewritten Section 2.2 of the paper, which simplifies the presentation and makes the need for posterior attention more obvious. The network architecture and connections are the same as standard soft attention model. The difference is entirely on how attention is computed.\"}",
"{\"title\": \"Author response to reviewer\", \"comment\": \"We thank the reviewer for their feedback.\\nWe have rewritten the derivation of our factorization and made the assumptions clearer in Section 2.2 .\\nSection 2.2.1 has also been revised describing the different variants and their intuition, deriving them all from Eqn 4.\\nWe have also fixed some notational discrepancies as pointed out by the reviewer for which we are thankful.\\n\\nQA\\n1)\\nWe have rewritten that section, but the simplification comes about because of the Markovian assumption that P(a_t|a_{<t}) = P(a_t|a_{t-1}). This makes \\\\sum_{a_{t-1}} P(a_t|a_{<t})P(a_{<t}|y_{<t}) = \\\\sum_{a_{t-1}} P(a_t|a_{<t}) P(a_{t-1}|y_{<t}). \\n\\n\\n2)\\nThe Taylor trick was used by [1] to simplify the expectation computation. Essentially if the average value of a function is computed at different points, one can compute the Taylor expansion of the function at average of the points leaving only second order terms.\\n\\n\\\\Sigma f(x_i) = \\\\Sigma f( xm + x_i - xm) = \\\\Sigma [ f(xm) + f\\u2019(xm)(x_i - xm) + second order terms ] = \\\\Sigma f(xm) + df(xm)\\\\Sigma(x_i - xm) + second order = \\\\Sigma f(xm) + df(xm)*0 + second order \\\\approx \\\\Sigma f(xm)\\n\\n3)\\ns_t is the decoder state after feeding in output y_{t-1} and attention at step {t-1}. Like in standard seq2seq literature, we rely on the decoding RNN state to capture the dependence on history of output tokens. Under the assumption that y_t depends directly on attention 'a' at t and previous tokens, we use the decoder state s_t and the encoder state x_{a}. Indeed as pointed 'j' was a typo.\\n\\n4)\\nThe main difference between the prior-joint and postr-joint model is which attention gets propagated further down. The prior-joint model behaves analogously to the standard soft-attention in ignoring any interaction between output and attention. In fact, it is a version of an IBM model 1. We have expanded on this in Section3 paragraph 7 and Section4 paragraph 1\\n\\n[1] Xu et al; Show, attend and tell: Neural image caption generation with visual attention , 2015\"}",
"{\"title\": \"Author response to reviewer\", \"comment\": \"We thank the reviewer for the comments.\\nIn light of comments about some of the notation and description from all reviewers, we have revised the model description considerably. We have also fixed some notational inconsistencies as pointed out.\\n\\nWe have also revised Section 2.2.1 to better explain the formula and intuition of the coupling energies.\"}",
"{\"title\": \"Author response to reviewer\", \"comment\": \"We thank the reviewer for the feedback. We have discussed the papers mentioned by you and other reviewers in the Related work section, and also added new empirical comparisons.\\n\\nWe are also very grateful for suggesting the alternative derivation. We have added a discussion regarding your suggestion in Section 2.4 . We have also simplified our derivation by explicitly stating and pulling up the\\nMarkov assumption about attention dependencies earlier.\\n\\n\\nThe prior joint model is indeed related to a neural IBM model 1, and has been used in multiple recent works as also pointed out by Yoon Kim.\\n\\nFrom an efficiency perspective, the various posterior attention models are only marginally slower than prior-joint which does the more compute-intensive part of calculating P(y_t) for each of the top-K attentions. Thereafter, for tasks like translation, the coupled attention computation almost comes for \\\"free\\\". In fact, we observed no measurable difference in the average time per step between the two models\\n\\nMost seq2seq models rely upon attention feeding at all timesteps, and so we had not experimented with that model. We are providing some of the results of the experiment in the response here.\\n Dataset B=4 B=10\\nde-en 28.8 28.6\\nen-de 24.0 23.9\\nen-vi 26.9 26.6\\n\\nThese numbers are roughly on par with soft-attention and show the importance of feeding the attention context.\\n\\nWe also ran some experiments with the suggestion of feeding the prior attention, which are as follows\\n B=4 B=10\\nen-vi 27.3 27.0\\nvi-en 25.7 25.7\\n\\nThese results are similar to or slightly worse than the prior-joint model. We are currently in the process of evaluating this on more tasks.\"}",
"{\"title\": \"\", \"comment\": \"1)\\nYes, we have used the straight through estimator. On our larger datasets we were not able to do full enumeration because of memory constraint. For En-Vi we can run the exact enumeration and for that task the top-k marginalization reduced time per-step by around 50\\\\% (0.354s vs 0.655s per step) and the required memory by a factor of 4 with very minor impact on BLEU\\n\\n2)\\nWe thank you for giving pointers to related work. The reviewers also pointed similar works. We have discussed them in the Related Work section of the revised version. Also, we have included some experimental comparisons with all of these.\"}",
"{\"comment\": \"The contribution is interesting, but besides the experimental part is a little bit too dry. The paper would immensely benefit of a more high level description and insights about the architecture proposed, as well as a graphical representation (such as a block diagram) to make the architecture understandable at a first glance.\", \"title\": \"More high level insights needed\"}",
"{\"comment\": \"Yes it'd be nice to see a comparison of this work to (Deng et al., 2018) which also models attention as a latent variable and has released code here: https://github.com/harvardnlp/var-attn\", \"title\": \"Empirical Comparison to (Deng et al., 2018) ?\"}",
"{\"comment\": \"- There are several recent works that have also formalized attention as a latent variable and have exactly/approximately optimized the log marginal likelihood. It would be great to see this work put in context of existing work!\\n\\nWu et al. Hard Non-Monotnic Attention for Character-Level Transduction. EMNLP 2018\\nShankar et al. Surprisingly Easy Hard-Attention for Sequence to Sequence Learning. EMNLP 2018.\\nDeng et al. Latent Alignment and Variational Attention. NIPS 2018.\", \"question\": [\"How do you differentiate through the top-K approximation? Do you use the straight through estimator? How much faster was top K vs actually enumerating?\"], \"title\": \"one question/comment\"}",
"{\"title\": \"Very interesting contribution\", \"review\": \"This paper proposes a new sequence to sequence model where attention is treated as a latent variable, and derive novel inference procedures for this model. The approach obtains significant improvements in machine translation and morphological inflection generation tasks. An approximation is also used to make hard attention more efficient by reducing the number of softmaxes that have to be computed.\", \"strengths\": [\"Novel, principled sequence to sequence model.\", \"Strong experimental results in machine translation and morphological inflection.\"], \"weaknesses\": [\"Connections can be made with previous closely related architectures.\", \"Further ablation experiments could be included.\"], \"the_derivation_of_the_model_would_be_more_clear_if_it_is_first_derived_without_attention_feeding\": \"The assumption that output is dependent only on the current attention variable is then valid. The Markov assumption on the attention variable should also be stated as an assumption, rather than an approximation: Given that assumption, as far as I can tell the (posterior) inference procedure that is derived is exact: It is indeed equivalent to the using the forward computation of the classic forward-backward algorithm for HMMs to do inference.\\nThe model\\u2019s overall distribution can then be defined in a somewhat different way than the authors\\u2019 presentation, which I think makes more clear what the model is doing:\\np(y | x) = \\\\sum_a \\\\prod_{t=1}^n p(y_t | y_{<t}, x, a_t) p(a_t | y_{<t}, x_ a_{t-1}). \\nThe equations derived in the paper for computing the prior and posterior attention is then just a dynamic program for computing this distribution, and is equivalent to using the forward algorithm, which in this context is:\\n \\\\alpha_t(a) = p(a_t = a, y_{<=t}) = p(y_t | s_t, a_t =a) \\\\sum_{a\\u2019} \\\\alpha_{t-1}(a\\u2019) p(a_t = a | s_t, a_{t-1} = a\\u2019) \\n\\nThe only substantial difference in the inference procedure is then that the posterior attention probability is fed into the decoder RNN, which means that the independence assumptions are not strictly valid any more, even though the structural assumptions are still encoded through the way inference is done. \\n[1] recently proposed a model with a similar factorization, although that model did not feed the attention distribution, and performed EM-like inference with the forward-backward algorithm, while this model is effectively computing forward probabilities and performing inference through automatic differentiation.\\n\\nThe Prior-Joint variant, though its definition is not as clear as it should be, seems to be assuming that the attention distribution at each time step is independent of the previous attention (similar to the way standard soft attention is computed) - the equations then reduce to a (neural) version of IBM alignment model 1, similar to another recently proposed model [2]. These papers can be seen as concurrent work, and this paper provides important insights, but it would strengthen rather than weaken the paper to make these connections clear. \\n\\nThe results clearly show the advantages of the proposed approach over soft and sparse attention baselines. However, the difference in BLEU score between the variants of the prior or posterior attention models is very small across all translation datasets, so to make claims about which of the variants are better, at a minimum statistical significance testing should be done. 
Given that the \\u201cPrior-Joint\\u201d model performs competitively, is it computationally more efficient than the full model? \\n\\nThe main missing experiment is not doing attention feeding at all. The other experiment that is not included (as I understood it) is to compute prior and posterior attention, but feed the prior attention rather than the posterior attention. \\n\\nThe paper is mostly written very clearly; there are just a few typos and grammatical errors in sections 4.2 and 4.3. \\n\\nOverall, I really like this paper and would like to see it accepted, although I hope that a revised version will make the assumptions the model is making clearer, as well as the connections to related models. \\n \\n[1] Neural Hidden Markov Model for Machine Translation, Wang et al, ACL 2018. \\n[2] Hard Non-Monotonic Attention for Character-Level Transduction, Wu, Shapiro and Cotterell, EMNLP 2018.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
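A minimal NumPy sketch of the forward recursion written out in the review above, assuming the emission probabilities p(y_t | s_t, a_t) and transition probabilities p(a_t | s_t, a_{t-1}) have already been computed by the network:

    import numpy as np

    def log_marginal_likelihood(emission, trans, init):
        # emission[t, a]      = p(y_t | s_t, a_t = a),              shape (T, M)
        # trans[t, a_prev, a] = p(a_t = a | s_t, a_{t-1} = a_prev), shape (T, M, M)
        # init[a]             = prior over the first attention position, shape (M,)
        alpha = init * emission[0]                    # alpha_1(a)
        for t in range(1, emission.shape[0]):
            alpha = emission[t] * (alpha @ trans[t])  # forward update from the review
        return np.log(alpha.sum())                    # log p(y_{1:T} | x)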
"{\"title\": \"Posterior attention improves sequence to sequence learning\", \"review\": \"Originality: Existing attention models do not statistically express interactions among multiple attentions. The authors of this manuscript reformulate p(y|x) and define prior attention distribution (a_t depends on previous outputs y_<t) and posterior attention distribution (a_t depends on current output y_t as well), and essentially compute the prior attention at current position using posterior attention at the previous position. The hypothesis and derivations make statistical sense, and a couple of assumptions/approximations seem to be mild.\", \"quality\": \"The overall quality of this paper is technically sound. It pushs forward the development of attention models in sequence to sequence mapping.\", \"clarity\": \"The ideas are presented well, if the readers go through it slowly or twice. However, the authors need to clarify the following issues:\\nx_a is not well defined. \\nIn Section 2.2, P(y) as a short form of Pr(y|x_1:m) could be problematic and confusing in interpretation of dependency over which variables.\", \"page_3\": \"line 19 of Section 2.2.1, should s_{n-1} be s_{t-1}?\\nIn Postr-Joint, Eq. (5) and others, I believe a'_{t-1} is better than a', because the former indicate it is attention for position t-1.\\n\\nI am a bit lost in the description of coupling energies. The two formulas for proximity biased coupling and monotonicity biased coupling are not well explained. \\n\\nIn addition to the above major issues, I also identified a few minors: \\nsignificant find -> significant finding\", \"last_line_of_page_2\": \"should P(y_t|y_<t, a_<n, a_n) be P(y_t|y_<t, a_<t, a_t)?\\ntop-k -> top-K\\na equally weighted combination -> an equally weighted combination\\nSome citations are not used properly, such as last 3rd line of page 4, and brackets are forgotten in some places, etc.\\nEnd of Section 3, x should be in boldface.\\nnon-differentiability , -> non-differentiability,\\nFull stop \\\".\\\" is missing in some places.\\nLuong attention is not defined.\", \"significance\": \"comparisons with an existing soft-attention model and an sparse-attention model on five machine translation datasets show that the performance of using posterior attention indeed are better than benchmark models.\", \"update\": \"I have read the authors' response. My current rating is final.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper presents a novel posterior attention model for seq2seq problems. The PAM exploits the dependencies among attention and output variables, unlike existing attention models that only gives ad-hoc design of attention vectors. The experiments demonstrate their claimed advantages.\", \"review\": \"Pros:\\n1. This work presents a novel construction of the popularly-used attention modules. It points out the problems lied in existing design that attention vectors are only computed based on parametric functions, instead of considering the interactions among each attention step and output variables. To achieve that, the authors re-write the joint distribution as a product of tractable terms at each timestamp and fully exploit the dependencies among attention and output variables across the sequence. The motivation is clear, and the proposed strategy is original and to the point. This makes the work relative solid and interesting for a publication. Furthermore, the authors propose 3 different formulation for prior attention, making the work even stronger.\\n2. The technical content looks good, with each formula written clearly and with sufficient deductive steps. Figure 1 provides clear illustration on the comparison with traditional attentions and shows the advantage of the proposed model.\\n3. Extensive experiments are conducted including 5 machine translation tasks as well as another morphological inflection task. These results make the statement more convincing. The authors also conducted further experiments to analyze the effectiveness, including attention entropy evaluation.\", \"cons\": \"1. The rich information contained in the paper is not very well-organized. It takes some time to digest, due to some unclear or missing statements. Specifically, the computation for prior attention should be ordered in a subsection with a section name. The 3 different formulations should be first summarized and started with the same core formula as (4). In this way, it will become more clear of where does eq(6) come from or used for. Currently, this part is confusing.\\n2. Many substitutions of variables take place without detailed explanation, e.g., y_{<t} with s_t, a with x_{a} in (11) etc. Could you explain before making these substitutions?\\n3. As mentioned, the PAM actually computes hard attentions. It should be better to make the statement more clear by explicitly explaining eq(11) on how it assembles hard attention computation.\", \"qa\": \"1. In the equation above (3) that computes prior(a_t), can you explain how P(a_{t-1}|y_{<t}) approximates P(a_{<t}|y_{<t})? What's the assumption?\\n2. How is eq(5) computed using first order Taylor expansion? How to make Postr inside the probability? And where does x_a' come from?\\n3. Transferring from P(y) on top of page 3 to eq(11), how do you substitute y_{<t}, a_t with s_t, x_j? Is there a typo for x_j?\\n4. Can you explain how is the baseline Prior-Joint constructed? Specifically, how to compute prior using soft attention without postr?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1xFE3Rqt7 | Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling | [
"Ting Chen",
"Ji Lin",
"Tian Lin",
"Song Han",
"Chong Wang",
"Denny Zhou"
] | Modern deep neural networks have a large number of weights, which makes them difficult to deploy on computation-constrained devices such as mobile phones. One common approach to reducing the model size and computational cost is to use low-rank factorization to approximate a weight matrix. However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance. In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, with the mixture coefficients computed dynamically depending on its input. We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks. Experiments show that our method not only improves the computation efficiency but also maintains (and sometimes surpasses) the accuracy of its full-rank counterparts. | [
"Low-Rank Factorization",
"Compact Neural Nets",
"Efficient Modeling",
"Mixture models"
] | https://openreview.net/pdf?id=r1xFE3Rqt7 | https://openreview.net/forum?id=r1xFE3Rqt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkenHJVzxN",
"rkxICFPfkN",
"SyldotDfkV",
"Syg3sevJJ4",
"SygF_SIkkV",
"SJxCZq2i0Q",
"ryxVHego0Q",
"Byxuj9_cAQ",
"SJxCVNotR7",
"HyxyQ-C_6X",
"SJeU9eAupQ",
"Bkg8CkAupQ",
"S1e1aJRdTX",
"rylcGdES6m",
"r1lHGXH0hX",
"B1g0zh_7hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544859460510,
1543825870211,
1543825823919,
1543626915735,
1543624048576,
1543387653691,
1543335996359,
1543305888192,
1543250997948,
1542148375002,
1542148238200,
1542148045864,
1542148022748,
1541912593638,
1541456652522,
1540750358484
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1472/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1472/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1472/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper is clearly written and well motivated, but there are remaining concerns on contributions and comparisons.\\n\\nThe paper received mixed initial reviews. After extensive discussions, while the authors successfully clarified several important issues (such as computation efficiency w.r.t splitting) pointed out by Reviewer 4 (an expert in the field), they were not able to convince him/her about the significance of the proposed network compression method.\", \"reviewer_4_has_the_following_remaining_concerns\": \"1) This is a typical paper showing only FLOPs reduction but with an intent of real-time acceleration. However, wall-clock speedup is different from FLOPs reduction. It may not be beneficial to change the current computing flow optimized in modern software/hardware. This is one of major reasons why the reported wall-clock time even slows down. The problem may be alleviated with optimization efforts on software or hardware, then it is unclear how good/worse will it be when compared with fine-grain pruning solutions (Han et al. 2015b, Han et al. 2016 & Han et al. 2017), which achieved a higher FLOP reduction and a great wall-clock speedup with hardware optimized (using ASIC and FPGA);\\n\\n2) If it is OK to target on FLOPs reduction (without comparison with fine-grain pruning solutions), \\n 2.1) In LSTM experiments, the major producer of FLOPs -- the output layer, is excluded and this exclusion was hidden in the first version. Although the author(s) claimed that an output layer could be compressed, it is not shown in the paper. Compressing output layer will reduce model capacity, making other layers more difficult being compressed. \\n 2.2) In CNN experiments, the improvements of CIFAR-10 is within a random range and not statistically significant. In table 2, \\\"Regular low-rank MobileNet\\\" improves the original MobileNet, showing that the original MobileNet (an arXiv paper) is not well designed. \\\"Adaptive Low-rank MobileNet\\\" improves accuracy upon \\\"Regular low-rank MobileNet\\\", but using 0.3M more parameters. The trade-off is unclear.\\n\\nIn addition to these remaining concerns of Reviewer 4, the AC feels that the paper essentially modifies the original network structure in a very specific way: adding a particular nonlinear layer between two adjacent layers. Thus it seems a little bit unfair to mainly use low-rank factorization (which can be considered as a compression technique that barely changes the network architecture) for comparison. Adding comparisons with fine-grain pruning solutions (Han et al. 2015b, Han et al. 2016 & Han et al. 2017) and a large number of more recent related references inspired by the low-rank baseline (M. Jaderberg et al 2014) , as listed by Reviewer 4, will make the proposed method much more convincing.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper is clearly written, but there are remaining concerns on contributions and comparisons\"}",
"{\"title\": \"Thanks for the suggestions!\", \"comment\": \"We will review these papers and add them into revision accordingly.\"}",
"{\"title\": \"Our responses\", \"comment\": \"Dear AC,\\n\\nThanks for your time and detailed comments. Please find our responses below.\\n\\n- Is the proposed adaptive mixture of low-rank factorization network trained end-to-end, or trained by approximating Wh, where W is pre-trained?\\n\\nIn this work, both low-rank and the proposed method are evaluated based on end-to-end training. However, as an extension to low-rank factorization, our method has arguably the same scope of applicable scenarios as the low-rank factorization, meaning that it could be both applied end-to-end (Linear bottleneck in MobileNet V2) and/or by approximating a pre-trained W.\\n\\n-has anyone tried similar types of architecture but does not impose PI(h) to have a block structure?\\n\\nThere are some work (Jia et al., 2016; Ha et al., 2016) utilizes non-block-diagonal PI(h) but without U and V, basically, it becomes PI(h)*h, where PI(h) are fully adaptive. However, their methods are not scalable as PI(h) could be high-dimensional. For example, a 3x3 convolutional kernel with 256 channels could have 3*3*256*256 parameters, thus the output dimension of PI(h) function can be ~0.6M. In our method, we utilize the block-diagonal structure to make the original weight matrix W dynamic, i.e. W(h) = U*PI(h)*V, and efficient at the same time. This provides another perspective of viewing our method: enable dynamic weights with efficient computation.\\n\\n- Is it really that fair to compare the original network with this modified network that effectively has more nonlinear layers?\", \"our_main_comparisons_are_between_two_methods\": \"(1) regular low-rank factorization, i.e. Wh = U*V*h, and (2) the proposed adaptive mixture of low-rank method, i.e. Wh = \\\\sum_k \\\\pi(k) * U_k*V_k*H = U * PI(h) * V * h. Between these two, the number of layers are the same (although our method turns the linear bottleneck into a essentially non-linear bottleneck). The computation cost (FLOPs) is also very close, and we showed that latter performance significantly better. While it is interesting to compare the proposed method with other compact architectures in the wild (note: we did compare to MobileNet V2 which uses low-rank factorization and was a fairly recent SOTA), or debating whether the low-rank factorization is a good/fair way to factorize deep nets, our contribution mainly focuses on improving the well known low-rank technique with a simple and efficient technique, which also sheds the light to possibly deep nets with scalable dynamic weights as mentioned above.\\n\\n- Error bars were added to none of the figures and tables, making it difficult to judge whether 1.x% or 2.x% improvement is statistically significant.\\n\\nWe will add standard deviation for Cifar10 in the further revision. For large scale image classification on ImageNet, we would also like to note that most existing literature ignores the error bars (it is a very large dataset with a fixed validation set, and it could be expensive to gather multiple data points that are potentially similar given the training is stable), and >1% is usually considered as significant improvement on this dataset, as evidenced by (Howard et al., 2017, Sandler et al., 2018).\\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Missing a list of references on DNN acceleration based on low rank decomposition\", \"comment\": \"A list of previous publications is missing in the references. Those publications inspired and advanced the low-rank baseline (M. Jaderberg et al 2014) mentioned in this paper. Please consider to cite them.\\n\\nM. Denil, B. Shakibi, L. Dinh, M. A. Ranzato, and N. de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems (NIPS). 2013.\\n\\nE. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems (NIPS). 2014.\\n\\nV. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. In International Conference on Learning Representations (ICLR), 2015.\\n\\nY. Ioannou, D. P. Robertson, J. Shotton, R. Cipolla, and A. Criminisi. Training cnns with low-rank filters for efficient image classification. In International Conference on Learning Representations (ICLR), 2016.\\n\\nY.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In International Conference on Learning Representations (ICLR), 2016.\\n\\nC. Tai, T. Xiao, X. Wang, and W. E. Convolutional neural networks with low-rank regularization. In International Conference on Learning Representations (ICLR), 2016.\\n\\nX. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):1943\\u20131955, Oct 2016. \\n\\nWen, Wei, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. \\\"Coordinating filters for faster deep neural networks.\\\" In The IEEE International Conference on Computer Vision (ICCV). 2017.\"}",
"{\"title\": \"Questions to the authors\", \"comment\": \"Dear Authors and Reviewers,\\n\\nGiven the disagreements between the reviewers, I took a careful read of the key technical parts of your paper. I have a couple of questions:\\n\\n1) Is the proposed adaptive mixture of low-rank factorization network trained end-to-end, or trained by approximating Wh, where W is pre-trained?\\n\\n2) One may consider that using a linear low-rank factorization as a bottleneck layer is not changing the network architecture (in particular, depth of the network, since U(Vh) is just used to approximate Wh for lower computation). In your case, however, I would analogize it to be U*PI(h)*V*h, where the block structured PI(h)=diag[pi(h)_1,...,pi(h)_1,pi(h)_2,...,pi(h)_2,...pi(h)_K] involves a nonlinear transformation of h; from this perspective, your network architecture has actually been modified (i.e., no longer for reducing computation of the same structured network), where a linearly transformed h (i.e. V*h) is reweighed by a nonlinearly transformed h and then linearly transformed by U before sending it to the next layer. The computation saving mainly comes from the fact the unique number of elements in the block structured PI(h) is K. My questions: (1) has anyone tried similar types of architecture but does not impose PI(h) to have a block structure? (2) Is it really that fair to compare the original network with this modified network that effectively has more nonlinear layers? Could you choose deeper networks with similar computation that are trained end-to-end, and compare their performance? Without such type of comparison, I am somewhat hesitate to accept the claims. \\n\\n3) (As Reviewer 4 also pointed out) Error bars were added to none of the figures and tables, making it difficult to judge whether 1.x% or 2.x% improvement is statistically significant.\\n\\nThanks,\\nAC\"}",
"{\"title\": \"Further clarifications on other issues\", \"comment\": \"Thanks for the follow-up comments. And we are glad the reviewer now agrees that our method does not break the bulk computation of the low-rank factorization. And that supports what we essentially claimed: the proposed adaptive low-rank method has similar computation cost as regular low-rank factorization, but can improve its theoretical expressiveness and practical accuracy. So for cases where the low-rank factorization can be useful, our method can be seen as a drop-in replacement that improves its performance without extra cost.\\n\\nPlease find below further clarifications for specific issues mentioned.\\n\\n1) The definition of low-rank ratio, i.e. 2d/(m+n), was given in 3rd paragraph in Sec 4.2 and also mentioned in Figure 6. We agree that d/min(m, n) could be a valid alternative. However, since we use this ratio as a kind of axis and attach FLOPs in every cases, so this does not affect the comparisons of accuracy/perplexity versus FLOPs.\\n\\n2) In table 6, we observe the FLOPs agree pretty well with wall-clock time. For example, 1/16 bottleneck, the FLOPs is almost identical, and the wall-clock time differs within 5% (we didn\\u2019t optimize our implementation at kernel level).\\n\\n3) The previous experimental result (in the table of our previous response) misled the reviewer in attributing the increase of time at the 1/2 bottleneck to disadvantage of our method, while our method has similar run-time as regular low-rank. So we chose the squared matrix for experiments to avoid such confusion. However, we will also add the table of non-square matrix result in the revision to relax the choice of target matrix.\"}",
"{\"title\": \"Computation segments issue is clarified while others remain\", \"comment\": \"Thanks for the positive clarifications, especially appendix D on how to merge computation segments into two matrix-vector multiplications. A add-on, the equation should be able to generalize to K<d, where the mixing coefficients just need to be duplicated and tiled in a typical way.\\n\\nOn the speed when rank ratio is 1/2: \\n1) \\\"low-rank ratio\\\" is not clearly defined. It is misleading to refer to 2d/(m+n) as \\\"low-rank ratio\\\". Conventionally, full rank is min(m, n) and a more reasonable definition of rank ratio is d/min(m, n).\\n2) the dimension is 1024x1024 square in Table 6, where the FLOPs is the same but wall-clock time increases, validating my previous arguments. \\n3) the dimension was 650x1500 in a comment but changes to 1024x1024 in the appendix. It may be more reasonable to pick up a dimension used in the benchmarks of CNNs/LSTMs? When converging to a conclusion \\\"With similar speedup, the proposed adaptive lowrank\\nfactorization provides better theoretical expressiveness and accuracy/perplexity improvement in practice\\\", the speed should be measured for dimensions in Table 2 for example.\", \"typo\": \"\\\"rand-d\\\" in the caption of table 1.\"}",
"{\"title\": \"A simple rewrite of equations can make sure NO splitting a bulk of computation to segments\", \"comment\": \"Thanks for your follow-up feedback! We would like to provide further clarifications.\\n\\n-- Can you clarify what is \\\"weighting bottleneck\\\"? Concretely, in Figure 3(b), although one low-rank branch can be done using two matrix-vector multiplications, but there are K branches. The computation segments are also clearly shown in Eq. (1). \\n\\nThe K branches in Figure 3(b) and Eq. (1) are *conceptual* and for the ease of understanding. However, *computational* wise, they can be done with regular big chunk of matrix multiplication. More specifically, linear transform with eq. (1) could be re-written as W(h) h = U (\\\\pi(h) \\\\odot (V^Th)), where \\\\pi(h) weights bottleneck (V^Th) before another linear transformation, instead of \\\\sum_k (\\\\pi(h)_k U^k (V^k)^T h). Since low-rank dimension is usually small, so we (by default) set K to the low-rank dimension, i.e. V \\\\in R^{d\\\\times K}. With this formulation we DO NOT need to break the regular low-rank of Wh = UV^Th into K branches, rather we just use non-linear \\\\pi(h) to re-weight the bottleneck (V^Th). We added a section in last to second section in appendix to describe this process more clearly.\\n\\nRegarding to the slower wall-clock time for \\u00bd bottleneck, it is actually due to the increased FLOPs, NOT that splitting a bulk of computation to segments (note that both regular and adaptive low-rank have the same issue). For a non-square matrix, introducing \\u00bd bottleneck could actually increase the FLOPs. E.g. splitting 650 x 1500 to 650x538 and 538x1500 (where 538 = average(650, 1500) / 2) will result in 19% increase of FLOPs, which is the case for both regular and adaptive low-rank factorizations. When the matrix to factorize is squared matrix, the FLOPs would not increase for \\u00bd bottleneck. We provide more rigorous experiments for this case in the last section of the appendix.\\n\\n-- FLOPs in the output layer are missing when reporting computation reduction. \\n\\nWe clarified the FLOPs in the revision to explicit mention it does not include the output layer. The speed-up of softmax (which is itself very challenging) is out of the scope of this work, and our main focus is the main LSTM layers and Convolutional layers. Thus we only apply the low-rank factorization on the LSTM layers to demonstrate adaptive low-rank work better than its regular low-rank counterpart. In the future, we are interested to extend it to broader cases including softmax.\"}",
"{\"title\": \"Original rating remains\", \"comment\": \"Thanks for feedbacks! I remain my first rating as no new but helpful info provided to solve the concerns, because of following reasons:\\n\\n-- Can you clarify what is \\\"weighting bottleneck\\\"? The computation of mixing weight is fine. The explanation regarding \\\"mixing weight\\\" and \\\"weighting bottleneck\\\" is unrelated. Concretely, in Figure 3(b), although one low-rank branch can be done using two matrix-vector multiplications, but there are K branches. The computation segments are also clearly shown in Eq. (1). The \\\"computation efficiency\\\" problem is further supported by the table kindly provided by the author(s): \\n-----------------------------------------------------------------\\n Method | low-rank ratio; time in ms\\n-----------------------------------------------------------------\\n | 1 | 1/2 | 1/4 | 1/8 | 1/16 \\n-----------------------------------------------------------------\\nFull Rank | 10.8 | N/A | N/A | N/A | N/A\\nregular LR | N/A | 13.1 | 6.6 | 3.3 | 2.0\\nadaptive LR | N/A | 16.5 | 8.6 | 4.5 | 2.8\\n-----------------------------------------------------------------\\nIn regular LR, breaking one matrix to two matrices even increases wall-clock time from 10.8ms to 13.1ms when the ratio is 1/2, proving the disadvantage generated by splitting a bulk of computation to segments; in adaptive LR, breaking it to 2*K consistently increases wall-clock time. \\nI agree the problem may be alleviated with optimization efforts on library or hardware, then it is unclear how good/worse will it be when compared with fine-grain pruning solutions (Han et al. 2015b & Han et al. 2016), which achieved a higher FLOP reduction.\\n\\n-- I agree that mixing coefficients are from non-linear neural networks, while the mixing step is linear once coefficients are known. The only difference is the coefficients are \\\"dynamic and data dependent, making it some sense of non-linear combination\\\".\\n\\n-- FLOPs in the output layer are missing when reporting computation reduction. SVD-softmax will deteriorate/compress the model capacity, making compression in LSTMs more challenging.\"}",
"{\"title\": \"Thanks for your feedback!\", \"comment\": \"We would like to thank the reviewer for the time and valuable feedback. We also like to add on the implication of proposition 1, which demonstrates that the mixture of low-rank factorizations are actually learned non-linear transformation, which is more expressive than the linear one of regular low-rank factorization, i.e. the former cannot be approximated by the latter.\"}",
"{\"title\": \"Our response\", \"comment\": \"We would like to thank the reviewer for the time and valuable comments. Please find our response below.\\n\\n\\n- The results in table 2 indicate 2.5% and 1.4% improvement. This should be corrected.\\n\\nThanks for pointing out our typos on the improvement rates. The correct ones should be (1) (70.5-68.8)/68.8=2.5% and (2) (73.1-71.7)/71.7=1.95%. We will update them in the revision.\\n\\n\\n- The authors should include the performance of the full rank CNN for the toy example in Figure 1. A Neural Net with 2 neurons in the hidden layer can not learn the XOR/XNOR efficiently . So its rank-1 factorization can only perform as good as the original CNN.\\n\\nWe\\u2019d like to clarify the toy example in Figure 1, the input data point is 2-dimensional, and a MLP (not CNN) is used as function to classify the labels, i.e. P(y|x) = softmax(W\\u2019\\u03c3(Wx)), where W \\u2208 R^{2\\u00d72}. This is the original full rank model, which is able to effectively learn the synthetic XOR/XNOR task. However, when we factorize W using two 2\\u00d71 matrices, i.e. W = UV^T, the induced linear bottleneck largely degenerates the performance (Figure 1b). After applying the proposed method, the performance can be largely improved (Figure 1c).\\n\\n\\n- In (1), the dimensions of U^k and V^k should be mentioned explicitly.\\n\\nThanks for the suggestion, we will mention it in the revision.\\n\\n\\n- The choice of \\u201ck\\u201d in (1) should be discussed. How does it relate to the overall accuracy / compression of the CNN?\\n\\nThe discussion of k is presented in the experiments. We tested different K, and found that using more mixtures generally leads to better results, although the performance starts to plateau when the number of mixtures is large enough. However, to obtain a larger compression rate and speedup, the rank-d we use in the low-rank factorization can be already small, thus the extras of using different number of mixtures may not differ too much. For this reason, we can just set K as rank-d.\\n\\n\\n- The extension to mode-3 and and mode-4 tensors which are more common in CNNs is not straightforward.\\n\\nThanks for the questions. We willingly acknowledge that we only considered low-rank factorization of 2d matrices in the scope of this work, which has already found applications in many deep neural network scenarios. For CNNs (which was targeted in this work), we apply our method with widely-used compact depth-separable convolution layers such that it does not require a direct mode-3 tensor factorization. \\n\\nWe also believe it could be straightforward to extend this framework to high-order tensor factorization with minor adjustments. For example, we could apply our method to CP decomposition (https://en.wikipedia.org/wiki/Tensor_rank_decomposition), simply by extending each mixture from two low-rank vector products to three low-rank vector products.\\n\\n\\n- In the imagenet experiment, the number of mixtures (k) is set to the rank (d). How is the rank computed for every layer?\\n\\nTo ensure a fair comparisons with MobielNets, we simply followed the setting in the original MobileNetV2 paper by setting the number of channel of bottleneck to be \\u2159 of the output channels.\\n\\n\\n- In Fig 7, row 0 and row 8 look identical. Is this indicative of something?\\n\\nIn Fig 7, the labels for each row are in the same order as in original CIFAR-10 dataset, namely \\\"airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck\\\". 
The row 0 and the row 8 correspond to the class of airplane and the class of ship respectively, which suggests the learned mixtures are class discriminative.\"}",
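As a side note for readers, the XOR toy comparison described in this response is easy to write down. The sketch below contrasts the two parameterizations being discussed: the full-rank MLP with W in R^{2x2} versus the same network with the rank-1 factorization W = UV^T. The ReLU choice for sigma and the exact shapes are assumptions for illustration, not the paper's stated setup.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mlp_full(x, W, Wp):
    """P(y|x) = softmax(W' sigma(W x)) with full-rank W in R^{2x2}."""
    return softmax(Wp @ np.maximum(0.0, W @ x))   # sigma assumed to be ReLU

def mlp_rank1(x, U, V, Wp):
    """Same network with W = U V^T, U, V in R^{2x1}: a width-1 linear bottleneck."""
    return softmax(Wp @ np.maximum(0.0, U @ (V.T @ x)))
```

Since U (V^T x) always lies on the one-dimensional span of U, the rank-1 model's hidden pre-activations are confined to a line, which is consistent with the degraded XOR/XNOR decision regions described for Figure 1b.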
"{\"title\": \"Our clarification (cont'd)\", \"comment\": \"- In LSTM (especially in language modeling), FLOPs in the output layer are large because of a large vocabulary size (650x10000 in the experiments). However, output layer is not explicitly mentioned in the paper.\\n\\nThanks for point this out. We only calculate the FLOPs of LSTM layer, and do not count the FLOPs in output softmax layer. This is because we only apply the low-rank factorization on the LSTM layers to demonstrate adaptive low-rank enjoys better expressiveness. We will explicit mention it in the revision. The output softmax is unarguably computation hungry, and can be quite challenging to tackle by itself (there are papers like SVD-softmax trying to tackle this problem), thus we do not include its discussion in this paper.\\n\\n\\n- Accuracy improvement in Table 1 is not statistically significant, but used more parameters. \\n\\nCIFAR dataset is a relatively small and simple dataset, which does not need huge network capacity. Our experiments also show that when the rank is high, both method suffers from little accuracy drop. However, when the rank is low, our method significantly outperforms regular low-rank. Furthermore, the effectiveness of our method is also tested on larger and more convincing dataset ImageNet (in Table 2), where the performance is even more significant.\\n\\n\\n- When we deploy models across platforms, we cannot guarantee they use the same random generator and has a consistent implementation.\\n\\nYes, we agree the random seed generation can be different across platforms. However, the random projection is just a simple yet interesting comparison for us to better understand the problem. Our main results focus on the one using learned mixing weight computation.\\n\\n\\n- sparse gating of low-rank branches may make this method more computation efficient.\\n\\nYes, we agree. And we would like to note that our default setting is to set the number of mixtures to the rank so currently it can also be seen as gating, computational wise.\"}",
"{\"title\": \"Clarifications: our method does not break a single efficient computation to multiple computation segments\", \"comment\": \"We thank the reviewer for the time and detailed and valuable comments. Please find our response below.\\n\\n\\n- computation efficiency: in the original low-rank approximation, only two matrix-vector multiplications are needed, but this paper increases it to 2*K plus some additional overheads. Although the theoretical FLOPs can be cut, but modern hardware runs much faster when a whole bulk of data are computed all together.\\n\\nThanks for the detailed analysis. We would like to point out that the reviewer\\u2019s analysis is based on a specific type of implementation of our method, where K matrix-vector multiplications are conducted. However, this implementation can be easily optimized (especially when we set K to rank where the rank is small): we can still use two matrix-vector multiplications, with a smaller matrix-vector multiplications to compute the mixing weights, and a vector-vector multiplication for weighting bottleneck. This avoids 2*K matrix multiplications. The proposed implementation supports massive parallel computing and also has good data locality, thus it is very efficient to compute.\\n\\nWe willingly admit that the current paper mainly focus the FLOPs, as the actual inference time depends on implementation, and also include runtime by other factors (such as final softmax layer in RNN language modeling). Nevertheless, we conducted some preliminary experiments with non-optimized implementation on RNN (without final softmax), and measure the actual inference time with CPU as shown in the table below. We observed that the proposed method still provides similar speedup as regular low-rank (while getting significantly better accuracy/perplexity).\\n\\n-----------------------------------------------------------------\\n Method | low-rank ratio; time in ms\\n-----------------------------------------------------------------\\n | 1 | 1/2 | 1/4 | 1/8 | 1/16 \\n-----------------------------------------------------------------\\nFull Rank | 10.8 | N/A | N/A | N/A | N/A\\nregular LR | N/A | 13.1 | 6.6 | 3.3 | 2.0\\nadaptive LR | N/A | 16.5 | 8.6 | 4.5 | 2.8\\n-----------------------------------------------------------------\\n\\nWe would like to improve our implementation of the adaptive low-rank to further speed up the actual inference time, and conduct more run-time comparisons in the future.\\n\\n\\n- low-rank approximation: decomposing a matrix to a mixture of low-rank spaces is equivalent to decomposing to one single low-rank space (the only difference is the combination is data dependent).\\n\\nWhile we appreciate that the reviewer\\u2019s careful observation, especially on that our method can also be applied to the scenarios where the weight matrices are pre-trained. But we would like to point out a inaccurate assertion that our method is equivalent to regular low-rank factorization in [1][2]. Essentially, as shown in proposition 1, our method is non-linear transformation while the regular low-rank is linear. They are not equivalent theoretically, and in practice, as explicitly demonstrated in the toy example (Figure 1), the proposed method is much more expressive than regular low-rank, meaning it can approximate better with around the same amount of parameters and computation. 
Our experiments on RNN and CNN also show that adaptive low-rank enjoys better expressiveness than conventional low-rank decomposition.\\n\\n\\n- efficient architecture design: branching and gating is not new [3][4]. I like the results in Table 2, but, to show the efficiency, it should have been compared with more SOTA compact models like CondenseNet. \\n\\nThanks for the multi-angular analysis. In our understanding, [3] proposes GoogleNet using static branching, and [4] proposes a dynamic network based on input samples. These two papers are orthogonal to our method. We aim to propose an simple yet expressive low-rank decomposition method to speed up the inference of matrix multiplication, which is the fundamental operation in modern neural networks. Therefore, we can apply the method to any existing network architectures.\\n\\nRegarding empirical evaluation, we aim to show our adaptive low-rank is better than regular low-rank method in our experiments, so we used a very similar yet powerful architecture MobileNet to exclude the interference of other factors, making sure it is a fair comparison. We aim to improve low-rank decomposition itself that is general but not to design a specific compact network architecture.\"}",
"{\"title\": \"Small improvement but breaks a single efficient computation to multiple computation segments (no wall-clock time reported)\", \"review\": \"The paper proposes an input-dependent low rank approximations of weight matrices in neural nets with a goal of accelerating inference. Instead of decomposing a matrix (a layer) to a low rank space like previous work did, the paper proposes to use a linear combination/mixture of multiple low rank spaces. The linear combination is dynamic and data dependent, making it some sense of non-linear combination. The paper is interesting, however,\", \"i_doubt_its_significance_at_three_aspects\": \"(1) computation efficiency: the primary motivation of this paper is to accelerate inference stage; however, it might not be wise to break computation in a single low-rank space to segments in multiple low-rank spaces. In the original low-rank approximation, only two matrix-vector multiplications are needed, but this paper increases it to 2*K plus some additional overheads. Although the theoretical FLOPs can be cut, but modern hardware runs much faster when a whole bulk of data are computed all together. Because of this, the primary motivation in this paper wasn't successfully supported by wall-clock time;\\n(2) low-rank approximation: low-rank approximation only makes sense when matrices are known and redundant, otherwise, no approximation target exists (i.e., what matrix is it approximating?). Because of this, low-rank neural nets [1][2] start from trained models, approximate it and fine-tune it, while this method trains from scratch without an approximation target. Although, we can fit the method to approximate trained matrices, then decomposing a matrix to a mixture of low-rank spaces is equivalent to decomposing to one single low-rank space (the only difference is the combination is data dependent). Therefore, I view this paper more in a research line of designing compact neural nets, which brings me to a concern in (3).\\n(3) efficient architecture design: essentially, the paper proposes a class of compact neural nets, at each layer of which there are K \\\"low-rank\\\" branches with a gating mechanism to select those branches. However, branching and gating is not new [3][4]. I like the results in Table 2, but, to show the efficiency, it should have been compared with more SOTA compact models like CondenseNet.\", \"clarity\": \"How FLOPs reduction are exactly calculated? I am not convinced by FLOPs reduction in the LSTM experiments, since in LSTM (especially in language modeling), FLOPs in the output layer are large because of a large vocabulary size (650x10000 in the experiments). However, output layer is not explicitly mentioned in the paper.\", \"improvement\": \"(1) Accuracy improvement in Table 1 is not statistically significant, but used more parameters. For example, an improvement of 93.01% over 92.92% is within an effect of training noise;\\n(2) It is a little hacking to conclude that a random matrix P_random has a small storage size because we can storage a seed for recovery. When we deploy models across platforms, we cannot guarantee they use the same random generator and has a consistent implementation;\\n(3) sparse gating of low-rank branches may make this method more computation efficient.\\n\\n[1] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems (NIPS). 2014.\\n[2] M. Jaderberg, A. 
Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference (BMVC), 2014.\\n[3] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. \\\"Going deeper with convolutions.\\\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9. 2015.\\n[4] Huang, Gao, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q. Weinberger. \\\"Multi-scale dense networks for resource efficient image classification.\\\" arXiv preprint arXiv:1703.09844 (2017).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper introduces a low rank factorization strategy for neural network compression. They propose a data adaptive model to approximate the weights as a learned mixture of low rank factorizations. The method is novel and the results look promising. A limitation of the method presented is that its is applicable only to weights arising as mode-2 tensors (matrices).\", \"review\": \"Some suggested improvements follow below\\n\\n1. It is claimed (page 2, last paragraph) that the proposed method leads to a 3.5% and 2.5% improvement in top-1 accuracy over the mobilenet v1 and v2 models. However the results in table 2 indicate 2.5% and 1.4% improvement. This should be corrected.\\n2. The authors should include the performance of the full rank CNN for the toy example in Figure 1. A Neural Net with 2 neurons in the hidden layer can not learn the XOR/XNOR efficiently . So its rank-1 factorization can only perform as good as the original CNN.\\n3. In (1), the dimensions of U^k and V^k should be mentioned explicitly.\\n4. The choice of \\u201ck\\u201d in (1) should be discussed. How does it relate to the overall accuracy / compression of the CNN?\\n5. The paper addresses low rank factorization for \\u201cMLP\\u201d, RNN/LSTM and \\u201cpointwise\\u201d convolutions. All of these have weights in the form of matrices (mode 2 tensors). The extension to mode-3 and and mode-4 tensors which are more common in CNNs is not straightforward.\\n6. In the imagenet experiment, the number of mixtures (k) is set to the rank (d). How is the rank computed for every layer?\\n7. In Fig 7, row 0 and row 8 look identical. Is this indicative of something?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice intuitive paper\", \"review\": \"In this paper, the authors propose a compression technique to reduce the number of parameters to learn in a neural network without losing expressiveness.\\nThe paper nicely introduces the problem of lack in espressiveness with low-rank factorizations, a well-known technique to reduce the number of parameters in a network.\\nThe authors propose to use a linear combination of low-rank factorizations with coefficients adaptively computed on data input. Through a nice toy example based on XNOR data, they provide a good proof of concept showing that the accuracy of the proposed technique outperforms the classical low-rank approach.\\nI enjoyed reading the paper, which gives an intuitive line of reasoning providing also extensive experimental results on multilayer perceptron, convolutional neural networks and recurrent neural networks as well.\\nThe proposal is based on an intuitive line of reasoning with no strong theoretical founding. However, they provide a quick theoretical result in the appendix (Proposition 1) but, I couldn\\u2019t understand very well its implications on the expressiveness of proposed method against classical low-rank approach.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJxFN3RcFX | Functional Bayesian Neural Networks for Model Uncertainty Quantification | [
"Nanyang Ye",
"Zhanxing Zhu"
] | In this paper, we extend the Bayesian neural network to a functional Bayesian neural network using functional Monte Carlo methods that use samples of functionals instead of samples of the network's parameters for inference, in order to overcome the curse of dimensionality in uncertainty quantification. Building on previous work on Riemannian Langevin dynamics, we propose stochastic gradient functional Riemannian dynamics for training the functional Bayesian neural network. We show the effectiveness and efficiency of our proposed approach with various experiments. | [
"samples",
"model uncertainty quantification",
"bayesian neural network",
"functionals",
"networks",
"parameters"
] | https://openreview.net/pdf?id=SJxFN3RcFX | https://openreview.net/forum?id=SJxFN3RcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeIqjMelV",
"HyeDT0djn7",
"SygA0zT93m",
"SylyBDEqhX"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544723341808,
1541275326808,
1541227221819,
1541191479386
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1471/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1471/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1471/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1471/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper addresses a promising and challenging idea in Bayesian deep learning, namely thinking about distributions over functions rather than distributions over parameters. This is formulated by doing MCMC in a functional space rather than directly in the parameter space. The reviewers were unfortunately not convinced by the approach citing a variety of technical flaws, a lack of clarity of exposition and critical experiments. In general, it seems that the motivation of the paper is compelling and the idea promising, but perhaps the paper was hastily written before the ideas were fully developed and comprehensive experiments could be run. Hopefully the reviewer feedback will be helpful to further develop the work and lead to a future submission.\", \"note\": \"Unfortunately one review was too short to be informative. However, fortunately the other two reviews were sufficiently thorough to provide enough signal.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea but has significant technical flaws and lacks clarity\"}",
"{\"title\": \"An interesting idea plagued by flaws in presentation, inconsistent notation, and lack of critical experiments\", \"review\": \"The authors propose an approximate MCMC method for sampling a posterior distribution of weights in a Bayesian neural network. They claim that existing MCMC methods are limited by poor scaling with dimensionality of the weights, and they propose a method inspired by HMC on finite-dimensional approximations of measures on an infinite-dimensional Hilbert space (Beskos et al, 2011). In short, the idea is to use a low dimensional approximation to the parameters (i.e. weights) of the neural network, representing them instead as a weighted combination of basis functions in neural network parameter space. Then the authors propose to use HMC on this lower dimensional representation. While the idea is intriguing, there are a number of flaws in the presentation, notational inconsistencies, and missing experiments that prohibit acceptance in the current form.\\n\\nThe authors define a functional, f: \\\\theta -> [0, 1], that maps neural network parameters \\\\theta to the unit interval. They claim that this function defines a probability distribution on \\\\theta, but this not warranted. First, \\\\theta is a continuous random variable and its probability density need not be bounded above by one; second, the authors have made no constraints on f actually being normalized. \\n\\nThe second flaw is that the authors equate a posterior on f given the data with a posterior on the parameters \\\\theta themselves. Cf. Eq 4 and paragraph above. There is a big difference between a posterior on parameters and a posterior on distributions over parameters. Moreover, Eq. 5 doesn't make sense: there is only one posterior f; there are no samples of the posterior. \\n\\nThe third problem appears in the start of Section 3, where the authors now call the posterior U(theta) instead of f. They make a finite approximation of posterior U(\\\\theta) = \\\\sum_i \\\\lambda_i u_i, which is inconsistent with Beskos et al. I believe the authors intend to use a low dimensional approximation to \\\\theta rather than its posterior U(\\\\theta). For example, if \\\\theta = \\\\sum_i \\\\lambda_i u_i for fixed basis functions u_i, then you can approximate a posterior on \\\\theta with a posterior on \\\\lambda.\\n\\nThe fourth, and most important problem, is that the basis functions u_i are never defined. How are these chosen? Beskos et al use the eigenfunctions of the Gaussian base measure \\\\pi_0, but no such measure exists here. Moreover, this choice will have a substantial impact on the approximation quality. \\n\\nThere are more inconsistencies and notational problems throughout the paper. Section 4.1 begins with a mean field approximation that seems out of place. Section 3 clearly states that the posterior on theta is approximated with a posterior on lambda, and this cannot factorize over the dimensions of theta. Finally, the authors again confuse the posterior on weights with a posterior on distributions of weights in Eq 11. \\\\tilde{U} is introduced as a function of lambda in Eq 14 and then called with f in line 4 of Alg. 1. These two types are not interchangeable. \\n\\nThese inconsistencies cast doubt on the subsequent experiments. Assuming the algorithm is correct, a fundamental experiment is still missing. 
\\nTo justify this approach, the authors should show how the posterior approximation quality varies as a function of the size of the low dimensional approximation, D.\\n\\nI reiterate that the idea of approximating the posterior distribution over neural network weights with a posterior distribution over a lower dimensional representation of weights is interesting. Unfortunately, the abundance of errors in presentation cloud the positive contributions of this paper.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Unclear writing and contributions\", \"review\": \"This paper considers a new learning paradigm for Bayesian Neuron Networks (BNN): learning distribution in the functional space, instead of weight space. A new SG-MCMC variant is proposed in Algorithm 1, and applied to sampling in a\\n\\\"functional space\\\". The approach is demonstrated on various tasks.\", \"quality\": \"Low, due to the low clarity detailed below.\", \"clarity\": \"I do not fully follow the core algorithm: The posterior is U_D(\\\\theta) = \\\\sum_{i=1}^D \\\\lambda_i * u_i, where \\\\lambda_i is represented as MCMC samples, what is u_i then? I guess u_i is defined in (2), which is approximated in (3) if weight sample is used. However, how is u_i represented in the functional approach? I guess it is similar to the weight-based approach. If this is true, how could we distinguish between a functional approach and weight-based approach?\\n\\nThe proposed SGFuncRLD is essentially Adam plus Gaussian noise, but performed in a so-called \\\"functional space\\\"? It is therefore not surprise to me that SGFuncRLD performs better than pSGLD (RMSprop plus Gaussian noise), just as Adam performs better than RMSprop. If we only focus on the new SG-MCMC approach itself, the authors need to justify: (1) the smoothed gradient is an unbiased gradient estimator, how does it effect convergence? Does it guarantee to true posterior? this should be done in theory. (2) The SGFuncRLD algorithm itself is the same with pSGLD except the smoothed gradient part. This makes the clear comparison even important. Does SGFuncRLD perform better just because the proposed smoothed gradient, or because the sampling is done in the functional space?\", \"my_suggestions\": \"Please disentangle the contributions clearly. There are two things: (1) smooth gradient, (2) sampling in a functional space. Which one really contributes the performance improvement?\\n\\nTo demonstrate (1), the authors could at least conduct on a toy distribution, to demonstrate the difference with pSGLD, regardless it is to the functional space or the weight space. \\nTo demonstrate (2), the authors could apply the same SG-MCMC variant to the functional space and to the weight space, and see the difference.\", \"originality\": \"To me, the idea of learning uncertainty of BNN in the functional space appeared in Prof. Yee Whye Teh's NIPS 2017 presentation. The motivation in his presentation is very clear. However, how to implement this abstract idea in practice is unclear yet. This submission is the first attempt. However, I am concerned about the real contribution.\", \"significance\": \"It is a very interesting research direction. The paper could have been significant if every part is clearly motivated and demonstrate. At this point, I am not fully convinced.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A functional version of Riemannian Langevin dynamics is used in order to perform inference with Bayesian neural networks. It is not quite convincing due to a lack of effort in explaining the approach.\", \"review\": \"The idea of extending Riemannian Langevin dynamics to functional spaces is elegant, however it is extremely hard to follow the proposed method as details are kept to a minimum. The finite approximation of the posterior distribution is a function of the parameters theta, however it displays parameters lambda. The couple of sentences: \\\"Then by sampling \\u03bb, we sample a functional f equivalently. The Riemannian Langevin dynamics on the functional space can thus be written as: (6)\\\" come without a single explanation.\\n\\nMinor comments\\n* Max and Whye is the casual version for reference Welling and Teh.\\n* proper nouns in References should be capitalized\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
B1xFVhActm | Fake Sentence Detection as a Training Task for Sentence Encoding | [
"Viresh Ranjan",
"Heeyoung Kwon",
"Niranjan Balasubramanian",
"Minh Hoai"
] | Sentence encoders are typically trained on generative language modeling tasks with large unlabeled datasets. While these encoders achieve strong results on many sentence-level tasks, they are expensive to train, with long training cycles. We introduce fake sentence detection as a new discriminative training task for learning sentence encoders. We automatically generate fake sentences by corrupting original sentences from a source collection and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task turns out to be quite efficient for training sentence encoders. We compare a basic BiLSTM encoder trained on this task with strong sentence encoding models (Skipthought and FastSent) trained on a language modeling task. We find that the BiLSTM trains much faster on fake sentence detection (20 hours instead of weeks) using smaller amounts of data (1M instead of 64M sentences). Further analysis shows the learned representations also capture many syntactic and semantic properties expected of good sentence representations. | [
"fake sentence detection",
"sentence encoders",
"training task",
"sentence",
"tasks",
"encoders",
"fake sentences",
"task",
"generative language",
"large unlabeled datasets"
] | https://openreview.net/pdf?id=B1xFVhActm | https://openreview.net/forum?id=B1xFVhActm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkla3T4le4",
"rygnMnO92Q",
"BJlpX_HYnX",
"B1e6e0PInX"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544732085354,
1541209107628,
1541130277476,
1540943349017
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1470/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1470/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1470/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1470/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a new unsupervised training objective for sentence-to-vector encoding, and shows that it produces representations that often work slightly better than those produced by some prominent earlier work.\\n\\nThe reviewers have some concerns about presentation, but the main issue\\u2014which all three reviewers pointed to\\u2014was the lack of strong recent baselines. Sentence-to-vector representation learning is a fairly active field with an accepted approach to evaluation, and this paper seems to omit conspicuous promising baselines. This includes labeled-data pretraining methods which are known to work well for English (including results from the cited Conneau paper)\\u2014while these may be difficult to generalize beyond English, this paper does not attempt such a generalization. This also includes more recent unlabeled-data methods like ULMFiT or Radford et al.'s Transformer which could be easily trained on the same sources of data used here. The authors argue in the comments that these language models tend to use more parameters, but these additional parameters are only used during pretraining, so I don't find this objection compelling enough to warrant leaving out baselines of this kind. Baselines of both kinds have been known for at least a year and come with distributed models and code for close comparison.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea, but missing crucial baseline\"}",
"{\"title\": \"Simple technique, good results, not enough substance.\", \"review\": \"Summary: Derive sentence representations from a bidirectional LSTM encoder trained to distinguish real sentences from fake ones. Fake sentences are derived from real ones by swapping two words or dropping a single word (yielding two different models). The resulting representations are applied to various sentence classification tasks by using them as input to a logistic regression classifier trained for the task. Results are generally better than similar experiments performed with SkipThought vectors trained on the same Toronto BookCorpus.\\n\\nThis is a reasonable idea, and the win over SkipThought is quite convincing, but the paper is short on substance, and parts are confusing or superfluous. Some problems and questions:\\n\\n1) Most of section 2 could be omitted, since it doesn\\u2019t really add insight to the well-established idea of pre-training parameters on an auxiliary task. \\n\\n2) Section 3 calls the Conneau et al (2017) transfer approach supervised. It also distinguishes between semi-supervised approaches that \\u201cdo task-specific adaptation using labeled data\\u201d and unsupervised approaches (including the current one) that also must do exactly that.\\n\\n3) In 4.2, does the 3-layer MLP have non-linearities in its hidden layers? If so, it\\u2019s not equivalent to a single linear layer as claimed, regardless of whether a non-linearity is applied to its output. If not, there is no point in using 3 layers.\\n\\n4) Section 5 gives only minimal descriptions of the tasks - often just acronym and type, presumably because they are borrowed from Conneau et al (2017, 2018). More information needs to be provided.\\n\\n5) Section 6 should show the best results from the Conneau et al papers for calibration.\\n\\n6) Were the baseline systems also supplied with Glove word embeddings? Do they have the same number of parameters?\\n\\n7) Details of the logistic regression classifier?\\n\\n8) Why train on your method on only 1M sentences, since training is fast? Wouldn\\u2019t using more text give better results?\\n\\n9) Given the recent very strong results from the ELMo paper (which you cite), the current paper doesn\\u2019t seem complete without some attempt to replicate this as a baseline - eg, use a deeper encoder, combine state vectors through layers, etc. These features aren\\u2019t incompatible with your objective, which might make for an interesting extension.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice simple idea but insufficient execution and discussion\", \"review\": \"Summary:\\n=======\\nThe paper proposes a discriminative training formulation for learning sentence representations, where a classifier is required to distinguish between real and fake sentences. The sentences are encoded with a Bi-LSTM and the resulting sentence representations are then used in a number of sentence-level tasks (classification, entailment, and retrieval). The experiments show benefits on most tasks compared to Skip-Thought and FastSent baselines, and the information captured by the representations is analyzed with probing tasks, showing that they are better at capturing certain kinds of information like the presence or order of words. \\n\\nThe paper proposes a simple and fairly effective approach for learning sentence encoders. The basic idea is appealing and the experimental results are fairly good. However, at present it seems like more work is required for delivering a comprehensive evaluation and analysis. My main concerns with the paper are the insufficient comparison with prior work, its lack of clarity and organization in certain places, and the limited amount of work. Please see below detailed comments on these and other points, as well as suggestions for how to improve some of these issues.\", \"major_comments\": [\"==============\", \"1. Better baselines and comparisons:\", \"The results are compared only with SKip-Thought and (the weaker) FastSent. However, there are far better models by now. First, already in the Skip-Thought paper there is a version combining Naive Bayes bi-gram features which performs much better on some benchmarks, for example that version would be better than the paper's results on MR (80.4).\", \"Moreover, there have been many newer papers with better results on many of the tasks [1, 2, 4, and references therein]. At the very least, mention should be made that there are better published results, and ideally there should be some comparison to the more relevant papers [1, and maybe others].\", \"2. Paper organization and clarity:\", \"I found Section 2 to be unnecessarily lengthy and disorganized. It mixes motivation with modeling, introduces excessive notation, sometimes without clearly defining it (what is L_{aux}? Why is U in eq. 2 not defined on first usage?), and digresses to weakly related discussions (the link to GANs seems vague and the relation to Introspective Neural Networks is not made clear). The last paragraph is largely redundant with the introduction.\", \"There is also a statement that seems just wrong: \\\"maximizing the data likelihood P(enc(x,\\\\theta_1)|\\\\theta_1,\\\\thera_3)\\\" -- the data likelihood is P(X | ...). Maximizing the encoding of x can be trivially achieved by simply having a constant encoding whose probability is 1.\", \"The entire Section 2 can be condensed to one or two paragraph, essentially deriving the discriminative training task in equations (1) and (2).\", \"On the paper organization level, this lengthy section is followed by the related work and then section 4 on \\\"training tasks for encoders\\\". There is again redundancy between section 4 and 2. Consider merging sections 2 and 4 into one Methodology section, where the general task is formulated, the sentence encoding (Bi-LSTM with max-pooling) and binary classifier (the MLP) are defined, and the fake sentence generation is described. This would make a better flow and remove excessive text.\", \"3. 
Motivation and advantages of the approach:\", \"The approach is motivated by shortcomings of sentence encodings based on language modeling, such as Skip-Thought, which are computationally intensive due to the large output space and the complicated decoding process. This is an appealing motivation, although there have also been simpler methods for sentence representations that work as well as or better than Skip-Thought [1, 2].\", \"The second motivation is not clear to me, and the claim that \\\"the training text collection should include many instances of sentences that have only minor lexical differences but found in completely different contexts\\\" needs more support, either theoretical or empirical. Why wouldn't a language model be able to distinguish such differences?\", \"The advantages of the binary classification task make sense. The point about forcing the encoder to track both syntax and semantics is interesting. Have you tried to analyze whether this indeed happens? The probing tasks are a good way to evaluate this, but most of them are syntactic, except SOMO and perhaps CoordInv and BShift. Still, more analysis of this point would be good.\", \"One concern with generating fake sentences by swapping words is that it would not apply to languages with free word order. Have you considered how well your approach would work on other languages?\", \"4. Relevant related work:\", \"The fake data generation resembles noise used in denoising auto-encoders. A recent application is in unsupervised neural machine translation [3], but there is relevant prior work (see references in [3]).\", \"The binary classification task resembles that in [1], where they train a classifier to distinguish between the representation of a correct neighbor sentences and incorrect sentences.\", \"5. Ideas for more experiments and analysis:\", \"The results are fairly good by using only 1M sentences. How good would they be with the full corpus? What's the effect of training data size on the method?\", \"Table 4 is providing nice examples showing how the fake sentence task generates better sentences representations. Can this be measured on a larger set of examples in aggregate? Why is t-SNE needed for calculating the neighbor rank?\", \"Proving tasks are very interesting, but the discussion is limited. A more detailed discussion and analysis would be useful.\", \"Consider other techniques for generating fake sentences.\"], \"minor_comments\": [\"==============\", \"Related work: the Skip-Thought decoder is a unidirectional LSTM and not a bidirectional one as mentioned, right?\", \"Related work: more details on supervised approaches would be useful.\", \"Section 4.1: how many fake exampels are generated from every real example? Have you experimented with this?\", \"Section 4.2 mentions 2 hidden layers in the MLP but figure 3 indicates 3 layers.\", \"Is there a reason to use multiple layers without a non-linearity in the MLP? This seems unusual. In terms of expressivity, this is equivalent to using one larger linear layer, although there might be some benefit in optimization.\", \"Table 1 seems unnecessary as there is no discussion of how dataset statistics refer to the results. It's enough to refer to previous work.\", \"What are some results missing in table 2, specifically SKipthought (1M) on COCO datasets?\", \"The paragraph on sentence encoder implementation mentions a \\\"validation set accuracy of 89 for word shuffle\\\". Which validation set is that? 
How is convergence determined for word drop?\", \"In analyzing sentence lengths, figure 2 shows the fake sentence to be similar to SKip-Thought on short sentences in SST. Do you have any idea why? Also, fake sentence is better than Skip-Thought on all lengths in MR, not just longer sentences, so I'm not sure there's any signal there.\", \"Figure 3: what is the test set for WordShuffle?\", \"The idea to create negative samples focused towards specific phenomena sounds like a good way to go\", \"Writing, grammar, etc.:\", \"======================\", \"Introduction, paragraph 3, last sentence: start with \\\"The\\\".\", \"Introduction, paragraph 4, first sentences: discriminative training task fake sentence detection -> discriminative training task *of* fake sentence detection\", \"Motivation: an useful -> a useful; we assumes -> we assume; then number -> the number; this much -> this is much\", \"Motivation: do not differ -> do not differ much?\", \"Related work: skip-gram -> skip-gram model; Training Skipthought model -> Training a Skipthought model\", \"Section 4: Prior work use -> Prior work uses/used\", \"Section 4.2: space between \\\"Multi-layer Perceptron\\\" and \\\"(MLP)\\\". This also happens with other acronyms.\", \"Page 6: Our models, however, train -> are trained\", \"Table 3 caption: is bigram in -> is bigram; is co-ordination is -> is-coordination\", \"Page 7: The analysis ... also indicate*s* ... but do*es* not ...\", \"Figure 3 caption: classification/proving task -> tasks\", \"References: fix capitalization in paper titles\", \"References\", \"==========\", \"[1] Logeswaran and Lee, An efficient framework for learning sentence representations\", \"[2] Khodak et al., A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors\", \"[3] Artetxe et al., Unsupervised Neural Machine Translation\", \"[4] Arora et al., A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Method description confusing; empirical comparison against previous work is lacking\", \"review\": [\"This paper proposes a method for learning sentences encoders using artificially generated (fake) sentences. While the idea is interesting, the paper has the following issues:\", \"There are other methods that aim at generating artificial training data, e.g.: Z. Zhao, D. Dua, S. Singh. Generating Natural Adversarial Examples. International Conference on Learning Representations (ICLR). 2018, but no direct comparison is made. Also InferSent (which is cited as related work) trains sentence encoders on SNLI: https://arxiv.org/pdf/1705.02364.pdf. Again a comparison is needed as the encoders learned perform very well on a variety of tasks. Finally, the proposed idea is very similar to ULMfit (https://arxiv.org/pdf/1801.06146.pdf) which trains a language model on a lot of unlabeled data and then finetunes it discriminatively. Finally, there should be a comparison against a langauge model without any extra training in order to assess the benefits of the fake sentence classification part of the model.\", \"It is unclear why the fake sentence construction method proposed by either swapping words or just removing them produces sentences that are fake and/or useful to train on. Sure it is simple, but not necessarily fake. A language model would be able to discriminate between them anyway, by assigning high probability to the original ones, and low probability to the manipulated ones. Not sure we need to train a classifier on top of that.\", \"I found the notation in section 2 confusing. What kind of distribution is P(enc(x,theta1)|theta2, theta3)? I understand that P(x|theta) is the probability of the sentence given a model, but what is the probability of the encoding? It would also be good to see the full derivation to arrive at the expression in the beginning of page 3.\", \"An argument in favour of the proposed method is training speed; however, given that less data is used to train it, it should be faster indeed. In fact, if we consider the amount of time per million sentences, the previous method considered in comparison could be faster (20 hours of 1M sentences is 1280 hours for 64M sentences, more than 6 weeks). More importantly, it is unclear from the description if the same data is used in training both systems or not.\", \"It is unclear how one can estimate the normalization factor in equation 2; it seems that one needs to enumerate over all fake sentences, which is a rather large number due to the number of possible word swaps in the sentence,\", \"I am not sure the generator proposed generates realistic sentences only, \\\"Chicago landed in John on Friday\\\" is rather implausible. Also there is no generation method trained here, it is rule-based as far as I can tell. There is no way to tell the model trained to generate a fake sentence as far as I can tell.\", \"It is a bit odd to criticise other methods ofr using LSTMs with \\\"millions of parameters\\\" while the proposed approach also uses them. A comparison should calculate the number of parameters used in either case.\", \"what is the motivation for having multiple layers without non-linearity instead of a single layer?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HkeKVh05Fm | Multi-Grained Entity Proposal Network for Named Entity Recognition | [
"Congying Xia",
"Chenwei Zhang",
"Tao Yang",
"Yaliang Li",
"Nan Du",
"Xian Wu",
"Wei Fan",
"Fenglong Ma",
"Philip S. Yu"
] | In this paper, we focus on a new Named Entity Recognition (NER) task, i.e., the Multi-grained NER task. This task aims to simultaneously detect both fine-grained and coarse-grained entities in sentences. Correspondingly, we develop a novel Multi-grained Entity Proposal Network (MGEPN). Different from traditional NER models which regard NER as a sequential labeling task, MGEPN provides a new method that proposes entity candidates in the Proposal Network and classifies entities into different categories in the Classification Network. All possible entity candidates including fine-grained ones and coarse-grained ones are proposed in the Proposal Network, which enables the MGEPN model to identify multi-grained entities. In order to better identify named entities and determine their categories, context information is utilized and transferred from the Proposal Network to the Classification Network during the learning process. A novel Entity-Context attention mechanism is also introduced to help the model focus on entity-related context information. Experiments show that our model can obtain state-of-the-art performance on two real-world datasets for both the Multi-grained NER task and the traditional NER task. | [
"entity proposal network",
"entity recognition",
"entities",
"proposal network",
"new",
"ner",
"task",
"ner task",
"novel",
"classification network"
] | https://openreview.net/pdf?id=HkeKVh05Fm | https://openreview.net/forum?id=HkeKVh05Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byxx2T6glN",
"ryekgAcKA7",
"Hyl4pH5tC7",
"BklfWVcFA7",
"HJx9i5BTh7",
"ryx2YYTqh7",
"Sye6lx5K37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544768935923,
1543249383356,
1543247292404,
1543246842014,
1541393057581,
1541228931982,
1541148660824
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1468/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1468/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1468/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1468/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1468/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1468/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1468/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors present a method for fine grained entity tagging, which could be useful in certain practical scenarios.\\n\\nI found the labeling of the CoNLL data with the fine grained entities a bit confusing. The authors did not talk about the details of how the coarse grained labels were changed to fine grained ones. This detail is important and is missing from the paper. Moreover, there are concerns about the novelty of the work, both in terms of the task definition and the model (see the review of Reviewer 1, e.g.).\\n\\nThere is consensus amongst the reviewers, in that, their feedback is lukewarm about the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}",
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks a lot for your review.\\n\\n1, The details for creating the MG datasets:\\n\\nFirst, we find out all the entities labeled in the original dataset such as the CoNLL2003 dataset. Then we output all the long entities which have overlappings with other known entities. Invalid entity pairs are filtered by a manual check by humans. In this way, we obtain a dictionary which stores long entities and it's containing short entities. Using this entity-containing dictionary, we labeled these short entities in the sentences where the long entities are labeled. The whole dataset cannot be released directly due to the copyright issues on the original CONLL2003 dataset and OntoNotes 5.0 dataset. However, we can release the entity-containing dictionary and the pipeline scripts. Through data processing, the MG datasets can be easily recovered if the original CONLL2003 dataset and OntoNotes 5.0 dataset are accessible.\\n\\n2, Traditional NER task is addressed by sequence labeling models. However, entities in the same sentence may have overlaps. Our proposed model is able to overcome the weakness of sequential labeling schema and recognize overlapping entity recognition by proposing entities first and then classifying them into different categories.\\n\\n3, Bad cases where MG NER fails:\\n\\n\\u201cOf course , it was discovered that his men 's basketball coach had directly given six grand to a Yugoslavian recruit , but that was just his stipend for travel and coach Jim O'Brien was just having a problem with the currency exchange rates ... right Andy ?\\u201d, is a test instance in the ACE2005 dataset. Our proposed model recognized \\\"right Andy\\\" as an entity with type PERSON, however the ground truth for \\\"right Andy\\\" is not an entity in this sentence. \\u201cAndy\\u201d is PERSON but \\u201cright Andy\\u201d is not. \\\"right\\\" might be recognized as an adjective which describes the qualities or states of \\\"Andy\\\". So additional information like the characteristic or property of words may help us recognize the correct entities.\"}",
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks a lot for your review.\\n\\nWe\\u2019ve updated our table with baselines [1,2,3] in the revised paper.\", \"this_is_a_brief_version_of_multi_grained_ner_f1_performance_on_the_test_sets_of_three_datasets\": \"MG CoNLL2003 \\tMG OntoNotes 5.0 \\t\\t ACE 2005\\nLample et al. (2016)\\t | 78.52\\t\\t |\\t\\t 67.36 | \\t57.63 |\\t\\nXu et al. (2017)\\t\\t | 82.44\\t\\t |\\t\\t 70.58 | \\t60.6 |\\nKatiyar & Cardie (2018) | \\t -\\t\\t \\t | \\t -\\t |\\t\\t70.5 |\\nJu et al. (2018) | 85.34 | 77.22 | 72.2 |\\nWang & Lu (2018) | 87.92 | 79.64 | 74.5 |\\n-------------------------------------------------------------------------------------------------------------------------------\\nMGEPN\\t\\t\\t\\t | 90.80 | 81.46 | 74.76 |\\n\\n[1] A. Katiyar and C. Cardie, Nested Named Entity Recognition Revisited, NAACL/HLT 2018, June, 2018.\\n[2] M. Ju, et al., A neural layered model for nested named entity recognition, NAACL/HLT 2018, June 2018.\\n[3] Wang et al., Neural Segmental Hypergraphs for Overlapping Mention Recognition, EMNLP 2018, Nov 2018.\"}",
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks a lot for your review.\\n\\n1, For the notations:\\n\\n1) $T$ in $2D_sl*2T$ below Eq. 4 should be updated as $R$.\\n2) $R$ is the max length of an entity proposal. In order to limit the number of generated proposals, we set the maximum length of an entity proposal to R.\\n3) $t$ in Eq 5 should be updated as $r$.\\n\\nWe've updated these typos in the revised version.\\n\\n2, For generating proposals:\\n\\tThe max length of an entity proposal is set as R. We use a sliding window to generate entity proposals that revolve around each token position. For each token position k, we will generate R entity proposals with length varies from 1 to R. \\n\\n\\tFor example, we have an utterance with \\\"t1, t2, t3, t4, t5\\\" and R is set as 5. We will generate 5 proposals revolving around t1, 5 proposals revolving around t2, 5 proposals revolving around t3, ..., 5 proposals revolving around t5. The way five proposals are devised guarantee that each candidate entity within the max length will be captured by one of the proposals at a certain token position. Figure 3 shows an example of the five proposals generated at t3. \\n\\tWe also generate proposals for other token positions using the same strategy.\\n\\tFor t1, we will generate 5 proposals: (t1),(t1,t2),(t0,t1,t2),(t0,t1,t2,t3),(t-1,t0,t1,t2,t3).\\n\\tFor t2, we will generate 5 proposals: (t2),(t2,t3),(t1,t2,t3),(t1,t2,t3,t4),(t0,t1,t2,t3,t4).\\n\\tFor t4, we will generate 5 proposals: (t4),(t4,t5),(t3,t4,t5),(t3,t4,t5,t6),(t2,t3,t4,t5,t6).\\n\\tFor t5, we will generate 5 proposals: (t5),(t5,t6),(t4,t5,t6),(t4,t5,t6,t7),(t3,t4,t5,t6,t7).\\n\\n\\tProposals that contain invalid indexes like (t0,t1,t2), (t5, t6) will be deleted. Hence we can obtain all the valid entity proposals under the condition that the max length is R.\\n\\n3, The R in this sentence, \\\"s_k contains 2R scores including R scores for being an entity and R scores for not being an entity at position k\\\", is a number. We use a two-class softmax function to determine the quality of an entity proposal. For each proposal, we will have one score for being an entity and one score for not being an entity. And there are R entity proposals at position k. So, we have 2R scores for each token position at k and s_k is a vector with dimension 2*R.\\n\\n4, Our system is composed of two modules: the Proposal Network and the Classification Network. The Proposal Network generates proposals on possible entity candidates, where the classification network determines the entity type for each entity candidate. To this end, we provide the experiment results to evaluate the performance of the proposal network on the multi-grained ner task in Table 3 and the performance on the traditional ner task in Table 5.\\n\\n5, We\\u2019ve updated our table with baselines [4, 5, 6] in the revised paper. [1] is proposed as a multi-task learning problem, hence we didn't compare with this model. Works like [1,2,3] are updated in the related work.\", \"this_is_a_brief_version_of_multi_grained_ner_f1_performance_on_the_test_sets_of_three_datasets\": \"MG CoNLL2003 \\tMG OntoNotes 5.0 \\t\\t ACE 2005\\nLample et al. (2016)\\t | 78.52\\t\\t |\\t\\t 67.36 | \\t57.63 |\\t\\nXu et al. (2017)\\t\\t | 82.44\\t\\t |\\t\\t 70.58 | \\t60.6 |\\nKatiyar & Cardie (2018) | \\t -\\t\\t \\t | \\t\\t\\t |\\t\\t70.5 |\\nJu et al. 
(2018) | 85.34 | 77.22 | 72.2 |\\nWang & Lu (2018) | 87.92 | 79.64 | 74.5 |\\n-------------------------------------------------------------------------------------------------------------------------------\\nMGEPN\\t\\t\\t\\t | 90.80 | 81.46 | 74.76 |\\n\\n[1] Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction, EMNLP 2018\\n[2] End-to-end neural coreference resolution, EMNLP 2017\\n[3] Jointly predicting predicates and arguments in neural semantic role labeling, ACL 2018\\n[4] Nested Named Entity Recognition Revisited, NAACL 2018\\n[5] A Neural Layered Model for Nested Named Entity Recognition, NAACL 2018\\n[6] Neural Segmental Hypergraphs for Overlapping Mention Recognition, EMNLP 2018\"}",
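The sliding-window scheme spelled out in the response above is concrete enough to sketch in code. The following is a minimal illustration, not the authors' implementation: it assumes 0-based token indices and the centring rule implied by the t1-t5 examples, and all names are ours.

```python
# Minimal sketch of the sliding-window proposal scheme described above.
# For each token position k it generates R proposals whose spans revolve
# around k (growing right first, then left), and drops proposals whose
# indices fall outside the utterance, exactly as in the t1-t5 example.
def generate_proposals(tokens, R):
    n = len(tokens)
    proposals = []
    for k in range(n):                    # token position the window revolves around
        for r in range(1, R + 1):         # proposal length varies from 1 to R
            start = k - (r - 1) // 2      # centring rule implied by the examples
            end = start + r               # exclusive end index
            if 0 <= start and end <= n:   # delete proposals with invalid indexes
                proposals.append(tuple(tokens[start:end]))
    return proposals

# For t2 (k=1) with R=5 this yields (t2), (t2,t3), (t1,t2,t3), (t1,t2,t3,t4);
# the fifth proposal (t0,...,t4) is dropped because t0 is out of range.
print(generate_proposals(["t1", "t2", "t3", "t4", "t5"], R=5))
```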
"{\"title\": \"Lack of comprehensive related work and lack of clarity in the writing\", \"review\": \"This paper proposed a entity proposal network for named entity recognition which can be effectively detect overlapped spans. The model obtain good performance on both Multi-grained NER task and traditional NER task. The paper is in general well written, the idea of proposal network break the traditional framework of sequence tagging formulation in NER task and thus can be effectively applied to detect overlapped named entities.\\n\\nHowever, I still have many concerns regarding the notation, the novelty of the paper, and the comparison with related literature, especially on previous overlapped span detection NER papers. The detailed concerns and questions are as follows:\\nThe notations are very confusing. Many of the notations are not defined. For example, what does $T$ in $2D_sl*2T$ below Eq. 4 indicates? What does $R$ scores means? I guess $R$ does not equal to number of entity types, but I\\u2019m not sure what $R$ exactly indicates. If $R$ is not number of entity types, why do you need R scores for being an entity and R scores for not being an entity? And what is $t$ in Eq 5? Is that entity type id or something else?\\nI\\u2019m still confused how you select the entity spans from a large number of entity candidates. In Figure 5, if the max window length is 5, there may be more span candidates than the listed 5 examples, such as t_3 t_4 t_5. How do you prune it out?\\nTable 5 is weird. There is not comparison with any baselines but just a report of the performance with this system. I don\\u2019t know what point this table is showing.\\nThis is not the first paper that enumerates all possible spans for NER task.The idea of enumerating possible spans for NER task has appeared in [1] and can also effectively detect overlapped span. I would like to see the performance comparison between the two systems. The enumerating span ideas has been applied in many other tasks as well such as coreference resolution [2]and SRL[3], none of which is mentioned in related work.\\nI feel that most of the gain is from ELMo but not the model architecture itself, since in Table 4, the improvement from the ELMo is only 0.06. The LSTM-LSTM-CRF is without adding ELMo, which is not a fair comparison. \\nThe comparison of baselines is not adequate and is far from enough. The paper only compares with LSTM+CRF frameworks, which are not designed for detecting overlapped spans. There are many papers on detecting overlapping spans, such as [4], [5] and [6]. It\\u2019s important to compare with those paper since those methods are especially designed for overlapped span NER tasks.\\n[1] Multi-Task Identification of Entities, Relations, and Coreferencefor Scientific Knowledge Graph Construction, EMNLP 2018\\n[2] End-to-end neural coreference resolution, EMNLP 2017\\n[3] Jointly predicting predicates and arguments in neural semantic role labeling, ACL 2018\\n[4] Nested Named Entity Recognition Revisited, NAACL 2018\\n[5] A Neural Layered Model for Nested Named Entity Recognition, NAACL 2018\\n[6] Neural Segmental Hypergraphs for Overlapping Mention Recognition, EMNLP 2018\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting task of multi-grained NER, reasonable models.\", \"review\": \"<Summary>\\nAuthors propose the \\u201cMulti-grained NER (MGNER) task\\u201d which aims at detecting entities at both coarse and fine-grained levels. Authors propose a Multi-grained Entity Proposal Network (MGEPN) which comprises (1) a Proposal Network that determines entity boundaries, and (2) a Classification network that classifies each proposed segment of an entity.\\n\\nThe task is primarily tested against the proposed method itself. The proposed method does outperform traditional sequence-labeling baseline model (LSTM-LSTM-CRF), validating the proposed approach. When the proposed model (trained with extra MG data) is evaluated on the traditional NER task (on test sets), however, no significant improvement is observed -- I believe this result is understandable though, because e.g. MG datasets have slightly different label distributions from original datasets, hence likely to result in lower recall, etc.\\n\\n<Comments>\\nThe task studied is interesting, and can potentially benefit other downstream applications that consume NER results -- although it seems as though similar tasks have been studied prior to this study. The novelty of the proposed architecture is moderate - while each component of the model does not have too much technical novelty, the idea of separating the model into a proposal network and a classifier seems to be a new approach in the context of NER (that diverges from the traditional sequence labelling approaches), and is reasonably designed for the proposed task.\\n\\nThe details for creating the MG datasets is missing - are they labeled by human labelers, or bootstrapped? Experts or crowd-sourced? By how many people? Will the new datasets be released? Please provide clarifications.\\n\\nThe proposed approach does not or barely outperform base models when tested on the traditional NER task -- the proposed work thus can be strengthened by better illustrating the motivation of the MGNER task and/or validating its efficacy in other downstream tasks, etc. \\n\\n\\nAuthors could provide better insights into the new proposed task by providing more in-depth error analysis - especially the cases when MG NER fails as well (e.g. when coarse-grained prediction predicts a false positive named-entity, etc.)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nested NER dection\", \"review\": \"This paper describes multi-grained entity recognition. Experimental results show that the proposed Multi-Grained Proposal Network achieve better performance on NER tasks.\", \"major_comments\": \"- A major weakness of this paper is lack of citations to recent related studies. There are studies on nested NER published this June:\\n\\nA. Katiyar and C. Cardie, Nested Named Entity Recognition Revisited, NAACL/HLT 2018, June, 2018.\\nM. Ju, et al., A neural layered model for nested named entity recognition, NAACL/HLT 2018, June 2018.\\n\\nYou need to compare these conventional methods to your proposed method.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJl_VnR9Km | A Model Cortical Network for Spatiotemporal Sequence Learning and Prediction | [
"Jielin Qiu",
"Ge Huang",
"Tai Sing Lee"
] | In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet), to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames. The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system. It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy. The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below. Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as well as the circuit's internal memory states to generate a prediction of the incoming signals. The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy in the style of predictive self-supervised learning. The network processes data in blocks of video frames rather than on a frame-to-frame basis. This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets. We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level. Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors. These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex. | [
"cortical models",
"spatiotemporal memory",
"video prediction",
"predictive coding"
] | https://openreview.net/pdf?id=BJl_VnR9Km | https://openreview.net/forum?id=BJl_VnR9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxPJUOZxN",
"r1ljdclkgV",
"HJlgtsjCJV",
"rkxQNsjRyN",
"Syer38msk4",
"rJx0ap4c1N",
"HJgbiy9tAm",
"r1emuJqFCQ",
"SylbH1cKCm",
"rJl2cTKtRm",
"S1xdd6tY0X",
"S1g4KnKYAQ",
"HyxuQhYYAX",
"HJloz_KK0X",
"SyxwDou63X",
"rygqq3Tc3X",
"HkxSQjyS3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544811999042,
1544649331247,
1544629111931,
1544629035460,
1544398508701,
1544338886377,
1543245721182,
1543245674564,
1543245624856,
1543245204135,
1543245167991,
1543244924468,
1543244832242,
1543243795143,
1541405534830,
1541229714364,
1540844316604
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1467/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1467/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1467/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1467/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1467/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"There was major disagreement between reviewers on this paper. Two reviewers recommend acceptance, and one firm rejection. The initial version of the manuscript was of poor quality in terms of exposition, as noted by all reviewers. However, the authors responded carefully and thoroughly to reviewer comments, and major clarity and technical issues were resolved by all authors.\\n\\nI ask PCs to note that the paper, as originally submitted, was not fit for acceptance, and reviewers noted major changes during the review process. I do believe this behavior should be discouraged, since it effectively requires reviewers to examine the paper twice. Regardless, the final overall score of the paper does not meet the bar for acceptance into ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Significant revisions in review\"}",
"{\"title\": \"Manuscript was complete content-wise. The presentation was significantly improved during the revision phase.\", \"comment\": \"I only found the presentation to be lacking in the initial submission.\\nI've seen countless submissions that were unfinished in terms of their content with vague claims. This manuscript was not one of these submissions!\\nWhile I agree with disincentivizing incomplete work, I believe the initial manuscript was complete in terms of content and readable to a large degree. In the revised version, the readability and the description of model details has significantly improved.\"}",
"{\"title\": \"Response to reviewer 1's new comments on rebuttal [Part 2]\", \"comment\": \"We tried our best to submit the best version of the paper by the deadline, but we did have our limitations. We were blind to our own imperfection at the time. We did not submit the paper by the deadline with the intent of \\u201cfinishing the paper during the rebuttal period\\u201d. We appreciated the reviewers\\u2019 suggestions and careful reviews, and we did take the opportunities, as we believe to be permitted by ICLR policy, to improve our presentation.\\n\\nIndeed, we believe we have already suffered for the imperfection in our original presentation \\u2013 Reviewer 2 for example maintained the original 7 score, even though (s)he appreciated the contributions of the paper even in the first round, and has clearly taken this issue into account; Reviewer 3 increased to score to 7 from 3, as (s)he promised to do if we improved our writing, because (s)he could see the contribution and potential impact of the paper even in the original submission, despite our shortcomings in presentation in the first round. Had we done a better job in presentation and paper writing, we believe we would have been given even better scores as the paper would add diversity and significant values to ICLR contributions and would build connections between machine learning and neuroscience. \\n\\nWhile we can understand and sympathize with Reviewer 1\\u2019s philosophy, we also believe ICLR policy should be uniformly applied to all submissions on what kinds of revisions are allowed, and what constitutes the key aspects of the papers for comparison, judging and final evaluation for acceptance.\"}",
"{\"title\": \"Response to reviewer 1's new comments on rebuttal [Part 1]\", \"comment\": \"The authors addressed most of my comments. However, I think it needs to be taken into account how unfinished the original submission was. The authors admit to submitting the paper at the last minute, presumably hoping to use the review period to finish the paper. I know that this is common practice, but that does not make it acceptable. Submitting unfinished work with the hope to finish it later is unfair towards authors who submit finished papers in time. Allowing this behavior also exacerbates incentives to work fast rather than thoroughly. This leads to poor science. Finally, submitting unfinished work wastes reviewer time, eroding reviewer motivation to perform thorough initial reviews.\\n\\nThe review process exists to encourage thorough work and good scientific practice. In my opinion, this includes disincentivizing the last-minute submission of unfinished work.\\n\\nAuthors\\u2019 responses: We are sorry that reviewer 1 declined to re-assess our paper in its current form on the ground that (1) our paper was \\u201cunfinished\\u201d in the original submission, and (2) it would be unfair to our competitors who submitted \\u201cfinished\\u201d papers in the original deadline. In our defense, our paper was \\u201cunfinished\\u201d only in terms of our presentation and English writing, missing typographical and a grammar check in certain sections of the paper. Our technical work and all the core results and graphs were finished by the time of submission and they have remained the same in the revision. We have added one subfigure(4D) and two Supplementary sections to satisfy Reviewer 1\\u2019s suggestions but these are not central to our paper. These additions are: (1) Figure 4D was added to address Reviewer 1\\u2019s request that we make explicit the training time-performance comparison between our model and PredRNN and PredNet. We should further point out that training time was not a core contribution of our paper, nor is efficiency our major claim, and Figure 4D thus is not critical. (2) Supplementary Section B was added to compare representations and decoding results between the representations of our models and the other models, also thanks to Reviewer 1\\u2019s suggestion. We should point out that other papers on the same topic mostly reported and compared performance without providing insights and information on the representations \\u2013 this includes PredNet and PredRNN. (3) Supplementary Figure 11 was added also to satisfy reviewer 1\\u2019s suggestion on the Meyer and Olson\\u2019s experiment, and again that is not critical to the paper. Because of that, we moved the Meyer and Olson\\u2019s IT experiment to the Supplementary information.\\n\\nThus, we argue that our technical work in fact was complete and finished by the time of the original submission. We have not changed our models, and we have not added significant new core result figures in the new revision. If technical and conceptual contributions and experimental results are key for judging one paper against another, we believe our revision is NOT unfair to other competitors. \\n\\nWe do agree and admit that our original submission\\u2019s writing is far from perfect and some of our presentation left much to be desired. 
We also are thankful for the reviewers\\u2019 helpful comments, and appreciated ICLR\\u2019s current policy of allowing revision of the manuscript for clarity of presentation and to address the reviewers\\u2019 questions and concerns. We have not tried to game the system as we have not upgraded our models or changed or upgraded our basic core findings and results. We did use this opportunity to polish the presentation of the paper based on reviewers\\u2019 suggestions and criticisms.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for taking the time to reevaluate our paper. We really appreciate it.\"}",
"{\"title\": \"My concerns have been addressed!\", \"comment\": \"I believe all points I raised were addressed in the revision. I increased the score, recommending the paper for acceptance.\"}",
"{\"title\": \"Point-by-point address to reviewer's concerns [Part 3]\", \"comment\": \"2. Figure 1 is not fully annotated and could be clearer. What does the asterisk mean? Why are there multiple arrows between the P\\u2019s? What do the small arrows next to the big arrows mean? Please expand the legend. Consider using colors to differentiate between components.\", \"re\": \"Thank you, we have corrected that.\\n\\nThe paper contains intriguing ideas about the benefits of sparse and predictive coding, and the direct comparison to biological data potentially broadens the impact of the work. However, major claims are unsubstantiated, and accuracy and clarity need to be improved to make the manuscript acceptable. \\nThe benefit of sparse convolution and residual coding of video has been demonstrated by Pan et al. (2018) in the context of video processing. It is also demonstrated in Lotter and Cox\\u2019s PredNet, though they might not have realized at the time that their predictive coding scheme actually has the benefit of learning sparse convolution kernel and has the benefit of computational efficiency. In the theoretical neuroscience community, sparse coding is considered mostly for coding efficiency, not for making computation efficient as well. We made this observation based on Pan et al\\u2019s contribution, and based on comparisons between our frame-to-frame model with Lotter and Cox\\u2019s PredNet, and our Block-to-Block model with and without Pan\\u2019s sparsification. We have documented all these in Figure 4C to clarify these issues, but this observation, though interesting, is really not the main contribution of the paper.\"}",
"{\"title\": \"Point-by-point address to reviewer's concerns [Part 2]\", \"comment\": \"3. In Figure 6, the authors claim that more layers lead to \\u201cbetter\\u201d representations. What does \\u201cbetter\\u201d mean? It is implied that the networks with more layers actually make the different motions more discriminable. Please quantify this. For example, a linear classifier could be trained on the neural activations. Also, how is this related to the rest of the paper? Do the authors claim that this result is unique to the proposed architecture? In that case, please provide a quantitative comparison to the PredNet or PredRNN++.\", \"re\": \"PredNet has a hierarchy of representation for making predictions on prediction errors. That is, in PreNet, each layer\\u2019s LSTM is trying to predict the prediction errors observed in the earlier layer. PredRNN++ likely have a hierarchical representation of spatiotemporal features in the intermediate layers but remember that their highest layer output the prediction at the image level, so it is functions like an LSTM-based autoencoder. Our hierarchical prediction network (HPNet) is designed to address these conceptual deficiencies or problems (in terms of neural plausibility in our mind) in these two models by having both feedforward analyzed feature representation and feedback expected feature representation at every layer, and then compute prediction errors at each layer. It is a very simple conceptual framework common to many the classical hierarchical cortical processing model frameworks (Mumford, Ullman etc.). We now expand our Related Work section to provide a broader view on these issues.\", \"minor_comments\": \"1.I don\\u2019t understand the \\u201ctension\\u201d between hierarchical feature representations and residual representations brought up in Section 2. Do the PredNet and PredRNN++ not contain a hierarchy of representations?\"}",
"{\"title\": \"Point-by-point address to reviewer's concerns [Part 1]\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\nThe paper contains intriguing ideas about the benefits of sparse and predictive coding, and the direct comparison to biological data potentially broadens the impact of the work. However, major claims are unsubstantiated, and accuracy and clarity need to be improved to make the manuscript acceptable.\", \"major_concerns\": \"1. The authors claim that their architecture is more efficiency because it uses sparse coding of residuals. Implementation details and some quantitative arguments, ideally benchmarks, need to be provided to show that their architecture is actually more efficient than PredRNN++ and PredNet.\", \"re\": \"Note, we have changed C (chunk) to B (block) in order to have more consistent notations and terminology. Is it a fair comparison with the baseline model PredRNN++? During testing, all the five networks (B-B, B-F, F-F, PredNet, PredRNN++) had access to the same number of frames (the first 20 frames) and have to predict the future 20 frames of the 40-frame test sets. During training, they were all trained on 40 frames movies of the training sets drawn from the same database. The comparison is fair in the sense that they have equal access to the same amount of information and they have to solve the same problem. Both PredNet and PredRNN++ took in one frame at a time to predict one frame at a time, while our B-B took in a block of frames to predict a block of frames. PredRNN++ used a stack of LSTM to remember sequences and learn the feature transformation in the fashion of an autoencoder, while HPNet used the idea of a spatiotemporal block as well as a hierarchy of LSTM to do the same. Absolute fair comparison is difficult but we are fair at least in the amount of information available to each model, as reviewer asked.\"}",
"{\"title\": \"Point-by-point address to reviewer's concerns [Part 2]\", \"comment\": \"7. This work is interesting because it proposes a sequence prediction technique that accounts well for familiarity effects found in different regions of the visual system. \\u2026 I believe this work at the intersection of deep learning and neuroscience is an interesting contribution for both fields. However, the paper would benefit from these clarifications and a thorough proof-reading for the many typos present in the text.\\n\\n\\nRe. Thank you for your recognition and appreciation of the contributions of our work. We have revised our paper very carefully and tried to better explain why this is indeed an interesting (and important) contribution to both fields.\"}",
"{\"title\": \"Point-by-point address to reviewer's concerns [Part 1]\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\n1. The authors claim repeatedly that using the prediction error framework is computationally more efficient than alternatives but they do not show this.\", \"re\": \"Yes, our apologies. We have revised our paper very carefully and extensively.\"}",
"{\"title\": \"Point-by-point address to reviewers\\u2019 concerns [Part 2]\", \"comment\": \"8. Adding a sentence explaining the intuition behind using SatLU in equation (1) might be helpful.\", \"re\": \"SATLU is a saturating non-linearity set at the maximum pixel value: SatLU(x; p_{max}):= min(p_{max}, x). Definitions: f is non-saturating iff (|limz\\u2192\\u2212\\u221ef(z)|=+\\u221e)\\u2228|limz\\u2192+\\u221ef(z)|=+\\u221e), f is saturating iff ff is not non-saturating, as we now explained it more clearly in Section 3.4.\\n\\n9. Strengths:\\nThe performance improvements over competing methods on Moving-MNIST and KTH presented in the experimental section are significant. The analysis seems fairly thorough.\\n\\nYes, our analysis is not perfect, but better than many other state-of-the-art video prediction models which did not provide representational analysis to reveal the underlying reasons explaining why their models actually work better. \\n\\n10. To summarize my feedback: I think experimental results and analysis are strong, but the presentation is strongly lacking! The description of the approach definitely needs to be improved to make replication of the results easier. It might help to have someone who doesn\\u2019t know the model already read the description and explain it back to you while revising the draft. I hope I could provide some helpful suggestions. I would recommend the manuscript for acceptance, if the presentation is significantly improved! \\n\\nYes. Thank you very much for all the helpful suggestions and generosity despite our shortcomings. We hope our serious revision of our manuscript will allow it to meet the standard of excellence for ICLR.\"}",
"{\"title\": \"Point-by-point address to reviewers\\u2019 concerns [Part 1]\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\n1. The description of the sparse predictive module is difficult to follow, and I am not sure I understood it completely. I find it a bit unintuitive to start the description with the errors, instead of explaining what is computed from beginning to end. The section reads more like a loose description of isolated parts instead of an integrated whole. Maybe walking the reader step-by-step through one complete iteration of the computation helps to clarify this. Also, not every character in equations 1-5 and the algorithm has been defined. For example, what is L?\", \"re\": \"We have added more background from theoretical neuroscience -- Mumford\\u2019s ideas on analysis by synthesis and Ullman\\u2019s counter-stream model, which is the inspiration of the development of our model. We also provided some recent neurophysiological studies on prediction errors in the inferotemporal cortex (Meyer and Olson 2012), as well as prediction related memory recall phenomena in the primary visual cortex (V1) of mice (Han et al. 2008, Xu et al. 2012). Our study on V1 and V2 neurons\\u2019 sensitivity to memory of familiar complex video episodes is novel. We moved our simulation results of Meyer and Olson (2012) to the Appendix to yield room for some additional clarifying discussion on this experiment.\"}",
"{\"title\": \"General responses to the reviewers and the program committee\", \"comment\": \"We thank the reviewers for reading our paper carefully, despite our poor presentation and numerous typographical mistakes. We thank reviewer 1 for recognizing that \\u201cthe experimental results and analysis are strong\\u201d and for stating that our paper would be acceptable if the presentation were improved. We also thank reviewer 2 for recognizing that our \\u201cwork is interesting because it proposes a sequence prediction technique that accounts well for familiarity effects found in different regions of the visual system.\\u201d and that this work is \\u201cat the intersection of deep learning and neuroscience is an interesting contribution for both fields.\\u201d Finally, we thank reviewer 3 for recognizing \\u201cthe paper contains intriguing ideas about the benefits of sparse and predictive coding, and the direct comparison to biological data potentially broadens the impact of the work\\u201d.\\nWe have seriously revised and proofread the paper and we hope that the current version will receive a more favorable score.\", \"here_is_a_highlight_of_the_contribution_of_our_paper\": \"(1) Our model HPNet provides an alternative to the Predictive Coding model, the basis of PredNet, which is quite popular in neuroscience. The model might be the first computational competent hierarchical neural cortical model that implements the classical computational framework for cortical processing (analysis by synthesis, counter-stream architecture, interactive activation and adaptive resonance) and is competitive in solving real computer vision problems. It works better because the synthesis is no longer done by simple deconvolution or multiplying through feedback connection weights as in classical models but are generated by the gated recurrent circuits in LSTM. It shows that feature hierarchy works better than prediction error hierarchy (PredNet). \\n(2) We provided thorough analysis of the representations of our network to understand what could be the reasons that the network is working better than PredNet or PredRNN++. We discovered that recurrent feedback has reshaped the representations of the early modules (layers), making \\u201cneurons\\u201d in the bottom modules sensitive to memories of global movement patterns, i.e. more abstract concepts, rather than just local spatiotemporal features in their receptive fields. The semantic clustering of global movement patterns might have contributed to better long-range video prediction by facilitating the relationship learning of movement patterns. 
Thanks to the reviewer\\u2019s suggestion, we performed a decoding experiment and showed quantitatively that global movement patterns have indeed become more segregated and discriminable in the representation of the early modules due to feedback, and more importantly, HPNet\\u2019s hierarchical representations contain semantic clusters, achieving 63% decoding accuracy in the 4th layer for classifying movement patterns, while PredRNN\\u2019s and particularly PredNet\\u2019s hierarchical LSTMs provide little semantic information (all layers) for decoding the global movement patterns (< 26%) \\u2013 See Appendix B.\\n(3) The most interesting part of our story is that we found that neurons in the early visual cortex of awake monkeys developed similar sensitivity to memories of global movement patterns in video when they are repeatedly exposed to a set of movies. This discovery, under the \\u201ccomputational illumination\\u201d of HPNet, provides new insights and concrete evidence for the potential computational logic of recurrent feedback in the cortex, and gives us more faith in the neural plausibility of this class of predictive self-supervised learning models. \\n\\nWe believe these core claims are substantiated by our experiments, analysis and data. The idea that sparse coding might make convolution fast (reviewer 3) is not really our contribution. Pan et al. (2018) have provided experimental results on video processing that show sparsifying the representation can speed up computation (see our point-by-point response to reviewer 3 for details). However, we do apologize if we inadvertently made statements that gave the mistaken impression that HPNet is faster to train than PredNet and PredRNN++. We have added a graph (Figure 4d) documenting the training time and performance of the different models, which clearly shows PredNet is the fastest and ours is the slowest to train. HPNet processes spatiotemporal blocks with a 3D convolutional LSTM, and it has a feature hierarchy in both its feedforward and feedback paths absent in the other two models, so naturally it would take longer to train than PredRNN++. It is only with sparsification and taking longer strides in the sliding window that we can train HPNet at comparable times. \\n\\nWe hope the reviewers and the program committee seriously consider our revised paper for its potential impact in both neuroscience and machine learning.\"}",
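The decoding experiment described in point (2) above amounts to fitting a linear read-out on a layer's unit activations. A hedged sketch of such an analysis follows; the shapes, the synthetic stand-in data, and the use of scikit-learn are our assumptions, not the authors' exact protocol.

```python
# Hedged sketch of a linear-decoding analysis: fit a linear classifier on a
# layer's unit activations to measure how discriminable the global movement
# patterns are in that representation. Random arrays stand in for the real
# per-layer activations and movement-pattern labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 256))   # stand-in for per-layer activations
labels = rng.integers(0, 4, size=1000)    # stand-in movement-pattern classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("decoding accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```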
"{\"title\": \"SOTA results in video prediction and interesting analysis but the presentation is severely lacking clarity\", \"review\": \"Summary:\\nThe paper presents a novel architecture for video prediction consisting of a feed-forward path with sparse convolutions and an LSTM generating predictions of chunks of video based on the sequence of input chunks. A feedback path links the LSTMs of the different sparse prediction modules. Experiments in video prediction are performed on moving-MNIST and the KTH action recognition dataset and the model achieves state-of-the-art performance on both. Interestingly, the model is exhibits prediction suppressions effects as have been observed during neurophysiological experiments in the inferotemporal cortex of macaque monkeys. The proposed method exhibits prediction suppression effects also in the lower layers, motivating a neurophysiological experiment in the earlier V1/V2 regions, which yielded an observation similar to the model\\u2019s prediction.\", \"strengths\": \"The performance improvements over competing methods on Moving-MNIST and KTH presented in the experimental section are significant. The analysis seems fairly thorough.\", \"weaknesses_and_requests_for_clarification\": [\"The description of the sparse predictive module is difficult to follow, and I am not sure I understood it completely. I find it a bit unintuitive to start the description with the errors, instead of explaining what is computed from beginning to end. The section reads more like a loose description of isolated parts instead of an integrated whole. Maybe walking the reader step-by-step through one complete iteration of the computation helps to clarify this. Also, not every character in equations 1-5 and the algorithm has been defined. For example, what is L?\", \"The text makes it sound like the idea of using 3d convolutions in a convLSTM is novel. 3D convLSTMs have been previously used in 3d vision, see\", \"Choy, C. B., Xu, D., Gwak, J., Chen, K., & Savarese, S. (2016, October). 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision (pp. 628-644). Springer, Cham.\", \"The application of 3d convLSTMs to video might be new, but the mentioned paper by Choy et al. (2016) should be cited.\", \"You mention that padding is used for rows and columns. Are you using padding on the temporal axis as well?\", \"The paper seems to be written in a rush, as it contains way too many typos and grammar mistakes, e.g. \\u201ca hierarchical of\\u201d (should be \\u201ca hierarchy of\\u201c or just \\u201chierarchical\\u201d), \\u201cfeedforwad\\u201d, \\u201cExpriment\\u201d (section 4 heading), \\u201cachievedbetter\\u201d, \\u201ctrained monkeys to image pairs\\u201d, \\u201cpervious\\u201d, \\u201cperserves\\u201d, \\u201cprocessure\\u201d, \\u201csequnence\\u201d \\u201cviusal\\u201d. Many typos could have been caught by a spellcheck! This would improve readability a lot!\", \"The citations are not properly formatted: (1) If the author names are used as part of the sentence, use e.g. Lotter et al. (2016), else (2) If the author names are not part of the sentence, use (Lotter et al., 2016). These two styles are mixed randomly in the current draft. 
This makes the manuscript, which already contains a lot of language mistakes, difficult to read.\", \"Abbreviations that are used but not introduced: CNN, IT, PSTH, DCNN, LSTM.\", \"The related work section could benefit from referring to some of the related work in neuroscience.\", \"Adding a sentence explaining the intuition behind using SatLU in equation (1) might be helpful\"], \"to_summarize_my_feedback\": \"I think experimental results and analysis are strong, but the presentation is strongly lacking! The description of the approach definitely needs to be improved to make replication of the results easier. It might help to have someone who doesn\\u2019t know the model already read the description and explain it back to you while revising the draft. I hope I could provide some helpful suggestions. I would recommend the manuscript for acceptance, if the presentation is significantly improved!\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting bio-inspired sequence prediction network explaining familiarity effects in early and late visual system.\", \"review\": \"The authors propose a biologically inspired ANN to predict a video sequence, that performs better than previous biologically inspired video sequence predictors (>PredNet and >PredRNN+). Their model also accounts for familiarity effects (i.e. decrease in neural activations when repeatedly presenting the same visual sequence) found in primate early visual system V1/V2 (data recorded for this article) and late visual system IT.\\n\\nThis work is interesting because it proposes a sequence prediction technique that accounts well for familiarity effects found in different regions of the visual system.\", \"however_one_of_the_claims_does_not_seem_supported_by_data\": \"1. The authors claim repeatedly that using the prediction error framework is computationally more efficient than alternatives but they do not show this.\\n\\nFurthermore, the article would benefit from the following clarifications:\\n\\n2. It is unclear how their network performance compares to state-of-the-art NON neurally plausible models of sequence prediction.\\n\\n3. It is unclear from the introduction how they modified the network proposed by (Pan et al) to obtain their network. \\n\\n4. \\\"The SSIM index over time shows that the C-C method is more effective than C-F method, for C-F method performs better than C-C method in the short term perdiction when ground truth images are provided, but setting sliding window is too time-consuming, much more than the performance increase\\\"\\nPlease clarify this statement.\\n\\n5. Macaque experiments: Some experiments on macaques were performed for this article, but there is no mention of ethical guidelines and whether they were respected.\\n\\n6. Many typos are present in the text!\\n\\nI believe this work at the intersection of deep learning and neuroscience is an interesting contribution for both fields. However, the paper would benefit from these clarifications and a thorough proof-reading for the many typos present in the text.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Clarity and quantification need improvement\", \"review\": \"This paper proposes a network architecture inspired by the primate visual cortex. The architecture includes feedforward, feedback, and local recurrent connections, which together implement a predictive coding scheme. Some versions of the network are shown to outperform the similar PredNet and PredRNN architectures on two video prediction tasks: moving MNIST and KTH human actions. Finally, the authors provide neural data from monkeys and argue that their network shows similarities to the biological data.\\n\\nThe paper contains intriguing ideas about the benefits of sparse and predictive coding, and the direct comparison to biological data potentially broadens the impact of the work. However, major claims are unsubstantiated, and accuracy and clarity need to be improved to make the manuscript acceptable.\", \"major_concerns\": \"1. The authors claim that their architecture is more efficient because it uses sparse coding of residuals. Implementation details and some quantitative arguments, ideally benchmarks, need to be provided to show that their architecture is actually more efficient than PredRNN++ and PredNet.\\n\\n2. It is unclear whether the PredRNN++ should be compared to the C-C or C-F version of the network. Does the PredRNN++ have access to as many current and future frames as the C-C net? Is this a fair comparison? Please provide a clearer description of the different versions of your network and how they relate to the baseline models. That section in particular has many confusing typos (frame-by-chunk, chunk-by-frame abbreviations mixed up).\\n\\n3. In Figure 6, the authors claim that more layers lead to \\u201cbetter\\u201d representations. What does \\u201cbetter\\u201d mean? It is implied that the networks with more layers actually make the different motions more discriminable. Please quantify this. For example, a linear classifier could be trained on the neural activations. Also, how is this related to the rest of the paper? Do the authors claim that this result is unique to the proposed architecture? In that case, please provide a quantitative comparison to the PredNet or PredRNN++.\\n\\n4. In Figure 9, the presentation is highly confusing. Plots (c) to (h) are clearly made to look like the monkey data in (b) (nonlinear x-axes?), but show totally different timescales (training epochs vs. milliseconds). Please explain why it makes sense to compare these timescales. Also, what does it mean for a training epoch to have a negative value?\", \"minor_comments\": \"1. I don\\u2019t understand the \\u201ctension\\u201d between hierarchical feature representations and residual representations brought up in Section 2. Do the PredNet and PredRNN++ not contain a hierarchy of representations?\\n\\n2. Figure 1 is not fully annotated and could be clearer. What does the asterisk mean? Why are there multiple arrows between the P\\u2019s? What do the small arrows next to the big arrows mean? Please expand the legend. Consider using colors to differentiate between components.\\n\\n3. I don\\u2019t understand Figure 4c. According to the text, this plot shows \\u201ceffectiveness as a function of time\\u201d, but the x-axis is labeled \\u201cLayer Number\\u201d. What does \\u201ceffectiveness over time\\u201d mean? What does the y-label mean (SSIM per day?)? What is \\u201ctrunk prediction\\u201d (not mentioned anywhere in the text)?\\n\\n4. 
For Figure 9, it is pointed out that activity is expected to be lower for E neurons, but is also lower for R and P. This is interesting and also applies to Figure 8, so it would be good to see Figure 8 split up by E/R/P, too. \\n\\n5. The word \\u201cFigure\\u201d is missing before figure references.\\n\\n6. Please proof-read for typography, punctuation and grammar.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rJl_NhR9K7 | ISA-VAE: Independent Subspace Analysis with Variational Autoencoders | [
"Jan Stühmer",
"Richard Turner",
"Sebastian Nowozin"
] | Recent work has shown increased interest in using the Variational Autoencoder (VAE) framework to discover interpretable representations of data in an unsupervised way. These methods have focussed largely on modifying the variational cost function to achieve this goal. However, we show that methods like beta-VAE simplify the tendency of variational inference to underfit causing pathological over-pruning and over-orthogonalization of learned components. In this paper we take a complementary approach: to modify the probabilistic model to encourage structured latent variable representations to be discovered. Specifically, the standard VAE probabilistic model is unidentifiable: the likelihood of the parameters is invariant under rotations of the latent space. This means there is no pressure to identify each true factor of variation with a latent variable.
We therefore employ a rich prior distribution, akin to the ICA model, that breaks the rotational symmetry.
Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off between reconstruction loss and disentanglement introduced by modified cost functions like beta-VAE and TCVAE. The proposed prior allows these approaches to be improved significantly over the state of the art with respect to both disentanglement and reconstruction quality. | [
"representation learning",
"disentanglement",
"interpretability",
"variational autoencoders"
] | https://openreview.net/pdf?id=rJl_NhR9K7 | https://openreview.net/forum?id=rJl_NhR9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1g-X8JpyN",
"Skxp0qAYRQ",
"HyxTpuAtAX",
"rklh9ApFC7",
"HJlHXsVrTm",
"rylHP6da3m",
"SJxusGwwnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544513048685,
1543264980954,
1543264452638,
1543261843677,
1541913372645,
1541406044541,
1541005983617
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1466/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1466/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1466/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1466/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes to improve VAE by using a prior distribution that has been previously proposed for independent subspace analysis (ISA). The clarity of the paper could be improved by more clearly describing the proposed method and its implementation details. The originality is not that high, as the main change to VAE is replacing the usual isotropic Gaussian prior with an ISA prior. Moreover, the paper does not provide comparison to VAEs with other more sophisticated priors, such as the VampPrior, and it is unclear whether using the ISA prior makes it difficult to scale to high-dimensional observations. Therefore, it is difficult to evaluate the significance of ISA-VAE. The authors are encouraged to carefully revise their paper to address these concerns.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"VAE with ISA prior\"}",
"{\"title\": \"Comment to Reviewer 3\", \"comment\": \"We are glad that R3 appreciates the \\\"competitive quantitative experiment results and promising qualitative results\\\" and finds an important contribution to the state-of-the art in our publication.\", \"we_also_thank_for_the_constructive_feedback_which_we_address_in_the_following\": \"1) Since we only modify the prior and keep the approximate posterior following the original VAE approach as a multivariate Gaussian, the reparameterization trick can be applied as usual. The proposed method only requires to compute the log likelihood of the prior. We added a paragraph to section 3 that explains this in more detail.\\n\\n2) We added a description of the training process and the code of the encoder and decoder to the appendix.\\n\\n3) We reworked the figures and captions and provide more details on the experiments especially towards reproducability and clarity.\\n\\n4) We have reworked the notations towards correctness and clarity.\"}",
"{\"title\": \"Comment to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for the constructive feedback and appreciate that R3 \\\"agree[s] with the motivation of the manuscript, and in particular the choice of the prior distribution\\\". Also we thank Reviewer 3 for further comments that helped to improve the clarity of the paper.\\n \\nWe are glad that both reviewers R2 and R3 support the general approach taken and found the motivation of our work and the experiments convincing. We believe that the revised manuscript addresses all of the concerns of R3. Sincerely we hope that R3 might want to reconsider the revised manuscript for publication when coming to the final review score at the end of the rebuttal phase.\", \"we_will_now_reply_to_the_concerns_raised_by_r3_one_by_one\": \"P1) As in the original VAE approach, the form of the approximate posterior q(x) is a Gaussian distribution, for which mean and variance are defined by the encoder. We now mention this explicitely in the manuscript.\\n\\nP2) We agree with Reviewer 3 that the reparameterization trick is crucial for the variational autoencoder approach. We'd like to point out that the proposed approach is fully compatible with the reparameterization trick:\\n \\nTo use the proposed prior distribution in a variational autoencoder, the only requirement is that we are able to compute the log density log p_ISA(z) of a sample z. The density is defined in Eq. 7.\\n\\nThe KL-divergence can then be computed for each sample z by\\n\\n- log p_ISA(z) + log q_Gaussian(z|x)\\n\\nAs discussed in Roeder et al. 2017 this approach even has potential advantages (variance reduction) in comparison to a closed form KL-divergence. We discuss this further at the end of Sec. 3.2.\\n\\nWe do not have to modify the approximate posterior, thus the reparameterization trick can be applied. This is now described in the paragraph \\\"Sampling and the Reparameterization Trick\\\".\\n \\nIf we also want to sample from the trained generative model, the other requirement is that we are able to sample from the prior distribution.\\nThis is indeed possible for the proposed prior (Sinz and Bethge 2010), and we include the sampling scheme in the appendix as Algorithm 1.\\n\\n\\nP3) To choose the layout we follow a strategy similar to the one proposed in Sinz et al. 2009b and evaluate the MIG scores and reconstruction loss for different layouts. The best performing layouts are compared in Fig. 6. To increase the clarity of the plot we now show mean values with error bars.\", \"description_of_the_overall_method\": \"We added a detailed description of the modified ELBO for ISA-VAE and ISA-TCVAE at the end of Sec. 3.2.\\n\\n\\nUse of the terms ISA-VAE, ISA-beta-VAE, beta-VAE, beta_ISA-VAE, etc.\\n \\nWe thank the reviewer for this comment as it helps to enhance the readability of the manuscript. We have simplified the terms that denote instances of the proposed method to ISA-VAE and ISA-TCVAE. Please note that the terms \\\"beta-VAE\\\" and \\\"beta-TCVAE\\\" were defined in their respective publications and we will continue to use these terms to denote these approaches.\\n\\nS1) We revisited the original definitions in Sinz and Bethge 2010 and found an inconsistency: v_1, ..., v_l_0 were used to denote both the function values (Table 1 in Sinz and Bethge 2010) and the subspaces themselves (p. 3433 in Sinz and Bethge 2010). 
We now denote the function values with v_1, ..., v_l_0 and the subspaces with \\\\mathcal(V)_1, ..., \\\\mathcal(V)_l_0 and provide definitions of both in the revised manuscript. Thank you for drawing our attention to this.\\n\\nS2) This statement holds for the children of the root node i in 1,...,l_0.\\nFor the root node i=0 it holds that\\nn_{0,k} = n_{k} for k in 1,...,l_0\\nie. the dimensionality of each ISA subspace. Please also refer to Fig. 1 (b) which visualizes an ISA model.\\n\\nS3) The Lp-nested distribution is defined in Eq. 7, and the family of ISA-models is obtained by plugging in Lp-nested functions of the form of Eq. 9 into Eq. 7. To increase clarity we now denote the probability density Eq. 7 with p_ISA(z) instead of rho(z).\\n\\nS4) The only free parameters of the family are the parameters of the ISA-layout and those are evaluated as described in our response to P3. We think that learning these parameters is an exciting direction for future research.\\n\\nS5) Eq. 7 [now Eq. 6] is an example of an Lp-nested distribution and is presented for didactic purposes only. The ISA models used in the experiments use the subclass of Lp-nested distributions as defined in Eq. 10 [now Eq. 9].\\n \\nS6) See response to S4 and P3.\\n\\nS7) The result presented in Fig. 4 a) is a result from an early experiment. We now present the result for the same parameters as in Fig. 5.\\n\\nS8) These plots are the standard evaluation plots for the dSprites dataset as introduced in Chen et al. 2018. We added a description to the figure caption and the appendix.\\n\\n\\nRoeder, G., Yuhaei, W., and Duvenaud. D., Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference, NIPS 2017\"}",
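The Monte-Carlo KL computation described under P2 is straightforward to sketch. Below is a hedged PyTorch illustration of the per-sample estimate -log p_ISA(z) + log q_Gaussian(z|x) with the usual reparameterized sampling; `log_p_isa` is a placeholder for the Lp-nested/ISA log density of Eq. 7, which we do not reproduce here, and all names are ours.

```python
# Hedged sketch of the per-sample KL estimate described above: draw z with the
# Gaussian reparameterization trick, then compute log q(z|x) - log p_ISA(z).
import torch
from torch.distributions import Normal

def kl_sample_estimate(mu, log_var, log_p_isa):
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    z = mu + eps * std                      # reparameterized posterior sample
    log_q = Normal(mu, std).log_prob(z).sum(dim=-1)
    return z, log_q - log_p_isa(z)          # one-sample KL estimate per datum

# toy usage with a placeholder isotropic-Gaussian "prior" log density
mu, log_var = torch.zeros(4, 8), torch.zeros(4, 8)
toy_log_p = lambda z: Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
z, kl = kl_sample_estimate(mu, log_var, toy_log_p)
print(kl.shape)  # torch.Size([4])
```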
"{\"title\": \"Comments to Reviewer 4\", \"comment\": \"We thank the reviewer for the constructive feedback that improved the clarity of the paper. We have reworked section 4 and improved the figure captions.\\n\\n1. \\\"The authors used MIG throughout section 4. But I have no idea what it is.\\\"\\n\\nWe extended our description of the MIG score in section 4.\\n\\n\\\"Does a better MIG necessarily imply a good reconstruction?\\\"\\n\\nTo evaluate the quality of the reconstruction we reported the log likelihood of the reconstruction in Fig. 3, 5 and 6. We demonstrate with our quantitative evaluation that the proposed prior mitigates the trade-off between reconstruction quality and disentanglement (this trade-off is discussed in section 4.3)\\n\\n\\\"I am not sure if we can quantify the model performance by the mere MIG, and suggest the authors provide results of image generations as other GAN or VAE papers do.\\\"\", \"as_requested_we_provide_several_results_of_image_generations\": \"latent traversals in the main manuscript in Fig. 3 and in the appendix in Fig. 7, 8 and 9. We also provide examples of image reconstruction in Fig. 10 in the appendix.\", \"as_remark_on_this_topic\": \"We preferred a quantitative evaluation over a qualitative demonstration. Our quantitative analysis uses the data of 16 evaluations * 11 different values of beta = 176 trained models for each method, meaning that Fig. 4 aggregates the results of 704 experiments. We believe that this provides more evidence than a qualitative demonstration of generated images from a single successful experiment.\\n\\n\\n2. \\\"Is the 'interpretation' important for high dimensional code $z$? If yes, can the authors show an example of interpretable $z$?\\\"\\n\\nTo provide an example of an interpretable z we depicted the standard evaluation plots of the MIG score in Fig. 4, that are produced with the reference implementation of Chen et al. 2018 available on https://github.com/rtqichen/beta-tcvae\\nIn fact the different dimensions of z encode individual underlying generative factors of the dataset, namely position in x, position in y, scale, and rotation angle. We have added these interpretations of the latent dimensions to Fig. 3.\\n\\n\\n3. I had difficulty reading Section 4, since the authors didn't give many technical details; I don't know what the encoder, the decoder, and the specific prior are. \\n\\nThe prior is the independent subspace analysis model that is proposed in this paper.\\nWe used the same encoder and decoder architecture as in Chen et al. 2018 and have added a detailled description to the appendix A.5.\\n\\n\\n4. The authors should have provided a detailed explanation of what the figures are doing and explain what the figures show. I was unable to understand the contribution without explanations.\\n\\nWe kindly ask the reviewer to specify which figures is referred to. We have reworked many of the figures and improved the captions. In Fig. 4 for example we follow standard practice established in Chen et al. 2018 for visualizing latent representations for the dSprites dataset.\\n\\n\\n5. Can the authors compare the proposed prior with VampPrior [1]?\\n\\nIt would be interesting to consider a mixture of Gaussians model for learning latent representations.\\nFor the dataset we looked at specifically, a latent factor model is more appropriate (rather than a clustering model) which is captured by the proposed independent subspace model.\"}",
"{\"title\": \"The paper was not clearly written and failed to provide enough details.\", \"review\": \"The paper used the family of $L^p$-nested distributions as the prior for the code vector of VAE and demonstrate a higher MIG. The idea is adopted from independent component analysis that uses rotationally asymmetric distributions. The approach is a sort of general framework that can be combined with existing VAE models by replacing the prior. However, I think the paper can be much improved in terms of clarity and completeness.\\n\\n1. The authors used MIG throughout section 4. But I have no idea what it is. Does a better MIG necessarily imply a good reconstruction? I am not sure if we can quantify the model performance by the mere MIG, and suggest the authors provide results of image generations as other GAN or VAE papers do. \\n2. Is the \\\"interpretation\\\" important for high dimensional code $z$? If yes, can the authors show an example of interpretable $z$?\\n3. I had difficulty reading Section 4, since the authors didn't give many technical details; I don't know what the encoder, the decoder, and the specific prior are. \\n4. The authors should have provided a detailed explanation of what the figures are doing and explain what the figures show. I was unable to understand the contribution without explanations.\\n5. Can the authors compare the proposed prior with VampPrior [1]?\\n\\nThe paper should have been written more clearly before submission.\\n[1] Tomczak, Jakub M., and Max Welling. \\\"VAE with a VampPrior.\\\" arXiv preprint arXiv:1705.07120 (2017).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Several Interesting New Priors Are Proposed For The Latent Variables in VAE\", \"review\": \"The authors point out several issues in current VAE approaches, including the rotational symmetric Gaussian prior commonly used. A new perspective on the tradeoff between reconstruction and orthogonalization is provided for VAE, beta-VAE, and beta-TCVAE. By introducing several non rotational-invariant priors, the latent variables' dimensions are more interpretable and disentangled. Competitive quantitative experiment results and promising qualitative results are provided. Overall, I think this paper has proposed some new ideas for the VAE models, which is quite important and should be considered for publication.\", \"here_i_have_some_suggestions_and_i_think_the_authors_should_be_able_to_resolve_these_issues_in_a_revision_before_the_final_submission\": \"1) The authors should describe how the new priors proposed work with the \\\"reparameterization trick\\\". \\n2) The authors should at least provide the necessary implementation details in the appendix, the current manuscript doesn't seem to contain enough information on the models' details.\\n3) The description on the experiments and results should be more clear, currently some aspects of the figures may not be easily understood and need some imagination. \\n4) There are some minor mistakes in both the text and the equations, and there are also some inconsistency in the notations.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Overall method is not presented\", \"review\": \"This paper presents a methodology to bring together independent subspace analysis and variational auto-encoders. Naturally, in order to do that, the authors propose a specific family of prior distributions that lead to subspace independence the Lp-nested distribution family. This prior distribution is then used to learn disentangled and interpretable representations. The mutual information gap is taken as the measure of disentanglement, while the reconstruction loss measures the quality of the representation. Experiments on the sPrites dataset are reported, and comparison with the state of the art shows some interesting results.\\n\\nI understand the limitations of current approaches for learning disentangled representations, and therefore agree with the motivation of the manuscript, and in particular the choice of the prior distribution. However, I did not find the answer to some important questions, and generally speaking I believe that the contribution is not completely and clearly described.\\nP1) What is the shape of the posterior distribution?\\nP2) How does the reparametrization trick work in your case?\\nP3) How can one choose the layout of the subspaces, or this is also learned?\\n\\nMoreover, and this is crucial, the proposed method is not clearly explained. Different concepts are discussed, but there is no summary and discussion of the proposed method as a whole. The reader must infer how the method works from the different pieces. \\n\\nWhen discussing the performance of different methods, and even if in the text the four different alternatives are clearly explained, in figure captions and legens the terminology changes (ISA-VAE, ISA-beta-VAE, beta-VAE, beta-ISA-VAE, etc). This makes the discussion very difficult to follow, as we do not understand which figures are comparable to which, and in which respect.\\n\\nIn addition, there are other (secondary) questions that require an answer.\\nS1) After (10) you mention the subspaces v_1,...v_l_o. What is the formal definition of these subspaces?\\nS2) The definition of the distribution associated to ISA also implies that n_i,k = 1 for all i and k, right?\\nS3) Could you please formally write the family of distributions, since applying this to a VAE is the main contribution of your manuscript?\\nS4) Which parameters of this family are learned, and which of them are set in advance?\\nS5) From Figure 4 and 5, I understand that the distributions used are of the type in (7) and not (10). Can you comment on this?\\nS6) How is the Lp layout chosen?\\nS7) Why the Lp layout for ISA-beta-VAE in Figure 5 is not the same as in Figure 4 for ISA-VAE?\\nS8) What are the plots in Figure 4? They are difficult to interpret and not very well discussed.\\n\\nFinally, there are a number of minor corrections to be made.\", \"abstract\": \"latenT\\nEquation (3) missig a sum over j\\nFigure 1 has no caption\\nIn (8), should be f(z) and not x.\\nBefore (10), I understand you mean Lp-nested\\nI did not find any reference to Figure 3\\nIn 4.1, the standard prior and the proposed prior should be referred to with different notations.\\n\\nFor all these reasons I recommend to reject the paper, since in my opinion it is not mature enough for publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
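A note for readers following the Lp-nested/ISA discussion in the record above: the following is a minimal sketch of how a two-level Lp-nested function f(z) = (sum_k ||z_k||_{p_k}^{p_0})^{1/p_0} over ISA subspaces could be evaluated. The subspace sizes and exponents below are illustrative assumptions, not the layouts evaluated in the paper.

```python
import numpy as np

def lp_nested_isa(z, subspace_sizes, p_inner, p_outer):
    """Two-level Lp-nested function: split z into ISA subspaces z_1, ..., z_{l_0}
    of the given sizes, take the inner Lp-norm of each, then combine with p_outer."""
    assert sum(subspace_sizes) == z.shape[0]
    total, start = 0.0, 0
    for n_k, p_k in zip(subspace_sizes, p_inner):
        v_k = np.sum(np.abs(z[start:start + n_k]) ** p_k) ** (1.0 / p_k)  # ||z_k||_{p_k}
        total += v_k ** p_outer
        start += n_k
    return total ** (1.0 / p_outer)

# Illustrative layout: a 10-dim latent split into subspaces of sizes 4, 3, 3.
z = np.random.randn(10)
print(lp_nested_isa(z, [4, 3, 3], p_inner=[2.0, 2.0, 2.0], p_outer=0.8))
```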
|
SkMON20ctX | On the Trajectory of Stochastic Gradient Descent in the Information Plane | [
"Emilio Rafael Balda",
"Arash Behboodi",
"Rudolf Mathar"
] | Studying the evolution of information theoretic quantities during Stochastic Gradient Descent (SGD) learning of Artificial Neural Networks (ANNs) has gained popularity in recent years.
Nevertheless, these types of experiments require estimating mutual information and entropy, which becomes intractable for moderately large problems. In this work we propose a framework for understanding SGD learning in the information plane which consists of observing the entropy and conditional entropy of the output labels of the ANN. Through experimental results and theoretical justifications it is shown that, under some assumptions, the SGD learning trajectories appear to be similar for different ANN architectures. First, the SGD learning is modeled as a Hidden Markov Process (HMP) whose entropy tends to increase to the maximum. Then, it is shown that the SGD learning trajectory appears to move close to the shortest path between the initial and final joint distributions in the space of probability measures equipped with the total variation metric. Furthermore, it is shown that the trajectory of learning in the information plane can provide an alternative for observing the learning process, with potentially richer information about the learning than the trajectories in training and test error. | [
"Stochastic gradient descent",
"Deep neural networks",
"Entropy",
"Information theory",
"Markov chains",
"Hidden Markov process."
] | https://openreview.net/pdf?id=SkMON20ctX | https://openreview.net/forum?id=SkMON20ctX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1eM4vbLgE",
"H1gigse307",
"rkeVazW5AX",
"HJluK-W9RQ",
"B1xvAxW5C7",
"HJlbKkWqA7",
"Byg9ksJZaQ",
"BkgWjike67",
"BJlBFC6EjQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545111338441,
1543404275272,
1543275195944,
1543274880218,
1543274702779,
1543274361221,
1541630690105,
1541565337447,
1539788412542
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1465/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1465/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1465/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1465/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a quantity to monitor learning on an information plane which is related to the information curves considered in the bottleneck analysis but is more reliable and easier to compute.\\n\\nThe main concern with the paper is the lack of interpretation and elaboration of potential uses. A concern is raised that the proposed method abstracts away way too much detail, so that the shapes of the curves are to be expected and contain little useful information (see AnonReviewer2 comments). The authors agree to some of the main issues, as they pointed out in the discussion, although they maintain that the method could still contain useful information. \\n\\nThe reviewers are not very convinced by this paper, with ratings either marginally above the acceptance threshold, marginally below the acceptance threshold, or strong reject.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"New approach to monitor learning, more work needed to clarify meaning and potential use\"}",
"{\"title\": \"On the generality of learning trajectories in the information plane and the case of perfect SGD learning\", \"comment\": \"Answer to \\\"learning one label at a time\\\":\\n\\nthis means that we force the ANN to discriminate first only the label of the first class by an appropriate cost function. This corresponds to one versus all learning of the first label where the labels of other classes are all chosen equal to \\u201cnot-the-first-class\\u201d. Then we move on to the next label and train the network to discriminate between first and second classes versus the rest of the labels. In that manner, the labels are learned incrementally until all labels are learned. \\n\\n----------------------------------------------------------\\n\\nAnswer to \\u201cA lot of algorithms fall into your theory\\u201d and \\u201ci would assume all reasonable algorithms to perform that way. i.e. all algorithms that follow a version of the (natural) gradient. In normal gradient descent, we also see that the first thing we get right are the biases on the outputs, therefore i would claim this is a property of the learning-problem, not the algorithm. So far the article did not provide any evidence towards the contrary. \\u201c\\n\\nWe agree with your comment. Indeed, we tried to emphasize the very same issue in our general comment as the strength of our framework. Certain features of the trajectory, such as increasing tendency of H(\\\\hat Y), is indeed expected to hold generally for many \\u201creasonable\\u201d learning algorithms and therefore we suggested that this trajectory can be used to observe whether a learning algorithm is acting \\u201creasonably\\u201d or not. This claim is explored with new experimental results where we tried to show that this trajectory is different for underfitting, overfitting, and for example in learning one label at a time. Hence, the information plane might be used alternatively to monitor the learning process with potentially more enlightening information about the learning.\\n\\nThere is one caveat here. The SMLC was mainly adopted to explain the behavior of SGD in ANNs for perfect learning. This is definitely not expected to hold in general. However, we found it important to mention the similarity between the perfect SGD trajectory and the shortest path between probability measures, which can motivate cost functions promoting the shortest path directly if this path is optimal from learning perspective.\"}",
"{\"title\": \"More discussions supporting the value and potential of the proposed experiments\", \"comment\": \"We thank the reviewer for his/her comments. Valid concerns about the paper have been pointed out, which we aim to clarify in the latest version of the paper and the following comments:\\n\\n 1. \\u201cThis is known to happen, because almost all models include a bias on the output.\\u201d\\n\\nRegarding the comment about the bias, first, it should be noted that learning the marginal first is not universal and depends on the learning strategy. For instance if only one class is learned at a time, then the marginal is learned along with the training. See the figures attached to the general comment above. Therefore the relation between bias and learning marginals is far from being trivial and heavily depends on the learning algorithm. This hints to our claim that understanding different learning trajectories in the information plane can be illuminating for analyzing the learning process.\\n\\n\\n 2. \\u201cwhile showing some parabola like shape, there are big differences in how the shapes are looking like\\u201d\\n\\nNote that $\\\\alpha$-SMLC is a model used to explore how SGD is acting on probabilities during learning. Admitting its imperfection, similarities besides the parabolic shapes are notable. For example, the $\\\\alpha$-SMLC predicts that the inflection point of $H(\\\\hat y | y)$ gets closer to the bound as $p$ increases, which is also observed in the experiments. \\n\\nDespite this, we share the reviewer\\u2019s concern and believe that elaborating sophisticated extensions of the $\\\\alpha$-SMLC, that more closely resemble SGD, may be a fruitful research direction.\\n\\n 3. \\u201cThere is no actual connection to SGD left\\u201d\\n\\nThe connection to SGD arises from the fact that it can be modeled as a HMP. As far as the learning process can be modeled as a HMP, we expect the analysis to hold as well. However, SGD learning in ANNs is interesting as it ends up close to Fano\\u2019s curve (as predicted by the $\\\\alpha$-SMLC) and as it is indicated in Appendix, it manages to learn the true labels from the noisy ones. \\n\\n\\n 4. \\u201cone could think about a model which can not model a bias and the inputs are mean-free thus it is hard to learn the marginal distribution, which might change the trajectory\\u201d\\n\\nWe are not sure what the reviewer means by this but we think this concern may be addressed in the first point (\\u201cgenerality\\u201d) of the general comment above. Note that a model that is not able to learn the marginal distribution violates assumption A3 from that comment.\"}",
"{\"title\": \"More discussions + new experiment showing the potential of using the information plane\", \"comment\": \"We would like to thank the reviewer for his/hers comments. We addressed the main concerns about the paper as follows:\\n 1. \\u201cthe trajectory of the experiment v.s. SMLC (Figure 3), they look similar at first glance. But if you look at it carefully, you will notice that the color of them are different!\\u201d\\nThe points are colored according to the % of training time, so it is dependent on the number of epochs considered. Although we conjecture that the trajectory of SGD should be the observed parabolic shape, the convergence speed can vary from model to model. Currently, $\\\\alpha$-SMLC is parameterized linearly in $\\\\alpha$ which is an arbitrary choice. With more general parametrization for $\\\\alpha$, one can get a different coloring. Therefore the colors do not play any significant role in the claim.\\n\\n 2. \\u201c(1) what does the shape trajectory mean (2) what do the connection between the trajectory and Markov chain means (3) how can these connections be potentially useful to improve training algorithm?\\u201d\\nPlease see the second point (\\u201cmeaning\\u201d) from the general comment above.\\n\\n 3. \\u201cI suggest the authors using SGD instead of GD throughout the paper.\\u201d\\nWe have taken this comment into account and updated the paper accordingly.\"}",
"{\"title\": \"New experiment showing that the information plane carries useful information about the training process that is hidden in train/test error.\", \"comment\": \"We would like to thank the reviewer for his/hers comments, which lead us to improve our paper. Since these comments are shared with other reviewers we have posted the in a general comment above. Here is a summary of the main concerns of the reviewer:\\n 1. \\u201cHow general is this?\\u201d\\n\\nPlease see the first point (\\u201cgenerality\\u201d) from the general comment above.\\n\\n 2. \\u201cMeaning of this trajectory.\\u201d\\n\\nPlease see the second point (\\u201cmeaning\\u201d) from the general comment above.\\n\\n 3. \\u201cI think the paper lacks a take-away.\\u201d\\n\\nPlease see the second point (\\u201cmeaning\\u201d) from the general comment above.\"}",
"{\"title\": \"General Comments for all Reviewers\", \"comment\": \"We thank all reviewers for their comments. Here we address some concerns about the paper that are common among the reviews:\\n\\n1. (Generality) How General is this trajectory?\\n\\nAn important motivation of this work is to explore the generality of the observed trajectories and its interpretation. \\n\\nWe have tried to show that as far as some assumptions are in place, some features of the trajectory are to be expected in general:\\n\\nA1) The labels Y are uniformly distributed.\\nA2) The parameter updates during learning are done using a function of the previous parameters and an independent random variable representing the training batch, i.e., $\\\\theta_{n+1} = f(\\\\theta_{n}, U)$. This assumption holds for SGD training of ANNs.\\nA3) We assume perfect learning, that is $g(\\\\theta_n, .)$ converges to $c(.)$.\\n\\nWith these assumptions, the trajectory of learning in the information plane is independent from the architecture of the classifier (which may not even be an ANN). \\n\\nFirst, since the output labels are uniformly distributed, the entropy $H(\\\\hat y)$ tends to increase to its maximum during successful training which is motivated by Proposition 3. \\n\\nUnderstanding learning as maximization of mutual information, the shape of $H(\\\\hat y | y)$ is also expected to consist of one or more bumps with local maxima in the middle. The number of bumps depends on the trajectory and can be different if the learning strategy is different. For instance, if the learning algorithm is devised to learn one label at a time, the trajectory is quite different as it is shown in the Figure (here: https://ibb.co/f9H4TzP ) for 3 and 10 classes.\\n\\nThe parabola shape of the conditional entropy, however, is conjectured to be due to SGD changing the probability of labels to the ground truth labels by moving on the shortest path on the space of joint probability measures. This claim requires more theoretical and experimental investigation and it is relegated to future works.\\n\\n2. (Meaning) What is the meaning of these trajectories?\\n\\nWe propose that this trajectory in the information plane carries many useful information about the training process and can be used effectively to observe the state of training beyond mere measurement of training/test error. To back up this claim we included a new experiment in the paper in which we use the trajectories to spot underfitting and overfitting. \\n\\nA common practical issue in training learning algorithms is to find roots of the low accuracy and see if the obtained accuracy is the best we can do. This can be done in a straightforward way using the information plane. \\n\\nWe show that, regardless of the error, an underfitted classifier lies far from Fano\\u2019s bound which implies that the accuracy can be still improved. On the other hand, an overfitted classifier lies very close to the Fano bound and moves on this curve. From this experiment, we conclude that taking a look at the trajectory of these quantities in other scenarios (e.g. adversarial training) may reveal characteristic behaviors thus providing new insights.\"}",
"{\"title\": \"unclear motivation\", \"review\": [\"In summary, this paper does the following:\", \"The initial problem is to analyze the trajectory of SGD in training ANNs in the space of P of probability measures on Y \\\\times Y. This problem is interesting, but difficult.\", \"the paper constructs a Markov chain that follows a shortest path in TV metric on P\", \"(the \\\\alpha SMLC)\", \"through experiments, the paper shows that the trajectories of SGD and \\\\alpha-SMLC have similar conditional entropy.\"], \"my_issues_with_this_paper_are\": \"a/ The main result is a simulation. How general is this? Could it depend on the dataset? Could you provide some intuition or prove that for certain dataset, these two trajectories are the same (or very close)? \\nb/ Meaning of this trajectory. This is not the trajectory in P, it is the trajectory of the entropies. In general, is there an intuitive explanation on why these trajectories are similar? And what does it mean -- for example, what would be a possible implication for training SGD? Could it be that all learning methods will have this characteristic parabolic trajectory for entropies? \\nc/ The theoretical contribution is minor: both the techniques and results quoted are known. \\n\\nOverall, I think the paper lacks a take-away. It is an interesting observation that the trajectory of \\\\alpha-SMLC is similar to that of SGD in these plots, but the authors have not made a sufficient effort to interpret this.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2019 Conference Paper1465 AnonReviewer1\", \"review\": \"This paper study the trajectory of H(\\\\hat{y}) versus H(\\\\hat{y}|y) on the information plane for stochastic gradient descent methods for training neural networks. This paper was inspired by (Ziv and Tishby 17'), but instead of measuring the mutual information I(X;T) and I(Y:T), this paper proposed to measure H(\\\\hat{y}) and H(\\\\hat{y}|y), which are much easier to compute but carries similar meaning as I(Y;T) and I(X;T).\\n\\nThe interesting part of this paper appears in Section 4, where the author makes a connection between the SGD training process and \\\\alpha-SMLC(strong Markov learning chain). SMLC is just simply linear combination of the initial distribution and the final stable distribution of the labels. The authors show that the trajectory of the real experiment is similar to that of SMLC.\\n\\nGenerally I think the paper is well-written and clearly present the ideas. Here are some pros and cons.\", \"pros_1\": \"The trajectory presented in this paper is much more reliable than that in (Ziv and Tishby 17'), since measuring the entropy and conditional entropy of discrete random variables are much easier. Also it is easy for people to believe that the trajectory holds for various neural network structure and various activation functions.\", \"pros_2\": \"The connection to SMLC is interesting and it may contain lot of insights.\", \"cons_1\": \"One of my major concern is --- if you look at the trajectory of the experiment v.s. SMLC (Figure 3), they look similar at first glance. But if you look at it carefully, you will notice that the color of them are different! For SGD, the trajectory goes to the turning point very soon (usually no more than 10% of the training steps), whereas SMLC goes to the turning point much slower. How do the authors think about this phenomenon and what does this mean?\", \"cons_2\": \"This paper is going to be more meaningful if the author can provide some discussions, especially about (1) what does the shape trajectory mean (2) what do the connection between the trajectory and Markov chain means (3) how can these connections be potentially useful to improve training algorithm? I understand that these questions may not be clearly answerable, but the authors should make this paper more inspiring such that other researchers can think deeper after reading this paper.\", \"cons_3\": \"I suggest the authors using SGD instead of GD throughout the paper. Usually GD means true gradient descent, but the paper is talking about batched stochastic gradient descent. GD does not have Markovity.\\n\\nGenerally, I think the paper is on the borderline. I think the paper is acceptable if the author can provide more insights (against Cons 2).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Results of questionable value\", \"review\": \"The paper tries to describe SGD from the point of view of the distribution p(y',y) where y is (a possibly corrupted) true class-label and y' a model prediction. Assuming TV metric of probabilities, a trajectory is defined which fits to general learning behaviour of distributions.\\n\\nThe issue is that the paper abstracts the actual algorithm, model and data away and the only thing that remains are marginal distributions p(y) and conditional p(y'|y). At this point one can already argue that the result is either not describing real behavior, or is trivial. The proposed trajectory starts with a model that only predicts one-class (low entropy H(y') and high conditional entropy) and ends with the optimal model. the trajectory is linear in distribution space, therefore one obtains initially a stage where H(y') and H(y'|y) increase a lot followed by a stage where H(y'|y) decrease.\\n\\nThis is known to happen, because almost all models include a bias on the output, thus the easiest way to initially decrease the error is to obtain the correct marginal distribution by tuning the bias. Learning the actual class-label, depending on the observed image is much harder and thus takes longer. Therefore no matter what algorithm is used, one would expect this kind of trajectory with a model that has a bias.\\n\\nIt also means that the interesting part of an analysis only begins after the marginal distribution is learned sufficiently well. and here the experimental results deviate a lot from the theoretical prediction. while showing some parabola like shape, there are big differences in how the shapes are looking like.\\n\\nI don't see how this paper is improving the state of the art, most of the theoretical contributions are well known or easy to derive. There is no actual connection to SGD left, therefore it is even hard to argue that the predicted shape will be observed, independent of dataset or model(one could think about a model which can not model a bias and the inputs are mean-free thus it is hard to learn the marginal distribution, which might change the trajectory)\\n\\n Therefore, I vote for a strong reject.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
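As a companion to the information-plane discussion in the record above, here is a small sketch of how H(y_hat) and H(y_hat|y) can be computed from a joint label distribution, together with an alpha-SMLC-style linear path between an initial and a final joint distribution (the review describes SMLC as a linear combination of the two). The "constant predictor" initial joint and m = 10 classes are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def entropies(joint):
    """H(y_hat) and H(y_hat|y) in bits from a joint pmf p(y_hat, y); rows index y_hat.
    Uses the identity H(y_hat|y) = H(y_hat, y) - H(y)."""
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return H(joint.sum(axis=1)), H(joint.flatten()) - H(joint.sum(axis=0))

m = 10
p_init = np.zeros((m, m)); p_init[0, :] = 1.0 / m   # model always predicts class 0
p_final = np.eye(m) / m                             # perfect classifier, uniform labels
for alpha in np.linspace(0.0, 1.0, 6):              # linear path, as in alpha-SMLC
    print(round(alpha, 1), entropies((1 - alpha) * p_init + alpha * p_final))
```

On this path H(y_hat) rises monotonically to log2(m) while H(y_hat|y) bumps up and then returns to zero, which is the parabolic shape discussed in the reviews.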
|
HJeu43ActQ | NOODL: Provable Online Dictionary Learning and Sparse Coding | [
"Sirisha Rambhatla",
"Xingguo Li",
"Jarvis Haupt"
] | We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients. Since the dictionary and coefficients, parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex. This was a major challenge until recently, when provable algorithms for dictionary learning were proposed. Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients. Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients. This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest. To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately. Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations. Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques. | [
"dictionary learning",
"provable dictionary learning",
"online dictionary learning",
"sparse coding",
"support recovery",
"iterative hard thresholding",
"matrix factorization",
"neural architectures",
"neural networks",
"noodl"
] | https://openreview.net/pdf?id=HJeu43ActQ | https://openreview.net/forum?id=HJeu43ActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxrhUoBe4",
"ByeGb44y0X",
"rJlnpeEk0m",
"SJxXh0Q1A7",
"SJgzc8XTnX",
"B1xtSy6t2Q",
"B1x81eSE2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545086637255,
1542566906054,
1542566084196,
1542565547404,
1541383817690,
1541160768742,
1540800478081
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1464/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1464/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1464/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1464/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Alternating minimization is surprisingly effective for low-rank matrix factorization and dictionary learning problems. Better theoretical characterization of these methods is well motivated. This paper fills up a gap by providing simultaneous guarantees for support recovery as well as coefficient estimates for linearly convergence to the true factors, in the online learning setting. The reviewers are largely in agreement that the paper is well written and makes a valuable contribution. The authors are advised to address some of the review comments around relationship to prior work highlighting novelties.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"contribution towards tractable dictionary learning and sparse coding.\"}",
"{\"title\": \"Additional comparative experimental evaluations of convergence and computational time, clarifications on tuning and recovery guarantees\", \"comment\": \"We are grateful to the reviewer for the comments. In this revision, we have corrected the minor typos, added additional comparisons, and added a proof map for easier navigation of the results. Specific comments are addressed below.\\n\\n1. Regarding exact recovery guarantees \\u2014 NOODL converges geometrically to the true factors. Therefore, the error drops exponentially with iterations t. In other words, as t \\u2014> infinity A_i \\u2014> A^*_i for i in [1,m] and x_j \\u2014> x^*_j for j in [1,m], where x_j is in R^m. We have added this clarification in Section 1.1.\\n\\n2. On tuning parameters \\u2014 There are primarily three tuning parameters, namely eta_x (step-size for the IHT step), tau (threshold for the IHT step), and eta_A (step-size for the dictionary update step.) Our main result prescribes the theoretical values of these as shown in assumptions A.5 and A.6. Here, eta_x = Omega_tilde(k/sqrt(n)), tau = Omega_tilde(k\\u02c62/n), and eta_A = Theta(m/k). We have updated A.6. to include the order of these parameters.\\n\\nThe specific choices of these parameters, like other similar problems, depend on some a priori unknown parameters (e.g. the sparsity k, and the incoherence mu) which makes some level of tuning unavoidable. This is true for Arora '15 and Mairal '09, as well, where tuning is required for the choice of step-size for dictionary update, and for choice of regularization parameter and the step-size for coefficient estimation via FISTA. Note that, in our experiments we fix the step-size for FISTA as 1/L, where L is the estimate of the Lipschitz constant (since A is not known exactly).\\n\\nAlternately, since NOODL involves gradient-based updates for the coefficients and the dictionary, tuning (the step-sizes and the threshold) is relatively straightforward in practice, since it is based on a gradient descent strategy. In fact, to compile the experiments presented in this paper, we fixed step-size, eta_x, and threshold, tau, and tuned the step-size parameter eta_A only (Theta(m/k)). The choices of eta_A are 30 for k = 10,20 and eta_A = 15 for k =50,100, as shown in Fig.2., eta_A mostly effects the convergence rate as long as it is chosen in Theta(m/k). \\n\\nAlso, as shown in Table 4 (Appendix E), the tuning process for l1-based algorithms (i.e. FISTA) takes more time, since one needs to scan over the range of the regularization parameter to find one that works. This (a) adds to the computational time, and (b) since the dictionary is not known exactly, may guarantee recovery of coefficients only in terms of closeness in l2-norm sense, due to the error-in-variables (EIV) model for the dictionary. In this sense, NOODL is (a) simple to tune, (b) assures guaranteed recovery of both factors, and (c) is fast due to its geometric convergence properties. These factors highlight its applicability in practical DL problems. \\n\\n3. Definition of Hard Thresholding (HT) \\u2014 As per the recommendation of the reviewer, we have repeated the definition of hard-thresholding (HT) initially presented in the \\\"Notation\\\" sub-section, in Section 2 for clarity.\\n\\n4. Comparison to other Online DL algorithms \\u2014 As correctly observed by the reviewer, the overall structure of NOODL is similar to successful online DL algorithms. 
These successful algorithms (such as Mairal '09) leverage the progress made on both factors for convergence, however, do not guarantee recovery of the factors. On the other hand, the state-of-the-art provable DL algorithms focus on the progress made on only one of factors (the dictionary), and do not have good performance in practice, since they incur a non-negligible bias; see Section 5 and Appendix E. NOODL bridges the gap between these two. In addition to our main theoretical result, which establishes conditions for exact recovery of both factors at a geometric rate, NOODL also has superior empirical performance, leading to a neurally-plausible practical online DL algorithm with strong guarantees; see Section 3 and 4. Our work also paves way for the development and analysis of related alternating optimization-based techniques.\\n\\nOn reviewer's recommendation, we compare the performance of NOODL with one of the most popular alternating minimization-based online DL algorithm used in practice -- Mairal `09 -- in Fig. 2 and Table 4 (Appendix E). In this work, the authors show that alternating between a l1-based sparse approximation step and dictionary update based on block co-ordinate descent converges to a stationary point. The other comparable techniques shown in Table 1, are not ``online\\u2019\\u2019 and/or require stringent initializations, in terms of closeness to the true dictionary, as compared to NOODL. \\n\\nOur experiments show that due to the geometric convergence to the true factors, NOODL outperforms competing state-of-the-art provable online DL techniques both in terms of overall computational time, and convergence performance. These additional expositions further showcase the contributions of our work both on theoretical and practical online DL front.\"}",
"{\"title\": \"Additional intuition behind the main result, and comparative evaluation of computational time\", \"comment\": \"We thank the reviewer for the comments. As correctly observed by the reviewer, Arora et. al. 2015 suffers from a bias in estimation both in the analysis and in the empirical evaluations. The source of this bias term is an irreducible error in the coefficient estimate (formed using the hard-thresholding step). NOODL overcomes this issue by introducing a iterative hard-thresholding (IHT)-based coefficient update step, which removes the dependence of the error in estimated coefficient on this irreducible error, and ultimately the dictionary estimate.\\n\\nIntuitively, this approach highlights the symbiotic relationship between the two unknown factors \\u2014 the dictionary and the coefficients. In other words, to make progress on one, it is imperative to make progress on the other. To this end, in Theorem 1 we first show that the coefficient error only depends on the dictionary error (given an appropriate number of IHT iterations R), i.e. we remove the dependence on x_0 which is the source of bias in Arora et. al. 2015. We have added the intuition corresponding to this in the revised paper after the statement of Theorem 1 in Section 3. \\n\\nAnalysis of Computational Time \\u2014 We have added the average per iteration time taken by various algorithms considered in our analysis in Table~4 and Appendix E. \\nThe primary takeaway is that although NOODL takes marginally more time per iteration as compared to other methods when accounting for just one (Lasso-based) sparse recovery for coefficient update, it (a) is in fact faster per iteration since it does not involve any computationally expensive tuning procedure to scan across regularization parameters; owing to its geometric convergence property (b) achieves orders of magnitude superior error at convergence, and as a result, (c) overall takes significantly less time to reach such a solution; see Appendix E for details.\\n\\nWe would like to add that since NOODL involves simple separable update steps, this computation time can be further lowered by distributing the processing of individual samples across cores of a GPU (e.g. via TensorFlow) by utilizing the architecture shown in Fig. 1. We plan to release all the relevant code as a package in the future.\\n\\nIn this revision, we have added comparison to Mairal '09, a popular online DL algorithm. Further, we have also added a proof map, in addition to the Table 3, for easier navigation of the results.\"}",
"{\"title\": \"Clarifications on noise tolerance properties and assumption A.4.\", \"comment\": \"We would like to thank the reviewer for the comments and for raising some subtle yet important questions. We address and clarify specific comments below. We have also made corresponding changes in the revised paper, and have added a proof map, in addition to the Table 3, for easier navigation of the results. We have also added comparisons with Mairal `09, and experimental evaluation of computational time.\\n\\n1. Noise Tolerance \\u2014 NOODL also has similar tolerance to noise as Arora et. al. 2015 and can be used in noisy settings as well. We focus on the noiseless case here to convey the main idea, since the analysis is already very involved. Nevertheless, the proposed algorithm can tolerate i.i.d. sub-Gaussian noise, including Gaussian noise and bounded noise, as long as the ``noise\\u2019\\u2019 is dominated by the ``signal\\u2019\\u2019. Under the noisy case, the recovered dictionary and coefficients will converge to a neighborhood of the true factors, where the neighborhood is defined by the properties of the additive noise. \\n\\nIn other words, the noise terms will lead to additional terms which will need to be controlled for the convergence analysis. Specifically, the noise will add a term to the coefficient update in Lemma 2, and will effect the threshold, tau. For the dictionary, the noise will result in additional terms in Lemma 9 (which ensures that the updated dictionary maintains the closeness property). A precise characterization of the relationship between the level of noise the size of convergence neighborhood requires careful analysis, which we defer to future effort.\\n\\n2. On eps_t and A.4. \\u2014 Indeed, we don\\u2019t need to assume that eps_t is bounded. Specifically, using the result of Lemma 7, we have that eps_0 undergoes a contraction at every step, therefore, eps_t <= eps_0. For our analysis we fix eps_t = O^*(1/log(n)), which follows from the assumption on eps_0= O^*(1/log(n)) and Lemma 7. On reviewer\\u2019s comments, we have updated A.4., and moved the note about eps_t = O^*(1/log(n)) to the Appendix A.\\n\\n3. Exact recovery of factors \\u2014 Also, we would like to point that NOODL recovers both the dictionary and coefficients exactly at a geometric rate. This means that as t\\u2014> infinity both the dictionary and coefficients estimates converge to the true factors without incurring any bias. We have added a clarification corresponding to this in the revised paper in Section 1.1 and after the statement of Theorem 1 in Section 3.\"}",
"{\"title\": \"A novel, alternating minimization algorithm for sparse coding\", \"review\": \"The paper considers the problem of dictionary learning. Here the model that we are given samples y, where we know that y = Ax where A is a dictionary matrix, and x is a random sparse vector. The goal is typically to recover the dictionary A, from which one can also recover the x under suitable conditions on A. The paper shows that there is an alternating optimization-based algorithm for this problem that under standard assumptions provably converges exactly to the true dictionary and the true coefficients x (up to some negligible bias).\\n\\nThe main comparison with prior work is with [1]. Both give algorithms of this type for the same problem, with similar assumptions (although there is some difference; see below). In [1], the authors give two algorithms: one with a better sample complexity than the algorithm presented here, but which has some systematic, somewhat large, error floor which it cannot exceed, and another which can obtain similar rates of convergence to the exact solution, but which requires polynomial sample complexity (the explicit bound is not stated in the paper). The algorithm here seems to build off of the former algorithm; essentially replacing a single hard thresholding step with an IHT-like step. This update rule is able to remove the error floor and achieve exact recovery. However, this makes the analysis substantially more difficult. \\n\\nI am not an expert in this area, but this seems like a nice and non-trivial result. The proofs are quite dense and I was unable to verify them carefully.\", \"comments\": [\"The analysis in [1] handles the case of noisy updates, whereas the analysis given here only works for exact updates. The authors claim that some amount of noise can be tolerated, but do not quantify how much.\", \"A.4 makes it sound like eps_t needs to be assumed to be bounded, when all that is required is the bound on eps_0.\", \"[1] Arora, S. Ge, R., Ma, T. and Moitra, A. Simple, Efficient, and Neural Algorithms for Sparse Coding. COLT 2015.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting paper, clear contribution. More intuition would be great.\", \"review\": \"The paper deals with the problem of recovering an exact solution for both the dictionary and the activation coefficients. As other works, the solution is based on a proper initialization of the dictionary. The authors suggest using Aurora 2015 as a possible initialization. The contribution improves Arora 2015 in that it converges linearly and recovers both the dictionary and the coefficients with no bias.\\n\\nThe main contribution is the use of a IHT-based strategy to update the coefficients, with a gradient-based update for the dictionary (NOODL algorithm). The authors show that, combined with a proper initialization, this has exact recovery guaranties. Interestingly, their experiments show that NOODL converges linearly in number of iterations, while Arora gets stuck after some iterations.\\n\\nI think the paper is relevant and proposes an interesting contribution. The paper is well written and the key elements are in the body. However, there is a lot of important material in the Appendix, which I think may be relevant to the readers. It would be nice to have some more intuitive explanations at least of Theorem 1. Also, it is clear in the experiments the superiority with respect to Arora in terms of iterations (and error), but what about computational time?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"We found this work interesting. We think that some issues need to be further addressed by the authors.\", \"review\": \"The main contributions of this work are essentially on the theoretical aspects. It seems that the proposed algorithm is not very original because its two parts, namely prediction (coefficient estimation) and learning (dictionary update) have been widely used in the literature, using respectively a IHT and a gradient descent. The authors need to describe in detail the algorithmic novelty of their work.\\n\\nThe definition of \\u201crecovering true factor exactly\\u201d need to be given. The proposed algorithm involves several tuning parameters, when alternating between two updating rules, an IHT-based update for coefficients and a gradient descent-based update for the dictionary. Therefore, an appropriate choice of their values need to be given.\\n\\nIn the algorithm, the authors need to define the HT function in (3) and (4).\\n\\nIn the experiments, the authors compare the proposed method to only the one proposed by Arora et al. 2015. We think that this is not enough, and more extensive experimental results would provide a better paper. \\n\\nThere are some typos that can be easily found, such as \\u201cof the out algorithm\\u201d.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
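To make the NOODL discussion above concrete, below is a rough sketch of one alternating iteration on a batch Y of samples generated as Y = A* X*: R inner iterative-hard-thresholding (IHT) steps for the coefficients, followed by one gradient step on the dictionary. The concrete step sizes, threshold, gradient form, and column normalization are illustrative placeholders, not the theoretical choices (eta_x = Omega_tilde(k/sqrt(n)), tau = Omega_tilde(k^2/n), eta_A = Theta(m/k)) or the exact update rule analyzed in the paper.

```python
import numpy as np

def hard_threshold(z, tau):
    # zero out entries whose magnitude is below the threshold tau
    return z * (np.abs(z) > tau)

def noodl_step(A, Y, R=20, eta_x=0.2, tau=0.1, eta_A=1.0):
    """One illustrative NOODL-style iteration: estimate sparse coefficients X
    by IHT with the current dictionary A, then take a gradient step on A."""
    X = hard_threshold(A.T @ Y, tau)                  # rough support/coefficient init
    for _ in range(R):                                # inner IHT refinement
        X = hard_threshold(X - eta_x * (A.T @ (A @ X - Y)), tau)
    grad_A = (A @ X - Y) @ np.sign(X).T / Y.shape[1]  # approximate gradient w.r.t. A
    A = A - eta_A * grad_A
    return A / np.linalg.norm(A, axis=0, keepdims=True), X  # keep unit columns
```

Because the coefficient estimate is refined with R IHT steps rather than a single hard-thresholding pass, the coefficient error is driven by the dictionary error alone, which is the intuition the authors give for removing the bias of earlier methods.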
|
SkguE30ct7 | Neural Model-Based Reinforcement Learning for Recommendation | [
"Xinshi Chen",
"Shuang Li",
"Hui Li",
"Shaohua Jiang",
"Le Song"
] | There is great interest, as well as many challenges, in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging.
In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, where we develop a generative adversarial network to imitate user behavior dynamics and learn her reward function. Using this user model as the simulation environment, we develop a novel DQN algorithm to obtain a combinatorial recommendation policy which can handle a large number of candidate items efficiently. In our experiments with real data, we show this generative adversarial user model can better explain user behavior than alternatives, and the RL policy based on this model can lead to a better long-term reward for the user and higher click rate for the system. | [
"Generative adversarial user model",
"Recommendation system",
"combinatorial recommendation policy",
"model-based reinforcement learning",
"deep Q-networks"
] | https://openreview.net/pdf?id=SkguE30ct7 | https://openreview.net/forum?id=SkguE30ct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeoHwvlgV",
"BkePyI9vAX",
"B1lpq75DRQ",
"H1xg9f5PCX",
"SJl7NZqD0Q",
"Byljz3KPC7",
"SklsdEJh2m",
"Bylww959nX",
"HygC7wF9nQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544742723121,
1543116255177,
1543115669165,
1543115400448,
1543115050715,
1543113746744,
1541301363199,
1541216863213,
1541211942105
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1463/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1463/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1463/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1463/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper formulates the recommendation as a model-based reinforcement learning problem. Major concerns of the paper include: paper writing needs improvement; many decisions in experimental design were not justified; lack of sufficient baselines; results not convincing. Overall, this paper cannot be published in its current form.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Improvements needed\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We appreciate your constructive and detailed comments! We present our clarification in the following:\\n\\n(1)The idea of using a learned reward function instead of manually defined one sound sweet. But based on (7) and (8), the reward function is essentially giving more rewards for the action that the user really clicks on.\", \"one_can_interpret_the_mini_max_framework_in_two_ways\": \"(i)The user behavior model \\\\phi acts as a generator which generates the user's next actions based on her history, while the reward r acts as a discriminator which tries to differentiate user's actual actions from those generated by the behavior model \\\\phi. More specifically, the learned reward function r will extract some statistics from both real user actions and model user actions, and try to magnify their differences (or make a larger negative gap). In contrast, the learned user behavior model will try to make the difference smaller, and hence more similar to the real user behavior.\\n(ii)Alternatively, the optimization can also be interpreted as a game between an adversary and a learner where the adversary tries to minimize the reward of the learner by adjusting r, while the learner tries to maximize its reward by adjusting \\\\phi to counteract the adversarial moves. This gives the user behavior training process a large-margin training flavor, where we want to learn the best model even for the worst scenario. \\n\\nWe added these additional intuitive descriptions in sec 4.1 and 4.3 to make it more clear. \\n\\n(2)How much difference is there compared with traditional manual reward design of giving a click with a reward of 1? \\n\\nFirst, the learned reward function provides more information about a user\\u2019s preference, and provide a better interpretation of user behavior. Setting reward function to 1/-1 cannot fully differentiate user\\u2019s preference over different items. \\n\\nSecond, the learned reward function also helps reinforcement learning to learn a better policy. This can be explained by the reward shaping phenomenon in reinforcement learning, where continuous reward signals can help reinforcement learning algorithm converge better than sparse binary signals. We\\u2019ve also included an experimental comparison with the reward as 1/-1 and showed that the learned continuous rewards lead to better policies.\\n\\n(3)The assumption \\\"in each pageview, users are recommended a page of k items and provide feedback by clicking on only one of these items; and then the system recommends a new page of k items\\\" does not sound realistic. What if the users click on multiple items?\\n\\nWe model the multiple-click case as a sequence of clicks with the same display-set. Essentially, we need an ordered list to fit either our position weighting scheme or LSTM.\\n\\n(4)The combinatorial recommendation is useful in the recommendation setting. But it is also important to get the correct ranking order for items from the recommendation list, ie, the best item should rank on the top of the list. Is this principal guaranteed in the combinatorial recommendation proposed in this paper? It is not discussed in this paper.\\n\\nWe can use the user behavior model \\\\phi together with the cascading Q-networks to address the ranking question: \\n(i) First, the cascading network will select k items from the candidate pool. \\n(ii) Then, the user model can be used to assign a likelihood to each selected item. 
The item with high likelihood will be ranked higher. \\nWe note that the cascading Q-networks themselves do not explicitly guarantee the ranking of individual items, and the networks will score a set of items jointly. We can use the user behavior model to rank because experiments in Sec 6.2 already show that our user behavior model performs well in terms of ranking the displayed items.\\n\\n(5)The authors claim to provide an efficient combinatorial recommendation but fail to provide any computational complexity analysis or providing any analysis on training or serving time. Is the proposed algorithm computationally practical to be deployed in a real system?\\n\\nFirst, to search the optimal action, there are (n choose k) = n! / (k!(n-k)!) many candidates. With our designed cascaded Q-networks, we only need to search over n candidates for k times. Thus, we can obtain the optimal action with O(kn) computations. We mention this briefly in the last paragraph in sec 5.1.\\n\\n(6)The experiments are too weak because the baselines are old and state of art methods are missing from the comparison.\\n\\nIn the revised version, we\\u2019ve compared to 7 strong baselines in 6 datasets. Besides, we want to clarify that the previous 2 baseline methods(LR and CCF) are already strong baselines since they\\u2019ve been augmented with wide & deep feature layers.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you very much for your review and suggestions! We present our clarification in the following:\\n\\n(1)in Figure 6 no comparison with model-free (policy-gradient type) of approaches\\n\\nIn Figure 6 (now Figure 7 in revised version), we compared deep Q-learning without using user model for adaption. Furthermore, LinUCB is another model-free approach which assumes an adversarial user. In both cases, our model-based adaptation produces better results. \\n\\n(2)there is not a lot of detail on the value of the generative adversarial network for the user behavior dynamics, thus this prevents the reader from fully understanding the contribution\\n\\nOur GAN framework learns both users model and the corresponding reward function in a unified framework. The values are reflected in:\\n(i) The framework allows us to learn a better user model by using the learned the loss function (the reward r). \\n(ii) The framework allows later reinforcement learning to be carried out with a principled reward function, rather than manually designed reward. \\n(iii) The framework allows us to perform model-based RL and online adaptation for new users to achieve better results. \\n\\n(3)only 2 datasets are used\\n\\nIn the revised version, we compared with 7 strong baselines in 6 datasets. In most datasets, our method achieves the best results. \\n\\n(4)only 100 users for test users seems few\\n\\nWe have the policies tested on 1,000 users, but in the first version, we only plotted the results on 100 users. In the revised version, we updated the figures and the numbers to present the results on 1,000 users.\"}",
"{\"title\": \"Response to Reviewer3(continue)\", \"comment\": \"(7)Section 4.3 to be relatively unclear.\", \"one_can_interpret_the_mini_max_optimization_in_two_ways\": \"The user behavior model \\\\phi acts as a generator which generates the user's next actions based on her history, while the reward r acts as a discriminator which tries to differentiate user's actual actions from those generated by the behavior model \\\\phi. More specifically, the learned reward function r will extract some statistics from both real user actions and model user actions, and try to magnify their differences (or make a larger negative gap). In contrast, the learned user behavior model will try to make the difference smaller, and hence more similar to the real user behavior.\\nAlternatively, the optimization can also be interpreted as a game between an adversary and a learner where the adversary tries to minimize the reward of the learner by adjusting r, while the learner tries to maximize its reward by adjusting \\\\phi to counteract the adversarial moves. This gives the user behavior training process a large-margin training flavor, where we want to learn the best model even for the worst scenario. \\n\\nWe added these additional intuitive descriptions in sec 4.1 and 4.3 to make it more clear. \\n\\n(8)if Eq. 7 is equivalent to Eq. 8, why is the solution of 8 used only to initialize 7?\\n\\nThe general formulation in Eq7 can use many different regularization terms, such as L2 regularization and f-divergence, beyond just Shannon entropy function, to induce potentially different user behaviors. This is analogous to GAN where a different variational form of the divergence can be used, such as Jensen-Shannon divergence, f-divergence, and Wasserstein divergence.\\n\\nEq9 in the revised version (previous Eq8) has two use cases: \\n(i) If Shannon entropy is used, we can directly use it for learning reward function and the user behavior model is related to the reward in closed form. \\n(ii) If other regularizations are used, the reward function from Eq9 can be used as initialization for the general optimization algorithm in Eq8 which works for any regularization. We include the experiments of training our model with L2 regularization using this initialization scheme in the revised paper.\\n\\n(9)Your cascading DQN idea seems like a good one. It would be nice to check if the constraints are correctly learned. If not, this seems like it would do no better than a greedy action-by-action solution.\\n\\nThank you for suggesting the constraint-checking! We empirically plot the quantity in the left-hand side(as y) and the right-hand side(as x) of the constraints. The scatter-plot is included as Figure 6, where we observe that the points are approximately along the diagonal.\\n\\nAlso, to make an action-by-action greedy policy, we design another action value function as Q(s, a1, a2, a3)=Q(s, a1)+Q(s, a2)+Q(s, a3) and include the comparison of its performance in table 2. Our cascading Q-network is much better than this greedy approach. Especially, when k is larger, the gap between their performances is larger.\\n\\n(10)In Section 6.1, it would be good to discuss the pre-processing in the main text.\\n\\nIn the revised version, we added some descriptions to the main text.\\n\\n(11)In 6.2, your baselines seem a bit weak. \\n\\nIn the revised version, we\\u2019ve compared to 7 strong baselines in 6 datasets. 
Besides, we want to clarify that the previous 2 baseline methods(LR and CCF) are already strong baselines since they\\u2019ve been augmented with wide & deep feature layers.\\n\\n(12)Related work: it would probably be good to survey some of the multi-arm bandit literature. There is also some CF-RL work which should be cited.\\n\\nPrevious RL-based methods are either using manually designed reward (eg. +1/-1 for click/no click) or model free. In the revision, we\\u2019ve compared with RL methods with manually designed rewards and model-free based RL approach, as well as multi-arm bandit based method (LinUCB). Our method is consistently better than these alternatives. \\n\\n(13)Section 6.2 and Table 1. I believe that Recall@k is most common in recommendation-systems-for-implicit-data literature. Or, are you assuming that what people do not click on are true negatives? This doesn't seem quite right as users are only allowed to click on a single item.\\n\\nThe experiments In Section 6.2 is to show that our user behavior model can make the most accurate prediction for users' behavior. Essentially, given user\\u2019s previous choices, we want to predict what is her *next choice*. In this case, only 1 item is positive, so we only report precision. \\n\\n(14)In Section 6.3, could you clarify how do you learn your reward model that is used to train the various methods?\\n\\nThe reward model learned together with user\\u2019s behavior model via the mini-max optimization. Once both are learned, it is treated as the simulation environment needed for reinforcement learning. Then various RL policies can be learned by interacting with this environment.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thanks for your effort in providing this detailed review! We present our clarification in the following:\\n\\n(1)I am not clear on whether or not in the proposed model, users are \\\"allowed\\\" to not click on a recommendation.\\n\\n\\u2018Not click\\u2019 is always treated as one action in each pageview. In the revised paper (Sec 4.1 Remark(ii)), we provide more explanation to make it clearer.\\n\\n(2)Section 4. I am not sure that using the Generative Adversarial Network terminology is useful here. \\n\\nWe want to generate the user\\u2019s next action based on her current state (i.e. her historical sequence of actions). In other words, the behavior model \\\\phi aims at mimicking the user\\u2019s behavior as a result of optimizing an unknown reward function r. Thus, one needs to simultaneously estimate \\\\phi and r. The estimation framework is a mini-max optimization resembling the generator(\\\\phi) and discriminator(r). The benefit of this GAN framework is that one can view the learning of the reward r as learning a loss function for the generative model \\\\phi. The learned loss function can lead to a better user behavior model than obtained via a predefined loss function.\\n\\nIn the revised paper, we enriched the description and explanation of both the user model and its mini-max formulation in Sec 4.1 and 4.3.\\n\\n(3)Remark in Section 4.1: It seems like a user not clicking on something is also useful information. Why not model it?\\n\\n\\u2018No click\\u2019 is always modeled as one action. We do not use a specific notation to indicate \\u2018no click\\u2019. Instead, it is denoted as one of the items. If the user does not click, then she is clicking the \\u2018no click\\u2019 item. In the revised paper (Sec 4.1 Remark(ii)), we\\u2019ve clarified this.\\n\\n(4)I am a bit unclear on the value of Lemma 1. Further, what are the assumptions behind it? (also what is this temperature parameter eta?)\", \"lemma_1_has_two_major_values\": \"(i) It helps the model interpretation and makes the exploration-exploitation nature of the model become clear. Our general formulation can use many different regularization terms, such as L2 regularization and f-divergence, beyond just Shannon entropy function, to induce potentially different user behaviors. This is analogous to generative adversarial networks where a different variational form of the divergence can be used, such as Jensen-Shannon divergence, f-divergence, and Wasserstein divergence. Lemma 1 holds only when the regularization function is the negative Shannon entropy. For other regularization functions, the resulting user behavior model does not have a closed form, but also induces some form of exploration-exploitation trade-off.\\n(ii) Lemma 1 is also used to prove Lemma 2, which gives us a way to initialize the mini-max optimization problems and make the training more stable for more general divergences.\\n\\nFrom equation (3), the regularization parameter eta represents the exploration level of the user. When eta is smaller, the user is more exploratory. When eta is larger, the user is more stubborn to choose the item with the highest reward. We discuss the parameter eta under Lemma 1 in the revised paper. \\n\\n(5)In Section 4.2, the size of your model seems to grow linearly with the number of user interactions. That seems like a major advantage of RNNs/LSTMs. 
In practice, I imagine you cull the history at some fixed point?\\n\\nYes, we use a fixed time window of history for the position weigh model. In practice, Backpropagation over time in RNN/LSTM also stops at certain fixed time steps. \\n\\n(6)What is the advantage of learning a reward?\\n\\nFirst, the learned reward function provides more information about a user\\u2019s preference, and provide a better interpretation of user behavior. Setting reward function to 1/-1 cannot fully differentiate user\\u2019s preference over different items. \\n\\nSecond, the learned reward function also helps reinforcement learning to learn a better policy. This can be explained by the reward shaping phenomenon in reinforcement learning, where continuous reward signals can help reinforcement learning algorithm converge better than sparse binary signals. We\\u2019ve also included an experimental comparison with the reward as 1/-1 and showed that the learned continuous rewards lead to better policies in the revised version.\"}",
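The closed form alluded to in the Lemma 1 / eta discussion above is an instance of a standard entropy-regularization identity. As a hedged illustration (our notation, which may differ from the paper's Eq 3 and Eq 9), maximizing an expected reward plus a (1/eta)-weighted Shannon entropy over the probability simplex yields a softmax over rewards:

```latex
\phi^{*} \;=\; \arg\max_{\phi \in \Delta} \; \mathbb{E}_{a \sim \phi}\big[r(a)\big] + \tfrac{1}{\eta}\, H(\phi)
\quad\Longrightarrow\quad
\phi^{*}(a) \;=\; \frac{\exp\big(\eta\, r(a)\big)}{\sum_{a'} \exp\big(\eta\, r(a')\big)}
```

This policy becomes uniform (fully exploratory) as eta -> 0 and concentrates on argmax_a r(a) (fully greedy) as eta -> infinity, matching the exploration-level interpretation of eta given above.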
"{\"title\": \"Paper Revision 1\", \"comment\": \"Dear reviewers,\\n\\nThank you for your insightful suggestions! We have revised our paper to address some common concerns as well as individual questions. As for common questions, \\n\\n(1) We\\u2019ve compared with 7 strong baselines in 6 different datasets.\\nBesides, we want to clarify that the previous 2 baseline methods (LR and CCF) are actually strong baselines since both have wide & deep(W&D) feature layers.\\n\\n(2) We\\u2019ve clarified the explanation of the GAN user model and the mini-max formulation.\\nWe reduced the length of introducing the RL framework and added more interpretation for the user model as well as the mini-max formulation in sec 4.1 and 4.3.\\n\\n(3) We\\u2019ve improved the writing.\\nTypos and grammar errors are carefully adjusted.\\n\\nAs for reviewer\\u2019s individual questions, we will clarify separately below.\"}",
"{\"title\": \"Interesting problem and ideas, manuscript may not be ready for publication yet\", \"review\": \"This paper proposes to frame the recommendation problem as one of (model-based) RL. The two main innovations are: 1) a model and objective for learning the environment and reward models; 2) a cascaded DQN framework for reasoning about a combinatorial number of actions (i.e., which subset of items to recommend to the user).\\n\\nThe problem is clearly important and the authors' approach focuses on solving some of the current issue with deployment of RL-based recommenders. Overall the paper is relatively easy to follow, but the current version is not the easiest to understand and, in particular, it may be worth providing more intuitions (e.g., about the GAN-like setup). I also found that several decisions are not properly justified. The novelty of this paper seems reasonably high but my impression is that other/stronger baselines would make the study more convincing. Copy-editing the paper would also greatly improve readability.\", \"detailed_comments\": [\"I am not clear on whether or not in the proposed model, users are \\\"allowed\\\" to not click on a recommendation. It sounds like the authors in fact allow it but I think that could be made clearer.\", \"Section 4. I am not sure that using the Generative Adversarial Network terminology is useful here. Specifically, it is not clear what is your generative model over (I imagine next state and reward?).\", \"Remark in Section 4.1: It seems like a user not clicking on something is also useful information. Why not model it?\", \"I am a bit unclear on the value of Lemma 1. Further, what are the assumptions behind it? (also what is this temperature parameter eta?)\", \"In Section 4.2, the size of your model seems to grow linearly with the number of user interactions. That seems like a major advantage of RNNs/LSTMs. In practice, I imagine you cull the history at some fixed point?\", \"What is the advantage of learning a reward? E.g., a very simple reward would be to give a positive reward if a user clicks on a recommended item and a negative reward otherwise. What does your learned reward allow beyond this?\", \"Section 4.3. I also found Section 4.3 to be relatively unclear. I find that more intuition would be helpful.\", \"Also, if Eq. 7 is equivalent to Eq. 8, then why is the solution of 8 used only to initialize 7? I guess it may have to do with not finding the global optimum.\", \"Your cascading DQN idea seems like a good one. It would be nice to check if the constraints are correctly learned. If not, this seems like it would do not better than a greedy action-by-action solution. Is that correct?\", \"In Section 6.1, it would be good to discuss the pre-processing in the main text since it's pretty important to understand the study (e.g., evaluate is impact).\", \"In 6.2, your baselines seem a bit weak. Why not compare to more recent CF models (e.g., including Session-Based RNNs which you cite earlier)?\", \"Related work: it would probably be good to survey some of the multi-arm bandit literature. There is also some CF-RL work which should be cited (perhaps there are a few things in there that should be compared to in Section 6.3 & 6.4).\", \"Section 6.2 and Table 1. I believe that Recall@k is most common in recommendation-systems-for-implicit-data literature. Or, are you assuming that what people do not click on are true negatives? 
This doesn't seem quite right as users are only allowed to click on a single item.\", \"In Section 6.3, could you clarify how do you learn your reward model that is used to train the various methods?\", \"There are many typos and grammatical errors in the paper. I would suggest that the authors carefully copy-edit the manuscript.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"This paper belongs to the space of treating recommendation as a reinforcement learning problem, and proposed a model-based (cascaded DQN) approach, using a generative adversarial network to simulate user rewards.\", \"pros\": [\"proposed a set of cascading Q functions, to learn a recommendation policy\", \"unified min-max optimization to learn the behavior model and the reward function\", \"interesting idea of using generative adversarial networks to simulate user rewards.\"], \"cons\": [\"in Figure 6 no comparison with model-free (policy-gradient type) of approaches\", \"there is not a lot of detail on the value of the generative adversarial network for the user behavior dynamics, thus this prevents the reader from fully understanding the contribution\", \"only 2 datasets are used\", \"only 100 users for test users seems few\", \"why only 1000 active users were sampled from MovieLens?\", \"Personally, I would prefer less details on formulating the recommendation problem as an RL problem (as there have been other papers before with a similar formulation) and more detail on the simulation user reward model, and in general in sections 4 and 5. Also, the experiments could be strengthened.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for Neural Model-Based Reinforcement Learning for Recommendation\", \"review\": \"The authors propose a deep reinforcement learning based recommendation algorithm. Instead of manually designing reward function for RL, a generative adversarial network was proposed to learn the reward function based on user's dynamic behavior. The authors also try to provide an efficient combinatorial recommendation algorithm by designing a cascade DQN. The authors hold their experiments on the Movielens and Ant Financial news dataset. The authors adopt logistic regression (LR) and collaborative competitive filtering(CCF) as comparison baseline to evaluate recommendation performance. The authors also compared their proposed RL policy CQDN with LinUCB.\\n\\n[Pros in Summary]\\n1. Recommendation in the deep neural network based RL is a hot topic.\\n\\n2. The motivation for using a self-learned rewards function and provide efficient combinatorial recommendation is interesting.\\n\\n[Cons in Summary]\\n1. The motivations/claimed contributions are not well supported/illustrated by the proposed algorithm or experiments.\\n\\n2. Some assumptions may not be realistic.\\n\\n3. The experiment is not sufficient without enough state of art baselines.\\n\\n4. The writing of this paper needs improvement.\\n\\n[Thoughts, Questions, and Problems in Details]\\n1. The idea of using a learned reward function instead of manually defined one sound sweet. But based on (7) and (8), the reward function is essentially giving more rewards for the action that the user really clicks on. How much difference is there compared with traditional manual reward design of giving a click with a reward of 1, especially given the circumstance that a lot of manual intervention is actually used in designing loss function like (7)?\\n\\nMoreover, in the experiment, there is no comparison experiment evaluating the difference between using a self-learned reward function vs. a traditional manual designed reward function.\\n\\n2. The assumption \\\"in each pageview, users are recommended a page of k items and provide feedback by clicking on only one of these items; and then the system recommends a new page of k items\\\" does not sound realistic. What if the users click on multiple items?\\n\\n3. The combinatorial recommendation is useful in the recommendation setting. But it is also important to get the correct ranking order for items from the recommendation list, ie, the best item should rank on the top of the list. Is this principal guaranteed in the combinatorial recommendation proposed in this paper? It is not discussed in this paper.\\n\\n4. The authors claim to provide an efficient combinatorial recommendation but fail to provide any computational complexity analysis or providing any analysis on training or serving time. Is the proposed algorithm computationally practical to be deployed in a real system?\\n\\n5. The experiments are too weak because the baselines are old and state of art methods are missing from the comparison.\\n\\n6. Typos and grammar errors across the paper, to name a few\\n\\n\\\"we will also estimate a user behavior model associate with the reward function\\\"\\n\\n\\\"a model for the sequence of user clicking behavior, discussion its parametrization and parameter estimation.\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rJedV3R5tm | RelGAN: Relational Generative Adversarial Networks for Text Generation | [
"Weili Nie",
"Nina Narodytska",
"Ankit Patel"
] | Generative adversarial networks (GANs) have achieved great success at generating realistic images. However, the text generation still remains a challenging task for modern GAN architectures. In this work, we propose RelGAN, a new GAN architecture for text generation, consisting of three main components: a relational memory based generator for the long-distance dependency modeling, the Gumbel-Softmax relaxation for training GANs on discrete data, and multiple embedded representations in the discriminator to provide a more informative signal for the generator updates. Our experiments show that RelGAN outperforms current state-of-the-art models in terms of sample quality and diversity, and we also reveal via ablation studies that each component of RelGAN contributes critically to its performance improvements. Moreover, a key advantage of our method, that distinguishes it from other GANs, is the ability to control the trade-off between sample quality and diversity via the use of a single adjustable parameter. Finally, RelGAN is the first architecture that makes GANs with Gumbel-Softmax relaxation succeed in generating realistic text. | [
"RelGAN",
"text generation",
"relational memory",
"Gumbel-Softmax relaxation",
"multiple embedded representations"
] | https://openreview.net/pdf?id=rJedV3R5tm | https://openreview.net/forum?id=rJedV3R5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkexOxGYa7",
"HkeOU5ZYa7",
"ryxH-q-FpQ",
"HyeNJtWtaQ",
"S1li5OZtpQ",
"HJxjoDZtTX",
"H1xW5Lt1pm",
"rygzGj-p2X",
"B1gqWjV5hX",
"HJxbxO4cn7"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1542164583949,
1542163024474,
1542162940763,
1542162651605,
1542162579285,
1542162339481,
1541539465103,
1541376777691,
1541192450098,
1541191657107
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1462/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1462/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1462/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1462/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1462/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Good Improvement\", \"comment\": \"The changes addressed my concerns and the score is now improved from 6 to 8.\\n\\nFor \\\"teacher-forcing\\\": I misread MLE as the same as MaliGAN, given the name similarities (\\\"maximum likelihood\\\"). Thanks for the clarification.\"}",
"{\"title\": \"Reply to Reviewer3 (Part 2)\", \"comment\": \"5. \\u201cDiscuss the phenomenon -- NLL_gen increases after pre-training, while NLL_oracle decreases after pretraining in Figure 3?\\u201d\\n\\nThis phenomenon is not unique in RelGAN (with a small \\\\beta_max in particular). In general, GANs can generate text with better quality but are also more likely to suffer from mode collapse than the MLE trained counterparts on text generation, as has been shown in many previous works, such as [2,3,4]. This is why we have always seen a decrease in the NLL_oracle score (better sample quality) together with an increase of the NLL_gen score (worse sample diversity) while training current GANs (not only RelGAN with a small \\\\beta_max) on the synthetic data. What makes RelGAN different from other GANs is that we can control the trade-off between sample quality and diversity with a single tunable hyperparameter. Tables 2 & 3 showed that if \\\\beta_max is tuned properly, we can make the sample diversity of RelGAN to be very close to (or even better than) that of the MLE pre-trained LSTMs, while at the same time achieving much better sample quality. \\n\\n6. \\u201cCan human evaluation be performed since automatic metrics are not reliable enough?\\u201d\\n\\nThanks for the suggestion! We have done human evaluation for the EMNLP2017 WMT News dataset on Amazon Mechanical Turk, where the evaluation criteria details are provided in Table 5 (Appendix B.2 in the revised version of the paper) and the results are shown in Table 4 (Section 3.3 in the revised version of the paper). We can see the human scores in Table 4 also prefer RelGAN to other GANs and the MLE baseline models.\\n\\nPlease let us know if we have addressed your concerns and if you have further comments. \\n\\n\\n[1] Lu et al., \\u201cNeural Text Generation: Past, Present and Beyond.\\u201d arXiv preprint arXiv:1803.07133.\\n[2] Fedus et al., \\u201cMaskGAN: Better Text Generation via Filling in the _.\\u201d in ICLR 2018.\\n[3] Zhu et al., \\u201cTexygen: A Benchmarking Platform for Text Generation Models.\\u201d in SIGIR 2018. \\n[4] Semeniuta et al., \\u201cOn accurate evaluation of gans for language generation.\\u201d arXiv preprint arXiv:1806.04936, 2018.\"}",
"{\"title\": \"Reply to Reviewer3 (Part 1)\", \"comment\": \"Thank you very much for your review and helpful comments. We address your specific questions and comments below:\\n\\n*Related Work*\\n1. TextGAN: Thanks for the reviewer\\u2019s suggestion! We have provided a more detailed discussion in the \\u201cRelated Work\\u201d Section to clarify the difference between RelGAN and TextGAN (Zhang et al., 2017) in dealing with the non-differentiability issue.\\n\\n2. FM-GAN: Thanks for pointing out this recent paper. We have also added a discussion of this paper in the \\u201cRelated Work\\u201d Section.\\n\\n*Questions*\\n1. \\u201cDoes discriminator need pre-training?\\u201d\\n\\n No, the discriminator in RelGAN does not need pre-training.\\n\\n2. \\u201cHow does RelGAN compared to MaskGAN?\\u201d\\n\\n[1] has recently showed that MaskGAN has much lower BLEU scores compared to our baseline models (MLE, SeqGAN, RankGAN, LeakGAN), where the evaluation settings in [1] are the same with our work. Thus, MaskGAN also performs worse than RelGAN in terms of BLEU scores. \\n\\n3. \\u201cWhat are the self-BLEU score results since it was used in previous work?\\u201d\", \"a_short_answer\": \"We did evaluate all methods using our implemented self-BLEU scores. However, we found that they may not be suitable for evaluating sample diversity. Also, we found an issue in the implementation of self-BLEU on the open-source Texygen platform that was used in prior work. We are currently in contact with authors of Texygen regarding the issue. Before we reach an agreement, we think it would be better not to use self-BLEU scores.\", \"more_details\": \"As far as we know, self-BLEU scores are first proposed by the authors of the Texygen benchmarking platform to evaluate diversity, where the basic idea is to calculate the BLEU scores by choosing each sentence in the set of generated sentences as hypothesis and the others as reference, and then take an average of BLEU scores over all the generated sentences. However, when looking into the implementation of self-BLEU scores: https://github.com/geek-ai/Texygen/blob/master/utils/metrics/SelfBleu.py, we found a severe issue inside for evaluating self-BLEU over training: Only in the first time of evaluation that the reference and hypothesis come from the same \\u201ctest data\\u201d (i.e. the whole set of generated sentences). After that, the hypothesis keeps updated but the reference remains unchanged (due to \\u201cself.is_first=False\\u201d), which means hypothesis and reference are not from the same \\u201ctest data\\u201d any more, and thus the scores obtained under this implementation is not self-BLEU scores! \\n\\nTo this end, we modified their implementation to make sure that the hypothesis and reference are always from the same \\u201ctest data\\u201d and found that the self-BLEU (2-5) scores are always 1 when evaluating all the models (MLE, SeqGAN, RankGAN, LeakGAN and RelGAN). Also as inspired by the Reviewer2\\u2019s comments, we tried to reduce the number of the \\u201ctest data\\u201d by applying a small portion of the whole generated data as reference and still get 1 for the self-BLEU scores (even for the portion 5%). Therefore, if we understand correctly, self-BLEU scores may not be suitable for evaluating sample diversity. \\n\\n4. 
\\u201cWhy is the \\\\beta_max value used in the synthetic and real datasets quite different?\\u201d\\n\\nAs there is a trade-off between sample diversity and quality in RelGAN, controlled by the tunable parameter \\\\beta_max in both synthetic and real data, we can adjust \\\\beta_max depending on different evaluation goals. In the synthetic data experiments, the goal is to show that RelGAN could outperform other models in terms of NLL_oracle (i.e. sample quality) regardless of the sample diversity. Thus, we chose very small \\\\beta_max (1 or 2). In the real data experiments, however, the goal was to show that RelGAN could generate real text with both high quality and better diversity. Thus, we chose some intermediate values of \\\\beta-max (100 or 1000) to find a good trade-off where both BLEU and NLL_gen scores can outperform other models. If instead we had chosen \\\\beta_max=1, we would get significantly higher BLEU scores (as shown in Table 7 of Appendix H in the revised version of the paper), but at the cost of worse NLL_gen scores than other models (which would be in conflict with the main goal of the real data experiments).\"}",
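For readers who want the corrected metric described above, here is a minimal sketch (our own reimplementation using NLTK, assuming it is installed; this is not Texygen's code) in which the hypothesis and its references are always drawn from the same set of generated sentences:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated, n=4):
    """generated: list of tokenized sentences (each a list of tokens)."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(generated):
        refs = generated[:i] + generated[i + 1:]       # leave-one-out references
        scores.append(sentence_bleu(refs, hyp, weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

# Highly repetitive samples push self-BLEU toward 1, i.e. low diversity.
print(self_bleu([["the", "cat", "sat"], ["the", "cat", "sat"],
                 ["a", "dog", "ran", "fast"]], n=2))
```

With this leave-one-out construction, the metric directly measures how similar each generated sentence is to the rest of the generated set, which is the diversity notion the authors report being unable to reproduce with the platform's original code.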
"{\"title\": \"Reply to Reviewer2 (Part 2)\", \"comment\": \"3. \\u201cIt would make sense for the paper to show both ends of these failing cases with the exploration on more values of \\\\beta_max.\\u201d\\n\\nThank you for the suggestion. We have added Appendix H to explore the impact of different inverse temperature \\\\beta_max in RelGAN, especially where the failing cases at the two extremes are discussed. \\n\\nFirst, our results confirm that, similar to the synthetic data experiments, there also exists a consistent trade-off between sample quality and diversity in the real data, controlled by the tunable hyperparameter \\\\beta_max. It also reveals failing cases at the two extremes: On the one hand, if \\\\beta_max is too small, RelGAN suffers from severe mode collapse (large NLL_gen scores) and training instability (high variance of scores). On the other hand, if \\\\beta_max is too large, the sample quality improvement of RelGAN becomes marginal (low BLEU scores). Therefore, in the main text we have chosen the two intermediate values, \\\\beta_max = 100 & 1000, to show the advantages of RelGAN over other models, while still demonstrating its ability to control the trade-off between sample quality and diversity.\\n\\n4. \\u201cThe first paragraph in section 2.2.2 in terms of describing mode collapse is misleading.\\u201d\\n\\nWe agree that the last sentence of the first paragraph in Section 2.2.2 is a little bit misleading. We appreciate the reviewer\\u2019s suggestion and have changed it to \\u201cIntuitively, this might be one factor that contributes to mode collapse in RelGAN on text generation.\\u201d, which we believe avoids making an argument that applies to mode collapse in general GANs on image generation.\\n\\nPlease let us know if we have addressed your concerns and if you have further comments.\"}",
"{\"title\": \"Reply to Reviewer2 (Part 1)\", \"comment\": \"Thank you very much for your review and helpful comments. We address your specific questions and comments below:\\n\\n1. \\u201cThe paper does not compare with RNNs trained using only the \\\"teacher-forcing\\\" algorithm without using GAN.\\u201d\\n\\nWe think there are some misunderstandings here that we would like to clarify. One of our baseline algorithms is \\u201cMLE\\u201d, where recurrent networks are trained by using the teacher-forcing algorithm (see Tables 1, 2 & 3 for comparison with RelGAN). RelGAN consistently outperforms the baseline model \\u201cMLE\\u201d in terms of both BLEU scores (sample quality) and NLL_gen (sample diversity). For clarity, we have added a sentence in Section 3.1 to explicitly explain \\u201cMLE\\u201d in the revised version of the paper. \\n\\n2. \\u201cWhether using BLEU on the entire testing dataset is a good idea for benchmarking is controversial.\\u201d\\n\\nFirst, we agree that absolute BLEU scores depend on the size of test dataset. As such, we have used different subsets (25%, 50%, 75%, 100%) of the original test data for COCO Image Captions to evaluate the generated text from both RelGAN and MLE. We evaluate each subset of the test data 6 times and record the average BLEU scores. We find that for both RelGAN and MLE, the (average) BLEU scores consistently increase with the fraction of test data used. Therefore, whenever we show BLEU scores, the size of the test data MUST be provided as well for fair comparison. This is similar to the Frechet inception distance (FID) score used for evaluating generated images, where the number of generated and real samples used to calculate the FID score also influences the value of the score. \\n\\nSecond, the following empirical observation shows that the reviewer\\u2019s concern on the \\u201ccontroversiality of BLEU scores\\u201d may be not that severe in this paper: The relative BLEU score differences between RelGAN and MLE remain approximately invariant for different portions of the test data. For instance, BLEU-2 with portion 25% is 0.750 for RelGAN vs. 0.649 for MLE (with the difference 0.101) and BLEU-2 with portion 75% is 0.811 for RelGAN vs. 0.718 for MLE (with the difference 0.093). Thus, if we focus on the relative comparison between different models, the size of the test dataset may not matter much, as long as it is not too small or too large. \\n\\nFinally, if we correctly understand the reviewer's suggestion of using a \\u201cteacher-forcing\\u201d trained RNN for evaluation, the reviewer refers to the \\u201cvalidation perplexity\\u201d metric. [1] has previously used validation perplexity to evaluate the sample quality of MaskGAN, where they showed that in some cases, the perplexity increases steadily while sample quality still remained relatively consistent. More broadly, [2] has also pointed out that validation perplexity does not necessarily correlate with sample quality in the evaluation of generative models, and so validation perplexity may not be a good replacement for BLEU scores in terms of evaluating sample quality. 
Instead, as suggested by Reviewer3, we have added the human Turing test on Amazon Mechanical Turk (where results are shown in Table 4 in the revised version of the paper), which we think could be a good complementary to BLEU scores.\\n\\n\\n[1] Fedus et al., \\u201cMaskGAN: Better Text Generation via Filling in the _.\\u201d in ICLR 2018.\\n[2] Theis et al., \\u201cA note on the evaluation of generative models.\\u201d in ICLR 2016.\"}",
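A small illustration of why the reference-set size must be reported alongside BLEU (hypothetical toy data; with multiple references, NLTK's BLEU clips n-gram counts against the best-matching reference, so enlarging the reference pool can only keep the score equal or raise it):

```python
from nltk.translate.bleu_score import sentence_bleu

refs = [["the", "dog", "ran"],
        ["a", "cat", "sat", "down"],
        ["the", "dog", "ran", "fast"]]
hyp = ["the", "dog", "ran", "fast"]

for k in (1, 2, 3):                      # growing reference subsets
    print(k, round(sentence_bleu(refs[:k], hyp, weights=(0.5, 0.5)), 3))
```

The printed scores are monotone non-decreasing in the number of references, which mirrors the authors' empirical observation that BLEU for both RelGAN and MLE rises with the fraction of test data used.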
"{\"title\": \"Reply to Reviewer1\", \"comment\": \"Thank you very much for your review and helpful comments. We address your specific questions and comments below:\\n\\n1. \\u201cNon-differentiability is different to denote the gradient as 0.\\u201d\\n\\nWe agree that in general non-differentiability does not directly imply a vanishing gradient as in (4). To draw this conclusion, we first consider that the sampling operations in (3) are not differentiable, i.e., the output of the generator is discrete, taking values from a finite set. This implies a step function (with multiple steps in general) at the end of the generator, which is not differentiable only at a finite set of points (with measure zero). Since the derivative of a step function is 0 almost everywhere, the gradient of the generator\\u2019s output w.r.t. its parameters will also be zero almost everywhere. For clarity, we have added this reasoning to the revised version of the paper.\\n\\n2. \\u201cAre the multiple representations in discriminator simply multiple \\u201cEmbedding\\u201d matrices?\\u201d\\n\\nYes, we apply multiple different embedding matrices, each of which linearly transforms one input sentence into a separate embedded representation. In our proposed discriminator framework, each embedded representation is independently passed through the later layers of the discriminator neural network (denoted as \\u201cCNN-based classifier\\u201d in the paper) and the loss function to obtain an individual score (e.g. \\u201creal\\u201d or \\u201cfake\\u201d). Finally, the ultimate score to be propagated back to the generator is the average of these individual scores. Our ablation study experiments showed the advantages of this simple improvement in the discriminator.\\n\\n3. \\u201cWhy curves in RelGAN eventually fall after around 1000 iterations?\\u201d\\n\\nGood point! We have added Appendix F in the revised version of the paper to discuss this phenomenon. As shown in Figure 10, there is a diversity-quality transition during training of RelGAN: Early on in training, it learns to aggressively improve sample quality while sacrificing diversity. Later on, it turns instead to maximizing sample diversity while gradually decreasing sample quality. Intuitively, we think it may be much easier for the generator to produce realistic samples -- regardless of their diversity -- in order to fool the discriminator in the early stages of training. As the discriminator becomes better at distinguishing samples with less diversity over iterations, the generator has to focus more on producing more diverse samples to fool the discriminator.\\n\\n4. \\u201cDo you try training from scratch without pre-training?\\u201d\\n\\nYes, we have tested the performance of RelGAN without pre-training by using different GAN losses (including WGAN-GP as the reviewer has mentioned). The results and analysis are provided in Appendix G. We find that without pre-training, there is still a significant improvement for RelGAN compared with the random generation, in particular for the standard GAN loss. In contrast, without pre-training, previous RL-based GANs for text generation, such as SeqGAN and RankGAN, always get stuck around their initialization points and are not able to improve performance at all. This demonstrates that RelGAN may be a more promising GAN architecture to explore in order to reduce dependence on pre-training for text generation. 
In future work, we plan to do an extensive hyperparameter search to further improve the performance of RelGAN without pre-training.\", \"related_work\": \"Thanks for pointing out this paper. We have added a discussion of this paper in the \\u201cRelated Work\\u201d Section.\\n\\nPlease let us know if we have addressed your concerns and if you have further comments.\"}",
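For concreteness, the Gumbel-Softmax relaxation discussed throughout this thread can be sketched as follows (a minimal NumPy illustration with our own variable names; RelGAN's actual implementation and the exact placement of the inverse temperature beta may differ):

```python
import numpy as np

def gumbel_softmax(logits, beta, rng):
    """Differentiable relaxation of sampling: softmax of logits plus Gumbel noise."""
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                 # standard Gumbel(0, 1) noise
    z = beta * (logits + g)                 # beta is the inverse temperature
    z = z - z.max()                         # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.RandomState(0)
logits = np.array([2.0, 1.0, 0.1])
for beta in (1.0, 100.0):                   # larger beta -> closer to one-hot
    print(beta, np.round(gumbel_softmax(logits, beta, rng), 3))
```

Unlike a hard argmax sample, which is piecewise constant in the logits (hence the zero-almost-everywhere gradient described in point 1 above), this relaxed output has well-defined nonzero gradients everywhere.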
"{\"metareview\": \"\", \"pros\": [\"well-written and clear\", \"good evaluation with convincing ablations\", \"moderately novel\"], \"cons\": \"- Reviewers 1 and 3 feel the paper is somewhat incremental over previous work, combining previously proposed ideas.\\n\\n(Reviewer 2 originally had concerns about the testing methodology but feels that the paper has improved in revision)\\n(Reviewer 3 suggests an additional comparison to related work which was addressed in revision)\\n\\nI appreciate the authors' revisions and engagement during the discussion period. Overall the paper is good and I'm recommending acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper; a little incremental\"}",
"{\"title\": \"Interesting work which makes Gumbel-softmax relaxation work in GAN-based text generation using a relational memory\", \"review\": \"Overall:\\nThis paper proposes RelGAN, a new GAN architecture for text generation, consisting of three main components: a relational memory based generator for the long-distance dependency modeling, the Gumbel-Softmax relaxation for training GANs on discrete data, and multiple embedded representations in the discriminator to provide a more informative signal\\nfor the generator updates.\", \"quality_and_clarity\": \"The paper is well-written and easy to read.\", \"originality\": \"Although each of the components (relational memory, Gumbel-softmax) was already proposed by previous works, it is interesting to combine these into a new GAN-based text generator. \\nHowever, the basic setup is not novel enough. The model still requires pre-training the generator using MLE. The major difference are the architectures (relational memory, multi-embedding discriminator) and training directly through Gumbel-softmax trick which has been investigated in (Kusner and Hernandez-Lobato, 2016).\", \"significance\": \"The experiments in both synthetic and real data are in detail, and the results are good and significant.\\n\\n-------------------\", \"comments\": \"-- In (4), sampling is known as non-differentiable which means that we cannot get a valid definition of gradients. It is different to denote the gradient as 0.\\n-- Are the multiple representations in discriminator simply multiple \\u201cEmbedding\\u201d matrices?\\n-- Curves using Gumbel-softmax trick + RM will eventually fall after around 1000 iterations in all the figures. Why this would happen?\\n-- Do you try training from scratch without pre-training? For instance, using WGAN as the discriminator\", \"related_work\": \"-- Maybe also consider to the following paper which used Gumbel-softmax relaxation for improving the generation quality in neural machine translation related?\\nGu, Jiatao, Daniel Jiwoong Im, and Victor OK Li. \\\"Neural machine translation with gumbel-greedy decoding.\\\" arXiv preprint arXiv:1706.07518 (2017).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good Paper\", \"review\": \"Update: the authors' response and changes to the paper properly addressed the concerns below. Therefore the score was improved from 6 to 8.\\n\\n----\", \"the_paper_makes_several_contributions\": \"1. it extends GAN to text via Gumbel-softmax relaxation, which seems more effective than the other approaches using REINFORCE or maximum likelihood principle. 2. It shows that using relational memory for LSTM gives better results. 3. Ablation study on the necessity of the relational memory, the relaxation parameter and multi-embedding in the discriminator is performed.\\n\\nThe paper's ideas are novel and good in general, and would make a good contribution to ICLR 2019. However, there are a few things in need of improvement before it is suite for publication. I am willing to improve the scores if the following comments are properly addressed.\\n\\nFirst of all, the paper does not compare with recurrent networks trained using only the \\\"teacher-forcing\\\" algorithm without using GAN. This means that at a high level, the paper is insufficient to show that GAN is necessary for text generation at all. That said, since almost every other text GAN paper also failed to do this, and the paper's contribution on using Gumbel-softmax relaxation and the relational memory is novel, I did not get too harsh on the scoring because of this.\\n\\nSecondly, whether using BLEU on the entire testing dataset is a good idea for benchmarking is controversial. If the testing data is too large, it could be easily saturated. On the other hand, if the testing data is small, it may not be sufficient to capture the quality well. I did not hold the authors responsible on this either, because it was used in previously published results. However, the paper did propose to use an oracle, and it might be a good idea to use a \\\"teacher-forcing\\\" trained RNN anyways since it is necessary to show whether GAN is a good idea for text generation to begin with (see the previous comment).\\n\\nA third comment is that I had wished the paper did more exploration on the relaxation parameter \\\\beta. Ideally, if \\\\beta is too large, the output would be too skewed towards a one-hot vector such that instability in the gradients occurs. On the other hand, if \\\\beta is too small, the output might not be close enough to one-hot vectors to make the discriminator focus on textual differences rather than numerical differences (i.e., between a continuous and a one-hot vector). It would make sense for the paper to show both ends of these failing cases, which is not apparent with only 2 hyper-parameter choices.\\n\\nFinally, the first paragraph in section 2.2.2 suggests that the gap between discrete and continuous outputs is the reason for mode collapsing. This is false. For image generation, when all the outputs are continuous, there is still mode collapsing happening with GANs. The authors could say that the discrete-continuous gap contributes to mode-collapsing, but this is not too good either because it will require the paper to conduct experiments beyond text generation to show this. Authors should make changes here.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting idea and experiments well-executed\", \"review\": \"==========================\\nI have read the authors' response and other reviewers' comments. Thanks the authors for taking great effort in answering my questions. Generally, I feel satisfied with the repsonse, and prefer an acceptance recommendation. \\n==========================\", \"contributions\": \"The main contribution of this paper is the proposed RelGAN. First, instead of using a standard LSTM as generator, the authors propose using a relational memory based generator. Second, instead of using a single CNN as discriminator, the authors use multiple embedded representations. Third, Gumbel-softmax relaxation is also used for training GANs on discrete textual data. The authors also claim the proposed model has the ability to control the trade-off between sample quality and diversity via a single adjustable parameter.\", \"detailed_comments\": \"(1) Novelty: This paper is not a breakthrough paper, mainly following previous work and propose new designs to improve the performance. However, it still contains some novelty inside, for example, the model choice of the generator and discriminator. I think the observation that the temperature control used in the Gumbel-softmax can reflect the trade-off between quality and diversity is interesting. \\n\\nHowever, I feel the claim in the last sentence of the abstract and introduction is a little bit strong. Though this paper seems to be the first to really use Gumbel-softmax for text generation, similar techniques like using annealed softmax to approximate argmax has already been used in previous work (Zhang et al., 2017). Since this is similar to Gumbel-softmax, I think this may need additional one or two sentences to clarify this for more careful discussion. \\n\\nFurther, I would also recommend the authors discuss the following paper [a] to make this work more comprehensive as to the discussion of related work. [a] also uses annealed softmax approximation, and also divide the GAN approaches as RL-based and RL-free, similar in spirit as the discuss in this paper. \\n\\n[a] Adversarial Text Generation via Feature-Mover's Distance, NIPS 2018.\\n\\n(2) Presentation: This paper is carefully written and easy to follow. I enjoyed reading the paper. \\n\\n(3) Evaluation: Experiments are generally well-executed, with ablation study also provided. However, human evaluation is lacked, which I think is essential for this line of work. I have a few questions listed below.\", \"questions\": \"(1) In section 2.4, it mentions that the generator needs pre-training. So, my question is: does the discriminator also need pre-training? If so, how the discriminator is pre-trained?\\n\\n(2) In Table 1 & 2 & 3, how does your model compare with MaskGAN? If this can be provided, it would be better. \\n\\n(3) Instead of using NLL_{gen}, a natural question is: what are the self-BLEU score results since it was used in previous work?\\n\\n(4) The \\\\beta_max value used in the synthetic and real datasets is quite different. For example, \\\\beta_max = 1 or 2 in synthetic data, while \\\\beta_max = 100 or 1000 is used in real data. What is the observation here? Can the authors provide some insights into this?\\n\\n(5) I feel Figure 3 is interesting. As the authors noted, NLL_gen measures diversity, NLL_oracle measures quality. 
Looking at Figure 3, does this mean GAN model produces higher quality samples than MLE pretrained models, while GAN models also produces less diverse samples than MLE models? This is due to NLL_gen increases after pretraining, while NLL_oracle further decreases after pretraining. However, this conclusion also seems strange. Can the authors provide some discussion on this? \\n\\n(6) Can human evaluation be performed since automatic metrics are not reliable enough?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1xwNhCcYm | Do Deep Generative Models Know What They Don't Know? | [
"Eric Nalisnick",
"Akihiro Matsukawa",
"Yee Whye Teh",
"Dilan Gorur",
"Balaji Lakshminarayanan"
] | A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood. | [
"deep generative models",
"out-of-distribution inputs",
"flow-based models",
"uncertainty",
"density"
] | https://openreview.net/pdf?id=H1xwNhCcYm | https://openreview.net/forum?id=H1xwNhCcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJl0-fqLeV",
"SJxIWB4gl4",
"BklXQj-R1V",
"rkxi_sW8kN",
"B1eudmtWkN",
"HJlpx-we1E",
"SkluvEgeJN",
"rJxBdmxxyV",
"BJgniGai0m",
"HJxL_ysjC7",
"HylTaY-jAm",
"SyedvzM9Am",
"rygFbvpYRX",
"Skgkfk6tRm",
"ByxeVRhY0m",
"BygHqa2F0X",
"SygxWa3Y0m",
"ByloTnnFRQ",
"HkgLWfveT7",
"BkgNd7oT3m",
"r1eWc6qjnX",
"BJe__C5d3Q",
"HJx8SCQEn7",
"BJl6am8z37",
"ByefSPJRsm"
],
"note_type": [
"official_comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"comment"
],
"note_created": [
1545146885689,
1544729853678,
1544588058843,
1544063859031,
1543766895807,
1543692533365,
1543664735967,
1543664493254,
1543389860035,
1543380846148,
1543342532941,
1543279200101,
1543259905164,
1543257863134,
1543257639836,
1543257485414,
1543257336478,
1543257283284,
1541595645882,
1541415787948,
1541283208880,
1541086832444,
1540795965822,
1540674500811,
1540384570027
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1461/AnonReviewer3"
],
[
"~Shengyang_Sun4"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Re: on point 5 and asymmetric behaviour\", \"comment\": \"Thanks for your comment and question.\\n\\nPer the reviewers' requests for more evidence of the phenomenon on additional data sets, we wanted to bolster the 'motivating observations' section with experiments that better exhibit the curious out-of-distribution behavior. We found that the FashionMNIST-vs-MNIST pair illustrated the phenomenon better (i.e. larger BPD gap) than the NotMNIST-vs-MNIST pair and hence we replaced those results in the main text. We will also add the corresponding plots to Appendix B showing the asymmetric behavior for this pair as well (due to time constraints, we couldn't update all of the figures in the Appendix during the rebuttal period). This was the only reason for the switch. If you think the NotMNIST-vs-MNIST experiment is more interesting for some other reason, please do let us know your thoughts. \\n\\nWe wouldn't claim the asymmetry \\\"solves\\\" the issue since (i) even for models trained on SVHN, there could be other datasets that lead to higher likelihood and (ii) it does not immediately reveal a procedure to correct the CIFAR10-vs-SVHN (or similar) issue. The second-order analysis in Section 5 is still our best explanation for the asymmetric behavior. That is, the interaction between the model curvature and the data set variance leads to the phenomenon, and when the sign of the difference in variances is switched (which occurs when the train and OOD sets are switched), then we expect the phenomenon behavior to flip as well.\"}",
"{\"metareview\": \"This paper makes the intriguing observation that a density model trained on CIFAR10 has higher likelihood on SVHN than CIFAR10, i.e., it assigns higher probability to inputs that are out of the training distribution. This phenomenon is also shown to occur for several other dataset pairs. This finding is surprising and interesting, and the exposition is generally clear. The authors provide empirical and theoretical analysis, although based on rather strong assumptions. Overall, there's consensus among the reviewers that the paper would make a valuable contribution to the proceedings, and should therefore be accepted for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting empirical observation and analysis\"}",
"{\"comment\": \"could you please explain why the notmnist results were removed in the latest draft? I found these did illustrate well the issue this paper is trying to get across, albeit the asymmetric behaviour as reported in the appendix -- also, while on this, I'm also surprised that the official reviewers didn't ask more about this. Could you provide some thoughts on why reversing the train/test roles of data sets solves the pathological high test-likelihood issue? Thanks!\", \"title\": \"on point 5 and asymmetric behaviour\"}",
"{\"title\": \"Re: Density Estimation Observation Appears Elsewhere\", \"comment\": \"(Apologies for late response, we missed this earlier)\\n\\nThanks for pointing us to your work. We will incorporate it into our discussion of related work.\"}",
"{\"title\": \"Re: Thank you for your feedback\", \"comment\": \"Thank you for these suggestions, Reviewer #3. We probably won't be able to add them in the next week---as many of us authors are traveling to / attending NeurIPS---but we will add them to the next iteration of the draft.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"And thank you for revising the text. My main concerns are addressed, and the issue #5 is pretty minor given the other assumption made in the analysis.\\n\\nI am not a statistics expert, if one wants to test whether two univariate Gaussians have different means or not, a student-t test can be used. In this case of multivariate Gaussians, a brief search suggests using its generalization, \\\"Hotelling's two-sample t-squared statistics/test\\\". In the end, one wants to compare the distance (considering different dimensions have different correlations, the Mahalanobis distance is better) between the two means, and compare its scale to the covariance matrices of both Gaussians.\\n\\nA rougher test is see if one Gaussian's mean lies inside the confidence interval of the other Gaussian. See multivariate normal distribution's confidence interval.\\n\\nIn the case that the tests fail, one can see how much the test statistics are larger than e.g. the 95% quantile of the corresponding test distributions.\"}",
"{\"title\": \"Re: Related work\", \"comment\": \"Hello Kimin,\\n\\nThanks for pointing us to your work. We will incorporate it into our discussion of related work.\"}",
"{\"title\": \"Re: Comments on the rebuttal\", \"comment\": \"Thank you for your responses and continuing the discussion, Reviewer #3. Our replies are below.\\n\\n2. \\\"All I am asking is that the paper warns its readers of this shortcoming at the beginning of the analysis.\\\":\\n\\nFair point. We will add a sentence at the beginning of Section 5 to make explicit that these expressions are approximations. \\n\\n\\n4. We perfectly agree with your 'better description': \\\"one of the terms encourages the sensitivity....But we tried and it's not working.\\\" This is exactly what we wanted to convey in the draft, and we thought we clarified this point in our rebuttal by saying \\\"Our point is made in the context of volume term which is only one of the terms in the change-of-variable objective.\\\" We'll revise the draft to further emphasize our remarks pertain to the volume term only.\\n\\n\\n5. \\\"...making it 150 which is huge (actual value is probably smaller)\\\"\\n\\nThe difference is certainly much smaller. It would be 150 only if the histograms were perfectly separated to each end of the x-axis in Figure 6 (a) of the original draft, which is not the case at all. What metric / plot would convince you? Some statistic of the dimension-wise means?\"}",
"{\"title\": \"Comments on the rebuttal\", \"comment\": \"Thank you for your response. The extra results are promising, which makes the paper quite stronger. Other questions are addressed well. Now I am mainly focused on these three issues:\\n\\n2. Second order analysis, but only on the *sign* of the *difference* of two pdfs\\n\\nI would think that since x is an image, it would be hard to approximate a distribution with a mixture of a thousand Gaussians, let alone one Gaussian. Even if you are taking the difference of two pdfs, and taking the sign of the difference, a Gaussian would give you a hypersphere, not a large amounts of irregular-shaped blobs scattered through the image space.\\n\\nIt IS indeed inevitable that when theoretically analyzing deep networks, we have to start somewhere easy, and log-quadratic pdfs are a valid starting point. All I am asking is that the paper warns its readers of this shortcoming at the beginning of the analysis.\\n\\n4. Loss actively increasing volume term unlike prior work\\n\\nIt does seem that way, but by the same argument I can claim that any loss function function has a L2 component in it: if your loss is f(theta), then you just write f(theta) = g(theta) + |theta|_2^2, where g(theta) = f(theta) - L2. My bold claim only makes sense if in fact all terms in g(theta) collectively does not do much on the L2. Unfortunately this is not the case in this paper. \\n\\nSpecifically in this paper, the latent density term is the happiest if you make f nearly degenerate (everything maps to a tiny proximity of argmax_z{ p(z) }, for example), making the volume term nearly zero. And the volume term is needed to change this into something meaningful. The two terms strike a balance. So it is not right to claim f(x) encourages sensitivity if one term encourages it and another discourages it. -- Especially considering the experiment fixing the volume term did not make SVHN and CIFAR closer. A better way to describe this story is can be along the lines of \\\"one of the terms encourages the sensitivity (but the other discourages it), and that term makes SVHN likelihood pretty high, so one may think this is the issue. But we tried and it's not working\\\".\\n\\n5. Are SVHN and CIFAR centers close?\\n\\n*Individually*, each dimension of the means is quite close, but remember that two mean vectors are close only if *everything* is close. These are I assume 32x32=1024 feature space, so you would amplify the estimated 0.15 by 1024, making it 150 which is huge (actual value is probably smaller). Since this is used for the difference of two distributions approximated by log-quadratics, one should see the drop of the approximated density function when you move as far as to the mean of the other distribution. I am not convinced that it is small.\"}",
"{\"comment\": \"> 1. (Also AREA CHAIR NOTE): Another parallel submission to ICLR titled \\u201cGenerative Ensembles for Robust Anomaly Detection\\u201d makes similar observations and seemed to suggest that ensembling can help counter the observed CIFAR/SVHN phenomena unlike what we see in Figure 10.\\n\\nThe parallel submission called Deep Anomaly Detection with Outlier Exposure also makes the observation that SVHN examples have higher likelihood than CIFAR-10 examples, and they also propose a way to correct this behavior. This is in Section 4.4 of https://openreview.net/pdf?id=HyxCxhRcY7\\nThe results also suggest that SVHN results are one of the worst-cases for density estimators; density estimators are not as bad on many other datasets.\", \"title\": \"Density Estimation Observation Appears Elsewhere\"}",
"{\"title\": \"Re: Figure 4 d)\", \"comment\": \"No, the BPD never becomes lower for CIFAR-10 than for SVHN under any setting of the training time, optimization strategy, regularization type / strength, and model size that we tried. It depends on what you mean by \\u2019not complex enough.\\u2019 We achieve sampling and BDP numbers on par with SOTA so we don\\u2019t think that the explanation is simply to use a bigger model. In fact, the Glow model trained by the authors of \\u201cGenerative Models for Robust Anomaly Detection\\u201d (https://openreview.net/forum?id=B1e8CsRctX) is as large as Kingma & Dhariwal's (2018), and they report the same phenomenon. If by \\u2019not complex enough\\u2019 you mean that Glow could possibly be generally improved to better represent the training density, then sure, perhaps some innovation applied to Glow could make the model richer and fix the issue. We do not believe such an innovation is trivial though, given how persistent the phenomenon is across hyperparameters and when ensembling (Appendix F).\"}",
"{\"title\": \"Figure 4 d)\", \"comment\": \"From Figure 4 d), we see that, due to the inductive bias of the model, SVHN has lower bpd. \\nIf the model were trained further, would the bpd of the training set ever become lower than SVHN test? \\n\\nIf yes, then doesn't this indicate that, due to early stopping, the models are underfitting the CIFAR test set? In other words, generalizing density estimation from CIFAR training set to CIFAR test set is challenging and thus the models underfit the CIFAR test set, resulting in the simpler dataset (SVHN) having higher likelihood due to the inductive bias of the model. So possibly, given more data or a better inductive bias, this problem would go away? \\nIf no, then it seems that the model is not complex enough since it is unable to obtain a lower bpd on CIFAR train compared to SVHN. \\n\\nHave you tested this? What are your thoughts?\"}",
"{\"title\": \"Revised Draft\", \"comment\": \"We have uploaded a revised draft in which we have attempted to incorporate the reviewers\\u2019 suggestions. In particular, the new draft includes the following significant revisions:\\n\\n1. Additional Data Sets: In Section 3 we now report results for Glow trained and tested on the following data sets (in addition to CIFAR-10 vs SVHN): FashionMNIST (train) vs MNIST (test), CelebA (train) vs SVHN (test), ImageNet (train) vs CIFAR10/CIFAR100/SVHN (test). The phenomenon of interest (i.e. higher likelihood on out-of-distribution test data) is observed for all of these new pairs. Furthermore, we include the empirical means and variances of these data sets in the analysis in Section 5 and show that they agree with our original draft\\u2019s conclusions.\\n\\n2. Related Work: We discuss the \\u0160kv\\u00e1ra et al. (2018) work (and other concurrent work) in Section 6, as suggested by Reviewer #3. \\n\\n3. Equation Spacing: We fix the spacing issue mentioned by Reviewer #1.\\n\\n4. Revised Plot of Empirical Means: Reviewer #3 had doubts about to what degree the data set means overlap. We believe this doubt was due to the range of the x-axis in what was formerly Figure 6 (a)---now Figure 5 (a). We have revised the figure to have range 0-255 (normalized to 0-1) and added the additional data sets. \\n\\n5. Removal of NotMNIST results: We have removed from the main text the NotMNIST vs MNIST experiment that was reported in the original draft. However, the Appendix (most crucially Figures 8 and 13) still contains NotMNIST results and has not yet been updated with the new data sets. We will fix this in the next draft.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks again, Reviewer #3, for your thought-provoking critique. We respond to your other comments below.\\n\\n1. \\u201cIn particular, Section 4 is a series of empirical analyses, based on one dataset pair\\u2026.However, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis.\\u201d \\n\\nSee general responses #1 and #3.\\n\\n\\n2. \\u201cIt is good that Section 5 has some theoretical analysis. But I personally find it very disturbing to base it on a 2nd order approximation of a probability density function of images when modeling something as intricate as models that generate images. At least this limitation should be pointed out in the paper\\u2026.Section 5 is based on a 2nd order expansion on the $log p(x)$ given by a deep network -- I shouldn't be the judge of this, but from a realistic perspective this does not mean much.\\u201d\\n\\nSee general response #2. We emphasize that we are not trying to approximate the density function, only approximate the difference and characterize its sign. Moreover, the special structure of CV-Glow makes these derivative-based approximations better behaved and more tractable than an expansion of a generic deep neural network.\\n\\n\\n3. \\u201cSome parts of the paper feel long-winded and aimless\\u2026.In general, the paper is clear and easy to understand given enough reading time, but feels at times long-winded. Section 2 background takes too much space. Section 3 too much redundancy -- it just explains that SVHN has a higher likelihood when trained on CIFAR, and a few variations of the same experiment.\\u201d\\n\\nWe will attempt to make the writing more concise. But we believe that most, if not all, of Section 2 is necessary in order to make the paper self-contained and accessible to someone who has never before seen invertible generative models. While we are fastidious in our experimental description in Section 3, we think it is necessary since this is the foundational section of the paper.\\n\\n\\n4. \\u201cI don't think Glow necessarily is encouraged to increase sensitivity to perturbations. The bijection needs to map training images to a high-density region of the Gaussian, and that aspect would make the model think twice before making the volume term too large.\\u201d\\n\\nWe are not saying that the model will totally disregard the latent density and attempt to scale the input to very large or infinite values. Our point is made in the context of volume term which is only one of the terms in the change-of-variable objective. The log volume term in the change-of-variable objective is maximizing the very quantity (the Jacobian\\u2019s diagonal terms) that the cited work on derivative-based regularization penalties has sought to minimize. The maximization of the derivatives in the objective directly implies increased sensitivity to perturbations.\\n\\n\\n5. \\u201cFigure 6(a) [Figure 5(a) in revised draft] clearly suggests that the data mean for SVHN and CIFAR are very different, instead of similar.\\u201d\\n\\nWe are not sure how you are drawing this conclusion; perhaps from the scale of the x-axis? The histogram in Figure 6 (a) (original draft) has an x-axis covering the interval [0.4, 0.55], meaning the maximal difference between a mean in *any pair of dimensions* is 0.15. Scaling back to pixel units, 0.15 * 255 = 38.25, meaning that 38.25 pixels is the maximum difference in means. 
While this is not a difference of zero, we don\\u2019t see how you could say this \\u201cclearly suggests\\u201d that the means are \\u201cvery different.\\u201d In the latest draft, this figure---now Fig 5 (a)---has an x-axis that spans from 0-255. Hopefully the overlap in the means is now conspicuous. \\n\\n\\n6. \\u201cHowever, there are papers empirically analyzing novelty detection using generative model -- should analyze or at least cite: V\\u00edt \\u0160kv\\u00e1ra et al. Are generative deep models for novelty detection truly better? at first glance, their AUROC is never under 0.5, indicating that this phenomenon did not appear in their experiments although a lot of inlier-novelty pairs are tried.\\u201d\\n\\nThank you for pointing us to this work. We cite it in the revised draft. It looks like they test on UCI data sets of dimensionality less than 200, and therefore their results speak to a much different data regime than the one we are studying.\\n\\n\\n7. \\u201cA part of the paper's contribution (section 5 conclusion) seem to overlap with others' work. The section concludes that if the second dataset has small variances, it will get higher likelihood. But this is too similar to the cited findings on page 6 (models assign high likelihood to constant images).\\u201d\\n\\nWhile we do also analyze constant images, we believe that our results for multiple data set pairs (FashionMNIST-MNIST, CIFAR10-SVHN, CelebA-SVHN, ImageNet-CIFAR10/CIFAR100/SVHN) and for multiple deep generative models (flow-based models, VAE, PixelCNN) are novel. Our conclusions are arrived at through focused experimentation and a novel analytical expression applied to CV-Glow.\"}",
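The per-dimension statistics discussed above, and the pixel-unit scaling of the mean gap, can be checked with a short snippet; the array names and shapes here are hypothetical:

```python
import numpy as np

# Hypothetical flattened image arrays scaled to [0, 1], e.g.
# cifar.shape == (50000, 3072), svhn.shape == (73257, 3072).
def per_dim_stats(x):
    # Empirical means and variances, as plotted in Figure 5 (a).
    return x.mean(axis=0), x.var(axis=0)

def max_mean_gap(a, b):
    gap = np.abs(a.mean(axis=0) - b.mean(axis=0)).max()
    # Back in pixel-intensity units: e.g. 0.15 -> 0.15 * 255 = 38.25.
    return gap, gap * 255.0
```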
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks again, Reviewer #2, for your insightful feedback. We respond to your other comments below.\\n\\n1. \\u201cWhy investigate a component specific to just flow-based models (the volume term)? It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types.\\u201d \\n\\nSee general response #3.\\n\\n\\n2. \\u201cFor instance, the experiments seem to indicate that generalizing density estimation from CIFAR training set to CIFAR test set is likely challenging and thus the models underfit the true data distribution, resulting in the simpler dataset (SVHN) having higher likelihood.\\u201c\\n\\nWe do not believe our models are necessarily underfit. In fact, we found that Glow had a tendency to *overfit,* and that one must carefully set Glow\\u2019s l2 penalty and choose its scale parametrization (exp vs sigmoid, see Appendix D) in order to prevent it from doing so. We thought this overfitting to the training data could be a reason for the phenomenon and therefore we tuned our implementations to have reasonable generalization. \\n\\n\\n3. \\u201cIt would have been nice if this paper explored more than just MNIST vs NotMNIST and SVHN vs CIFAR10, so that the readers can gain a better feel for when generative models will be able to detect outliers. For instance, a scenario where the data statistics (pixel means and variances) are nearly equivalent for both datasets would be interesting.\\u201d\\n\\nSee general response #1 in regards to data sets and additional results. Thank you for the suggestion of looking at data sets with similar statistics. We do this, in a way, with our second order analysis and the \\u2018gray-ing\\u2019 experiment in Figure 5 (b) (formerly Figure 6 (b) in the original draft). Gray CIFAR-10 (blue dotted line) nearly overlaps with original SVHN (red solid line) in terms of their log p(x) evaluations. Figure 12 (formerly Figure 13) then shows the latent (empirical) distribution of the gray images, and we see that the gray CIFAR-10 latent variables nearly overlap with the SVHN latent variables. This is to be expected though, given the overlapping p(x) histograms, since the probability assigned by CV-Glow (in comparison to other inputs) is fully determined by the position in latent space.\\n\\n4. \\u201cThe second order analysis is good but it seems to come down to just a measure of the empirical variances of the datasets.\\u201d \\n\\nSee general response #2.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks again, Reviewer #1, for your thoughtful comments. We respond to your other comments below.\\n\\n1. \\u201cIt seems like one could detect most SVHN samples just by the virtue that there likelihoods are much higher than even the max threshold determined by the CIFAR-train histogram?\\u201d\\n\\nThis is an interesting idea, but we are not sure it is applicable. If one looks closely at Figure 2 (b), there are still blue and black histogram bars (denoting CIFAR-10 train and test instances) covering the entirety of SVHN\\u2019s support (red bars). \\n\\n\\n2. \\u201c[The constant input]\\u2019s mean (=0 trivially) is clearly different from the means of the CIFAR-10 images (Figure 6a) so the second order analysis of Section 5 doesn\\u2019t seem applicable.\\u201d\\n\\nSee general response #2.\\n\\n\\n3. \\u201cHow much of this phenomena do you think is characteristic for images specifically? Would be interesting to test anomaly detection using deep generative models trained on modalities other than images.\\u201d\\n\\nWe have not tested non-image data, since images are the primary focus of work on generative models, but this is an interesting area for future work. \\n\\n\\n4. \\u201cSamples from a CIFAR model look nothing like SVHN. This seems to call the validity of the anomalous into question. Curious what the authors have to say about this.\\u201d\\n\\nThis is a very good point. See our response to Shengyang Sun\\u2019s comment below. We see think this phenomenon has to do with concentration of measure and typical sets, but we do not yet have a rigorous explanation. \\n\\n\\n5. \\u201cThere seems to be some space crunching going on via Latex margin and spacing hacks that the authors should ideally avoid :)\\u201d\", \"we_have_fixed_the_spacing_in_the_latest_draft\": \")\"}",
"{\"title\": \"General Rebuttal (2/2)\", \"comment\": \"3. Purpose / Direction of Section 4 [R2, R3]: R2 asks \\u201cWhy investigate a component specific to just flow-based models (the volume term)? It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types.\\u201d While the phenomenon is common to multiple deep generative model classes, as Figure 3 shows, we found it very hard to analyze all three models simultaneously, on equal footing, due to their different structures and inference requirements. For instance, how can we compare VAEs and PixelCNNs while controlling for the former\\u2019s approximate inference requirements? How do we know any problems with densities / outlier detection aren\\u2019t due to a sub-optimal inference model or the variational approximation? We thought we would make more headway by restricting the analysis to invertible models since they (i) admit exact likelihood calculations and (ii) have nice analytical properties stemming from the bijection constraint. Having made this decision, we then thought the next natural step is to look at both terms in the change-of-variables objective---the density under p(z) and the volume term---to see if one of these in particular was the cause. After seeing Figure 4 (c, d) (Figure 4 (a, b) in revised draft), we thought that the volume term is the culprit, which then lead to examination of constant-volume Glow (CV-Glow) (i.e. \\u2018constant volume\\u2019 across all inputs) as described on page 6. While the volume term was a bit of a red herring, we thought the progression from {VAE, PixelCNN, NVP-Glow} \\u2192 {NVP-Glow} \\u2192 {CV-Glow} was a logical way to further examine the problem for an increasing tractable model class.\\n\\nRelatedly, R3 writes of Section 4: \\u201cSection 4 is a series of empirical analyses, based on one dataset pair\\u2026.However, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis\\u2026.Section 4 seems to lack a high-level idea of what it want to prove -- the hypothesis around the volume term is dismissed shortly after, and it ultimately proves that we do not know what is the reason behind the high SVHN likelihood, making it look like a distracting side-experiment.\\u201d The purpose of focusing on just CIFAR-10 vs SVHN in Section 4 is to drill-down and isolate why the phenomenon is happening in this one particular case. We think this is an appropriate approach, as we didn\\u2019t want to introduce too many experimental variables, as explained above. Furthermore, the presence of this phenomenon for SVHN vs CIFAR-10 alone warrants investigation since those data sets are extremely popular in the ML community. Yet, we have since added additional data sets (see general response #1) and hope the reviewer is now satisfied with this additional evidence of the phenomenon's prevalence.\"}",
"{\"title\": \"General Rebuttal (1/2)\", \"comment\": \"Thank you, reviewers, for your fair and helpful comments. We\\u2019ve provided a general response below that addresses concerns common to multiple reviewers. We\\u2019ll also respond to reviewers individually regarding issues particular to their review.\\n\\n1. Limited Number of Data Sets [R2, R3]: We have now added additional results to Section 3 (Figures 1 and 2) showing that the phenomenon (higher likelihood on non-train data) occurs for FashionMNIST (train) vs MNIST (test), CelebA (train) vs SVHN (test), ImageNet (train) vs CIFAR10/CIFAR100/SVHN (test). Furthermore, we have included these data sets into our plot of the empirical means and variances (Section 5), showing that our second-order analysis and \\u2018sitting inside of\\u2019-conclusion agrees with these additional observations. \\n\\n2. Accuracy / Generality of Second-Order Analysis [R1, R2, R3]: All reviewers bring up questions about the second-order analysis. Starting with R1, they question how Equation 5 applies / can be interpreted for constant images. To slightly correct R1\\u2019s statement, the constant image with high likelihood under the SVHN-trained model is x=128. Normalizing by the number of pixels, i.e. 128/265=0.5, places this constant image almost in the exact center of the means plot in Figure 6 (a)---thus, the second-order analysis does apply. Then turning to Equation 5 and plugging in the variance Var[\\\\delta(128)]=0, we have:\\n\\nE_q [log p(x)] - E_p* [log p(x)] \\\\approx \\u00bd * (negative number for CV-Glow) * (0 - Sigma_p*) >= 0.\\n\\nHence the second-order analysis still holds for the delta function located at 128 and agrees with the empirical result. We will add this derivation to the appendix. \\n\\nMoving on to R2, they state that the second-order analysis reduces to \\u201cjust a measure of the empirical variances of the datasets.\\u201d This is true and was done so purposefully. CV-Glow is the only generative model that we know of that (i) has high-capacity and (ii) is amenable to the second-order analysis. For all other models mentioned (VAE, PixelCNN, NVP-Glow), the second-order equation depends on the second derivatives of the neural network w.r.t its input. It\\u2019s hard, if not impossible, to say anything general about how these second derivatives behave across the input space, let alone across re-fittings of the model. CV-Glow uniquely has second derivatives that simplify to a function of (i) the log-convexity of the latent distribution and (ii) the square of the 1x1 convolutional kernel\\u2019s parameters. Since both of these terms have a constant sign, the interesting part of the equation does indeed boil down to \\u201ca measure of the empirical variances of the datasets.\\u201d The complications introduced by the model have been taken out and what\\u2019s left is a function of the data statistics, which does allow for some general conclusions. We will try to clarify this reasoning / motivation in the paper, as space permits. 
Furthermore, the fact that our second-order analysis led us to, and agrees with, the additional experiments (see general response #1) and the gray-ing attack (Figure 5 (b), formerly Figure 6 (b) in the original draft) is further evidence of its validity.\\n\\nLastly, we address R3\\u2019s comments that they \\u201cfind it very disturbing to base [analysis] on a 2nd order approximation of a probability density function.\\u201d We agree that trying to approximate a neural-network-based density with only a second-order representation is a tall order. But this is not precisely what we are doing. Rather, we are approximating *the difference* in density functions, and therefore we only care about *the sign* of the expression. We believe the second-order expression is a useful representation for this. Moreover, if we assume the data distributions have no cross-moments, then from Equation 11 we notice that the diagonal derivatives are zero beyond second order, thus making the second-order expansion exact. For these two reasons, we don\\u2019t believe our approximation is \\u201cdisturbing.\\u201d And since we are working with deep generative models, any analytical statements will require rather strong assumptions.\"}",
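In display form, the expression used above reads as follows (a restatement in assumed notation, where c stands for the sign-definite CV-Glow curvature factor and Sigma for per-dimension data variances):

```latex
\mathbb{E}_{q}[\log p(x)] - \mathbb{E}_{p^{*}}[\log p(x)]
  \;\approx\; \tfrac{1}{2}\, c \left( \Sigma_{q} - \Sigma_{p^{*}} \right),
  \qquad c \le 0 \ \text{for CV-Glow}.
% For the constant image x = 128: \Sigma_{q} = \mathrm{Var}[\delta(128)] = 0,
% so the gap is -\tfrac{1}{2}\, c\, \Sigma_{p^{*}} \ge 0, matching the claim.
```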
"{\"title\": \"Re: Image Samples\", \"comment\": \"Thank you for your comment, Shengyang. This is a good point and something we were a bit puzzled by as well. Our current hypothesis is that the SVHN samples do not fall within the model\\u2019s typical set. To elaborate, in high dimensions samples at or very near to the mode are unlikely. See the high-dimensional Gaussian example discussed here: https://www.inference.vc/high-dimensional-gaussian-distributions-are-soap-bubble/ While you are correct in that the variances in data space are not drastically different, the variances of each data set\\u2019s latent variables (Figure 12, top column, middle) are well separated, with SVHN\\u2019s variance being much smaller. Thus the distribution in latent space may be a better way to characterize the model\\u2019s typical set as samples are first drawn in latent space and then passed to the inverse function.\"}",
"{\"title\": \"Re: Measurement and distribution\", \"comment\": \"Thanks for your questions, comments, and compliments. As for considering other divergences / discrepancies, indeed using these for either parameter estimation or evaluation could lead to different results. It is an area of future work. Given the prevalence of fitting models via maximum likelihood (KLD[p_empirical || p_model]), we thought reporting the result for just this divergence a worthy contribution.\\n\\nAs for your second question, we\\u2019re not certain we completely understand your point. Can you clarify a bit more, please? A perceived mismatch between distance in pixel space vs semantic space may be due to natural images having a common global structure. The models then extract mostly the shared structure and not the details that we visually cue upon.\"}",
"{\"title\": \"Interesting work and analysis\", \"review\": \"I really enjoyed reading the paper! The exposition is clear with interesting observations, and most importantly, the authors walk the extra mile in doing a theoretical analysis of the observed phenomena.\", \"questions_for_the_authors\": \"1. (Also AREA CHAIR NOTE): Another parallel submission to ICLR titled \\u201cGenerative Ensembles for Robust Anomaly Detection\\u201d makes similar observations and seemed to suggest that ensembling can help counter the observed CIFAR/SVHN phenomena unlike what we see in Figure 10. Their criteria also accounts for the variance in model log-likelihoods and is hence slightly different.\\n2. Even though Figure 2b shows that SVHN test likelihoods are higher than CIFAR test likelihoods, the overlap in the histograms of CIFAR-train and CIFAR-test is much higher than the overlap in CIFAR-train and SVHN-test. If we define both maximum and minimum thresholds based on the CIFAR-train histogram, it seems like one could detect most SVHN samples just by the virtue that there likelihoods are much higher than even the max threshold determined by the CIFAR-train histogram?\\n3. Why does the constant image (all zeros) in Figure 9 (appendix) have such a high likelihood? It\\u2019s mean (=0 trivially) is clearly different from the means of the CIFAR-10 images (Figure 6a) so the second order analysis of Section 5 doesn\\u2019t seem applicable.\\n4. How much of this phenomena do you think is characteristic for images specifically? Would be interesting to test anomaly detection using deep generative models trained on modalities other than images.\\n5. One of the anonymous comments on OpenReview is very interesting: samples from a CIFAR model look nothing like SVHN. This seems to call the validity of the anomalous into question. Curious what the authors have to say about this.\", \"minor_nitpick\": \"There seems to be some space crunching going on via Latex margin and spacing hacks that the authors should ideally avoid :)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting example of density modelling shortcoming\", \"review\": \"This paper displays an occurrence of density models assigning higher likelihood to out-of-distribution inputs compared to the training distribution. Specifically, density models trained on CIFAR10 have higher likelihood on SVHN than CIFAR10. This is an interesting observation because the prevailing assumption is that density models can distinguish inliers from outliers. However, this phenomenon is not encountered when comparing MNIST and NotMNIST. The SVHN/CIFAR10 phenomenon has also been shown in concurrent work [1].\\n\\nGiven that you observed that SVHN has higher likelihood on all three model types (PixelCNN, VAE, Glow), why investigate a component specific to just flow-based models (the volume term)? It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types. For instance, the experiments seem to indicate that generalizing density estimation from CIFAR training set to CIFAR test set is likely challenging and thus the models underfit the true data distribution, resulting in the simpler dataset (SVHN) having higher likelihood. \\n\\nGiven the title of the paper, it would have been nice if this paper explored more than just MNIST vs NotMNIST and SVHN vs CIFAR10, so that the readers can gain a better feel for when generative models will be able to detect outliers. For instance, a scenario where the data statistics (pixel means and variances) are nearly equivalent for both datasets would be interesting. The second order analysis is good but it seems to come down to just a measure of the empirical variances of the datasets. \\n\\nThis paper is well written. I think the presentation of this density modelling shortcoming is a good contribution but leaves a bit to be desired. \\n\\n[1] Choi, H. and Jang, E. Generative Ensembles for Robust Anomaly Detection. https://arxiv.org/abs/1810.01392\", \"pros\": [\"Interesting observation of density modelling shortcoming\", \"Clear presentation\"], \"cons\": [\"Lack of a strong explanation for the results or a solution to the problem\", \"Lack of an extensive exploration of datasets\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very interesting finding; insufficient empirical analysis, theory with approximations too bold\", \"review\": [\"Pros:\", \"The finding that SVHN has larger likelihood than CIFAR according to networks is interesting.\", \"The empirical and theoretical analyses are clear, seem thorough, and make sense.\", \"Section 5 can provide some insight when the model is too rigid and too log-concave (e.g. Gaussian).\"], \"cons\": [\"The premises of the analyses are not very convincing, limiting the significance of the paper.\", \"In particular, Section 4 is a series of empirical analyses, based on one dataset pair. In 3/4 of the pairs the author tried, this phenomenon is not there. Whether the findings generalize to other situations where the phenomenon appears is uncertain.\", \"It is good that Section 5 has some theoretical analysis. But I personally find it very disturbing to base it on a 2nd order approximation of a probability density function of images when modeling something as intricate as models that generate images. At least this limitation should be pointed out in the paper.\", \"Some parts of the paper feel long-winded and aimless.\", \"[Quality]\", \"See above pros and cons.\"], \"a_few_less_important_disagreement_i_have_with_the_paper\": \"- I don't think Glow necessarily is encouraged to increase sensitivity to perturbations. The bijection needs to map training images to a high-density region of the Gaussian, and that aspect would make the model think twice before making the volume term too large.\\n- Figure 6(a) clearly suggests that the data mean for SVHN and CIFAR are very different, instead of similar.\\n\\n[Clarity]\\nIn general, the paper is clear and easy to understand given enough reading time, but feels at times long-winded.\\nSection 2 background takes too much space.\\nSection 3 too much redundancy -- it just explains that SVHN has a higher likelihood when trained on CIFAR, and a few variations of the same experiment.\\nSection 4 seems to lack a high-level idea of what it want to prove -- the hypothesis around the volume term is dismissed shortly after, and it ultimately proves that we do not know what is the reason behind the high SVHN likelihood, making it look like a distracting side-experiment.\", \"a_few_editorial_issues\": \"- On page 4 footnote 2, as far as I know the paper did not define BPD.\\n- There are two lines of text between Fig. 4 and Fig. 5, which is confusing.\\n\\n[Originality]\\nI am not an expert in this specific field (analyzing generative models), but I believe this analysis is novel.\\nHowever, there are papers empirically analyzing novelty detection using generative model -- should analyze or at least cite:\\n V\\u00edt \\u0160kv\\u00e1ra et al. Are generative deep models for novelty detection truly better? \\n ^ at first glance, their AUROC is never under 0.5, indicating that this phenomenon did not appear in their experiments although a lot of inlier-novelty pairs are tried.\\nA part of the paper's contribution (section 5 conclusion) seem to overlap with others' work. The section concludes that if the second dataset has small variances, it will get higher likelihood. 
But this is too similar to the cited findings on page 6 (models assign high likelihood to constant images).\\n\\n[Significance] \\nThe paper has a very interesting finding; pointing out and in-depth analysis of negative results should benefit the community greatly.\\nHowever, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis. According to the conclusions of the paper, such dataset pairs should be easy to find -- just find a dataset that \\\"lies within\\\" another. Did you try e.g. CIFAR-100 train and CIFAR-10 test?\\nSection 5 is based on a 2nd order expansion on the $log p(x)$ given by a deep network -- I shouldn't be the judge of this, but from a realistic perspective this does not mean much.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"Thank you for this interesting work.\\n\\nIt is astonishing that a well-trained CIFAR10 model assigns larger log-likelihood to the SVHN datasets. \\n\\nWhat confuses me is that why the samples from such models won't generate SVHN-like images. According to your derivation, the SVHN variances is only marginally smaller than CIFAR10 variances, therefore it is probably not due to that SVHN-like figures live in a much smaller subspace that are unlikely to sample from.\", \"title\": \"Image Samples\"}",
"{\"comment\": \"Thanks very much for the excellent work. It is very interesting to see the distribution from this perspectives. I took a look on the paper Theis2016, it seems besides BPD, KLD, MMD, JSD are considered, is it possible that CIFAR10 and SVHN can be different based on these three measurement?\\n\\nThis also reminds me of domain shift problem, which aims to align p(x,y), can I understand in this way that although in data space, CIFAR and SVHN are similar (in term of the BPD number), however, in semantic level (y), they are still large gap between this two?\\n\\nThanks again for the excellent work~~\", \"title\": \"Measurement and distribution\"}"
]
} |
|
ryewE3R5YX | Characterizing Attacks on Deep Reinforcement Learning | [
"Chaowei Xiao",
"Xinlei Pan",
"Warren He",
"Bo Li",
"Jian Peng",
"Mingjie Sun",
"Jinfeng Yi",
"Mingyan Liu",
"Dawn Song."
] | Deep Reinforcement Learning (DRL) has achieved great success in various applications, such as playing computer games and controlling robotic manipulation. However, recent studies show that machine learning models are vulnerable to adversarial examples, which are carefully crafted instances that aim to mislead learning models into making arbitrarily incorrect predictions, raising severe security concerns. DRL has been attacked by adding perturbations to each observed frame. However, such observation based attacks are not quite realistic considering that it would be hard for adversaries to directly manipulate pixel values in practice. Therefore, we propose to understand the vulnerabilities of DRL from various perspectives and provide a thorough taxonomy of adversarial perturbations against DRL, and we conduct the first experiments on unexplored parts of this taxonomy. In addition to current observation based attacks against DRL, we propose attacks based on the actions and environment dynamics. Among these experiments, we introduce a novel sequence-based attack to attack a sequence of frames for real-time scenarios such as autonomous driving, and the first targeted attack that perturbs environment dynamics to make the agent fail in a specific way. We show empirically that our sequence-based attack can generate effective perturbations in a blackbox setting in real time with a small number of queries, independent of episode length. We conduct extensive experiments to compare the effectiveness of different attacks with several baseline attack methods in several game playing, robotics control, and autonomous driving environments. | [
"attacks",
"drl",
"deep reinforcement",
"taxonomy",
"attack",
"deep reinforcement learning",
"great success",
"various applications",
"computer games",
"robotic manipulation"
] | https://openreview.net/pdf?id=ryewE3R5YX | https://openreview.net/forum?id=ryewE3R5YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJle9p1HlN",
"BklYLpkBl4",
"HJgJC2JBlE",
"r1lbyB8feV",
"SkeBrJ4oyN",
"Byl9vVV5yE",
"SkxGwtoYJ4",
"HyxZTlh_k4",
"B1xIHog5RQ",
"HkxZ4jxqC7",
"Bklh-sx9AQ",
"B1g-xog5AX",
"Bkg3wUSThQ",
"rkeB4Y6hhX",
"BJe9EVVYnX"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545039239666,
1545039184551,
1545039047314,
1544869080958,
1544400701202,
1544336482342,
1544300889731,
1544237241405,
1543273278236,
1543273256740,
1543273220172,
1543273193361,
1541391972023,
1541359916590,
1541125170118
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1460/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for the further comments!\", \"comment\": \"Thanks for the feedback! We have made the taxonomy to be specific to RL, namely, what kind of attacks may exist for specific components of MDP in RL environments. Traditionally, this attack only applies to neural network\\u2019s input observations. In the torcs environment, the road is relatively broader than the enduro environment, which makes it harder to attack since the vehicle would always have some space to allow some attacks and still remain stable. We also updated the paper here:\", \"https\": \"//drive.google.com/file/d/1pVSpI-q_vAOwaDS_Qz7enrPsVpdlohRp/view?usp=sharing\"}",
"{\"title\": \"Thanks for the further comments!\", \"comment\": \"Thanks for the feedback! We tried to improve the corollary further using mathematical notation instead of informal prose. The improved version can be found in the updated paper here: https://drive.google.com/file/d/1pVSpI-q_vAOwaDS_Qz7enrPsVpdlohRp/view?usp=sharing\\n\\nBasically, the first corollary is to prove that our SFD method is more efficient in terms of estimating nontrivial gradients with absolute value no less than threshold theta, and it also comes up with a conclusion in terms of the truncation error upper bound our SFD can make using finite difference method and disregarding small gradients. Corollary 2 describes the conclusion that using our optimal frame based selection, we can achieve more efficient attack by attacking frames with larger variance of Q value than attacking frames with smaller variance of Q value.\", \"discussion_of_the_frequency_of_the_crashes_or_other_metrics_relevant_to_the_task\": \"we tried to decompose the torcs reward into two parts, one is progress reward, namely, how many distance the vehicle traveled, the another one is the reward related with crash. We include the figure in the updated paper\\u2019s appendix. This figure shows the individual steps\\u2019 reward where the positive and negative rewards with absolute value between 0 and 2 are regular progress reward, and the -2.5 and -3.0 rewards correspond to catastrophe reward, we can see from this figure that our method is able to achieve significant catastrophe attack effects and the potential risk is severe.\"}",
"{\"title\": \"Thanks for the further comments!\", \"comment\": \"Since this paper is more about attack instead of defense, due to the page limit, we only include a brief discussion about how to make rl more secure with our proposed attack methods. In terms of the ordering of priority for defense with respect to the risk caused by the attacks and the likelihood of the attacks, environment dynamics attack is easier to deploy than other attack methods, since it does not require us to make digital amendment of the observations or control signal given by the policy, it should be addressed first. Then the observation based attack and action based attack should be addressed, though deploying these attacks require some access to the policy networks\\u2019 software system. We added a new section in the paper section 6 about this. The new paper can be accessed here: https://drive.google.com/file/d/1pVSpI-q_vAOwaDS_Qz7enrPsVpdlohRp/view?usp=sharing\"}",
"{\"metareview\": \"The authors have delivered an extensive examination of deep RL attacks, placing them within a taxonomy, proposing new attacks, and giving empirical evidence to compare the effectiveness of the attacks. The reviewers and AC appreciate the broad effort, comprising 14 different attacks, and the well-written taxonomic discussion. However, the reviewers were concerned that the paper had significant problems with clarity of technical presentation and that the attacks were not well grounded in any sort of real world scenario. Although the authors addressed many concerns with their revision and rebuttal, the reviewers were not convinced. The AC believes that R1 ought to have increased their score given their comments and the resulting rebuttal, but the paper remains a borderline reject even with a corrected R1 score.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Thanks for the revision.\", \"comment\": \"After reading the revised manuscript, some of my concerns, especially in Q2, are resolved and the quality is improved. On the other hand, in this type of research, I think it is important to give ordering of priority for defense with respect to the risk caused by the attacks and likelihood of the attacks, which will give a strong message for the community of RL. The discussion for each attack is still made individually and need more discussions to learn what we should do to make RL secure. For this reason, I'd like to retain the evaluation score.\"}",
"{\"title\": \"thanks for the revision\", \"comment\": \"I thank the authors for taking time to prepare a revision of the paper.\\n\\nSection 3 is indeed greatly improved and helps me better understand the significance of the various technical contributions presented.\\n\\nHowever, sections 4 and 5 are still difficult to parse and the significance of the technical and experimental results presented are still unclear. For example, in section 4, corollaries 1 and 2 are mathematical statements but are stated in informal language, making it difficult to understand the precise statements being made. \\n\\nIn section 5, several attack algorithms are compared in terms of cumulative and epsiodic reward without explaining the significance of the reduction in reward in terms of the task being solved. For example, on Torcs, I would prefer to have seen a discussion of the frequency of crashes or other metrics relevant to the task at hand. Otherwise, the significance of the performance improvement in the attacks is difficult to evaluate.\"}",
"{\"title\": \"re\", \"comment\": \"Thank you for taking the time to write a response. I've increased my score by one to take into account changes described in the response. However, I would recommend making the taxonomy more specific to RL given that similar taxonomies were previously proposed in the context of classification. It would also help to have a better explanation for the difference between TORCS and Enduro: typically more complex problems have been found easier to attack (relatively---in terms of the perturbation magnitude), explaining why that is not the case here would be valuable.\"}",
"{\"title\": \"Reply to related work\", \"comment\": \"Hi, thanks for pointing out this related work. We will cite it in our final version.\"}",
"{\"title\": \"To reviewer 3\", \"comment\": \"Thanks for the useful feedback for our paper!\", \"q\": \"In Table 2, how should the L2 distance be interpreted?\", \"a\": \"In table 2, the L2 distance is a measure of the difference between the achieved state and the target state. We\\u2019ve clarified this in our revision. The measure is ad-hoc, but we also include a graphical example of an achieved state in figure 4. The adversary is successful if the target attack has qualitatively been achieved.\"}",
"{\"title\": \"To reviewer 2\", \"comment\": \"We thank the reviewer for useful feedback.\", \"q\": \"The work is premature and will need to be redone once more robust agents are available in practical RL settings\", \"a\": \"Recently, deep RL has been deployed in the real world, such as RL based computer games (AlphaGo), RL based robots, etc. Therefore, it\\u2019s very important to develop attack method to evaluate the robustness of the RL policies to ensure they can be safely deployed. However, current methods based on transferability still haves suboptimal performance and have strong assumptions about the knowledge of the victim policy, which may not be realistic in real world scenarios. Therefore, we proposed more novel and efficient black-box attack methods.\"}",
"{\"title\": \"To reviewer 1\", \"comment\": \"We thank the reviewer for useful feedback.\", \"q\": \"It\\u2019s important for security analysis to learn about the worst case. This work does not give a deep insight into what we should do to make RL secure.\", \"a\": \"We agree that attack algorithm development should give insight about the worst case and insight about how to make RL secure. These proposed attack methods can be integrated into the training of RL policies to improve the policy\\u2019s robustness. For example, environment dynamics attack method can be used to perturb the training environment so as to find some challenging environments or find ways to cause catastrophe, and help to explore the worst case. These methods can also provide evaluation benchmarks to measure RL policies\\u2019 robustness once they are full trained. We now add a paragraph in section 6 discussing the connection of our work with robust RL training.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"We thank the reviewers for their valuable comments and suggestions. Based on the reviews, we made the following update to our revision:\\n1. Since we analyzed several attacks (14) corresponding to different types based on our taxonomy and it is a bit hard to illustrate each of them, we reorganized most parts of our paper in the revision to make the taxonomy clearer. We also emphasized our main contributions in section 1, simplified the description of previous proposed attack and put more words on our contributions. \\n2. We made further categorization about white-box and black-box attack based on the detailed knowledge the attacker has about the victim policy, and updated table 1.\\n3. We added discussion about the connection between the proposed attacks against DRL and real-world scenarios in section 3, and we also added discussion and potential directions about how such vulnerability analysis on RL can help to build robust RL systems in section 6. We also discussed the settings in which each attack method is applicable when we introduce them. \\n4. We discussed in greater detail the proposed black-box attack algorithms\\u2019 properties and applicability of the proposed black-box attack to victim models of limited knowledge in section 4.1.2, and emphasize our contribution on the proposed method and made the corresponding analysis clearer in corollary 1 and corollary 2.\\n5. We added more results analysis, specifically, how different methods compare to each other in terms of their attack efficiency and performance and in terms of the difference of the knowledge required about the victim policy in section 5.2.\"}",
"{\"title\": \"Connections of each attack setting to a specific threat scenario should be discussed.\", \"review\": \"The attack methods are clearly and extensively described and the paper is well organized overall. Some of the attacks are a straightforward variation of known attacks. Strong original contributions are not found in this work while I do not think lack of original contributions is a minus for this type of paper. One concern is that the connections of each attack setting to a specific threat scenario in the real world are not discussed in this paper. The authors display 14 types of attacks under various settings. Which attack is likely to be performed by what kind of adversaries in what situation? For this type of security research, contribution becomes weak without a connection to a threat in the real world. Suppose attack scenario A destroys a policy network more seriously than attack scenario B. Even in such a situation, a rescue for attack scenario A might not be needed if attack scenario A is not realistic at all. Even if connections to threats in the real world is not clear, it would be important for security analysis to learn about the worst case. Unfortunately, this work simply exhibits a catalogue of attacks against RL and does not give a deep insight into what we should do to make RL secure.\\n\\nThe summarization of the attack scenarios against RL is high quality and the results shown in this paper would be useful for many researchers. I expect authors to give more discussions on connections to the real world.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting paper, unsure of experimental validation\", \"review\": \"The authors design a new taxonomy of attacks on deep RL agents - they developed three classes of attacks - attacks the modify the observation given to the agent, attacks that modify the action used by the agent and attacks that change the dynamics of the environment. In settings that have been studied previously, the authors show that they can find attacks more effectively than previous approaches can. They also study learning based and online attack generation approaches, that can be effectively used to quickly find adversarial attacks on the agent. The authors validate their approaches experimentally on Mujoco tasks and the TORCS driving simulator.\", \"quality\": \"I found the paper's contributions difficult to understand - the significance of the three classes of attacks is not properly explained (in particular, I found the action perturbation to be difficult to justify in a real world setting). Further, the difficulty of generating attacks in each of these classes and the need for new algorithms is not explained properly. The need for effective ways to quickly generate adversarial attacks in RL is clear, but the authors' experiments don't seem to clarify that their proposed aproaches achieve this goal.\", \"clarity\": \"The organization of section 4 makes the paper difficult to read - I would separate the taxonomy from the contribution of novel ways of generating adversarial attacks (the latter, imo, is the more significant contribution).\", \"originality\": \"To the best of my knowledge, the authors propose novel kinds of attacks as well as novel attack algorithms on RL agent.\", \"significance\": \"The problem considered is certainly significant. Despite the successes achieved by DeepRL, their robustness (in terms of distribution shifts, adversarial noise, model errors etc.) is of great importance when considering deploying these models. However, the presentation and experiments leave me unconvinced that the presented approaches are a significant step ahead in attack generation (particularly in ways to generate attacks that can efficiently be incorporated into adversarial training of RL agents).\\n\\nCons\\n1. Unclear presentation of technical contributions, experimental results do not support the key contributions of faster attack generation\\n2. I am also unconvinced of the relevance of blackbox attack algorithms given the nascent stage of deepRL - since these agents are just being developed and their abilities need to improve significantly before they become deployable (and blackbox adversarial attacks are a real concern), I feel this work is premature and will need to be redone once more capable/robust agents can be trained for practical RL settings\\n\\n###\\nIn light of the revision, I have revised my score given the rewriting of section 3 that addresses the second con I raised above. However, due to the lack of clarity in presentation of the technical results in section 4 and the experiments in section 5, I feel that the paper still require improvement before it can be accepted.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"This submission sets out to taxonomize evasion-time attacks against deep RL and introduce several new attacks, including two heuristics for efficient evasion-time attacks and attacks that target the environment dynamics and RL system\\u2019s actions. The main limitation of this paper is probably its broad scope, which unfortunately prevents it in its current form from addressing each of the goals stated in the introduction systematically to draw conclusive takeaways.\\n\\nTaxonomizing the space of adversaries targeting deep RL at test time is a valuable contribution. While the existing taxonomy is a good start, it would be useful if you can clarify the following points in your rebuttal. Why were the \\u201cfurther categorization\\u201d items separated from adversarial capabilities? Being constrained to real-time or physical perturbations appears to be another way to describe the adversary\\u2019s capabilities. In addition, is there a finer-grained way to characterize the adversary\\u2019s knowledge beyond white-box vs. black-box? This binary perspective is common but not very informative. One way to move forward would be for instance to think about the different components of a RL system, and identify those that are relevant to have knowledge of when adversaries are mounting attacks. It would also be helpful to position prior work in the taxonomy. Finally, the taxonomy currently stated in the submissions is more a taxonomy of attacks (or adversaries) than a taxonomy of vulnerabilities, so the title of Section 3 could perhaps be updated accordingly. \\n\\nSection 4.1 gives a good overview of different attack strategies against RL based on modifying the observations analyzed by the agent. Many of these attacks are applications of known attack strategies and will be familiar to readers with adversarial ML background (albeit some of these strategies were previously introduced and evaluated against \\u201csupervised\\u201d classifiers only). One point was unclear however: why is the imitation learning based black-box attack not a transferability-based attack? As far as I could understand, the strategy described corresponds exactly to the commonly adopted strategy of transferring adversarial examples found on a substitute model (see for instance \\u201cIntriguing properties of neural networks\\u201d by Szegedy et al. and \\u201cPractical Black-Box Attacks against Machine Learning\\u201d by Papernot et al.). In other words, Section 4.1 could be rescoped to put emphasis on the attack strategies that have not been explored previously in the context of reinforcement learning: e.g., the finite difference approach with adaptive sampling or the universal attack with optimal selection of initial frames. It is unfortunate that the treatment of these two attacks is currently deferred to the appendix as they make the paper more informative. Similarly, Sections 4.2 and 4.3 would benefit from being extended to put forward the new attack threat model considered in these two sections. \\n\\nWhile the introduction claimed to make a systematic evaluation of attacks against RL, the presentation of the experimental section can be improved to ensure the analysis points out the relevant takeaways. For instance, it is unclear what the differences are between results on TORCS and other tasks included in the Appendix. Specifically, results on Enduro do not seem as conclusive as those presented on TORCS. Do you have some intuition as to why that is the case? 
In Figure 7, it appears that a large number of frames need to be manipulated before a drop on cumulative reward is noticeable. Previous efforts manipulated single frames only, could you stress why the setting is different here? Throughout the section, many Figures are small and it is difficult to infer whether the difference between the white-box and black-box variants of an attack is significant or not. Could you analyze this in more details in the text? In Table 2, how should the L2 distance be interpreted? In other words, when is the adversary successful? \\n\\nIf you can clarify any of the points made above in your rebuttal, I am of course open to revise my review.\", \"editorial_details\": \"Figures are not readable when printed. \\nFigure 5 is improperly referenced in the main body of the paper.\", \"figure_7\": \"label is incorrect for Torcs and Hopper (top of figure)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJxvEh0cFQ | K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning | [
"Pramod Kaushik Mudrakarta",
"Mark Sandler",
"Andrey Zhmoginov",
"Andrew Howard"
] | We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks. The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g. converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of parameters of the SSD feature extractor). Similarly, we show that re-learning existing low-parameter layers (such as depth-wise convolutions) while keeping the rest of the network frozen also improves transfer-learning accuracy significantly. Our approach allows both simultaneous (multi-task) as well as sequential transfer learning. In several multi-task learning problems, despite using much fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
| [
"deep learning",
"mobile",
"transfer learning",
"multi-task learning",
"computer vision",
"small models",
"imagenet",
"inception",
"batch normalization"
] | https://openreview.net/pdf?id=BJxvEh0cFQ | https://openreview.net/forum?id=BJxvEh0cFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1ll1oYQl4",
"S1eTI1PuAX",
"BJgEyp8gAQ",
"SJeD6ZMT6Q",
"B1lPdZz6a7",
"H1lFC1M6p7",
"BygmSOC2hm",
"SJgssRVq3X",
"S1e7BbGqj7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544948439524,
1543167829300,
1542642907799,
1542427071202,
1542426991146,
1542426577396,
1541363770943,
1541193379353,
1540133178746
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1458/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1458/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1458/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1458/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1458/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1458/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers largely agree that the proposed method for finetuning the deep neural networks is interesting and empirical results clearly show the benefits over finetuning only the last layer. I recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Simple and effective parameter efficient method for finetuning\"}",
"{\"title\": \"Response to Authors\", \"comment\": \"Thanks to the authors for their reply. I am satisfied with the current state of the paper and tend to keep my score.\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Several changes have been made to my comments, thanks for pointing out the mistakes.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank AnonReviewer1 for the review. Below are our responses inline.\\n\\n>> * explain the choice of the hyper-parameters of RMSProp (paragraph under Table 1).\\n\\nThe hyper-parameters are the same as those in the standard setup for MobilenetV2 or InceptionV3. We have added a line in the experiments section mentioning this.\\n\\n>> * fix Figure 3, it's impossible to read in the paper-printed version\\n\\nThe four subfigures are now split into two rows and are now hopefully easily readable. \\n\\n>> * explain how the average number of parameters per model in computed in Tables 4 and 5. E.g. 700K params/model in the first column of Table 4 is misleading - I suppose the shared parameters are not taken into account. The same holds for 0 in the second column, etc.\\n\\nThank you for pointing this out. We had mistakenly only counted the non-shared parameters in the models, and forgot to include the last layer parameters in the second column. This has now been corrected to simply the total number of parameters trained. \\n\\n>> * add a proper discussion for domain adaptation part. The simple \\\"The results are shown in Table 5\\\" is not enough. \\n\\nDone. \\n\\n>> * consider leaving the discussion of cost-efficient model cascades out. The presented details are too condensed and do not add value to the paper.\\n\\nMakes sense. We moved these results to the appendix to be included in the full version.\\n\\n>> * explain how different resolutions are managed by the same model in the domain adaptation experiments.\\n\\nWe added a line in the paper stating the images are brought to the right resolution using bilinear interpolation before passing as input to each model.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank AnonReviewer2 for the review. Below is our detailed response.\\n\\n>> 1. Only MobilenetV2 and InceptionV3 are evaluated as classification model, while the residual connection based models such as ResNet, DenseNet are not included. Would it be very different regarding the conclusion of this paper?\\n\\nWe experimented extensively with multiple tasks (classification, detection, multi-task learning) and datasets instead of trying more models for the same task, as we intended to test the effectiveness of our method in various situations. Further, MobileNetV2 has residual connections, which encouraged us to believe that the results on other residual connection based models would be similar. \\n\\nWe ran experiments with ResNet and got similar results. For instance, transfer learning accuracy from ImageNet to Cars goes up from 61.4% (last layer fine-tuning) to 73% (S/B patch + last layer fine-tuning). From ImageNet to Aircraft, accuracy goes up from 51.8% (last layer) to 62.5% (S/B patch + last layer). In the interest of space, we did not think it added much to the experimental section of the paper.\\n\\n>> 2. It seems that the only effective manner is by fine tuning the parameters of both batch normalisation related and lasts layer, while fine tuning last layer seems to be having the main impact on the final result. In Table 4, authors do not even provide the results fine tuning last layer only.\\n\\nFine-tuning the last layer is not always required. For instance, in domain adaptation (Sec 5.4), the model patch consists of only the batch normalization parameters, and the resulting accuracies match or exceed those of individually trained models. \\n\\nFrom Figure 3 and Table 4, we see that fine-tuning scales, biases (S/B) and depthwise (DW) along with last layer causes an average 50% relative improvement in accuracy over fine-tuning only the last layer while being only a small (4%) increase in terms of number of parameters over the last layer.\\n\\nWhen performing multi-task or transfer learning across different tasks (e.g. ImageNet \\u2192 Places365), it becomes necessary to have different last layers as the output spaces are different. In Table 4, the second column corresponds to the case where only the last layer is separate for each task. We apologize if this was not clear - we have now updated Table 4 headers to explicitly reflect this fact. \\n \\n>> 3. The organisation of the paper and the order of illustration is a bit confusing.\\n\\nWe will be happy to modify the paper if the reviewer elaborates on this point.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank AnonReviewer3 for the review. Below are our responses to specific comments.\\n\\n>> 1. The memory benefit is obvious, it would be interesting to know the training speed compared to fine-tuning methods (both the last layer and the entire network)?\\n\\nGenerally, we did not see a large variation in training speed on the datasets that we tried. All fine-tuning approaches needed 50-200K steps depending on the learning rate and the training method. While different approaches definitely differ in the number of steps necessary for convergence, we find these changes to be comparable to changes in other hyper-parameters such as learning rate, and generally not providing a clear signal worth articulating in the paper. \\n\\n>> 2. It seems that DW patch has limited effects compared to S/B patch. It would be nice to have some analysis of this aspect.\\n\\nYes, DW patch seems to be less powerful than S/B patch. Generally, DW patch resulted in about 5-10% percentage points lower accuracy than the S/B patch while having comparable number of parameters. However, it does add a lot of value when used in conjunction with S/B patch. For instance, from the top two figures in Figure 3, we see that fine-tuning the combination of DW and S/B patches (4% of the network parameters) closes the accuracy gap between S/B patch (1% of the network parameters) and fine-tuning the last layer (37% of the network parameters). \\n\\nIf the reviewer thinks that adding the performance of DW only patch would be a useful addition to Figure 3, we are happy to do that. We had excluded it in the interest of not crowding the plots.\"}",
"{\"title\": \"Interesting results on transfer learning\", \"review\": \"The authors proposed an interesting method for parameter-efficient transfer learning and multi-task learning. The authors show that in transfer learning fine-tuning the last layer plus BN layers significantly improve the performance of only fine-tuning the last layer. The results are surprisingly good and the authors also did analysis on the relationship between embedding space and biases.\\n\\n1. The memory benefit is obvious, it would be interesting to know the training speed compared to fine-tuning methods (both the last layer and the entire network)?\\n2. It seems that DW patch has limited effects compared to S/B patch. It would be nice to have some analysis of this aspect.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Inspiring thought, though lack of sufficient proofs\", \"review\": \"This paper explored the means of tuning the neural network models using less parameters. The authors evaluated the case where only the batch normalisation related parameters are fine tuned, along with the last layer, would generate competitive classification results, while using very few parameters comparing with fine tuning the whole network model. However, several questions are raised concerning the experiment design and analysis:\\n1. Only MobilenetV2 and InceptionV3 are evaluated as classification model, while other mainstream models such as ResNet, DenseNet are not included. Would it be very different regarding the conclusion of this paper?\\n2. It seems that the only effective manner is by fine tuning the parameters of both batch normalisation related and lasts layer, while fine tuning last layer seems to be having the main impact on the final result. In Table 4, authors do not even provide the results fine tuning last layer only.\\n3. The organisation of the paper and the order of illustration is a bit confusing. e.g. later sections are frequently referred in the earlier sections. Personally I would prefer a plain sequence than keep turning pages for confirmation.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea and fair evaluation. Accept with minor changes.\", \"review\": \"Summary: the paper introduces a new way of fine-tuning neural networks. Instead of re-training the whole model or fine-tuning the last few layers, the authors propose to fine-tune a small set of model patches that affect the network at different layers. The results show that this way of fine-tuning is superior to above mentioned typical ways either in accuracy or in the number of tuned parameters in three different settings: transfer learning, multi-task learning and domain adaptation.\", \"quality\": \"the introduced way of fine-tuning is interesting alternative to the typical last layer re-training. I like that the authors present an intuition behind their approach and justify it by an illustrative example. The experiments are fair, assuming the authors explain the choice of hyper-parameters during the revision.\", \"clarity\": \"in general the paper is well-written. The discussion of multi-task and domain adaptation parts can be improved though.\", \"originality\": \"the contributions are novel to my best knowledge.\", \"significance\": \"high, I believe the paper may facilitate a further developments in the area.\", \"i_ask_the_authors_to_address_the_following_during_the_rebuttal_stage\": [\"explain the choice of the hyper-parameters of RMSProp (paragraph under Table 1).\", \"fix Figure 3, it's impossible to read in the paper-printed version\", \"explain how the average number of parameters per model in computed in Tables 4 and 5. E.g. 700K params/model in the first column of Table 4 is misleading - I suppose the shared parameters are not taken into account. The same holds for 0 in the second column, etc.\", \"add a proper discussion for domain adaptation part. The simple \\\"The results are shown in Table 5\\\" is not enough.\", \"consider leaving the discussion of cost-efficient model cascades out. The presented details are too condensed and do not add value to the paper.\", \"explain how different resolutions are managed by the same model in the domain adaptation experiments.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1lDV3RcKm | MisGAN: Learning from Incomplete Data with Generative Adversarial Networks | [
"Steven Cheng-Xian Li",
"Bo Jiang",
"Benjamin Marlin"
] | Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks. However, typical GANs require fully-observed data during training. In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data. The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution. We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer. We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption. | [
"generative models",
"missing data"
] | https://openreview.net/pdf?id=S1lDV3RcKm | https://openreview.net/forum?id=S1lDV3RcKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJxKlQDZeE",
"SJlTFNdNCm",
"BkxEh-d4Rm",
"BklU-g_VCm",
"S1xWJ2UW6Q",
"Hkgdo5Da2m",
"SylB2O2q27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544807153036,
1542911108629,
1542910380400,
1542909949632,
1541659608567,
1541401248304,
1541224620841
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1457/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1457/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1457/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1457/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an adversarial framework that learns a generative model along with a mask generator to model missing data and by this enables a GAN to learn from incomplete data.\\nThe method builds on AmbientGAN but it is a novel and clever adjustment to the specific problem setting of learning from incomplete data, that is of high practical interest.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Intersting idea with practical impact\"}",
"{\"title\": \"MisGAN is a flexible and extensible framework\", \"comment\": \"Thank you for the constructive comments, which we address below.\\n\\n> I notice the images used in the experiments are small size. It would be interesting to test the performance on a larger image.\\n\\nLearning the distribution of high-resolution images poses a different challenge to GAN-based models that is orthogonal to missing data. However, the MisGAN framework is flexible enough to incorporate various techniques that can improve training GANs with large images such as Karras et al. (2017) or Brock et al. (2018).\\n\\n> Another direction would be testing the robustness of the model, for example, what will happen if the observation is also noisy?\\n\\nIf the training data is noisy, the unmodified MisGAN would learn the distribution of noisy data as well. If we are interested in recovering the distribution of the denoised data, we can replace the data generator in MisGAN by an AmbientGAN with certain assumption on the noise structure. The recovery would depend on how accurate the noise assumption is. Note that if we use a complex noise model, the problem might become highly ill-posed and the learned distribution of the clean data can be quite different from the actual one as we showed in Figure 5. In this case, we will need to introduce some domain-specific priors to further regularize the problem.\\n\\n> Some discussion about the potential extensions will also be helpful. For example, can the proposed network be used to solve the compressive sensing problem with a real value mask instead of binary valued.\", \"the_noise_structure_of_the_missing_data_has_a_special_property_that_makes_misgan_possible\": \"the missingness is fully observed. This provides a strong signal to regularize the originally highly ill-posed distribution learning problem. MisGAN might be applicable to other noise models that have similar properties.\\n\\n> Although these two theorems are not directly related to the properties of the proposed network structure. But it does provide some nice intuition.\\n\\nWe emphasize that, as we pointed out at the end of Appendix A, an implication of the theorem is that MisGAN overall learns the joint distribution p(x_obs, m). This is different from most of the work in the literature that aims at modeling p(x_obs | m) instead, which ignores the missing data mechanism.\\n\\n\\nREFERENCES\\n\\nKarras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of GANs for improved quality, stability, and variation.\\nBrock, A., Donahue, J., & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis.\"}",
"{\"title\": \"On the benchmark data used by GAIN\", \"comment\": \"Thank you for the constructive comments, which we address below.\\n\\nAs stated in the introduction, MisGAN is designed for learning the distribution from high-dimensional data in the presence of a potentially large amount of missing values. However, the five benchmark datasets that GAIN (Yoon et al, 2018) is evaluated on are quite small (see Table 1 in the supplementary materials of Yoon et al. (2018) for details). The average number of examples in a dataset is less than 20,000 and the smallest one has only 569 examples. Moreover, the dimensionality of the data is also relatively small, where the average dimensionality is about 36 while the smallest one is 16. As a result, GAN-based models like MisGAN are not particularly suitable for this situation. The table below compares the imputation RMSE of MisGAN with GAIN and MICE, one of the strong baseline GAIN compares to. The first two rows in the table directly come from Table 2 in Yoon et al. (2018), where the results of MICE (R) are computed using the R package MICE. We find that a popular Python implementation of MICE, fancyimpute, outperforms all the methods on all five datasets as shown in the third row (and it runs faster than the R implementation). However, MisGAN performs worse than GAIN on most of the cases due to the fact that the datasets are too small to learn a good data generator that drives imputation. On the other hand, data-efficient methods like MICE appear to be a better modeling choice when data is scarce. Nevertheless, learning distributions from small-scale incomplete data with MisGAN is an interesting direction for future investigation.\\n\\n Breast Spam Letter Credit News\\nGAIN .0546 .0513 .1198 .1858 .1441\\nMICE (R) .0646 .0699 .1537 .2585 .1763\\nMICE (fancyimpute) .0498 .0494 .1126 .1217 .1426\\nMisGAN .0855 .0637 .1632 .1656 .2442\\n\\nTo better understand the imputation behavior on high-dimensional incomplete data, for which MisGAN targets, we choose to perform a set of controlled experiments with different missing distributions under a wide range of missing rates (unlike Yoon et al. (2018) that only assessed 20% missingness). We choose to evaluate on image datasets so we can visually judge if the evaluation agrees with our intuition as in Figure 21. Note that for the MNIST results in Figure 7, we use fully-connected imputers in MisGAN as if the model has no prior knowledge about the structure of the underlying data to demonstrate that MisGAN can be applied to generic data other than images. \\n\\nWe note that fancyimpute\\u2019s MICE not only outperforms both MisGAN and GAIN on the benchmark datasets in Yoon et al. (2018) but runs much faster. However, it can hardly scale up to data like MNIST, although being only 784-dimensional.\\n\\n\\nREFERENCES\", \"fancyimpute\": \"https://github.com/iskandr/fancyimpute\"}",
"{\"title\": \"Evaluation of imputation using RMSE\", \"comment\": \"Thank you for the constructive comments, which we address below.\\n\\n> In a real life application, one would pick the mode of the distribution of the missing samples, and not sample from that distribution as the authors seems to be doing in this paper.\\n\\nIn this work, the imputation model we proposed learns an implicit model that generates samples from p(x_mis | x_obs) where the density is not explicitly defined. This implies that the imputer is likely to generate samples around the modes of the distribution, although the modes are not explicitly known. However, MisGAN is compatible with density-based imputation methods as well. For example, we can define a density model similar to variational autoencoders (Kingma and Welling, 2014) that imposes an isotropic Gaussian noise at the end of the deterministic decoder. With such density model, we can then optimize the latent code using gradient methods to output the mode (local maxima of the density function) of p(x_mis | x_obs) or amortize density maximization using a similar imputation network. We leave the comprehensive evaluation of this alternative imputation method for future work.\\n\\n> The authors measure the success of their algorithm by computing FID scores for the randomly inputed images. That is the authors use a metric which measures a distance between the distribution of the generated images and images in a dataset. This is fine and interesting to know, but people also care about the distance of the completed pixels from the ground truth (missing) values.\\n\\nTo evaluate the imputation performance in a controlled experiment, we chose to assess the FID between the imputed data and the originally fully-observed data instead of following the commonly used approach that computes the RMSE against the ground truth for the following reasons: In a complex system, the conditional distribution p(x_mis | x_obs) is likely to be highly multimodal. It\\u2019s not guaranteed that the ground truth of the missing features in the incomplete dataset created under the missing completely at random (MCAR) assumption correspond to the global mode of p(x_mis | x_obs). A good imputation model might generate samples associated with a higher density than the ground truth (or from other modes that are similarly probable). In this case, it will lead to a large error in terms of metrics like RMSE as multiple modes might be far away from each other in a complex distribution. On the other hand, our evaluation methods using FID provides a practical way to assess how close a model imputes according to p(x_mis | x_obs) by comparing distributions collectively.\\n\\nAs a concrete example, Figure 20 in Appendix J (updated) compares the two evaluation metrics on MNIST: our distribution-based FID and the ground truth-based RMSE. It shows that the rankings under most of the missing rates we assessed are not consistent across the two metrics. In particular, under 90% missing rate, MisGAN is worse than GAIN and matrix factorization in terms of RMSE, but significantly better in terms of FID. Figure 21 plots the imputation results of the three methods mentioned above. We can see that MisGAN produces the most visually promising completion even though its RMSE is much higher than the other two. It\\u2019s not surprising as the mean of p(x_mis | x_obs) minimizes the squared error in expectation, even if it might have low density. 
This probably explains why the blurry completion results produced by matrix factorization achieve the lowest RMSE.\\n\\n> I found the 'marketing'/presentation of the algorithm little misleading, especially in the introduction, given that there exists another GAN based imputation algorithm.\\n\\nUnlike GAIN, the main goal of this work, as we stated in the introduction, is trying to learn the distribution from high-dimensional incomplete data, which is applicable to a broader range of tasks other than missing data imputation. For example, we can train the model with interpretable priors in the generator network for better exploratory analysis. Moreover, a good generative model usually provides simpler or more effective algorithms for missing data imputation. For example, we can instead follow the imputation procedure in Rezende et al. (2014) accompanied with MisGAN to handle the situation when the missing data mechanism is different from the training distribution.\\n\\n\\nREFERENCES\\n\\nKingma, D. P., & Welling, M. (2014). Auto-encoding variational bayes.\\nRezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models.\"}",
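To contrast the two evaluation protocols discussed above in code, here is a small numpy sketch; the mask convention (1 = observed) and the `fid` helper are assumptions for illustration — `fid` stands in for any standard FID implementation.

```python
import numpy as np

def impute(x_obs, x_gen, mask):
    """Compose an imputation: keep observed entries, fill in the rest
    with generated values (mask = 1 where observed, assumed convention)."""
    return mask * x_obs + (1.0 - mask) * x_gen

def masked_rmse(x_true, x_imputed, mask):
    """Ground-truth-based metric: error only over the missing entries."""
    miss = mask == 0
    return np.sqrt(np.mean((x_imputed[miss] - x_true[miss]) ** 2))

# Distribution-based metric: compare the imputed dataset as a whole to the
# originally fully-observed dataset.
# fid_value = fid(imputed_dataset, fully_observed_dataset)  # hypothetical helper
```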
"{\"title\": \"Good paper but need to rectify few things\", \"review\": \"This is a good paper, as we have good experimental evidence that the proposed method seems to have some advantage over baseline methods.\\n\\nThe authors measure the success of their algorithm by computing FID scores for the randomly inputed images. That is the authors use a metric which measures a distance between the distribution of the generated images and images in a dataset. This is fine and interesting to know, but people also care about the distance of the completed pixels from the ground truth (missing) values. (E.g. https://www.cs.rochester.edu/u/jliu/paper/Ji-ICCV09.pdf)\\n\\nThis is important, because in a real life application, one would pick the mode of the distribution of the missing samples, and not sample from that distribution as the authors seems to be doing in this paper. \\n\\nI would therefore suggest adding experiments where authors pick the mode of the distribution and estimate an error metric such as root mean square error (RMSE or PSNR https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio ) \\n\\nI also found the 'marketing'/presentation of the algorithm little misleading, especially in the introduction, given that there exists another GAN based imputation algorithm. I think the authors should clearly state in the introduction that the other algorithm, abbreviated GAIN, exists as a GAN based missing data completion method. Then they should point out the differences of this algorithm from GAIN. Namely they should elaborate verbally on why learning the missing data distribution helps. Overall, what I am trying to say is, the key idea of this paper - that is learning the mask distribution - is not well motivated in this paper. \\n\\nDespite my concerns above, I recommend an accept. The algorithm seems novel, and there is some experimental results to back it up.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Resolving a major challenge in AmbientGAN, by focusing on a very specific application.\", \"review\": \"Building upon the success of AmbientGAN by Bora, Price, and Dimakis, this paper studies one of the major issues that is not resolved in AmbientGAN: the distribution of the data corruption is typically unknown. In general this is an ill-defined problem to solve, as the data corruption distribution is not identifiable from the corrupted data. The major insight of this paper is to identify a plausible setting where such identifiability issues are not present. Namely, the corruption itself is identifiable from the corrupted data. The brilliance of this paper is in identifying this niche application of data imputation/missing data/incomplete data.\\n\\nOnce the goal is set to train a GAN on incomplete data, the solution somewhat follows in a straightforward manner from AmbientGAN. Pass the generated output through a masking operator, which is also trained. Train the masking operator on the masking pattern of the real (corrupted) data. Imputation generator and discriminator also follows in a straightforward manner. \\n\\nA major shortcoming of this paper is that the performance of the proposed approach is not fully supported by extensive experiments. For example, a major application of such imputation solution will be predicting missing data in real world applications, such as recommendation systems, or biological experimental data. A experimental setting in \\\"GAIN: Missing Data Imputation using Generative Adversarial Nets\\\" provides an excellent benchmark dataset, and imputation approaches should be compared against GAIN in those scenarios.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Nice extension of AmbientGAN with detail experiment analysis\", \"review\": \"This paper proposed a new network structure to learn GAN with incomplete data, and it is a nice extension of AmbientGAN. Two theorems are provided for better understanding the potential effect of the missing values. Improved results compared with state-of-the-art methods on MNIST, CIFAR-10 and CelebA are presented. Overall, the paper is well organized, and the experiment results are sufficient to demonstrate the advantages of the proposed method. I particular like figure5 where AmbientGAN failed in this case.\\n\\n Several suggestions about improving the paper. I notice the images used in the experiments are small size. It would be interesting to test the performance on a larger image. Another direction would be testing the robustness of the model, for example, what will happen if the observation is also noisy? Some discussion about the potential extensions will also be helpful. For example, can the proposed network be used to solve the compressive sensing problem with a real value mask instead of binary valued. \\n\\nI did not dive into the detail of the prove of theorems. And it seems valid by reading through each step. Although these two theorems are not directly related to the properties of the proposed network structure. But it does provide some nice intuition.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1xLN3C9YX | Learnable Embedding Space for Efficient Neural Architecture Compression | [
"Shengcao Cao",
"Xiaofang Wang",
"Kris M. Kitani"
] | We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search. Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation. We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning (Ashok et al., 2018). The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet (Zhang et al., 2018). We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training. | [
"Network Compression",
"Neural Architecture Search",
"Bayesian Optimization",
"Architecture Embedding"
] | https://openreview.net/pdf?id=S1xLN3C9YX | https://openreview.net/forum?id=S1xLN3C9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJlB0kSJjE",
"SklsukS1oE",
"SJxVEEQHcV",
"BygNIKvjKN",
"H1xpoLviKE",
"rJlTnfKnEE",
"BJlkwaRjV4",
"rJgifbHiNE",
"SJlfSUgjEE",
"B1xjh9VUx4",
"r1lSlEa4x4",
"HkgEmh0Wg4",
"HJgs_Ol51V",
"BkePNnddJN",
"rJeIV9wpRX",
"rygTLwXI07",
"Skl77Dx7A7",
"r1e9ywl7A7",
"S1eGpLg70m",
"HJx0P9-xAQ",
"BygS45WxAm",
"ryglzcbg0m",
"BylTSOvc6X",
"BJei_UPcTX",
"S1ectNwqam",
"S1eeX4vcTX",
"SygyWQPqT7",
"BJxRNWmx6Q",
"Bygrx6CthX",
"B1e700_3sQ",
"B1lpeTl3YQ"
],
"note_type": [
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1556201421295,
1556201331160,
1555538988040,
1554901323725,
1554900645147,
1549730484569,
1549688150575,
1549648146957,
1549628985859,
1545124530701,
1545028588769,
1544838171829,
1544321139043,
1544223790592,
1543498285697,
1543022421175,
1542813466524,
1542813410266,
1542813369789,
1542621798410,
1542621740952,
1542621704191,
1542252612986,
1542252146937,
1542251649931,
1542251544091,
1542251255329,
1541579062063,
1541168364800,
1540292299153,
1538161908797
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"~Miao_Zhang1"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"~Miao_Zhang1"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1455/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1455/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1455/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Code Released\", \"comment\": \"Code is available at https://github.com/Friedrich1006/ESNAC .\"}",
"{\"title\": \"Code Released\", \"comment\": \"Code is available here now: https://github.com/Friedrich1006/ESNAC . Thanks for your interest in our work!\"}",
"{\"comment\": \"Do you have an estimate for when it will be released?\", \"title\": \"Time Estimate\"}",
"{\"title\": \"Code will be released soon\", \"comment\": \"Yes, the code will be released soon. Stay tuned!\"}",
"{\"comment\": \"Will the code for this project be released? It'd be wonderful if it were released.\", \"title\": \"Code for this project\"}",
"{\"title\": \"Details about Bi-LSTM\", \"comment\": \"At each Bi-LSTM step, we pass the configuration information of one layer to the Bi-LSTM. The input dimension is (m+2n+6). Here n is the maximum number of layers in the network. Details about the representation for layer configuration can be found in Appendix, Sec 6.2.\\n\\nThe number of Bi-LSTM steps is the same as the number of layers in the network. When one layer is removed, we replace it with an identity layer. The configuration of this 'removed layer' will still be passed to the Bi-LSTM, but here the configuration is updated to an identity layer, different from the original layer. This implementation choice makes the number of Bi-LSTM steps fixed to the number of layers in the given teacher network. But we choose to replace a removed layer as an identity layer instead of actually removing it simply because it is easier to implement and is equivalent to actually removing it. We always get a fixed size embedding for the whole architecture no matter how many layers are in the network because of the average pooling.\\n\\nIn terms of the architecture details of the Bi-LSTM, we use 4 stacked Bi-LSTM cells and the dimension of the hidden state is 64.\"}",
"{\"comment\": \"Really thanks for your reply! So my question becomes that, does the number of units in Bi-LSTM equal to the maximal number of layers in the architecture?\\n\\nSincerely!\", \"title\": \"number of the hidden state and the maximal number of layers\"}",
"{\"title\": \"Average pooling ensures the length of the architecture embedding is fixed\", \"comment\": \"Thanks for your interest in our paper! The embedding for the whole architecture is learned by the Bi-LSTM. After passing the configuration information of each layer into the Bi-LSTM, we gather all the hidden states, apply average pooling to these hidden states and then apply L2 normalization to the pooled vector to obtain the architecture embedding. The average pooling ensures that we obtain a fixed length vector for the whole architecture no matter how many layers we have. The length of the architecture embedding equals the dimension of the hidden state of the Bi-LSTM.\"}",
"{\"comment\": \"Dear authors,\\n\\nI have thoroughly read this paper, and I found it is very interesting and easy to follow, only one place I can not understand.\\n\\nYou said you use Bi-LSTM to learn the embedding representation, which comes from \\\"the N2N layer removal\\\". My question is, when you remove one layer, do you still use the same length vector to represent the whole architecture? even though several genes are with no meaning? How do you embed the vectors (whole architectures with different number of layers) into same length vectors?\", \"title\": \"Representation of the whole architecture after remove a layer\"}",
"{\"title\": \"Discussion of NAO\", \"comment\": \"Thanks for the valuable comment. This paper \\u201cNeural Architecture Optimization\\u201d (NAO) was publicly available on Arxiv at the end of August 2018, about one month before the submission deadline for ICLR. We will add a discussion of NAO in the related work section in the final version of this paper.\\n\\nNAO and our work share the idea of mapping network architectures into a latent embedding space and carrying out the optimization in this learned embedding space. We are happy to see the idea of mapping network architectures into a latent embedding space has a bigger impact than what\\u2019s stated in our paper.\\n\\nBut NAO and our work have fundamentally different motivations for mapping neural network architectures into a continuous space, which further lead to different architecture search frameworks. NAO maps network architectures to a continuous space such that they can perform gradient based optimization to find better architectures. However, our motivation for the embedding space is to make it easy to define a similarity metric (kernel function) between architectures with complex skip connections and multiple branches.\", \"here_is_the_text_from_the_related_work_from_nao_paper\": \"\\u201cHowever, the effectiveness of GP heavily relies on the choice of covariance functions K(x, x\\u2019) which essentially models the similarity between two architectures x and x\\u2019. One need to pay more efforts in setting good K(x, x\\u2019) in the context of architecture design.\\u201d Our work proposes a learnable kernel function K(x, x\\u2019) for the architecture domain while NAO does not build upon a Gaussian process and does not touch upon how to define such a kernel function.\\n\\nThe general framework of NAO is gradient based, i.e., NAO selects architectures for evaluation at each architecture search step by using the gradient direction induced by the performance predictor while our search method builds upon Bayesian optimization by defining the kernel function defined over our proposed embedding space.\"}",
"{\"comment\": \"Dear the authors,\\n\\nPlease refer to the NeurIPS'18 paper: \\\"Neural Architecture Optimization\\\", which maps the NN architectures into their embeddings. It seems a lack of gental discussion (not simple reference) with such a related work is not sound.\\n\\nBest\", \"title\": \"Please refer to \\\"Neural Architecture Optimization\\\"\"}",
"{\"metareview\": \"The authors propose a method to learn a neural network architecture which achieves the same accuracy as a reference network, with fewer parameters through Bayesian Optimization. The search is carried out on embeddings of the neural network architecture using a train bi-directional LSTM. The reviewers generally found the work to be clearly written, and well motivated, with thorough experimentation, particularly in the revised version. Given the generally positive reviews from the authors, the AC recommends that the paper be accepted.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Architecture search through Bayesian Optimization\"}",
"{\"title\": \"Response to R3\\u2019s concerns\", \"comment\": \"We thank the reviewer for the feedback. Here are our responses:\\n\\n** Response to the concern about the objective function **\\n\\n(1) We agree that maximizing the log marginal likelihood is a principled objective function, but our choice of maximizing the GP predictive posterior p(f(x_i) | f(D \\\\ x_i)) is also reasonable and not a hack. The posterior distribution guides the search process by influencing the choice of architectures for evaluation at each step. The value of p(f(x_i) | f(D \\\\ x_i)) indicates how accurate the posterior distribution characterizes the statistical structure of the function, where f(x_i) is the specific performance value obtained by evaluating the architecture x_i and \\u2018D \\\\ x_i\\u2019 refers to all the evaluated architectures other than x_i. So we believe maximizing p(f(x_i) | f(D \\\\ x_i)) is a suitable training objective for learning the embedding space.\\n\\n(2) The key idea of our work is to learn an embedding space for the architecture domain, i.e., a unified representation for the configuration of architectures. The learned embedding space can be used to compare architectures with complex skip connections and multiple branches and we can combine it with any Sequential Model-Based Optimization (SMBO) method to search for desired architectures. While the choice of the objective function itself is important, the idea of mapping an architecture to a latent embedding space is valid no matter what objective function we choose, which is influenced by many factors such as the specific choice of the SMBO method (we choose GP based Bayesian optimization in this work), the principle or the intuition of the objective function and whether the objective function gives good empirical performance or not.\\n\\n(3) Yes, we do add a small value on the diagonal of the covariance matrix in our implementation. Here is the code snippet to compute the log determinant in our implementation: \\u201ctorch.logdet(self.K + self.beta * torch.eye(self.n))\\u201d, where K is the covariance matrix, beta is a small positive value and n is the dimension of the matrix. \\n\\nWe would like to point out that beta here actually refers to the variance of the Gaussian noise. When assuming an additive Gaussian noise with the variance denoted by beta, the formula of the log marginal likelihood naturally contains the term \\u201cself.K + self.beta * torch.eye(self.n)\\u201d and we do not need to add an extra small value on the diagonal of K.\\n\\n** Response to the concern about the random sampling **\\n\\n(1) The comparison between our method and \\u2018Random Search\\u2019 is fair because, in the implementation, our method and \\u2018Random Search\\u2019 use **exactly the same** sampling procedure with the same hyperparameter values to sample architectures. Also, our method and \\u2018Random Search\\u2019 train the same number of architectures in the whole search process.\\n\\nThe difference between our method and \\u2018Random Search\\u2019 is that \\u2018Random Search\\u2019 randomly samples architectures and train them to get the performance while our method carefully selects the architecture for evaluation by maximizing the acquisition function at each architecture search step. The way we maximize the acquisition function is to randomly sample a set of architectures, evaluate their acquisition function values and choose the architecture with the highest acquisition function value. 
The randomly sampling procedure used in maximizing the acquisition function in our method is **exactly the same** as the one used in \\u2018Random Search\\u2019. Note that the evaluation of acquisition value for one architecture is super fast, which only involves forwarding the architecture configuration parameters to the LSTM and does not involve any training of this architecture.\\n\\n(2) Yes, we have explored different hyperparameter values used in the random sampling procedure, but did not notice an improvement in the performance for either our method or \\u2018Random Search\\u2019.\\n\\n(3) Random Search is not the only baseline we have. We have compared our method to N2N (a reinforcement learning based method, see Table 1) and the state-of-the-art manually designed compact architecture ShuffleNet (see Table 2) and have demonstrated superior performance.\"}",
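For reference, here is a minimal PyTorch sketch of the quantity under discussion — the GP log marginal likelihood with the noise/jitter term beta on the diagonal, mirroring the `torch.logdet(self.K + self.beta * torch.eye(self.n))` snippet above. It assumes a zero-mean GP and a precomputed kernel matrix; the Cholesky-based formulation is one standard way to keep it numerically stable.

```python
import math
import torch

def gp_log_marginal_likelihood(K, y, beta=1e-4):
    """log p(y | X) for a zero-mean GP with kernel matrix K (n x n) and
    additive Gaussian noise of variance beta, which doubles as jitter
    stabilizing the log-determinant."""
    n = K.shape[0]
    Ky = K + beta * torch.eye(n, dtype=K.dtype)
    L = torch.linalg.cholesky(Ky)                     # safer than a direct inverse
    alpha = torch.cholesky_solve(y.unsqueeze(1), L)   # (K + beta I)^{-1} y
    logdet = 2.0 * torch.log(torch.diagonal(L)).sum()
    return (-0.5 * (y.unsqueeze(1) * alpha).sum()
            - 0.5 * logdet
            - 0.5 * n * math.log(2.0 * math.pi))
```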
"{\"title\": \"Concerns remain...\", \"comment\": \"Many thanks for the response.\\n\\nUnfortunately, original concerns 2 and 3 remain for me.\\n\\nSpecifically, the fact that the authors finally decide to maximize the GP predicitive posterior (and the combination with the multiple Kernel strategy) seems hacky and unprincipled to me. Also the comment \\\"we observe the loss is numerically unstable due to the log determinant of the covariance matrix\\\" makes me worry, did you try to add some jitter to the the diagonal of the covariance (e.g., add 1e-4 * sp.eye(dim_cov))? ()\\n\\nThis makes me wonder if the comparison with the random sampling procedure is fair. There are a lot of hyperparameters in the proposed sampling procedure. Did the authors explore whether different values give better performance of the random sampling?\\n\\nOverall, I like the approach taken in this paper and the other reviewers seem to like this work, so I would like to be more supportive. However, at this stage, I do not feel changing my score. Can the authors or the other reviewers make me notice if I am missing anything and/or if (and why) my concerns are seen as unimportant?\"}",
"{\"title\": \"Response to author feedback\", \"comment\": \"I have read the authors responses and taken a look and the updated manuscript. They have done a good job. I have hence increased my score accordingly.\"}",
"{\"title\": \"Comparison to TPE\", \"comment\": \"We first do not consider adding skip connections between layers and focus on layer removal and layer shrinkage only, i.e., we search for a compressed architecture by removing and shrinking layers from the given teacher network. Therefore, the hyperparameter we need to tune include for each layer whether we should keep it or not and the shrinkage ratio for each layer. This results in 64 hyperparameters for ResNet-18 and 112 hyperparameters for ResNet-34. The results are summarized in the attached table. Comparing \\u2018TPE - removal + shrinkage\\u2019 and \\u2018Ours - removal + shrinkage\\u2019, we can see that our method outperforms TPE and can achieve higher accuracy with a similar size.\\n\\nNow, we conduct experiments with adding skip connections. Besides the hyperparameters mentioned above, for each pair of layers where the output dimension of one layer is the same as the input dimension of another layer, we tune a hyperparameter representing whether to add a connection between them. The results in 529 and 1717 hyperparameters for ResNet-18 and ResNet-34 respectively. In this representation, the original hyperparameter space is extremely high-dimensional and we think it would be difficult to directly optimize in this space. We can see from the table that for ResNet-18, the \\u2018TPE\\u2019 results are worse than \\u2018TPE - removal + shrink\\u2019. We do not show the \\u2018TPE\\u2019 results for ResNet-34 here because the networks found by TPE have too many skip connections, which makes it very hard to train. The loss of those networks gets diverged easily and do not generate any meaningful results.\\n\\nBased on the results on \\u2018layer removal + layer shrink\\u2019 only and the results on the full search space, we can conclude that our method is better than optimizing in the original space especially when the original space is very high-dimensional.\\n\\n Accuracy #Params Ratio Times f(x)\\nCIFAR-100\\nResNet-18 TPE - removal + shrink 70.60%\\u00b10.69% 1.30M\\u00b10.28M 0.8843\\u00b10.0249 8.99x\\u00b12.16x 0.8849\\u00b10.0111\\n TPE 65.17%\\u00b13.14% 1.54M\\u00b11.42M 0.8625\\u00b10.1267 11.82x\\u00b17.69x 0.8041\\u00b10.0595\\n Ours - removal + shrink 72.57%\\u00b10.58% 1.42M\\u00b10.52M 0.8733\\u00b10.0461 8.85x\\u00b13.97x 0.9062\\u00b10.0081\\n Ours 73.83%\\u00b11.11% 1.87M\\u00b10.08M 0.8335\\u00b10.0073 6.01x\\u00b10.26x 0.9123\\u00b10.0151\\nResNet-34 TPE - removal + shrink 72.26%\\u00b10.83% 2.36M\\u00b10.45M 0.8893\\u00b10.0211 9.24x\\u00b11.59x 0.9065\\u00b10.0072\\n Ours - removal + shrink 73.72%\\u00b11.33% 2.75M\\u00b10.55M 0.8711\\u00b10.0257 8.01x\\u00b11.70x 0.9205\\u00b10.0117\\n Ours 73.68%\\u00b10.57% 2.36M\\u00b10.15M 0.8895\\u00b10.0069 9.08x\\u00b10.59x 0.9246\\u00b10.0076\", \"caption\": \"\\u2018TPE - removal + shrink\\u2019 and \\u2018Ours - removal + shrink\\u2019 refer to results of TPE and our method when only considering layer removal and layer shrinkage. \\u2018TPE\\u2019 and \\u2018Ours\\u2019 refers to results of TPE and our method when considering the full search space, including layer removal, layer shrinkage and adding skip connections.\"}",
"{\"title\": \"Response to questions about the treatment of the GP hyperparameters\", \"comment\": \"Yes, the hyperparameters of the GP are fixed in our experiments. We have tried optimizing the kernel width parameter $\\\\sigma$ (defined in Eq 4) and the LSTM weights jointly before but we found that empirically gives worse results. In their experiments [1], the representation of the configuration space is fixed (for example, they represent the architecture configuration with the value of hyperparameters) but in our work, the latent space is learned and keeps being updated during the search process. Optimizing both the GP hyperparameters and the latent space allows more flexibility and may achieve better performance if there are enough training samples for the LSTM. However, in the architecture search scenario, we can only evaluate a few number of architectures, in which case we think fixing the GP hyperparameters and only learning the latent space itself is better.\\n\\n[1] Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems (pp. 2951-2959).\"}",
"{\"title\": \"Paper is updated\", \"comment\": \"Yes, we have updated the paper and included the additional results in the appendix (see Sec 6.6 and Table 7).\"}",
"{\"title\": \"Will add the results of TPE\", \"comment\": \"Thanks for the reply! We agree that the paper would be more convincing if we can compare to applying TPE in the original space. We are running experiments for that and will follow up here once we have the results.\"}",
"{\"title\": \"Question about the treatment of the GP hyperparameters\", \"comment\": \"Does this mean you keep the hyperparameters of the Gaussian process fixed? Why not adapting them by either optimizing the marginal log-likelihood or marginalizing over them as described in https://papers.nips.cc/paper/4522-practical-bayesian-optimization-of-machine-learning-algorithms.pdf\"}",
"{\"title\": \"Response to additional experiments\", \"comment\": \"Thanks for providing the additional experiments. This is very valuable information. Could you add that to the appendix?\"}",
"{\"title\": \"Response to comparison to existing methods\", \"comment\": \"I do follow the intuition of the paper that, compared to existing methods, the proposed method learns an embedding in order to allow for measuring similarities between architectures in a latent space rather than in the much more complicated original space.\\nIt is true that existing BO methods are complementary to the presented approach, however, for me it remains open whether the learned embedding is actually helpful for Bayesian optimization and improves upon methods that only operate in the original input space. \\nI still feel that the paper would be much more convincing, if it contains an experiment that shows that BO in the original space (e.g TPE) is outperformed by Bayesian optimization that uses the latent embedding.\"}",
"{\"title\": \"Response to the reviewer's questions\", \"comment\": \"Thanks for the useful feedback! Here is our response:\\n\\n*** Response to the question about the motivation and the presentation of this paper: ***\\n\\nWe thank the reviewer for the suggestion about the presentation of the paper. We have edited the introduction to motivate our method more in the context of model compression. We also include exploring its application to the general NAS problem as our future work in the conclusion section.\\n\\n\\n*** Response to the question about the using the log marginal likelihood as the objective function: ***\\n\\nWe agree that the log marginal likelihood is the standard objective function in previous works on kernel learning. However, we do not use the log marginal likelihood for the following two reasons:\\n\\n(1) We empirically find that maximizing the log marginal likelihood yields worse results than maximizing the predictive GP posterior. Here are the results:\\n\\nCIFAR-100\\t\\t Accuracy\\t\\t#Params\\t\\tRatio\\t\\t Times\\t\\tf(x)\\nVGG-19\\t Log Marginal\\t69.90%\\u00b10.69%\\t1.50M\\u00b10.68M\\t0.9254\\u00b10.3382\\t16.14x\\u00b19.22x\\t0.9422\\u00b10.0071\\n\\t Ours\\t 71.41%\\u00b10.75%\\t2.61M\\u00b10.61M\\t0.8699\\u00b10.0306\\t7.99x\\u00b11.99x\\t0.9518\\u00b10.0158\\n\\t\\t\\t\\t\\t\\t\\nResNet-18\\tLog Marginal\\t72.80%\\u00b11.11%\\t1.72M\\u00b10.18M\\t0.8467\\u00b10.0160\\t6.57x\\u00b10.67x\\t0.9033\\u00b10.0094\\n\\t Ours\\t 73.83%\\u00b11.11%\\t1.87M\\u00b10.08M\\t0.8335\\u00b10.0073\\t6.01x\\u00b10.26x\\t0.9123\\u00b10.0151\\n\\t\\t\\t\\t\\t\\t\\nResNet-34\\tLog Marginal\\t73.11%\\u00b10.57%\\t3.34M\\u00b10.48M\\t0.8435\\u00b10.0224\\t6.47x\\u00b10.89x\\t0.9059\\u00b10.0134\\n\\t Ours\\t 73.68%\\u00b10.57%\\t2.36M\\u00b10.15M\\t0.8895\\u00b10.0069\\t9.08x\\u00b10.59x\\t0.9246\\u00b10.0076\\n\\n'Log Marginal' refers to training the LSTM by maximizing the log marginal likelihood. 'Ours' refers to maximizing p(f|D).\\n\\n(2) Also, when using the log marginal likelihood, we observe the loss is numerically unstable due to the log determinant of the covariance matrix in the log likelihood. The training objective usually goes to infinity when the dimension of the covariance matrix is larger than 50, even with smaller learning rates, which may harm the search performance.\\n\\nTherefore, we train the LSTM parameters by maximizing the predictive GP posterior.\\n\\n\\n*** Response to questions about the sampling procedure: ***\\n\\nHere are the details about how we sample one compressed architecture. This sampling procedure is used in both the \\u2018Random Search\\u2019 baseline and the optimization of the acquisition function in our method.\\n\\n(1) For layer removal, only layers whose input dimension and output dimension are the same are allowed to be removed. Each removable layer can be removed with probability p_1. However, if the probability is fixed, the diversity of sampled architectures would be reduced. For example, if we fix p_1 to 0.5, a compressed architecture with over 70% layers removed can hardly be generated. To encourage the diversity of random samples, p_1 is first randomly drawn from the set P_1={0.3, 0.4, 0.5, 0.6, 0.7} at the beginning of generating a new compressed architecture.\\n\\n(2) For layer shrinkage, we divide layers into groups and for layers in the same group, the number of channels are always shrunken with the same ratio. The layers are grouped according to their input and output dimension. 
This is to make sure the network is still valid after the layer shrinkage. The shrinkage ratio for each group is drawn from the uniform distribution U(0.0, 1.0).\\n\\n(3) For adding skip connections, only when the output dimension of one layer is the same as the input dimension of another layer can the two layers be connected. When there are multiple incoming connections for one layer, the outputs of the source layers are added up to form the input for that layer. For each pair of connectable layers, a connection is added between them with probability p_3. Similar to p_1 in layer removal, p_3 is not fixed but randomly drawn from the set P_3={0.003, 0.005, 0.01, 0.03, 0.05} at the beginning of generating a compressed architecture. Values in P_3 are relatively small, because we found in experiments that adding too many skip connections empirically harms the performance of compressed architectures.\\n\\nCombining these three kinds of randomly sampled operations, a compressed architecture is generated from the teacher architecture. We have tried including more values in the sets P_1 and P_3, but that did not yield any improvement in performance.\"}",
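A minimal Python sketch of the sampling procedure described above may help make it concrete. This is an illustration only, not the authors' code: the `Layer` record, its `in_dim`/`out_dim` fields, and the skip-connection indexing convention are assumptions.

```python
import random
from collections import namedtuple

# Hypothetical layer record; only the fields the three rules need.
Layer = namedtuple("Layer", ["name", "in_dim", "out_dim"])

P1 = [0.3, 0.4, 0.5, 0.6, 0.7]         # layer-removal probabilities
P3 = [0.003, 0.005, 0.01, 0.03, 0.05]  # skip-connection probabilities

def sample_compressed(layers):
    p1, p3 = random.choice(P1), random.choice(P3)  # drawn once per sample
    # (1) Layer removal: only dimension-preserving layers may be removed.
    kept = [l for l in layers
            if not (l.in_dim == l.out_dim and random.random() < p1)]
    # (2) Layer shrinkage: one U(0, 1) ratio per (in_dim, out_dim) group,
    # so every layer in a group is shrunk by the same ratio.
    ratio = {}
    for l in kept:
        ratio.setdefault((l.in_dim, l.out_dim), random.uniform(0.0, 1.0))
    shrunk = [(l, ratio[(l.in_dim, l.out_dim)]) for l in kept]
    # (3) Skip connections: layer i's output may feed layer j's input
    # with probability p3 whenever the dimensions match.
    skips = [(i, j)
             for i in range(len(kept)) for j in range(i + 2, len(kept))
             if kept[i].out_dim == kept[j].in_dim and random.random() < p3]
    return shrunk, skips
```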
"{\"title\": \"Response to the reviewer's questions; Updated the results of multiple runs; Clarification about the originality\", \"comment\": \"We thank the reviewer for the feedback and suggestions. We have addressed all the questions here:\\n\\n*** Response to questions about the performance of N2N: ***\\n\\nAll the numbers of N2N are from their original paper but N2N did not test their method to compress ShuffleNet so we do not have the performance of N2N on ShuffleNet. N2N did not test their method under the setting VGG-19 on CIFAR-100 either. For ResNet-34 on CIFAR-100, N2N only provides results of layer removal (indicated by \\u2018N2N - removal\\u2019 in Table 1 in our paper) so for fair comparison, we compare \\u2018N2N - removal\\u2019 with \\u2018Ours - removal\\u2019, which refers to only considering the layer removal operation in the search space. \\u2018Ours - removal\\u2019 also significantly outperforms \\u2018N2N - removal\\u2019 in terms of both the accuracy and the compression ratio.\\n\\n\\n*** Response to questions about experiment results: ***\\n\\nWe re-run the experiments for 3 times and update the results in the paper (please check the PDF). In Table 1, we show the mean and standard deviation of the results for \\u2018Ours\\u2019 and \\u2018Random Search\\u2019. We observe that after multiple runs, the average performance of our method also outperforms all the baselines as before.\\n\\n\\n*** Response to questions about the related work: ***\\n\\nWe have updated the paper and added this paper in related work. Also in the conclusion section, we think it\\u2019s an interesting future direction to combine their method with our proposed embedding space to identify the Pareto set of the architectures that are both small and accurate. Thanks for suggesting the related work!\\n\\n\\n*** Response to questions about the originality of our work: ***\\n\\nWe would like to emphasize that our key contribution is a novel method that incrementally learns an embedding space for the architecture domain, i.e., a unified representation for the configuration of architectures. The learned embedding space can be used to compare architectures with complex skip connections and multiple branches and we can combine it with any Sequential Model-Based Optimization (SMBO) method (we choose GP based BO algorithms in this work) to search for desired architectures. Based the learned embedding space, we present a framework of searching for compressed network architectures with Bayesian optimization (BO). The learned embedding provides a feature space over which the kernel function of BO is defined. Under this framework, we propose a set of architecture operators for generating architectures for search and a multiple kernel strategy to encourage the search algorithm to explore more diverse architectures.\\n\\nWe demonstrate that our method can significantly outperform various baseline methods, such as random search and N2N (Ashok et al.,2018). The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet (Zhang et al., 2018). We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training.\"}",
"{\"title\": \"Response to Questions about TPE [1], SMAC [2] and their applications to NAS [3][4]:\", \"comment\": \"We thank the reviewer for the detailed feedback. Here is our response to questions about TPE [1], SMAC [2] and their applications to NAS [3][4]:\\n\\n*** Our key contribution ***\\n\\nWe would like to emphasize that our key contribution is a novel method that incrementally learns an embedding space for the architecture domain, i.e., a unified representation for the configuration of architectures, which includes the number of layers, the type and configuration parameters of each layer and how layers are connected to each other. The learned embedding space can be used to compare architectures with complex skip connections and multiple branches and we can combine it with any Sequential Model-Based Optimization (SMBO) method to search for desired architectures. In this work, we define the kernel function (similarity metric between the configuration of architectures) over this incrementally larned space and apply Bayesian optimization to search for desired architectures. The focus of our work is not the use of Bayesian optimization (or some other SMBO methods) but how the embedding or the representation for the configuration of architectures itself can be learned over time. Other than the Gaussian process regression used in this paper, our method can be combined with more sophisticated SMBO methods such as TPE [1] and SMAC [2]. But this is beyond the focus of this work.\\n\\n *** Details about TPE and SMAC ***\\n\\nTPE [1] is a hyperparameter optimization algorithm based on a tree of Parzen estimator. In TPE [1] and its application to NAS [4], they use Gaussian mixture models (GMM) to fit the probability density of the hyperparameter values, which indicates that they determine the similarity between two architecture configurations based on the Euclidean distance in the original hyperparameter value domain. However, instead of comparing architecture configurations in the original hyperparameter value domain, we transform architecture configurations into our learned embedding space and compare them in the learned embedding space. Also in [1] and [4], each architectural hyperparameter is optimized independently of others and it is almost certainly the case that the optimal values of some hyperparameters depend on settings of others. This issue can be solved by applying TPE over our learned unified representation for all the configuration parameters.\\n\\nSMAC [2] is a random-forest-based Bayesian optimization method. In SMAC [2] and its application to NAS [3], they compare two architecture configurations with a combined kernel that is *manually* defined based on the Euclidean distance or the Hamming distance between corresponding configuration parameter values. However, we compare two architecture configurations with an *automatically* learned kernel function defined over a \\u2018data-driven\\u2019 embedding space that is incrementally learned during the optimization. [3] can possibly benefit from our work by replacing their manually defined kernel with our learned kernel function.\\n\\n*** Our method is complementary to TPE and SMAC ***\\n\\nBoth TPE and SMAC focus on improving SMBO methods while our novelty is not in the use of Bayesian optimization methods. 
Our main contribution is the incremental learning of an embedding that represents the configuration of network architectures, so that we can carry out the optimization over the learned space instead of over the original domain of configuration parameter values. Our method is complementary to TPE and SMAC and can be combined with them when applied to NAS.\\n\\n*** [3] and [4] do not tune how the layers are connected to each other. ***\\n\\nAlso, TPE [1] and SMAC [2] have been applied to neural architecture search [3][4] before; however, the connections between layers in the architectures tuned in [3] and [4] are fixed, while we allow the addition of skip connections to optimize how the layers are connected. We believe optimizing how the layers are connected is crucial for the performance of the architecture, and we have validated this in the ablation study (Table 5 in Appendix 6.3).\"}",
"{\"title\": \"Response to Questions about NASBOT [5] and questions about the LSTM training objective\", \"comment\": \"*** Response to the question about NASBOT [5]: ***\\n\\nYes, our work is related to NASBOT as mentioned in the related work. Different from our incrementally learned embedding space for the architecture domain, their proposed OTAMANN distance is a *manually defined* distance metric between architectures and can also be used to compare architectures with different topologies. But we find it is non-trivial to integrate OTAMANN distance into our pipeline. Their public implementation is customized to their search space (searching for architectures from the scratch), which is significantly different from our search space (searching for compressed architectures based on a teacher network). Also, to compute OTAMANN distance, one needs to *manually define* a layer label mismatch cost matrix but in their implementation, they treat the residual block as a special layer type while in our work, a residual block is not specially treated but broken down into several layers with skip connections. This makes it hard to integrate OTAMANN distance into our pipeline. We are looking into their code and trying our best for this.\\n\\n*** Response to \\u201cHow do you make sure that the LSTM learns a meaningful embedding space?\\u201d: ***\\n\\nThe predictive GP posterior guides our choice of the architectures for evaluation at each search step, therefore we learn a meaningful embedding space by updating the LSTM weights \\u03b8 to maximize \\\\Sum_i log p(f(xi) | f(D \\\\ xi); \\u03b8), which is a measurement of how accurate the posterior distribution is. The higher the value of p(f(xi) | f(D \\\\ xi); \\u03b8) is, the more accurately the posterior distribution characterizes the statistical structure of the function f and the more the function f is consistent with the GP prior. Thus we define the loss function (Eq 5) based on p(f|D).\\n\\n\\n*** Response to \\u201cIt is also a bit unclear why the performance f is not used directly instead of p(f|D).\\u201d: ***\\n\\nWe agree that a meaningful embedding space should be predictive of the function value (the performance of the architecture). However directly training the LSTM by regressing the function value with a Euclidean loss does not let us directly evaluate how accurate the posterior distribution characterizes the statistical structure of the function. As we have mentioned above, the posterior distribution guides our search process by influencing the choice of architectures for evaluation at each step. Therefore, we believe p(f|D) is a more suitable training objective for our search algorithm than regressing the value of f. 
To validate this, we have tried changing the objective function from maximizing p(f|D) to regressing the value of f with a Euclidean loss, and here are the results:\\n\\n\\nCIFAR-100\\tMethod\\t\\tAccuracy\\t#Params\\t\\tRatio\\t\\tTimes\\t\\tf(x)\\nVGG-19\\t\\tEuclidean\\t70.95%\\u00b11.07%\\t2.47M\\u00b11.26M\\t0.8771\\u00b10.0627\\t9.62x\\u00b14.55x\\t0.9453\\u00b10.0092\\n\\t\\tOurs\\t\\t71.41%\\u00b10.75%\\t2.61M\\u00b10.61M\\t0.8699\\u00b10.0306\\t7.99x\\u00b11.99x\\t0.9518\\u00b10.0158\\n\\nResNet-18\\tEuclidean\\t71.67%\\u00b10.67%\\t1.62M\\u00b10.27M\\t0.8560\\u00b10.0243\\t7.07x\\u00b11.09x\\t0.8917\\u00b10.0137\\n\\t\\tOurs\\t\\t73.83%\\u00b11.11%\\t1.87M\\u00b10.08M\\t0.8335\\u00b10.0073\\t6.01x\\u00b10.26x\\t0.9123\\u00b10.0151\\n\\nResNet-34\\tEuclidean\\t72.87%\\u00b11.11%\\t2.49M\\u00b10.60M\\t0.8834\\u00b10.2814\\t8.90x\\u00b12.04x\\t0.9127\\u00b10.0103\\n\\t\\tOurs\\t\\t73.68%\\u00b10.57%\\t2.36M\\u00b10.15M\\t0.8895\\u00b10.0069\\t9.08x\\u00b10.59x\\t0.9246\\u00b10.0076\\n\\n'Euclidean' refers to training the LSTM by regressing the value of f with a Euclidean loss; 'Ours' refers to maximizing p(f|D).\\n\\nWe observe that maximizing p(f|D) consistently yields better results than regressing the value of f with a Euclidean loss.\"}",
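Since both tables above compare training objectives for the LSTM embedder, a short sketch of what maximizing p(f|D) can look like may be useful. Below is a numpy illustration of the leave-one-out Gaussian predictive log-likelihood, sum_i log p(f(x_i) | f(D \ x_i)), computed in closed form from a kernel matrix over the learned embeddings (Rasmussen & Williams, Sec. 5.4.2). Whether the authors use exactly this closed form is an assumption; the noise variance 0.05 is the value they report elsewhere in this thread.

```python
import numpy as np

def loo_log_predictive(K, y, noise_var=0.05):
    # K: kernel matrix over learned architecture embeddings (n x n)
    # y: observed objective values f(x_i) for the n evaluated architectures
    Kn = K + noise_var * np.eye(len(y))  # add Gaussian observation noise
    Ki = np.linalg.inv(Kn)
    alpha = Ki @ y
    var = 1.0 / np.diag(Ki)              # leave-one-out predictive variances
    mu = y - alpha * var                 # leave-one-out predictive means
    # sum_i log N(y_i; mu_i, var_i) under the zero-mean GP
    return np.sum(-0.5 * np.log(2.0 * np.pi * var)
                  - 0.5 * (y - mu) ** 2 / var)
```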
"{\"title\": \"Response to other questions ; Updated results of multiple runs\", \"comment\": \"Here is our response to other questions:\\n\\n*** Response to questions about experimental details: ***\\nWe re-run the experiments for 3 times and update the results in the paper (please check the PDF). In Table 1, we show the mean and standard deviation of the results for \\u2018Ours\\u2019 and \\u2018Random Search\\u2019. We observe that after multiple runs, the average performance of our method also outperforms all the baselines as before.\\n\\nThe mean of the Gaussian process prior is set to zero. The Gaussian noise variance is set to 0.05. The kernel width parameter $\\\\sigma$ (defined in Eq 4) in the RBF kernel is set as $\\\\sigma^2=0.01$.\\n\\n\\n*** Response to questions about related work: ***\\n\\nThanks for suggesting the related work. We have updated the paper and added [6] and [7] in the related work section. For your convenience, here is the text about [6] and [7] in the paper: \\u201cOur work can also be viewed as carrying out optimization in the latent space of a high dimensional and structured space, which shares a similar idea with previous literature [6][7]. For example, [6] presents a new variational auto-encoder to map kernel combinations produced by a context-free grammar into a continuous and low-dimensional latent space.\\u201d\\n\\n*** Response to \\u201cWhat do you mean with the sentence \\\"works on BO for NAS can only tune feed-forward structures\\\" in the related work section?\\u201d: ***\\n\\nWe are sorry for the confusion of the term \\u2018feed-forward structures\\u2019 in this sentence. We have corrected the sentence to \\u201cHowever, most existing works on BO for NAS only show results on tuning network architectures where the connections between network layers are fixed, i.e., most of them do not optimize how the layers are connected to each other.\\u201d For example, [8] tunes the hidden size, the embedding size and other architectural parameters in the language model but it does NOT change how the layers in the model are connected to each other. Our results (Table 5 in Appendix 6.3) show that optimizing how the layers are connected (in this work, by adding skip connections) is crucial to the performance of the compressed network architecture.\\n\\nThe fundamental reason why previous works on BO for NAS do not optimize how the layers are connected is that there lacked a principled way to quantify the similarity between two architectures with complex skip connections, which is addressed by our proposed learnable embedding space. They can benefit our proposed method to be extended to optimize how the layers are connected.\\n\\n*** Response to questions about the motivation of using multiple kernels: ***\\n\\nSorry for the confusion in Sec 3.3. We have edited Sec 3.3 to make the motivation more clear. The main motivation of training multiple kernels is to encourage the search algorithm to explore more diverse architectures. We only evaluate 160 architectures during the whole search process so it is possible the learned kernel is overfitted to the training samples and bias the following sampled architectures for evaluation. 
To encourage the search algorithm to explore more diverse architectures, we propose the usage of multiple kernels, motivated by the bagging algorithm, which is usually employed to avoid overfitting.\\n\\nRegarding the statement about the first architecture biasing the LSTM, this statement is invalid in the current context and we have removed it from the paper. This was a conjecture at the early development stage of this work and we mistakenly put it here.\"}",
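The multiple-kernel strategy above is amenable to a short sketch. The RBF width sigma^2 = 0.01 matches the value reported in this thread; the bootstrap-per-kernel scheme below is an assumption about how the bagging analogy could be realized, with one acquisition function then optimized per resulting kernel.

```python
import numpy as np

def rbf_kernel(E1, E2, sigma_sq=0.01):
    # RBF kernel over embedding matrices (rows = architecture embeddings).
    d2 = ((E1[:, None, :] - E2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma_sq))

def bootstrap_indices(n_data, n_kernels, seed=0):
    # One bootstrap subsample of the evaluated architectures per kernel;
    # each subsample trains its own embedder, so that no single
    # (possibly overfit) kernel dominates the search.
    rng = np.random.default_rng(seed)
    return [rng.choice(n_data, size=n_data, replace=True)
            for _ in range(n_kernels)]
```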
"{\"title\": \"interesting idea but...\", \"review\": \"In this work, the authors propose a new strategy to compress a teacher neural network. Briefly, the authors propose using Bayesian optimization (BO) where the accuracy of the networks is modelled using a Gaussian Process function with a squared exponential kernel on continuous neural network (NN) embeddings. Such embeddings are the output of a bidirectional LSTM taking as input the \\u201craw\\u201d (discrete) NN representations (when regarded as a covariance function of the \\u201craw\\u201d (discrete) NN representations, the kernel is a deep kernel).\\n\\nThe authors apply this framework for model compression. In this application, the search space is the space of networks obtained by sampling reducing operations on a teacher network. In applications to CIFAR-10 and CIFAR-100 the authors show that the accuracies of the compressed network obtained through their method exceeds accuracies obtained through other methods for compression, manually compressed networks and random sampling.\\n\\nI have the following concerns/questions:\\n\\n1)\\tThe authors motivate their work in the introduction by discussing the importance of learning a good embedding space over network architectures to \\u201cgenerate a priority ordering of architectures for evaluation\\u201d. Within the proposed BO framework, this would require the optimization of the expected improvement in a high-dimensional and discrete space (the space of NN architectures), which \\u201cis non-trivial\\u201d. In this work, the authors do not try to solve this general problem, but specialize their work to model compression, which has a much lower dimensional search space (space of networks obtained by sampling reducing operations on a teacher network). For this reason, I believe the presentation and motivation of this work is not presented clearly. Specifically, while I agree that the methods and results in this paper can be relevant to the problem of getting NN embeddings for a larger search space, this should be discussed in the conclusion/discussion as future direction, rather than as motivating example. Generally, I think the method should be described in the context of model compression rather than as a general method for neural architecture search (NAS) method (in my understanding, its use for NAS would be unfeasible). \\n\\n2)\\tI have been wondering why the authors optimize the kernel parameters by maximizing the predictive GP posterior rather than maximizing the GP log marginal likelihood as in standard GP regression?\\n\\n3)\\tThe sampling procedure should be explained in greater detail. How many reducing operations are sampled? This would be important to fully understand the random search method the authors consider for comparison in their experiments. I expect that the results from that method will strongly depend on the sampling procedure and different choices should probably be explored for a fair comparison. Do the authors have any comment on this?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting idea but the paper needs further work\", \"review\": \"================\\nPost-Rebuttal\\n================\\n\\nI thank the authors for the larger amount of additional work they put into the rebuttal. Since the authors addressed my main concerns, i e. comparison to existing methods, clarifications of the proposed approach, adding references to related work, I will increase my score and suggest to accept the paper.\\n\\n\\n\\n\\nThe paper describes a new neural architecture search strategy based on Bayesian optimization to find a compressed version of a teacher network. The main contribution of the paper is to learn an embedding that maps from a discrete encoding of an architecture to a continuous latent vector such that standard Bayesian optimization can be applied. \\nThe new proposed method improves in terms of compressing the teacher network with just a small drop in accuracy upon an existing neural architecture search method based on reinforcement learning and random sampling.\\n\\n\\nOverall, the paper presents an interesting idea to use Bayesian optimization on high dimensional discrete problems such as neural architecture search. I think a particular strength of this methods is that the embedding is fairly general and can be combined with various recent advances in Bayesian optimization, such as, for instance, multi-fidelity modelling.\\nIt also shows on some compression experiments superior performance to other state-of-the-art methods.\\n\\nHowever, in its current state I do not think that the paper is read for acceptance:\\n\\n- Since the problem is basically just a high dimensional, discrete optimization problem, the paper misses comparison to other existing Bayesian optimization methods such as TPE [1] / SMAC [2] that can also handle these kind of input spaces. Both of these methods have been applied to neural architecture search [3][4] before. Furthermore, since the method is highly related to NASBOT [5], it would be great to also see a comparison to it.\\n\\n- I assume that in order to learn a good embedding, similar architectures need to be mapped to latent vector that are close in euclidean space, such that the Gaussian process kernel can model any correlation[7]. How do you make sure that the LSTM learns a meaningful embedding space? It is also a bit unclear why the performance f is not used directly instead of p(f|D). Using f instead of p(f|D) would probably also make continual training of the LSTM easier, since function values do not change.\\n\\n- The experiment section misses some details:\\n - Do the tables report mean performances or the performance of single runs? It would also be more convincing if the table contains error bars on the reported numbers.\\n - How are the hyperparameters of the Gaussian process treated?\\n \\n- The related work section misses some references to Lu et al.[6] and Gomez-Bombarelli et al.[7] which are highly related.\\n\\n- What do you mean with the sentence \\\"works on BO for NAS can only tune feed-forward structures\\\" in the related work section? There is no reason why other Bayesian optimization should not be able to also optimize recurrent architectures (see for instance Snoek et al.[8]). \\n\\n- Section 3.3 is a bit confusing and to be honest I do not get the motivation for the usage of multiple kernels. Why do the first architectures biasing the LSTM? 
Since Bayesian optimization with expected improvement samples around the global optimum, should not later-evaluated, well-performing architectures be more present in the training dataset for the LSTM?\\n\\n\\n[1] Algorithms for Hyper-Parameter Optimization\\n J. Bergstra and R. Bardenet and Y. Bengio and B. Kegl\\n Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems (NIPS'11)\\n\\n[2] Sequential Model-Based Optimization for General Algorithm Configuration\\n F. Hutter and H. Hoos and K. Leyton-Brown\\n Proceedings of the Fifth International Conference on Learning and Intelligent Optimization (LION'11)\\n\\n[3] Towards Automatically-Tuned Neural Networks\\n H. Mendoza and A. Klein and M. Feurer and J. Springenberg and F. Hutter\\n ICML 2016 AutoML Workshop\\n\\n[4] Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures\\n J. Bergstra and D. Yamins and D. Cox\\n Proceedings of the 30th International Conference on Machine Learning (ICML'13)\\n\\n[5] Neural Architecture Search with Bayesian Optimisation and Optimal Transport\\n K. Kandasamy and W. Neiswanger and J. Schneider and B. P{\\'{o}}czos and E. Xing\\n abs/1802.07191\\n\\n[6] Structured Variationally Auto-encoded Optimization\\n X. Lu and J. Gonzalez and Z. Dai and N. Lawrence\\n Proceedings of the 35th International Conference on Machine Learning\\n\\n[7] Automatic chemical design using a data-driven continuous representation of molecules\\n R. G\\u00f3mez-Bombarelli and J. Wei and D. Duvenaud and J. Hern\\u00e1ndez-Lobato and B. S\\u00e1nchez-Lengeling and D. Sheberla and J. Aguilera-Iparraguirre and T. Hirzel and R. Adams and A. Aspuru-Guzik\\n American Chemical Society Central Science\\n\\n[8] Scalable {B}ayesian Optimization Using Deep Neural Networks\\n J. Snoek and O. Rippel and K. Swersky and R. Kiros and N. Satish and N. Sundaram and M. Patwary and Prabhat and R. Adams\\n Proceedings of the 32nd International Conference on Machine Learning (ICML'15)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A nice paper questioned by the significance of the results\", \"review\": \"Review:\\n\\nThis paper proposes a method for finding optimal architectures for deep neural networks based on a teacher network. The optimal network is found by removing or shrinking layers or adding skip connections. A Bayesian Optimization approach is used by employing a Gaussian Process to guide the search and the acquisition function expected improvement. A special kernel is used in the GP to model the space of network architectures. The method proposed is compared to a random search strategy and a method based on reinforcement learning.\", \"quality\": \"The quality of the paper is high in the sense that it is very well written and contains exhaustive experiments with respect to other related methods\", \"clarity\": \"The paper is well written in general with a few typos, e.g., \\n\\n\\t\\\"The weights of the Bi-LSTM \\u03b8, is learned during the search process. The weights \\u03b8 determines\\\"\", \"originality\": \"The proposed method is not very original in the sense that it is a combination of several known techniques. May be the most original contribution is the proposal of a kernel for network architectures based on recurrent neural networks.\\n\\n\\tAnother original idea is the use of sampling to avoid the problem of doing kernel over-fitting. Something that can be questioned, however, in this regard is the fact that instead of averaging over kernels the GP prediction to account for uncertainty in the kernel parameters, the authors have suggested to optimize a different acquisition function per each kernel. This can be problematic since for each kernel over-fitting can indeed occur, although the experimental results suggest that this is not happening.\", \"significance\": \"Why N2N does not appear in all the CIRFAR-10 and CIFAR-100 experiments? This may question the significance of the results.\\n\\n\\tIt also seems that the authors have not repeated the experiments several times since there are no error bars in the results.\\n\\tThis may also question the significance of the results. An average over several repetitions is needed to account for the randomness in for example the sampling of the network architectures to learn the kernels.\\n\\n\\tBesides this, the authors may want to cite this paper\\n\\n\\tHern\\u00e1ndez-Lobato, D., Hernandez-Lobato, J., Shah, A., & Adams, R. (2016, June). Predictive entropy search for multi-objective Bayesian optimization. In International Conference on Machine Learning (pp. 1492-1501).\\t\\n\\n\\twhich does multi-objective Bayesian optimization of deep neural networks (the objectives are accuracy and prediction time).\", \"pros\": [\"Well written paper.\", \"Simply idea.\", \"Extensive experiments.\"], \"cons\": [\"The proposed approach is a combination of well known methods.\", \"The significance of the results is in question since the authors do not include error bars in the experiments.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Typos in the Paper\", \"comment\": \"In the right part of Table 2, 'Architecture Teacher #Params' should be 'Teacher Accuracy #Params' and 'Congiguration Teacher #Params' should be 'Configuration Accuracy #Params'.\"}"
]
} |
|
HyM8V2A9Km | ACTRCE: Augmenting Experience via Teacher’s Advice | [
"Yuhuai Wu",
"Harris Chan",
"Jamie Kiros",
"Sanja Fidler",
"Jimmy Ba"
] | Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failed experience into a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR's adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representations, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with a non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate that it is crucial to use hindsight advice to solve challenging tasks, but we also found that a small amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method. | [
"language goals",
"task generalization",
"hindsight experience replays",
"language grounding"
] | https://openreview.net/pdf?id=HyM8V2A9Km | https://openreview.net/forum?id=HyM8V2A9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Sygs_p5NeN",
"SJg3i9FpJ4",
"rygbwTvyy4",
"SJlXVLXkJN",
"BkgIAzNs0Q",
"HJepDTmj07",
"rye9Dr5FAX",
"H1eOc0EDRQ",
"SJeYHWVvRm",
"SylOg-VwA7",
"Hke7RgNv0X",
"r1gQngEvCQ",
"H1x4lSQDAm",
"B1gVMQ7wRX",
"r1xEsMXDAX",
"BkxtS7Ko2X",
"rJgEVQD9nm",
"H1gEaMA1nX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545018738610,
1544555172140,
1543630168887,
1543611947051,
1543353037861,
1543351652789,
1543247201536,
1543093904290,
1543090497435,
1543090415676,
1543090378714,
1543090347318,
1543087339677,
1543086859621,
1543086747976,
1541276481097,
1541202732264,
1540510395541
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1453/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1453/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper was reviewed by three experts (I assure the authors R3 is indeed familiar with RL and this area). Initially, the reviews were mixed with several concerns raised. After the author response, R2 and R3 recommend rejecting the paper, and R1 is unwilling to defend/champion/support it (not visible to the authors). The AC agrees with the concerns raised (in particular by R2) and finds no basis for overruling this recommendation. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Thank you\", \"comment\": \"I would like to thank the authors for their detailed response and paper updates. In particular, the table illustrating the comparison with [1] is instructive and should definitely be part of the paper. My concern '2a' still stands. Focusing on the VizDoom task, it really seems to me that this paper has applied HER to the VizDoom task, and achieved greater sample efficiency, as well as models that can converge on slightly harder versions of the task. However, since the VizDoom goal is specified with language, and HER is basically agnostic about the goal specification, it is not really clear to me that ACTRCE constitutes a new algorithm in this context. As mentioned in my review, I would really encourage the authors to consider tasks which are not originally specified in language.\\n\\nI will retain my rating (marginally below acceptance).\"}",
"{\"title\": \"Will add the following description to the next revised version.\", \"comment\": \"We thank reviewer for reading our response and the revised submission. Regarding to the description about the extra requirement of the environment, we are very sorry that we mistakenly forgot to update the paper on this matter. In section 4.1, we will add another paragraph called \\\"ACTRCE Teacher Implementation\\\" before \\\"Training details\\\", that states the following:\\n\\nCompared to the baseline DQN, implementing ACTRCE's teacher in our experiments required us to modify the environment to use its internal state (i.e. actual coordinates of the agent versus the objects) to generate the set of instruction goals that were reached (positive reward) or not reached (negative reward) automatically. \\n\\nWe will make sure to add this paragraph in our next revised version. Thank you again for your suggestion.\"}",
"{\"title\": \"Regarding revisions\", \"comment\": \"I read the authors' response and looked at the revised submission. The authors mentioned in their response that they have added the note that as compared to the baselines, the proposed method requires access to the set of goals and extra information about which goal was reached in each episode. Where was this revision added?\"}",
"{\"title\": \"We evaluated models with similar metric to SPL. Our results indicate ACTRCE did better than DQN and A3C when the objects are further from the starting state.\", \"comment\": \"Thank you for your suggestion. We would first want to clarify that in the ViZDoom environment, the episode will be terminated when the maximum episode length is reached, or the agent reaches (being within certain threshold distance from) an object.\\n\\nWe followed your recommendation and tried to come up with a metric similar to SPL. Our results indicate that, as the objects are further from the starting state (approximated by the episode length before reaching the target), the improvement of our proposed method over A3C and DQN becomes more pronounced. A3C and DQN have a drastic performance decrease for longer episodes, whereas ACTRCE remains unaffected. \\n\\nSpecifically, we performed the following evaluation. We take the final trained model, and run 100 test episodes, each with different object combination and placement, noting whether each run was successful or not (binary value). We then construct a *cumulative success rate* curve, where the x-axis is the episode length, and the y-axis is the fraction of total number of episodes that were successful and had episode length *less or equal* to the x-axis value. We can see that the curve is monotonically increasing, with the y-axis being the overall success rate when the x-axis value is the maximum episode length. This is similar in spirit to the precision-recall curve; the larger the area under this curve is, the better the model is, because it will be able to have more of successful trajectories that are short early on.\\n\\nWe added a new Appendix Section I to discuss this metric and show results for the 3 ViZDoom environment tasks (Figure 12). In the 5 objects hard mode (Figure 12a), we observe that all 3 training algorithms had similar performance until around episode length of 20, where ACTRCE has more successful trajectories which are longer. In the 7 objects hard mode (Figure 12b), ACTRCE maintains a similar behaviour while the baseline DQN essentially was only able to get success on very short episodes (i.e. when the target object was very close by). Lastly, in the 5 objects composition task (Figure 12c), the curve for ACTRCE indicates that there were 2 groups of trajectories: one requiring less than 10 time-steps, while another requiring over 20 time-steps. The former group occurs when the two target objects are adjacent to one another, making it easy to reach the second object after the first in only a few time steps. The latter group occurs when the target objects are not adjacent to each other, and thus requires the agent to more carefully turn around and avoid hitting other objects when trying to reach the second target.\"}",
"{\"title\": \"Our experimental protocols and training/test splits mirror previous established works.\", \"comment\": \"We have used different mazes for training and test evaluation in our multi-task experiments on both KrazyGridWorld and VizDoom. Our experimental protocols and training/test splits mirror previous established works [1,2]. Our Zero-Shot Learning (ZSL) experiments are replicated from Chaplot et al. 2017 using the same train/test split for the language commands. Our results are directly comparable with the prior works on these two environments. Please see below for the details:\\n\\nFor KrazyGridWorld (KGW), during training, we sample a random configuration of the maze at the beginning of the episode, as well as randomize the agent start position. This means that the positions and the colors of the goal and the lava changes when we reset the environment. We did not perform a train and test split. However, given there are many possible configurations of the maze (we have 9x9 choose 9 ~ 2.6 x 10^11 configurations if ignoring the attributes), we do not think the agent is able to memorize all mazes, and the success rates we report should be indicative of the generalization behavior.\\n\\nFor ViZDoom, we have a train and test split for the language commands (as done in Chaplot et al. 2017). We sample an instruction from the training set (during training) or from the test set (during the Zero Shot evaluation), and then sample the required number of objects and randomize their positions in the room. In the hard mode, the objects locations are generated via the Poisson distribution, and the agent\\u2019s position/view direction is also randomized. Therefore, each instantiation of the environment most likely has a different placement and combination of objects from any environment ever seen before. \\n\\n[1] Chaplot et al., \\u201cGated-Attention Architectures for Task-Oriented Language Grounding\\u201d. https://arxiv.org/abs/1706.07230. AAAI, 2018\\n[2] Stadie et al, \\u201cThe Importance of Sampling in Meta-Reinforcement Learning\\u201d, https://arxiv.org/abs/1803.01118. NeurIPS, 2018.\"}",
"{\"title\": \"additional information needed\", \"comment\": [\"Two points are missing in the rebuttal. Please address them as well:\", \"I had asked whether the train and test configurations are the same or not. This is left unanswered.\", \"Episode length alone does not show anything. For example, an agent that learns to stop after one step has episode length of 1. Please include a 2D plot of success rate vs episode length or some metric similar to SPL that captures both success rate and episode length.\"]}",
"{\"title\": \"Paper Revision Update\", \"comment\": \"We thank the reviewers for their response and suggestions. We have updated the paper based on their feedback, with the following changes:\\n \\n1. Additional ViZDoom experiments with A3C reproduced baseline performance for single target with 5 objects in easy and hard mode, in Appendix G (requested by all reviewers)\\n2. Added average episode length in ViZDoom in Appendix H\\n3. Added more details on teacher advice generation in ViZDoom in Appendix B.5\\n4. Added more literature review, especially with works suggested by reviewer 3\"}",
"{\"title\": \"[1/4] Thank you for your feedback. We hope the reviewer can take time to revisit the revised version of our paper in light of our response, and reconsider the score\", \"comment\": \"Thank you for your time in reviewing our work.\\n \\nMany of the existing works on grounded RL and language grounded navigation approaches rely on human engineered auxiliary reward [4,5] or reward shaping [6,7]. We would like to emphasize that the main focus of our work is to tackle the sparse reward, multi-goal reinforcement learning problem. In particular, we used language as the goal representation. In our experiments, we chose to use navigation-style environments in 2D (KrazyGridWorld) and 3D (ViZDoom) with goal specified with language and receiving sparse reward. We strongly disagree that these are 'toy' environments. We believe that the sparsity of the reward in the current environments we investigated, as well as their language semantics, still provide enough complexity to illustrate the benefits of our proposed framework from the perspective of reinforcement learning algorithms:\\nACTRCE can solve challenging 3D navigation tasks, such as the introduced composition task with 2 target objects in ViZDoom, while using a non-language representation (i.e. one-hot representation for each instruction) failed to learn.\\nLittle amounts of hindsight advice (1%) is sufficient for learning to take off, which can help this method be practical with real human feedback. The original HER paper always added hindsight experience during the entire training, since obtaining the goal was trivial in their case.\\nAs with [1], our agent can generalize to unseen instructions, and in additional also to instructions with unseen lexicons during training.\\n \\nWe do appreciate Reviewer-3's feedback that there are many other indoor navigation environments (such as House3D, AI2-THOR), along with specialized metrics (such as the suggested SPL), and terminologies (such as the different types of zero shot tasks). We believe that our work can be applied to the tasks in those environments to improve the sample efficiency and performance.\\n\\nWe hope the reviewer can take time to revisit the revised version of our paper in light of our response, and reconsider the score.\", \"we_address_the_specific_issues_raised_below\": \"\"}",
"{\"title\": \"[2/4] Response to Reviewer 3\", \"comment\": \">Concerns about the baselines: \\u201cThe result of DQN is surprising (it is always zero). DQN is not that bad. Probably, there is a bug in the implementation.\\u201d, \\u201cDoes the proposed method provide improvements over A3C as well?\\u201d\\nWe want to emphasize that the environments we experimented on have sparse reward. DQN is known to have difficulty in performing in sparse reward environments, as noted in the original DQN paper (e.x. on Montezuma\\u2019s revenge) as well as HER paper. We have tested our DQN implementation in less challenging setting for ViZDoom, using 5 objects--instead of 7--in hard mode (single target), which is identical to Chaplot et al. 2017 set up: \\n\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| | MT | ZSL |\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| # of frame s | 8M | 16M | 150M | N/A | 16M | 150M | N/A |\\n| A3C [1] | - | - | - | 0.83 | - | - | 0.73 |\\n| A3C (Reprod) | 0.10 +/- 0.01 | 0.09 +/- 0.04 | 0.73 +/- 0.01 | - | - | 0.71 +/- 0.02| - |\\n| DQN | 0.4 +/- 0.2* | 0.73 +/- 0.08 | - | - | 0.75 +/- 0.05 | - | - |\\n| ACTRCE(Ours)|*0.69 +/- 0.04*|* 0.83 +/- 0.02*| - | - |*0.77 +/- 0.02*| - | - |\\n+--------------------+--------------------+--------------------- +------------------+---------+--------------------+------------------+-------+\\n\\nIn the DQN and ACTRCE experiments, we trained for 16 million frames. In that environment set up, our DQN implementation was able to learn, but with a larger variance during training. Note that ACTRCE was able to achieve almost the same multitask (MT) performance as DQN with half the frames (ACTRCE 0.69 at 8M vs DQN 0.73 at 16M frames). As the reward becomes more sparse, then our contribution of applying hindsight feedback becomes crucial to achieving good performance in the environment. \\n\\nWe had attempted to reproduce the A3C results from Chaplot et al. 2017 based on their available implementation online [2] for the single target task with 5 objects in easy and hard mode, but was not able to achieve the same published performance for the hard task, given our computation budget. Note that A3C was almost an order of magnitude less sample efficient than DQN/ACTRCE. Given the same number of frames (16M), A3C has not started to learn. Only by 150M frames that the performance of DQN and A3C was similar.\\n\\nWe have included this table in Appendix G in our revision of the paper. \\n\\n>\\u201cIt is not clear how a description for a point along the way is provided (when the agent is not at a target). It is not clear how those feedback sentences are generated.\\u201d\\nWe refer the reviewer to the paragraph just above section 3.1, where we state \\u201cNote that in the above formulation, we assume a MDP setting, and let the teacher give advice based solely on the terminal state\\u201d. Hence in our experiments, the teacher advice is only generated from the terminal state of the agent. We then make a copy of the transitions in the episode and replace the goal with the teacher advice and corresponding reward (1 or 0). 
For ViZDoom, we simply do not give any positive teacher advice when the agent does not reach any object, but we still give negative teacher advice (reward = 0) describing one of the objects that the agent did not reach in the episode. We had originally tried giving the positive advice \\u201cReach no object\\u201d (reward = 1) when the agent did not reach any object at the end of the episode, but it did not improve the performance.\"}",
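Putting the two comments above together, the hindsight augmentation reduces to copying an episode's transitions and swapping in the teacher's advice as the goal, with the advice's reward applied at the terminal step. A minimal sketch follows; the transition field layout is an assumption, not the authors' exact data structure.

```python
def relabel_with_advice(episode, advice_sentence, advice_reward):
    # episode: list of (obs, action, reward, next_obs, done) tuples
    relabeled = []
    for t, (obs, act, _, next_obs, done) in enumerate(episode):
        # sparse reward: the advice's reward applies only at the final step
        r = advice_reward if t == len(episode) - 1 else 0.0
        relabeled.append((obs, act, r, next_obs, done, advice_sentence))
    return relabeled  # stored in the replay buffer alongside the originals
```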
"{\"title\": \"[3/4] Response to Reviewer 3\", \"comment\": \">\\u201cThe environments are toy environments. The experiments should be carried out in more complex environments such as THOR or House3D that include more semantics.\\u201d, and \\u201cIt seems the same environment is used for train and test.\\u201d\\n\\nThank you for suggesting the AI2-THOR [3] and House3D [4] environments. These environments offer more realistic 3D environment rendering than ViZDoom. However, we believe that the ViZDoom environment still provides comparable semantics as the RoomNav task introduced in House3D. Given a house, there are 5 possible rooms (ex: kitchen, dining room, etc.) and 15 possible objects (ex. shower, sofa, etc.). In ViZDoom, there is only 1 room but 18 possible objects (5 types with different colour and sizes). The RoomNav environment directly encode each goal (called \\u2018concepts\\u2019) as a one-hot vector, similarly to our one-hot instruction representation experiment. A particular challenging aspect of ViZDoom instruction goal is that an instruction can refer to several possible objects amount the 18 objects (ex: \\u201cGo to the blue object\\u201d), although we restrict to having only one correct object present in a particular episode. Most importantly, the current setup in RoomNav uses reward shaping via auxiliary reward based on the approximate shortest distance between the agent and the target room, and various penalties for hitting an obstacle, and time penalty. In contrast, our ViZDoom set up has sparse reward, which is aligned with the main problem of the paper. \\n\\nFor AI2-THOR, the action space discretizes the scene space into a grid-world representation, due to 90 degree turn angle. In contrast, the turn angle in ViZDoom is much less than 90 degrees, leading to more fine grain control in the agent trajectory. Similarly to ViZDoom, an AI2-THOR scene is contained in 1 room, but there are 120 possible rooms belonging to one of the four room types (kitchen, living room, bedroom, bathroom). We acknowledge that the diversity of the scenes in AI2-THOR is greater. However, the arrangement of the objects in the scenes are fixed in AI2-THOR, while in ViZDoom the combination of objects present in the scene (upper bound of (18 choose 5) = 8568 and (18 choose 7) = 31824 combinations), as well as their spatial location, are randomized at each episode. This means that in the ViZDoom hard mode, every episode has most likely a unique room for navigation.\\n\\n>\\u201cThe paper discusses the advantages of word embeddings over one-hot vectors. That is obvious and not the goal of this paper.\\u201d\\n\\nWe apologize for a possible misunderstanding of the purpose of our one-hot vector instruction representation compared to the GRUs and pre-trained sentence representation. One of the purpose of the paper is to explore whether language representation of the goal can benefit HER. Then a reasonable baseline is to treat each instruction goal independently, i.e. one hot vector. Hence we approached this question by representing the goal either as a sentence (i.e. a sequence of tokens), versus as a one-hot vector (where the dimension is the number of training instructions). In both cases, the goal is eventually embedded to a fixed length vector, which is used to compute the gated attention values. \\n\\nWe will make this clearer in the next revision of our submission. \\n\\n>\\u201cThe episode length should be reported as well. I suggest using the SPL metric proposed by Anderson et al. 
in \\\"On Evaluation of Embodied Navigation Agents\\\".\\u201d\\nWe agree that episode length will be an informative metric. However, it is not trivial to extend the SPL metric to our environment as the environment does not provide the optimal steps to the target. However, we will provide a plot of the average episode length over the training timesteps, which shows a decreasing trend as the training progresses. \\n\\nPlease refer to Appendix H in the revised draft.\"}",
"{\"title\": \"[4/4] Response to Reviewer 3\", \"comment\": \">\\u201cReplacing one word with its synonym is considered as zero-shot. That is not really a zero-shot setting. Please refer to the following paper, which is missing in the related work:\\nInteractive Grounded Language Acquisition and Generalization in a 2D World, ICLR 2018\\u201d\\n\\nThank you for bringing Yu et al. 2018\\u2019s work to our attention, we will cite their paper in the related work section. In their terminology, our zero-shot results is equivalent to their ZS1 ( \\u201cinterpolation\\u201d to new combinations of previously seen words for the same use case). Our one word synonym experiment was applied to the ZS1 task (i.e. the testing instructions) as well as to training instructions. We feel that this is closer to the ZS2 in Yu et al. 2018\\u2019s work, as it is extrapolating to new words transferred from other use cases and models--in this case, a pre-trained sentence embedding model. Our work differs in that we are only training the agent on the navigation task, while Yu et al. 2018 has both navigation task and a question-answering task. Their ZS2 sentences contain a word that does not appear in the Navigation task training sentence but *does* appear in their Question-Answering training answer. \\n\\n>\\u201cWhat is the difference between this method and providing a large negative reward at a non-target object?\\u201d\\n\\nProviding large negative reward at a non-target object can lead to undesired behaviour of avoiding reaching any of the objects, only hitting walls or wandering around until the episode terminates after a maximum number of timesteps. This is because the expected return of not hitting anything is zero, compared to when the agent randomly reaches one of the (mostly likely) incorrect object and receiving a large negative reward. In comparison, our method only rewards the agent when it reaches the target object for the instruction.\", \"references\": \"[1] Chaplot et al., \\u201cGated-Attention Architectures for Task-Oriented Language Grounding\\u201d. https://arxiv.org/abs/1706.07230. ArXiv, 2017\\n[2] https://github.com/devendrachaplot/DeepRL-Grounding \\n[3] Zhu et al, \\u201cTarget-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning\\u201d. https://arxiv.org/abs/1609.05143. ArXiv, 2018\\n[4] Yu el al., \\u201cBuilding Generalizable Agents with a realistic and rich 3D environment\\u201d. https://arxiv.org/abs/1801.02209. ICLR, 2018\\n[5] Hermann et al. \\u201cGrounded language learning in a simulated 3d world.\\u201d https://arxiv.org/abs/1706.06551. ArXiv, 2017\\n[6] Misra et al.. Mapping instructions and visual observations to actions with reinforcement learning. EMNLP, 2017\\n[7] Das et al., \\u201cEmbodied Question Answering\\u201d. https://arxiv.org/abs/1711.11543. ArXiv, 2017\"}",
"{\"title\": \"Thank you for your comments and feedback! We made clarifications to our paper\", \"comment\": \"Thank you for taking the time to reviewing our paper and we appreciate the positive feedback.\", \"we_will_address_your_clarification_questions_below\": \">\\u201cIn Table 1, how many frames were DQN and ACTRCE trained for? I am wondering why the MT performance for DQN is so low. Did the DQN have Gated-Attention?\\u201d\\n\\nBoth DQN and ACTRCE were trained with 40 million frames, and both had identical architecture which uses Gated-Attention as in Chaplot et al. 2017 [1]. The difference with Chaplot et al\\u2019s set up is that we increased the number of objects from 5 to 7, and we also increased the size of the room by 50%. This made the reward in the environment even more sparse. We have tried experimenting on the easy and hard task using 5 objects (similar to [1]), and found that in those cases, the baseline DQN was in fact able to learn (see the next point below).\\n\\n>\\u201cIn Appendix D Training details, it is mentioned that you reproduce training using Asynchronous Advantage Actor Critic (A3C), where is A3C used in the experiments?\\u201d\\n\\nThank you for reviewer pointing this out. We had mistakenly left out this result in our initial submission. We had attempted to reproduce the A3C results from Chaplot et al. 2017 [1] based on their available implementation online [2] for the single target task with 5 objects, but was not able to achieve the same performance in the hard mode as published, given our computation budget. \\n\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| | MT | ZSL |\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| # of frame s | 8M | 16M | 150M | N/A | 16M | 150M | N/A |\\n| A3C [1] | - | - | - | 0.83 | - | - | 0.73 |\\n| A3C (Reprod) | 0.10 +/- 0.01 | 0.09 +/- 0.04 | 0.73 +/- 0.01 | - | - | 0.71 +/- 0.02| - |\\n| DQN | 0.4 +/- 0.2* | 0.73 +/- 0.08 | - | - | 0.75 +/- 0.05 | - | - |\\n| ACTRCE(Ours)|*0.69 +/- 0.04*|* 0.83 +/- 0.02*| - | - |*0.77 +/- 0.02*| - | - |\\n+--------------------+--------------------+--------------------- +------------------+---------+--------------------+------------------+-------+\\n\\nIn the DQN and ACTRCE experiments, we trained for 16 million frames. We will include these results for the 5 objects (easy and hard mode) in the next revision in Appendix G. \\n\\n>\\u201cThe composition task is very interesting, did the agent receive intermediate rewards for completing a part of the instruction in this task?\\u201d\\n\\nIn the composition task, the reward is still sparse--the agent only receives a reward of 1 if the agent reached both of the desired objects during the episode, in any order. A reward of 0 is given otherwise, i.e. when the agent reached only none/one object, or two objects that are not both desired ones. \\n\\n>\\u201cIn Appendix D Training details, what do you mean by 'chosen from the range {1000, 10000, 10000}'?\\u201d\\n\\nThe third number was a typo (100000). We simply wanted to denote that we tried those 3 replay buffer sizes when performing hyperparameter tuning. 
\\n\\n>\\u201cIt is important to note that as compared to the baselines, the proposed method requires access to the set of goals and extra information about which goal was reached in each episode.\\u201d\\n\\nWe will note this in our revision of the paper!\\n\\n[1] Chaplot et al., \\u201cGated-Attention Architectures for Task-Oriented Language Grounding\\u201d. https://arxiv.org/abs/1706.07230\\n[2] https://github.com/devendrachaplot/DeepRL-Grounding\"}",
"{\"title\": \"[1/2] Thank you for your feedback! We added comparison to previous work and clarify on the tasks\", \"comment\": \"Thank you for your time reviewing our paper and providing constructive feedback. The two main concerns are (1) the lack of comparison to existing work and (2) evaluating on tasks that already uses language specification. We provide experimental results for (1) and make additional justification for (2).\", \"concern_1\": \">\\u201dIt would seem like a natural comparison would be to take the model from Chaplot (leaving the task and architecture etc unchanged) and train it using ACTRCE.\\u201d\\n\\nWe indeed took the architecture from Chaplot et al. 2017 and removed one of the head, to predict only the Q-value instead of actor policy and critic heads for the A3C, for our DQN/ACTRCE training. For the 5 objects in hard mode (single target) we achieved similar performance to Chaplot et al. using ACTRCE, but with *an order of magnitude more sample efficiency* (16 million frames vs 150 million frames): \\n\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| | MT | ZSL |\\n+---------------------+------------------------------------------------------------------------+-------------------------------------------------+\\n| # of frame s | 8M | 16M | 150M | N/A | 16M | 150M | N/A |\\n| A3C [1] | - | - | - | 0.83 | - | - | 0.73 |\\n| A3C (Reprod) | 0.10 +/- 0.01 | 0.09 +/- 0.04 | 0.73 +/- 0.01 | - | - | 0.71 +/- 0.02| - |\\n| DQN | 0.4 +/- 0.2* | 0.73 +/- 0.08 | - | - | 0.75 +/- 0.05 | - | - |\\n| ACTRCE(Ours)|*0.69 +/- 0.04*|* 0.83 +/- 0.02*| - | - |*0.77 +/- 0.02*| - | - |\\n+--------------------+--------------------+--------------------- +------------------+---------+--------------------+------------------+-------+\\n\\nPerformance averaged over 2 random seeds. We used the available online implementation for A3C [2] for reproducing the results, but was unable to match the published performance.\\nWe will include these results (including training plots) for the 5 objects (easy and hard mode) ViZDoom environment in the next revision in Appendix G.\", \"concern_2a\": \">\\u201cIn the VizDoom task, the goal specification is already in (templated) language. Given that this is the case, and the mapping from states to goals can be extracted from the environment anyway, it seems like the method that is applied really just reduces to a vanilla implementation of HER.\\u201d\\n\\nIn general, the HER framework requires a strategy for sampling goals for replay and the reward function relating the state, action, and goal to a scalar reward. In this sense, our work is one instantiation of HER, where the sampling strategy are the teachers\\u2019 language advice (for achieved goals with reward = 1 or unachieved goals with reward = 0). We are investigating a non-linear mapping between the state and the goal space (i.e. from pixels to sequence of tokens), in contrast to the linear mapping of the vanilla HER where the goal space is the same as the state space, or a subset of the dimensions of the state space. \\n\\nAnother distinction with the original HER is that they assumed that a state only satisfies one goal. However, in our case, the mapping from state to goal is one-to-many: there can be many different language sentences that describe what the agent accomplished. 
An agent reaching a short blue torch can be described with \\u201cGo to short blue torch\\u201d, or \\u201cGo to torch\\u201d (if there are no other torches in the environment), or \\u201cGo to the blue object\\u201d, etc. In the original HER, generating goals means sampling from the states visited, usually within the episode. With our setup, the goals can also come from the set of training instructions that the agent has not encountered before. \\n\\n[1] Chaplot et al., \\u201cGated-Attention Architectures for Task-Oriented Language Grounding\\u201d. https://arxiv.org/abs/1706.07230\\n[2] https://github.com/devendrachaplot/DeepRL-Grounding\"}",
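To make the relabeling scheme discussed in this response concrete, here is a minimal sketch of hindsight relabeling with a language teacher. It is not the authors' implementation; `Transition` and `teacher.describe` are hypothetical names, and the teacher is assumed to return the (possibly many) sentences describing what the terminal state achieved, matching the one-to-many state-to-goal mapping described above.

```python
import random
from collections import namedtuple

Transition = namedtuple("Transition", "state action reward next_state goal")

def relabel_with_teacher(episode, teacher):
    """Relabel an episode against a goal the agent actually achieved."""
    achieved = teacher.describe(episode[-1].next_state)  # one-to-many mapping
    if not achieved:
        return []  # terminal state satisfies no describable goal
    goal = random.choice(achieved)  # e.g. "Go to torch" or "Go to the blue object"
    return [
        t._replace(goal=goal, reward=1.0 if i == len(episode) - 1 else 0.0)
        for i, t in enumerate(episode)
    ]
```

The relabeled transitions would then be added to the replay buffer alongside the original (reward-0) episode, which is what turns otherwise failed rollouts into positive learning signal.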
"{\"title\": \"[2/2] Thank you for your feedback! We added comparison to previous work and clarify on the tasks\", \"comment\": \"Concern 2b: > \\u201cI expected the ACTRCE approach to be applied to a task where the goal was not originally specified in language, perhaps by collecting language from human teachers. This would be a much more interesting experiment, addressing the question of whether human feedback in natural language can help the agent learn more quickly.\\u201d\\n\\nWe agree that applying the ACTRCE to a problem which the goal was not originally specified in language by collecting language from human teachers would be an interesting approach. In our work, we took a different approach where we converted a task originally specified in language and remove the language aspect, turning an instruction into one hot vector. In this setting, we were able to show that as we transitioned from single to composition task, using language representation helped the agent learn better than using a one-hot approach for each instruction\", \"smaller_concern\": \">\\\"ACTRCE - possibly the most tortured acronym in recent memory! How should it be pronounced?\\\"\\n\\nACTRCE is pronounced as \\\"actress\\\"\"}",
"{\"title\": \"Simple and intuitive idea but evaluation seems to be quite lacking\", \"review\": \"This paper considers the assumption implicit in hindsight experience replay (HER), namely that we have access to a mapping from states to goals. Rather than satisfying this requirement by defining goals as states, which involves great redundancy, the paper proposes a natural language goal representation. Concretely, for every state a teacher is used to provide a natural language description of the goal achieved in that state, which can be used to directly relabel the goal so the episode can be used as a positive experience.\", \"strengths\": [\"the proposed idea is simple and intuitively appealing, and shows much better results than the DQN baseline.\"], \"weaknesses\": [\"In the VizDoom task, the goal specification is already in (templated) language. Given that this is the case, and the mapping from states to goals can be extracted from the environment anyway, it seems like the method that is applied really just reduces to a vanilla implementation of HER. There seems to be little novelty in this. From reading the introduction and method, I expected the ACTRCE approach to be applied to a task where the goal was not originally specified in language, perhaps by collecting language from human teachers. This would be a much more interesting experiment, addressing the question of whether human feedback in natural language can help the agent learn more quickly.\", \"Even leaving aside the previous concern, it seems very difficult to put this work in the context of previous work on the same tasks. For example, it is not clear why there are no comparisons to the previous work on instruction following in VizDoom, as the setting appears to be exactly like Chaplot et al. 2017. It would seem like a natural comparison would be to take the model from Chaplot (leaving the task and architecture etc unchanged) and train it using ACTRCE. Is there any reason why this can\\u2019t be done? There is already so much existing work in this space, it seems quite unusual that the proposed new method is not compared to any existing work on an existing task.\"], \"summary\": \"This is a simple and intuitively appealing idea, but I find the evaluation to be quite lacking because the tasks already use a language specification (such that ACTRCE seems to be vanilla HER in application) and there are no comparisons to previous work. These two concerns seem quite substantial to me and make it difficult to recommend acceptance.\", \"smaller_issues\": [\"ACTRCE - possibly the most tortured acronym in recent memory! How should it be pronounced?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, very interesting analysis and insights\", \"review\": [\"This submission presents a method to improve the sample-efficiency of instruction-following models by leveraging the Hindsight Experience Replay framework with natural language goals.\", \"Here are my comments/questions:\", \"The paper is well written and easy to follow, it introduces a simple idea which achieves very good results.\", \"In addition to improving the performance as compared to the baselines, the authors perform a wide variety of experiments such as analysis of language representations, visualization of embeddings, etc. which lead several insightful results such as ability of sentence embeddings to generalize to unseen lexicon, ability of the model to perform well with just 1% advice.\", \"It is important to note that as compared to the baselines, the proposed method requires access to the set of goals and extra information about which goal was reached in each episode.\", \"In Table 1, how many frames were DQN and ACTRCE trained for? I am wondering why the MT performance for DQN is so low. Did the DQN have Gated-Attention?\", \"The composition task is very interesting, did the agent receive intermediate rewards for completing a part of the instruction in this task?\", \"Some implementation details questions:\", \"In Appendix D Training details, what do you mean by 'chosen from the range {1000, 10000, 10000}'?\", \"In Appendix D Training details, it is mentioned that you reproduce training using Asynchronous Advantage Actor Critic (A3C), where is A3C used in the experiments?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"not clear; surprising DQN results; toy environments\", \"review\": \"Paper Summary:\\nThe idea of the paper is to improve Hindsight Experience Replay by providing natural language instructions as intermediate goals.\", \"paper_strengths\": \"Unfortunately, there is not many positive points about the paper except that it explores an interesting direction.\", \"paper_weaknesses\": \"\", \"i_vote_for_rejection_of_the_paper_due_to_the_following_issues\": [\"It is not clear how a description for a point along the way is provided (when the agent is not at a target). It is not clear how those feedback sentences are generated. That is the main claim of the paper and it is not clear at all.\", \"The result of DQN is surprising (it is always zero). DQN is not that bad. Probably, there is a bug in the implementation. There should be comments on this in the rebuttal.\", \"According to several recent works, algorithms like A3C work much better than DQN. Does the proposed method provide improvements over A3C as well?\", \"The only measure that is reported is success rate. The episode length should be reported as well. I suggest using the SPL metric proposed by Anderson et al. in \\\"On Evaluation of Embodied Navigation Agents\\\".\", \"Replacing one word with its synonym is considered as zero-shot. That is not really a zero-shot setting. Please refer to the following paper, which is missing in the related work:\", \"Interactive Grounded Language Acquisition and Generalization in a 2D World, ICLR 2018\", \"The environments are toy environments. The experiments should be carried out in more complex environments such as THOR or House3D that include more semantics.\", \"What is the difference between this method and providing a large negative reward at a non-target object?\", \"The paper discusses the advantages of word embeddings over one-hot vectors. That is obvious and not the goal of this paper.\", \"It seems the same environment is used for train and test.\", \"------------------------\"], \"post_rebuttal_comments\": [\"Most of my concerns have been addressed. My new rating is 5. I like the idea of having a compact representation for the hindsight experience replay, but there are still a few issues:\", \"I expected more complexity in vision and language. I do not agree with the rebuttal that AI2-THOR or House3D are not suitable. This level of complexity would be ok if this paper was among the first ones to explore this domain, but there are already several works. The zero-shot setting (changing the word with its synonym) is also so simplistic.\", \"The proposed method uses much more annotations than the baselines so the comparisons are not really fair. This information should have been added to the baseline to see how this additional information changes the performance. Basically, it is not clear if the improvement should be attributed to the extra annotation or the way the advice is given.\", \"The writing is still confusing. For instance, it is mentioned that \\\"Concretely, for each state s \\u2208 S, we define T as a teacher that gives an advice T(s)\\\", while that is not true since later it is mentioned that \\\"the teacher give advice based solely on the terminal state\\\". These statements are contradictory, and it is not trivial at all to provide an advice for each state.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BygIV2CcKm | Learning to Augment Influential Data | [
"Donghoon Lee",
"Chang D. Yoo"
] | Data augmentation is a technique to reduce overfitting and to improve generalization by increasing the number of labeled data samples by performing label preserving transformations; however, it is currently conducted in a trial and error manner. A composition of predefined transformations, such as rotation, scaling and cropping, is performed on training samples, and its effect on performance over test samples can only be empirically evaluated and cannot be predicted. This paper considers an influence function which predicts how generalization is affected by a particular augmented training sample in terms of validation loss. The influence function provides an approximation of the change in validation loss without comparing the performance which includes and excludes the sample in the training process. A differentiable augmentation model that generalizes the conventional composition of predefined transformations is also proposed. The differentiable augmentation model and reformulation of the influence function allow the parameters of the augmented model to be directly updated by backpropagation to minimize the validation loss. The experimental results show that the proposed method provides better generalization over conventional data augmentation methods. | [
"data augmentation",
"influence function",
"generative adversarial network"
] | https://openreview.net/pdf?id=BygIV2CcKm | https://openreview.net/forum?id=BygIV2CcKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlFiqpNgN",
"ByeE9y0zeV",
"rylTvkCFA7",
"Bke0k1CFCQ",
"rJgOoATFCm",
"HkxHblgRnm",
"BkgxppXT2X",
"Byld6s022Q"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545030304877,
1544900492392,
1543262052780,
1543261925586,
1543261856409,
1541435389084,
1541385655541,
1541364672082
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1452/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1452/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1452/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1452/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1452/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1452/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1452/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1452/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for the detailed response. Though I partially agree with the author's responses (e.g., I do not agree with the argument about randomness), I believe this paper shows enough values to be above the bar.\"}",
"{\"metareview\": \"This paper proposes and end-to-end trainable architecture for data augmentation, by defining a parametric model for data augmentation (using spatial transformers and GANs) and optimizing validation classification error through the notion of influence functions. Experiments are reported on MNIST and CIfar-10.\\n\\nThis is a borderline submission. Reviewers found the theoretical framework and problem setup to be solid and promising, but were also concerned about the experimental setup and the lack of clarity in the manuscript. In particular, one would like to evaluate this model against similar baselines (e.g. Ratner et al) on a large-scale classification problem. The AC, after taking these comments into account and making his/her own assessment, recommends rejection at this time, encouraging the authors to address the above comments and resubmit this promising work in the next conference cycle.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting contribution, but not fully developed yet.\"}",
"{\"title\": \"Responses to AnonReviewer2\", \"comment\": \"The authors would like to thank all the reviewers for their valuable comments. There seems to be a gap between what was intended to be conveyed by the manuscript and what was understood by the reviewer. Hopefully, the revised manuscript will allow the reviewer to have a better understanding of what was intended.\\n\\n[R] For me, the argument of the paper is ambitious. Data augmentation for DNN includes different perspective, including nonlinearity, adversarial etc. Generalization of spatial and appearance models is not enough. The model formulate from a simple classification setting but does not involve too many for DNN models. I put more references below. \\n[A] The proposed transformation model generalizes non-rigid (nonlinear data augmentation is a rather ambiguous term-the reviewers are probably referring to non-rigid warping) warping. The adversarial method is limited in their flexibility of transformation as described in Section 2.1. This is because the adversarial update on flexible and complex transformation model yields an adversarial example. The paper only considered the classification setting, but extending it to other tasks is straightforward given the differentiable objectives (Eqs 14 and 19 are not restricted to classification setting).\\n\\n[R] The experimental results are not strong. Not all strong baselines are included (I put some in the references). The improvements are marginal. Besides, I need more experimental setting information. \\n[A] Thank you for the baselines of (semi-)supervised learning algorithms; however, the paper is not about the semi-supervised learning but is about the data augmentation. The proposed augmentation method can be used in conjunction with methods proposed in [a,b,c]. For example, [c] reported the results with or without augmentation in Tables 1 and 2. The benefit of the proposed method can be investigated by measuring the performance gain of the proposed augmentation method in comparison with the conventional augmentation method or without augmentation given the algorithm in [c].\\n\\n[R] The writing is not clear. For the related work part, it included many paragraph which are not related to the work, (e.g. GANs). In the introduction part, it did not mention the generalization of both spatial and appearance models, which is the main contribution. \\n[A] The manuscript has been revised by many people including a native English speaker to improve readability.\\nAs we stated in Section 2.2, we included GAN in related work to exploit its potential to augment data by generating class-conditional images. \\nAs AnonReviewer1 summarized, our main contribution is not the generalization of both spatial and appearance models, but our contribution is to propose an extension of the influence function for data augmentation. Influence of augmentation on validation loss is approximated and the augmentation model is learned under this approximation. Differentiable transformation models, the generalization of both spatial and appearance models, are proposed to carry out gradients of influence function to the transformation model. Also, we briefly mentioned the generalization of both spatial and appearance models by \\u201cWe also propose a differentiable augmentation model that generalizes the conventional augmentation method by the composition of predefined transformations\\u201d.\"}",
"{\"title\": \"Responses to AnonReviewer3\", \"comment\": \"We appreciate you for the detailed comments. We would like to share our responses to the concerns raised by the reviewer.\\n\\n[R] Experimental results can be stronger. Especially when compared to Ratner et al., this proposed method results in marginal performance gain. Given that Ratner et al.\\u2019s method trained the data augmentation module without supervision, the supervised learning in this paper does not show strong results. In addition, the paper did not report results on a more practical dataset (such ImageNet and Places). Even for Cifar-10, the reported numbers are away from the state-of-the-art. It is important to show the practical significance of the proposed method. \\n[A] We would like to point out that significance of supervision during learning an augmentation module is different from that of during learning a classification module. Supervision of the classification module (classical supervised learning) is expected to show dramatic performance gain over that of without supervision (classical unsupervised learning); however, supervision of the augmentation module may not show significant performance gain because the performance is measured by the end classifier which already includes label supervision.\\nThe experimental results focused on comparisons between the proposed method, heuristic and Ratner et al. under the exact same setting (hyperparameters and an architecture of the end classifier); however, we agree with the comment and were working on the experiments on ImageNet. But single run of the end classification model take roughly one week on ImageNet. Alternatively, we are currently working on the experiments on Cifar-10 for various state-of-the-art architectures and will report the results as soon as possible.\\n\\n[R] Data augmentation is naturally expected to be random, but the proposed method seems to learn a deterministic parameter for the augmenting transformation, which looks unnatural and limited. (Please clarify if I missed anything.) \\n[A] As described in the manuscript, the experiments were conducted in the same setting as Ratner et al.: training images are randomly transformed (crop and flip) before they fed into the transformation model. This injects randomness to augmented images. We revised the manuscript to include this process for clarity.\\n\\n[R] The proposed method requires a parametric model (e.g, STN, GAN). However, differentiable parametric models are not always easy to design. This probably can be the biggest obstacle to apply the proposed method widely. \\n[A] In deep learning, to find stable setups of hyperparameters and an architecture is not easy, especially for GAN, as you pointed out. But we found that using the known stable setup (from the original WGAN-GP paper) is enough to learn transformation models. For detail, we used the same setup as the original WGAN-GP paper (see the revised paper Appendix A and B):\", \"optimizer\": \"ADAM, lr=0.0001, beta_1=0.5, beta_2=0.9,\", \"generator\": \"Stacked transposed convolutional neural networks with ReLU,\", \"discriminator\": \"Stacked convolutional neural networks with LeakyReLU (0.2)\\nOther GAN variants with their known stable setups also worked, e. g. DCGAN setup used in the original DCGAN paper (ADAM optimizer with lr=0.0002, beta_1=-.5, beta_2=0.999). Furthermore, learning of the proposed transformation model is not much sensitive to hyperparameters and architecture design. 
We believe that this is because of the difference in objectives between the GAN variants and the transformation model. The GAN variants generate images from noise; however, the transformation model generates transformation parameters to be applied to the given image, and these parameters have simpler patterns than the image itself.\"}",
"{\"title\": \"Responses to AnonReviewer1\", \"comment\": \"We thank you for the constructive comments on our work. We have revised the manuscript to better explain the points you mentioned, and we hope this improving the clarity of the paper.\\n\\n[R] The biggest question I have is it seems from Eq. 15 that authors are proposing an augmentation approach where the augmented samples replace the original samples and not co-exist with them in the training set. I am not sure why Eq. 15 has to be set up like that, please elaborate. \\n[A] The augmentation process of an input image x is G(x, E(x)), and this is deterministic because E and G are neural networks. Thus, during learning the end classifier, the augmented sample actually replaces the original sample. \\nWe would like to point out that this replacement does not mean that original samples are excluded during learning the end classifier. If the original image x has the most influence than other transformed images, then E learns to generate G(x, E(x)) to be similar with x.\\n\\n[R] In Section 3.4 it is stated that only top fully connected layer of F is considered to compute influence function for augmentation. Does this also mean that when F is updated on augmented data only the top layer is updated? Please clarify. \\n[A] The approximation is only considered during learning E. All parameters of F are updated during learning end classifier.\\n\\n[R] The paper is a bit difficult to follow due to lack of clarity and few errors.\\n[A] We fixed all errors, and the manuscript has been revised for clarity and has been proofread by a native English speaker.\"}",
"{\"title\": \"valuable and publishable, some potential improvements\", \"review\": [\"This paper proposes an extension of the influence function study of Koh and Liang (2017) to data augmentation. Influence of augmentation, carried out via a parameterized and differentiable model, on validation loss is approximated and the augmentation model is learned under this approximation. Overall I think it is a valuable and publishable contribution. I do find the paper to be unclear and perhaps could be improved in a few ways. My main comments are:\", \"The biggest question I have is it seems from Eq. 15 that authors are proposing an augmentation approach where the augmented samples replace the original samples and not co-exist with them in the training set. I am not sure why Eq. 15 has to be set up like that, please elaborate.\", \"In Section 3.4 it is stated that only top fully connected layer of F is considered to compute influence function for augmentation. Does this also mean that when F is updated on augmented data only the top layer is updated? Please clarify.\", \"The paper is a bit difficult to follow due to lack of clarity and few errors:\", \"Section 2.1, Adversarial methods, \\u201cIn these methods, a simple composition\\u2026adversarial examples\\u201d sentence is unclear\", \"Page 2 footnote \\u201chowever, they are referred to as unsupervised due to learning is not involved\\u201d sentence is unclear\", \"Section 3.3 \\\\tilde{z} in first line should be \\\\tilde{z_i}\", \"Eq. 15 LHS should include \\\\tilde{z_i}\", \"Section 3.4 \\u201cadopts\\u201d -> \\u201cadopt\\u201d\", \"Section 3.4 \\u201cHVP\\u201d used without defining\", \"Empirical evidence, while not extensive, is satisfactory.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper, Inspiring theory, but need more experiments\", \"review\": \"[Summary]\\n\\nThis paper proposes a differentiable framework to learn to augment data for image classification. In particular, it uses spatial transformer and GANs as parametric data augmenters, and it formulates the validation set loss with respect to the data augmenter in a differentiable manner. \\n\\n[Pros]\\n\\n1.\\tThe proposed method does not require many trials of model training under different training data, and it learns the data augmentation directly using the final classification objective.\\n2.\\tIt is inspiring to extend the differentiable form of influence function across the training and validation set and then across the original and augmented data. This paper also makes use of the most recent related advance to enable stochastic learning. The theory of the paper is nice.\\n3.\\tThe experimental results on MNIST (with less labeled data) and Cifar-10 are encouraging.\\n\\n[Cons]\\n\\n1.\\tExperimental results can be stronger. Especially when compared to Ratner et al., this proposed method results in marginal performance gain. Given that Ratner et al.\\u2019s method trained the data augmentation module without supervision, the supervised learning in this paper does not show strong results. In addition, the paper did not report results on a more practical dataset (such ImageNet and Places). Even for Cifar-10, the reported numbers are away from the state-of-the-art. It is important to show the practical significance of the proposed method.\\n2.\\tData augmentation is naturally expected to be random, but the proposed method seems to learn a deterministic parameter for the augmenting transformation, which looks unnatural and limited. (Please clarify if I missed anything.) \\n3.\\tThe proposed method requires a parametric model (e.g, STN, GAN). However, differentiable parametric models are not always easy to design. This probably can be the biggest obstacle to apply the proposed method widely.\\n\\nOverall, the proposed method is very interesting. However, the experimental results are limited, and more discussions are needed.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"official review for \\\"Learning to Augment Influential Data\\\"\", \"review\": \"This paper proposed an\\n\\n\\n1. For me, the argument of the paper is ambitious. Data augmentation for DNN includes different perspective, including nonlinearity, adversarial etc. Generalization of spatial and appearance models is not enough. The model formulate from a simple classification setting but does not involve too many for DNN models. I put more references below. \\n\\n2. The experimental results are not strong. Not all strong baselines are included (I put some in the references). The improvements are marginal. Besides, I need more experimental setting information.\\n\\n3. The writing is not clear. For the related work part, it included many paragraph which are not related to the work, (e.g. GANs). In the introduction part, it did not mention the generalization of both spatial and appearance models, which is the main contribution.\", \"references\": \"a. Good Semi-supervised Learning that Requires a Bad GAN\\nb. Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference \\nc. Temporal ensembling for semi-supervised learning\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1xU4nAqK7 | Unsupervised Exploration with Deep Model-Based Reinforcement Learning | [
"Kurtland Chua",
"Rowan McAllister",
"Roberto Calandra",
"Sergey Levine"
] | Reinforcement learning (RL) often requires large numbers of trials to solve a single specific task. This is in sharp contrast to human and animal learning: humans and animals can use past experience to acquire an understanding about the world, which they can then use to perform new tasks with minimal additional learning. In this work, we study how an unsupervised exploration phase can be used to build up such prior knowledge, which can then be utilized in a second phase to perform new tasks, either directly without any additional exploration, or through minimal fine-tuning. A critical question with this approach is: what kind of knowledge should be transferred from the unsupervised phase to the goal-directed phase? We argue that model-based RL offers an appealing solution. By transferring models, which are task-agnostic, we can perform new tasks without any additional learning at all. However, this relies on having a suitable exploration method during unsupervised training, and a model-based RL method that can effectively utilize modern high-capacity parametric function classes, such as deep neural networks. We show that both challenges can be addressed by representing model-uncertainty, which can both guide exploration in the unsupervised phase and ensure that the errors in the model are not exploited by the planner in the goal-directed phase. We illustrate, on simple simulated benchmark tasks, that our method can perform various goal-directed skills on the first attempt, and can improve further with fine-tuning, exceeding the performance of alternative exploration methods. | [
"exploration",
"model based reinforcement learning"
] | https://openreview.net/pdf?id=B1xU4nAqK7 | https://openreview.net/forum?id=B1xU4nAqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyl5JGcll4",
"BkevqyFqRQ",
"rJgyu1t9R7",
"rkxOBkK9CQ",
"BJgagv2OCQ",
"Sylir6eTh7",
"BJx8HYl52m",
"B1lbkkIPhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544753634489,
1543307150595,
1543307111499,
1543307072161,
1543190260960,
1541373251445,
1541175613932,
1541000920839
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1451/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1451/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1451/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1451/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1451/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths\\n\\nThe paper proposes to include exploration for the PETS (probabilistic ensembles with trajectory sampling)\\napproach to learning the state transition function. The paper is clearly written.\\n\\nWeaknesses\", \"all_reviewers_are_in_agreement_regarding_a_number_of_key_weaknesses\": \"limited novelty, limited evaluation,\\nand aspects of the paper are difficult to follow or are sparse on details.\\nNo revisions have been posted.\\n\\nSummary\\n\\nAll reviewers are in agreement that the paper requires significant work and that it is not ready for ICLR publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"incremental, limited evaluation\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your review. In response to the main criticism from all reviewers here, we have been running additional experiments on new systems and towards increasingly our method's novelty but have been unable to complete these experiments in time. We will certainty incorporate all your helpful feedback into improving a future version of this work and are grateful for the time you spent on it.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your review. In response to the main criticism from all reviewers here, we have been running additional experiments on new systems and towards increasingly our method's novelty but have been unable to complete these experiments in time. We will certainty incorporate all your helpful feedback into improving a future version of this work and are grateful for the time you spent on it.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear reviewer,\\n\\nThank you very much for your review. In response to the main criticism from all reviewers here, we have been running additional experiments on new systems and towards increasingly our method's novelty but have been unable to complete these experiments in time. We will certainty incorporate all your helpful feedback into improving a future version of this work and are grateful for the time you spent on it.\"}",
"{\"title\": \"Update near end of discussion phase.\", \"comment\": \"They authors did not address the concerns mentioned in my review, nor have they addressed the concerns of the other reviewers.\\n\\nIn this situation, I stand by my original review.\"}",
"{\"title\": \"Weak experimental evaluation and lack of novelty\", \"review\": \"The authors address the problem of how to use unsupervised exploration in a first phase of reinforcement learning to gather knowledge that can be transferred to new tasks to improve performance in a second task when specific reward functions are available. The authors proposed a model-based approach which uses deep neural networks as a model for the environment. The model is PETS (probabilistic ensembles with trajectory sampling), an ensemble of neural networks whose outputs parametrize predictive distributions for the next state as a function of the current state and the action applied. To collect data during the unsupervised exploration phase, they use a metric of model uncertainty computed as follows: the average over all the particles assigned to each bootstrap is computed and the variance over these computed means is the\\nmetric of uncertainty. The authors validate their method on the HalfCheetah OpenAI gym environment where they consider 4 different tasks related to running forward, backward, tumbling forward and tumbling backward. The results obtained show that they outperform random and count based exploration approaches.\", \"quality\": \"I am concerned about the quality of the experimental evaluation of the method. The authors only consider a single environment for their experiments and artificially construct 4 relatively similar tasks. I believe this is insufficient to quantify the usefulness of the proposed method.\", \"clarity\": \"The paper is clearly written and easy to read.\", \"novelty\": \"The proposed approach seems incremental and lacks novelty. The described method for model-based exploration consists in looking at the mean of the prediction of each neural network in the ensemble and then computing the empirical average. This approach has been used before for active learning with neural networks ensembles:\\n\\nKrogh, Anders, and Jesper Vedelsby. \\\"Neural network ensembles, cross validation, and active learning.\\\" Advances in neural information processing systems. 1995.\\n\\nThe used model, PETS, is also not novel and the proposed methodology for having first an unsupervised learning phase and then a new specific learning task is also not very innovative.\", \"significance\": \"Given the lack of a rigorous evaluation framework and the lack of novelty of the proposed methods, I believe the significance of the contribution is very low.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An incremental work and needs more justification/clarification\", \"review\": \"The authors built upon the PETS algorithm to develop a state uncertainty-driven exploration strategy, for which the main point is to construct a reward function. The proposed algorithm was then tested on a specific domain to show some improvement.\\n\\nThe contribution of this paper may be limited, as it needs a specific setting, as shown in Figure 1. Furthermore, this paper is a bit difficult to follow, e.g., it was not until the 5th page to describe their algorithm. I summarize the pros and cons as follows.\", \"pros\": [\"The idea to include the exploration for PETS is somewhat interesting.\"], \"cons\": \"- The paper is a bit difficult to follow. Just to list a few places:\\n 1. The term \\\"unsupervised exploration\\\" was mentioned a few times in this paper. I am not sure if this is an accurate term. Is there a corresponding \\\"supervised exploration\\\" used elsewhere? \\n 2. When you introduced r_t in Section 3.3, how did you use it next? Was it used in Phase II?\\n 3. For the PETS (oracle) in Figure 4, why are the settings different for forward and backward tasks?\\n 4. What does \\\"random\\\" mean in Figure 4?\\n- The novelty of this paper is somewhat limited, as it requires a specific setting and has been applied in only one domain.\\n- There are a few grammar mistakes/typos in this paper. \\n 1. What is \\\"k\\\" in the equation for r_t?\\n 2. \\\"...we three methods...\\\" in Page 6.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Decent paper, but not very novel, sparse on details.\", \"review\": \"The paper performs model-based reinforcement learning. It makes two main contributions. First, it divides training into two phases: the unsupervised phase for learning transition dynamics and the second phase for solving a task which comes with a particular reward signal. The scope of the paper is a good fit for ICLR.\", \"the_paper_is_very_incremental\": \"the ideas of using an ensemble of models to quantify uncertainty, to perform unsupervised pre-training and to explore using an intrinsic reward signal have all been known for many years.\\n\\nThe contribution of the paper seems to be the combination of these ideas and the way in which they are applied to RL. I have the following observations / complaints about this.\\n\\n1. The paper is very sparse on details. There is no pseudocode for the main algorithm, and the quantity v^i_t (the epistemic variance on page 5) isn't defined anywhere. Without these things, it is difficult for me to say what the proposed algorithm is *exactly*.\\n\\n2. Sections 1 and 2 of the paper seem unreasonably bloated, especially given the fact that the space could have been more meaningfully used as per (1).\\n\\n3. The experimental section misses any kind of uncertainty estimates. If, as you say, you only had the computational resources for three runs, then you should report the results for all three. You should consider running at least one experiment for longer. This should be possible - a run of 50K steps of HalfCheetah takes about one hour on a modern 10-core PC, so this is something you should be able to do overnight.\\n\\n4. The exploration mechanism is a little bit of a mystery - it isn't concretely defined anywhere except for the fact that it uses intrinsic rewards. Again, please provide pseudocode.\\n\\nAs the paper states now, the lack of details makes it difficult for me to accept. However, I encourage the authors to do the following:\\n1. Provide pseudocode for the algorithm.\\n2. Provide pseudocode for exploration mechanism (unless subsumed by (1)).\\n3. Add uncertainty estimates to evaluation or at least report all runs.\\n\\nI am willing to re-consider my decision once these things have been done.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJ4BVhRcYX | INTERPRETABLE CONVOLUTIONAL FILTER PRUNING | [
"Zhuwei Qin",
"Fuxun Yu",
"Chenchen Liu",
"Xiang Chen"
] | The sophisticated structure of Convolutional Neural Network (CNN) allows for outstanding performance, but at the cost of intensive computation. As significant redundancies are inevitably present in such a structure, many works have been proposed to prune the convolutional filters for computation cost reduction. Although extremely effective, most works are based only on quantitative characteristics of the convolutional filters, and largely overlook the qualitative interpretation of each individual filter’s specific functionality. In this work, we interpreted the functionality and redundancy of the convolutional filters from different perspectives, and proposed a functionality-oriented filter pruning method. With extensive experimental results, we proved the convolutional filters’ qualitative significance regardless of magnitude, demonstrated significant neural network redundancy due to repetitive filter functions, and analyzed the filter functionality defection under an inappropriate retraining process. Such an interpretable pruning approach not only offers outstanding computation cost optimization over previous filter pruning methods, but also interprets the filter pruning process. | [
"convolutional filters",
"interpretable convolutional filter",
"filter",
"sophisticated structure",
"convolutional neural network",
"cnn",
"outstanding performance",
"cost",
"intensive computation",
"significant redundancies"
] | https://openreview.net/pdf?id=BJ4BVhRcYX | https://openreview.net/forum?id=BJ4BVhRcYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxA1fAme4",
"rkgsMoynhQ",
"ByeoB1kn37",
"ryeWpT0j3Q",
"Hkxjz3Kin7",
"BkxAhzti37",
"B1g5pnOjh7",
"Byx5Nt_s2X",
"SJeJqGOsnX",
"H1g9VAPjn7",
"Syggr2Ushm",
"SyeCQ2ri3m",
"HyezvMrohm",
"r1e7sGdqnQ",
"ryg2HGdc27",
"S1eXcT8927",
"ryeOJfJ92Q",
"rkx77wMth7",
"S1x2SfVL3Q",
"SylR62qQ3Q"
],
"note_type": [
"meta_review",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1544966629919,
1541303058881,
1541300034631,
1541299640955,
1541278739042,
1541276341771,
1541274818447,
1541273906235,
1541272199446,
1541271089532,
1541266487959,
1541262374053,
1541259866045,
1541206683328,
1541206596502,
1541201291471,
1541169631704,
1541117723162,
1540928067736,
1540758726259
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1450/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1450/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1450/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1450/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The current version of the paper receives a unanimous rejection from reviewers, as the final proposal.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Unanimous rejection.\"}",
"{\"comment\": \"I appreciate the efforts authors made to address the comments by others. I think since some comments below are very aggressive and annoying, so I suggest that all reviewers should judge this paper fairly and independently. Thank you for your understanding.\", \"title\": \"Comments are just comments, not reviews.\"}",
"{\"title\": \"To All Reviewers from Authors\", \"comment\": \"Dear Reviewers:\\n\\nWe have done our best to clarify our works to the original poster.\\nIf you are looking for answers regarding the question of \\\"problem settings of pruning trained models\\\" and \\\"baseline selection\\\", please refer to the below replies. \\n\\nWe are still very open to other questions, and we will do our best to reply to those constructive ones.\\nHowever, we hope future reviewers could fully read our paper and fairly review our contributions without being influenced by some very aggressive comments below.\\n\\nAuthors.\"}",
"{\"title\": \"Reply to \\\"some thoughts\\\" from Authors\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your comment.\\n\\n1. The \\u201cpruning with regularization during training\\u201d and \\u201cpruning post normal training\\u201d are clearly divided into two different categories and have been well discussed in [1]. Post design optimization is a well-recognized concept in many research areas. And there are also many excellent works emerging for such a pruning approach [2][3]. For more details, we recommend reviewers to refer to these papers. Overall, rather than judging which is better, these are two complementary approaches.\\n\\n2. We hope the reviewer can broaden the understanding of pruning. As we mentioned in our first reply, different pruning methods are just approaching the minimal network size [4][5]. It\\u2019s more important to understand the neural network with pruning. Our contribution in this work is not only pruning, but also interpreting the source of network redundancy. And based on this analysis, we proposed the method to effectively and precisely reduce the functionality redundancy.\\n\\nAuthors.\\n\\n[1] Auto-balanced Filter Pruning for Efficient Convolutional neural networks. Ding et al., AAAI 2018.\\n[2] NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications. Yang et al., ECCV 2018.\\n[3] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Luo et al. ICCV, 2017.\\n[4] Rethinking the Value of Network Pruning. Liu et al., https://arxiv.org/abs/1810.05270\\n[5] Learning Efficient Convolutional Networks through Network Slimming. Liu et al., ICCV 2017.\"}",
"{\"comment\": \"Hi Authors,\\n\\nThanks for the continuing effort on clarifying your paper. In the end, unfortunately I don't feel the argument you gave regarding \\u201cpruning with regularization during training\\u201d and \\u201cpruning post normal training\\u201d is convincing. As the person pointed out, if the goal is to prune the network, and accelerate the network, I do not see there is any reason people do not go for the approach that achieves the best results regardless if it falls into the category of pruning with regularization during training or pruning post normal training. In other words, it would be helpful if you can explain your approach addresses some of the limitations/issues of [1] despite being less accurate. Hope this makes some sense.\\n\\n\\n\\n[1] 3. Learning Efficient Convolutional Networks through Network Slimming.\", \"title\": \"some thoughts\"}",
"{\"comment\": \"I appreciate the efforts authors made to address the comments by others. I think since some commenter and authors are not on the same page, reviewers should not be influenced by these comments and judge on their own. Thank you.\", \"title\": \"Thank you for the reply\"}",
"{\"title\": \"Reply to \\\"Sloppy Baseline\\\" from Authors\", \"comment\": \"Dear Reviewer:\\n\\nFirst of all, we think we have already answered the problem setting. \\u201cpruning with regularization during training\\u201d and \\u201cpruning post normal training\\u201d are the most intuitive explanation we can provide. For more details, please refer to the paper [1], which is published in AAAI 2018.\\n \\nSecondly, here is the answer regarding the baseline difference. It\\u2019s common that the baseline variance of the same model exists between different works [1][2][3], since people usually train published models from scratch for convenience. We did the same in our work.\\nHowever, we didn\\u2019t put much effort into chasing the highest performance of the original method, since that\\u2019s not the major focus of our work. And this difference actually doesn\\u2019t defect our findings of filter functionality analysis, functionality redundancy elimination, retraining analysis, etc. However, we can definitely improve the baseline in a future version.\\n\\nAgain, we sincerely ask the reviewer to pay more attention to our methods and contributions in our work and other referenced ones, rather than chasing results regardless of problem settings and perfecting baselines. Otherwise, this is an issue of our research philosophy difference, which can\\u2019t be well resolved.\\n\\nAuthors.\\n\\n[1] Auto-balanced Filter Pruning for Efficient Convolutional neural networks. Ding et al., AAAI 2018.\\n[2] Pruning Filters for Efficient ConvNets. Li et al., ICLR 2017.\\n[3] Learning to Prune Filters in Convolutional Neural Networks. Huang et al., WAVC2018.\"}",
"{\"comment\": \"1. \\\"Long Live TIME: Improving Lifetime for Training-In-Memory\\nEngines by Structured Gradient Sparsification\\\". This paper shows 92.5%\\n2. Online Filter Clustering and Pruning for Efficient Convnets\\n This paper shows 93.25%.\\n3. Learning Efficient Convolutional Networks through Network Slimming.\\nThis paper shows 93.66%\\n\\nNow I show the baseline is much better than the baseline you choose as 90.2%. So consider changing the conclusion of your paper?\", \"title\": \"See concrete paper about real cifar10 accuracy by vgg16\"}",
"{\"title\": \"Reply to \\\"OP May not be a reviewer\\\" from Authors\", \"comment\": \"Dear Another Reviewer in this thread:\\n\\nThank you so much for your fair comment in this thread.\\n\\nWe are trying to collect all the feedback and interact with all the readers since we are taking the OpenReview as a very serious academic society rather social medium. That's why we are doing our best to reply to the OP with detailed explanation and references. \\n\\nTherefore, we also agree with you to some extent, since we are always hoping the OP can raise more constructive questions and help us to improve.\\n\\nAgain, thank you so much for your support.\\n\\nAuthors\"}",
"{\"comment\": \"I am not an author of this work and I am not an expert in this field. But I really dislike your tone when you comment on others' work. Your comments are really unconvincing.\\nFirstly, you mention there are good results from others publication, but you don't list any publications to support your argument, whereas the response of the authors referred some of the literature.\\nSecondly, the link you mention to achieve 93% accuracy did not work. You should check that and give concrete papers.\\nThirdly, Please avoid using questions like ' how do you explain this' and so 'why you still claim there are different'. These are very offensive, this is not social media but an academic venue.\\nFinally, I recommend that the program committees of the ICLR conference should consider restricting the comments from non-reviewers. The authors have to waste much unnecessary time responding to low-quality comments here. Thank you very much.\", \"title\": \"Please watch your tone when commenting others work\"}",
"{\"comment\": \"First of all, you didn't compare with previous published showing good results and didn't explain why you don't compare with them. In addition, you didn't explain the difference between them. Since \\u201cpruning with regularization during training\\u201d and \\u201cpruning post normal training\\u201d is different, and why people choose to do this, and what is reason behind them. If the final goal is to prune the network, and accelerate the network, so why you still claim there are different?\\nSecond, you claim baseline accuracy of cifar10 under Vgg16 is 90.2%, and you got 90.3%, then I am telling you the baseline is around 93%. I don't have to search a lot, just randomly search on github, https://github.com/geifmany/cifar-vgg. And they got 93%, how do you explain this. From this way, your accuracy has decreased 3% and lots of papers do the pruning without the accuracy decrease, so how can you explain the advantage of your method.\", \"title\": \"Still Not Answer My Question\"}",
"{\"comment\": \"It is very likely that the OP for this thread is not a reviewer.\\n\\n Bare in mind anyone can sign as anonymous, whilst reviewers have been signing as anonreviewer\\\\d+ . \\n\\n Furthermore the comments that the OP has written are very hard to parse, there was little effort put in to proof-checking the grammar. If OP is indeed a reviewer then they should probably conform to the standards and sign as anonreviewer\\\\d+ . \\n\\n Openreview should probably restrict the comments from non-reviewers. I feel this is creating a lot of clutter and turning this process/platform in to some form of social medium.\", \"title\": \"Reply to \\\"Sloppy Baseline\\\" (OP May not be a reviewer)\"}",
"{\"title\": \"Reply to \\\"Sloppy Baseline\\\" from Authors\", \"comment\": \"Dear Reviewer:\\n\\nWe appreciate that you admitted the novelty of our work. However, we\\u2019d like to remind the reviewer again: Firstly, not all the papers with \\u201cpruning\\u201d in its title have a similar problem setting, \\u201cpruning with regularization during training\\u201d and \\u201cpruning post normal training\\u201d are different, and each of them has dedicated publications [1][2]. And we also hope people can explore more settings in different perspectives. Secondly, when we are comparing our research to others, we have already clearly shown our advantage over the baselines, and hope you can also carefully read our advantage in the retraining part.\\n\\nAuthors.\\n\\n[1] Fast convnets using group-wise brain damage. Lebedev et al., CVPR 2016.\\n[2] Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. Hu et al., 2016. https://arxiv.org/abs/1607.03250\"}",
"{\"comment\": \"I don't it is a different problem setup. You would like to prune the network and finally get your result. Via the method you said why you can prune, is that correct?\\nHowever, in Structured Bayesian Pruning via Log-Normal Multiplicative Noise, they explain why it can be pruned in the Bayesian method, so how can you say it is different problem setting. \\nIn addition, you claim your method is better than previous results and you cannot beat other papers. Even you got a new method, then what is the meaning for that. \\nAgain, why don't do a comprehensive comparison and then conclude since you claim \\\"Such an interpretable pruning approach not only offers outstanding computation cost optimization over previous filter pruning methods\\\". I didn't see it offers outstanding computation cost optimization over previous filter pruning methods\", \"title\": \"Not Answer question\"}",
"{\"title\": \"I think the method proposed in this paper might be reasonable. But I do not suggest acceptance, unless the author can improve the writing and include more experimental results.\", \"review\": \"In this paper, the authors propose a method for pruning the convolutional filters. This method first separates the filters into clusters based on similarities defined with both Activation Maximization (AM) and back-propagation gradients. Then pruning is conducted based on the clustering results, and the contribution index that is calculated based on backward-propagation gradients. The proposed method is compared with a baseline method in the experiments.\\n\\nI consider the proposed method as novel, since I do not know any filter pruning methods that adopt a similar strategy. Based on my understanding of the proposed method, it might be useful in convolutional filter pruning.\\n\\nIt seems that \\\"interpretable\\\" might not be the most proper word to summarize the method. It looks like that the key concept of this paper, including smilarity defined in Equation (3), and the contribution index defined in Equation (7) are not directly relevant to interpretability. Therefore, I would consider change the title of the paper, for example, to \\\"Convolutional Filter Pruning Based on Functionality \\\". \\n\\nIn terms of writing, I have difficulty understanding some details about the method. \\n\\nIn filter clustering, how can one run k-means based on pair-wise similarity matrix $S_D$? Do you run kernel k-means, or you apply PCA to $S_D$ before k-means? What is the criterion of choosing the number of clusters in the process of grid search? \\n\\nAre filter level pruning, are cluster level pruning and layer level pruning three pruning strategies in the algorithm? It seems to me that you just apply one pruning strategy based on the clusters and contribution index, as shown in Figure 3. \\n\\nIn the subsubsection \\\"Cluster Level Pruning\\\", by \\\"cluster volume size\\\", denoted with$length(C^l_c)$, do you mean the size of cluster, i.e., the number of elements in each cluster? This is the first time I see the term \\\"volume size\\\". I assume the adaptive pruning rate, denoted by $R_{clt}^{(c,l)}$, is a fraction. But it looks to me that $length(C^l_c)$ is an integer. So how can it be true that $R_{clt}^{(c,l)} = length(C^l_c)$?\\n\\nIn the subsubsection \\\"Layer Level Pruning\\\", how is the value of $r$ determined?\\n\\nThe authors have conducted several experiments. These experiments help me understand the advantages of the proposed method. However, in the experiments, the proposed method is compared to only one baseline method. In recent years, a large number of convolutional filter pruning methods have been proposed, as mentioned in the related work section. I am not convinced that the proposed method is one of the best methods among all these existing methods. I would suggest the authors provide more experimental comparison, or explain why comparing with these existing methods is irrelevant. \\n\\nSince the proposed method is heuristic, I would also like the authors to illustrate that each component of the method is important, via experiment. How would the performance of the proposed method be affected, if we define the similarity $S_D$ in Equation (3) using only $V$ or $\\\\gamma$, rather than both $V$ and $\\\\gamma$? 
How would the performance of the proposed method be affected, if we prune randomly, rather than prune based on the contribution index?\\n\\nIn summary, I think the method proposed in this paper might be reasonable. But I do not suggest acceptance, unless the author can improve the writing and include more experimental results.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
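The clustering question above (k-means from a pairwise similarity matrix $S_D$) has a standard answer worth making concrete: k-means needs coordinates rather than raw similarities, so one common recipe is a kernel-PCA-style spectral embedding of the similarity matrix followed by ordinary k-means. The sketch below illustrates only that generic recipe on synthetic data; the matrix, dimensions, and function names are assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_from_similarity(S, n_clusters, n_dims=8, seed=0):
    """Cluster items given only a pairwise similarity matrix S.

    k-means needs coordinates, not similarities, so we first build a
    spectral embedding: symmetrize S, take its leading eigenvectors, and
    scale them by the square roots of the (clipped) eigenvalues, as in
    kernel PCA. Plain k-means then runs on those coordinates.
    """
    S = 0.5 * (S + S.T)                                  # enforce symmetry
    eigvals, eigvecs = np.linalg.eigh(S)                 # ascending order
    top = np.argsort(eigvals)[::-1][:n_dims]
    coords = eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(coords)

# Toy check: 12 "filters" with block-structured similarity recover 3 clusters.
rng = np.random.default_rng(0)
truth = np.repeat([0, 1, 2], 4)
S = (truth[:, None] == truth[None, :]).astype(float)
S += 0.05 * rng.standard_normal(S.shape)
print(cluster_from_similarity(S, n_clusters=3, n_dims=3))
```

An equivalent alternative is kernel k-means run directly on $S_D$; both views treat the similarity matrix as a (possibly indefinite) kernel, which is why the eigenvalues are clipped at zero.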
"{\"title\": \"Reply to \\\"Sloppy Baseline\\\" from Authors\", \"comment\": \"Dear Reviewer:\\n\\n1. The comment ignores the problem setup and the contribution details.\\nOur work is addressing a totally different situation compared to [1][2].\\nIn [1][2], they apply sparse constraint during the \\u201ctraining\\u201d phase, while our work is to interpret the redundancy of a normally \\u201ctrained\\u201d neural network and propose the functionality oriented pruning method to explore the interpretable neural network optimization. More importantly, our work is proposing a functionality analysis approach with different methods cross-validating each other. We hope such an approach could also be adopted by other compression works to have a better result analysis. Also, the filter L1-ranking based pruning method [3] we are comparing is a well-established work published after [2] in the top conference ICLR 2017, if the authors ignore the problem setup and only chase the final results, we also suggest the reviewer have a discussion with these authors.\\n\\n2. Also, we don\\u2019t think the reviewer should consider the random pruning as a trick. If the reviewer follows the recent papers closely, you may find that many papers [4][5] discussing the significant redundancy inside neural networks, and different pruning methods (even random pruning) could achieve effectiveness eventually as long as the network is keeping retraining. In other words, there might be a certain optimal network size for a neural network\\u2019s functionality, and different pruning methods are just approaching this size. However, the questions of how to interpret the redundancy and what the retraining is doing are rarely addressed. In this work, we interpreted the functionality redundancy in a trained neural network. And our work could effectively and precisely reduce the functionality redundancy with the minimum help of the retraining process. Definitely, we understand why the reviewer favors random pruning so much, in this work we also proved that, functionality wise, the filter L1-ranking based pruning is also a kind of random pruning. Overall, \\\"claiming redundancy\\\" is easy, but \\\"analyzing redundancy\\\" is hard; \\\"random with retraining\\\" is easy, but \\\"precise without retraining\\\" is hard.\\n\\nAuthors.\\n\\n[1] Structured Bayesian Pruning via Log-Normal Multiplicative Noise. Neklyudov et al., NIPS 2017.\\n[2] Learning Structured Sparsity in Deep Neural Networks. Wen et al., NIPS 2016.\\n[3] Pruning Filters for Efficient ConvNets. Li et al., ICLR 2017.\\n[4] Rethinking the Value of Network Pruning. Liu et al., https://arxiv.org/abs/1810.05270\\n[5] Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks. Mittal et al., WACV 2018.\"}",
"{\"comment\": \"Hi there,\\n Filter prune belongs to the structure prune, and you claim in the paper your results are better than previous papers.\\nHowever, I don't think so. Lot of papers are shown better performance than yours. \\nSee \\\"Structured Bayesian Pruning via Log-Normal Multiplicative Noise\\\", and \\\"Learning structured sparsity in deep\\nneural networks\\\". And there are a lot other papers showing better results than yours.\\n From this point, your conclusion is wrong and I don't recommend it for publication since you cannot say you get a new method and then publish. To tell you some tricks, even though at the beginning training stage, I randomly cut some filters and retrain the model, it can say still show better results.\", \"title\": \"Sloppy Baseline\"}",
"{\"title\": \"Good idea, but needs some improvements.\", \"review\": \"This paper proposes a new method to prune filters of convolutional nets based on a metric which consider functional similarities between filters. Those similarities are computed based on Activation Maximization and gradient information. The proposed method is better than L1 and activation-based methods in terms of accuracy after pruning at the same pruning ratio. The visualization of pruned filters (Fig. 3) shows the effectiveness of the method intuitively.\\n\\nOverall, the idea in the paper is pretty intuitive and makes sense. The experimental results support the ideas. I think this paper could be accepted if it is improved on the followings:\\n\\n1. The paper is not very easy to read although the idea is simple. \\n\\nThe equations could be updated and simplified. For example, I'm not sure if S_D in Eq. (3) wants to take V(F_i^(c,l)) and V(F_k^(c,l)) as the arguments. Layer L_l could be just l. \\n\\nAlgorithm 1 is hard to read. At least, one line should correspond to one processing. k is not initialized. It is difficult to understand what each variable represents.\\n\\nThe terms used in Section 4.2 may not be very accurate. First of all, I'm not sure if it is a hierarchical method. It does not perform pruning at multiple levels such as filters, clusters, and layers. Rather, it considers information from multiple levels to determine if a filter should be pruned or not. In that sense, everything is filter level pruning and distinguishing (filter|cluster|layer) level pruning just confuse readers. I'd recommend to simplify the section and describe simply what you do.\\n\\n2. Comparisons with more recent papers\\n\\nThe proposed method was compared with methods from 2015 and 2016. Model compression is an active area of research and there are a lot of papers. Probably, it makes sense to compare the proposed method against some state-of-the-art methods. Especially, it is interesting to see comparisons against methods with direct optimization of loss function such as (Liu et al. ICCV 2017). We might not need to even consider functionality with such methods.\\n\\nLiu et al. ICCV 2017: https://arxiv.org/pdf/1708.06519.pdf\\n\\n\\n* Some other thoughts\\n\\n** If you look at Figure 3 (a), it looks that there are still a lot of redundant filters. Actually, except the last row, I'm not sure if we can visually find any important difference between (a) and (b). I wonder if the most important thing is that you do not prune unique filters (ones which are not clustered with others). It might be interesting to see a result of the L1-based pruning which does not prunes such filters. If you see an interesting result from that, it could add some value to the paper.\\n\\n** I'd recommend another proofread.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Reply to \\\"Randomly pruning filters in a CNN\\\"\", \"comment\": \"Thank you very much for your comments.\\n\\n1) We fully understand your concern that the random pruning can obtain the comparable performance as the L1-norm based method. \\nIn our experiment, we also noted that sometimes random pruning filters even performs better than the filter L1-norm based method when the pruning rate is small. However, the accuracy drop of random pruning is always larger than the L1-norm based method when the network is pruned aggressively.\\n\\n2) Furthermore, different from previous works, we examined the pruning process and identified the real network redundancy in terms of filter functionality. Our method can precisely select functionality redundant filters to prune which causes much less accuracy drop. \\n\\n3) In our paper, we also demonstrated that, without considering the filter functionality, the retraining phase actually reconstructs the filter functionality rather than filter functionality fine-tuning. That\\u2019s the reason why the retraining phase of the L1-norm based method or random pruning can compensate the network accuracy drop. However, with a more precisely network redundancy identification, the retraining phase could be unnecessary.\\n\\n4) Actually, I think your work is also pretty related to this paper \\u201cRethinking the Value of Network Pruning\\u201d. The authors demonstrated that training a small pruned model from scratch gives comparable accuracy to the standard pruning and retraining method. \\nThrough an extensive set of experiments, the pruning method does not really matter. \\nTo some degree, I think the retraining with inherited weights from the randomly pruned model is just like training the network from scratch with random initialization. What do you think?\\n\\nAgain, thank you for your interest in our work!\"}",
"{\"title\": \"No comparisons and claiming something known make it hard to accept this paper\", \"review\": \"This paper claims to have shown some insights about the filters in a neural network. However, it has little contributions that are justifiable to be published and it missed way too many references.\\n\\nThe visualization of filters is hardly any contribution over [1]. The claim that AM is the best visualization tool is a weird statement given that there are many recent references on visualization, such as [2-4], which the authors all missed.\\n\\nThe proposed filter pruning is a simplistic approach that bears little technical novelty, and there has been zero comparison against any filter pruning approach/network compression approach, among the cited references and numerous references that the paper didn't cite, e.g. [5-6]. In this form I cannot accept this paper.\\n\\n[1] D Bau, B Zhou, A Khosla, A Oliva, and A Torralba. Network Dissection: Quantifying the Intepretability of Deep Visual Representations. In CVPR 2017.\\n[2] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017\\n[3] Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff. Top-down Neural Attention by Excitation Backprop. ECCV 2016\\n[4] Ruth Fong and Andrea Vedaldi. Interpretable Explanations of Black Box Algorithms by Meaningful Perturbation. ICCV 2017\\n[5] Y. Guo, A. Yao and Y. Chen. Dynamic Network Surgery for Efficient DNNs. NIPS 2016\\n[6] T.-J. Yang, Y.-H. Chen, V. Sze. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning. CVPR 2017\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkgSEnA5KQ | Guiding Policies with Language via Meta-Learning | [
"John D. Co-Reyes",
"Abhishek Gupta",
"Suvansh Sanjeev",
"Nick Altieri",
"Jacob Andreas",
"John DeNero",
"Pieter Abbeel",
"Sergey Levine"
] | Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following. | [
"meta-learning",
"language grounding",
"interactive"
] | https://openreview.net/pdf?id=HkgSEnA5KQ | https://openreview.net/forum?id=HkgSEnA5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygoUp_ggE",
"ByejewrEAX",
"Hygq4Iy4R7",
"BklVgmpgCQ",
"S1edmre36m",
"HyxheSl2a7",
"S1gowVln67",
"HJxXmVe3pm",
"r1glU6XCnQ",
"Hkx2E2HphX",
"Syx6sF9JnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544748370595,
1542899442760,
1542874674037,
1542669036111,
1542354208434,
1542354163654,
1542354019252,
1542353946547,
1541451079866,
1541393459842,
1540495780656
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1449/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1449/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1449/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1449/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1449/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1449/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a meta-learning approach to \\\"language guided policy learning\\\" where instructions are provided in the form of natural language instructions, rather than in the form of a reward function or through demonstration. A particularly interesting novel feature of the proposed approach is that it can seamlessly incorporate natural language corrections after an initial attempt to solve the task, opening up the direction towards natural instructions through interactive dialogue. The method is empirically shown to be able to learn to navigate environments and manipulate objects more sample efficiently (on test tasks) than approaches without instructions.\", \"the_reviewers_noted_several_potential_weaknesses\": \"while the problem setting was considered interesting, the empirical validation was seen to be limited. Reviewers noted that only one (simple) domain was studied, and it was unclear if results would hold up in more complex domains. They also note lack of comparison to baselines based on prior work (e.g., pre-training).\\n\\nThe authors provided very detailed replies to the reviewer comments, and added very substantial new experiments, including an entire new domain and newly implemented baselines. Reviewers indicated that they are satisfied with the revisions. The AC reviewed the reviewer suggestions and revisions and notes that the additional experiments significantly improve the contribution of the paper. The resulting consensus is that the paper should be accepted.\\n\\nThe AC would like to note that several figures are very small and unreadable when the paper is printed, e.g., figure 7, and suggests that the authors increase figure size (and font size within figures) to ensure legibility.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Innovative interactive instruction setting based on language interaction\"}",
"{\"title\": \"Thanks for the clarification\", \"comment\": \"Based on your thorough responses and paper modifications, I'll revise my review.\"}",
"{\"title\": \"Experiment Clarification\", \"comment\": \"For the multi-room environment, the room colors do not change location from task to task. While two corrections could tell it everything it needs to know and we observe this in some cases, we see that the agent often fails to complete subgoals it has information on and still benefits from successive corrections after two (as seen in Table 1). An example of this can be seen in Appendix A.1. Here the agent is not perfect and is able to complete the task after receiving multiple corrections (sometimes the same correction twice).\\n\\nOur model is also able to handle more relative types of corrections where the agent cannot memorize absolute positions. In Section 7.4 we add different types of corrections such as (\\u201cyou are in the wrong room\\u201d or \\u201cgoal room is southwest.\\u201d The agent cannot just memorize the locations of each room and instead must map corrections to changes in behavior. \\n\\nThe second environment we have added, the robotic object relocation task, has relative corrections such as \\u201cMove a little up right\\u201d or \\u201cPush closer to the green block\\u201d. A fixed number of corrections cannot exactly specify the task and the agent must consider the correction in terms of its previous behavior to gradually move closer to the goal.\"}",
"{\"title\": \"Thanks, but one more question!\", \"comment\": \"I'm very impressed and mostly satisfied with the responses to my review. There remains one important, unanswered question, however, that I'd like to be addressed.\\n\\nIf the specific rooms, indicated by colors, do not change location from task to task (and they appear not to from all the figures), then the agent can learn the room locations during meta-training and the two \\\"corrections\\\" tell it everything it needs to know to solve the task. So: do colored rooms change location from task to task? I.e., is the blue room sometimes in the lower right and other times in the upper left, etc?\"}",
"{\"title\": \"R1 Review Response (Part 2)\", \"comment\": \"\\u201c- Quantification of how much meta-training data is required. What is the sample complexity like with/without language corrections?\\u201d\\n> We add these details to the paper in Appendix A.3.\", \"meta_training\": \"For the multi-room domain we meta-train on 1700 environments. Our method converges in 6 DAgger steps so it takes 30 corrections per environment for a total of 51,000 corrections. For the robotic object relocation domain, we train on 750 environments. Our method converges in 9 DAgger steps so it takes 45 corrections per environment for a total of 33750 corrections.\", \"meta_testing\": \"On new tasks, asymptotically RL is able to achieve better final performance than our method but takes orders of magnitudes more samples. In Figure 7 we plot the number of training trajectories used per test task. While LGPL only receives up to 5 trajectories for each test task, RL takes more than 1000 trajectories to reach similar levels of performance.\\n\\n\\u201cIts unclear to me how the \\\"full information\\\" baseline processes and conditions on the full set of subgoals/corrections. Are they read as a single concatenated string converted to one vector by the bi-LSTM?\\u201d\\n> For the full information baseline, all the subgoals are concatenated and converted to one vector by a bi-LSTM.\\n\\n\\u201cI also have concerns about the need for near-optimal agents on each task -- this seems very expensive and inefficient.\\u201d\\n> To ground the language corrections we need some form of supervision. Typically methods for grounding natural language instructions assume access to either a large corpus of supervised data (i.e expert behavior) or a reward function (Janner et al 2017, Misra et al 2017, Wang, Xiong, et al. 2018, Andreas et al) in order to train the model. In our setting, we similarly assume access to near optimal agents or a reward function (which we can use to train near optimal agents), which is used to learn the policy and language grounding, but only on the meta-training tasks. On unseen meta-test tasks, we can learn very quickly simply by using language corrections, without the need for reward functions or expert policies. \\n\\n\\u201cOn the other hand, most architectural details necessary to reproduce the work are missing, at least from the main text.\\u201d\\n> We have added architecture and training details (including reward functions) to the appendix A.3 and referenced them in the main text. We also intend to open source the code once the review decision is out.\\n\\n\\n[1] Wang, Xin et al. \\u201cLook Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation.\\u201d CoRRabs/1803.07729 (2018)\\n\\n[2] Andreas, Jacob et al. \\u201cLearning with Latent Language.\\u201d NAACL-HLT (2018).\\n\\n[3] Misra, Dipendra Kumar et al. \\u201cMapping Instructions and Visual Observations to Actions with Reinforcement Learning.\\u201d EMNLP (2017).\\n\\n[4] Janner, Michael et al \\u201cRepresentation Learning for Grounded Spatial Reasoning\\u201d TACL 2017\"}",
"{\"title\": \"R1 Review Response (Part 1)\", \"comment\": \"Thank you for the detailed and constructive feedback. To address concerns about limited experimental evaluation, we have added a new environment that we call robotic object relocation, which involves a continuous state space and more relative corrections. The results for this environment are in the revised paper in Section 7.3. To address comments about comparisons, we have also added a number of additional comparisons, comparing LGPL to state of the art instruction following methods (Misra 2017, Table 1), pre-training with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7), and training from scratch via RL (Fig 7). Additionally, to provide a deeper understanding of the methods performance, we included a number of additional analyses on the methods extrapolation and generalization in Section 7.4.\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to either revise their rating of the paper, or request additional changes that would alleviate their concerns.\", \"find_below_the_responses_to_specific_comments\": \"\\u201cNo standard baselines are evaluated on the task (with or without meta-learning), nor is a detailed analysis of the learned policies undertaken. \\u201c\\n-> We have added additional comparisons and points of analysis to the updated paper. We compare with a strong instruction following method from the literature, Misra et al. (2017) (Table 1), as well as a number of other comparisons including all the comparisons that were requested (see detailed comments below) (Fig 7). \\n\\nWe have also added a number of new points of analysis. We analyze the performance of the method on stochastically chosen corrections instead of very well formed ones (Table 4). We analyze the extrapolation performance of the method to more corrections than training time (Table 3). We also analyze the performance of LGPL on tasks that are slightly out of distribution (Table 5). We would be happy to add additional analysis that the reviewer believes is important for the paper -- please let us know if we have addressed all of your concerns in this regard!\\n\\n\\u201c- Comparison to an RL baseline that attempts to learn the full task, without meta-training or language corrections.\\u201d\\n> We have added a RL baseline that trains a separate policy per task using a dense reward (Section 7.3, Fig 7). The details of the reward functions and training algorithm can be found in the appendix A.3. The RL baseline is able to achieve better final performance but takes orders of magnitude more samples on the new tasks. Our method can obtain reasonable performance with just 5 samples on the test tasks. An important distinction to make is that this baseline also assumes access to the test task reward function; our method only uses the language corrections. Additional details can be found in Section 7.3, 7.4. \\n\\n\\u201cComparison to a baseline that learns from intermediate rewards. Instead of annotating data with corrections, you could provide +/- scalar rewards\\u201d\\n> We have added a baseline (Section 7.3, Fig 7) that uses intermediate rewards instead of language corrections, that we call Reward Guided Policy Learning (RGPL).The correction for a trajectory is the sum of rewards of that trajectory. RGPL performed worse than LGPL in both domains as seen in Fig 7. 
Language corrections evidently allow more information to be transmitted than scalar rewards. Additional details for this comparison can be found in Section 7.2.\\n\\n\\u201c- Comparison to a baseline that does some kind of pretraining on the language corrections, as in Andreas et al. (2018).\\u201d\\n> We have added a baseline (Section 7.3, Fig 7) that follows a pre-training paradigm similar to Andreas et al. (2018) -- first pre-train a model on language instructions across many tasks and then finetune the model on new tasks using a task-specific reward. Andreas et al. (2018) trains a learner with task-specific expert policies using DAgger. It then searches the instruction space for the policy with the highest reward and then adapts the policy to individual tasks by fine-tuning with RL. Since we can provide the exact instruction the policy needs, we do not perform the search in instruction space. We pretrain on the training tasks with DAgger and then finetune on test tasks with RL. This baseline is able to achieve slightly better final performance in both domains but takes orders of magnitude more samples on the test tasks (>1000 trajectories vs. 5 for our method). Details for this comparison can be found in Section 7.3.\"}",
"{\"title\": \"R3 Review Response\", \"comment\": \"Thank you for the detailed and constructive feedback. We have made a number of changes to the paper to address this feedback - including new experimental domains, more comparisons and in-depth analysis of model behavior. We describe these further in responses to specific comments below:\\n\\n\\u201cOnly one setting is studied\\u201d\\n> To extend the experimental evaluation beyond a single domain, we have added a new environment that we call robotic object relocation and involves manipulating a robotic gripper to push blocks. This environment involves relative corrections and continuous state space and is described in Section 7.1.2. This environment shows our method can generalize to substantially different domains (continuous state space) as well as new kinds of corrections beyond subgoals. The results for this environment are in the revised paper in Section 7.3.\\n\\n\\u201cthe task distribution seems not very complex.\\u201d\\n> We specify the task distribution in Section 7.1. For the multi-room environment the training and test tasks are generated such that for any test task, its list of all five subgoals does not exist in the training set. There are 3240 possible lists of all five subgoals. We train on 1700 of these environments and reserve a separate set for testing. For the robotic object relocation environment, we generate tasks by sampling one of the 3 movable blocks to be pushed. We then randomly choose one of the 5 immovable blocks and sample a direction and distance from that block to get a goal location. We generate 1000 of these environments and train on 750 of them. \\n\\n\\u201cHow the proposed model performs if the task is a little bit out of distribution? \\u201c\\n->We have added another experiment (Section 7.4, table 5) where we hold out specific objects in the training set and test on these unseen objects in the test set. For example, the agent will not see green triangles during training, but will see other green objects and non-green triangles during training and must generalize to the unseen combination at test time. As seen from results in Section 7.4, our method does have a lower completion rate on these tasks but is still able to complete a high completion rate (0.75) and outperform the baselines.\", \"other_improvements\": \"To further improve the experimental comparison, we have also added a number of additional comparisons, comparing to state of the art instruction following methods (Misra 2017, Table 1), pretraining with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7). We have also provided more analysis regarding the extrapolation and generalization of LGPL in Section 7.4.\\n\\n[1] Misra, Dipendra Kumar et al. \\u201cMapping Instructions and Visual Observations to Actions with Reinforcement Learning.\\u201d EMNLP (2017).\\n\\n[2] Andreas, Jacob et al. \\u201cLearning with Latent Language.\\u201d NAACL-HLT (2018).\"}",
"{\"title\": \"R2 Review Response\", \"comment\": \"Thank you for the detailed and constructive feedback. To address concerns about the experimental setup setup, we have added a new environment that we call robotic object relocation, which involves a continuous state space. Instead of subgoals the corrections here are more relative such as \\u201cmove a little left\\u201d. The results for this environment are in the revised paper in Section 7.3. To address comments about comparisons, we have also added a number of additional comparisons, comparing LGPL to state of the art instruction following methods (Misra 2017, Table 1), pre-training with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7). To provide a deeper understanding of the methods performance, we have also included a number of additional analyses on extrapolation and generalization in Section 7.4. Please let us know if adding additional comparisons or analysis would be helpful!\", \"we_respond_to_specific_comments_below\": \"\\u201cI am wondering how the method will be compared with a state-of-the-art method that focuses on following instructions\\u201d\\n-> We have implemented and compared to state of the art instruction following methods (results in Section 7.2, 7.3) Misra et al. (2017), and pretraining based on language (Andreas et al 2018) which show strong results on instruction following. We find that Misra et al. (2017) performs a little worse than our full information oracle method on the multi-room domain when given all subgoals along with the instruction, and significantly worse when given just the instruction. On the object relocation domain, Misra et al. (2017) performs around the same as our instruction baseline. We would like to emphasize that our work is complementary to better instruction following methods/architectures, it provides us a way to incorporate additional corrections in scenarios where just instructions are misspecified/vague. The specific comparison suggested, Artzi and Zettlemoyer, needs a domain specific executor and a formal language over actions. This approach requires specific engineering for each task and it\\u2019s unclear how to create a deterministic executor for ours. We also note that recent state of the art work in instruction following [Andreas 2018], [Misra 17], [Wang 2018], [Janner 2017] do not compare to A+Z for their tasks. \\n\\n\\u201cMoreover, the current experiments does not convince the reviewer if the claims are true in a more realistic setup\\u201d\\n-> We have now added an additional continuous state space environment, robotic object manipulation, and tested over more varied types of corrections, which demonstrates the applicability of our method to diverse task and correction setups. These results can be found in Section 7.4, and show that our method scales to different setups. \\n\\n\\u201cMoreover the authors need to compare their method in an environment that has been previously used for other domains with instructions\\u201d\\n-> Our algorithm incorporates language corrections to improve agent behavior quickly on new tasks, when the instruction is vague or ambiguous. No other work to our knowledge studies this problem setting, so we made our own environments for this task - based on existing instruction following domains. Our minigrid environment is a partially observed navigation-based environment and shares structural similarities to existing navigation-based environments such as [Matterport 3D, SAIL, Pond world]. 
\\n\\n[1] Wang, Xin et al. \\u201cLook Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation.\\u201d CoRR abs/1803.07729 (2018)\\n\\n[2] Andreas, Jacob et al. \\u201cLearning with Latent Language.\\u201d NAACL-HLT (2018).\\n\\n[3] Misra, Dipendra Kumar et al. \\u201cMapping Instructions and Visual Observations to Actions with Reinforcement Learning.\\u201d EMNLP (2017).\"}",
"{\"title\": \"Interesting problem setup; insufficient experiments\", \"review\": \"This paper provides a meta learning framework that shows how to learn new tasks in an interactive setup. Each task is learned through a reinforcement learning setup, and then the task is being updated by observing new instructions. They evaluate the proposed method in a simulated setup, in which an agent is moving in a partially-observable environment. They show that the proposed interactive setup achieves better results than when the agent all the instructions are fully observable at the beginning.\\n\\nThe task setup is very interesting. However, the experiments are rather simplistic, and does not evaluate the full capability of the model. Moreover, the current experiments does not convince the reviewer if the claims are true in a more realistic setup. The authors compare the proposed method with one algorithm (their baseline) in which all the instructions are given at the beginning. I am wondering how the method will be compared with a state-of-the-art method that focuses on following instructions, e.g., Artzi and Zettlemoyer work. Moreover, the authors need to compare their method in an environment that has been previously used for other domains with instructions.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Meta-Learning Language-Guided Policy Learning\", \"review\": \"Summary:\\nThis paper studies how to teach agents to complete tasks via natural language instructions in an iterative way, e.g., correct the behavior of agents. This is a very natural way to learn as humans. The basic idea is to learn a model that takes correction and history as inputs and output what action to take. This paper formulates this in meta-learning setting in which each task is drawn from a pre-designed task distribution and then the models are able to adapt to new tasks very fast. The proposed method is evaluated in a virtual environment where the task is to pick up a particular object in a room and bring it to a particular goal location in a different room. There are two baselines: 1) instruction only (missing information), 2) full information (not iterative), the proposed method outperforms 1) with higher task completion rate and 2) with fewer number of corrections.\", \"strength\": [\"This paper addresses a very interesting problem in order to make agents learn more human like.\"], \"comments\": [\"Only one setting is studied. And, the task distribution seems not very complex.\", \"How the proposed model performs if the task is a little bit out of distribution?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice idea, very limited experimental validation\", \"review\": \"\", \"update\": \"I've increased my rating based on the authors' thorough responses and the updates they've made to the paper. However, I still have a concern over the static nature of the experimental environments.\\n\\n=====================\\n\\nThis paper proposes the use of iterative, linguistic corrections to guide (ie, condition and adjust) an RL policy. A major challenge in learning language-guided policies is grounding the language in environment states and agent actions. The authors tackle this challenge with a meta-learning approach.\\n\\nThe approach is fairly complex, blending imitation and supervised learning. It operates on a training set from a distribution of virtual pick-move-place tasks. The policy to be learned operates on this set and collects data, via something close to DAgger, for later supervised learning on the task distribution. The supervised-learning data comprises trajectories augmented with linguistic subgoal annotations, which are referred to as policy \\\"corrections.\\\" By ingesting its past trajectories and the correction information, the policy is meant to learn to solve the task and to ground the corrections at the same time, end-to-end. Correction annotations are derived from an expert policy.\", \"the_idea_of_guiding_a_policy_through_natural_language_and_the_requisite_grounding_of_language_in_environment_states_and_policy_actions_have_been_investigated_previously\": \"for example, by supervised pretraining on a language corpus, as in the cited work of Andreas et al. (2018). The alternative meta-learning approach proposed here is both well-motivated and original.\\n\\nGenerally, I found the paper clear and easy to read. The authors explain convincingly the utility of guiding policies through language, especially with respect to the standard mechanisms of reward functions (sparse, engineered) and demonstrations (expertise required). The paper is also persuasive on the utility of iterative, interactive correction versus a fully-specified language instruction given a priori. The meta-learning algorithm and training/test setup are both explained well, despite their complexity. On the other hand, most architectural details necessary to reproduce the work are missing, at least from the main text. This includes various tensor dimensions, the structure of the network for perceiving the state, etc.\\n\\nI like the proposed experimental setting. It enables meta-learning on sequential decision making problems in a partially observable environment, which seems useful to the research community at large. Ultimately, however, this paper's significance is not evident to me, mainly because the proposed method lacks thorough experimental validation. No standard baselines are evaluated on the task (with or without meta-learning), nor is a detailed analysis of the learned policies undertaken. The ablation study is useful, and a good start, but insufficient in my opinion. Unfortunately, the results are merely suggestive rather than convincing.\\n\\nSome things I'd like to see in an expanded results section before recommending this paper include:\\n- Comparison to an RL baseline that attempts to learn the full task, without meta-training or language corrections.\\n- Comparison to a baseline that learns from intermediate rewards. 
Instead of annotating data with corrections, you could provide +/- scalar rewards throughout each trajectory based on progress towards the goal (since you know the optimal policy). How effective might this be compared to using the corrections?\\n- Comparison to a baseline that does some kind of pretraining on the language corrections, as in Andreas et al. (2018).\\n- Quantification of how much meta-training data is required. What is the sample complexity like with/without language corrections?\\n\\nI also have concerns about the need for near-optimal agents on each task -- this seems very expensive and inefficient. The expert policy is obtained via RL on each individual task using \\\"ground truth\\\" rewards. It is not specified what these rewards are, nor is it stated how near to optimal the resulting policy is nor how this nearness affects the overall meta-learning process.\\n\\nIt's unclear to me how the \\\"full information\\\" baseline processes and conditions on the full set of subgoals/corrections. Are they read as a single concatenated string converted to one vector by the bi-LSTM?\\n\\nThere also might be an issue with the experimental setup, unless I've misunderstood it. The authors state that \\\"the agent only needs 2 corrections where the first correction is the location of the goal object and the second is the location of the goal square.\\\" But if the specific rooms, indicated by colors, do not change location from task to task (and they appear not to from all the figures), then the agent can learn the room locations during meta-training and these two \\\"corrections\\\" tell it everything it needs to know to solve the task.\", \"pros\": [\"Appealing, well-motivated idea for training policies via language.\", \"Clear, pleasant writing and good communication of a complicated algorithm.\", \"Good experimental setup that should be useful in other research (except for possible issue with static room locations).\"], \"cons\": [\"The need for a near-optimal policy for each task.\", \"Overall complexity of the training process.\", \"The so-called corrections are actually linguistic statements of subgoals computed from the optimal policy. There is much talk in the introduction of interactive policy correction by humans, which is an important goal and interesting problem, but the present paper does not actually investigate human interaction. This comes as a letdown after the loftiness of the introduction.\", \"Various details needed for reproduction are lacking. Maybe they're in the supplementary material; if so, please state that in the main text.\", \"Major lack of comparisons to alternative approaches.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
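To make the training loop these reviews and responses discuss concrete, here is a schematic, runnable toy of a DAgger-style loop with linguistic corrections: the policy conditions on the instruction plus the corrections received so far, an expert relabels every visited state, and a new correction is appended after each rollout. Everything here (the 1-D chain environment, the scripted expert, the lookup-table "policy") is an illustrative assumption, not the paper's architecture.

```python
import random

def expert_action(goal, pos):                  # scripted near-optimal agent
    return 1 if goal > pos else -1

def correction_for(goal, final_pos):           # stand-in linguistic correction
    return "go right" if goal > final_pos else "go left"

def rollout(policy, goal, context, steps=6):
    pos, visited = 0, []
    for _ in range(steps):
        visited.append(pos)
        pos += policy(pos, context)
    return visited, pos

def make_policy(dataset):
    def policy(pos, context):
        # Lookup-table "behavioral cloning" on (state, latest context line).
        matches = [a for p, c, a in dataset if p == pos and c == context[-1]]
        return matches[0] if matches else random.choice([-1, 1])
    return policy

random.seed(0)
dataset, goals = [], [3, -2, 5]
policy = make_policy(dataset)
for _ in range(3):                             # DAgger rounds
    for goal in goals:
        context = ["reach the goal"]           # the initial instruction
        for _ in range(2):                     # rollouts, each earns a correction
            visited, final = rollout(policy, goal, context)
            dataset += [(p, context[-1], expert_action(goal, p)) for p in visited]
            context.append(correction_for(goal, final))
    policy = make_policy(dataset)
print("action at pos 0 given 'go right':", policy(0, ["go right"]))
```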
]
} |
|
HJfSEnRqKQ | Active Learning with Partial Feedback | [
"Peiyun Hu",
"Zachary C. Lipton",
"Anima Anandkumar",
"Deva Ramanan"
] | While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes) and the form of an annotation (typically yes/no binary feedback). To annotate example corpora for multiclass classification, we might need to ask multiple yes/no questions, exploiting a label hierarchy if one is available. To address this more realistic setting, we propose active learning with partial feedback (ALPF), where the learner must actively choose both which example to label and which binary question to ask. At each step, the learner selects an example, asking if it belongs to a chosen (possibly composite) class. Each answer eliminates some classes, leaving the learner with a partial label. The learner may then either ask more questions about the same example (until an exact label is uncovered) or move on immediately, leaving the first example partially labeled. Active learning with partial labels requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels between rounds. Experiments on Tiny ImageNet demonstrate that our most effective method improves 26% (relative) in top-1 classification accuracy compared to i.i.d. baselines and standard active learners given 30% of the annotation budget that would be required (naively) to annotate the dataset. Moreover, ALPF-learners fully annotate TinyImageNet at 42% lower cost. Surprisingly, we observe that accounting for per-example annotation costs can alter the conventional wisdom that active learners should solicit labels for hard examples. | [
"Active Learning",
"Learning from Partial Feedback"
] | https://openreview.net/pdf?id=HJfSEnRqKQ | https://openreview.net/forum?id=HJfSEnRqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJeJlWjSlE",
"H1eMytqJ0Q",
"rJl_6OcJCm",
"rJg1hu9JCX",
"SkxYxO9kRX",
"B1xh9IemTX",
"BklDVW9F2X",
"SkgbODq_3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545085159289,
1542592730433,
1542592703702,
1542592679199,
1542592497024,
1541764755925,
1541148974623,
1541085032588
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1448/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1448/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1448/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1448/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper is on active deep learning in the setting where a label hierarchy is available for multiclass classification problems: a fairly natural and pervasive setting. The extension where the learner can ask for example labels as well as a series of questions to adequately descend the label hierarchy is an interesting twist on active learning. The paper is well written and develops several natural formulations which are then benchmarked on CIFAR10, CIFAR100, and Tiny ImageNet using a ResNet-18 architecture. The empirical results are carefully analyzed and appear to set interesting new baselines for active learning.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Sets strong experimental baselines for active learning in hierarchical classification settings\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thanks for your feedback. We are glad that you appreciated the usefulness of the setup, the soundness of the experiments, and the insights of the results. We are also grateful for your thoughtful questions and recommendations.\\n\\n1) Yes, the t+1 is a mistake. Thanks for the catch! We will fix this in the camera ready version of the paper. \\n\\n2) A standard multi-class classifier cannot make use of the partially labeled data. The very purpose of these initial experiments was to establish as a sanity check that our setup for learning from partial labels with neural networks works in the first place (before adding the complexity of active learning). The point is to show that the model gets additional predictive performance as compared to if it only relied on the subset of data that had been fully annotated. \\n\\n3) One key feature of ALPF is that a better algorithm identifies the correct label with a smaller number of (binary) questions. To compute the number of questions asked, we *record* the number of queries required to conclusively identify the label of every example. Note that this requires at least 1 question for each example, but may be much faster than the naive approach of drilling through the whole label hierarchy fresh for each example.\\n\\nOur experiments compare all three acquisition strategies with AQ (EIG, ERC, and EDC). The difference between AQ and ALPF is that AQ selects examples i.i.d., and chooses only which (possibly composite) label to query. By contrast, ALPF at each round selects both the example and the label, possibly moving on to a new example and leaving the previous example with a partial label.\\n\\nThere are two relevant observations to the reviewer\\u2019s question. On Tiny ImageNet, ERC ends up spending the first 60K (the first two batch after warm-up) questions on just 32K distinct examples while EDC ends up querying 51K distinct examples. As we can see in Figure 2 (and not surprisingly), ERC obtains more exactly labeled examples early on, while EDC has less remaining classes overall. The fact that EDC consistently outperforms ERC early on suggests that given a very limited budget it might be better to coarsely but strategically annotate a larger dataset than to focus on obtaining more granular labels. How precisely this translates into improved classification performance is an interesting question and warrants deeper theoretical inquiry.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank the reviewer for their thoughtful feedback and were glad to see that you found our proposed setting to be both interesting and important. We would like to respond to your concerns briefly:\\n\\nFirst, concerning your questions:\\n***Re the failure of vanilla active learning***\\nSince theoretical analysis guaranteeing the performance of active + deep learning has yet to be established, it\\u2019s hard to say *why* vanilla uncertainty-sampling-based active learning doesn\\u2019t work so well when applied on image classification datasets with convolutional neural networks. However, we are not the first to find this. Take for example the results Active Learning for Convolutional Neural Networks: A Core Set Approach (https://arxiv.org/pdf/1708.00489.pdf), which was published at ICLR 2018, where uncertainty sampling and even the more recent deep Bayesian active learning by disagreement perform no better than random on CIFAR 10 and only marginally better for CIFAR 100. In contrast, vanilla AL strategies have demonstrated promise on a number of NLP tasks (e.g. https://arxiv.org/pdf/1808.05697.pdf).\\n\\n***Re the taxonomy of labels***\\nWhile tree-structured taxonomies are especially convenient, our methods do not in principle depend specifically on tree structure, requiring only a list of composite labels. One can draw a parallel to general formulations of the game twenty questions where the available set of questions needn\\u2019t form tree. We thank the reviewer for the suggestion for future work and plan to evaluate our methods on with label ontologies like the MeSH labels (medical subject headings) used to annotate biomedical articles that do not form a strict tree hierarchy (some nodes have multiple parents). \\n\\nRegarding theoretical guarantees, we agree with the reviewer that establishing theoretical guarantees for active learning with partial labels is an especially exciting direction and plan to pursue future work in this direction. We note that generally there is a considerable gap between the theory of active learning and the practical methods established to cope with high dimensional data and modern classifiers and hope to close this gap in the future with rigorous analysis.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We thank the reviewer for their thoughtful feedback and clear recommendation to accept. We were glad to see that you found the paper to be well-articulated and easy to read.\\n\\nPer your feedback, we will bring up the related work (currently in section 4) and cite it throughout as each prior technical idea is introduced. Regarding the related work on partial labels are you referring to the three papers we cite later on (Grandvalet & Bengio, 2004; Nguyen & Caruana, 2008; Cour et al., 2011) or others that we missed? Please let us know if you know of other related references and we\\u2019ll be happy to add any missing citations.\\n\\nWe agree that the choice of approaches in this paper is straightforward and meant to emphasize the importance of a novel problem setting as well as compelling experimental results. We also agree that a great next step for this work would be to establish theoretical guarantees for active learning with partial labels.\"}",
"{\"title\": \"General reply to reviewers\", \"comment\": \"We would like to thank all three reviewers for their thoughtful and detailed reviews. Overall, we were glad to see a consensus to accept the paper, with the reviews emphasizing the importance and novelty of our proposed problem setting, and the strength of our experimental work. As we continue to improve the draft, we will incorporate the constructive feedback from each reviewer. Please find replies to each review below in the respective threads.\"}",
"{\"title\": \"Interesting novel Active Learning setting\", \"review\": \"The authors introduce a new Active Learning setting where instead of querying for a label for a particular example, the oracle offers a partial or weak label. This leads to a simpler and more natural way of retrieving this information that can be of use many applications such as image classification.\\n\\nThe paper is well-written and very easy to follow. The authors first present the overview of the learning scenario and then suggest three sampling strategies based on the existing AL insights (expected information gain, expected remaining classes, expected decrease in classes). \\n\\nAs the labels that the algorithm has to then use are partial, they make use of a standard algorithm to learn from partial labels -- namely, minimizing a partial log loss. It would be nice to properly reference related methods in the literature in Sec. 2.1.\\n\\nThe way of solving both the learning from partial labels and the sampling strategies are not particularly insightful. Also, there is a lack of theoretical guarantees to show value of a partial label as compared to the true label. However, as these are not the main points of the paper (introduction of a novel learning setting), I see these as minor concerns.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, but lack of theoretical analysis\", \"review\": \"This paper proposes active learning with partial feedback, which means at each step, the learner actively chooses both which example to label and which binary question to ask, then learn the multi-class classifier with these partial labels. Three different sampling strategies are used during active learning. Experimental results demonstrate that the proposed ALPF strategy outperforms existing baselines on the predicting accuracy under a limited budget.\\n\\nThis paper is well-written. The main ideas and claims are clearly expressed. ALPF combines active learning with learning from partial labels. This setting is interesting and important, especially when the number of categories is large and share some hierarchical structure. The experimental results are promising. My main concern about this work is the lack of theoretical guarantees, which is usually important for active learning paper. it\\u2019s better to provide some analysis on the efficiency of ALPF to further improve the quality of the paper.\", \"i_have_the_following_questions_for_the_authors\": \"+Why vanilla active learning strategy does not work well? Which uncertainty measurement do you use here?\\n+The performances of this work heavily rely on the taxonomy of labels, while in some cases the taxonomy of labels is not tree structure but a graph, i.e. a label may belong to multiple hyper-labels. Can ALPF still work on these cases?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting setting combining active learning and learning with partial labesl. Nice experimental contribution, lack of conceptual insights.\", \"review\": \"The paper considers a multiclass classification problem in which labels are grouped in a given number M of subsets c_j, which contain all individual labels as singletons. Training takes place through an active learning setting in which all training examples x_i are initially provided without their ground truth labels y_i. The learner issues queries of the form (x_i,c_j) where c_j is one of the given subsets of labels. The annotator only replies yes/no according to whether the true label y_i of x_i belongs to c_j or not. Hence, for each training example the learner maintains a \\\"version space\\\" containing all labels that are consistent with the answers received so far for that example. The active learning process consists of the following steps: (1) use the current learning model to score queries (x_i,c_j); (2) query the best (x_i,c_j); (3) update the model.\\nIn their experiments, the authors use a mini-batched version, where queries are issued and re-ranked several times before updating the model. Assuming the learner generates predictive models which map examples to probability distributions over the class labels, several uncertainty measures can be used to score queries: expected info gain, expected remaining classes, expected decrease in remaining classes. Experiments are run using the Res-18 neural network architecture over CIFAR10, CIFAR100, and Tiny ImageNet, with training sets of 50k, 50k, and 100k examples. The subsets c_j are computed using the Wordnet hierarchy on the label names resulting in 27, 261, and 304 subsets for the three datasets. The experiments show the advantage of performing adaptive queries as opposed to several baselines: random example selection with binary search over labels, active learning over the examples with binary search over the labels, and others.\", \"this_paper_develops_a_natural_learning_strategy_combining_two_known_approaches\": \"active learning and learning with partial labels. The main idea is to exploit adaptation in both choosing examples and queries. The experimental approach is sound and the results are informative. In general, a good experimental paper with a somewhat incremental conceptual contribution.\\n\\nIn (2) there is t+1 on the left-hand side and t on the right-hand side, as if it were an update. Is it a typo?\\n\\nIn 3.1, how is the standard multiclass classifier making use of the partially labeled examples during training?\\n\\nHow are the number of questions required to exactly label all training examples computed? Why does this number vary across the different methods?\\n\\nWhat specific partial feedback strategies are used by AQ for labeling examples?\\n\\nEDC seems to consistently outperform ERC for small annotation budgets. Any intuition why this happens?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJgSV3AqKQ | Combining adaptive algorithms and hypergradient method: a performance and robustness study | [
"Akram Erraqabi",
"Nicolas Le Roux"
] | Wilson et al. (2017) showed that, when the stepsize schedule is properly designed, stochastic gradient generalizes better than ADAM (Kingma & Ba, 2014). In light of recent work on hypergradient methods (Baydin et al., 2018), we revisit these claims to see if such methods close the gap between the most popular optimizers. As a byproduct, we analyze the true benefit of these hypergradient methods compared to more classical schedules, such as the fixed decay of Wilson et al. (2017). In particular, we observe they are of marginal help since their performance varies significantly when tuning their hyperparameters. Finally, as robustness is a critical quality of an optimizer, we provide a sensitivity analysis of these gradient based optimizers to assess how challenging their tuning is. | [
"optimization",
"adaptive methods",
"learning rate decay"
] | https://openreview.net/pdf?id=rJgSV3AqKQ | https://openreview.net/forum?id=rJgSV3AqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rylaDK7klE",
"Hkxi1iUDpX",
"rygZgpNBpm",
"B1O9N17MTQ",
"SkgupY1-p7",
"Bkgyx6Cg67",
"SyeuS3Ae6X",
"BkxfqdZq3Q"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544661349280,
1542052579085,
1541913832595,
1541709618020,
1541630400008,
1541627110786,
1541626944176,
1541179530057
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1447/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1447/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1447/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1447/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1447/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1447/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper is a premature submission that needs significant improvement in terms of conceptual, theoretical, and empirical aspects.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"Thanks for the constructive feedback\", \"comment\": \"First, the authors would like to thank you for the time given to reviewing this paper and the constructive comments you are offering. We will take them into account for future submission.\\n\\n1. , 4. , 5. - These are relevant suggestions and will be followed for the future version.\\n\\n2. By \\\"true benefit\\\", we meant the gain in performance of Hypergradient when its hyperparameters are tuned more carefully (as we proved in the paper, hypergradient can significantly benefit from such tuning). However, we do agree this claim could be formulated in a more consistent way w.r.t to our results.\\n\\n3. As in 2., the sensitivity analysis was only empirical. We will investigate a large set of experimental settings to support our observations.\\n\\n6. Robust Implicit Backpropagation (Fagan & Iyengar, 2018) is offering ideas that can perfectly fit in the landscape of this study. In fact, Implicit Backpropagation (IB) approximates the update of Implicit Stochastic Gradient Descent which is known to be stable and robust to learning rate. This makes it a good candidate to consider in an investigation like the one we are conducting, in order to check how IB compares to adaptive gradient methods and the various learning rate schedules we are considering. More specifically, IB seems to be very efficient for recurrent models. Since, we are planning to extend our investigation to tasks that correspond to recurrent models (e.g. language modelling), IB would definitely be a good method to compare to. Thank you again for sharing this reference. We will be considering it in a future version eventually.\"}",
"{\"comment\": \"On the positive side this paper performs several interesting experiments comparing various learning rate tuning algorithms.\\nThe paper also spends time on sensitivity/robustness, which has not received adequate attention in the literature.\\nHowever, I am afraid there is no technical or methodological contribution from this paper that meets the ICLR standards.\", \"some_feedback_which_will_hopefully_help_in_future_submission\": \"1. Use clear definitions and notation to introduce methods. Currently, all methods are only described in words, and this creates confusion, especially for the \\\"hypergradient method\\\" that is new.\\n\\n2. Be consistent about claims and results presented in the paper. For example, in the Abstract the claim is \\\"We analyze the true benefit of these hypergradient methods...\\\" but not such analysis is presented. If your goal is experimentation and not analysis it is better to make that clear early.\\n\\n3. As above, there is no \\\"sensitivity analysis\\\" offered in this paper. I do think this is an important subject and I applaud the authors for focusing on that. However, currently in the paper there are only experimental results and simulations. \\nThere are limitations with the experiments as well. In Figures 4-5-6 we only get some plots on how the train error depends on the learning rate on some particular datasets. A deeper investigation would be helpful here as to why we see the results we see, so as to substantiate claims such as \\\" Figure 4 shows the performance of SGD and SGDN worsens faster when increasing\\nthe learning rate than when decreasing it.\\\" (page 6) It would be nice to get such general results, but this requires a deeper and more thorough investigation, whereas currently the evidence may be circumstantial. \\n\\n4. Section 2.1 is a good place to start introducing notation. Although the referenced methods are known, it helps to lay out some notation so that readers have a clear idea what the authors have in mind.\\n\\n5. Similar to point #1: p2 \\\"but this method seems to work better in practice.\\\" Blanket statements are hard to accept without solid arguments. What does \\\"better\\\" mean here and what does \\\"in practice\\\" mean? Overall, the authors should avoid such statements without presenting solid evidence. Another example in p5: \\\"It is interesting to see that, by using the optimization dynamics in an online fashion, one can recover the training performance of a carefully tuned decay schedule.\\\"\\n\\n6. About sensitivity analysis the authors could also look into (Robust Implicit Backpropagation, Fagan & Iyengar, 2018) where the authors use implicit methods to stabilize fitting algorithms for neural networks. Could the ideas in that paper apply here?\", \"title\": \"Review\"}",
"{\"title\": \"Thank you for your valuable comments\", \"comment\": \"We would like to thank the reviewers for their time. We will take their comments into account for a future version of this work.\"}",
"{\"title\": \"A technical report rather than a research paper\", \"review\": \"General:\\nIn general, this looks like a technical report rather than a research paper to me. Most parts of the paper are about the empirical analysis of adaptive algorithms and hyper-gradient methods. The contribution of the paper itself is not sufficient to be accepted.\", \"possible_improvements\": \"1. The study of such optimization problem should consider incorporating mathematics analysis with necessary proof. e.g. show the convergence rate under specific constraints. Even the paper is based on others' work, the author(s) could have extended their work by giving stronger theory analysis or experiment results.\\n2. Since this is an experimental-based paper, besides CIFAR10 and MNIST data sets, the result would be more convincing if the experiments were also done on ImageNet(probably should also try deeper neural networks).\\n3. The sensitivity study is interesting but the experiment results are not very meaningful to me. It would be better if the author(s) gave a more detailed analysis.\\n4. The paper could be more consistent. i.e. emphasize the contribution of your own work and be more logical. I might miss something, but I feel quite confused about what is the main idea after reading the paper.\", \"conclusion\": \"I believe the paper has not reached the standard of ICLR. Although we need such paper to provide analysis towards existing methods, the paper itself is not strong enough.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"my apologies\", \"comment\": \"Sorry, I meant to erase the comment \\\"which is self-apparently important\\\", which isn't appropriate and doesn't make sense.\"}",
"{\"title\": \"incremental empirical contribution\", \"review\": [\"Clarity: Below average\", \"The introduction would be easier to follow if you named Baydin's approach and your own approach, because in the 2-4 bullet points you say \\\"this online scheme\\\", and \\\"the learning rate schedule\\\", without being perfectly clear what you are talking about\", \"The last sentence of the introduction is meant to clearly state your hypothesis, so I was expecting \\\"emphasize the value of *\\\", i.e. either adaptive or non-adaptive methods, rather than just general 'tuning', which is self-apparently important.\"], \"quality\": \"Below average\\nThis is a purely empirical study that does not go too deep. It is not quite a review paper, but only compares previous methods.\", \"pros\": \"I especially appreciate the sensitivity analysis, ie Fig 6. If only all ML papers had something like this to suggest the difficulty of setting hyperparameters for their proposed methods.\", \"cons\": [\"You should use mathematics to describe what you are talking about with adaptive stepsize in Sec 2.1. \\\"these methods multiply the gradient with a matrix\\\". Just giving one equation would be extremely helpful.\", \"If I understand correctly, you are interpreting the inverse-Hessian as used in Newton's method and other non-diagonal 'gradient conditioners' as types of stepsize. This is definitely interesting, but again it would be very simple to see what you are saying with an equation instead of starting with the phrase \\\"stepsize\\\" which is generally understood to be a scalar multiple on the gradient.\", \"I'm surprised you jump right into experiements after your background settings. It's apparent that this paper fundamentally relies on the Wilson (2017) hypergradient paper. Your paper should be more self-contained: 'hypergradient' is not even defined in this paper, is it?...\"], \"especially\": \"How do you know that if you change the model architecture, data, and loss, that a similar result will occur? I imagine that it heavily relies on the data and model-- in other words, that the sensitivity is dependent on \\\"how an algorithm reacts to a certain data/loss/model landscape\\\". I'm trying to say that I'm not convinced these results generalize to any other situation than the one presented here (so does it really say anything about the different stepsize selection rules?)\", \"random_side_note\": \"Since your appendix is only a few lines, you could consider succinctly listing learning rates with set notation, for example {1e-n,5e-n : -5<n<1}.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"An emperical study on several methods for adjusting learning rate\", \"review\": \"The paper reports the results of testing several stepsize adjustment related methods including vanilla SGD, SGD with Neserov momentum, and ADAM. Also, it compares those methods with hypergradient and without. The paper reports several interesting results. For instance, they found hypergradient method on common optimizers doesn't perform better that the fixed exponential decay method propose by Wilson et al. (2017).\\n\\nThough it is an interesting paper, but the main issue with this paper is that it lacks enough innovation with respect to theory or empirical study. It is not deep or extensive enough for publishing at a top conference. \\n \\nOn page 3, it will be better to explain why use mu = 0.9, beta, etc. Why use CIFAR-10, MNIST?\\n\\nThe URL in References looks out of bound.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJerEhR5Km | Novel positional encodings to enable tree-structured transformers | [
"Vighnesh Leonardo Shiv",
"Chris Quirk"
] | With interest in program synthesis and similarly flavored problems rapidly increasing, neural models optimized for tree-domain problems are of great value. In the sequence domain, transformers can learn relationships across arbitrary pairs of positions with less bias than recurrent models. Under the intuition that a similar property would be beneficial in the tree domain, we propose a method to extend transformers to tree-structured inputs and/or outputs. Our approach abstracts the transformer's default sinusoidal positional encodings, allowing us to substitute in a novel custom positional encoding scheme that represents node positions within a tree. We evaluated our model in tree-to-tree program translation and sequence-to-tree semantic parsing settings, achieving superior performance over the vanilla transformer model on several tasks.
| [
"program translation",
"tree structures",
"transformer"
] | https://openreview.net/pdf?id=SJerEhR5Km | https://openreview.net/forum?id=SJerEhR5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJg_VYOleV",
"rygwHC_c0m",
"r1eRq6OcCm",
"HklUr6u50Q",
"HJgu4gdM6Q",
"SJe_mn_x67",
"B1xJTjdxpX",
"Ske48oug67",
"BJeiRhrpn7",
"SyehfLgq37",
"SkgZrsYu2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544747312226,
1543306815145,
1543306646081,
1543306558426,
1541730352096,
1541602336283,
1541602231080,
1541602124226,
1541393618851,
1541174804078,
1541081913389
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1446/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1446/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1446/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1446/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1446/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1446/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1446/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1446/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper extends the transformer model of Vashwani et al. by replacing the sine/cosine positional encodings with information reflecting the tree stucture of appropriately parsed data. According to the reviews, the paper, while interesting, does not make the cut. My concern here is that the quality of the reviews, in particular those of reviewers 2 and 3, is very sub par. They lack detail (or, in the case of R2, did so until 05 Dec(!!)), and the reviewers did not engage much (or at all) in the subsequent discussion period despite repeated reminders. Infuriatingly, this puts a lot of work squarely in the lap of the AC: if the review process fails the authors, I cannot make a decision on the basis of shoddy reviews and inexistent discussion! Clearly, as this is not the fault of the authors, the best I can offer is to properly read through the paper and reviews, and attempt to make a fair assessment.\\n\\nHaving done so, I conclude that while interesting, I agree with the sentiment expressed in the reviews that the paper is very incremental. In particular, the points of comparison are quite limited and it would have been good to see a more thorough comparison across a wider range of tasks with some more contemporary baselines. Papers like Melis et al. 2017 have shown us that an endemic issue throughout language modelling (and certainly also other evaluation areas) is that complex model improvements are offered without comparison against properly tuned baselines and benchmarks, failing to offer assurances that the baselines would not match performance of the proposed model with proper regularisation. As some of the reviewers, the scope of comparison to prior art in this paper is extremely limited, as is the bibliography, which opens up this concern I've just outlined that it's difficult to take the results with the confidence they require. In short, my assessment, on the basis of reading the paper and reviews, is that the main failing of this paper is the lack of breadth and depth of evaluation, not that it is incremental (as many good ideas are). I'm afraid this paper is not ready for publication at this time, and am sorry the authors will have had a sub-par review process, but I believe it's in the best interest of this work to encourage the authors to further evaluate their approach before publishing it in conference proceedings.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Borderline and not ideally reviewed, but not quite ready\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Hi,\\n\\nThank you very much for your feedback! We have explicitly clarified potentially confusing notation in our revised draft.\", \"regarding_latency\": \"There is no additional latency during training time as positional encodings are directly provided by the teacher. During evaluation-time decoding, there is some extra computation needed to compute one more positional encoding per time step, but this pales in comparison to the stack of attention functions and matrix multiplications each time step demands anyway.\\n\\nRegards!\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Hi,\\n\\nThank you kindly for your feedback. In our revision we have made an effort to clarify implementation details, add more results from experiments, and expand our citations.\\n\\nRegards!\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Hi,\\n\\nThank you very much for your feedback. \\n\\nIn our revised draft, we have done our best to address your concerns. We have added a related work section to better ground our contribution, and we have tried to clarify sections 3 and 4 with some additional detail and figures. We appreciate you pointing out the clarity issues and your recommendations on relevant literature to cite.\\n\\nIn regards to spectral theory approaches to tree node representation, that is a very interesting idea with a lot of promise. It would however be difficult to directly implement within our paradigm. In our system, and in transformers in general, it is assumed that the values of the decoder inputs and positions do not change over time. But as we build up a tree over multiple time steps, its adjacency matrix and associated eigenvectors change. This hinders us from directly using these eigenvectors as positional encodings.\", \"in_regards_to_binary_trees\": \"binary tree representations have been used extensively in NLP literature, and the LCRS representation in particular allows us to directly compare our work with other recent program translation literature. One key benefit to binary tree representations is that they let us work with trees with widely varying degrees among nodes, e.g. abstract syntax trees featuring functions that take arbitrary numbers of arguments. We do agree that binary tree representations have some issues, and are interested in exploring k-ary trees in future work.\\n\\nRegards!\"}",
"{\"title\": \"More details\", \"comment\": \"The current draft has no related work section and does not put the research in context with the existing literature. It contains only a mere 7 references, 3 of which are Transformer (the model used), Adam (the optimizer used) and Chen et al (the only baseline).\\nIt ignores the many works that have used position embeddings / encodings before such as [1, 2, 3]. I also suspect that there exist spectral theory approaches to represent nodes in a tree (consider the eigenvectors of the adjacency matrix for example)\", \"regarding_clarity\": \"Some notation is not introduced in section 3. Dimensions are not always obvious and more figures would help with comprehension.\", \"concerning_binary_trees\": \"There are a few non-discussed issues about relying on the Left-Child Right-Sibling Representation, such as whether any information is lost (this changes the number of branches between nodes) and how the increase of the tree depth affects downstream performance (since the encodings can only encode information perfectly up to k branches). The trade-off between n and k is also not discussed (for example when does it become useful to represent a n-ary tree into its Left-Child Right-Sibling Representation?).\\n\\n[1] Self-Attention with Relative Position Representations\\n[2] Music Transformer\\n[3] Convolutional Sequence to Sequence Learning\"}",
"{\"title\": \"Please expand\", \"comment\": \"Hello reviewer 3,\\n\\nYour review, while longer than the others on this paper, is very short. You are the only one supporting acceptance of this paper. Could you give a bit more detail about what its strengths and contributions are, and what areas it could improve on or be more clear about?\\n\\nBest,\\nAC\"}",
"{\"title\": \"More detail needed\", \"comment\": \"Hello Reviewer 2,\\n\\nThank you for your review, but I'm afraid a little more detail is needed to justify your score, as your review is quite short.\\n\\nIn particular, what about the experiments makes them too narrow? What additional experiments would you like to see? What key citations are needed? In particular, what graph neural network approaches would you recommend comparing against?\\n\\nIt is essential, when recommending rejection, that a coherent argument be made for it so that the authors have something to respond to or rebut, or at least critical feedback they can use when revising the paper.\\n\\nBest,\\nAC\"}",
"{\"title\": \"More detail needed\", \"comment\": \"Hello reviewer 1,\\n\\nThank you for your review, but I'm afraid a little more detail is needed to justify your score, as your review is quite short.\\n\\nIn particular, what are some references that you feel are missing? Why is it crucial to show experiments with larger trees, since, for example, any grammar can be binarised by putting it into Chomsky Normal Form? In what areas is the paper unclear, and could this be rectified during the rebuttal period?\\n\\nThanks.\\nAC\"}",
"{\"title\": \"Promising approach for enabling transformers to process tree-structured data\", \"review\": \"The authors propose to change the positional encodings in the transformer model to allow processing of tree-structured data.\\nThe tree positional encodings summarize the path between 2 nodes as a series of steps up or down along tree branches with the constraint that traveling up a branch negates traveling down any branch.\\n\\nThe experimental results are encouraging and the method notably outperforms the regular transformer as well as the tree2tree LSTM introduced by Chen et al on larger datasets. \\n\\nThe current draft lacks some clarity and is low on references. It would also be interesting to see experiments with arbitrary trees or at least regular trees with degree > 2 (rather than just binary trees). While the authors only consider binary trees in this paper, it represents a good first step towards generalizing attention-based models to nonlinear structures.\", \"comments\": [\"Would it be possible to use the fact that D_kU = I for the correct branch k? (This happens frequently for binary trees)\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting but incremental\", \"review\": \"The paper describes an interesting idea for using Vashwani's transformer with tree-structured data, where nodes' positions in the tree are encoded using unique affine transformations. They test the idea in several program translation tasks, and find small-to-medium improvements in performance.\\n\\nOverall the idea is promising, but the work isn't ready for publication. The implementation details weren't easy to follow, the experiments were narrow, and there are key citations missing. I would recommend trying some more diverse tasks, and putting this approach against other graph neural network techniques.\", \"revised\": \"I've revised by review upwards by 1, though I still recommend rejection. The authors improved the scholarship by adding many more citations and related work. They also made the model details and implementation more clear. \\n\\nThe remaining problem I see is that the results are just not that compelling, and the experiments do not test any other graph neural network architectures.\\n\\nSpecifically, in Table 1 (synthetic experiments) the key result is that their tree-transformer outperforms seq-transformer on structured input. But seq-transformer is best on raw programs. I'm not sure what to make of this. But I wouldn't use tree-transformer in this problem. I'd use seq-transformer.\\n\\nIn Table 2 (CoffeeScript-JavaScript experiments), no seq-transformer results are presented. That seems... suspicious. Did the authors try those experiments? What were the results? I'd definitely like to see them, or an explanation of why they're not shown. This paper tests whether tree-transformers are better than seq-transformer and other seq/tree models, but this experiment's results do not address that fully. Of the 8 tasks tested, tree-transformer is best on 5/8 while tree2tree is best on 3/8. \\n\\nIn Table 3, there's definitely a moderate advantage to using tree-transformer over seq-transformer, but in 5/6 of the tasks tree-transformer is worse than other approaches. The authors write, \\\"Transformer architectures in general, however, do not yet compete with state-of-the-art results.\\\". \\n\\nFinally, no other graph neural network/message-passing/graph attention architectures are tested (eg. Li et al 2016 was cited but not tested, and Gilmer et al 2017 and Veli\\u010dkovi\\u0107 et al 2017 weren't cited or tested), but there's a reasonable chance they'd outperform the tree-transformer.\\n\\nSo overall the results are intriguing, and I believe there's something potentially valuable here. But I'm not sure there's sufficient reason presented in the paper to use tree-transformer over seq-transformer or other seq/tree models. Also, while the basic idea is nice, as I understand it is restricted to trees, so other graphical structures wouldn't be handled.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting tree-structured positional embedding\", \"review\": \"This work proposes a novel tree structure positional embedding by uniquely representing each path in a tree using a series of transformation, i.e., matmul for going up or down the edges. The tree encoding is used for transformer and shows gains over other strong baselines, e.g., RNNs, in synthetic data and a program translation task.\", \"pros\": [\"An interesting approach for representing tree structure encoding using a series of transformation. The idea of transformation without learnable parameters is novel.\", \"Better accuracy both on synthetic tasks and code translation tasks when compared with other strong baselines.\"], \"cons\": [\"Computation seems to be larger given that the encoding has to be recomputed in every decoding step. I'd like to know the latencies incurred by the proposed method.\"], \"other_comment\": [\"I'd like to see experimental results on natural language tasks, e.g., syntax parsing.\", \"Section 2: \\\"we see that is is not at all necessary\\\" -> that is\", \"Section 3: Notation is a little bit hard to follow, \\\":\\\" for D and U, and \\\";\\\" in stacking.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1eH4n09KX | Adversarial Audio Super-Resolution with Unsupervised Feature Losses | [
"Sung Kim",
"Visvesh Sathe"
] | Neural network-based methods have recently demonstrated state-of-the-art results on image synthesis and super-resolution tasks, in particular by using variants of generative adversarial networks (GANs) with supervised feature losses. Nevertheless, previous feature loss formulations rely on the availability of large auxiliary classifier networks, and labeled datasets that enable such classifiers to be trained. Furthermore, there has been comparatively little work to explore the applicability of GAN-based methods to domains other than images and video. In this work we explore a GAN-based method for audio processing, and develop a convolutional neural network architecture to perform audio super-resolution. In addition to several new architectural building blocks for audio processing, a key component of our approach is the use of an autoencoder-based loss that enables training in the GAN framework, with feature losses derived from unlabeled data. We explore the impact of our architectural choices, and demonstrate significant improvements over previous works in terms of both objective and perceptual quality. | [
"methods",
"audio processing",
"adversarial audio",
"results",
"image synthesis",
"tasks",
"particular",
"variants"
] | https://openreview.net/pdf?id=H1eH4n09KX | https://openreview.net/forum?id=H1eH4n09KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xbYbh2aN",
"SJefJYu-x4",
"r1gbtNucCm",
"rJe_yVk5Am",
"Hyg0QWJq0X",
"H1xfEekcAX",
"BklQ2J15Rm",
"ByxtWRRtRX",
"rJx81Y1ph7",
"BJgxdni52X",
"BJxtJ80O2m"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1559179640812,
1544812762086,
1543304313094,
1543267296129,
1543266597773,
1543266346265,
1543266219154,
1543265792917,
1541368030377,
1541221480146,
1541101025079
],
"note_signatures": [
[
"~Praveen_Narayanan1"
],
[
"ICLR.cc/2019/Conference/Paper1445/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1445/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1445/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1445/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"Do the authors have a usable implementation we could play with?\", \"title\": \"Code for paper\"}",
"{\"metareview\": [\"The paper presents an algorithm for audio super-resolution using adversarial models along with additional losses, e.g. using auto-encoders and reconstruction losses, to improve the generation process.\", \"Strengths\", \"Proposes audio super resolution based on GANs, extending some of the techniques proposed for vision / image to audio.\", \"The authors improved the paper during the review process by including results from a user study and ablation analysis.\", \"Weaknesses\", \"Although the paper presents an interesting application of GANs for the audio task, overall novelty is limited since the setup closely follows what has been done for vision and related tasks, and the baseline system. This is also not the first application of GANs for audio tasks.\", \"Performance improvement over previously proposed (U-Net) models is small. It would have been useful to also include UNet4 in user-study, as one of the reviewers\\u2019 pointed out, since it sounds better in a few cases.\", \"It is not entirely clear if the method would be an improvement of state-of-the-art audio generative models like Wavenet.\", \"Reviewers agree that the general direction of this work is interesting, but the results are not compelling enough at the moment for the paper to be accepted to ICLR. Given these review comments, the recommendation is to reject the paper.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting approach, but results are not compelling enough\"}",
"{\"title\": \"Response to Reviewer #2 (cont.)\", \"comment\": \"\", \"q7\": \"Since the most directly related work was (Kuleshov 2017), I compared the super resolution (U-net) samples on that website (https://kuleshov.github.io/audio-super-res/ ) to the samples provided for the present work ( https://sites.google.com/view/unsupervised-audiosr/home ) and I was a bit confused, because the quality of the U-net samples in (Kuleshov 2017) seemed to be perceptually significantly better than the quality of the Deep CNN (U-net) baseline in the present work. Perhaps I am in error about this, but as far as I can tell, the superresolution in (Kuleshov et al 2017) is significantly better than the Deep CNN examples here. Is this a result of careful selection of examples? I do believe what I hear, e.g. that the MU-GAN8 is clearly better on some examples than the U-net8. But then for non-identical samples, how come U-net4 actually generally sounds better than U-net8? That doesn\\u2019t make immediate sense either (assuming no overfitting etc). Is the benefit in moving from U-net4 to U-net8 within a GAN context but then stabilizing it with the feature-based loss? If so, then how does MU-GAN8 compare to U-net4? Would there be any info for the reader by doing an ablation removing the feature loss from the GAN framework? etc. I guess I would like to get a better understanding of what is actually going on, even if qualitative. Is there any qualitative or anecdotal observation about which \\u201ctypes\\u201d of samples one system works better on than another? For example, in the provided examples for the present paper, it seemed to be the case that perhaps the MU-GAN8 was more helpful for supersampling female voices, which might have more high-frequency components that seem to get lost when downsampling, but maybe I\\u2019m overgeneralizing from the few examples I heard.\", \"a7\": \"We suspect that the differences you mention may stem from discrepancies in training time. Specifically, Kuleshov et al., 2017 appear to train models for 400 epochs, while our models were trained for 150 epochs due to hardware and time constraints; it is certainly possible that our models did not reach full convergence, although we observed a practical plateau around 150 epochs. Thus, comparisons of model performance at our training levels should still be valid.\\n\\nWe do have a comparison against U-net4 using the numbers published in Kuleshov et al., 2017 (see Table 1), however we didn\\u2019t find it was useful to do in-depth evaluation of this shallow version since early experiments showed that U-net8 was (unsurprisingly) better. \\n\\nLoss and model ablations have been added to the Experiments section, and indeed show that both depth and the proposed feature loss have significant impact on resulting performance.\\n\\nAs you have suspected, we find that our model performs well on sounds that have more high-frequency content. While we aren\\u2019t entirely certain that our model always performs noticeably better with female voices, we do consistently find that consonant sounds are improved with our method. We have added a short discussion of this in the subjective quality analysis portion of the Experiments section.\\n\\nThank you again for the thoughtful comments!\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for the thoughtful and detailed review. Please see our enumerated responses below. \\nWe have also posted a comment above that summarized the changes in the latest revisio\\n\\n*Q1*: The generator network appears to be nearly identical to that of Kuleshov et al (2017)-- which becomes the baseline-- and so the primary contribution differentiating this work is the insertion of that network into a GAN framework along with the additional feature-based loss term. This is overall a nice problem and a nice approach! In that light, I believe that there is a new focus in this work on the perceptual quality of the outputs, as compared to (Kuleshov et al 2017). I would therefore ideally like to see (a) some attempts at perceptually evaluating the resulting output (beyond PESQ, e.g. with human subjects and with the understanding that, e.g. not all AMT workers have the same aural discriminative abilities themselves), and/or (b) more detailed associated qualitative descriptions/visualization of the super-sampled signal, perhaps with a few more samples if that would help. That said, I understand that there are page/space limitations. (more on this next)\\n\\n*A1*: Thank you for the detailed feedback. While time was short, we were able to add a subjective user study to the paper - see the updated Experiments section. We also added more qualitative discussion of super-sampled audio, with associated spectrograms. We are currently updating the web page with more audio samples as well and will update it shortly. \\n\\n*Q2*: Given the similarity of the U-net architectures to (Kuleshov et al 2017), why not move some of those descriptions to the appendix? \\n\\n*A2*: Thanks for this feedback - indeed, we found that our original architectural descriptions were overly detailed. We have removed much of this unnecessary detail, and plan to make a minor revision later today with additional architectural parameters in the appendix. \\n\\n*Q3*: Overall, I didn\\u2019t really understand exactly the role that [superpixel] plays in the system; I wondered if it either needed a lot more clarification (in an appendix?), or just less space spent on it, but keeping the pointers to the relevant references. It seems that the subpixel layer was already implemented in Kuleshov 2017, with some explanation, yet in the present work a large table (Table 1(b)) is presented showing that there is no difference in quality metrics, and the text also mentions that there is no significant perceptual difference in audio. If the subpixel layer were explained in detail, and with justification, then I would potentially be OK with the negative results, but in this case it\\u2019s not clear why spend this time on it here. It\\u2019s possible that there is something simple about it that I am not understanding. I\\u2019m open to being convinced. Otherwise, why not just write: \\u201cFollowing (Kuleshov et al 2017), we use subpixel layers (Shi et al) [instead of ...] to speed up training, although we found that they make no significant perceptual effects.\\u201d or something along those lines, and leave it at that? \\n\\n*A3*: We concede that the presentation of superpixel and subpixel layers was somewhat confusing; we have revised the description in the Methods and Experiments sections to hopefully clarify this. 
One important detail we want to clarify is that while subpixel layers have been previously evaluated, no previous works have attempted to use its simple inverse to *decrease* spatial resolution (previous works use strided convolution or pooling). Our paper evaluates this somewhat obvious inverse operator, referred to as a superpixel layer, and finds that it actually reduces training time without loss of performance.\\n\\n*Q4*: Some spectrograms might be helpful, since they do after all convey some useful information despite not telling much of the perceptual story. For example, are there visible but inaudible artifacts? Are such artifacts systematic? \\n\\n*A4*: Indeed, while we were not able to include spectrograms in the initial draft, the revision includes spectrograms of super-resolved audio, as well as an example produced by a GAN that exhibits the artifacts we mentioned in the paper. To explicitly answer your question, we find that artifacts introduced by GANs are generally systematic (e.g., high-frequency whines), and both visible and audible. \\n\\n*Q5*: Were individual audio samples represented as a one-hot encoding, or as floats? (I assume floats since there was no mention of sampling from a distribution to select the value). \\n\\n*A5*: You are correct - the individual audio samples are 32-bit (single-precision) floats. We have added a short footnote in the Method section to clarify this. Thanks! \\n\\n*Q6*: A couple of typos: \\u2026 \\n\\n*A6*: Thank you, these have all been corrected.\"}",
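For concreteness, the one-dimensional subpixel layer and its inverse discussed in this reply amount to reshaping between the channel and time axes. A minimal sketch assuming PyTorch tensors; the interleaving convention is our choice, not necessarily the authors':

```python
import torch

def subpixel_1d(x, r):
    """1-D subpixel (pixel shuffle): trade channels for length.
    x: (batch, channels, length), channels divisible by r."""
    b, c, l = x.shape
    x = x.reshape(b, c // r, r, l)
    x = x.permute(0, 1, 3, 2)            # interleave along the time axis
    return x.reshape(b, c // r, l * r)

def superpixel_1d(x, r):
    """Inverse of subpixel_1d: fold length back into channels, giving a
    downsampling step with no strided convolution or pooling."""
    b, c, l = x.shape
    x = x.reshape(b, c, l // r, r)
    x = x.permute(0, 1, 3, 2)
    return x.reshape(b, c * r, l // r)
```

Here superpixel_1d(subpixel_1d(x, r), r) recovers x exactly, which is the sense in which the two layers are inverses.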
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for the feedback. Please see our enumerated responses below. \\nWe have also posted a comment above that summarized the changes in the latest revision.\\n\\n*Q1*: Redundant comma: \\u201cfilters with very large receptive fields are required to create high quality, raw audio\\u201d.\\n\\n*A1*: We have corrected this, thank you. \\n\\n*Q2*: There are some state-of-the-art non-autoregressive generative models for audio waveform e.g., parallel wavenet, clarinet. One may properly discuss them in related work section. Although GAN performs very well for images, it hasn't obtained any compelling results for raw audios. Still, it\\u2019s very interesting to explore that. Any nontrivial insight would be highly appreciated.\\n\\n*A2*: Thanks for pointing this out. We have added a discussion of these works, and a comparison to other GAN-based methods in the Related Works section. \\n\\n*Q3*: In multiscale convolutional layers, it seems only larger filter plays a significant role. What if we omit small filter, e.g., 3X1? \\n\\n*A3*: This is a good point - we found that small filters do play a marginal but noticeable role in audio quality and speed of convergence. We suspect that while smaller filters are comparatively less powerful, they are easier to optimize and may have an important role in fitting many of the \\u201ceasy\\u201d components in audio signals. Smaller filters also have lower overhead in terms of computational requirements, making our networks (which have hundreds of feature maps in some layers) feasible to train in a few days or less with a single GPU. \\n\\n*Q4*: It seems the proposed MU-GAN introduces noticeable noise in the upsampled audios. \\n\\n*A4*: Thanks for this comment - we noticed this as well. We have copied our answer A4 to reviewer #1 below:\\nWe did notice that in some samples, especially at higher upsampling rates, there are instances of noise on utterances with significant high-frequency content (e.g., fricatives and aspiration). We are not entirely certain on the cause of this noise, but we suspect that it is related to inherent ambiguity in the phase and magnitude of high frequency signals. Furthermore, we found that this noise is present even if we replace the unsupervised feature loss with other conventional feature losses. Nevertheless, we made sure to include samples at high up-sampling ratios that included this noise in the user study, which indicated that users preferred audio produced by our method in spite of spurious noise. We have added a note in the paper regarding this problem. \\n\\n*Q5*: The results are fair. I didn\\u2019t see big improvement over previous work (Kuleshov et al., 2017). \\n\\n*A5*: We appreciate the criticism. We wanted to highlight that while our baselines are based on the work from Kuleshov et al., 2017, our primary baseline for evaluation (U-net8) is a much deeper and more powerful model compared to the model evaluated in Kuleshov et al., 2017. For a more direct comparison with Kuleshov et al., 2017, see the results for U-net4 in the Experiments section, which are taken from the authors\\u2019 paper.\"}",
"{\"title\": \"Response to Reviewer #1 (2/2)\", \"comment\": \"*Q6*: Feature spaces are used in super resolution to provide a space in which the an L2 loss is perceptually more relevant. There are many such representations for audio signals. Specifically the magnitude of time-frequency representations (like spectrograms) or more sophisticated features such as scattering coefficients. In my view, the paper would be much stronger if these features would be evaluated as alternative to the features provided by the proposed autoencoder. \\n\\n*A6*: Thank you for this point. We actually have run experiments similar to the ones the reviewer mentions, but found the results to be either worse or no different to a standalone L2 loss and abandoned subsequent efforts. Specifically, we experimented with L2 losses in the Fourier transform space, and losses across the coefficients of various wavelet transforms. In general, we don\\u2019t expect linear transforms to perform better than or on par compared to non-linear transforms similar to those presented in this paper. A classic but effective example is the comparison of linear PCA to the non-linear PCA provided by a single-layer autoencoder. \\n\\n*Q7*: One of the motivations for defining the loss in the feature space is the lack (or difficulty to train) auxiliary classifiers on large amounts of data. However, speech recognition models using neural networks are quite common. It would be good to also test features obtained from an off-the-shelf speech recognition system. How would this compare to the proposed model? \\n\\n*A7*: We agree that off-the-shelf speech recognitions systems are an important comparison point. We were able to find a speech classifier-based feature loss used for speech denoising, and added the analysis to paper. We found that our method performed on-par (and better in some cases) compared to the classifier-based loss. Besides the competitive quantitative performance of our work, our methods generalize to different types of audio beyond speech. Finding classifier models for every type of audio is fraught with practical issues and may not be generally feasible. E.g., for music, a variety of different classification granularities and types exist, and it\\u2019s not clear what kind of classifier should be selected. Our method provides users with a way to train models with a feature loss that matches the characteristics of their own dataset, regardless of the audio type and how (or if) their dataset may be labeled.\\n\\n*Q8*: The L2 \\\"pixel\\\" loss seems a bit strange in my view. Particularly in audio processing, the recovered high frequency components can be synthesized with an arbitrary phase. This means that imposing an exact match seems like a constraint as the phase cannot be predicted from the low resolution signal (which is what a GAN loss could achieve). \\n\\n*A8*: We absolutely agree. Indeed, we tried other methods for a baseline loss such as 1-dimensional analogues of \\u201ctexture losses\\u201d that use correlations instead of exact match metrics, but couldn\\u2019t achieve good results. Note that your observation is exactly what our results indicate - that deviating from an exact match (as shown by lower SNR) can yield better results in terms of perceptual quality. We do see potential in methods that relax phase constraints further, for instance the phase shuffle operation from [C], but considered it an orthogonal technique that was out of scope for this paper. 
\\n\\n*Q9*: The paper should present ablations on the use of the different losses. In particular, one of the main contributions is the inclusion of the loss measured in the learned feature space. The authors mention that not including it leads to audible artifacts. I think that more studies should be presented (including quantitative evaluations and audio samples).\\n\\n*A9*: Thanks for this comment - we have added these details to the paper (e.g., see the model ablation study in the Experiments section). We are currently adding more samples to the webpage and will update it shortly. \\n\\n*Q10*: How were the hyperparameters chosen? Is there a lot of sensitivity to their values?\\n\\n*A10*: Hyperparameters were typically determined with small parameter sweeps over generally accepted values (e.g., batch size of 32-64). However, some model parameters (such as network depth) were somewhat constrained by training time. Except for depth, we didn\\u2019t find significant differences from our parameter sweeps, including those used for the optimizer. We will make sure to include this information and more in the Appendix in a later revision.\"}",
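Putting A8-A10 together, the training objective under discussion combines three terms: a sample-space L2 loss, an adversarial loss, and an L2 feature loss in the latent space of a pretrained autoencoder. A hedged sketch; disc and encoder are assumed callables, and the weights are placeholders rather than the paper's values:

```python
import torch
import torch.nn.functional as F

def generator_loss(x_hr, x_sr, disc, encoder, w_adv=1e-3, w_feat=1.0):
    """Composite super-resolution loss of the kind described above.

    x_hr: ground-truth high-resolution audio; x_sr: generator output.
    disc returns discriminator logits; encoder maps audio into the
    latent space of an autoencoder trained on unlabeled audio.
    """
    l2 = F.mse_loss(x_sr, x_hr)                              # sample loss
    adv = F.softplus(-disc(x_sr)).mean()                     # -log sigmoid(D)
    feat = F.mse_loss(encoder(x_sr), encoder(x_hr).detach()) # feature loss
    return l2 + w_adv * adv + w_feat * feat
```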
"{\"title\": \"Response to Reviewer #1 (1/2)\", \"comment\": \"We thank the reviewer for the thoughtful and detailed response. Please see our enumerated responses below. \\nWe have also posted a comment above that summarized the changes in the latest revision.\\n\\n*Q1*: From a technical perspective, I do not find the proposed approach very novel. It uses architectures following closely what has been done for Image super-resolution. I am not aware of an effective use of GANs in the audio processing domain. This would be a good point for the paper. However, the evidence presented does not seem very convincing in my view. While this is an audio processing paper, it lacks domain insights (even the terminology feels borrowed from the image domain). Again, most of the modeling decisions seem to follow what has been done for images. The empirical results seem good, but the generated audio does not match the quality of the state-of-the-art.\\n\\n*A1*: Thank you for this feedback. We agree that some of our high-level design choices are inspired by architectures from image processing literature. However, the main focus of our work is the exploration of such techniques and their adaptations to audio processing, which has been little-explored previously. Most importantly, we develop several new techniques and present analysis that is found in neither image nor audio processing literature. Indeed, autoregressive methods produce audio of excellent quality; we have added a discussion of this to the paper, and also elaborate more in this topic in answer A3 below.\\n\\n*Q2*: The presentation of the paper is correct. It would be good to list or summarize the contributions of this work.\\n\\n*A2*: We agree - the introduction has been revised and now includes an explicit list of contributions. \\n\\n*Q3*: Recent works have shown the amazing power of auto-regressive generative models (WaveNet) in producing audio signals. This is, as far as I know, the state-of-the-art in audio generation. The authors should motivate why the proposed model is better or worth studying in light of those approaches. In particular, a recent work [A] has shown very high quality results in the problem of speech conversion (which seems harder than bandwidth extension). It would seem to me that applying such models to the bandwith extension task should also lead to very high quality results as well. What is the advantage of the proposed approach? Would a WaveNet decoder also be improved by including these auxiliary losses?\\n\\n*A3*: We agree and note in the paper that auto-regressive models are indeed a promising avenue for audio generation. The primary difference from our work is that auto-regressive methods require an inference pass to generate a single output sample, with an input sequence that grows with each inference inference pass. This process is computationally intensive - for instance, at 16 KHz, an optimized Wavenet requires ~1.5 minutes to generate one second of audio [C]. We were recently made aware of a Wavenet variant that alleviates the issue of slow sample generation with a model distillation/student-teacher method [B]. As you correctly point out, Wavenet models can be improved with these auxiliary losses, and the Wavenet variant in [B] actually integrates a feature loss based on a speech phoneme-classifier network. This supports our view that our work is not in conflict with Wavenet and other auto-regressive methods, but rather augments them and can be used successfully in conjunction. 
\\n\\n*Q4*: While the audio samples seem to be good, they are also a bit noisy even compared with the baseline. This is not the case in the samples generated by [A] (which is of course a different problem). \\n\\n*A4*: We did notice that in some samples, especially at higher upsampling rates, there are instances of noise on utterances with significant high-frequency content (e.g., fricatives and aspiration). We are not entirely certain of the cause of this noise, but we suspect that it is related to inherent ambiguity in the phase and magnitude of high frequency signals. Furthermore, we found that this noise is present even if we replace the unsupervised feature loss with other conventional feature losses. We made sure to include samples at high up-sampling ratios that exhibited this noise in the user study, and the study indicated that users preferred audio produced by our method in spite of spurious noise. Nevertheless, we have added a note in the paper regarding this problem.\\n\\n*Q5*: The qualitative results are evaluated using PESQ. While this is a good proxy it is much better to perform blind tests with listeners. That would certainly improve the paper. \\n\\n*A5*: Thanks for this point. We have acted on this recommendation and now have results from a user study in the Experiments section. \\n\\n[B] van den Oord et al. \\u201cParallel WaveNet: Fast High-Fidelity Speech Synthesis.\\u201d ICML 2018.\"}",
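To ground the inference-cost argument in *A3* above, the following toy NumPy sketch contrasts autoregressive generation, which needs one forward pass per output sample, with a feedforward generator that emits every sample in a single pass. The linear "networks" and all sizes here are hypothetical stand-ins, not the paper's or WaveNet's actual models; only the T-vs-1 count of sequential passes is the point.

```python
# Toy contrast of autoregressive vs. feedforward generation cost.
import numpy as np

rng = np.random.default_rng(0)
T, ctx = 1000, 256                                # samples to generate, AR context
w_ar = rng.standard_normal(ctx) / np.sqrt(ctx)    # stand-in autoregressive "network"
w_ff = rng.standard_normal((T, T)) / np.sqrt(T)   # stand-in feedforward "network"

def generate_autoregressive():
    y = np.zeros(T + ctx)
    y[:ctx] = rng.standard_normal(ctx)            # seed context
    for t in range(T):                            # T sequential forward passes
        y[ctx + t] = np.tanh(w_ar @ y[t:t + ctx])
    return y[ctx:]

def generate_feedforward(x_lo):
    return np.tanh(w_ff @ x_lo)                   # one pass emits all T samples

x_lo = rng.standard_normal(T)                     # low-resolution conditioning input
y_ar, y_ff = generate_autoregressive(), generate_feedforward(x_lo)
```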
"{\"title\": \"Summary of revision\", \"comment\": \"We appreciate the reviewers\\u2019 detailed feedback and thoughtful questions. We have responded to each review independently, and summarize changes to the manuscript here.\\n\\nMajor changes include the addition of a qualitative user study, a model ablation analysis, comparisons against an off-the-shelf speech classifier-based feature loss, and an effective receptive field analysis. We have also added more spectrograms of super-resolved audio signals, and have added many new samples to the web page. Note that the webpage has moved to https://mugandemo.github.io/mugandemo . \\n\\nOther changes related to writing and clarity include the addition of a list of contributions, general writing improvements and typo fixes, and some additional discussion of autoregressive methods in the Related Works section.\\n\\nWe also note that due to a bug in our python audio processing scripts, MOS-LQO scores from the old draft were truncated. We have updated all relevant results tables with the correct, full-range MOS-LQO metrics. Note that while the absolute numbers have changed somewhat, the trends and conclusions drawn from the old results are still valid.\\n\\nAgain, thank you for the helpful feedback, it is greatly appreciated!\"}",
"{\"title\": \"Official review\", \"review\": \"The paper presents a model to perform audio super resolution. The proposed model trains a neural network to produce a high-resolution audio sample given a low resolution input. It uses three losses: sample reconstructon, adversarialy loss and feature matching on a representation learned on an unsupervised way.\\n\\nFrom a technical perspective, I do not find the proposed approach very novel. It uses architectures following closely what has been done for Image supre-resolution. I am not aware of an effective use of GANs in the audio processing domain. This would be a good point for the paper. However, the evidence presented does not seem very convincing in my view. While this is an audio processing paper, it lacks domain insights (even the terminology feels borrowed from the image domain). Again, most of the modeling decisions seem to follow what has been done for images. The empirical results seem good, but the generated audio does not match the quality of the state-of-the-art.\\n\\nThe presentation of the paper is correct. It would be good to list or summarize the contributions of this work.\\n\\nRecent works have shown the amazing power of auto-regressive generative models (WaveNet) in producing audio signals. This is, as far as I know, the state-of-the-art in audio generation. The authors should motivate why the proposed model is better or worth studying in light of those approaches. In particular, a recent work [A] has shown very high quality results in the problem of speech conversion (which seems harder than bandwidth extension). It would seem to me that applying such models to the bandwith extension task should also lead to very high quality results as well. What is the advantage of the proposed approach? Would a WaveNet decoder also be improved by including these auxiliary losses?\\n\\nWhile the audio samples seem to be good, they are also a bit noisy even compared with the baseline. This is not the case in the samples generated by [A] (which is of course a different problem). \\n\\nThe qualitative results are evaluated using PESQ. While this is a good proxy it is much better to perform blind tests with listeners. That would certainly improve the paper. \\n\\nFeature spaces are used in super resolution to provide a space in which the an L2 loss is perceptually more relevant. There are many such representations for audio signals. Specifically the magnitude of time-frequency representations (like spectrograms) or more sophisticated features such as scattering coefficients. In my view, the paper would be much stronger if these features would be evaluated as alternative to the features provided by the proposed autoencoder. \\n\\nOne of the motivations for defining the loss in the feature space is the lack (or difficulty to train) auxiliary classifiers on large amounts of data. However, speech recognition models using neural networks are quite common. It would be good to also test features obtained from an off-the-shelf speech recognition system. How would this compare to the proposed model?\\n\\nThe L2 \\\"pixel\\\" loss seems a bit strange in my view. Particularly in audio processing, the recovered high frequency components can be synthesized with an arbitrary phase. This means that imposing an exact match seems like a constraint as the phase cannot be predicted from the low resolution signal (which is what a GAN loss could achieve). \\n\\nThe paper should present ablations on the use of the different losses. 
In particular, one of the main contributions is the inclusion of the loss measured in the learned feature space. The authors mention that not including it leads to audible artifacts. I think that more studies should be presented (including quantitative evaluations and audio samples).\\n\\nHow were the hyperparameters chosen? Is there a lot of sensitivity to their values?\\n\\n\\n[A] van den Oord, Aaron, and Oriol Vinyals. \\\"Neural discrete representation learning.\\\" Advances in Neural Information Processing Systems. 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
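Since the review above enumerates the three generator losses, a minimal sketch may help make that objective concrete. All modules, shapes, and weighting coefficients below are hypothetical stand-ins for illustration, not the paper's architecture or tuned values.

```python
# Minimal PyTorch sketch of a three-term generator objective: an L2
# sample-reconstruction loss, an adversarial loss against a discriminator D,
# and a feature loss measured in the bottleneck space of an encoder phi
# trained without labels.
import torch
import torch.nn as nn

G   = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.Conv1d(16, 1, 9, padding=4))
D   = nn.Sequential(nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(), nn.Flatten(), nn.LazyLinear(1))
phi = nn.Sequential(nn.Conv1d(1, 8, 9, stride=4), nn.ReLU())   # autoencoder encoder

x_lo = torch.randn(4, 1, 1024)   # (upsampled) low-resolution input batch
x_hi = torch.randn(4, 1, 1024)   # ground-truth high-resolution batch
bce  = nn.BCEWithLogitsLoss()

x_gen = G(x_lo)
loss_rec  = ((x_gen - x_hi) ** 2).mean()                       # sample-space L2
loss_adv  = bce(D(x_gen), torch.ones(4, 1))                    # fool the discriminator
loss_feat = ((phi(x_gen) - phi(x_hi)) ** 2).mean()             # feature-space L2

loss_G = loss_rec + 0.001 * loss_adv + 1.0 * loss_feat         # weights assumed
loss_G.backward()
```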
"{\"title\": \"Fascinating problem & fair results\", \"review\": \"This paper presents a GAN-based method to perform audio super-resolution. In contrast to previous work, this work uses auto-encoder to obtain feature losses derived from unlabeled data.\", \"comments\": \"(1) Redundant comma: \\u201cfilters with very large receptive fields are required to create high quality, raw audio\\u201d.\\n\\n(2) There are some state-of-the-art non-autoregressive generative models for audio waveform e.g., parallel wavenet, clarinet. One may properly discuss them in related work section. Although GAN performs very well for images, it hasn't obtained any compelling results for raw audios. Still, it\\u2019s very interesting to explore that. Any nontrivial insight would be highly appreciated.\\n\\n(3) In multiscale convolutional layers, it seems only larger filter plays a significant role. What if we omit small filter, e.g., 3X1?\\n\\n(4) It seems the proposed MU-GAN introduces noticeable noise in the upsampled audios.\", \"pros\": [\"Interesting idea and fascinating problem.\"], \"cons\": \"- The results are fair. I didn\\u2019t see big improvement over previous work (Kuleshov et al., 2017).\\n\\nI'd like to reconsider my rating after the rebuttal.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"nice work, confused about evaluation-related aspects\", \"review\": \"PRO\\u2019s:\\n+well-written\\n+nice overall system: GAN framework for super-sampling audio incorporating features from an autoencoder\\n+some good-sounding examples\\n\\nCON\\u2019s:\\n-some confusing/weakly-presented parts (admittedly covering lots of material in short space)\\n-I am confused about the evaluation; would like additional qualitative/observational understanding of what works, including more on how the results differ from baseline\", \"summary\": \"The task addressed in this work is: given a low-resolution audio signal, generate corresponding high-quality audio. The approach is a generative neural network that operates on raw audio and train within a GAN framework.\\nWorking in raw sample-space (e.g. pixels) is known to be challenging, so a stabilizing solution is to incorporate a feature loss. Feature loss, however, usually requires a network trained on a related task, and if such a net one does not already exist, then building one can have its own (possibly significant) challenges. In this work, the authors avoid this auxiliary challenge by using unsupervised feature losses, taking advantage of the fact that any audio signal can be downsampled, and therefore one has the corresponding upsampled signal as well.\\n\\nThe training framework is basically that of a GAN, but where, rather than providing the generator with a low-dimensional noise signal input, they provide the generator with the subsampled audio signal. The architecture includes a generator ( G(lo-fidelity)=high-fidelity ), a discriminator ( D(high-fidelity) = real or by super-sampled ? ), and an autoencoder ( \\\\phi( signal x) = features of signal x at AE\\u2019s bottleneck).\", \"comments\": \"The generator network appears to be nearly identical to that of Kuleshov et al (2017)-- which becomes the baseline-- and so the primary contribution differentiating this work is the insertion of that network into a GAN framework along with the additional feature-based loss term. This is overall a nice problem and a nice approach! In that light, I believe that there is a new focus in this work on the perceptual quality of the outputs, as compared to (Kuleshov et al 2017). I would therefore ideally like to see (a) some attempts at perceptually evaluating the resulting output (beyond PESQ, e.g. with human subjects and with the understanding that, e.g. not all AMT workers have the same aural discriminative abilities themselves), and/or (b) more detailed associated qualitative descriptions/visualization of the super-sampled signal, perhaps with a few more samples if that would help. That said, I understand that there are page/space limitations. (more on this next)\\n\\nGiven the similarity of the U-net architectures to (Kuleshov et al 2017), why not move some of those descriptions to the appendix? \\n\\nFor example, I found the description and figure illustrating the \\u201csuperpixel layers\\u201d to be fairly uninformative: I see that the figure shows interleaving and de-interleaving, resulting in trading-off dimensionalities/ranks/etc, and we are told that this helps with well-known checkerboard artifacts, but I was confused about what the white elements represent, and the caption just reiterated that resolution was being increased and decreased. 
Overall, I didn\\u2019t really understand exactly the role that this plays in the system; I wondered if it either needed a lot more clarification (in an appendix?), or just less space spent on it, but keeping the pointers to the relevant references. It seems that the subpixel layer was already implemented in Kuleshov 2017, with some explanation, yet in the present work a large table (Table 1(b)) is presented showing that there is no difference in quality metrics, and the text also mentions that there is no significant perceptual difference in audio. If the subpixel layer were explained in detail, and with justification, then I would potentially be OK with the negative results, but in this case it\\u2019s not clear why so much time is spent on it here. It\\u2019s possible that there is something simple about it that I am not understanding. I\\u2019m open to being convinced. Otherwise, why not just write: \\u201cFollowing (Kuleshov et al 2017), we use subpixel layers (Shi et al) [instead of ...] to speed up training, although we found that they have no significant perceptual effect.\\u201d or something along those lines, and leave it at that? \\n\\nI did appreciate the descriptions of models\\u2019 sensitivity to size/structure of the conv filters, importance of the res connections, etc.\\n\\nMy biggest confusion was with the evaluation & results:\\n\\nSince the most directly related work was (Kuleshov 2017), I compared the super resolution (U-net) samples on that website (https://kuleshov.github.io/audio-super-res/ ) to the samples provided for the present work ( https://sites.google.com/view/unsupervised-audiosr/home ) and I was a bit confused, because the quality of the U-net samples in (Kuleshov 2017) seemed to be perceptually significantly better than the quality of the Deep CNN (U-net) baseline in the present work. Perhaps I am in error about this, but as far as I can tell, the superresolution in (Kuleshov et al 2017) is significantly better than the Deep CNN examples here. Is this a result of careful selection of examples? I do believe what I hear, e.g. that the MU-GAN8 is clearly better on some examples than the U-net8. But then for non-identical samples, how come U-net4 actually generally sounds better than U-net8? That doesn\\u2019t make immediate sense either (assuming no overfitting etc). Does the benefit come from moving from U-net4 to U-net8 within a GAN context that is then stabilized by the feature-based loss? If so, then how does MU-GAN8 compare to U-net4? Would there be any info for the reader by doing an ablation removing the feature loss from the GAN framework? etc. I guess I would like to get a better understanding of what is actually going on, even if qualitative. Is there any qualitative or anecdotal observation about which \\u201ctypes\\u201d of samples one system works better on than another? For example, in the provided examples for the present paper, it seemed to be the case that perhaps the MU-GAN8 was more helpful for supersampling female voices, which might have more high-frequency components that seem to get lost when downsampling, but maybe I\\u2019m overgeneralizing from the few examples I heard. \\n\\nSome spectrograms might be helpful, since they do after all convey some useful information despite not telling much of the perceptual story. For example, are there visible but inaudible artifacts? Are such artifacts systematic?\\n\\nWere individual audio samples represented as a one-hot encoding, or as floats?
(I assume floats since there was no mention of sampling from a distribution to select the value).\", \"a_couple_of_typos\": \"descriminator \\u2192 discriminator \\n\\npg 6 \\u201cImpact of superpixel layers\\u201d -- last sentence of 2nd par is actually not a sentence. \\u201cthe reduction in convolutional kernels prior to the superpixel operation.\\u201d\\n\\nOverall, interesting work, and I enjoyed reading it. If some of my questions around evaluation could be addressed-- either in a revision, or in a rebuttal (e.g. if I completely misunderstood something)-- I would gladly consider revising my rating (which is currently somewhere between 6 and 7).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
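For readers puzzled by the same interleaving question the last review raises, here is a weight-free sketch of the 1-D subpixel/superpixel reshuffle. This is an assumed reading of (Shi et al.)-style pixel shuffling adapted to audio, not the paper's exact implementation; it contains no learned weights and only shows the reshape that avoids transposed-convolution checkerboard artifacts.

```python
# 1-D subpixel (interleave) and superpixel (de-interleave) reshuffles.
# Shapes are (channels, time).
import numpy as np

def subpixel_1d(x, r):
    """(C*r, T) -> (C, T*r): interleave r channel groups along time."""
    cr, t = x.shape
    c = cr // r
    return x.reshape(c, r, t).transpose(0, 2, 1).reshape(c, t * r)

def superpixel_1d(x, r):
    """(C, T*r) -> (C*r, T): the inverse de-interleaving operation."""
    c, tr = x.shape
    t = tr // r
    return x.reshape(c, t, r).transpose(0, 2, 1).reshape(c * r, t)

x = np.arange(8.0).reshape(2, 4)          # 2 channels, 4 time steps
assert np.allclose(superpixel_1d(subpixel_1d(x, 2), 2), x)   # exact round-trip
```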
]
} |
|
rygVV205KQ | Visual Imitation with a Minimal Adversary | [
"Scott Reed",
"Yusuf Aytar",
"Ziyu Wang",
"Tom Paine",
"Aäron van den Oord",
"Tobias Pfaff",
"Sergio Gomez",
"Alexander Novikov",
"David Budden",
"Oriol Vinyals"
] | High-dimensional sparse reward tasks present major challenges for reinforcement learning agents. In this work we use imitation learning to address two of these challenges: how to learn a useful representation of the world, e.g., from pixels, and how to explore efficiently given the rarity of the reward signal. We show that adversarial imitation can work well even in this high dimensional observation space. Surprisingly, the adversary itself, acting as the learned reward function, can be tiny, comprising as few as 128 parameters, and can be easily trained using the most basic GAN formulation. Our approach removes limitations present in most contemporary imitation approaches: requiring no demonstrator actions (only video), no special initial conditions or warm starts, and no explicit tracking of any single demo. The proposed agent can solve a challenging robot manipulation task of block stacking from only video demonstrations and sparse reward, on which non-imitating agents completely fail to learn. Furthermore, our agent learns much faster than competing approaches that depend on hand-crafted, staged dense reward functions, and also learns better than standard GAIL baselines. Finally, we develop a new adversarial goal recognizer that in some cases allows the agent to learn stacking without any task reward, purely from imitation. | [
"imitation",
"from pixels",
"adversarial"
] | https://openreview.net/pdf?id=rygVV205KQ | https://openreview.net/forum?id=rygVV205KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eJARrExV",
"S1liUUpqyV",
"r1l6qR44kN",
"Hygx4yyxC7",
"BJgtuAaJCX",
"rylFR8TkCX",
"SklR0NgUpQ",
"HkeXJh1a37",
"B1x90GNcn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544998599113,
1544373843376,
1543945876891,
1542610728308,
1542606448599,
1542604497019,
1541960918057,
1541368794673,
1541190353816
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1444/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1444/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1444/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1444/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1444/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper extends an existing approach to imitation learning, GAIL (Generative Adversarial Imitation Learning, based on an adversarial approach where a policy learner competes with a discriminator) in several ways and demonstrates that the resulting approach can learn in settings with high dimensional observation spaces, even with a very low dimensional discriminator. Empirical results show promising performance on a (simulated) robotics block stacking task, as well as a standard benchmark - Walker2D (DeepMind control suite).\\n\\nThe reviewers and the AC note several potential weaknesses. Most importantly, the contributions of the paper are \\\"muddled\\\" (R2). The authors introduce several modifications to their baseline, GAIL, and show empirical improvements over the baseline. However, the presented experiments do systematically identify which modifications have what impact on the empirical results. For example, R2 mentions this for figure 4, where it appears on first look that the proposed approach is compared to the vanilla GAIL baseline - however, there appear to be differences from vanilla GAIL, e.g., in terms of reward structure (and possibly other modeling choices - how close is the GAIL implementation used to the original method, e.g., in terms of the policy learner and discriminator)? There is also confusion on which setting is addressed in which part of the paper, given that there is both a \\\"RL+IL\\\" and an \\\"imitation only\\\" component.\\n\\nIn their rebuttal, the authors respond to, and clarify some of the questions raised by the reviewers, but the AC and corresponding reviewers consider many issues to remain unclear. Overall, the presentation could be much improved by indicating, for each set of experiments, what research question or hypothesis it is designed to address, and to clearly indicate conclusions on each question once the results have been discussed. In its current state, the paper reads as a list of interesting and potentially highly valuable ideas, together with a list of empirical results. The real value of the paper should come in when these are synthesized into lessons learned, e.g., why specific results are observed and what novel insights they afford the reader. Overall, the paper will benefit from a thorough revision and is not considered ready for publication at ICLR at this stage.\\n\\nThe AC notes that they placed less weight on R3's assessment, due to their relatively low confidence, because they appear not to be familiar with key related work (GAIL), and did not respond to further requests for comments in the discussion phase.\\n\\nThe AC also notes a potential weakness that was not brought up by the reviewers, and which they therefore did not weigh into their assessment of the paper, but nevertheless want to share to hopefully help improve a future version of the paper. Figure 6(b) should be interpreted with caution given that performance with a greater number of demonstrations (120 vs 60) showed lower performance. The authors note in the caption that one of the \\\"120 demos\\\" runs \\\"failed to take of\\\". This suggests that variance for all these runs may be underestimated with the currently used number of seeds. 
It is not clear what the shaded region indicates (another drawback), but if I interpret these as standard errors, then this plot would suggest, with some confidence, lower performance for higher numbers of demonstrations - clearly that conclusion is unlikely to be correct.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"imitation with high dimensional observations - contributions not sufficiently validated in experiments\"}",
"{\"title\": \"Imitation vs supervised learning, Usefulness\", \"comment\": \"Imitation can be treated as a supervised learning problem when there are (state, action) pairs available and you want to learn a policy by regressing expert actions from the states. However, when no expert actions are available to predict, one must learn from experience by interacting with the environment. This type of imitation therefore becomes an RL problem, not a supervised learning problem. If you want to distinguish it from RL on human-specified, static reward functions, it might be useful to call this \\\"RL imitation\\\".\\n\\nThis is the case for GAIL using only expert states (no actions), and in our current work. It is not the case that we are incidentally applying an RL algorithm to solve a supervised learning problem. Without expert actions, there is no way to formulate the problem as supervised learning. Imitation learning is not equivalent to supervised learning, although supervised learning can be used for imitation in some, but not all cases.\\n\\nBy your usefulness criterion, our proposed model has shown itself to be useful. Supervised learning was intractable because we assume no expert actions were provided, and RL on sparse human-specified task rewards for stacking failed to learn the task. In your proposed taxonomy this corresponds to \\\"both RL and IL were previously intractable\\\". Our proposed RL imitation model solved that task with high success rate. Even when we compared to a (supervised) IL baseline with additional information of expert actions, our proposed approach worked much better.\"}",
"{\"title\": \"RL as a problem vs RL as an algorithm\", \"comment\": \"I just wanted to clarify one important point. I want to make a distinction between RL as a problem and RL as an algorithm. When I think of RL as a problem, I think of a situation where I want to maximize some task reward that is meaningful unto itself. We solve these problems, obviously, with RL algorithms. However, we can also apply RL algorithms to problems with RL substructure, where the reward is some kind of intermediate quantity. Imitation learning with GAIL would fall into the latter category: we are incidentally applying an RL algorithm to solve a supervised learning problem.\\n\\nTo achieve a given task, I might be able to formulate the problem either as RL or supervised/imitation learning. If it is easy to manually specify a reward function to achieve the task, then RL is almost always preferable because it requires no training data. Unfortunately, since RL is hard, one might in practice resort to applying IL even if we can specify a reward function.\\n\\nMy point was that for the method to be practically useful, it needs to solve problems where both RL and IL were previously intractable. Let's say RL fails for task A, but I have a budget of N training examples to generate. If I train vanilla IL using those N examples, and the resulting policy solves the task, then I might as well do vanilla IL. On the other hand, if vanilla IL fails in this case, but method B somehow combines RL and IL to solve the task, then method B is practically useful.\"}",
"{\"title\": \"Imitation to help RL with task rewards, and better RL imitation in general\", \"comment\": \"We thank AR2 for your thorough feedback.\\n\\nAR2 writes that \\u201cimitation may only help RL when imitation alone works well already\\u201d. First we should point out that imitation can also itself be RL, as in the case of our model where a reward function is derived from the discriminator score. \\n\\nFor example in walker2D, none of our experiments use human specified task reward functions, but the agents learn from experience using RL on the discriminator score as a reward function. Rather than \\u201cimitation helping RL\\u201d, the contribution here is simply \\u201cbetter RL imitation\\u201d. \\n\\nIn the case where we use human task rewards - Jaco stacking - we show that by using imitation, we can replace dense staged task rewards with sparse task rewards, which is a big improvement - a clear case of \\u201cimitation helping RL\\u201d. Although figure 8 shows that we can sometimes learn to stack without human crafted sparse rewards, we were never able to learn stacking agents with \\u201creward vanilla GAIL\\u201d. We hope this is sufficient to address AR2\\u2019s first point in Cons, and we will clarify this point in the paper as well.\\n\\nAR2 points out that the \\u201clearning with no task rewards\\u201d section is (1) muddled and is (2) essentially a variant of normal GANS. As to the first point, we will try to clarify the presentation (perhaps adding pseudocode to better describe exactly what we are doing?). For the second point, we agree - it is precisely an auxiliary discriminator network but otherwise a normal vanilla GAN. However, it does something quite useful - replacing a previously hand-engineered reward function that required access to block and arm positions! The fact that such a simple GAN setup can be arranged to do this from pixels should be great news to practitioners and perhaps place this point in the Pros section instead of the Cons.\", \"hand_wavy_presentation\": \"we agree that the presentation could use more precision and clarity. We tried to emphasize the simplicity of our adversarial setup, but may have erred on the side of too few details. We will try to improve overall clarity in the final version.\"}",
"{\"title\": \"Please review the literature on adversarial imitation (GAIL)\", \"comment\": \"We thank AR3 very much for providing feedback. However, we think AR3 has missed several of the key points of our paper, which we will try to clarify below and in the final version of our paper as needed.\\n\\nFirst, our goal is not to estimate sparse rewards, but to train agents to solve continuous control tasks from pixel observations using raw video demonstrations, without access to proprioceptive states. There may be sparse rewards or no rewards available, aside from imitation-based rewards.\\n\\nAR3 also suggests that it is weird to use an adversary scheme to estimate rewards. However, this is actually a well established and effective approach; see for example \\n Ho, Jonathan, and Stefano Ermon. \\\"Generative adversarial imitation learning.\\\" Advances in Neural Information Processing Systems. 2016.\\nwhich currently has over 200 citations. What we contribute in this paper is showing how to extend this method to learning robot manipulation policies from raw video.\\n\\nAR3 is basically correct in pointing out that \\u201cthe agent is trying to maximize rewards, but the discriminator is improved so as to reduce rewards\\u201d. This is a fundamental tension inherent in any adversarial learning setup, not a flaw particular to our approach.\\n\\nAR3 is justifiably concerned with the proposed early termination scheme, since ultimately we want the robot to attempt to finish the task regardless of the discriminator score. During evaluation / test time, this is true, which is why we only apply early termination during training. We will update the paper to clarify about this.\\n\\nAR3 wonders whether this approach could work in more general settings, e.g. where state distributions vary dramatically. There is early work in this direction for visually much simpler domains (see e.g. \\u201cThird person imitation learning\\u201d in ICLR\\u201917) and in visual domains with behavior cloning agents. However, learning from visual experience on a robot from dramatically varying third person observations, remains a grand challenge for the field. We agree with AR3 that it is a worthy goal, but also not in scope for this paper.\"}",
"{\"title\": \"Key differences from previous work; comparison to behavior cloning\", \"comment\": [\"Thanks to AR1 for your detailed comments and pointing out relevant previous work. Below we address each part of the feedback.\", \"We agree that InfoGAIL shares significant motivation with our model in that it learns from pixels.\", \"However, our work makes several advances that will be of interest to the research community:\", \"InfoGAIL was demonstrated in the TORCS driving game with discrete actions, whereas our model is applied to challenging continuous control tasks such as block stacking.\", \"While our method can be used with pre-trained features as in InfoGAIL, our best performing method uses deep value network features, which are trained together with the discriminator reward function, so no feature pre-training is needed in our model.\", \"InfoGAIL used (state, action) pairs, whereas we do not use any expert actions.\", \"We show that discriminator-based early stopping can improve sample complexity.\", \"We show that it is possible to replace human engineered rewards with auxiliary discriminators on the Jaco block stacking task.\", \"Furthermore, our approach could easily be combined with that of InfoGAIL. Our goal in the paper was to show that with our approach, even the most naive GAN could be used to solve challenging visual imitation tasks. Using the latest adversarial learning techniques - e.g. information theoretic objective as in InfoGAIL - could improve things further.\", \"Comparison to behavior cloning, sample efficiency:\", \"In terms of demonstration efficiency, we can perform much better than behavior cloning with a small fraction of the demonstrations.\", \"With only 60 demonstrations, we can achieve >90% stacking success rate, whereas a comparable behavior cloning agent with 500 demonstrations only achieved ~33% success rate.\", \"Application to third-person imitation (expert and agent may have differing dynamics):\", \"This is a great suggestion, probably beyond the scope of our current paper. However, the fact that expert actions are not needed in our approach potentially removes an important barrier to this line of research.\"], \"clarity\": [\"We agree that the descriptions and terminology can be improved, which will be forthcoming in the final version of the paper.\"]}",
"{\"title\": \"Sample complexity experiments are interesting, but the ideas presented seems to overlap ideas from existing work.\", \"review\": \"---\", \"update\": \"I think the experiments are interesting and worthy of publication, but the exposition could be significantly improved. For example:\\n\\n- Not sure if Figure 1 is needed given the context.\\n- Ablation study over the proposed method without sparse reward and hyperarameter \\\\alpha\\n- Move section 7.3 into the main text and maybe cut some in the introduction\\n- More detailed comparison with closely related work (second to last paragraph in related work section), and maybe reduce exposition on behavior cloning.\\n\\nI like the work, but I would keep the score as is.\\n---\\n\\n\\nThe paper proposes to use a \\\"minimal adversary\\\" in generative adversarial imitation learning under high-dimensional visual spaces. While the experiments are interesting, and some parts of the method has not been proposed (using CPC features / random projection features etc.), I fear that some of the contributions presented in the paper have appeared in recent literature, such as InfoGAIL (Li et al.).\\n\\n- Use of image features to facilitate training: InfoGAIL used pretrained ResNet features to deal with high-dimensional inputs, only training a small neural network at the end.\\n- Tracking and warm restarts: InfoGAIL does not seem to require tracking a single expert trajectory, since it only classifies (s, a) pairs and is agnostic to the sequence.\\n- Reward augmentation: also used in InfoGAIL, although they did not use sparse rewards for augmentation.\\n\\nAnother contribution claimed by this paper is that we could do GAIL without action information. Since we can shape the rewards for most of our environments that do not depend on actions, it is unsurprising that this could work when D only takes in state information. However, it is interesting that behavior cloning pretraining is not required in the high-dimensional cases; I am interested to see a comparison between with or w/o behavior cloning in terms of sample complexity. \\n\\nOne setting that could potentially be useful is where the expert and policy learner do not operate within the same environment dynamics (so actions could not be same) but we would still want to imitate the behavior visually (same state space). \\n\\nThe paper could also benefit from clearer descriptions, such as pointers to which part of the paper discusses \\\"special initialization, tracking, or warm starting\\\", etc., from the introduction.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"straightforward idea, but this approach may not be applicable in general applications\", \"review\": \"This paper aims at solving the problem of estimating sparse rewards in a high-dimensional input setting. The authors provide a simplified version by learning the states from demonstrations. This idea is simple and straightforward, but the evaluation is not convincing.\\n\\nI am wondering if this approach still works in more general applications, e.g., when state distributions vary dramatically or visual perturbations arise in the evaluation phase. \\n\\nIn addition, it is weird to use adversary scheme to estimate rewards. Namely, the agent is trying to maximize the rewards, but the discriminator is improved so as to reduce rewards. \\n\\nIn section 3, the authors mention an early termination of the episode, this is quite strange in real applications, because even the discriminator score is low the robot still needs to accomplish the task.\\n\\nFinally, robots are subject to certain physical constraints, this issue can not be addressed by merely learning demonstrated states.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Potentially practical improvement of sparse-reward RL using IL, but a bit unclear when it helps\", \"review\": \"The submission describes a sort of hybrid between reinforcement learning and imitation learning, where an auxiliary imitation learning objective helps to guide the RL policy given expert demonstrations. The method consists of concurrently maximizing an RL objective--augmented with the GAIL discriminator as a reward\\u2014and minimizing the GAIL objective, which optimizes the discriminator between expert and policy-generated states. Only expert states (not actions) are required, which allows the method to work given only videos of the expert demonstrations. Experiments show that adding the visual imitation learning component allows RL to work with sparse rewards for complex tasks, in situations where RL without the imitation learning component fails.\", \"pros\": [\"It is an interesting result that adding a weak visual imitation loss dramatically improves RL with sparse rewards\", \"The idea of a visual imitation signal is well-motivated and could be used to solve practical problems\", \"The method enables an \\u2018early termination\\u2019 heuristic based on the imitation loss, which seems like a nice heuristic to speed up RL in practice\"], \"cons\": \"+ It seems possible that imitation only helps RL where imitation alone works pretty well already\\n+ Some contributions are a bit muddled: e.g., the \\u201clearning with no task reward\\u201d section is a little confusing, because it seems to describe what is essentially a variant of normal GAIL\\n+ The presentation borders on hand-wavy at parts and may benefit from a clean, formal description\\n\\nThe submission tackles a real, well-motivated problem that would appeal to many in the ICLR community. The setting is attractive because expert demonstrations are available for many problems, so it seems obvious that they should be leveraged to solve RL problems\\u2014especially the hardest problems, which feature very sparse reward signals. It is an interesting observation that an imitation loss can be used as a dense reward signal to supplement the sparse RL reward. The experimental results also seem very promising, as the imitation loss seems to mean the difference between sparse-reward RL completely failing and succeeding. Some architectural / feature selection details developed here seem to also be a meaningful contribution, as these factors also seem to determine the success or failure of the method.\\n\\nMy biggest doubt about the method is whether it really only works where imitation learning works pretty well already. If we don\\u2019t have enough expert examples for imitation learning to work, or if the expert is not optimizing the given reward function, then it is possible that adding the imitation loss is detrimental, because it induces an undesirable bias. If, on the other hand, we do have enough training examples for imitation learning to succeed and the expert is optimizing the given reward function, then perhaps we should just do imitation learning instead of RL. So, it is possible that there is some sweet spot where this method makes sense, but the extent of that sweet spot is unclear to me.\\n\\nThe experiments are unclear on this issue for a few reasons. First, figure 4 is confusing, as it is titled \\u2018comparison to standard GAIL', which makes it sound like a comparison to standard imitation learning. 
However, I believe this figure is actually showing the performance of different variants of GAIL used as a subroutine in the hybrid RL-IL method. I would like to know how much reward vanilla GAIL (without sparse rewards) achieves in this setting. Second, figure 8 seems to confirm that some variant of vanilla imitation learning (without sparse rewards) actually does work most of the time, achieving results that are as good as some variants of the hybrid RL-IL method. I think it would be useful to know, essentially, how much gain the hybrid method achieves over vanilla IL in different situations.\\n\\nAnother disappointing aspect of the paper is the \\u2018learning with no task reward\\u2019 section, which is a bit confusing. The concept seems reasonable at a first glance, except that once we replace the sparse task reward with another discriminator, aren\\u2019t we firmly back in the imitation learning setting again? So, the motivation for this section just seems a bit unclear to me. This seems to be describing a variant of GAIL with D4PG for the outer optimization instead of TRPO, which seems like a tangent from the main idea of the paper. I don\\u2019t think it is necessarily a bad idea to have another discriminator for the goal, but this part seems somewhat out of place.\", \"on_presentation\": \"I think the presentation is a bit overly hand-wavy in parts. I think the manuscript could benefit from having a concise, formal description. Currently, the paper feels like a series of disjoint equations with unclear connections among them. The paper is still intelligible, but not without knowing a lot of context relating to RL/IL methods that are trendy right now. I feel that this is an unfortunate trend recently that should be corrected. Also, I\\u2019m not sure it is really necessary to invoke \\u201cGAIL\\u201d to describe the IL component, since the discriminator is in fact linear, and the entropy component is dropped. I think \\u201capprenticeship learning\\u201d may be a more apt analogy.\", \"on_originality\": \"as far as I can tell, the main idea of the work is novel. The work consists mainly of combining existing methods (D4PG, GAIL) in a novel way. However, some minor novel variations of GAIL are also proposed, as well as novel architectural considerations.\\n\\nOverall, this is a nice idea applied to a well-motivated problem with promising results, although the exact regime in which the method succeeds could be better characterized.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1xNEhR9KX | On the Sensitivity of Adversarial Robustness to Input Data Distributions | [
"Gavin Weiguang Ding",
"Kry Yik Chau Lui",
"Xiaomeng Jin",
"Luyu Wang",
"Ruitong Huang"
] | Neural networks are vulnerable to small adversarial perturbations. Existing literature largely focused on understanding and mitigating the vulnerability of learned models. In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: Adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution. Even a semantics-preserving transformation of the input data distribution can cause significantly different robustness for an adversarially trained model that is both trained and evaluated on the new distribution. Our discovery of such sensitivity to data distribution is based on a study which disentangles the behaviors of clean accuracy and robust accuracy of the Bayes classifier. Empirical investigations further confirm our finding. We construct semantically-identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve comparable clean accuracies on them, but adversarially trained models achieve significantly different robust accuracies. This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves. Lastly, we discuss the practical implications for evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon. | [
"adversarial robustness",
"adversarial training",
"PGD training",
"adversarial perturbation",
"input data distribution"
] | https://openreview.net/pdf?id=S1xNEhR9KX | https://openreview.net/forum?id=S1xNEhR9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eptUiZxE",
"SkeFL72oTX",
"S1ey4m3jTm",
"ByeXyX2j67",
"Bkxs4_XJaQ",
"HJgnyglkaQ",
"rke7hBji2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544824453059,
1542337361127,
1542337318951,
1542337242803,
1541515314973,
1541500899788,
1541285291072
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1443/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1443/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1443/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1443/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies an interesting phenomenon related to adversarial training -- that adversarial robustness is quite sensitive to semantically lossless shifts in input data distribution.\\n\\nStrengths\\n- Characterizes a previously unobserved phenomenon in adversarial training, which is quite relevant to ongoing research in the area.\\n- Interesting and novel theoretical analysis that motivates the relationship between adversarial robustness and the shape of input distribution.\\n\\nWeaknesses\\n- Reviewers pointed out some shortcomings in experiments, and analysis of causes and remedies to adversarial robustness. The authors agree that given the current state of understanding, these are hard questions to pose good answers for. The result and observations by themselves are interesting and useful for the community.\\n\\nThe weakness that the paper does not propose a solution for the observed phenomenon remains, but all reviewers agree that the observation in itself is interesting. Therefore, I recommend that the paper be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting observation and results\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for your time spent, interests and kind evaluation on our work.\\n\\nWith regard to your comments on \\\"put findings into practice\\\" and \\\"how to improve a model robustness w.r.t the changes in the data distribution\\\", we believe these are very meaningful future research directions. We covered a few aspects, but other aspects are out of the scope of our current paper. In particular, Section 4 demonstrates the issues of robustness evaluation caused by the sensitivity. Section 5 intends to inspire future research by excluding the possibilities that the sensitivity can be explained by obvious reasons, or be resolved in trivial ways, which implies that understanding the causes or finding the remedies is non-trivial future directions.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the effort in reviewing the paper and also finding that the topic is interesting.\\n\\n--------\\n>> \\\"The paper is interesting and topical: robustness to adversarial input presentation (or shifts in training data itself, even those of the nature described by the authors 'semantic-lossless' shifts)\\\"\\n\\nWe don't quite understand the phrase \\\"adversarial input presentation\\\" in the review. Just to avoid possible misunderstanding, let us first clarify our finding in this paper: the sensitivity of adversarial robustness to the input data distribution D. Assume that R is the adversarial robustness of the method of adversarial training on D. Now consider a semantic-lossless shift of D, say D'. Again we train a model on the training data sampled from D', and test its adversarial robustness R' on the test set again sampled from D' too. We find that R' can be significantly different from R. We have improved the introduction to make this point more clear.\\n\\nSo, in our terminology, there is no adversarial input presentation. D' is just a common 'semantic-lossless' shift of D in the paper, without being against adversarial training.\\n\\n\\n--------\\n>> \\\"Unfortunately, the empirical part of the paper is weakened by an over-reliance of (custom perturbations of ) the popular MNIST and CIFAR10 datasets (which are themselves based on larger sets). Furthermore, the basic conclusion as to causes and remedies of lack of robustness is not evident, and it is not evident that it has been sufficiently investigated. Shape yes, differences in perturbable volume not (how does that concur with Section 2?), and inter-class distance also not. Are we to base these conclusions on 2 perturbed datasets? \\\"\\n\\nWe have two sets of experiments in this paper, one in Section 3 and one in Section 5. We are not sure which one this review corresponds to.\\n\\nAs mentioned above, the purpose of Section 3 is to demonstrate the existence of the sensitivity. The transformations (semantic-lossless shifts) in this section are common and are not designed to overthrow the adversarial robustness. We believe our experimental results in this section is sufficiently significant to prove the existence of such sensitivity. Furthermore, the gamma correction in Section 4 is a common transformation in image processing, which also demonstrates the sensitivity.\\n \\nOn the other hand, we agree with the reviewer that results in Section 5 are not evident enough to conclude about the causes and remedies. In fact, it is exactly because of this that we only make conservative claims in this section. As mentioned above, Section 5 is an initial attempt, and we do not consider it as the main contribution of this paper. Although we don't have a definitive answer for the problem, we believe that our findings in this paper should be noticed by the adversarial example community and it is already sufficiently significant and important for a publication.\\n\\nLastly, let us also clarify our statements on perturbable volume and inter-class distance in Section 5. We found that they are both correlated with robust accuracy. However, when we examine whether they are decisive factors for robustness, counterexamples exist for both of them. We therefore made inconclusive statements highlighting the complexity of the problem. We faithfully report our investigations in this section, and we hope that they can inspire future research around this topic. 
\\n\\nIn case our responses above do not address your concerns, it would be nice if you could clarify them further. \\n\\n\\n--------\\n>> \\\"How are readers to synthesize the final conclusion that robustness is a 'complex interaction of tasks and data', other than what they would already expect?\\\" \\n\\nWe do not intend to make this a \\\"final conclusion\\\". By \\\"complex interaction of tasks and data\\\", we only want to make a remark on the sample complexity difference between binarized MNIST and binarized CIFAR10. Our intention is to truthfully report to the readers that although binarization largely affects robustness, it does not decide every aspect of it. We have updated our paper to avoid this confusion.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their time and efforts. We especially appreciate that all reviewers find that the problem being investigated is interesting.\\nWe summarize our main contributions to address common concerns in this post, and provide more details in the responses to each reviewer.\\n\\nTo avoid clutter, we use the following abbreviated phrases:\\n\\\"sensitivity\\\" means \\\"the sensitivity of adversarial robustness to input data distributions\\\".\\n\\\"clean accuracy\\\" means \\\"prediction accuracy of the standardly trained model on natural examples\\\".\\n\\\"robust accuracy\\\" means \\\"prediction accuracy of the adversarially trained model on adversarial examples\\\".\\n\\nOur main contribution is the discovery that the robust accuracy of adversarially trained models is very sensitive to input data distributions, which is previously unnoticed in the literature and in sharp contrast to the steady clean accuracy in the standard learning setting. In theory, we show regular Bayes error's invariance and robust error's sensitivity to input distribution shift. We also show that if the data is uniformly distributed in a unit cube, then the perfect decision boundary cannot be robust. On the other hand, for the binary MNIST dataset, we found a provably robust classifier, using the algorithm by Kolter and Wong (2017), which guarantees 97% robust accuracy under 0.3 \\\\ell_infty perturbation. Such contrast motivates us to design systematic experiments (smoothed MNIST and saturated CIFAR in the paper) to investigate the dependence of adversarial robustness on the input data distributions, which empirically demonstrates the sensitivity. \\n\\nWe admit (also in the paper) that we don't have a definitive explanation or remedy for such sensitivity. Section 5 is only an initial attempt, and we do not consider it as the main contribution of the paper. We examined some natural hypotheses. We found highly correlated factors, but they don't fully explain the phenomenon, which suggests the absence of obvious solutions. We report these results not to make conclusive claims, but to hope to inspire future research in this direction. \\n\\nWe believe that solely by itself our discovery of the sensitivity is a significant contribution, and it has an important implication in both practice and theory. In practice, our finding raises questions on how to properly evaluate the adversarial robustness of different learning algorithms. Benchmarking adversarial robustness on only a few datasets may not be reliable due to such sensitivity. In theory, our finding opens a new angle for understanding the cause of \\\"lack of robustness\\\". More specifically, Schmidt et al. (2018) show that different data distributions could have drastically different properties of adversarially robust generalization. Our finding indicates that gradual semantics-preserving transformations of data distribution can also cause large changes to datasets' achievable robustness. Tsipras et al. (2018) hypothesize the existence of the intrinsic tradeoff between clean accuracy and adversarial robustness. 
Our work complements this result by showing different levels of tradeoffs for different input data distributions.\\n\\nIn summary, we would like to emphasize that the main contribution of this paper is discovering and firmly demonstrating the existence of the sensitivity both in theory and in experiments, which we believe solely by itself is important in practice and for understanding the phenomenon of adversarial examples. Although we don't have a definitive explanation or remedy for it, our paper is a starting point for future lines of research around this topic.\\nWe have improved the introduction of the paper to make this message more direct. Please see the latest updated version.\\n\\n\\nKolter, J. Z. and Wong, E. (2017). Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851.\\n\\nSchmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. (2018). Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285.\\n\\nTsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. (2018). There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152.\"}",
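As a rough illustration of the two semantics-preserving shifts the authors describe above (smoothing MNIST and saturating CIFAR), here is a sketch with one plausible parameterization. These are for intuition only and are not necessarily the paper's exact operators; pixel values are assumed to lie in [0, 1].

```python
# Illustrative "saturation" (push pixels toward {0,1}) and "smoothing"
# (local-averaging blur) transforms on an image array.
import numpy as np

def saturate(x, p):
    """p = 2 leaves x unchanged; p -> infinity binarizes it."""
    c = 2.0 * x - 1.0
    return np.sign(c) * np.abs(c) ** (2.0 / p) / 2.0 + 0.5

def smooth(x):
    """Separable blur with a 3-tap kernel applied along both axes."""
    k = np.array([1.0, 2.0, 1.0]); k /= k.sum()
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)

img = np.random.rand(8, 8)
assert np.allclose(saturate(img, 2), img)        # p = 2 is the identity
binarized = saturate(img, 1e6)                   # near 0/1 everywhere
blurred   = smooth(img)
```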
"{\"title\": \"Review\", \"review\": \"A nice paper that clarifies the difference between the clean accuracy (accuracy of models on non-perturbed examples) and the robust accuracy (accuracy of models on adversarially perturbed examples) and it shows that changing the marginal distribution of the input data P(x) while preserving its semantic P(y|x) fixed affects the robustness of the model. Therefore, testing the robustness of the model should be performed in a careful manner. Comprehensive experiments were performed to show that changing the distribution of the MINST (smoothing) and CIFAR (saturation) data could lead to a significant difference in robust accuracy while the clean accuracy is almost steady. In addition, a set of experiments were performed in an attempt to search for the criteria required for choosing a proper dataset for testing adversarial attack to measure the robustness.\\n\\nAlthough I\\u2019m not expert in the field of adversarial attack but the paper is very nice to read and easy to follow (I have not checked the proof of the theorems though).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"investigation as to the origin of lack of robustness of classifiers to perturbations of the input data\", \"review\": \"The paper is interesting and topical: robustness to adversarial input presentation (or shifts in training data itself, even those of the nature described by the authors 'semantic-lossless' shifts). Adversarial inputs are investigated under l-inf bounded perturbations, while multiclass classification on images is the target problem considered. The theoretical parts of the paper, assigning lack of adversarial robustness to the shape of the input distribution (Section 2) is the strongest part of the paper, adding some simple and important insights. Unfortunately, the empirical part of the paper is weakened by an over-reliance of (custom perturbations of ) the popular MNIST and CIFAR10 datasets (which are themselves based on larger sets). Furthermore, the basic conclusion as to causes and remedies of lack of robustness is not evident, and it is not evident that it has been sufficiently investigated. Shape yes, differences in perturbable volume not (how does that concur with Section 2?), and inter-class distance also not. Are we to base these conclusions on 2 perturbed datasets? How are readers to synthesize the final conclusion that robustness is a 'complex interaction of tasks and data', other than what they would already expect? In short, a valiant effort, and a good direction, but one that needs more work.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A super interesting paper discussing the impact of data distribution on adversarial robustness of trained neural networks\", \"review\": \"This paper provides several theoretical and practical insights on the impact of data distribution to adversarial robustness of trained networks. The paper reads well and provides analysis on two datasets MNIST and CIFAR10. I particularly like the result demonstrating that a lossless transformation on the data distribution could significantly impact the robustness of an adversarial trained models. The idea of using smoothness and saturation to bridge the gap between the MNIST and CIFAR10 datasets was also very interesting. One thing that is not clear from the paper is how one could use the findings from this paper and put it into practice. In other words, it would help if the authors could provide some insights on how to improve a model robustness w.r.t the changes in the data distribution. The authors did an attempt toward this in section 5, but that seems to only cover three factors that do not cause the difference in robustness.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
ByME42AqK7 | Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution | [
"Thomas Elsken",
"Jan Hendrik Metzen",
"Frank Hutter"
] | Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance. | [
"Neural Architecture Search",
"AutoML",
"AutoDL",
"Deep Learning",
"Evolutionary Algorithms",
"Multi-Objective Optimization"
] | https://openreview.net/pdf?id=ByME42AqK7 | https://openreview.net/forum?id=ByME42AqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eQLvrbgN",
"HkgjDn7V07",
"ryxN-nQVRX",
"B1x65oXNRX",
"BkgSVo7VRm",
"SJgMKr5h3X",
"rJeAPV55hQ",
"BygMkWst37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544800074583,
1542892643433,
1542892539604,
1542892436800,
1542892332841,
1541346681986,
1541215333808,
1541152986008
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1442/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1442/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1442/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1442/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an evolutionary architecture search method which uses weight inheritance through network morphism to avoid training candidate models from scratch. The method can optimise multiple objectives (e.g. accuracy and inference time), which is relevant for practical applications, and the results are promising and competitive with the state of the art. All reviewers are generally positive about the paper. Reviewers\\u2019 feedback on improving presentation and adding experiments with a larger number of objectives has been addressed in the new revision.\\n\\nI strongly encourage the authors to add experiments on the full ImageNet dataset (not just 64x64) and/or language modelling -- the two benchmarks widely used in neural architecture search field.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting method, promising results\"}",
"{\"title\": \"Revised version of our paper online!\", \"comment\": \"Dear reviewers,\\n\\nthanks again for your valuable feedback. We just updated our paper. We mainly made two modifications, based on your feedback:\\n1) We reorganized the paper according to your suggestions; some parts of the main paper were moved to the appendix, some parts of the appendix were moved to the main paper.\\n2)As you were asking whether LEMONADE is applicable to more than 2 objectives, we ran an experiment with 5 objectives, namely 1) performance on Cifar 10 (expensive objective), 2) performance on Cifar 100 (expensive) , 3) number of parameters (cheap), 4) number of multiply-add operations (cheap), 5) inference time (cheap). We refer to Appendix 3, \\u201cLEMONADE with 5 objectives\\u201d, for details and results, but in a nutshell the results are very positive and qualitatively resemble those for two objectives. While we put this experiment into the appendix for now to not change the main paper too much compared to the submitted version, if you agree we would also be very happy to include this experiment in the main paper.\\n\\nWe hope the updated version and our answers to your reviews have cleared out all major concerns and we kindly ask you to update your rating if we clarified your concerns.\"}",
"{\"title\": \"Answering your questions\", \"comment\": \"Dear AnonReviewer2,\\nthank you for your constructive feedback. Below we address your concerns and questions.\\n\\n\\u201cJudging from Table 1, the proposed method does not seem to provide a large contribution. For example, while the proposed method introduced the regularization about the number of parameters to the optimization, NASNet V2 and ENAS outperform the proposed method in terms of the accuracy and the number of parameters.\\u201c\\n\\u2192 The authors of NASNet only provide results for two regimes of parameters (3.3M and 27M) as they do not perform multi-objective optimization but rather just vary two parameters for building NASNet models (number of cells stacked, number of filters). Their method might be optimized to yield good results in these regimes and, admittedly, LEMONADE does not outperform NASNet for models with ~4M parameters. However, from Figure 3 and Table 2 one can see that only varying these two parameters for NASNet models is not necessarily sufficient to generate good models across all parameter regimes. E.g., LEMONADE clearly outperforms NASNet for very small models (50k params, 200k params - Table 2). We also refer to Appendix 3 (\\u201cLEMONADE with 5 objectives\\u201d), Figure 6, in the updated version of our paper, where one can see that while NASNet has quite strong performance in terms of error, number of parameters and number of multiply-add operations, it performs poorly in terms of inference time. Hence, there is a benefit in doing multi-objective optimization if one is actually interested in multiple objectives and diverse models rather than a single model. This is the main contribution of our paper and different to, e.g., the NASNet paper. The same likely also applies for ENAS (as they use the same search space and conduct very similar experiments). We also would like to highlight two things: 1) NASNet requires 40x computational resources than LEMONADE, so even if NASNet performs better for ~4M parameter models, LEMONADE achieves competitive performance in significantly less time. 2) Table 1 shows results for models trained with different training pipelines and hyperparameters, and hence it is hard to say architecture X performs better than architecture Y since differences could simply be due to e.g. different learning rates, batch sizes, etc. In contrast, all other results in the paper (e.g., Figure 3 and Table 2) provide comparisons with exactly the same training pipeline and hyperparameters. .\\n\\n\\u201cIt would be better to provide the details of the procedure of the proposed method (e.g., Algorithm 1 and each processing of Algorithm 1) in the paper, not in the Appendix. \\u201c\\n-> Thanks, we agree; we re-organized our paper accordingly.\\n\\n\\n\\u201c- In the case of the search space II, how many GPU days does the proposed method require? \\n-> We also ran this experiments for 7*8 GPU days, however the method converged after roughly 3*8 GPU days (meaning that there were no significant differences afterwards).\\n\\n\\u201cAbout line 10 in Algorithm 1, how does the proposed method update the population P? Please elaborate on this procedure.\\u201d\\n-> The population is updated to be all non-dominated points from the current population and the generated children, i.e. the Pareto frontier based on all current models. We clarified this in Algorithm 1. Thanks for pointing us towards this.\\n\\n\\nWe hope this clarifies your questions. Thanks again for the review!\"}",
"{\"title\": \"Answering the questions\", \"comment\": \"Dear AnonReviewer1,\\nthank you for your positive and constructive feedback. Below we address your concerns and questions.\\n\\n\\u201cWhat value of $\\\\epsilon$ in Eqn (1) is used? [...] how can they guarantee the \\\\epsilon-ANM condition?\\u201d\\n\\u2192 Indeed, one can not guarantee the \\\\epsilon-ANM condition for an arbitrary epsilon. However, in our application one does not need to explicitly select $\\\\epsilon$ at all. We simply apply an approximate network morphism operator. Case 1, epsilon is small: the output is a network that is \\u201csmaller\\u201d than its parent and has a similar error, so the children will likely be non-dominated and it will be part of the pareto front in the next generation. Case 2, epsilon is large (hence likely also the error): the children will likely be dominated by some other network and it will be discarded when the Pareto front is updated. Thus, in both cases, the specific epsilon doesn\\u2019t matter. The step of LEMONADE, where the Pareto front is updated, will automatically decide whether the morphing was successful or not based on the (non-)domination criterion. We updated (shortened) the section on approximate network morphism to not put a too strong emphasis on this. Hopefully it is now less confusing.\\n\\n\\n\\u201c[...] the method as currently presented does not show possible generalization beyond these two objectives, which is a weakness of the paper.\\u201d\\n-> We respectfully disagree. In principle, the proposed method is - as is - applicable to arbitrary objectives and arbitrary many objectives. It is neither restricted to these specific objectives nor to n=2 objectives. To demonstrate this, we carried out a new experiment with exactly the same method on 5 objectives (2 expensive ones, 3 cheap ones). We refer to the additional experiment, Appendix 3 (\\u201cLEMONADE with 5 objectives\\u201d), in the updated version of our paper.\\n\\n\\u201cHow would LEMONADE handle situations when there are more than one $f_{cheap}$, especially when different $f_{cheap}$ may have different value ranges? Eqn (8) and Eqn (9) does not seem to handle these cases.\\u201d\\n-> Both equations are not restricted to 1D inputs. (Kernel) density estimators can, in general, be applied to arbitrary dimensions and most packages allow multi-dimensional inputs by default (e.g. KDE in scipy or scikit-learn). Of course, density estimation becomes problematic with increasing number of dimensions, but we believe 4-6 objectives is a realistic dimensionality for NAS applications, and scaling to significantly more objectives will typically not be necessary. \\n\\nNote that the output of a KDE is always 1D, independent of the input. Also, most packages provide methods for, e.g., automatic bandwidth computation (per input dimension) to handle different value ranges. To make the input and output spaces in equations 8,9 (equations 1,2 in the updated version) clearer, we provide them in detail here:\", \"f_cheap\": \"<some neural network space> \\u2192 R^n, where n is the number of cheap objectives\", \"p_kde\": \"R^n \\u2192 R\", \"p_p\": \"<some neural network space> \\u2192 R\\n\\n\\u201cSame question with $f_{exp}$.\\u201d\\n\\u2192 The expensive objectives are only involved in the last two steps of LEMONADE (evaluate $f_{exp}$ on the subset of children, update the Pareto frontier). These steps can be applied to more than one expensive objective. E.g. 
instead of training the children only on CIFAR-10, we can also train them on some other data set as well (and in our new experiment with 5 objectives we indeed also train them on CIFAR-100 as a second expensive objective). Of course, the runtime of the method will increase linearly in the number of expensive objectives. \\n\\nSo, to summarize regarding having only 2 objectives:\\n1) Our method can in principle handle more than 2 objectives (both cheap and expensive), there is no general restriction to n=2 objectives.\\n2) From an implementation point of view, common packages for computing density estimators automatically deal with multi-dimensional inputs and different ranges, hence LEMONADE can be run - as is - with multi-dimensional objectives without any further user interaction or modifications.\\n3) To confirm these statements, we ran an additional experiment with 5 objectives - 2 expensive ones (performances on Cifar-10, performance on Cifar-100) and 3 cheap ones (number of parameters, number of multiply-add operations, inference time). We refer to Appendix 3, \\u201cLEMONADE with 5 objectives\\u201d, in the updated version of our paper for details and results. \\n\\n\\nWe hope this clarifies your questions. Thanks again for the review!\"}",
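The mechanics sketched in this answer (f_cheap maps a network to R^n, p_KDE maps R^n to R) can be illustrated with an off-the-shelf multivariate KDE. One caveat: the inverse-density parent weighting below is a plausible reading of "sampling toward sparse regions of objective space" and an assumption on my part, not a transcription of the paper's equations 8-9 (1-2 in the revision):

```python
import numpy as np
from scipy.stats import gaussian_kde

# cheap_objs: (m, n) -- m current networks, n cheap objectives, e.g.
# (log #params, #multiply-adds, inference time). Placeholder values here.
rng = np.random.default_rng(0)
cheap_objs = rng.random((50, 3))

kde = gaussian_kde(cheap_objs.T)      # scipy expects shape (n_dims, n_points)
density = kde(cheap_objs.T)           # p_KDE evaluated at each network

# Favor parents from sparsely populated regions of objective space
# (assumed weighting, for illustration only).
weights = 1.0 / np.maximum(density, 1e-12)
probs = weights / weights.sum()
parent_idx = rng.choice(len(cheap_objs), size=10, p=probs)
```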
"{\"title\": \"Answering the questions\", \"comment\": \"Dear AnonReviewer3,\\nthank you for your positive review and constructive feedback!\\n\\nWe agree that the structure of the paper was not optimal and reorganized it along the lines you suggested (thanks for the suggestion!). Below we address specific questions.\\n\\n\\u201cI am a bit unclear about how comparisons are made to other methods that do not optimize for small numbers of parameters? Do you compare against the lowest error network found by LEMONADE? The closest match in # of parameters?\\u201d\\n-> The latter: we compared with the models with the closest match in # of parameters.\\n\\n\\u201cWhy is the second objective log(#params) instead of just #params when the introduction mentions explicitly that tuning the scales between different objectives is not needed in LEMONADE?\\u201d \\n-> We stated that defining a trade-off between objectives is not necessary (in case you are referring to this statement), which would, e.g., be necessary when one would scalarize objectives by using a weighted sum. Rescaling an objective, however, is different as it is independent from other objectives: it only depends on that specific objective and which scale is important to the user and the application. For the number of parameters, the log scale is natural to cover a large range of sizes: think of a plot of size vs. performance; in order to see anything for small sizes one would typically put the size on a log scale (and we indeed did, see, e.g., Figures 3 and 4). Therefore, it is most natural to also put the number of parameters on a log scale for LEMONADE.\\n\\n\\u201cIt seems like LEMONADE would scale poorly to more than 2 objectives, since it effectively requires approximating an #objectives-1 dimensional surface with the population of parents. How could scaling be handled?\\u201d\\n-> We think having 4-6 objectives is a realistic dimensionality for NAS applications, and scaling to significantly more objectives (which would indeed be problematic for our method, but also for multi-objective optimization in general) is typically not necessary. In response to this question, to demonstrate this, wee conducted a new experiment with 5 objectives (performance on Cifar 10, performance on Cifar 100, number of parameters, number of multiply-add operations, inference time) to show that LEMONADE can handle these realistic scenarios natively. We refer to the updated version of our paper for the results (Appendix 3,\\u201cLEMONADE with 5 objectives\\u201d), but in a nutshell the results are very positive and qualitatively resemble those for two objectives.\\nWhile we put this experiment into the appendix for now to not change the main paper too much compared to the submitted version, if the reviewers agree we would also be very happy to include this experiment in the main paper.\\n\\nWe hope this clarifies your questions. Thanks again for the review!\"}",
"{\"title\": \"An interesting method with a troubled presentation\", \"review\": \"This paper proposes LEMONADE, a random search procedure for neural network architectures (specifically neural networks, not general hyperparameter optimization) that handles multiple objectives. Notably, this method is significantly more efficient more efficient than previous works on neural architecture search.\\n\\nThe emphasis in this paper is very strange. It devotes a lot of space to things that are not important, while glossing over the details of its own core contribution. For example, Section 3 spends nearly a full page building up to a definition of an epsilon-approximate network morphism, but this definition is never used. I don't feel like my understanding of the paper would have suffered if all Section 3 had been replaced by its final paragraph. Meanwhile the actual method used in the paper is hidden in Appendices A.1.1-A.2. Some of the experiments (eg. comparisons involving ShakeShake and ScheduledDropPath, Section 5.2) could also be moved to the appendix in order to make room for a description of LEMONADE in the main paper.\\n\\nThat said, those complaints are just about presentation and not about the method, which seems quite good once you take the time to dig it out of the appendix.\\n\\nI am a bit unclear about how comparisons are made to other methods that do not optimize for small numbers of parameters? Do you compare against the lowest error network found by LEMONADE? The closest match in # of parameters?\\n\\nWhy is the second objective log(#params) instead of just #params when the introduction mentions explicitly that tuning the scales between different objectives is not needed in LEMONADE?\\n\\nIt seems like LEMONADE would scale poorly to more than 2 objectives, since it effectively requires approximating an #objectves-1 dimensional surface with the population of parents. How could scaling be handled?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Review\", \"review\": \"Summary:\\nThe paper proposes LEMONADE, an evolutionary-based algorithm the searches for neural network architectures under multiple constraints. I will say it first that experiments in the paper only actually address to constraints, namely: log(#params) and (accuracy on CIFAR-10), and the method as currently presented does not show possible generalization beyond these two objectives, which is a weakness of the paper.\\n\\nAnyhow, for the sake of summary, let\\u2019s say the method can actually address multiple, i.e. more than 2, objectives. The method works as follows.\\n\\n1. Start with an architecture.\\n\\n2. Apply network morphisms, i.e. operators that change a network\\u2019s architecture but also select some weights that do not strongly alter the function that the network represents. Which operations to apply are sampled according to log(#params). Details are in the paper.\\n\\n3. From those sampled networks, the good ones are kept, and the evolutionary process is repeated.\\n\\nThe authors propose to use operations such as \\u201cNet2WiderNet\\u201d and \\u201cNet2DeeperNet\\u201d from Chen et al (2015), which enlarge the network but also choose a set of appropriate weights that do not alter the function represented by the network. The authors also propose operations that reduce the network\\u2019s size, whilst only slightly change the function that the network represented.\\n\\nExperiments in the paper show that LEMONADE finds architecture that are Pareto-optimal compared to existing model. While this seems like a big claim, in the context of this paper, this claim means that the networks found by LEMONADE are not both slower and more wrong than existing networks, hand-crafted or automatically designed.\", \"strengths\": \"1. The method solves a real and important problem: efficiently search for neural networks that satisfy multiple properties.\\n\\n2. Pareto optimality is a good indicator of whether a proposed algorithm works on this domain, and the experiments in the paper demonstrate that this is the case.\", \"weaknesses\": \"1. How would LEMONADE handle situations when there are more than one $f_{cheap}$, especially when different $f_{cheap}$ may have different value ranges? Eqn (8) and Eqn (9) does not seem to handle these cases.\\n\\n2. Same question with $f_{exp}$. In the paper the only $f_{exp}$ refers to the networks\\u2019 accuracy on CIFAR-10. What happens if there are multiple objectives, such as (accuracy on CIFAR-10, accuracy on ImageNet) or (accuracy on CIFAR-10, accuracy on Flowers, image segmentation on VOC), etc.\\n\\nI thus think the \\u201cMulti-Objective\\u201d is a bit overclaimed, and I strongly recommend that the authors adjust their claim to be more specific to what their method is doing.\\n\\n3. What value of $\\\\epsilon$ in Eqn (1) is used? Frankly, I think that if the authors train their newly generated children networks using some gradient descent methods (SGD, Momentum, Adam, etc.), then how can they guarantee the \\\\epsilon-ANM condition? Can you clarify and/or change the presentation regarding to this part?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The proposed method is interesting, but the proposed method does not seem to provide a large contribution\", \"review\": [\"Summary\", \"This paper proposes a multi-objective evolutionary algorithm for the neural architecture search. Specifically, this paper employs a Lamarckian inheritance mechanism based on network morphism operations for speeding up the architecture search. The proposed method is evaluated on CIFAR-10 and ImageNet (64*64) datasets and compared with recent neural architecture search methods. In this paper, the proposed method aims at solving the multi-objective problem: validation error rate as a first objective and the number of parameters in a network as a second objective.\", \"Pros\", \"The proposed method does not require to be initialized with well-performing architectures.\", \"This paper proposes the approximate network morphisms to reduce the capacity of a network (e.g., removing a layer), which is reasonable property to control the size of a network for multi-objective problems.\", \"Cons\", \"Judging from Table 1, the proposed method does not seem to provide a large contribution. For example, while the proposed method introduced the regularization about the number of parameters to the optimization, NASNet V2 and ENAS outperform the proposed method in terms of the accuracy and the number of parameters.\", \"It would be better to provide the details of the procedure of the proposed method (e.g., Algorithm 1 and each processing of Algorithm 1) in the paper, not in the Appendix.\", \"In the case of the search space II, how many GPU days does the proposed method require?\", \"About line 10 in Algorithm 1, how does the proposed method update the population P? Please elaborate on this procedure.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SklEEnC5tQ | DISTRIBUTIONAL CONCAVITY REGULARIZATION FOR GANS | [
"Shoichiro Yamaguchi",
"Masanori Koyama"
] | We propose Distributional Concavity (DC) regularization for Generative Adversarial Networks (GANs), a functional gradient-based method that promotes the entropy of the generator distribution and works against mode collapse.
Our DC regularization is an easy-to-implement method that can be used in combination with the current state of the art methods like Spectral Normalization and Wasserstein GAN with gradient penalty to further improve the performance.
We will not only show that our DC regularization can achieve highly competitive results on ILSVRC2012 and CIFAR datasets in terms of Inception score and Fréchet inception distance, but also provide a mathematical guarantee that our method can always increase the entropy of the generator distribution. We will also show an intimate theoretical connection between our method and the theory of optimal transport. | [
"Generative Adversarial Networks",
"regularization",
"optimal transport",
"functional gradient",
"convex analysis"
] | https://openreview.net/pdf?id=SklEEnC5tQ | https://openreview.net/forum?id=SklEEnC5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkl9yDCSxE",
"HJlnEwUMRQ",
"rkxoIlOOaQ",
"Skgk8OrDTm",
"rkl84uRMaQ",
"rJllM_RzaQ",
"HJxRuUCz6Q",
"ryxaWURfTX",
"HJeZ1SFh3Q",
"Hyx3bbVc27",
"rJgOIkIKnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545098977893,
1542772531747,
1542123603363,
1542047815474,
1541756973802,
1541756936464,
1541756533895,
1541756420613,
1541342425162,
1541189892275,
1541132111552
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1441/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1441/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1441/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1441/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1441/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1441/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes distributional concavity regularization for GANs which encourages producing generator distributions with higher entropy.\\n\\nThe reviewers found the contribution interesting for the ICLR community. R3 initially found the paper lacked clarity, but the authors took the feedback in consideration and made significant improvements in their revision. The reviewers all agreed that the updated paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A solid contribution to regularize GANs\"}",
"{\"title\": \"Thank you very much for the positive and helpful comments!\", \"comment\": \"As advised, we made particular effort to improve the readability of the paper. We made multiple large/small revisions throughout the paper to reflect the suggestions. We would like to elaborate on our revisions in the form of writing a response to each comment we received: \\n\\n>Weaknesses:\\n>-Readability of the paper can be generally improved. I had to go over the paper many times to get the idea.\\n\\nWe took this advice as seriously as possible, and particularly reorganized introduction and section 2.\\nWe reworded many phrases in an effort to convey our ideas from more intuitive perspective. \\nWe moved the technical description of the section 2 to the appendix for the readers with interests in functional theoretic background of our algorithm.We hope that our revision improves the readability of the paper. \\n\\n>-Figures should be provided with more detailed captions, which explain main result and providing context \\n>(e.g. explaining baselines).\\n\\nWe added more description to the captions, and elaborated on the experiment described by each figure. \\n\\n>Questions/Comments:\\n>-Equation (7) has typos (uses theta_old instead of theta in some places)\\n\\nThank you very much! We fixed the typos in the equation 7. \\n\\n>-Section 4.1 (effect of monoticity) is a bit confusing. My understanding is that parameter update rule of \\n>equation (3) and (6) are equivalent, but you seem to use (6) there. Can you clarify what you do there and in \\n>general this experiment a bit more?\\n\\nWe must admit that we were not clear enough about the motive of the second experiment in section 4.1. \\nAs we have additionally explained in the revised version of the section 2.3, the \\u201cmonotoniciy\\u201c is a property satisfied by the optimal-transport-based update that can possibly have a good effect on the distillation step. \\nMonotonicity is a property that our algorithm guarantees for the map used in our update as well. \\nIn distillation step, the goal of the user is to approximate the target distribution with the parametric distribution, \\nand as many SGD steps can be used as liked. In conventional GANs, only one SGD update is applied to the parametric generator G. \\nThe purpose of our second experiment in section 4.1 is to assess the effect of monotonicity on this distillation step. We prepared a pair of target distributions---one constructed with a monotonic map and another constructed with a non-monotonic map, with the former being further away(in Wasserstein sense) from the current distribution and both of them yielding the same value for the objective function. \\nAgainst the intuition based on \\u201cthe distance\\u201d, the distillation procedure is easier for the distribution constructed with the monotonic map.\\nThis experiments demonstrates a case in which the monotonicity works in favor of the training of G( in distillation step). \\n\\n>-Comparing with entropy maximization method of EGAN (Dai et al, 2017) is a good idea, but I\\u2019m wondering \\n>if you can compare it on low dimensional settings (e.g. as in Fig 2). It is also not clear why increasing entropy \\n>with EGAN-VI is worse than baselines in Table 1.\\n\\nWe conducted experiments on the low dimensional setting as well, and confirmed that our implementation of EGAN-VI indeed achieves higher entropy than the vanilla experiment without the regularization. 
As we can see in the results, the performance is not too impressive, however. \\n\\nAs for the second concern being raised, we would like to note that the original implementation of EGAN in the publication was conducted without Spectral Normalization (SN), which generally improves the results for most methods. Our baseline method is not an ordinary DCGAN, but the DCGAN with SN that is known to perform at competitive quality on datasets like ImageNet and CIFAR10. \\nIn fact, the \\u201cvanilla DCGAN with SN\\u201d and \\u201cEGAN with SN\\u201d both perform better than the vanilla EGAN as well. In this light, it is not so surprising that EGAN with SN performs worse than the vanilla SN-DCGAN on CIFAR10, because variational inference for the entropy of a distribution in a high dimensional space like the one dealt with in CIFAR10 is difficult. \\nFor the experiment, we used EGAN-VI based on a Gaussian distribution, as opposed to EGAN-Nearest Neighbor. In the original publication, this version of EGAN-VI was used for their experiment on CIFAR10. We experimented with multiple parameters and always reported the result with the best Inception Score/FID. \\nIn general, EGAN needs to prepare a decoder in addition to the pair of Generator and Discriminator. Because the training for each of them is conducted separately, it is difficult to find the right balance between the two during the training.\"}",
"{\"title\": \"Emphasize improving readability of the paper\", \"comment\": \"I would like to emphasize that the main weakness in the paper, in my opinion, is that it can be quite hard to read for the general ICLR community. The authors are strongly encouraged to try to make it more accessible, which will in fact will increase the impact of the paper eventually.\"}",
"{\"title\": \"Sound method and good results\", \"review\": [\"Summary:\", \"This paper proposes distributional concavity regularization for GANs which encourages producing generator distributions with higher entropy. The paper motivates the proposed method as follows:\", \"Using the concept of functional gradient, the paper interprets the update in the generator parameters as an update in the generator distribution\", \"Given this functional gradient perspective, the paper proposes updating the generator distribution toward a target distribution which has *higher entropy and satisfies monoticity*\", \"Then, the paper proves that this condition can be satisfied by ensuring that generator\\u2019s objective (L) is concave\", \"Since it\\u2019s difficult to ensure concavity when parametrizing generators as deep neural networks, the paper proposes adding a simple penalty term that encourages the concavity of generator objective\", \"Experiments confirm the validity the proposed approach. Interestingly, the paper shows that performance of multiple GAN variants can be improved with their proposed method on several image datasets\"], \"strengths\": [\"The proposed method is very interesting and is based on sound theory\", \"Connection to optimal transport theory is also interesting\", \"In practice, the method is very simple to implement and seems to produce good results\"], \"weaknesses\": [\"Readability of the paper can be generally improved. I had to go over the paper many times to get the idea.\", \"Figures should be provided with more detailed captions, which explain main result and providing context (e.g. explaining baselines).\", \"Questions/Comments:\", \"Equation (7) has typos (uses theta_old instead of theta in some places)\", \"Section 4.1 (effect of monoticity) is a bit confusing. My understanding is that parameter update rule of equation (3) and (6) are equivalent, but you seem to use (6) there. Can you clarify what you do there and in general this experiment a bit more?\", \"Comparing with entropy maximization method of EGAN (Dai et al, 2017) is a good idea, but I\\u2019m wondering if you can compare it on low dimensional settings (e.g. as in Fig 2). It is also not clear why increasing entropy with EGAN-VI is worse than baselines in Table 1.\"], \"overall_recommendation\": \"The paper is based on sound theory and provides very interesting perspective. The method seems to work in practice on a variety of experimental setting. Therefore, I recommend accepting it.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"response, con'd\", \"comment\": \"> 4. Differentiation w.r.t. functions (or more generally elements in normed spaces) is a well-defined concept in mathematics, including the notions of Gateaux, Frechet and Hadamard differentiability. It is not clear why the authors neglect these classical concepts, and are talking about 'random functional perturbations', ... It is also unclear where the optimized transformation (T) lives; the authors are trying to differentiate over some function space which is undefined.\\n\\nWe have added more descriptions in the theory section and expressed our intention that we are taking Frechet derivatives of a functional defined on a Hilbert space consisting of functions that are L2 integrable with respect to the probability measure of concern. In functional gradient applications to generative models [11, 12], conditions required for the Frechet differentiability of the objective functional are often assumed to hold. We hope that our revisions made the paper more readable to the wider audience, and hope that the paper now assumes less knowledge of GANs in understanding our idea.\\n\\n[1] Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.\\n[2] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In ICLR, 2017.\\n[3] Martin Arjovsky and Le \\u0301on Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.\\n[4] Martin Arjovsky, Soumith Chintala, and Le \\u0301on Bottou. Wasserstein generative adversarial networks. In ICML, pp. 214\\u2013223, 2017.\\n[5] Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. Pac gan: The power of two samples in generative adversarial networks. arXiv preprint arXiv:1712.04086, 2017.\\n[6] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. In ICLR, 2017.\\n[7] Theophilos Cacoullos. Estimation of a multivariate density. Annals of the Institute of Statistical Mathematics, 18(1):179\\u2013189, 1966.\\n[8] Arkadas Ozakin and Alexander G Gray. Submanifold density estimation. In Advances in Neural Information Processing Systems, pp. 1375\\u20131382, 2009.\\n[9] Alfredo Canziani, Eugenio Culurciello, Adam Paszke. AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS. arXiv preprint arXiv:1605.07678, 2017.\\n[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672\\u20132680, 2014.\\n[11] Atsushi Nitanda and Taiji Suzuki. Gradient layer: Enhancing the convergence of adversarial training for generative models. In AISTATS, pp. 1008\\u20131016, 2018.\\n[12] Rie Johnson and Tong Zhang. Composite functional gradient learning of generative adversarial models. In ICML, pp. 2376\\u20132384, 2018.\"}",
"{\"title\": \"Thank you very much for the suggestions and comment for the revisions!\", \"comment\": \"Thank you very much for the comment. We have revised the script to reflect the suggestions, and we would like to articulate on the changes we have made. Because we are out of space, we will provide the answers over several comments. For the list of mentioned references, please see the 'last' comment.\\n\\n> 1. Abbreviations, notations are not defined: GAN, WGAN-GP, DNN, FID (the complete name only shows up in Section 4), softplus, sigmoid, D_{\\\\theta_{old}}, \\u2026\\n\\nWe clarified the abbreviations for commonly used acronyms, such as Deep Neural Networks and Frechet inception distance, and gave definitions to undefined symbols. \\n\\n> 2. While the primary motivation of the work is claimed to be 'mode collapse', it does not turn out from the submission what mode collapse is.\\n\\nWe have given more detailed explanation of mode collapse, and provided citation that can be consulted for further information regarding its definition. In short, Mode collapse collectively refers to the lack of diversity in generator distribution [1-5]. This phenomena happens even in the simplest case of generating a mixture of Gaussian Distribution (See Fig 22, [1] for example) . Without any notable countermeasure, GANs tend to produce a distribution with less number of modes than the target distribution . \\n\\n> 3. Estimating entropies is a standard practice in statistics and machine learning, with an arsenal of estimators; the motivation of the submission is questionable. \\n\\nIn the revision, we have emphasized how the classical empirical techniques in estimating the entropy are not suited for the training of GANs aimed toward the synthesis of high dimensional distribution. In fact, our method yields much better performance than Energy-based GANs(EGANs) [6], that uses a rather high calibre variational inference based technique to directly estimate the entropy, which was, in their experiment, performed the best among all classical techniques they tested. For more details, please consult the original text. \\n\\nIn many application of GANs, the objective is to synthesize a generative model on high dimensional space, and it is a very difficult problem on its own to rely on classical techniques to estimate the entropy in such high dimensional space. In most study of generative models, the distribution is defined in term of latent variables, and its law is given by G# p where p is some known distribution---or the pushforward of p with a measurable function G with an euclidean range. The most straightforward approach in entropy estimation uses some form of plugin Estimator based on Kernel density estimation and nearest neighbor estimation hat p: \\nestimate = g(\\\\hat p(X_i), X_i \\\\in Data ). \\nFor the real world applications of GANs (e.g image generations), the dimensions of the data space are high. On the CIFAR 10 benchmark, the dimension is 32*32*3, and the dimension of the ImageNet benchmark is high as 64*64*3. Even for Kernel density estimator on of regular density on d dimensional Euclidean space for example, the mean square error is MSE(f*, \\\\hat f) = O(h^4 + 1/(n h^{d})) where h is the bandwidth, n is the number of samples [7,8].\\nThis is to say we need small enough bandwidth with tremendous size of n for large d. On the top of this, for the density of p_g of G\\\\# p is computed through the formula\\nH(p_g) = H(p(z)) + E_p(z)[det(D_z G(z))]. 
\\nIn order to compute E_p(z)[det(D_z G(z))], we need at least O(n d^2) (often d^3 in usual implementation) and this expression does not include the massicve scaler that is dependent on the number of parameters, which again, is often extremely large, and can be in order of millions\\uff08Fig 2 in [9]) . Again, Energy-based GANs(EGANs)[6], that uses a rather high calibre variational inference based technique are not performing too well relative to our approach. \\nAlso, one of the fundamental motivation for the development of GANs techniques is to do away with the precise and explicit density of the target distribution in high dimensional space. For more details of these motives, please consult classical literatures in GANs [1,4,10].\"}",
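The pushforward-entropy formula above holds for an invertible G with matching input and output dimension, and evaluating it directly is only feasible at toy scale, which makes the cost argument concrete. A sketch in PyTorch under those assumptions (the generator, dimension, and sample count are illustrative):

```python
import torch

d = 4                                             # toy latent/output dimension
A = torch.randn(d, d) + d * torch.eye(d)          # well-conditioned linear map
G = lambda z: torch.tanh(z) @ A.T                 # toy invertible generator

def logdet_term(z):
    """log |det D_z G(z)| for one latent sample z of shape (d,)."""
    J = torch.autograd.functional.jacobian(G, z)  # (d, d) Jacobian
    return torch.linalg.slogdet(J)[1]             # log |det J|

# Monte-Carlo estimate of E_{p(z)}[log |det D_z G(z)|]; adding H(p(z))
# gives the entropy of the pushforward G#p. Each sample needs a full
# Jacobian, which is what becomes prohibitive at image dimensions.
zs = torch.randn(256, d)
estimate = torch.stack([logdet_term(z) for z in zs]).mean()
```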
"{\"title\": \"Thank you very much for the suggestions and comment.\", \"comment\": \"Thank you very much for the comment. We believe that the core novelty of our work is in introducing a type of regularization based on a functional gradient. Because we were able to conduct additional experiment in time, we also conducted feature matching as well and confirmed the superiority of DC-regularization over the method (Table 8).\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you very much for the comment. We believe that functional gradient based methods have much room to explore, and it is our hope that many aspects of GANs can be analyzed using this philosophy.\"}",
"{\"title\": \"A good paper\", \"review\": \"The authors make use of the theory of functional gradient, based on optimal transport, to develop a method that can promote the entropy of the generator distribution without directly estimating the entropy itself. Theoretical results are provided as well as necessary experiments to support their technique's outperformance in some data sets. I found that this is an interesting paper, both original ideal and numerical results.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"potentially useful heuristic for GANs with vague maths\", \"review\": \"GANs (generative adversarial network) represent a recently introduced min-max generative modelling scheme with several successful applications. Unfortunately, GANs often show unstable behaviour during the training phase. The authors of the submission propose a functional-gradient type entropy-promoting approach to tackle this problem, as estimating entropy is computationally difficult.\\n\\nWhile the idea of the submission might be useful in some applications, the work is rather vaguely written, it is in draft phase:\\n1. Abbreviations, notations are not defined: GAN, WGAN-GP, DNN, FID (the complete name only shows up in Section 4), softplus, sigmoid, D_{\\\\theta_{old}}, ...\\n2. While the primary motivation of the work is claimed to be 'mode collapse', it does not turn out from the submission what mode collapse is.\\n3. Estimating entropies is a standard practice in statistics and machine learning, with an arsenal of estimators; the motivation of the submission is questionable.\\n4. Differentiation w.r.t. functions (or more generally elements in normed spaces) is a well-defined concept in mathematics, including the notions of Gateaux, Frechet and Hadamard differentiability. It is not clear why the authors neglect these classical concepts, and are talking about 'random functional perturbations', ... It is also unclear where the optimized transformation (T) lives; the authors are trying to differentiate over some function space which is undefined.\\n\\nWhile the idea of the work might be useful in practice, the current submission requires significant revision and work before publication.\\n\\n---\", \"after_paper_revisions\": \"Thank you for the updates. The submission definitely improved. I have changed my score to '6: Marginally above acceptance threshold'; the suggested regularization can be a useful heuristic for the GAN community.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice experimental paper (with theory backing)\", \"review\": \"In this paper, the authors claim that they are able to update the generator better to avoid generator mode collapse and also increase the stability of GANs training by indirectly increasing the entropy of the generator until it matches the entropy of the original data distribution using functional gradient methods.\\n\\nThe paper is interesting and well written. However, there is a lot of work coming out in the field of GANs currently, so I am not able to comment on the novelty of this regularization approach, and I am interested to know how this method performs when compared to other techniques to avoid mode collapse such as feature matching and mini-batch discrimination, etc.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}"
]
} |
|
SklVEnR5K7 | Making Convolutional Networks Shift-Invariant Again | [
"Richard Zhang"
] | Modern convolutional networks are not shift-invariant, despite their convolutional nature: small shifts in the input can cause drastic changes in the internal feature maps and output. In this paper, we isolate the cause -- the downsampling operation in convolutional and pooling layers -- and apply the appropriate signal processing fix -- low-pass filtering before downsampling. This simple architectural modification boosts the shift-equivariance of the internal representations and consequently, shift-invariance of the output. Importantly, this is achieved while maintaining downstream classification performance. In addition, incorporating the inductive bias of shift-invariance largely removes the need for shift-based data augmentation. Lastly, we observe that the modification induces spatially-smoother learned convolutional kernels. Our results suggest that this classical signal processing technique has a place in modern deep networks. | [
"convolutional networks",
"signal processing",
"shift",
"translation",
"invariance",
"equivariance"
] | https://openreview.net/pdf?id=SklVEnR5K7 | https://openreview.net/forum?id=SklVEnR5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gP6ofboE",
"rklZnFS-gN",
"HklzYf-SJ4",
"B1gszqoVkV",
"S1xu-L_41N",
"r1exflw9RX",
"r1lNpJD907",
"B1gzMCIqCm",
"BkeFl6LcRX",
"Hkx4V6w9h7",
"HkexkySc27",
"rJgwq8nFnX",
"S1gnfAzDhQ"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1556323263036,
1544800681429,
1543996026294,
1543973395026,
1543960064155,
1543299080466,
1543299004470,
1543298569643,
1543298289271,
1541205292036,
1541193432229,
1541158543402,
1540988436237
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1440/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1440/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1440/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1440/AnonReviewer2"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Updated paper accepted to ICML 2019\", \"comment\": \"An updated paper has been accepted to ICML 2019. The project and paper is here: https://richzhang.github.io/antialiased-cnns/.\\n\\nThe core of the paper remains the same, but with a major new result on Imagenet classification. We show improvements in both shift-invariance (as expected) and accuracy (surprisingly!), across popular networks - Alexnet, VGG, Resnet, and DenseNet. This result is contrary to popular belief, as expressed in the metareview: \\\"Note also that there exist a fundamental trade-off between shift-invariance plus anti-aliasing (stability) and performance; this being a reason why max-pooling is still preferred over anti-aliasing (better performance versus stability).\\\"\\n\\nI thank the AC for the pointer to the previous work, and address differences in the related work section, copied here for reference: \\\"Mairal et al. (2014) derive a network architecture, motivated by translation invariance, named Convolutional Kernel Networks. While theoretically interesting (Bietti & Mairal, 2017), CKNs perform at lower accuracy than contemporaries, resulting in limited usage. Interestingly, a byproduct of the derivation is a standard Gaussian filter; however, no guidance is provided on its proper integration with existing network components. Instead, we demonstrate practical integration with any strided layer, and empirically show performance increases on a challenging benchmark -- ImageNet classification \\u2013 on widely-used networks.\\\"\\n\\nRegarding novelty, we agree that anti-aliasing and low-pass filtering is obviously not novel. In the updated paper, we begin by saying, \\\"When downsampling a signal, such an image, the textbook solution is to anti-alias by low-pass filtering the signal (Oppenheim et al., 1999; Gonzalez & Woods, 1992).\\\" Our paper's contribution is to demonstrate harmonious integration of low-pass filtering with existing network components.\\n\\nFinally, I thank the ICLR metareviewer and reviewers. I greatly appreciate their informative comments and feedback through the review process!\"}",
"{\"metareview\": \"The reviewers are reasonably positive about this submission although two of them feel the paper is below acceptance threshold. AR1 advocates large scale experiments on ILSVRC2012/Cifar10/Cifar100 and so on. AR3 would like to see more comparisons to similar works and feels that the idea is not that significant. AR2 finds evaluations flawed. On balance, the reviewers find numerous flaws in experimentation that need to be improved.\\n\\nAdditionally, AC is aware that approaches such as 'Convolutional Kernel Networks' by J. Mairal et al. derive a pooling layer which, by its motivation and design, obeys the sampling theorem to attain anti-aliasing. Essentially, for pooling, they obtain a convolution of feature maps with an appropriate Gaussian prior to sampling. Thus, on balance, the idea proposed in this ICLR submission may sound novel but it is not. Ideas such as 'blurring before downsampling' or 'low-pass filter kernels' applied here are simply special cases of anti-aliasing. The authors may also want to read about aliasing in 'Invariance, Stability, and Complexity of Deep Convolutional Representations' to see how to prevent aliasing. On balance, the theory behind this problem is mostly solved even if standard networks overlook this mechanism. Note also that there exist a fundamental trade-off between shift-invariance plus anti-aliasing (stability) and performance; this being a reason why max-pooling is still preferred over anti-aliasing (better performance versus stability). Though, this is nothing new for those who delve into more theoretical papers on CNNs: this is an invite for the authors to go thoroughly first through the relevant literature/numerous prior works on this topic.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Anti-aliasing has been explored before.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Sorry for making it sound that way, ImageNet is not a requirement, I did change my rating from weak reject to weak accept after all. I did this because of the additional experiments. Everything else I wrote are simply suggestions for possible future re-submissions.\"}",
"{\"title\": \"Thank you, but we are somewhat confused\", \"comment\": \"Thank you for reading the rebuttal. While we are happy the reviewer is open to making an adjustment, we admit to being rather perplexed by the additional, previously unmentioned, requirement for ImageNet, which is keeping the reviewer from adjusting the score beyond \\u201cmarginally above\\u201d.\\n\\nThe original review (\\u201cmarginally below\\u201d) made the erroneous implication that wall clock times were not reported. We corrected this misconception in the rebuttal (it was in Table 2 of the original submission). Additionally, we performed all requested experiments -- adversarial attack and densenet -- which corroborated experimental results in the submission.\\n\\nThis additional ImageNet request was not in the original review. While we are happy to conduct it, we are out of time. As our method is derived from \\u201ctextbook\\u201d first-principles [1-4], we can reasonably expect empirical experiments to continue supporting the conclusions in the submission (Section 4.2) and rebuttal period (Appendices A-D).\\n\\n[1] Section 4.6.1: Sampling Reduction by an Integer Factor. Oppenheim, Schafer, Buck. Discrete-Time Signal Processing. 2nd ed. 1999\\n[2] Section 2.4.5: Zooming and Shrinking Digital Images. Gonzalez and Woods. Digital Image Processing. 2nd ed. 1992.\\n[3] Section 14.10.6: Antialiasing in Practice. Foley, van Dam, Feiner, Hughes. Computer Graphics: Principles and Practice. 2nd ed. 1995.\\n[4] Section 3.5.2: Decimation. Szeliski. Computer Vision: Algorithms and Applications. 2010.\"}",
"{\"title\": \"Thank you very much!\", \"comment\": \"The robustness is encouraging and the DesNet results substantiate the claims as well.\\nThe updated manuscript is also easier to read now.\\n\\nI increased my score.\\n\\nTo be clear, for a 7 or higher, I would need to see large scale experiments on multiple architectures and datasets. If the paper is rejected I encourage the authors to apply their proposed approach to ILSVRC2012/Cifar10/Cifar100 with VGG, Wide ResNet and DenseNet. ImageNet training is feasible in a few GPU days and for such a general claim, strong and broad empirical evidence is needed to be truly convincing.\"}",
"{\"title\": \"(Part II) Writing\", \"comment\": \"----- WRITING -----\\nWe address all writing suggestions from the review below.\\n\\n> \\u201cI believe authors should address how this work differs to [1], as it also tests different windowing functions for pooling operators, even though in different tasks.\\u201d\\n\\nThank you for the reference. We add this reference in the final paragraph in related work. The work from [1] provides a systematic evaluation of different blurred-downsampling, similar to our work. However, it operates from the assumption that max-pooling and blurred-downsampling are strictly alternatives, and makes the recommendation to use max-pooling. We show that they are actually compatible! Advantages to max-pooling are kept while incorporating proper blurred-downsampling.\\n\\n> \\u201c(Minor) The flow of section 4.2. can be improved to help readability. The three metrics should be first motivated before their introduction. Metric 2. paragraph - the metric is defined below, not above.\\u201d\\nThank you for the suggestion. We have added a short motivational introduction in 4.2, and clarified the relation between metric 2 from metric 1.\\n\\n> \\u201c(Minor) Explicitly provide the network architecture as [Simonyan14] does not test on CIFAR and cannot use Batch normalisation.\\u201d\\nWe have added the reference in a footnote to the reference implementation, and will release code.\\n\\n> \\u201c(Minor) Section 3.1 - And L-Layer deep *CNN*, H_l x W x C_l -> H_l x W_l x C_l\\u201d\\nThank you for finding the typo. \\n\\n> \\u201c(Minor) Section 3.1. Last paragraph - I would not agree with the statement that in CNNs the shift invariance must necessarily emerge upon shift equivariance. If anything, this may hold only for the last layer of a network without fully connected layers and with average pooling of the classifier output (ResNet/GoogleNet like networks).\\u201d\\n\\nOur claim is that global average pooling of a shift-equivariant extractor results in a shift-invariant extractor. The proof is found in (Azulay and Weiss, 2018), starting from bottom of page 5. We sketch the proof below:\", \"assume\": \"(1) F(Shift(X)) = Shift(F(X)), feature extractor F is shift-equivariant\\n(2) G = GlobAvgPool o F, global average pool after shift-equivariant feature extractor\", \"we_wish_to_show_that\": \"(3) G(Shift(X)) = G(X), feature extractor G is shift-invariant\", \"proof\": \"G(Shift(X))\\n= GlobAvgPool(F(Shift(X)), substitute (2) definition of G\\n= GlobAvgPool(Shift(F(X)), substitute (1) shift-equivariance of F\\n= GlobAvgPool(F(X)), shifting the feature map does not change its average\\n= G(X), recombine using (2) definition of G\\n\\nFrom there, G(X) is a feature vector with no spatial extent. Any subsequent function with G serving as a front-end (even fully connected layers) will maintain shift-invariance, since G(Shift(X)) = G(X), wiping away the effect of any shift.\\n\\nMany networks, such as the VGG13 and DenseNet in our tests use global average pooling, followed by a single linear layer (and softmax) for classification.\"}",
"{\"title\": \"(Part I) Added experiment confirms correct pool/blur ordering; DenseNet experiment corroborates results; clarifications added\", \"comment\": \"We thank the reviewer for the detailed comments. We are happy that the reviewer found the motivation \\u201creally well written\\u201d, the method simple, and the results promising. We have updated the draft. We address all major and minor comments from the review below. In particular, we provide new experiments on switching blurring & pooling, as well as on DenseNet. These results back up the findings in the original submission.\\n\\n----- ADDITIONAL EXPERIMENTS -----\\n> \\u201cAuthors do not address the question what is the correct order of operations for the blurring. E.g. would the method empirically work if blurring was applied before max pooling? Do the operations commute?\\u201d\\n\\nThank you for this suggestion. We have added this experiment in Appendix C and Figure 10. Note that our proposed method applies the signal to the exact signal which is to be downsampled, which has solid theoretical backing in sampling theory (Oppenheim et al., 1999). The operations to not commute (as max-pooling is nonlinear). Switching the ordering separates the blurring from downsampling, only providing \\u201csecond-hand\\u201d blurring.\\n\\nInteresting, some of the worse-performing filters do improve when training without data augmentation. However, for the better-performing filters, performance is reduced. All filters perform worse when training with data augmentation.\\n\\nThese experiments empirically confirm that the proposed PoolBlurDownsample method was the correct order of operations.\\n\\n> \\u201c(Minor) One future direction would be to verify that this approach generalises to larger networks as well. It might be worth to discuss this in the conclusions.\\u201d\\n\\nThank you for the suggestion. In Appendix A and Figure 8, we show the results applied to a more modern DenseNet (Huang et al., 2017) architecture. The results confirm the findings from the VGG architecture. In short:\\n- When training without data augmentation, the proposed technique improves both classification consistency and accuracy over the baseline (as before).\\n- The proposed technique trained without data augmentation outperforms the baseline, even with data augmentation.\\n- When training with data augmentation, the proposed technique improves consistency, and surprisingly, even slightly improves accuracy in this setting.\\n\\nThese results support the general applicability of the method to CNNs.\\n\\n> \\u201c(Minor) It would be interesting to see what would be the performance if the blurring filters were trained as well.\\u201d\\nWe noted this direction in the discussion section of the submission and are currently investigating this interesting direction.\\n\\n----- CLARIFICATIONS -----\\n> \\u201cIt is not clear how many shifts are used for computing the \\\"Random Test Accuracy\\\" and the \\\"Classification Accuracy\\\". Also whether the random shifts are kept constant between evaluated networks and evaluation metrics.\\u201d\\n\\nWe have updated \\u201cRandom Test Accuracy\\u201d to use every shift (all 32x32=1024 positions) for all 10k test images. \\u201cClassification Consistency\\u201d we test classification agreement between 10 randomly shifted pairs for each test image. 
This provides 100k examples total (standard error of ~0.05%-0.1%).\\n\\n> \\\"It is not correct to average test accuracy and test consistency as both measures are different quantities, especially when using them for ranking.\\\"\\n\\nThank you for pointing this out. We agree and have removed the averaging. The two factors -- classification and shift-invariance -- should be evaluated as separate dimensions.\\n\\n> \\u201cThe selection of the filters is rather arbitrary, especially regarding the 1D FIR filters.\\u201d\\n> Minor: \\u201cIt would be useful to add citation for the selected FIR filters.\\u201d\\n\\nIn Appendix E, we clarify the selected filters, which are from common references (textbooks) and toolboxes (scipy.signal Python) and add citations. \\u201cRectangle\\u201d is a simple box filter. \\u201cTriangle\\u201d, and \\u201cBinomial\\u201d (which we renamed from Pascal) can be seen in Table 3.4 from (Szeliski, 2010) textbook [2]. \\u201cBinomial\\u201d was used in Image Pyramids [3]. \\u201cWindow\\u201d [4] and \\u201cLeast Squares\\u201d [5] are more modern filter design methods, implemented in FIR filter design toolboxes.\\n\\n> \\u201cThe separability of these filters should be discussed.\\u201d\\n\\nWe add a discussion of separability to Appendix E. In particular, we show that separability allows added computation to scale linearly, rather than quadratically, with filter size.\\n\\n----- REFERENCES -----\\n[1] Scherer, Dominik, Andreas M\\u00fcller, and Sven Behnke. \\\"Evaluation of pooling operations in convolutional architectures for object recognition.\\\" ICANN 2010.\\n[2] Szeliski. Computer Vision: Algorithms and Applications. 2010.\\n[3] Burt and Adelson, Laplacian Pyramid as a compact image code. IEEE Transactions on Communications. 1983.\\n[4] Oppenheim and Schafer, \\\"Discrete-Time Signal Processing\\\". 2nd ed. 1999.\\n[5] Selesnick. Linear-Phase Fir Filter Design By Least Squares. OpenStax CNX. Aug 9, 2005. http://cnx.org/contents/eb1ecb35-03a9-4610-ba87-41cd771c95f2@7\"}",
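To make the max-pool -> blur -> subsample ordering and the separable binomial filters concrete, here is a minimal PyTorch sketch. This is our own illustration of the idea discussed in this thread, not the authors' released code; the layer name and the default [1, 2, 1] binomial filter are our choices.

```python
import torch
import torch.nn.functional as F

class MaxBlurPool2d(torch.nn.Module):
    """Dense max-pool (stride 1), then blur and subsample in one depthwise conv."""
    def __init__(self, channels, stride=2, filt=(1.0, 2.0, 1.0)):
        super().__init__()
        f = torch.tensor(filt)
        f = f / f.sum()
        # The 2D kernel is the outer product of a 1D binomial filter. A fully
        # separable implementation would apply the 1D filter along each axis in
        # turn, so the added cost grows linearly with filter size; we form the
        # 2D kernel here only for brevity.
        kernel = torch.outer(f, f)[None, None].repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.stride = stride
        self.pad = len(filt) // 2

    def forward(self, x):
        x = F.max_pool2d(x, kernel_size=2, stride=1)   # max first, densely
        # Depthwise (per-channel) blur and subsample in the same convolution.
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=self.pad, groups=x.shape[1])

pool = MaxBlurPool2d(channels=64)
y = pool(torch.randn(1, 64, 32, 32))  # -> torch.Size([1, 64, 16, 16])
```

Blurring first and max-pooling after would only give the "second-hand" blurring described above, since the signal actually being subsampled is then no longer the one that was low-pass filtered.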
"{\"title\": \"Simple proposal, derived from first principles, to fix a fundamental property is important\", \"comment\": \"We thank the reviewer for the comments. We are happy the reviewer found the paper \\u201csimple but novel\\u201d, the analysis \\u201cconvincing\\u201d and experiments \\u201cpromising\\u201d. We first address the review title and clarify the goal of the paper. and then address individual points.\\n\\n> \\u201cproblem addressed does not seems to be interesting and significant\\u201d\\n\\nAdversarial attack and defense is a large area of interest -- [1] has 1646 citations in 5 years, according to Google scholar. Lack of shift-invariance in modern deep networks exposes it to a very simple attack. We add an additional experiment, demonstrating practical use - robustness in presence of a shift-based adversarial attack.\\n\\nBlurring before downsampling is \\u201ctextbook material\\u201d from sampling theory [2], image processing [3], computer graphics [4], and computer vision [5]. Proposing a fix from first-principles for a fundamental low-level problem (with implications on adversarial attacks/defenses) should be important. In the updated draft, we add these references to better clarify the fundamental nature of the proposed fix. \\n\\n> \\u201ctest accuracy on random shifted images of proposed method did not exceed the baseline....consistency is secondary to the test accuracy...\\u201d\\n\\nIn Appendix B, we show how consistency affects test accuracy. We compute classification accuracy, as a function of maximum adversarial shift. A max shift of 2 means the adversary can choose any of the 25 positions within a 5x5 window. For the classifier to \\u201cwin\\u201d, it must classify all positions correctly. More detailed discussion is in Appendix B. In short:\\n- The baseline is very sensitive to the adversary. Our proposed method dramatically decreases sensitivity to the adversary.\\n- Again, our proposed method (with Binomial-7 filter) without augmentation is more robust than the baseline, trained with data augmentation.\\n\\nThese results corroborate the findings in the main paper, and demonstrate a use case: increased robustness to shift-based adversarial attack.\\n\\n> \\u201cshow how accuracy various on shifting distance.\\u201d\\n\\nThank you for the suggestion. In Appendix D, we show how accuracy in the test set varies with shifted distance. The baseline accuracy drops quickly, but the proposed fix maintains classification accuracy across spatial shifts.\\n\\n> \\u201cother spatial transforming/shifting adaptive approaches should be taken into consideration to compare the performance.\\u201d\\n\\nOur paper focuses on thoroughly evaluating and incorporating shift-invariance. A feature extractor should first be robust to shifts in order to be robust to other spatial transforms, such as warps. In Appendix A, we further establish the effectiveness of our technique by testing on the DenseNet architecture.\\n\\n----- CLARIFICATIONS -----\\n> \\u201cAnd it is confused to do average on consistency and test accuracy, which are in different scales, and then compare the overall performance on the averages.\\u201d\\n\\nThank you for pointing this out. We agree and have removed the averaging. 
The two factors -- classification and shift-invariance -- should be evaluated as separate dimensions.\\n\\n> \\u201cIt seems to be more convincing if the \\u2018random\\u2019 test accuracy is acquired by averaging several random shifts on a single image and then do average among images\\u201d\\n\\nWe provide wall-clock analysis in Table 2, showing that our fix adds +8-12% computation. Even evaluating twice would add +100% computation. Our goal is to better preserve shift-equivariance in the network, given roughly the same computation budget, and minimal perturbation to network architecture (as described in Section 2).\\n\\n----- WRITING -----\\n> \\u201c4. There are some minor typos, such as line 3 in Section 3.1 and line 15 in Section 3.2\\u201d\\n\\nThank you for finding the typos. They are fixed in the updated draft.\\n\\n----- REFERENCES -----\\n[1] Szegedy et al. Intriguing properties of neural networks. ArXiv, 2013.\\n[2] Section 4.6.1: Sampling Reduction by an Integer Factor. Oppenheim, Schafer, Buck. Discrete-Time Signal Processing. 2nd ed. 1999\\n[3] Section 2.4.5: Zooming and Shrinking Digital Images. Gonzalez and Woods. Digital Image Processing. 2nd ed. 1992.\\n[4] Section 14.10.6: Antialiasing in Practice. Foley, van Dam, Feiner, Hughes. Computer Graphics: Principles and Practice. 2nd ed. 1995.\\n[5] Section 3.5.2: Decimation. Szeliski. Computer Vision: Algorithms and Applications. 2010.\"}",
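The adversarial-shift evaluation described in this response (the classifier must be correct at every position in a (2k+1) x (2k+1) window) can be sketched as follows. The circular boundary handling and the function names are our assumptions, since the thread does not pin them down.

```python
import itertools
import torch

def wins_against_shift_adversary(model, x, label, max_shift):
    """Return 1 iff `model` classifies every shifted copy of `x` correctly.

    x: (C, H, W) image tensor. A max_shift of 2 gives the adversary all 25
    positions in a 5x5 window, matching the description above.
    """
    offsets = range(-max_shift, max_shift + 1)
    for dy, dx in itertools.product(offsets, offsets):
        shifted = torch.roll(x, shifts=(dy, dx), dims=(-2, -1))
        pred = model(shifted.unsqueeze(0)).argmax(dim=1).item()
        if pred != label:
            return 0  # the adversary found a shift that flips the prediction
    return 1
```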
"{\"title\": \"Suggested Densenet and adversarial experiments corroborate findings; wall clock was included in submission\", \"comment\": \"We thank the reviewer for the detailed comments. We are happy that the reviewer recognized the importance of the problem and the potential relevance of the proposed solution across CNNs.\\n\\nWe have updated the draft, with additional requested experiments in the appendix. We address all major and minor concerns below. In particular, we perform the requested experiments (DenseNet and adversarial attacks), which further corroborate findings in the submission.\\n\\n----- TIMING -----\\n> \\u201cWall-clock times need to be reported for the various blurring kernels and compared to the baselines.\\u201d\\n\\nWe agree that timing is an important consideration. In the submission, wall-clock times were reported in Table 2 and discussed in the last paragraph in Section 4. In summary, for VGG, the largest filter (7x7) adds 12.3% computation.\\n\\n> \\u201cDespite being more expensive, do dilations fix the issue of missing translation equivariance provably and not just approximately\\u201d\\n\\nYes, removing strides and adding dilations, as we described in the end of Section 2, would preserve shift-equivariance. However, this costs immense computation, as each layer needs to be evaluated more densely. For the VGG network, this adds 4x, 16x, 64x, and 256x computation for conv2-conv5 layers, respectively. We added discussion to this point based on your suggestion.\\n\\n----- REQUESTED EXPERIMENTS -----\\n> \\u201cExtend results to a cutting-edge architecture, e.g. DenseNets or Wide ResNets.\\u201d\\n\\nThank you for the suggestion. In Appendix A and Figure 8, we show the results applied to a more modern DenseNet (Huang et al., 2017) architecture. The results confirm the findings from the VGG architecture. In short:\\n- When training without data augmentation, the proposed technique improves both classification consistency and accuracy over the baseline (as before).\\n- The proposed technique trained without data augmentation outperforms the baseline, even with data augmentation.\\n- When training with data augmentation, the proposed technique improves consistency, and surprisingly, even slightly improves accuracy in this setting.\\n\\nThese results help support the general applicability of the method to CNNs.\\n\\n> \\u201cthe authors should attack their augmented network with the translation attack of [Engstrom et al. In Arxiv, 2017.]\\u201d\\n\\nThank you for the suggestion. In the submission, we show that classification accuracy is maintained, while consistency is improved. We thus expect the method to be robust to a shift-based adversary.\\n\\nIn Appendix B and Figure 9, we confirm this hypothesis empirically. We compute classification accuracy, as a function of maximum adversarial shift. A max shift of 2 means the adversary can choose any of the 25 positions within a 5x5 window. For the classifier to \\u201cwin\\u201d, it must classify all positions correctly. More detailed discussion is in Appendix B. In short:\\n- The baseline is very sensitive to the adversary. 
Our proposed method dramatically decreases sensitivity to the adversary.\\n- Again, our proposed method (with Binomial-7 filter) without augmentation is more robust than the baseline, trained with data augmentation.\\n\\nThese results corroborate the findings in the main paper, and demonstrate a use case: increased robustness to shift-based adversarial attack.\\n\\n----- WRITING -----\\nRegarding the writing, we made minor edits to the main paper in the updated draft. We are happy the paper\\u2019s main ideas were \\u201ceasy to follow\\u201d, and kept the overall structure. Based on your suggestion, we reduced the caption lengths. We are continuing to improve the paper.\\n\\n> \\u201c(Minor) Strange notation e.g. in equation 1. Why not write: x+\\\\delta x in the argument of the function instead of \\\"Shift\\\". The current notation seems unnecessarily informal.\\u201d\\n\\nThe Shift function is defined in Equation 4. Defining a shift function once enables reuse in six other locations (rather than using h-\\\\delta h, w-\\\\delta w indexing repeatedly), so we are inclined to keep it for now. Based on your comment, we added a note before Eqn 1, to better orient the reader.\\n\\n> \\u201cFigure 4: show scale and color bar.\\u201d\\n\\nThank you for the suggestion. We will add a colorbar in an updated version and are continuing to improve the paper.\"}",
"{\"title\": \"Making CNNs translation equivariant again, potentially important line of work with multiple loose ends\", \"review\": \"Summary\\n\\nFrom a theoretical point of view, one might be tempted to believe that deep CNNs are translation equivariant and their predictions are translation invariant. In practice, this is not necessarily true. The authors propose to augment standard deep CNNs with low-pass filters to reduce this problem. The results seem promising for an older VGG architecture.\\n\\nQuality\\n\\nThe paper is very verbose, the figures and captions are tedious to read, the mathematical notation seems strange as well, making the writing more concise is highly encouraged. The main ideas are easy to follow and the choice of experiments seems fine. \\n\\nSignificance\\n\\nThis is the first empirical work trying to fix the issue of non-translation equivariance in convolutional neural networks. The conclusions of this work are potentially relevant for a wide audience of CNN practitioners.\\n\\nMain Concerns\\n\\nTo show that all claims of the paper do indeed hold, the authors should attack their augmented network with the translation attack of [1]. As robustness to this type of transformations is one of the main goals, it should be tested if it was achieved. The attack can be found in some open source frameworks [2] and should be easy to apply.\\n\\nWall-clock times need to be reported for the various blurring kernels and compared to the baselines.\\n\\nExtend results to a cutting-edge architecture, e.g. DenseNets or Wide ResNets. If this result is not provided the significance of the work is not clear.\\n\\nDespite being more expensive, do dilations fix the issue of missing translation equivariance provably and not just approximately like the low-pass filtering approach proposed here? This should be discussed and a comparison in terms of wall-clock time would be great as well.\\n\\nMinor\\n\\n- Strange notation e.g. in equation 1. Why not write: x+\\\\delta x in the argument of the function instead of \\\"Shift\\\". The current notation seems unnecessarily informal.\\n- Figure 4: show scale and color bar.\\n\\n[1] Engstrom et al., \\\"A rotation and a translation suffice: Fooling cnns with simple transformations.\\\"\\n[2] https://foolbox.readthedocs.io/en/latest/modules/attacks/decision.html#foolbox.attacks.SpatialAttack\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A paper with technical details and analysis, but the problem addressed does not seems to be interesting and significant\", \"review\": \"This paper analyzed on the core factor that make CNNs fail to hold shift-invariance, the naive downsampling in pooling. And based on that the paper proposed the modified pooling operation by introducing a low-pass filter which endows a shift-equivariance in the convolution features and consequently the shift-invariance of CNNs.\", \"pros\": \"1.\\tThe paper proposed a simple but novel approach to make CNNs shift-invariant following the traditional signal processing principle.\\n2.\\tThis work gave convincing analysis (from both theoretical illustrations and experimental visualizations) on the problem of original pooling and the effectiveness of the proposed blur kernels.\\n3.\\tThe experiment gave some promising results. Without augmentation, the proposed method shows higher consistency to the random shifts.\", \"cons\": \"1.\\tWhen cooperating with augmentation, the test accuracy on random shifted images of proposed method did not exceed the baseline. Although the consistency is higher, it is secondary to the test accuracy of random shifted data. And it is confused to do average on consistency and test accuracy, which are in different scales, and then compare the overall performance on the averages. \\n2.\\tIt seems to be more convincing if the \\u2018random\\u2019 test accuracy is acquired by averaging several random shifts on a single image and then do average among images, as well as to show how accuracy various on shifting distance.\\n3.\\tSome other spatial transforming/shifting adaptive approaches should be taken into consideration to compare the performance.\\n4.\\tThere are some minor typos, such as line 3 in Section 3.1 and line 15 in Section 3.2\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach in its simplicity with flaws in evaluation\", \"review\": \"This work shows that adding a simple blurring into max pooling layers can address issues of image classification instability under small image shifts. In general this work presents a simple and easy to implement solution to a common problem of CNNs and even though it lacks more thorough theoretical analysis of this problem from the signal processing perspective (such as minimal size of the blurring kernel for fulfilling the Nyquist-Shannon sampling theorem), it seems to provide ample empirical evidence.\", \"pros\": [\"The introduction and motivation is really well written and Figure 3 provides a clear visualisation main max pooling operator issues.\", \"The proposed method is really simple and shows promising results on the CIFAR dataset. With random shifts, authors had to tackle cropping with circular shifts. As it can cause artifacts in the data, authors also provide baseline performances on the original data (used for both training and testing).\", \"Authors provide a thorough evaluation, ranging from comparing hidden representations to defining consistency metrics of the classified classes.\", \"This work is lacking in the experimental section due to some missing details and few inconsistencies. I believe the most of my concerns can be relatively easily fixed/clarified in an update of this submission.\", \"Major issues, which if fixed would improve the rating:\", \"It is not correct to average test accuracy and test consistency as both measures are different quantities, especially when using them for ranking. The difference between accuracy of different methods are considerably smaller than differences in the classification consistency.\", \"It is not clear how many shifts are used for computing the \\\"Random Test Accuracy\\\" and the \\\"Classification Accuracy\\\". Also whether the random shifts are kept constant between evaluated networks and evaluation metrics.\", \"Authors do not address the question what is the correct order of operations for the blurring. E.g. would the method empirically work if blurring was applied before max pooling? Do the operations commute?\", \"The selection of the filters is rather arbitrary, especially regarding the 1D FIR filters. The separability of these filters should be discussed.\", \"I believe authors should address how this work differs to [1], as it also tests different windowing functions for pooling operators, even though in different tasks.\", \"Minor issues, which would be nice to fix however which do not influence my rating:\", \"Section 3.1 - And L-Layer deep *CNN*, H_l x W x C_l -> H_l x W_l x C_l\", \"Section 3.1. Last paragraph - I would not agree with the statement that in CNNs the shift invariance must necessarily emerge upon shift equivariance. If anything, this may hold only for the last layer of a network without fully connected layers and with average pooling of the classifier output (ResNet/GoogleNet like networks).\", \"Explicitly provide the network architecture as [Simonyan14] does not test on CIFAR and cannot use Batch normalisation.\", \"It would be useful to add citation for the selected FIR filters.\", \"The flow of section 4.2. can be improved to help readability. The three metrics should be first motivated before their introduction. Metric 2. 
paragraph - the metric is defined below, not above.\", \"It would be interesting to see what would be the performance if the blurring filters were trained as well (given some sensible initialisation).\", \"One future direction would be to verify that this approach generalises to larger networks as well. It might be worth to discuss this in the conclusions.\", \"[1] Scherer, Dominik, Andreas M\\u00fcller, and Sven Behnke. \\\"Evaluation of pooling operations in convolutional architectures for object recognition.\\\" Artificial Neural Networks\\u2013ICANN 2010. Springer, Berlin, Heidelberg, 2010. 92-101.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"I like the idea of this paper and I like to test the BlurDownsample for other purposes like for bottleneck autoencoders. I was wondering if the authors could prepare an implementation of the proposed BlurDownsample layer into one of the existing frameworks, e.g. Tensorflow.\", \"one_question\": \"In Table 1, you evaluated 18 different filters and report that the Filter = [1,2,3,2,1] and the Filter = [1,6, 15, 20, 15, 6, 1] outcome the best results. So, it seems to me that you had a search on the different combinations for the Blurring filter to find a filter that gives you the best Test accuracy. So, is this the right technique or you should find the best filter through the cross-validation on training dataset? \\nAnd more importantly, what if I use a convolutional layer which keep the dimensions of the input data same, and let the model finds the best filter through backpropagation, instead of using your Blurring layer with a fixed filter map? I mean, it seems to me your proposed layer is just a specific type of a convolutional layer, with a fixed filter map.\", \"some_minor_comments\": [\"\\\"An L-layer deep can be...\\\" --> \\\"An -L-layers deep neural network can be..\\\"\", \"\\\"Modern convolutional networks are not shift-invariant...\\\". This is not a proper beginning for the abstract. First, what do you mean by \\\"Modern\\\"?. Second, they are actually partially shift-invariant (Modulo-N as you shown in figure 2), and not fully shift-invariant.\", \"This paper deserves a more relevant title.\"], \"title\": \"What is the difference between your blurring and a convolutional layer?\"}"
]
} |
|
r1g4E3C9t7 | Characterizing Audio Adversarial Examples Using Temporal Dependency | [
"Zhuolin Yang",
"Bo Li",
"Pin-Yu Chen",
"Dawn Song"
] | Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potentials towards mitigating adversarial inputs. In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminate power against adversarial examples. Tested on the automatic speech recognition (ASR) tasks and three recent audio adversarial attacks, we find that (i) input transformation developed from image adversarial defense provides limited robustness improvement and is subtle to advanced attacks; (ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to adaptive attacks considered in our experiments. Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights in exploiting domain-specific data properties to mitigate negative effects of adversarial examples. | [
"audio adversarial example",
"mitigation",
"detection",
"machine learning"
] | https://openreview.net/pdf?id=r1g4E3C9t7 | https://openreview.net/forum?id=r1g4E3C9t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lBqwjWxN",
"HyxdLrxcCQ",
"HJeScNx9CX",
"rJejAXe907",
"BJgFHzg90X",
"ryx9MuhU6Q",
"SJgCwLfxa7",
"SJxUvUaK27",
"rke8d0QE3m",
"BJxqwKRJ2m",
"SklHTV9Aim",
"rJe03DXAiX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544824716770,
1543271759752,
1543271565008,
1543271378973,
1543270977500,
1542010897786,
1541576293947,
1541162589937,
1540796014451,
1540512097526,
1540428989398,
1540401078262
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1439/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1439/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1439/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1439/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1439/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors present a study characterizing adversarial examples in the audio domain. They highlight the importance of temporal dependency when defining defense against adversarial attacks.\\n\\nStrengths\\n- The work presents an interesting analysis of properties of audio adversarial examples, and contrasts it with those in vision literature.\\n- Proposes a novel defense mechanism that is based on the idea of temporal dependency.\\n\\nWeaknesses\\n- The technique identifies adversarial examples but is not able to make the correct prediction.\\n- The reviewers raised issue around clarity, but the authors took the effort to improve the section during the revision process. \\n\\nThe reviewers agree that the contribution is significant and useful for the community. There are still some concerns about clarity, which the authors should consider improving in the final version. Overall, the paper received positive reviews and therefore, is recommended to be accepted to the conference.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting findings about audio adversarial examples\"}",
"{\"title\": \"Our response to Reviewer 1\", \"comment\": \"Thanks for your insightful comments.\", \"q\": \"Also, I don't understand the statement in 4.2: \\\"We report that the autoencoder works fine for transforming benign instances (57.6% WER in Common Voice compared to 27.5%)\\\": given that it's not an attack, the WER should be the same with and without transform, as we don't want the transform to affect non-adversarial examples ?\", \"a\": \"We agree with the reviewer that in this setting an \\u201cideal\\u201d autoencoder would not affect the performance of benign examples and will mitigative the negative effects of adversarial examples. However, in our experiments, we were not able to find such an ideal autoencoder.\\nGiven the reconstruction nature of autoencoder based on the training data, here we aim to do an ablation study to make sure that the applied transformation will not affect the translation results of benign instance too much. And there appears to be a tradeoff between accuracy and robustness.\"}",
"{\"title\": \"Our response to Reviewer 2\", \"comment\": \"Thanks for your constructive comments.\", \"q\": \"prefix of length k\", \"a\": \"Thanks for the precious suggestion and we modified the name as you suggested in the revised version.\"}",
"{\"title\": \"Our response to Review 4\", \"comment\": \"We appreciate your insightful comments and feel sorry about hard following in Section 4. Here are some response to questions you concerned and we\\u2019ve uploaded new version of our paper with clearer structure.\", \"q\": \"writing in Section 4\", \"a\": \"We apologize that we did not put enough efforts in presenting the experimental results in Section 4. Based on the review comments, we have reorganized and revised Section 4 to make the presentation clearer, including adding new tables (Tables 1 & 4) that highlight the overall structure of our attack & defense / detection.\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their valuable comments and suggestions. We are happy to learn that the contributions of this are well acknowledged by all reviewers. Meanwhile, we are sorry that the reviewers feel the experiment part (Section 4) is hard to follow. Based on the review comments, we have reorganized and revised Section 4 to make the presentation clearer, including adding new tables (Tables 1 & 4) that highlight the experiment settings and the overall structure of the considered attack & defense / detection.\\n\\nSpecifically, we made the following updates to our revision:\\n1. We rewrite Section 4 to make the structure more clear. Previously, since there are too many defense and attack methods, some of which requires slightly different evaluation metrics, Section 4 can be confusing as pointed by reviewers. In our revision, we added Table 1 and Table 4 to illustrate the defense and attack methods we evaluated, as well as the corresponding evaluation metrics and brief result summary.\\n2. We added additional experiments to show that even when K_A and K_D are both random, our proposed TD method can still detect different attacks with high AUC.\\n3. We also added additional experiments in Appendix Table A7 according to Review 4\\u2019s comments to show that even when k_A is random and k_D equals to \\u00bd , \\u2154, \\u00be, or a random number chosen from [0.2, 0.8], our proposed TD method can still detect different attacks with high AUC.\\n\\nPlease don\\u2019t hesitate to let us know if you have any additional comments.\"}",
"{\"title\": \"Interesting findings but hard to fully understand the experiments.\", \"review\": \"This paper proposed a study on audio adversarial examples and conclude the input transformation-based defenses do not work very well on the audio domain, especially for adaptive attacks. They also point out the importance of temporal dependency in designing defenses which is specific for the audio domain. This observation is very interesting and inspiring as temporal dependency is an important character that should be paid attention to in the field of audio adversarial examples. They also design some adaptive attacks to the defense based on temporal dependency but either fail to attack the system or can be detected by the defense.\\n\\nBased on the results in Table S7, it seems like being aware of the parameter k when designing attacks are very helpful for reducing the AUC score. My question is if the attacker uses the random sample K_A to generate the adversarial examples, then how the performance would be. Another limitation of this work is that the proposed defense can differentiate the adversarial examples to some extent, but the ASR is not able to make a right prediction for adversarial examples. In addition, the writing of Section 4 is not very clear and easy to follow.\\n\\nIn all, this paper proposed some interesting findings and point out a very important direction for audio adversarial examples. If the author can improve the writing in experiments and answer the above questions, I would support for the acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Important problem, reasonable evaluation, hard to follow.\", \"review\": \"This paper presents a study of the problem of generating adversarial examples for speech processing systems. Two versions of this problem are considered: attacks against audio classification and against text to speech. The authors first study a the practice of input transformations as means of defense against adversarial examples. To do so, they evaluate three recent adversarial attacks on audio classification and TTS models trained on several datasets. It is found that input transformations have limited utility against adaptive attacks. Moreover, a novel type of defense is developed in which the prefix (of some fixed length) of the audio input is converted to text and compared with the prefix of the text output of the entire input, flagging the input as adversarial if sufficient mismatch is detected. It is found that this method is robust against a number of attacks.\\n\\nThis paper tackles a relevant problem and presents some surprising (the robustness of the prefix method) as well as some not surprising results. The evaluation has reasonable enough breadth to give the conclusions credibility. \\n\\nMy main complaint is that the exposition is somewhat hard to follow at places, especially in section 4. It is hard to keep track of which attack is applied to which scenario and what the conclusions were. Perhaps this could be summarized in some kind of high-level table. It would also be greatly beneficial if the attacks are briefly summarized somewhere. E.g., without following the references, it is completely unclear what is the \\\"Commander Song\\\" setting and what is it important. \\nFinally, I would advise the authors to not use the term \\\"first k portion\\\". This made understanding their proposed defense much harder than it needed to be. Perhaps \\\"prefix of length k\\\" or something along these lines would be easier to follow. \\n\\nIn summary, if the authors commit to improving the clarity of the paper, I would be willing to support its acceptance by virtue of the breadth of the investigation and the importance of the problem.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper investigates adversarial examples for audio data. The standard defense techniques proposed for images are studied in the context of audio. It is shown that these techniques are somewhat robust to adversarial attacks, but fail against adaptive attacks. A method exploiting the temporal dependencies of the data is then presented and shown to be robust to adversarial examples and to adaptive attacks.\\n\\nThe paper addresses an important issue, and the two main findings of the paper, the transformation methods used in computer vision are not useful against audio adversarial example and using temporal dependencies improves the defense capability are significant. The proposed TD method is novel.\\n\\nThe first part of the paper is easy to read (Section 1-3), but Section 4 is hard to follow, for the following reasons:\\n* Section 4.1 presents the metrics used in the evaluation, which is nice. But in the following subsections, other metrics are used: effectiveness ratio, detection rate and relative perturbation. They should be clearly defined in 4.1, and the authors should discuss why they used these metrics.\\n* Section 4.2 should be reorganized as it is hard to follow: there are three types of attack, so one subsection per attack should make the section clearer.\\n* It's not always clear what each attack is doing and why it is used. I suggest the authors to have a separate subsection with the description of each attack and the motivation of why it is used.\\n\\nBecause of the above, it's hard to clearly assess the performance of each method for each attack, it would be better to have a Table that summarizes the results for the transformation methods. Also, I don't understand the statement in 4.2: \\\"We report that the autoencoder works fine for transforming benign instances (57.6% WER in Common Voice compared to 27.5%)\\\": given that it's not an attack, the PER should be the same with and without transform, as we don't want the transform to affect non-adversarial examples ? Please clarify that.\\nThe experiments on the proposed TD method are clear enough to show the viability of the approach.\\n\\nOverall, the findings of this paper are significant and it is good step towards audio adversarial examples defense. But the experimental part is hard to follow and does not bring a clear picture. I am still willing to accept the paper if the authors improve and clarify Section 4.\", \"revision_after_rebuttal\": \"The new version is definitely clearer and easier to read, hence I support the paper for acceptance and change my rating to 7.\", \"there_are_still_minor_improvements_that_can_be_done_in_section_4_to_improve_the_overall_clarity\": [\"About the metrics, the \\\"Average attack success rate\\\" and the \\\"Target command recognition rate\\\" should be clearly defined, probably under the description of the attack methods.\", \"The Adaptive attack approach could be introduced unter \\\"Attack methods\\\" in 4.1.\", \"Table 4 is not easy to read, the authors should improve it.\", \"The first paragraph in Section 4 (\\\"The presentation flows ...\\\") is very interesting, but almost reads like a conclusion, so maybe the authors could move that to the end of Section 4 or to Section 5.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"reply to \\\"example for TEMPORAL DEPENDENCY BASED METHOD\\\"\", \"comment\": \"Thank you for the interesting question.\\nFirst, in this work we aim to detect the state-of-the-art attacks such as [Carlini et al], and therefore we obtain their adversarial examples online directly and perform detection for the purpose of fair comparison.\\nIn addition, we conduct a new set of experiments these days based on your suggestion.\\nIn particular, we set the adversarial target as just modifying one single word, such as it -> he, morning -> evening, open -> close, and perform the TD detection against them (we list some examples in the end of this answer). Overall, for 50 samples, if we choose a single k, the AUC based on WER and CER score are around 0.65, and LCP 0.7.\\nWe also try to perform TD based on ensemble of k as k = (3/8, 4/8, 5/8, 6/8, 7/8} and select the maximum score among WER / CER / LCP for each k, which leads to AUC =0.832.\\nThis shows that TD still performs effectively for detecting such \\u201csingle word\\u201d adversarial audio, though less powerful than detecting those with larger perturbation. \\nThe reason is obvious by looking at the examples below. In each example, row 1 shows the translation result of the first k portion (TD), row 2 shows the \\u201csingle word\\u201d adversarial target (Adv), while row 3 denotes the original benign translation (Benign).\\nWe can see that the perturbation of such \\u201csingle word attack\\u201d will only introduce inconsistency for the \\u201ctarget word\\u201d, and therefore reduce the gap between adversarial and benign instances. Though harder to detect, such attacks limit the adversary\\u2019s attack ability in practice. If we want to detect them more effectively, better similarity metrics may be helpful, and we will explore such metrics in our future work.\", \"examples\": \"\", \"td\": \"the [erfficer] as well as the young ladies te\", \"adv\": \"the [officer] as well as the young ladies decparated it\", \"benign\": \"the [servants] as well as the young ladies decparated it\"}",
"{\"comment\": \"Hi, for TEMPORAL DEPENDENCY BASED METHOD, you only show the example that perturbs a sentence to a totally different one without any same words. However, if the adversarial attack just changing one or two keywords in the sentence, will your method still be effective? For example, perturb from \\\"Alex, call Tom and open the front door\\\" to \\\"Alex, call Tome and open the back door\\\".\", \"title\": \"example for TEMPORAL DEPENDENCY BASED METHOD\"}",
"{\"title\": \"Reply to \\\"Attack cannot succeed\\\"\", \"comment\": \"Thank you for the valuable comments.\\nYes, we indeed allow the distortion to be the largest possible in practice, which is the amplitude of the original audio waveform. If the distortion is larger than the original audio waveform, the benign audio will be totally covered by the adversarial one and hence the adversarial attack will become trivial but meaningless.\\n\\nSuch rational setting for maximum distortion is also mentioned in (Carlini et al. 2018), and we will add corresponding discussion in our updated version as well. Thank you for pointing this out.\"}",
"{\"comment\": \"You write that the adaptive attack \\\"cannot succeed\\\". I wonder if you have tried verifying that the attack does eventually succeed if you allow larger distortions as suggested by (Athalye et al. 2018) to verify the attack is working as expected.\", \"title\": \"\\\"Attack cannot succeed\\\"\"}"
]
} |
|
r1zmVhCqKm | Text Infilling | [
"Wanrong Zhu",
"Zhiting Hu",
"Eric P. Xing"
] | Recent years have seen remarkable progress in text generation in different contexts, including the most common setting of generating text from scratch, the increasingly popular paradigm of retrieval and editing, and others. Text infilling, which fills missing text portions of a sentence or paragraph, also has numerous uses in real life. Previous work has focused on restricted settings, by either assuming a single word per missing portion, or limiting the single missing portion to the end of the text. This paper studies the general task of text infilling, where the input text can have an arbitrary number of portions to be filled, each of which may require an arbitrary unknown number of tokens.
We develop a self-attention model with segment-aware position encoding for precise global context modeling.
We further create a variety of supervised data by masking out text in different domains with varying missing ratios and mask strategies. Extensive experiments show the proposed model performs significantly better than other methods, and generates meaningful text patches. | [
"text generation",
"text infilling",
"self attention",
"sequence to sequence"
] | https://openreview.net/pdf?id=r1zmVhCqKm | https://openreview.net/forum?id=r1zmVhCqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Sylig4qxl4",
"ryxzxgQc37",
"HygOvpY_2X",
"SklOXypTjm"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544754163422,
1541185513887,
1541082464454,
1540374304471
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1438/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1438/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1438/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1438/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"although the problem of text infilling itself is interesting, all the reviewers were not certain about the extent of experiments and how they shed light on whether, how and why the proposed approach is better than existing approaches.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"rejection\"}",
"{\"title\": \"The authors present an interesting new conditional generation task, but the paper lacks critical implementation details and the experimental setting is too restricted to fully support the claims.\", \"review\": \"The paper proposes a setting for evaluation of a text infilling task, where a system needs to fill in the blanks in a provided incomplete sentences. The authors select sentences from three different sources, Yahoo Reviews, fairy tales, and NBA scripts, and blank out words with varying strategies, ranging from taking out prepositions and articles to removing all but two anchor words from a sentence. On this data, they compare the performances of a GAN model, Recurrent Seq2seq model with attention, and Transformer model in terms of BLEU, perplexity, and human evaluation.\\n\\nThe setting is certainly interesting, and the various data creation strategies are reasonable, but the paper suffers from two main flaws. First, the size of the data set is far from sufficient. Unless the authors are trying to show that the transformer is more data-efficient (which is doubtful), the dataset needs to be much larger than the 1M token it appears to be now. The size of the vocabularies is also far from being representative of any real world setting.\\n\\nMore important however is the fact that the authors fail to describe there baseline systems in any details. What are the discriminator and generator used in the GAN? What kind of RNN is used in Seq2seq? What size? Why not use a transformer seq2seq? How exactly is the data fed in / how does the model know which blank it's generating? It would be absolutely impossible for anyone to reproduce the results presented in the paper.\\n\\nThere are some other problems with the presentation, including the fact that contrary to what is suggested in the introduction, the model seems to have access to the ground truth size of the blank (since positional encodings are given), making it all but useless in a real world application setting, but it is really difficult to evaluate the proposed task and the authors' conclusions without a much more detailed description of the experimental setting.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"questions about experiments\", \"review\": \"Summary\\nThis paper proposes an approach to fill in gaps in sentences. While usually generation assumes a provided left context, this paper generalizes generation to any missing gap of any number of words.\\nThe method is based on Transformer which does attention on all the available context and generates in a left to right manner.\\nThe authors validate their approach on rather small scale datasets. I am unclear about several details of their empirical validation.\", \"relevance\": \"this work is on text generation, which is very relevant to this conference.\", \"clarity\": \"the description of the method is very clear, less so the description of the empirical validation.\", \"novelty\": \"The method per se is not novel, but the combination of method and application is.\", \"empirical_validation\": \"The empirical validation is not very clear and potentially weak.\\nFirst, I did not understand the baselines the authors considered: seq2seq and GAN. There are LOTS of variants of these methods and the references cited by the authors do not immediately apply to filling-in tasks. The authors should explain in detail what these baselines are and how they work. \\nSecond, MaskGAN ICLR 2018 was proposed to solve the filling task and code is publicly available by the authors of that paper. A direct comparison to MaskGAN seems a must.\\nThird, there are lots of details that are unclear, such as : how are segments defined in practice? the ranking metric is not clearly defined (pair-wise ranking or ranking of N possible completion?). Could the authors be a little bit more formal?\\nSpeaking of metrics, why don't the authors considered precision at K for the evaluation of small gaps?\\nFourth, the datasets considered, in particular the Grimm's dataset, is very small. There are only 16K training sentences and the vocabulary size is only 7K. How big is the model? How comes it does not overfit to such a small dataset? Did the authors measure overlap between training and test sets? \\n\\nMore general comments\\nThe beauty of the proposed approach is its simplicity. However, the proposed model feels not satisfactory as generation proceeds left to right, while the rightmost and the leftmost missing word in the gap should be treated as equal citizens.\", \"minor_note\": \"positional embeddings are useful also with convolutional models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lack of novelty\", \"review\": \"Pros:\\nThis paper targets an interesting and important task, i.e. general text filling, where the incomplete sentence can have arbitrary number of blanks and each blank can have arbitrary number of missing words. Convincing experiments are conducted both qualitatively and quantitatively.\", \"cons\": \"1. Lack of novelty in the proposed method. Specifically, such impression mainly comes from the writing in Sec 3.2. When discussing details of the proposed method, authors keep referring to [A], indicating heavily that authors are applying an existing method, i.e. yet another application of [A]. This limits the novelty. On the other hand, this also limits the readability for anyone that not familiar with [A]. Moreover, this prevents authors discuss the motivation behinds their architectural choices. Whether [A] is the optimal choice for this task, and can there be alternative options for its components. For example, could we use a rnn to encode x_template_i and use the encoding as a condition to fill s_i? \\n\\n[A] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n2. Discussion in Sec 3.2 and Figure 2 are not clear enough. I didn't get a picture of how the proposed method interacts with requirements of the task. For example, in sec 3.2.3, what represent queries, keys and values respectively are unclear. And authors mention \\\"template-decoder attention\\\" layers, where I didn't find in Figure 2.\\n\\n3. Is not very straightforward that how the baselines Seq2Seq and GAN are applied to this task, where necessary information is missed in the experiment section.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
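The abstract's "segment-aware position encoding" is not spelled out in this record; one plausible reading, encoding each token by its segment index plus its offset within the segment, can be sketched as follows. The class name, the embedding sizes, the sum-combination, and the index assignment in the usage example are all our assumptions, not the paper's implementation.

```python
import torch

class SegmentAwarePositionEncoding(torch.nn.Module):
    """Embed (segment id, offset within segment) for each token.

    A hypothetical sketch of the idea, not the paper's actual encoding.
    """
    def __init__(self, d_model, max_segments=16, max_offset=512):
        super().__init__()
        self.seg_emb = torch.nn.Embedding(max_segments, d_model)
        self.off_emb = torch.nn.Embedding(max_offset, d_model)

    def forward(self, segment_ids, offsets):
        # segment_ids, offsets: (batch, seq_len) integer tensors.
        return self.seg_emb(segment_ids) + self.off_emb(offsets)

# Example: a template with two blanks yields alternating text/blank segments;
# offsets restart at every segment boundary (index choices are hypothetical).
seg = torch.tensor([[0, 0, 1, 2, 3, 4]])
off = torch.tensor([[0, 1, 0, 0, 0, 0]])
pe = SegmentAwarePositionEncoding(d_model=8)
print(pe(seg, off).shape)  # torch.Size([1, 6, 8])
```

Such a two-part index would let the decoder locate itself within a blank of unknown final length, which is what distinguishes this task from left-to-right generation with absolute positions.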
|
H1xQVn09FX | GANSynth: Adversarial Neural Audio Synthesis | [
"Jesse Engel",
"Kumar Krishna Agrawal",
"Shuo Chen",
"Ishaan Gulrajani",
"Chris Donahue",
"Adam Roberts"
] | Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure, and suffer from slow iterative sampling, while Generative Adversarial Networks (GANs) have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.
| [
"GAN",
"Audio",
"WaveNet",
"NSynth",
"Music"
] | https://openreview.net/pdf?id=H1xQVn09FX | https://openreview.net/forum?id=H1xQVn09FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJx_lyzelV",
"r1xjgxknaX",
"rJxQcyy26m",
"S1xnyJyhpm",
"r1xhvRAoTX",
"Skgsl-CCnm",
"rke0sgTc2X",
"BkgZIiNcn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544720111785,
1542348786600,
1542348683090,
1542348515576,
1542348387768,
1541492978843,
1541226661756,
1541192520755
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1437/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1437/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1437/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1437/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1437/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1437/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1437/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1437/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- novel approach to audio synthesis\\n- strong qualitative and quantitative results\\n- extensive evaluation\\n \\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- small grammatical issues (mostly resolved in the revision).\\n \\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nNo major points of contention.\\n \\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nThe reviewers reached a consensus that the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel approach with good results shown by extensive evaluation\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your time and expertise in your review, we've addressed the key points below:\\n\\n> \\u201c...what do the authors see as 'high' resolution vis a vis audio signals?\\u201d\\n\\nIn the context of these audio datasets, we use \\u201chigh\\u201d resolution to refer more to the dimensionality of the signal to model with a single latent vector, rather than the temporal resolution of the audio. The spectral \\u201cimages\\u201d that GANSynth models, have 1024 frequencies, 128 timesteps, and 2 channels, [1024, 128, 2], which is roughly equivalent to a [295, 295, 3] RGB image. This puts the task comparable to some of the higher-resolution GANs for images.\\n\\n> \\u201cI am curious if we can adapt these ideas for recurrent generators as might appear in TTS problems.\\u201c\\n\\nWe agree that would be an interesting development. Recurrent generators, and even discriminators, would allow for variable-length sequences and variable-length conditioning as is common in speech synthesis or music generation beyond single notes. Our initial experiments at using recurring generators were not very successful, so we opted to adopt a better tested architecture for this study, but this is definitely still an area ripe for exploration.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your time and insight in your review. We've incorporated changes to the paper and respond to your main points below:\\n\\n> \\u201cWhy didn't you train a WaveNet on the high-resolution instantaneous frequency representations?\\u201d\\n\\nThat\\u2019s an interesting avenue of research to explore. We trained WaveNets on the raw audio waveforms to provide strong and proven baseline models to compare against. Generating spectra with WaveNets is relatively unexplored and complicated by the high dimensionality at each timestep (number of frequencies * 2), each of which would have to be quantized in a traditional autoregressive treatment. It\\u2019s quite possible that 2 dimensional convolutions and autoregression could help overcome this, but then the model would most resemble pixelCNN and be far from a proven audio generation method for a strong baseline.\\n\\n> \\u201cI'm still not clear on unrolled phase which is central to this work. If you can, spend more time explaining this in detail, maybe with examples / diagrams? In figure 1, in unrolled phase, why is time in reverse?\\u201d\\n\\nApologies for the confusion. To help clarify, we\\u2019ve renamed the \\u201cunrolled\\u201d phase as \\u201cunwrapped\\u201d throughout the paper, which is better alignment to standards in the literature and popular software packages such as Matlab and Numpy (for example https://www.mathworks.com/help/dsp/ref/unwrap.html). We have also added text further describing figure 1 (2nd to last paragraph of introduction) to help explain unwrapping to be the process of adding 2*Pi to the wrapped phase whenever it crosses a phase discontinuity such as to recover the monotonically increasing phase. The time derivative of this unwrapped phase is then the radial instantaneous frequency.\\n\\n> \\u201cFigure 1 & 2: label the x-axis as time. Makes it a lot easier to understand.\\n\\nThank you for the helpful pointer. We\\u2019ve added time axis labels to the figures and have also labeled the interpolation amounts for the interpolation figure.\\n\\n> \\u201csentence before sec 2.2, and other small grammatical mistakes. Reread every sentence carefully for grammar.\\u201d\\n\\nWe have read through the paper several times to revise grammatical mistakes including the sentence you highlighted.\\n\\n> \\u201cFigure 5 is low-res. Please fix. All the other figures are beautiful - nice work!\\u201d\\n\\nThanks for catching this! We\\u2019ve updated the figure to be high resolution.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your review. We've done our best to address your concerns with paper revisions and in the comments below:\\n\\n> \\u201cThe method should be tested for speech synthesis and compared with WaveNet, Parallel WaveNet as well as Tacotran\\u201d\\n\\nWe agree that it would be very interesting to adapt these methods to speech synthesis tasks, but believe that this lies outside the scope of this initial paper on adversarial audio synthesis. As we note in AnonReviewer2\\u2019s comments, adapting the current methods to incorporate variable-length conditioning and generate variable-length sequences is a non-trivial extension and requires further research. In the context of this study, we\\u2019ve done our best to provide strong autoregressive baselines from state-of-the-art implementations of WaveNet models (including 8-bit and 16-bit output representations). \\n\\nThank you for highlighting that this is an important direction for this research. We have updated the text of the paper with a paragraph highlighting the importance and difficulty of pushing the current methods forward for more general audio synthesis tasks.\"}",
"{\"title\": \"Updates\", \"comment\": [\"We would like to thank all the reviewers for their thoughtful and helpful reviews. In addition to answering the points of each individual reviewer below, we also want to highlight several additions we have made to the appendix to hopefully improve clarity and reproducibility.\", \"An additional figure displaying spectrograms for a Bach Prelude synthesized both with and without latent interpolation, the audio for which can be found in the supplemental.\", \"Substantial experimental details to improve reproducibility, including detailed architecture parameters and training procedures.\", \"An additional NDB figure highlighting the lack of diversity of WaveNet baseline samples.\", \"A table of additional baseline comparisons, justifying the use of WaveGAN and 8-bit WaveNet as the strongest baselines.\"]}",
"{\"title\": \"This paper proposes an approach that uses GAN framework to generate audio.\", \"review\": \"This paper proposes an approach that uses GAN framework to generate audio through modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Experiments on NSynth dataset show that it gives better results then WaveNet. The most successful deep generative models are WaveNET, Parallel WaveNet and Tacotran that are applied to speech synthesis, the method should be tested for speech synthesis and compared with WaveNet, Parallel WaveNet as well as Tacotran.\\n\\nFor WaveNet, the inputs are text features, but for Tacotran, the inputs are mel-spectrogram. Here the inputs are log magnitudes and instantaneous frequencies. So the idea is not that much new.\\n\\nGAN has been used in speech synthesis, see \\nStatistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks\\nIEEE/ACM Transactions on Audio, Speech, and Language Processing ( Volume: 26 , Issue: 1 , Jan. 2018 )\\n\\nSo for this work, GAN's application to sound generation is not new.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting take on GAN audio synthesis - accept\", \"review\": \"This paper proposes a strategy to generate audio samples from noise with GANs. The treatment is analogous to image generation with GANs, with the emphasis being the changes to the architecture and representation necessary to make it possible to generate convincing audio that contains an interpretable latent code and is much faster than an autoregressive Wavenet based model (\\\"Neural Audio Synthesis of Musical Notes with WaveNet AutoEncoders\\\" - Engel et al (2017)). Like the other two related works (WaveGAN - \\\"Adversarial Audio Synthesis\\\" - Donahue et al 2018) and the Wavenet model above, it uses the NSynth dataset for its experiments.\\n\\nMuch of the discussion is on the representation itself - in that, it is argued that using audio (WaveGAN) and log magnitude/phase spectrograms (PhaseGAN) produce poorer results as compared with the version with the unrolled phase that they call 'IF' GANs, with high frequency resolution and log scaling to separate scales. \\n\\nThe architecture of the network is similar to the recently published paper (Donahue et al 2018), with convolutions and transpose convolutions adapted for audio. However, there seem to be two important developments. The current paper uses progressive growing of GANs (the current state of the art for producing high resolution images), and pitch conditioning (Odena et al, where labels are used to help training dynamics). \\n\\nFor validation, the paper presents several metrics, with the recently proposed \\\"NDB\\\" metric figuring in the evaluations, which I think is interesting. The IF-Mel + high frequency resolution model seems to outperform the others in most of the evaluations, with good phase coherence and interpolation between latent codes.\", \"my_thoughts\": \"Overall, it seems that the paper's contributions are centered around the representation (with \\\"IF-Mel\\\" being the best). The architecture itself is not very different from commonly used DCGAN variants - the authors say that using PGGAN is desirable, but not critical, and the use of labels from Odena et al. \\n\\nMany of my own experiments with GANs were plagued by instability (especially at higher resolution) and mode collapse problems without special treatment (largely documented, such as adding noise, adjusting learning rates and so forth). To this end, what do the authors see as 'high' resolution vis a vis audio signals? \\n\\nI am curious if we can adapt these ideas for recurrent generators as might appear in TTS problems. \\n\\nI rate this paper as an accept since this is one of the few existing works that demonstrate successful audio generation from noise using GANs, and owing to its novelty in exploring representation for audio.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Exciting work\", \"review\": \"This is an exciting paper with a simple idea for better representing audio data so that convolutional models such as generative adversarial networks can be applied. The authors demonstrate the reliability of their method on a large dataset of acoustic instruments and report human evaluation metrics. I expect their proposed method of preprocessing audio to become standard practice.\\n\\nWhy didn't you train a WaveNet on the high-resolution instantaneous frequency representations? In addition to conditioning on the notes, this seems like it would be the right fair comparison. \\n\\nI'm still not clear on unrolled phase which is central to this work. If you can, spend more time explaining this in detail, maybe with examples / diagrams? In figure 1, in unrolled phase, why is time in reverse?\", \"small_comments\": [\"Figure 1 & 2: label the x-axis as time. Makes it a lot easier to understand.\", \"I appreciate the plethora of metrics. The inception score you propose is interesting. Very cool that number of statistically-different bins tracks human eval!\", \"sentence before sec 2.2, and other small grammatical mistakes. Reread every sentence carefully for grammar.\", \"Figure 5 is low-res. Please fix. All the other figures are beautiful - nice work!\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
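The author response in this record describes phase unwrapping — adding 2π at each discontinuity — and taking the time derivative to obtain the instantaneous frequency. Below is a minimal NumPy sketch of that transformation; mapping the derivative back into (-π, π] is a common convention assumed here for illustration, not a statement of the paper's exact pipeline.

```python
import numpy as np

def instantaneous_frequency(phase, time_axis=-1):
    """Unwrap STFT phase along time and take its finite difference.

    `phase` is assumed to hold wrapped phase angles in (-pi, pi], e.g. from
    np.angle() applied to an STFT; np.unwrap adds 2*pi at each discontinuity
    to recover the monotonically increasing phase described in the thread.
    """
    unwrapped = np.unwrap(phase, axis=time_axis)
    dphase = np.diff(unwrapped, axis=time_axis)  # discrete time derivative
    # Map the derivative back into (-pi, pi] so the features stay bounded.
    return (dphase + np.pi) % (2.0 * np.pi) - np.pi
```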
|
BylQV305YQ | Toward Understanding the Impact of Staleness in Distributed Machine Learning | [
"Wei Dai",
"Yi Zhou",
"Nanqing Dong",
"Hao Zhang",
"Eric Xing"
] | Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments. In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\sqrt{T}). | [
"staleness",
"impact",
"distributed machine",
"distributed machine learning",
"systems",
"copy",
"model parameters",
"machine",
"network communication"
] | https://openreview.net/pdf?id=BylQV305YQ | https://openreview.net/forum?id=BylQV305YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gEJ4jAkV",
"B1e7cN_6RX",
"ByeqAlqeC7",
"SkgQs-Yg07",
"ryeUVuDg0Q",
"rJgnEvDxCX",
"B1e0R75T2m",
"SJxI4taJ2X",
"rJlVH62mo7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544627163910,
1543500938933,
1542656210210,
1542652314697,
1542645805573,
1542645556409,
1541411797864,
1540507949841,
1539718459873
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1436/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1436/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1436/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1436/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1436/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1436/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1436/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1436/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1436/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers that provided extensive and technically well-justified reviews agreed that the paper is of high quality. The authors are encouraged to make sure all concerns of these reviewers are properly addressed in the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good-quality paper\"}",
"{\"title\": \"Additional Experiments on LSTM\", \"comment\": \"LSTM is indeed an interesting piece to add. We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and types of SGD algorithms (see Figure 14), and have observed that (1) staleness impacts deeper network variants more than shallower counterparts, which is consistent with our observation in CNNs and DNNs; (2) different algorithms respond to staleness differently, with SGD and Adam more robust to staleness than Momentum and RMSProp.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the valuable feedback. Our work aims to strike a balance between empirical and theoretical approaches to understanding the effects of stale updates. Our goals in this work is threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which is related to the impact of staleness for gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a new convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there is inevitably areas for improvements. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\\n\\n1. The reviewer indeed raised interesting points. Our theory based on gradient coherence relates model complexity to the larger slowdown by staleness through the gradient coherence. Fig. 5 in the manuscript shows that deeper network generally exhibits lower gradient coherence. Our theorem shows that lower gradient coherence amplifies the effect of staleness s through the factor s/mu^2 in Eq (1) in the manuscript. We have included a brief discussion of this point in the manuscript. \\n\\n2. Staleness is known to add implicit momentum to SGD gradients [2]. The Adam optimizer keeps an exponentially decaying average of past gradients to modify gradient direction, and can be viewed as a version of momentum methods, whose momentum may be affected by staleness by similar reasoning. It is, however, challenging to analyze the convergence of these advanced gradient descent methods even under sequential settings [3], and the treatment under staleness is beyond the scope of our current work. It\\u2019d be an interesting future direction to create a delay tolerant version of Adam, similar to AdaRevision [4]. \\n\\n3. We thank the reviewer for pointing out a reference that we were not aware of. We agree that the sufficient direction assumption in [1] shares resemblance to our Definition 1. We note that their ``staleness\\u2019\\u2019 in the definition of sufficient direction is based on a layer-wise and fixed delay, whereas our staleness is a random variable that is subject to system level factors such as communication bandwidth. Also, we note that their convergence results in Theorem 1 and Theorem 2 do not capture the impact of staleness, whereas our Theorem 1 explicitly characterizes its impact on the choice of stepsize and the convergence rate, and also captures the interplay to gradient coherence. We have included the reference in our updated manuscript to provide further context.\\n\\n4. Though we use a peer to peer topology in our experiment, our delay pattern is agnostic to the underlying communication network topology. In practice it is more common to implement an intermediate aggregation such as parameter server [5] to reduce network traffic.\\n\\n5. We thank the reviewer for pointing out the error. The delay should be r ~ Categorical(0, 1, \\u2026, s), which gives the 0.5s + 1 expected delay. We have corrected in the updated manuscript.\\n\\n6. This is an important point to clarify. 
With SGD, ResNet8\\u2019s final test accuracy is about 73% in our setting, while ResNet20\\u2019s final test accuracy is close to 75%. Therefore, deeper ResNet can reach the same model accuracy in the earlier part of the optimization path, resulting in lower number of batches in Fig.1(a). However, when the convergence time is normalized by the non-stale (s=0) value, we observe the impact of staleness is higher on deeper models. We have included this clarification in the updated manuscript. \\n\\n7. Many synchronous systems uses batch size linear in the number of workers (e.g., [6]). We preserve the same batch size and more workers simply makes more updates in each iteration. We have reworded the footnote for better clarity.\\n\\n[1] Training Neural Networks Using Features Replay. https://arxiv.org/pdf/1807.04511.pdf\\n[2] Ioannis Mitliagkas et al. Asynchrony begets momentum, with an application to deep learning. \\n[3] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. International Conference on Learning Representations, 2018.\\n[4] H. Brendan Mcmahan and Matthew Streeter. Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning. NIPS 2014.\\n[5] M. Li, D. G. Andersen, J. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the Parameter Server. In Proceedings of OSDI, 2014. \\n[6] P. Goyal and et al. \\u00b4 A. Kyrola, A. Tulloch, Y. Jia, and K. He, \\u201cAccurate, large minibatch SGD: training imagenet in 1 hour,\\u201d CoRR, vol. abs/1706.02677, 2017.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We appreciate the insightful comments and careful review of our work. Our goals in this work is threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which is related to the impact of staleness for gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a new convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there is inevitably areas for improvements. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\", \"regarding_to_fixed_hyperparameters\": \"we have redone all experiments in Fig. 2 with hyperparameter search over the learning rate. We observe the same overall pattern as before: staleness slows down convergence, sometimes quite significantly at high levels of staleness. Furthermore, different algorithms have different sensitivity to staleness, and show similar trends as observed before. For example, SGD with Momentum remains highly sensitive to staleness. Notably, with the learning rate tuning, RMSProp no longer diverges, but is actually more robust to staleness than Adam and SGD with Momentum. We have updated manuscript to reflect this new observation. While detailed study of hyperparameter settings is beyond the scope of our work, we will open source our code upon acceptance to make the future reproducibility efforts easier and facilitate the use of simulation study alongside distributed experiments.\\n\\nLSTM is indeed an interesting piece to add. We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and types of SGD algorithms (see Figure 14), and have observed that (1) staleness impacts deeper network variants more than shallower counterparts, which is consistent with our observation in CNNs and DNNs; (2) different algorithms respond to staleness differently, with SGD and Adam more robust to staleness than Momentum and RMSProp.\\n\\nWe thank the reviewer for the careful review of our theoretical contributions. We especially appreciate the helpful comments that draw the connection between the low gradient coherence at the early phase of optimization and the annealing of the number of workers. Indeed, the convergence analysis of [1] requires the number of parallel workers to follow a \\\\sqrt{K} schedule, where K is the number of iterations. Our work addresses the convergence of non-convex, non-synchronous optimization from a very different starting point than [1] by using gradient coherence, and it seems that similar challenges remains at the initial phase of optimization. We have included a discussion of this connection in the revised manuscript. \\n\\n[1] Xiangru Lian and et al. Asynchronous parallel stochastic gradient for nonconvex optimization. In NIPS, 2015.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the comments. Our goals in this work are threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which is related to the impact of staleness for gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there is inevitably areas for improvements. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\\n\\nRegarding the reviewer\\u2019s first comment, we would like to clarify that our Definition 1 *does not* require all the gradients to point to close directions along the optimization path. Instead, it only requires the gradients to be positively correlated over a small number of iterations s, which is often very small (e.g. <10 in our experiments). Therefore, Definition 1 is not a global requirement on optimization path. We have clarified this Definition 1 in the revision.\\n\\nWe want to point out that our own results and a number of recent studies show strong evidences that SGD in practical neural network training encourage positive gradient coherence, e.g., Fig. 4(a)(b), and Fig. 5 in our manuscript, [1] and [3], etc. In particular, [1] shows that the optimization trajectories of SGD and Adam are generally smooth, which is also observed in [3] (e.g., Fig. 4 in [3]). These findings suggest that the direction of the optimization trajectory changes slowly during convergence and therefore justifies our Definition 1, even if the gradient direction may oscillate globally [3]. Such findings are perhaps not surprising, because the loss surface of shallow networks and deep networks with skip connections are dominated by large, flat, nearly convex attractors around the critical points [1][2]. This indicates that the degree of non-convexity is mild around critical points. With small batch sizes (32) and skip connections for deep networks in our experiments, our observation of gradient coherence is therefore consistent with the experimental evidence in existing literature. \\n\\nRegarding the reference (Choromanska et al. 2014) mentioned by the reviewer, even though it shows the (layer-wise) structure of critical points in simple networks with one hidden layer, the more recent works, including those highlighted above, have revealed additional curvature information around critical points and the optimization dynamics for many complex networks. We therefore sincerely ask the reviewer to reevaluate our work in light of these empirical evidence that are consistent with our findings. As pointed out by Reviewer 3, a similar assumption has been made in [4]. We have included these references and discussion in our latest revision.\", \"regarding_to_fixed_hyperparameters\": \"we have redone all experiments in Fig. 2 with hyperparameter search over the learning rate. We observe the same overall pattern as before: staleness slows down convergence, sometimes quite significantly at high levels of staleness. Furthermore, different algorithms have different sensitivity to staleness, and show similar trends as observed before. 
For example, SGD with Momentum remains highly sensitive to staleness. Notably, with the learning rate tuning, RMSProp no longer diverges, but is actually more robust to staleness than Adam and SGD with Momentum. We have updated manuscript to reflect this new observation.\\n\\nFinally, we fully understand the reviewer\\u2019s concern about reproducibility. We believe that our simulation work provides a well-controlled environment for future research of distributed machine learning systems. To make the future reproducibility efforts easier and facilitate the use of simulation study alongside distributed experiments, we will open source our code upon acceptance. \\n\\n\\n[1] Li et al. Visualizing the loss landscape of neural nets. To appear in NIPS 2018\\n[2] Nitish Shirish Keskar et al. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\\n[3] Eliana Lorch. Visualizing deep network training trajectories with pca. In ICML Workshop on Visualization for Deep Learning, 2016.\\n[4] Huo and et al. Training Neural Networks Using Features Replay. To appear in NIPS 2018.\"}",
"{\"title\": \"Revision Summary\", \"comment\": \"We thank all the reviewers for giving valuable feedback to this paper. We have revised the manuscript to incorporate the suggestions from the comments.\", \"we_highlight_the_following_revisions\": [\"We have provided additional discussion and references to recent works presenting empirical evidence consistent with our assumption for Theorem 1.\", \"We have redone experiments in Fig. 2 with hyperparameter tuning and updated the writing accordingly.\", \"We have included a brief discussion on how Theorem 1 relates model complexity to the larger slowdown from staleness observed in our experiments.\", \"We have included reference to [1] which uses the sufficient direction assumption that shares the resemblance to our Definition 1 but differs in certain key aspects.\", \"We have made further clarifications throughout the manuscript based on reviewers\\u2019 comments.\", \"We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and types of SGD algorithms (see Figure 14) and see how staleness impacts the convergence.\", \"[1] Huo and et al. Training Neural Networks Using Features Replay. To appear in NIPS 2018.\"]}",
"{\"title\": \"The paper addresses asynchronous optimization with a focus on staleness effect. A strong hypothesis is made on the path followed by the optimization walk and concerns should be raised with the hyperparameters in the empirical validation.\", \"review\": \"The papers addresses the important issue with asynchronous SGD: stale gradients.\\n\\nConvergence is proven under an assumption on the path followed by the optimization walk. Namely, gradient are assumed to be all pointing to the close directions along the walk. My major concern is that this is a strong (if not completely wrong) hypothesis in the practical case of deep learning, with high dimensional models and totally non-convex loss functions (see e.g. \\nChoromanska et al. 2014).\\n\\nThe paper illustrates empirically the convergence claims, but only under fixed hyper-parameters, which completely illustrates the recent concerns about the reproducibility crisis in ML.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting empirical and theoretical analysis of the convergence of async SGD under delay\", \"review\": \"This paper presents and empirical and theoretical study of the convergence of asynchronous stochastic gradient descent training if there are delays due to the asynchronous part of it. The paper can be neatly split in two parts: a simulation study and a theoretical analysis.\\n\\nThe simulation study compares, under fixed hyperparameters, the behavior of distributed training under different simulated levels of delay on different problems and different model architectures. Overall the results are very interesting, but the simulation could have been more thorough. Specifically, the same hyperparameter values were used across batch sizes and across different values of the distributed delay. Some algorithms failed to converge under some settings and others experienced dramatic slowdowns, but without careful study of hyperparameters it's hard to tell whether these behaviors are normal or outliers. Also it would have been interesting to see a recurrent architecture there, as I've heard much anecdotal evidence about the robustness of RNNs and LSTMs to asynchronous training. I strongly advise the authors to redo the experiments with some hyperparameter tuning for different levels of staleness to make these results more believable.\\n\\nThe theoretical analysis identifies a quantity called gradient coherence and proves that a learning rate based on the coherence can lead to an optimal convergence rate even under asynchronous training. The proof is correct (I checked the major steps but not all details), and it's sufficiently different from the analysis of hogwild algorithms to be of independent interest. The paper also shows the empirical behavior of the gradient coherence statistic during model training; interestingly this seems to also explain the heuristic commonly believed that to make asynchronous training work one needs to slowly anneal the number of workers (coherence is much worse in the earlier than later phases of training). This quantity is interesting also because it's somewhat independent of the variance of the stochastic gradient across minibatches (it's the time variance, in a way), and further analysis might also show interesting results.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Empirical explanation of the impact of staleness\", \"review\": \"This paper tries to analyze the impact of the staleness on machine learning models in different settings, including model complexity, optimization methods or the number of workers. In this work, they study the convergence behaviors of a wide array of ML models and algorithms under delayed updates, and propose a new convergence analysis of asynchronous SGD method for non-convex optimization.\", \"the_following_are_my_concerns\": \"1. \\\"For CNNs and DNNs, the staleness slows down deeper models much more than shallower counterparts.\\\" I think it is straightforward. I want to see the theoretical analysis of the relation between model complexity and staleness. \\n2. \\\"Different algorithms respond to staleness very differently\\\". This finding is quite interesting. Is there any theoretical analysis of this phenomenon? \\n3. The \\\"gradient coherence\\\" in the paper is not new. I am certain that \\\"gradient coherence\\\" is very similar to the \\\"sufficient direction\\\" in [1]. \\n4. What is the architecture of the network? in the paper, each worker p can communicate with other workers p'. Does it mean that it is a grid network? or it is just a start network. \\n5. in the top of page 3, why the average delay under the model is 1/2s +1, isn't it (s-1)/2? \\n6. on page 5, \\\"This is perhaps not surprising, given the fact that deeper models pose more optimization challenges even under the sequential settings.\\\" why it is obvious opposite to your experimental results in figure 1(a)? Could you explain why shallower CNN requires more iterations to get the same accuracy? it is a little counter-intuitive.\\n7. I don't understand what does \\\"note that s = 0 execution treats each worker\\u2019s update as separate updates instead of one large batch in other synchronous systems\\\" mean in the footnote of page 5.\\n\\n\\nAbove all, this paper empirically analyzes the effect of the staleness on the model and optimization methods. It would be better if there is some theoretical analysis to support these findings.\\n\\n[1] Training Neural Networks Using Features Replay https://arxiv.org/pdf/1807.04511.pdf\\n\\n\\n===after rebuttal===\\nAll my concerns are addressed. I will upgrade the score.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
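The discussion in this record centers on "gradient coherence" — gradients staying positively correlated over a small window of s iterations — and on simulated delays drawn from Categorical(0, ..., s). Below is a minimal sketch of monitoring that quantity during training; treating coherence as a cosine similarity over flattened gradients is an illustrative simplification, not necessarily the paper's exact Definition 1, and all names are assumptions.

```python
import numpy as np

def gradient_coherence(grad_history, s):
    """Correlation between the newest gradient and the one from s steps
    earlier, where `grad_history` is a list of flattened gradient vectors
    collected during training (requires len(grad_history) > s)."""
    g_now, g_past = grad_history[-1], grad_history[-1 - s]
    denom = np.linalg.norm(g_now) * np.linalg.norm(g_past) + 1e-12
    return float(np.dot(g_now, g_past) / denom)

# Simulated staleness as described in the thread: each update applies a
# gradient delayed by r steps, with r drawn uniformly from {0, 1, ..., s}.
def sample_delay(s, rng=np.random):
    return rng.randint(0, s + 1)
```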
|
Hyg74h05tX | Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design | [
"Jonathan Ho",
"Xi Chen",
"Aravind Srinivas",
"Yan Duan",
"Pieter Abbeel"
] | Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. | [
"Deep Generative Models",
"Normalizing Flows",
"RealNVP",
"Density Estimation"
] | https://openreview.net/pdf?id=Hyg74h05tX | https://openreview.net/forum?id=Hyg74h05tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SklDy910y4",
"rkltsyQYRm",
"SJeE7kmtRX",
"Skly26zK0Q",
"ByxNQVm5nQ",
"S1e0_khKhX",
"SkxrmDQunQ",
"SylTRPCaom"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544579550766,
1543217056812,
1543216923600,
1543216550754,
1541186587864,
1541156726398,
1541056285255,
1540380629438
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1435/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1435/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1435/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1435/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths:\\n--------------\\nThis paper was clearly written, contained novel technical insights, and had SOTA results. In particular, the explanation of the generalized dequantization trick was enlightening and I expect will be useful in this entire family of methods. The paper also contained ablation experiments.\", \"weaknesses\": \"------------------\\nThe paper went for a grab-bag approach, when it might have been better to focus on one contribution and explore it in more detail (e.g. show that the learned pdf is smoother when using variational quantization, or showing the different in ELBO when using uniform q as suggested by R2).\\n\\nAlso, the main text contains many references to experiments that hadn't converged at submission time, but the submission wasn't updated during the initial discussion period. Why not?\", \"points_of_contention\": \"-----------------------------\\nEveryone agrees that the contributions are novel and useful. The only question is whether the exposition is detailed enough to reproduce the new methods (the authors say they will provide code), and whether the experiments, which meet basic standards, of a high enough standard for publication, because there was little investigation into the causes of the difference in performance between models.\", \"consensus\": \"----------------\\nThe consensus was that this paper was slightly below the bar.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Lovely main idea\"}",
"{\"title\": \"Re: interesting improvements for RealNVP/Glow models, but not well analysed\", \"comment\": [\"On looseness of the variational lower bound using a uniform q: our intention was not to emphasize the looseness of the bound, but rather to emphasize that the flow is forced to compensate for the inexpressive q by assigning high probability density to hypercubes around the data. This hurts generalization error, as it is an unnatural task for flows to perform: in our CIFAR10 ablation study, we found that the train-test performance gap with uniform q was 3 times larger than the gap using a flow-based q. We have updated the paper with discussion on this.\", \"On training speed with the mixture-of-logistics layer: we found the difference in training speed to be negligible. When training our CIFAR model on 8 NVIDIA 1080ti GPUs with batch size 64, we achieved 110 images/sec without mixture of logistics, and 106 images/sec with mixture of logistics (with 32 mixture components).\", \"We have updated our paper with illustrations and results on datasets with larger images: 64x64 ImageNet and 64x64 CelebA.\", \"On importance of the individual model contributions: altogether, the model improvements we proposed make Flow++ the current state-of-the-art non-autoregressive model on CIFAR10, Imagenet 32x32, and ImageNet 64x64, and in fact it outperforms the Multiscale PixelCNN (Reed et al. 2017).\"]}",
"{\"title\": \"Re: Three threads of improvements to normalizing flow models, reducing the gap between AR and non-AR models\", \"comment\": [\"Our reported results follow the standard in likelihood-based generative modeling: the CIFAR10 results are on the test set, and the ImageNet results are on the publicly available validation sets, available here: http://image-net.org/small/download.php\", \"Regarding speed of AR models: WaveRNN is indeed excellent work that increases AR sampling speed, and we expect that some of their improvements (such as weight sparsity) will also improve flow models. We look forward to seeing how well WaveRNN-like models perform on image datasets, which were the focus of our work.\", \"We have updated the paper with CelebA results.\", \"On checkerboard and channel splitting: we have updated the paper to mention how we use them, and details will be given in a cleaned source code release.\", \"On whether our architecture is simply a matter of introducing more parameters: our ablations did control for parameter count, by increasing number of filters to compensate for removed parts of the architecture. We found that our improvements in density estimation were not explained by increased parameter count (in fact, some of our worse ablations have slightly more parameters than the full Flow++ model), but rather from improved inductive biases, and indeed our results are now state-of-the-art among non-autoregressive models on CIFAR10, 32x32 ImageNet, and 64x64 ImageNet. We have updated the section on ablations with this information.\", \"On \\u201cspline or cubic interpolation instead of uniform dequantization\\u201d: we use uniform dequantization as our baseline since it is the standard dequantization technique employed in all prior work on continuous density modeling (see section 3.1 of the paper); we are not aware of any references in prior literature to spline or cubic interpolation for data dequantization.\"]}",
"{\"title\": \"Re: Three ingredients for more powerful flow-based model\", \"comment\": \"Regarding variational dequantization with IAF: the IAF is indeed a good candidate as a dequantization distribution, and is an interesting direction for future investigation.\"}",
"{\"title\": \"Re: \\\"Our validation metrics\\\" - on the test set?\", \"comment\": \"The abstract should read \\u201cevaluation metrics\\u201d instead. All reported numbers use the same metrics used in the literature on likelihood-based generative modeling (following https://arxiv.org/abs/1601.06759 and https://arxiv.org/abs/1807.03039 for example) -- the ImageNet results use the test split given by http://image-net.org/small/download.php and the CIFAR results use the test split given by https://www.cs.toronto.edu/~kriz/cifar.html\"}",
"{\"title\": \"Three threads of improvements to normalizing flow models, reducing the gap between AR and non-AR models\", \"review\": \"I think the ideas are of sufficient interest to the community to merit acceptance & discussion, but I still miss the high resolution samples we got with the Glow paper. Responses to my concerns somewhat addressed, though simpler alternatives to uniform dequant would be nice.\\n\\n=====\\n\\nImprovements are attained on two image datasets by (a) variational dequantization, (b) mixture CDF coupling layers, and (c) self-attention in conditioning net.\", \"quality\": \"The work is fine, demonstrating familiarity with recent work in flows and improving upon it. The experiments are on CIFAR-10 and 32x32 ImageNet. Unclear if the evaluation numbers are on a test set or a 'validation' set. I will be assuming test set. The visualizations are fine, but not nearly as convincing as the Glow visualizations on CelebA.\", \"clarity\": \"The presentation is clear enough, and the motivation seems reasonable, though the assertion that all AR models are slow seems a bit belied by the recent WaveRNN work, which gets a Wavenet like model running in realtime on a phone. On the other hand, I felt like the proposed fixes were all a bit scattered here & there. Each could stand as a research topic on its own, and one paper can't fit in much analysis of all three. For example, a RealNVP style model usually needs to shuffle or reverse the channels to attain decent performance, but there's no discussion of how/whether that is done here. Folks wanting to replicate this work would want a formula for the tractable log-abs-det-jacobian of the coupling layer, but all we have is \\\"involves calculating the pdf of the logistic mixtures\\\".\", \"originality\": \"Self-attention is not new, though its uptake in the conditioning networks of flow models has been slow/nonexistent. I found the dequantization improvement more novel. The new proposal for a coupling layer seems like a clever way of introducing more parameters in a structured manner.\", \"significance\": \"Bringing flow models closer to the performance of AR models is good progress.\\n\\n\\nQuestions\\nI wonder whether some kind of spline or cubic interpolation might achieve similar improvement over the uniform dequantization. Perhaps uniform is not the best baseline?\\nThe new coupling layer might just be viewed as a way of introducing many more parameters in a structured manner. Have you compared parameter counts?\\nAppendix B shows some portion of the code, but seems like a missed opportunity to fit this into a framework like tfp.bijectors. The code seems glued in somewhat slapdash. For example, the tf_go function looks like debugging/logging code (unwanted), and lacks any usage.\\n\\nI think this work is promising and interesting to the probabilistic modeling community, but needs some cleanup and some more compelling presentation (non image data? Glow-style graphics?).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Three ingredients for more powerful flow-based model\", \"review\": \"This paper offers architectural improvements for flow-based models that enable them to be very competitive with autoregressive models in terms of bits/dim metrics while still providing efficient sampling scheme. The three main contributions are the use of variational dequantization scheme, more powerful element-wise bijections (mixture of logistic CDF), and multi-head self-attention in the dependency structure.\", \"the_two_first_contributions_are_in_my_opinion_the_most_interesting_as\": \"- variational dequantization demonstrates the improvement that one can obtain by redefining part of the image processing that has been overlooked before;\\n- the inversion of element-wise bijection without closed form inverse can be efficiently approximated with bisection (binary search).\\nThe performances achieved by the resulting model are in my opinion a stepping stone in the area of flow-based models and encouraging as to their potential. \\nThe ablation study suggest that each contribution by themselves only improve slightly the model but that their simultaneous application results in a stronger boost in performance, which I can't explain from the paper. Nonetheless, some this ablation study was useful in tearing apart the contribution of each of several pieces of the model (missing pieces being gated convolutions, dropout, and instance normalization), although without explaining them.\\nAlthough flow-based model can intuitively sample faster than autoregressive models, the measure of sampling time is a bit interesting as an actual evidence of that claim. But the analysis of sampling time should be done on same hardware as to fair comparison before it can be a convincing argument. \\nConcerning variational dequantization, is there a reason coupling layer architecture was used instead of potentially more powerful model with less convenient inverses such as inverse autoregressive flow?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting improvements for RealNVP/Glow models, but not well analysed\", \"review\": \"The paper improves upon the Real NVP/Glow design by proposing better dequantization schemes and more expressive forms of coupling layers. I really like Real NVP models, which I think are a bit underappreciated. Thus, I\\u2019m happy that there are papers trying to improve their performance. However, I wish this was done with more rigour.\", \"the_paper_makes_3_claims_about_the_current_flow_models\": \"(1) it is suboptimal to use additive uniform noise when dequantizing images, (2) affine coupling layers are not expressive enough, and (3) the architectures fail to capture global image context. I\\u2019ll comment on these claims and proposed solutions below.\\n\\n(1) I agree with the reasoning behind the need for a better dequantization distribution. However, I think the authors should provide an evidence that the lower bound is indeed loose when q is uniform. For example, for the CIFAR-10 model, the authors calculated a gap of 0.025 bpd when using variational dequantization. What would this gap be when using uniform q? Maybe, a clear illustration of the dequantization effect on a simpler dataset or a toy example would be more useful.\\n\\n(2) My main concern about the mixture CDFs coupling layer is how much bigger the model becomes and how much slower it trains. I find this analysis crucial when deciding whether 0.05 bpd improvement as reported in Table 1 is worth the hassle.\\n\\n(3) As a person not familiar with the Transformer, I couldn\\u2019t understand how exactly self-attention works and how much it helps the model to capture the global image context. Also, I think this problem needs a separate illustration on a dataset of larger images. \\n \\nThe experiments section is very weak in backing up the identified problems and proposed solutions. Firstly, I think it is more clear if the ablation study is done in reverse: instead of making Flow++ and removing components, start with the vanilla model and then add stuff. Secondly, it\\u2019s not clear if these improvements generalize across datasets, e.g. when images are larger than 32x32. Though, larger inputs may lead to huge models which are impossible to train when the resources are quite limited. That\\u2019s why I find it important to report how much complexity is added compared to the initial Real NVP. Also, I think it\\u2019s a well-known fact that sampling from PixelCNN models is slow unlike for Real NVPs, so I don\\u2019t find the results in Table 3 surprising or even useful. \\n\\nTo conclude, I find this paper unfinished and wouldn\\u2019t recommend its acceptance until the analysis of the problems and their solutions becomes better thought out.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
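One review in this record highlights that the mixture-of-logistics CDF coupling layer has no closed-form inverse and is instead inverted by bisection. Below is a minimal NumPy sketch of that idea; the search bounds, iteration count, and parameter shapes are assumptions chosen for illustration, not the implementation from the paper.

```python
import numpy as np

def logistic_mixture_cdf(x, pi, mu, s):
    """CDF of a K-component logistic mixture; pi, mu, s have shape (K,)
    and pi is assumed to sum to 1."""
    z = (x[..., None] - mu) / s
    return np.sum(pi / (1.0 + np.exp(-z)), axis=-1)

def invert_by_bisection(f, y, lo=-100.0, hi=100.0, iters=60):
    """Invert a strictly increasing scalar map f elementwise on array y.

    Bisection halves the bracket each step, so ~60 iterations pin the
    inverse down to float precision inside the assumed [lo, hi] bracket.
    """
    lo = np.full_like(y, lo, dtype=float)
    hi = np.full_like(y, hi, dtype=float)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        below = f(mid) < y          # true where the root lies above mid
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

# Usage: x = invert_by_bisection(lambda t: logistic_mixture_cdf(t, pi, mu, s), y)
```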
|
r1xX42R5Fm | Beyond Greedy Ranking: Slate Optimization via List-CVAE | [
"Ray Jiang",
"Sven Gowal",
"Yuqiu Qian",
"Timothy Mann",
"Danilo J. Rezende"
] | The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores. However, this method fails to optimize the slate as a whole, and hence, often struggles to capture biases caused by the page layout and document interdependencies. The slate recommendation problem aims to directly find the optimally ordered subset of documents (i.e. slates) that best serve users’ interests. Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page. Therefore we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. In this paper, we introduce List Conditional Variational Auto-Encoders (List-CVAE), which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates. Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on various scales of document corpora. | [
"CVAE",
"VAE",
"recommendation system",
"slate optimization",
"whole page optimization"
] | https://openreview.net/pdf?id=r1xX42R5Fm | https://openreview.net/forum?id=r1xX42R5Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJefj6-Pe4",
"HJxk8hf41N",
"Hyxrzl_TAm",
"H1g9MpDTAm",
"HJlUh4EU07",
"ryg178Yi6Q",
"SygYUHKopX",
"Sklln4tiam",
"Syx1JutAn7",
"BklYOAG62m",
"rJg80M9qhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545178521999,
1543937094865,
1543499789313,
1543499026240,
1543025837955,
1542325782982,
1542325585225,
1542325415564,
1541474263034,
1541381745468,
1541214925889
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1434/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1434/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1434/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1434/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1434/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1434/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a novel perspective on optimizing lists of documents (\\\"slates\\\") in a recommendation setting. The proposed approach builds on progress in variational auto-encoders, and proposes an approach that generates slates of the desired quality, conditioned on user responses.\\n\\nThe paper presents an interesting and promising novel idea that is expected to motivate follow-up work. Conceptually, the proposed model can learn complex relationships between documents and account for these when generating slates. The paper is clearly written. The empirical results show clear improvements over competitive baselines in synthetic and semi-synthetic experiments (real users and clicks, learned user model).\\n\\nThe reviewers and AC also note several potential shortcomings. The reviewers asked for additional baselines that reflect current state of the art approaches, and for comparisons in terms of prediction times. There are also concerns about the model's ability to generalize to (responses on) slates unseen during training, as well as concerns about the realism of the simulated user model in the evaluation. There were questions regarding the presentation, including model details / formalism.\\n\\nIn the rebuttal phase, the authors addressed the above as follows. They added new baselines that reflect sequential document selection (auto-regressive MLP and LSTM) and demonstrate that these perform on par with greedy approaches. They provide details on an experiment to test generalization, showing both when the model succeeds and where it fails - which is valuable for understanding the advantages and limitations of the proposed approach. The authors clarified modeling and evaluation choices. \\n\\nThrough the rebuttal and discussion phase, the reviewers reached consensus on a borderline / lean to accept decision. The AC suggests accepting the paper, based on the innovative approach and potential directions for follow up work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel take on slate generation for recommendation using conditional VAEs\"}",
"{\"title\": \"Thank you.\", \"comment\": \"I read your reply, thank you for the new experiments and detailed description.\\n\\nRegarding the setting (multiple clicks instead of a single click), I do believe that this is particular even for recommender systems and it's worth saying it explicitly early in the paper.\"}",
"{\"title\": \"A gentle nudge for feedback [paper updated]\", \"comment\": \"Thanks again for your insightful review. We would like to know your thoughts on our rebuttal if possible. We have since updated the paper and would appreciate any additional feedback you have.\"}",
"{\"title\": \"Second reply to reviewer's comments\", \"comment\": \"1) \\\"I understand the rationale but given the venue, I think it is necessary not to restrict yourselves to \\\"industry-specific\\\" baselines (of course you can also discuss computation aspects). I understand that there is some subjectivity in this.\\\"\\n\\nAuto-regressive baselines (Position and LSTM) have been added to the paper. These methods can, in theory, capture some level of positional and contextual bias. However, we want to point out that these models are computationally more expensive and, as such, are rarely used in time-critical applications. We haven't found any compelling alternative method that can capture contextual bias without a significant overhead in computation.\\n\\n2) a) \\\"Isn't that a more typical setting (i.e., in many domains I would imagine that users would consume a small number of recommended items from a slate)? (This comment also relates to your response about the results reported in Figure 4--6)\\\"\\n\\nWhile in some problems (typically information retrieval settings) most slates have max 1 positive response by design, in the recommendation problems that aim at maximizing user engagement, slates often do have multiple clicks (please refer to papers such as [Katariya16: http://proceedings.mlr.press/v48/katariya16.pdf]). As you correctly pointed out, we haven't run any experiment where slates \\\"naturally\\\" have either 0 or 1 click (the current experiments still target multiple clicks, even if only 0/1 click slate are used during training). Running such experiment will likely require rethinking how the ideal condition is selected, and we do plan to focus on this aspect in future work. Current experiments seem to indicate that we can get away with a slightly overestimated ideal condition.\\n\\n\\nb) \\\"My original comment was related to training/test mismatch. In addition to showing empirically that it works, it would be nice to provide insights as to why it works in practice with your proposed model. You provide some of that in your rebuttal (in the paragraph that starts with \\\"It is true that evaluation of our model requires choosing a c*\\\") but I am wondering if you have any insights about how the model \\\"generalizes\\\" to different conditioning information?\\\" \\n\\nGeneralization will heavily depend on the underlying dataset. List-CVAE does assume that we can correctly estimate the conditional distribution of slates w.r.t. a given target engagement vector. When some engagement vectors are missing from the training data, List-CVAE's performance degrades (as shown in Figure 6). Prior experiments (on additional synthetic data) and current experiment show that List-CVAE tends to generalize, but we do not have any theoretical insight as to why this is the case.\\n\\n3) \\\"Perhaps I'm missing something here. I am not sure I understand how that would have been detected. In other words, weakened compared to what? One perhaps convincing experiment would be to compare to the performance of List-CVAE trained on the randomly shuffled slates (created in the same way from the same dataset). Of course, I am not suggesting you have to run this experiment but it would perhaps make your argument more convincing.\\\"\", \"the_experiment_suggested_is_a_good_idea\": \"intuitively, we expect that List-CVAE trained on randomly shuffled slates will perform poorly when order is important. In particular, it will be impossible to learn positional bias (since positions will be random). 
The fact that List-CVAE performs well in our slate version of the Yoochoose dataset seem to indicate that order is important in this dataset - however, we will run the suggested experiment to test this hypothesis.\"}",
"{\"title\": \"Replies to your rebuttal.\", \"comment\": \"1) You write: \\\"In this paper we mainly compared List-CVAE with popular baselines that are suitable for large-scale production settings.\\\"\\n\\nI understand the rationale but given the venue, I think it is necessary not to restrict yourselves to \\\"industry-specific\\\" baselines (of course you can also discuss computation aspects). I understand that there is some subjectivity in this.\\n\\n\\n2) You write \\\"Regarding the generalization ability of the model, we performed a test in the Yoochoose dataset by masking out a percentage of the top responses at the end of Section 4.2. List-CVAE only failed to generalize to the case of training on slates with maximum 1 positive response (h=40%, Figure 6d).\\\"\\n\\nThat's interesting, thanks for pointing it out. I still have two comments about it: \\n\\na) Isn't that a more typical setting (i.e., in many\\u00a0domains I would imagine that users would consume a small number of recommended items from a slate)? (This comment also relates to your response about the results reported in Figure 4--6)\\n\\n2) My original comment was related to training/test mismatch. In addition to showing empirically that it works, it would be nice to provide insights as to why it works in practice with your proposed model. You provide some of that in your rebuttal (in the paragraph that starts with \\\"It is true that evaluation of our model requires choosing a c*\\\") but I am wondering if you have any insights about how the model \\\"generalizes\\\" to different conditioning information? \\n\\n3) \\\"Given that there are few publicly available slate datasets, we had to devise slates using the temporal order of clicks/purchases within each user session. If the ordering were not important, it would have considerably weakened the performance of List-CVAE.\\\"\\n\\nPerhaps I'm missing something here. I am not sure I understand how that would have been detected. In other words, weakened compared to what? One perhaps convincing experiment would be to compare to the performance of List-CVAE trained on the randomly shuffled slates (created in the same way from the same dataset). Of course, I am not suggesting you have to run this experiment but it would perhaps make your argument more convincing.\\n\\nThanks again for providing feedback and updating your paper.\"}",
"{\"title\": \"Replies to the reviewer's comments [paper updated]\", \"comment\": \"Thank you for the nice review! We updated the paper to add more baselines to address the reviewer\\u2019s comments. The ranking industry standard is generally not well-defined, but in this paper, we mainly compare against the benchmark industry method of the two-stage ranking, which is a weaker performing version of the Greedy MLP baseline due to the candidate generation stage. We also compare List-CVAE with other popular greedy baselines that are suitable for large-scale production settings. For non-greedy baselines, we ran experiments against auto-regressive LSTM (learning contextual/positional biases) and auto-regressive position MLP (learning positional biases) models, which are now included in the results and they performed on par with the greedy baselines.\"}",
"{\"title\": \"Replies to the reviewer's comments [paper updated]\", \"comment\": \"Thank you for the informative review focusing on the following three aspects of the paper, generalization capacity of the model, evaluation metrics and comparison against other baselines. We incorporated the reviewer\\u2019s comments into the latest version of the paper, to add a couple of non-greedy baselines and clarify the generalization capacity and the general motivation of our work.\\n\\nRegarding the generalization ability, given the binary vector format of conditions and the model design, the decoder of the List-CVAE model learns the relationship between the compression of a document and its binary response, and it picks up pairwise (or more generally, sub-slate) interactions from different sub-optimal training slates yet using all of them to construct an optimal slate at test time. We tested the generalization capacity of the model in real world experiments by masking out a percentage of the top responded slates in the Yoochoose dataset at the end of Section 4.2. List-CVAE showed strong generalization capacity and only failed to generalize in the case of training on slates with maximum 1 positive response (h=40%, Figure 6d). This result is expected since List-CVAE can not learn much about the interactions between documents given 0 or 1 positive response, whereas the MLP-type models learn click probabilities of single documents in the same way as in slates with higher responses. \\n\\nIt is true that evaluation of our model requires choosing a c* at or near the edge of the support of P(c). However we can compromise generalization vs. performance by controlling c* to some extent (in this paper, we did not need to use sub-optimal conditioning since the model readily generalizes well to the optimal condition. However in practice, depending on the datasets, one can choose close-to-optimal conditioning at test time for better generalization results). Moreover, interactions between documents are generated by similar mechanisms whether they are from the optimal or sub-optimal slates. Thus the model can learn these mechanisms from sub-optimal slates and generalize to optimal slates (as the experiment results indicate). This discussion has been added to the paper.\\n \\nOn evaluation metrics, it is not always the case that a higher diversity-inclusive score gives better slates measured by user\\u2019s total responses. Even though diversity-inclusive metrics are indeed more transparent, they do not directly measure our end goal, which is to maximize the expected number of total responses on the generated slates. We added some clarification on this in the paper.\\n\\nRegarding our choice of baselines, in this paper, our goal is to challenge the industry state-of-the-art benchmark two-stage ranking on both small and large scales. Therefore we mainly compared List-CVAE with popular baselines that are suitable for large-scale recommender system production settings. For non-greedy baselines, we ran experiments against auto-regressive LSTM (learning contextual/positional biases) and auto-regressive position MLP (learning positional biases) models, which are now included in the results and they performed on par with the greedy baselines. 
\\n\\nThe two models proposed by the reviewer (Ai 2018, Zamani 2018) (we added Ai 2018 as a reference in the paper) are solving information retrieval problems and hence (rightfully) think about slate generation from a ranking paradigm using greedy ranking evaluation metrics such as nDCG, which assumes that \\u201cbetter\\u201d documents should be put into higher positions. However, while this is a natural assumption for information retrieval problems, it is the exact assumption that we avoid making since our problem setting is optimal slate generation for maximizing user engagement. One can imagine cases where leaving good quality documents towards the end of the slate encourages users to browse to later positions of the slate, and thus the effects on total user engagement may be diverse. \\n\\nFinally, we would like to emphasize that we are proposing a paradigm shift for recommender systems that aim to maximize user engagements on whole slates, departing from the traditional viewpoint of a ranking problem and to adopt a direct slate generation framework. As such, we call for new baselines and evaluation metrics that are representative of the new paradigm.\"}",
"{\"title\": \"Replies to the reviewer's comments [paper updated]\", \"comment\": \"Thank you for the thoughtful review. We updated the paper to reflect several of the reviewer\\u2019s comments, which will be mentioned below. In this paper we mainly compared List-CVAE with popular baselines that are suitable for large-scale production settings. For non-greedy baselines, we ran experiments against auto-regressive LSTM (learning contextual/positional biases) and auto-regressive position MLP (learning positional biases) models, which are now included in the results and they performed on par with the greedy baselines.\\n\\nIn terms of prediction time, all baselines (except auto-regressive methods) and List-CVAE have a run-time complexity of O(k * log(n)) where k is the slate size and n is the number of documents. To obtain logarithmic scaling, one can use kd-trees [Sproull, R.F., Refinements to nearest-neighbor searching in k-dimensional trees. Algorithmica 6(4) (1991) 579\\u2013589]. Given this, providing exact numbers becomes highly dependent on the underlying implementation details. With that said, in our experiments, all greedy baselines and the CVAE model are performing below 0.04 ms per test example on a single GPU, and their differences are very small (less than 0.01 ms). The auto-regressive LSTM is ~10 times slower as expected.\\n\\nRegarding the generalization ability of the model, we performed a test in the Yoochoose dataset by masking out a percentage of the top responses at the end of Section 4.2. List-CVAE only failed to generalize to the case of training on slates with maximum 1 positive response (h=40%, Figure 6d). This result is expected since List-CVAE can not hope to learn much about the interactions between documents given only 0 or 1 positive response per slate, whereas the MLP-type models learn click probabilities of single documents in the same way as in slates with higher responses. This discussion is now added to the paper.\\n\\nIt is true that evaluation of our model requires choosing a c* at or near the edge of the support of P(c). However we can compromise generalization vs performance by controlling c* to some extent (in this paper, we did not need to use sub-optimal conditioning since the model readily generalizes well to the optimal condition. However in practice, depending on the datasets, one can choose close-to-optimal conditioning at test time for better generalization results). Moreover, interactions between documents are generated by similar mechanisms whether they are from the optimal or sub-optimal slates. Thus the model can learn these mechanisms from sub-optimal slates and generalize to optimal slates (as the experiment results indicate). \\n\\nThe figures (4--6) do report test/eval performance as a function of training steps. Due to the setup (Eq. 5) of our simulation environments, a random slate has on average over 3 clicks. On the Yoochoose dataset, in Section 4.2, the paper explained that \\u201cwe removed slates with no positive responses such that after removal they only account for 50% of the total number of slates\\u201d (in order to save training time). Therefore the random slates from the training set (a clarification of this baseline has been added to the paper) have on average slightly over 0.5 purchases. \\n\\nGiven that there are few publicly available slate datasets, we had to devise slates using the temporal order of clicks/purchases within each user session. 
If the ordering were not important, it would have considerably weakened the performance of List-CVAE.\\n\\nOther comments (e.g.: Theory -> Method) have been addressed in the newly upload version of the paper.\"}",
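The O(k * log(n)) retrieval described in the reply above can be made concrete with a small sketch. Everything below is illustrative (the corpus size, embedding dimension, and random arrays are placeholders of ours, not the authors' setup): a k-d tree built once over the document embedding table maps each decoded slate position to its nearest document.

```python
import numpy as np
from scipy.spatial import cKDTree

n, d, k = 100_000, 16, 10                           # hypothetical corpus, embed dim, slate size
doc_emb = np.random.randn(n, d).astype(np.float32)  # stand-in for learned document embeddings
tree = cKDTree(doc_emb)                             # built once, offline

decoded = np.random.randn(k, d).astype(np.float32)  # decoder output, one vector per slot
_, slate = tree.query(decoded, k=1)                 # nearest document per slot, ~O(log n) each
print(slate)                                        # k document indices forming the slate
```

Note that exact k-d trees degrade toward linear scan as the embedding dimension grows, which is one reason approximate nearest-neighbor indexes are often substituted in production systems.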
"{\"title\": \"Scalable method for the slate-recommendation task\", \"review\": \"This paper pr poses a conditional generative model for slate-based recommendations. The idea of slate recommendation is to model an ordered-list of items instead of modelling each item independently as is usually done (e.g., for computational reasons). This provides a more natural framework for recommending lists of items (vs. recommending the items with the top scores).\\n\\nTo generate slates, the authors propose to learn a mapping from a utility function (value) to an actual slate of products (i.e., the model conditions on the utility). Once fitted, recommending good slates is then achieved by conditioning on the optimal utility (which is problem dependant) and generating a slate according to that utility. This procedure which is learned in a conditional VAE framework effectively bypasses the intractable combinatorial search problem (i.e., choosing the best ordered list of k-items from the set of all items) by instead estimating a model which generates slates of a particular utility. The results demonstrate empirically that the approach outperforms several baselines. \\n\\nThis idea seems promising and provides an interesting methodological development for recommender systems. Presumably this approach, given the right data, could also learn interesting concepts such as substitution, complementarity, and cannibalization.\", \"the_paper_is_fairly_clear_although_the_model_is_never_formally_expressed\": [\"I would suggest defining it using math and not only a figure. The study is also interesting although the lack of publicly available datasets limits the extent of it and the strength of the results. Overall, it would be good to compare to a few published baselines even if these were not tailored to this specific problem.\", \"A few detailed comments (in approximate decreasing order of importance):\", \"Baselines. The current baselines seem to focus on what may be used in industry with a specific focus on efficient methods at test time.\", \"For this venue, I would suggest that it is necessary to compare to other published baselines. Either baselines that use a similar setup or, at least, strong collaborative filtering baselines that frame the problem as a regression one.\", \"If prediction time is important then you could also compare your method to others in that respect.\", \"training/test mismatch. There seems to be a mismatch between the value of the conditioning information at train and at test. How do you know that your fitted model will generalize to this \\\"new\\\" setting?\", \"In Figures: If I understand correctly the figures (4--6) report test performance as a function of training steps. Is that correct?\", \"Could you explain why the random baseline seems to do so well? That is, for a large number of items, I would expect that it should get close to zero expected number of clicks.\", \"Figure 6d. It seems like that subfigure is not discussed. Why does CVAE perform worse on the hardest training set?\", \"The way you create slates from the yoochoose challenge seems a bit arbitrary. Perhaps I don't know this data well enough but it seems like using the temporal aspects of the observations to define a slate makes the resulting data closer to a subset selection problem than an ordered list.\", \"Section 3. It's currently titled \\\"Theory\\\" but doesn't seem to contain any theory. 
Perhaps consider renaming to \\\"Method\\\" or \\\"Model\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Using variational auto-encoders to go beyond greedy construction of slates for recommendations\", \"review\": \"The latest revision is a substantially improved version of the paper. The comment about generalization still feels unsatisfying (\\\"our model requires choosing c* in the support of P(c) seen during training\\\") but could spur follow-up work attempting a precise characterization.\\nI remain wary of using a neural net reward function in the simulated environment, and prefer a direct simulation of Eqn5. With a non-transparent metric, it is much harder to diagnose whether the observed improvement in List-CVAE indeed corresponds to improved user engagement; or whether the slate recommender has gamed the neural reward function. Transparent metrics (that encourage non-greedy scoring) also have user studies showing they correlate with user engagement in some scenarios.\\nIn summary, I think the paper is borderline leaning towards accept -- there is a novel idea here for framing slate recommendation problems in an auto-encoder framework that can spur some follow-up works.\\n---\\nThe paper proposes using a variational auto-encoder to learn how to map a user context, and a desired user response to a slate of item recommendations. During training, data collected from an existing recommender policy (user contexts, displayed slate, recorded user response) can be used to train the encoder and decoder of the auto-encoder to map from contexts to a latent variable and decode the latent variable to a slate. Once trained, we can invoke the encoder with a new user context and the desired user response, and decode the resulting latent variable to construct an optimal slate.\", \"a_basic_question_for_such_an_approach_is\": \"[Fig2] Why do we expect generalization from the user responses c(r) seen in training to c(r*) that we construct for testing? At an extreme, suppose our slate recommendation policy always picks the same k items and never gets a click. We can optimize Eqn3 very well on any dataset collected from our policy; but I don't expect deploying the VAE to production with c(r*) as the desired user response will give us anything meaningful.\\nThe generalization test on RecSys2015-medium (Fig6d) confirms this intuition. Under what conditions can we hope for reliable generalization?\\n\\nThe comment about ranking evaluation metrics being unsuitable (because they favor greedy approaches) needs to be justified. There are several metrics that favor diversity (e.g. BPR, ERR) where a pointwise greedy scoring function will perform very sub-optimally. Such metrics are more transparent than a neural network trained to predict Eqn6. See comment above for why I don't expect the neural net trained to predict Eqn6 on training data will not necessarily generalize to the testing regime we care about (at the core, finding a slate recommendation policy is asking how best to change the distribution P(s), which introduces a distribution shift between training and test regimes for this neural net).\", \"there_are_2_central_claims_in_the_paper\": \"that this approach can scale to many more candidate items (and hence, we don't need candidate generation followed by a ranking step); and that this approach can reason about interaction-effects within a slate to go beyond greedy scoring. 
For the second claim, there are many other approaches that go beyond greedy (one of the most recent is Ai et al, SIGIR2018 https://arxiv.org/pdf/1804.05936.pdf; the references there should point to a long history of beyond-greedy scoring) These approaches should also be benchmarked in the synthetic and semi-synthetic experiments. At a glance, many neural rankers (DSSM-based approaches) use a nearly identical decoder to CVAE (one of the most recent is Zamani et al, CIKM2018 https://dl.acm.org/citation.cfm?id=3271800; the references there should point to many other neural rankers) These approaches should also be benchmarked in the expts.\\nThis way, we have a more representative picture of the gain of CVAE from a more flexible (slate-level) encoder-decoder; and the gain from using item embeddings to achieve scalability.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
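As a concrete companion to the training/test protocol summarized in the review above, here is a hedged, self-contained sketch; the single linear layers, dimensions, and function names are stand-ins of ours, not the paper's architecture. The encoder maps a logged slate and its observed response to a latent z, and at test time the decoder is conditioned on an ideal response c* instead.

```python
import torch
import torch.nn as nn

D, K, C, Z = 16, 10, 10, 8             # embed dim, slate size, condition dim, latent dim
enc = nn.Linear(K * D + C, 2 * Z)      # q(z | slate, response) -> (mu, logvar)
dec = nn.Linear(Z + C, K * D)          # p(slate | z, response)

def elbo_loss(slate_emb, response):    # one logged (slate, response) training pair
    mu, logvar = enc(torch.cat([slate_emb, response])).chunk(2)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()         # reparameterization
    recon = dec(torch.cat([z, response]))
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return ((recon - slate_emb) ** 2).sum() + kl                 # reconstruction + KL

def recommend(c_star):                 # test time: condition on the desired response
    z = torch.zeros(Z)                 # prior mean (or sample z ~ N(0, I))
    return dec(torch.cat([z, c_star])).view(K, D)  # K vectors -> nearest-document lookup
```

The generalization worry raised by the reviewers lives entirely in `recommend`: c* is chosen at or beyond the edge of the responses seen in training, so nothing in the objective itself guarantees the decoder behaves sensibly there.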
"{\"title\": \"Review\", \"review\": \"This paper proposes a List Conditional Variational Autoencoder approach for the slate recommendation problem. Particularly, it learns the joint document distribution on the slate conditioned on user responses, and directly generates full slates. The experiments show that the proposed method surpasses greedy ranking approaches.\", \"pros\": [\"nice motivation, and the connection with industrial recommendation systems where candidate nomination and ranker is being used is engaging\", \"it provides a conditional generative modeling framework for slate recommendation\", \"the simulation experiments very clearly show that the expected number of clicks as obtained by the proposed List-CVAE is much higher compared to the chosen baselines. A similar story is shown for the YOOCHOOSE challenge dataset.\"], \"cons\": [\"Do the experiments explicitly compare with the nomination & ranking industry standard?\", \"Comparison with other slate recommendation approaches besides the greedy baselines?\", \"Comparison with non-slate recommendation models of Figure 1?\", \"Overall, this is a very nicely written paper, and the experiments both in the simulated and real dataset shows the promise of the proposed approach.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1lG42C9Km | Intrinsic Social Motivation via Causal Influence in Multi-Agent RL | [
"Natasha Jaques",
"Angeliki Lazaridou",
"Edward Hughes",
"Caglar Gulcehre",
"Pedro A. Ortega",
"DJ Strouse",
"Joel Z. Leibo",
"Nando de Freitas"
] | We derive a new intrinsic social motivation for multi-agent reinforcement learning (MARL), in which agents are rewarded for having causal influence over another agent's actions, where causal influence is assessed using counterfactual reasoning. The reward does not depend on observing another agent's reward function, and is thus a more realistic approach to MARL than taken in previous work. We show that the causal influence reward is related to maximizing the mutual information between agents' actions. We test the approach in challenging social dilemma environments, where it consistently leads to enhanced cooperation between agents and higher collective reward. Moreover, we find that rewarding influence can lead agents to develop emergent communication protocols. Therefore, we also employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward. Finally, we show that influence can be computed by equipping each agent with an internal model that predicts the actions of other agents. This allows the social influence reward to be computed without the use of a centralised controller, and as such represents a significantly more general and scalable inductive bias for MARL with independent agents. | [
"multi-agent reinforcement learning",
"causal inference",
"game theory",
"social dilemma",
"intrinsic motivation",
"counterfactual reasoning",
"empowerment",
"communication"
] | https://openreview.net/pdf?id=B1lG42C9Km | https://openreview.net/forum?id=B1lG42C9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xVyeiKgV",
"SJx7XXeOeV",
"r1eujRhmCm",
"ByxwIPLQRQ",
"Hyld11VmAQ",
"H1gtWKXmCQ",
"B1lcZ2seR7",
"Hyx7WiPiTQ",
"rkg5v8DDpQ",
"Syxpm9_NaQ",
"SJgvSduEaQ",
"rklLguO4aX",
"rylAADza3m",
"Skg0DcDc2X"
],
"note_type": [
"official_comment",
"meta_review",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545347036369,
1545237275429,
1542864543877,
1542838095400,
1542827744191,
1542826241118,
1542663170116,
1542318842658,
1542055522151,
1541863972605,
1541863486889,
1541863406392,
1541380054257,
1541204581651
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/Area_Chair1"
],
[
"~Rachit_Dubey1"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"~Rachit_Dubey1"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1433/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1433/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"No response to revisions\", \"comment\": \"We would like to point out that the reviewers have not posted a response to our revisions. We replied to all concerns listed by the reviewers and made extensive revisions to the paper. Further, two\\u00a0of the reviewers appeared to like the paper (saying that the quality is good, that it is interesting, that they liked it), but only asked for clarifications. We provided these clarifications, but the reviewers have not acknowledged our response / revisions nor updated their scores.\"}",
"{\"metareview\": \"The reviewers raised a number of concerns including the appropriateness of the chosen application and the terms in which social dilemmas have been discussed, the lack of explanations and discussions, missing references, and the extent of the evaluation studies. The authors\\u2019 rebuttal addressed some of the reviewers\\u2019 concerns but not fully. Overall, I believe that the work is interesting and may be useful to the community (though to a small extent., in my opinion). However, the paper would benefit from additional explanations, experiments and discussions pointed in quite some detail by the reviewers. AS is, the paper is below the acceptance threshold for presentation at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"comment\": \"I am glad that you liked and appreciated my suggestion. The rephrased paragraph looks great - thanks for modifying that and good luck with the final reviews!\", \"title\": \"Thanks for your reply\"}",
"{\"title\": \"Thanks for the valuable input!\", \"comment\": \"Thank you so much for your interest in the paper! It\\u2019s wonderful to hear that you think it may be well-cited in the future.\\n\\nWe truly value your insight as a cognitive scientist and would be happy to revise the intro according to your suggestions. What do you think of the following revised language:\\n\\n\\u201cHumans have remarkable social learning abilities; some authors suggest that it is social learning that has given rise to cultural evolution, and allowed us to achieve unprecedented progress and coordination on a massive scale \\\\citep{van2011social, Herrmann1360}. Others emphasize that our impressive capacity to learn from others far surpasses that of other animals, apes, and even other proto-human species \\\\citep{henrich2015, harari2014sapiens, laland2017}.\\u201d\"}",
"{\"comment\": \"I found this paper to be really interesting and wanted to congratulate the authors on a neat idea that is very well executed. Good luck with the reviews.\\n\\nThat being said, as a cognitive scientist I find the claims made in the introduction to be slightly misrepresentative of the field of cognitive science and psychology and would suggest/request for a slight modification of those statements. Specifically, the statement \\\"Arguably, the most extraordinary aspect of human intelligence is not curiosity or a drive for power; rather it is our remarkable social abilities which have given rise to cultural evolution, and unprecedented progress and coordination on a massive scale\\\" is too strong with no real empirical and theoretical evidence behind the same. The human mind is very complex and I would be skeptical of reducing intelligence to simply one factor. Furthermore, I don't think cognitive science has advanced enough to the point to say what is the most critical aspect of human cognition (this is still widely debated and much work remains to be done to understand this). \\n\\nWith this in mind, here is my suggestion - I don't think the paper would be drastically affected if the authors were to point how critical social abilities are for human intelligence and then use that as a motivation for their work (as opposed to claiming that to be the most important aspect of cognition which remains unproven). I also want to state my reason for this suggestion/request - I can see this paper to be widely cited in the future and I think more and more papers in computer science will continue to draw inspiration from psychology. Therefore, it is also important that those psychology works are properly explained so that computer scientists not only appreciate the beauty of the human mind but are also aware of the complexity of our fascinating mind for continued inspiration. \\n\\nThanks a lot and congrats again on a great paper.\", \"title\": \"Interesting work and comments on the introduction\"}",
"{\"title\": \"Summary of revisions\", \"comment\": [\"In response to the feedback provided by the reviewers, we have made the following revisions to the paper:\", \"Added Section 6.4.1 to the Appendix, which provides additional results from training prosocial agents to directly optimize for group reward.\", \"Added further explanation on the causal inference in the centralized controller case to Section 2.1.\", \"Revised the MOA causal graph in Figure 4.\", \"Added the following references to the related work: Peysakhovich & Lerer (2018), Devlin et al. (2014), Foerster et al. (2018), Oudeyer & Kaplan (2006), Oudeyer & Smith (2016), Forestier & Oudeyer (2017).\", \"Added a future work section suggesting further applications of the influence reward.\", \"Rephrased the reference to Crawford & Sobel (1982) in the related work.\", \"Modified language in the introduction citing Tomasello\\u2019s research on infant cognition.\", \"Corrected references to \\u201cTable 11\\u201d to Table 4.\", \"Introduced LSTM acronym.\"]}",
"{\"title\": \"Clarifications on causal modeling\", \"comment\": \"Thanks for your feedback - we are glad that you found the paper interesting, and we hope to be able to clear up any confusion surrounding the causal modeling.\\n\\nYou are correct that the first method of implementing the causal influence reward described in section 2.1 has the important limitation that agents cannot mutually influence each other. However, we believe we have handled the conditioning correctly to satisfy the back door criterion, by imposing a sequential ordering on agents\\u2019 actions. We allow only a fixed number of agents to be influencers, and the rest are influencees. Only an influencer gets the causal influence reward, and only an influencee can be influenced. At each timestep, the influencers choose their actions first, and these actions are then given as input to the influencees. Let\\u2019s say that agent A and B are influencers, and C is an influencee. Then C receives both a^A_t and a^B_t as input. When computing the causal influence of A on C, we also add a^B_t to the conditioning set, as you describe. However, we do not condition on actions downstream of C, as you mention. You are correct that in this model the causal graph does need to be known a priori, and in that sense it is more limited. We only introduced this initial model as a proof-of-concept, and retained it in the paper because it is associated with some of the interesting qualitative results we present in Section 4.1. We will modify the paper to include a more detailed description of the sequential nature of agents\\u2019 actions in order to reduce confusion in the future. However, you are correct that the MOA approach is likely to be more effective in practice, and we would like to emphasize the success of this approach, and the communication results in Section 4.2, as more important contributions. \\n\\nYou are right that we are missing an arrow from s_t -> s_{t+1}, and the partially observed states s^B_{t+1} in Figure 4; we will add these to the Figure and update it in the next revision. You are also correct that we do not need to condition on a_t^B, but we do allow the model to use a_t^B when making its predictions about a_{t+1}^B, so we have shown this as shaded in the Figure. \\n\\nWe don\\u2019t believe there is a missing log in equation 2; the log is absorbed into the KL term.\"}",
"{\"title\": \"Table 11 is actually Table 4\", \"comment\": \"Thanks a lot for pointing that out. This is a typo in the manuscript. Table 11 in the text refers to Table 4 of the appendix.\"}",
"{\"title\": \"Interesting paper, possibly some confusion on some causal modelling (especially Section 2.1)\", \"review\": \"The paper introduces a new intrinsic reward for MARL, representing the causal influence of an agent\\u2019s action on another agent counterfactually. The authors show this causal influence reward is related to maximising the mutual information between the agents\\u2019 actions. The behaviour of agents using this reward is tested in a set of social dilemmas, where it leads to increased cooperation and communication protocols, especially if given an explicit communication channel. As opposed to related work, the authors also equip the agents with an internal Model of Other Agents that predicts the actions of other agents and simulates counterfactuals. This allows the method to run in a decentralized fashion and without access to other agents\\u2019 reward functions.\\n\\nThe paper proposes a very interesting approach. I\\u2019m not a MARL expert, so I focused more on the the causal aspects. The paper seems generally well-organized and well-written, although I\\u2019m a bit confused about the some of the causal modelling decisions and assumptions. This confusion and some potential errors, which I describe in detail below, are the reason for my borderline decision, despite liking the paper otherwise. \\n \\nFirst, I\\u2019m a bit confused about the utility of the Section 2.1 model (Figure 1), mostly because of the temporal and multiple agents aspects that seem to be dealt with (\\u201cmore\\u201d) correctly in the MOA model. Specifically in Figure 1, one would need to assume that there is only one agent A influencing agent B at the same time (and agent B does not influence anything else). For example, there is no other agent C which actions also influence agent B, and no agent D that is influenced by agent B, otherwise the backdoor-criterion would not work, unless you add also the action of agent C to the conditioning set (or its state). Importantly, adding the actions of all agents, also a potential agent D that is downstream of B would be incorrect. So in this model there is some kind of same time interaction and there seems to be the need for a causal graph that is known a priori. These problems should disappear if one assumes that only the time t-1 actions can influence the time t actions, as in the MOA model. I assume the idea of the Figure 1 model was to show a relationship with mutual information, but for me specifically it was quite confusing. \\n\\nI was much less confused by the MOA causal graph represented in Figure 4, although I suspect there are quite some interactions missing (for example s_t^A causes u_t^A similarly to the green background? s_t causes s_{t+1} (which btw in this case should probably be split in two nodes, one s_{t+1} and one s_{t+1}^B?). Possibly one could also add the previous time step for agent B (with u_{t+1}^B influenced by u_t^B I would assume?). As far as I can see there is no need to condition on a_t^B in this case to see the influence of a_t^A on a_{t+1}^B, u_t^A and s_t^A should be enough?\", \"minor_details\": \"Is there possibly a log missing in Eq. 2?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Explanation and discussion\", \"comment\": \"Thanks for your questions about the results in Figure 6. With regards to Figure 6a, we set the limit of 3e8 steps for the first two experiments a priori, and did not change it based on the results of the experiments, to ensure we did not bias the results. While it does appear that the visible actions baseline may reach the performance of the influence model in this experiment, we consider the initial centralized controller experiments to be a simple proof-of-concept. In practice, one would most likely always prefer to use the MOA method for computing influence, since it provides the important benefit that agents do not need to observe each others\\u2019 reward or require training with a centralized controller in order to compute influence. As is evident in Figures 6c, 13e, and 14e, the MOA method reliably and clearly outperforms all baselines in the Cleanup game.\\n\\nWith regard to Figure 6f, because multi-agent training is highly stochastic and non-stationary, as agents learn to adapt to each other it can change the dynamics of the environment such that formerly effective strategies no longer result in high reward. For example, in Harvest, as agents get more proficient at collecting apples efficiently, they may actually deplete the apples faster, thus paradoxically lowering overall reward. As noted in Section 6.4 of the Appendix, if one agent fails to learn to collect apples, it actually makes Harvest easier for the other agents, since the apples are less easily exhausted. However, if this agent then begins to collect apples they will quickly be exhausted. Figure 6f shows some of these unstable dynamics for a single hyperparameter setting with 5 random seeds in Harvest. However, Figure 14f in the Appendix plots the same game using 5 hyperparameter settings with 5 seeds each, giving a more stable training curve. \\n\\nYou make an excellent point about the fact that agents must balance their own self-interest with the intrinsic reward of influencing others. We actually hypothesize that the reason the agent in the example on page 7 learned to communicate was because communication is the cheapest way to obtain influence while still pursuing its own environment reward. In terms of generalizing to new tasks, it is straightforward to tune the parameter which trades off the environment and influence rewards to suit a new task. We will add further discussion about this trade off to in an updated version of the paper. \\n\\nWe have also added text introducing the acronym LSTM - thanks for pointing that out.\"}",
"{\"title\": \"Explanation of the results and contributions and summary of changes (2/2)\", \"comment\": \"Thank you for pointing out the connection to related work on reward shaping. We initially understood reward shaping to be specific to a given environment, and would argue that intrinsic motivation is designed to be a more general mechanism that works across environments, and thus focused on related work in intrinsic motivation. However, at your suggestion we have begun looking for related work in the reward shaping literature (such as [4-5]) and after reading these works in detail, will include references to them in an updated version of the text. We are happy to include other specific papers that you can recommend.\\n\\nYou raise an interesting question about whether the influence reward, if used to train autonomous vehicles, could lead to vehicles being exploited for information. The example of autonomous driving was mainly meant to illustrate the benefit of decentralized training. Obviously the problem of cars driving in the real world is much more complex than the simulations tested here, and so we cannot make claims about whether the influence reward could generalize to this setting. However, it is interesting to consider the question of the degree to which the desire to influence may lead to being exploited. Since the agents balance both influence and environmental reward based on a hyperparameter, this can be tuned to ensure influence does not override the drive for environmental reward. We hypothesize that sharing information is actually a relatively cheap way to influence another agent, without sacrificing much in terms of one\\u2019s own environmental reward; this may protect agents from being unduly exploited. However, we should emphasize that we think the approach of training agents with influence goes well beyond the application of autonomous vehicles. As we have shown in the paper, influence can be an effective way to train agents to learn to communicate with each other, and could thus be valuable whenever meaningful communication is desired. We think that this could be an important and novel contribution to the emergent communication community. \\n\\n[1] Edward Hughes, Joel Z Leibo, Matthew G Phillips, Karl Tuyls, Edgar A Duenez-Guzman, Antonio Garc\\u0131a Castaneda, Iain Dunning, Tina Zhu, Kevin R McKee, Raphael Koster, et al. Inequity aversion improves cooperation in intertemporal social dilemmas. In Advances in neural information processing systems (NIPS), Montreal, Canada, 2018.\\n\\n[2] Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pp. 464\\u2013473. International Foundation for Autonomous Agents and Multiagent Systems, 2017.\\n\\n[3] Devlin, S., Yliniemi, L., Kudenko, D., & Tumer, K. (2014, May). Potential-based difference rewards for multiagent reinforcement learning. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 165-172). International Foundation for Autonomous Agents and Multiagent Systems.\\n\\n[4] Devlin, S., Yliniemi, L., Kudenko, D., & Tumer, K. (2014, May). Potential-based difference rewards for multiagent reinforcement learning. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 165-172). 
International Foundation for Autonomous Agents and Multiagent Systems.\\n\\n[5] Peysakhovich, A., & Lerer, A. (2018, July). Prosocial learning agents solve generalized stag hunts better than selfish ones. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 2043-2044). International Foundation for Autonomous Agents and Multiagent Systems.\"}",
"{\"title\": \"Explanation of the results and contributions and summary of changes (1/2)\", \"comment\": \"Thank you for your detailed feedback on our paper, we appreciate your insight. You are right that the reference to human cognition may be a bit grandiose, and we have uploaded a revised version of the paper in which we have rephrased this reference to show that we are only deriving a loose inspiration from this work. We are also happy to remove it entirely if you think it detracts rather than adds from the paper. We have also rephrased the reference to Crawford and Sobel as you suggested.\\n\\nIn addition to the contribution correctly pointed out in your introductory paragraph, the paper also presents results on emergent communication. Further, it shows that the proposed causal influence reward, with estimated models of other agents, enables decentralized training of deep recurrent neural network agents in very challenging settings: directly from pixels, subject to partial-observability and with no knowledge of other agents\\u2019 rewards. Previous deep MARL works, e.g. the emergent communication works of Foerster et al., resorted to centralized training because their decentralized approaches failed to learn. This paper innovates a solution to this learning problem, and as such we feel it makes an important contribution. \\n\\nTo your point about comparing the results to those obtained via prosocial reward functions, we actually do compare to a recent method that gives agents a prosocial inductive bias (Inequity Aversion [1]) in Table 4 of the Appendix. These results show that our method is able to exceed the performance of Inequity Aversion in several experiments, in spite of the fact that agents trained with influence do not have access to other agents\\u2019 rewards, while prosocial agents do, as you correctly point out. We can also compute the results of comparing to agents trained to optimize directly for the group reward, if you think this is important to include. However, we note that explicitly programming pro-sociality into agents by training them to optimize for group reward, as in [3], can lead to problems with spurious rewards and \\u201clazy agents\\u201d, and is not possible if the reward functions of other agents are unknown. Influence is a more general approach, and as such it represents a significant contribution to the state-of-the-art. \\n\\nWe agree that testing on Stag Hunt would be an interesting future extension, and expect that our method would likely perform well in this game, since emergent communication should allow agents to coordinate better and thus be beneficial for all agents. We are also excited about testing whether influence can improve coordination in manipulation and control tasks as well. The reason we initially focused on the Tragedy of the Commons and Public Goods games presented in this paper is because we felt that they would present the most difficult challenge, since they not only require coordination, but also require that coordination to be prosocial. These were known to be the hardest established benchmark tasks in this domain, and allowed us to compare easily with prior work [1-2]. It is known that vanilla MARL converges to poor equilibria in these games as well. 
Since this paper is mainly about presenting a new agent algorithm, we felt that running comparisons on all possible kinds of games from game theory would be out of scope, especially given length restrictions on the paper; we decided that presenting results indicating that influence can be used to train effective communication protocols was a more important contribution. \\n\\nBecause of the points above, we must respectfully disagree with your assertion that the results are thin. We give results from 3 experiments which are stable across 15 hyper-parameters, each with 5 seeds, and using multiple metrics (please see the Appendix for additional results, including further results testing on a 3rd, proof-of-concept environment). These results constitute very strong empirical evidence obtained in a rigorous way. Our experiments involve multiple agents with memory acting under partial observability, and with non-trivial, nonlinear, high-dimensional, recurrent policies. These aspects of the problem make it very challenging from an empirical perspective, and beyond theoretical analysis using existing tools.\"}",
"{\"title\": \"Review\", \"review\": \"INTRINSIC SOCIAL MOTIVATION VIA CAUSAL INFLUENCE IN MULTI-AGENT RL\", \"main_idea\": \"The authors consider adding a reward term to standard MARL which is the mutual information between its actions and the actions of others. They show that adding this intrinsic social motivation can lead to increased cooperation in several social dilemmas.\", \"strong_points\": [\"This paper is a novel extension of ideas from single agent RL to multi agent RL, there are clear benefits from doing reward shaping in the right way to make deep RL work better.\", \"The paper focuses on cooperative environments which is an underfocused area in RL right now\"], \"weak_points\": [\"There is missing discussion of a lot of literature. The causal influence term can be thought of as a form of reward shaping. There is little discussion on the (large) literature on reward shaping to get MARL to exhibit good behavior.\", \"The results feel quite thin. Related to the point above: the theory of different types of reward shaping (e.g. optimistic Q-learning, prosociality, etc\\u2026) are well understood. It is not clear to me under what conditions the authors\\u2019 proposed augmentation to the reward function will lead to better or worse outcomes. The experiments in this paper are quite simple and only span a small set of environments so it would be good to have at least some formal theory.\", \"Social dilemmas don\\u2019t seem like the best application. The authors define the social dilemma as: \\u201cFor each individual agent, \\u2018defecting\\u2019 i.e. non-cooperative behavior has the highest payoff.\\u201d With the intrinsic motivation, agents learn to cooperate. This is good, however, if we\\u2019re thinking about situations where agents aren\\u2019t trained together and have their own rewards (the authors\\u2019 example: \\u201cautonomous vehicles are likely to be produced by a wide variety of organizations and institutions with mixed motivations\\u201d) then won\\u2019t these agents be exploited by rational agents? Other solutions to this problem (e.g. recent papers on tit-for-tat by Lerer & Peysakhovich or LOLA by Foerster et al. construct agents where defectors get explicitly punished and so don\\u2019t want to try exploiting). Is there something I am missing here? Do the agents learn to punish non-cooperators (if no, isn\\u2019t it rational at that point to just not cooperate and won\\u2019t self-driving cars trained via this method get exploited by others)?\", \"Relate to the point(s) above: a better environment for application here seems to be coordination games/\\u201dStag Hunt\\u201d games where it is known that MARL converges to poor equilibria and many other methods e.g. optimistic Q-learning or prosociality have been invented to make things work better. Perhaps the method proposed here will work better than these (and it has the appealing property that it does not require the ability to observe the other agents' rewards as e.g. prosociality does)\", \"This paper contains some quite grandiose language connecting the proposed reward shaping to \\u201chow humans learn\\u201d (example: It may also have correlates in human cognition; experiments show that newborn infants are sensitive to correspondences between their own actions and the actions of other people, and use this to coordinate their behavior with others) it\\u2019s unclear to me that humans experience extra reward for their actions having high mutual information (and/or causal information with others). 
While it\\u2019s fine to argue some of these points at a high level I would suggest scrubbing the text of the gratuitous references to this.\"], \"nits\": \"\\u201cCrawford & Sobel (1982) find that once agents\\u2019 interests diverge by a finite amount, no communication is to be expected.\\u201d \\u2013 this is an awkward phrasing of the Crawford and Sobel result (it can be read as \\u201cif interests diverge by any epsilon there can be no communication\\u201d). The CS result is that information revealed in communication (in equilibrium) is proportional to amount of common interest.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review of Paper \\\"Intrinsic Social Motivation via Causal Influence in Multi-Agent RL\\\"\", \"review\": \"This paper proposes an approach to model social influence in a scenario-independent manner by instantiating the concept of intrinsic motivation and combine various human abilities as part of a reinforcement learning function in order to improve the agent's operation in social dilemma scenarios.\\n\\nAgents are operationalised as convolutional neural network, linear layers and LSTM. Using these base mechanisms, different abilities (communication, models of other agents (MOA)), their causal influence is inferred based on counterfactual actions. The architecture is explored across two different sequential social dilemmas. \\n\\nThe architecture is described in sufficient detail, with particular focus on the isolation of causal influence for communication and MOA influence. The experimental evaluation is described in sufficient detail, given the low complexity of the scenarios. While the agents with communicative ability and MOA show superior performance, a few results warrant clarification.\\n\\nFigure 6a) highlights the performance of influencers in contrast to a visible actions baseline. This specific scenarios shows the necessity to run experiments for larger number of runs, since it appears that action observations may actually outperform influencer performance beyond 3 steps. Please clarify what is happening in this specific case, and secondly, justify your choice of steps used in the experimental evaluation. \\n\\nAnother results that requires clarification is Figure 6f), which is not sufficiently discussed in the text, yet provides interesting patterns between the MOA baseline performance decaying abruptly at around 3 steps, with the influence MOA variant only peaking after that. Please clarify the observation. Also, could you draw conclusions or directions for a combination of the different approaches to maximise the performance (more generally, beyond this specific observation)? \\n\\nA valuable discussion is the exemplification of specific agent behaviour on Page 7. While it clarifies the signalling of resources in this specific case, it also shows shortcomings of the model's realism. How would the model perform if agents had limited resources and would die upon depletion (e.g. the de facto altruistic influencer in this scenario - since it only performs two distinct actions)? The extent of generalisability should be considered in the discussion. \\n\\nIn general, the paper motivates and discusses the underlying work in great detail and is written in an accessible manner (minor comment: the acronym LSTM is not explicitly introduced). The quality of presentation is good.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
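As a concrete illustration of the causal-influence reward discussed in the review above, the toy sketch below computes agent A's influence on agent B as the KL divergence between B's policy conditioned on A's actual action and B's marginal policy under counterfactual actions of A. The probability tables are made up, and this is one plausible reading of the influence formulation rather than the paper's implementation.

```python
import numpy as np

# Toy tabular setting: agent A picks one of 3 actions; agent B's policy
# over 4 actions depends on A's action. All distributions are hypothetical.
p_a = np.array([0.5, 0.3, 0.2])                # A's policy p(a_A | s)
p_b_given_a = np.array([[0.70, 0.10, 0.10, 0.10],   # p(a_B | s, a_A), one row per a_A
                        [0.10, 0.70, 0.10, 0.10],
                        [0.25, 0.25, 0.25, 0.25]])

def influence_reward(a_taken: int) -> float:
    """KL between B's policy given A's actual action and B's marginal
    policy when A's action is replaced by counterfactual samples."""
    conditional = p_b_given_a[a_taken]
    marginal = p_a @ p_b_given_a               # sum_a' p(a'_A) p(a_B | a'_A)
    return float(np.sum(conditional * np.log(conditional / marginal)))

for a in range(3):
    print(f"a_A={a}: influence={influence_reward(a):.4f}")
```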
|
Byxz4n09tQ | Model Compression with Generative Adversarial Networks | [
"Ruishan Liu",
"Nicolo Fusi",
"Lester Mackey"
] | More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Model compression (also known as distillation) alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal compression. In this work, we propose to augment the compression dataset with synthetic data from a generative adversarial network (GAN) designed to approximate the training data distribution. Our GAN-assisted model compression (GAN-MC) significantly improves student accuracy for expensive models such as deep neural networks and large random forests on both image and tabular datasets. Building on these results, we propose a comprehensive metric—the Compression Score—to evaluate the quality of synthetic datasets based on their induced model compression performance. The Compression Score captures both data diversity and discriminability, and we illustrate its benefits over the popular Inception Score in the context of image classification. | [
"Model compression",
"distillation",
"generative adversarial network",
"GAN",
"deep neural network",
"random forest",
"ensemble",
"decision tree",
"convolutional neural network"
] | https://openreview.net/pdf?id=Byxz4n09tQ | https://openreview.net/forum?id=Byxz4n09tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1gLztVJeE",
"Skxxk-wlJN",
"SJgkBBmkkV",
"HyxInbRhRm",
"B1lCL19nAQ",
"ryl2dVuqAQ",
"H1gl10wcA7",
"BJeSQhv9Am",
"HyxcNcq5hm",
"HJgHTcROhQ",
"BJgyrccuh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544665358375,
1543692504370,
1543611702930,
1543459245654,
1543442261886,
1543304308218,
1543302615973,
1543302173362,
1541216818495,
1541102269104,
1541085751481
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1432/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1432/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1432/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1432/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1432/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1432/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors propose a scheme to compress models using student-teacher distillation, where training data are augmented using examples generated from a conditional GAN.\\nThe reviewers were generally in agreement that 1) that the experimental results generally support the claims made by the authors, and 2) that the paper is clearly written and easy to follow.\\nHowever, the reviewers also raised a number of concerns: 1) that the experiments were conducted on small-scale tasks, 2) the use of the compression score might be impractical since it would require retraining a compressed model, and is affected by the effectiveness of the compression algorithm which is an additional confounding factor. The authors in their rebuttal address 2) by noting that the student training was not too expensive, but I believe that this cost is task specific. Overall, I think 1) is a significant concern, and the AC agrees with the reviewers that an evaluation of the techniques on large-scale datasets would strengthen the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper could be strengthened by evaluations on large-scale tasks\"}",
"{\"title\": \"an answer to the rebuttal\", \"comment\": \"I read the rebuttal. I think the assessment of this paper as being \\\"Marginally below acceptance threshold\\\" is accurate.\"}",
"{\"title\": \"Additional clarifications\", \"comment\": \"Thank you for your responses and your suggestions to further improve the paper; we include some additional clarifications below.\\n\\n-Practicality of compression score\\n\\nThe compression score is quite practical to compute as only one epoch of training is conducted (this is stated and motivated in Sec. 5.1, but in the final version we will highlight this in the following subsection as well so that there is no misunderstanding). In our Sec. 5 experiment in the revision, we report that the Inception Score evaluation required 1436.6s using the code of Salimans et al., while the Compression Score required 350.1s. Both evaluations were done in Tensorflow using an NVIDIA Tesla V100 GPU.\\n\\nTo compute the Inception score we perform one forward pass on the synthetic 50K images. To compute the Compression Score we compute one forward and one backward pass on the same 50K images to train the student for a single epoch. We finally perform one forward pass on 10K test images to compute student test accuracy. Our use of a much smaller student network (the NIN net from Sec. 3) also shortens the relative computation time. We will clarify these points in the final version.\\n\\n-Standard data augmentation\\n\\nIn the revision, we report the effect of data augmentation alone (as well as data augmentation on top of GAN-MC) in Fig. 2b. Standard data augmentation corresponds to the point in the GAN + standard augmentation curve with p_fake = 0. We see that standard data augmentation alone increases student test accuracy to 73.3% (vs. 76.7% for GAN-MC alone). In addition, any augmentation that can be performed on the training data can also be applied to GAN data for added diversity; in this case standard augmentation on top of GAN-MC has a maximum student test accuracy of 77.7%.\\n\\nWe agree that it will be valuable to evaluate additional data augmentation schemes as well, such as the mixup data augmentation you mentioned. In the final version, we will report the performance of those schemes both alone and applied on top of GAN-MC. \\n\\n-GAN strength\\n\\nWhile we have not yet explored varying the GAN architecture, we have found that the quality of the GAN generator has a significant impact on student performance. For example, in Fig 2c, student test accuracy ranges from .1 to .7 as a function of the quality of the AC-GAN used to generate the synthetic data. However, notably, compression performance does not depend on the quality of the synthetic class labels associated with each GAN point (while supervised learning performance certainly does).\"}",
"{\"title\": \"Additional clarifications\", \"comment\": \"Thank you for your responses; we include a few additional clarifications below.\\n\\n-Value of augmentation with test data\\n\\nThanks for this suggestion; in the final version we will update our references to these works to clarify that they provide direct evidence for the value of augmenting the compression set with test data. \\n\\n-Standard augmentation \\n\\nWe believe our initial response was unclear. In the revision, we report the effect of data augmentation alone (as well as data augmentation on top of GAN-MC) in Fig. 2b. Standard data augmentation corresponds to the point in the GAN + standard augmentation curve with p_fake = 0. We see that standard data augmentation alone increases student test accuracy to 73.3% (vs. 76.7% for GAN-MC alone). In addition, any augmentation that can be performed on the training data can also be applied to GAN data for added diversity; in this case standard augmentation on top of GAN-MC has a maximum student test accuracy of 77.7%.\\n\\nWe agree that it will be valuable to evaluate additional image augmentation schemes as well. In the final version, we will report the performance of those schemes both alone and applied on top of GAN-MC. \\n\\n-Inception vs. Compression\\n\\nTo compute the Inception score we perform one forward pass on the synthetic 50K images. To compute the Compression Score we compute one forward and one backward pass on the same 50K images to train the student for a single epoch. We finally perform one forward pass on 10K test images to compute student test accuracy. The reviewer is correct that our use of a much smaller student network (the NIN net from Sec. 3) also shortens the computation time. We will clarify these points in the final version.\\n\\nIn the revision, we state that only one epoch of training is performed in Sec. 5.1, but in the final version we will highlight this in the following subsection as well so that there is no misunderstanding.\"}",
"{\"title\": \"Thanks for the detailed response\", \"comment\": [\"Additional comments:\", \"Thanks for pointing our these results in previous work of Bucila et al. I think this deserves to be explicitly mentioned in your paper, because it provides direct evidence for your claim.\", \"I think you should report the effect of data-augmentation *alone*. Also, adding more than one data-augmentation strategy would strengthen the result (especially for image data, where there are plenty of effective methods). That being said, I agree with a point raised reviewer Reviewer-2 that for other kinds of data (e.g. tabular data) there might not be effective ways for data-augmentation.\", \"I am not sure how you compare the execution time of inception score compared to your methods. Fundamentally, inception score requires only a forward-pass in a pretrained model, which can be done with a small number of examples (e.g. 1K or 5K). Training a model from scratch would require a forward and backward pass, and probably on much more data, but I guess the model is much smaller than inception. Also, I'm not sure it's clear from the paper that you do only one epoch of training.\", \"I am going to raise my score to 6, because I think the paper has some interesting aspects to it. I still believe it's a borderline paper, especially that I'm not convinced of the effectiveness of compression score and that GANs can be substantially more effective than data-augmentation.\"]}",
"{\"title\": \"Clarifications and new experiments\", \"comment\": \"-Overfitting and weak students\\nEven when the compression set is large, overfitting can occur because the distribution of teacher logits looks very different for its training points than for fresh test points due to the teacher\\u2019s overfitting to the training set. If the student is strong enough to distinguish between the train and test logit distributions, then we expect augmentation to help. If the student is so weak that it cannot distinguish between the train and test distributions and the training set is large, then augmentation may not help. We see an example of this in the error curves of Fig. 1. In the first few epochs of student training, the real training data leads to a greater decrease in test loss than the GAN augmented data, but the GAN augmented data eventually leads to a better test loss. Here, the network constrained to train for only one epoch is an example of a weak student with less discriminating ability than the fully trained network.\\n\\n-Suboptimal compression with training data\\nAugmenting the compression set with held-out data from the training distribution has been shown to improve compression in prior work (see Fig. 4 in Bucila et al. and Sec. 4 of Ba and Caruana). We view this as the ideal case. All of our experiments were designed to provide even stronger evidence that compression with training data alone is suboptimal: not only can compression be improved, it can be improved in the common scenario where one does not have access to fresh real data.\\n\\n-Infinite data\\nThis is a very good point. We have replaced \\u201cinfinite\\u201d with \\u201cauxiliary\\u201d throughout to avoid any confusion and now emphasize that our approach does not hinge on having an infinite amount of data but rather benefits from access to an auxiliary source of realistic data. \\n\\n-Value of synthetic data\\nWe agree that the idea of using GAN datapoints to improve compression is counterintuitive, and we now clarify in the revision why it is sensible: The goal in model compression is to approximate the teacher prediction function g which maps from inputs to predictions z. Because the teacher is a function of the training data alone, g itself is a functional of the training data alone and is otherwise independent of the unknown distribution that generated that data. In addition, because we have access to the teacher, we have the freedom to query the function g at any point, and hence our information concerning g is limited only by number of queries we can afford. When we generate a new query point x, we can observe the actual target value of interest, the teacher\\u2019s prediction g(x) (this is not true for the original supervised learning task, where no new labels can be observed). We believe these properties make the model compression task more tractable than the original supervised learning task and ideal for data augmentation with generative models. Similar rationale is given in Bucila et al.\\n\\nln addition, we believe that the best proof that GAN-MC is valuable is to demonstrate that it works in practice. In our submission we show that, in multiple experiments on multiple datasets with a variety of students and teachers, true test set performance improves when GAN-MC is used. 
Especially compelling is Figure 2a which shows that no matter what non-zero value of pfake we use, GAN-MC improves upon compression with training data alone.\\n\\n-Compression Score practicality\\nThe compression score is quite practical to compute as only one epoch of training is conducted; in our Sec. 5 experiment, the Inception Score evaluation required 1436.6s using the code of Salimans et al., while the Compression Score required 350.1s. Both evaluations were done in Tensorflow using an NVIDIA Tesla V100 GPU.\\n\\n-Idiosyncrasies\\nWe agree that any imperfect prediction rule is subject to idiosyncrasies. However, as we clarify in the revision, the Inception Score is completely determined by the idiosyncratic output of the network, while the Compression Score is determined by performance on real test data, so the Compression Score will only increase if the synthetic data leads to better classification of real data.\\n\\n-Standard data augmentation\", \"we_have_added_an_experiment_showing_that_standard_image_augmentation_complements_gan_augmentation\": \"while standard augmentation improves compression on training data alone, the greatest gain is achieved by applying standard augmentation on top of GAN-MC. For tabular data, as Rev. 2 notes, it is unclear how to define an appropriate analogue of standard image augmentation, but GAN-MC is equally applicable.\\n\\n-Compression score range\\nWe apologize for the confusion and clarify in the revision. The Compression Score can take on values larger than 1; however, we have found empirically (see, e.g., Fig. 1) that accuracy after the first epoch is typically superior when training data is used than when GAN data is used. This was part of our rationale for using only 1 epoch of training to define the Compression Score.\"}",
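A minimal sketch of the batch-level mixing implied by the responses above: each example is replaced by a GAN sample with probability p_fake, the teacher is queried on the resulting batch, and a standard temperature-scaled distillation loss is applied. The generator interface (including its latent_dim attribute) and all hyperparameter values are hypothetical, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distill_batch(student, teacher, x, y, generator, p_fake=0.5, T=4.0, alpha=0.9):
    """One GAN-MC-style distillation step on a mixed real/synthetic batch."""
    x = x.clone()
    fake_mask = torch.rand(x.size(0), device=x.device) < p_fake
    if fake_mask.any():
        z = torch.randn(int(fake_mask.sum().item()), generator.latent_dim,
                        device=x.device)
        x[fake_mask] = generator(z)            # synthetic images replace real ones
    with torch.no_grad():
        t_logits = teacher(x)                  # the teacher can be queried anywhere
    s_logits = student(x)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    real = ~fake_mask                          # hard labels exist only for real rows
    hard = (F.cross_entropy(s_logits[real], y[real]) if real.any()
            else torch.zeros((), device=x.device))
    return alpha * soft + (1 - alpha) * hard
```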
"{\"title\": \"New experiments and tabular data impact\", \"comment\": \"We thank the reviewer for the positive and constructive feedback; we provide detailed responses below and have updated our manuscript accordingly.\\n\\n-Simple data augmentation\\nA primary contribution of this work is developing an improved compression approach that works for both tabular and image data alike. We have added an experiment showing that standard image augmentation complements GAN augmentation: while standard augmentation improves compression on training data alone, the greatest gain is achieved by applying standard augmentation on top of GAN-MC. For tabular data, the reviewer is correct that it is unclear how to define an appropriate analogue of standard image augmentation, but GAN-MC is equally applicable.\\n\\n-Tabular data impact\\nIn the revision, we highlight the literature attesting to the importance of compressing tabular random forests for memory and computation-constrained environments (e.g., Model Compression, Compressing Random Forests, Globally Induced Forest: A Prepruning Compression Scheme, and L1-based compression of random forest models). GAN-MC is especially effective in this setting, where we achieve up to 375-fold reductions in test time computation and storage by compressing a 500-tree forest into a single decision tree of similar quality. To clarify the impact on test-time throughput, we now include displays of teacher vs. student test time throughput in Fig. 3f.\\n\\n-Additional experiments\\nIn response to all reviewers\\u2019 suggestions, we now explore how student performance varies as a function of GAN quality in Fig. 2c and highlight the difference between using GAN-augmentation for compression and for the original supervised learning task in Fig. 2d for neural networks and in Fig. 3f for random forests.\"}",
"{\"title\": \"Experiment explanations and clarifications\", \"comment\": \"We thank the reviewer for the positive and constructive feedback; we provide detailed responses below and have updated our manuscript accordingly.\\n\\n-Why GAN generated data are particularly effective for knowledge distillation / Accuracy of GAN-augmented supervised learning\\n\\nWe clarify in the revision that there is an important difference between using GANs for model compression and using GANs for the original supervised learning task. The goal of the original supervised learning task is to approximate the ideal mapping f* between inputs x and outputs y. This ideal f* is a functional of the true but unknown distribution underlying our data, and our information concerning f* is limited by the real data we have collected. The goal in model compression is to approximate the teacher prediction function g which maps from inputs to predictions z. Because the teacher is a function of the training data alone, g itself is a functional of the training data alone and is otherwise independent of the unknown distribution that generated that data. In addition, because we have access to the teacher, we have the freedom to query the function g at any point, and hence our information concerning g is limited only by number of queries we can afford. In particular, when we generate a new query point x, we can observe the actual target value of interest, the teacher\\u2019s prediction g(x); this is not true however for the supervised learning task, where no new labels can be observed. We believe these properties make the model compression task a more tractable one and one that is ideal for data augmentation with generative models.\\n\\nFollowing the reviewer\\u2019s suggestion, we complement this explanation with two new experiments (one for DNNs in Fig. 2d and one for random forests in Fig. 3e) showing that the same GAN data that greatly improves distillation performance either harms or scarcely improves performance in the supervised learning task.\\n\\n-GAN quality matters\\nThe reviewer is correct that the performance of GAN-MC depends on the quality of the GAN; to make this clearer we have introduced a new \\u201cquality matters\\u201d experiment demonstrating how student performance varies as a function of the number of GAN training iterations.\\n\\n-Fairness of Table 1 comparison\\nWe address all of these questions in the revision. For image data, we use the same batch sizes and process the same number of batches for distillation with and without GAN data. The improved performance apparently comes from the exposure to synthetic data in addition to the available training data. Interestingly, in Fig. 2a we see improved performance for every non-zero value of p_fake. For tabular data, we explicitly augment the training dataset of size n_train with 9n_train synthetic datapoints and run the default Random Forest scikit-learn training code. All hyperparameters except p_fake were set to the default values recommended in (Li, 2018). We select p_fake from the values {0, 0.1, 0.2, \\u2026, 1.0} using a validation set and report performance on the test set.\\n\\n-Compression vs. Inception\\nWe clarify the conceptual advantages of the compression score over the inception score in the revision. The Inception Score measures across-class diversity but does not account for within class diversity. 
In addition, the Inception Score measures a form of discriminability based on the predictions of a pre-trained neural network but is easily misled by datapoints that elicit high confidence predictions without resembling real data. Because the Compression Score is determined by student performance on real test data, it benefits from both within and across-class diversity (this typically leads to a higher-quality students) and prefers datapoints that improve real test set error (students trained on less realistic datapoints tend to yield worse test error).\\n\\n-Conclusion\\nWe have added a conclusion section and discuss the mentioned related work therein.\"}",
"{\"title\": \"More explanation and clarification on experiments required\", \"review\": \"This paper focused on training a small network with a pre-trained large network in a student-teacher strategy, which also known as knowledge distillation. The authors proposed to use a separately trained GAN network to generate synthetic data for the student-teacher training.\\n\\nThe proposed method is rather straightforward. The experimental results look good, GAN generated data help train a better performed student in knowledge distillation. However, I have concerns about both motivations and experiments. \\n\\n1. The benefits of GAN for generating synthetic data to assist supervised training are still mysterious, especially when GAN is separately trained on the same dataset without more information introduced. I would love the authors to clarify why GAN generated data are particularly effective for knowledge distillation. Does GAN generated data also help standard supervised training? I would expect following experiments: use mixture of training and GAN data to train teacher and student network by standard supervised loss without knowledge distillation, and compare with values in table 1. \\n\\n2. The performance of the proposed method depends on the quality of GAN. To help me further understand the quality of GAN, I hope to see the following experiments to compare with scores in table 1.\\ni) The accuracy of supervised trained teach and student on GAN generated image. \\nii) The classification accuracy on test data by the classifier trained in AC-GAN. \\n\\n3. I would like the authors to clarify their experiments to convince me the comparison in table 1 is fair.\\ni) How many data and iterations in total are used for standard training and knowledge distillation with/without GAN data? Does the better performance come from synthetic data, or come from exploiting more data and training for longer time?\\nii) Related to i), In figure 1 (a), how many data and iteration for each epoch? It would help if the standard supervised training curve for student can be provided. \\niii) The experiments have a lot of hyperparameters, for example, the weight \\\\alpha, the temperature T, optimizer, the learning rate, learning rate decay, the probability p_fake. These hyperparameters are different for each experimental setting. How are they chosen?\\n\\n4. Please explain conceptually why the proposed compression score is better than inception score. \\n\\n5. The paper is missing a conclusion section. The following papers introduce adversarial training for knowledge distillation. Though it is not necessary to compare with them in experiments as they are complicated method and the usage of GAN is different from this paper, I think it is still worth to mention them in related work. \\nWang et al. 2018 Adversarial Learning of Portable Student Networks\\nXu et al. 2018 Training Student Networks for Acceleration with Conditional Adversarial Networks \\n\\n================ after rebuttal ====================\\nI appreciate the authors' response and slightly raise the score. It is a good rebuttal and it has clarified several things. I like the authors' explanation on why GAN is particularly good in a student-teacher setting. The explanation reminds me of the mixup data augmentation paper from last year. I also like the additional experiments which clearly show the benefits of GAN data augmentation. \\n\\nHowever, I still think it is borderline for several reasons.\\n1. 
As the other reviewer has pointed out, CIFAR-10 is a bit too toy and some models (like LeNet for Figure 2) cannot really show the advantage of the method. I would suggest try ImageNet, and use more recent networks for ablation study. \\n2. As the other reviewer has pointed out, the compression ratio can be impractical. The compression ratio depends on student-teacher training, which can take a relatively long time. \\n3. I would suggest the following experiments that may strengthen the paper. I would consider these as a plus, not necessarily related to my current evaluation. \\ni) Try not use GAN, but use mixup (linear interpolation of samples) as data augmentation, and go through the student-teacher training.\\nii) Try evaluate the effect of generator structure for data augmentation. Does the generator have to be very strong? The GAN generated results did not improve supervised learning may suggest the generator is not necessarily to be strong.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
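The mixup suggestion in the review above is easy to sketch in the compression setting: because the teacher can be queried at any input, the mixed image can be labeled directly by the teacher, with no label interpolation. This is an illustrative sketch of the reviewer's proposal (mixup as in Zhang et al., 2018), not anything from the paper.

```python
import torch

def mixup_inputs(x, beta=1.0):
    """Mixup-style input interpolation adapted to compression: the mixed
    image is labeled by querying the teacher on it. `beta` (the Beta
    distribution concentration) is an illustrative value."""
    lam = torch.distributions.Beta(beta, beta).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[perm]

# Usage sketch: x_mix = mixup_inputs(x); target = teacher(x_mix)
```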
"{\"title\": \"Official Review\", \"review\": \"Summary:\\nThe paper proposes an approach for improving standard techniques for model compression, i.e. compressing a big model (teacher) in a smaller and more computationally efficient one (student), using data generated by a conditional GAN (cGAN). The paper suggests that the standard practice of training the student to imitate the behavior of the teacher *on the same training data* that the teacher was trained on is problematic and can lead to overfitting. Instead, the paper proposes learning a conditional GAN, which can potentially generate large amounts of realistic synthetic data, and use this data (in addition to original training data) for model compression.\\nExperimental results show that this idea seems to improve the performance of convnet student models on CIFAR-10 classification and random forest student models on tabular data from UCI and Kaggle.\\nAnother contribution of the paper is to propose an evaluation metric for generative model, called the compression score. This score evaluates the quality of generated data by using it in model compression: \\u201cgood\\u201d synthetic data results in a smaller gap in performance between student and teacher models.\", \"strengths\": [\"The paper sheds a light on an interesting aspect in model compression. The idea of teaching a student model to imitate behavior of the teacher model on *new* data is interesting. In fact, it emphasizes the fact that we are mostly interested in imitating the teacher model\\u2019s capability of generalizing to new examples rather than overfitting to training examples.\", \"Experiments show that for several settings (model class, architecture and datasets), using synthetic data by a cGAN can be useful in reducing the gap between student and teacher models.\", \"The paper is clearly written and easy to follow.\"], \"weaknesses\": [\"The claim that reusing the same training data used for training the teacher model in model compression can lead to overfitting of student model is not very obvious and needs more experimental evidence in my opinion. One way to test this is to use some unseen real data (e.g. validation or a held-out part of training data) for model compression, and showing that it can indeed help in improving student performance.\", \"The claim that cGAN can generate \\u201cinfinite\\u201d amount of realistic data is too strong. In light of some well-known problems of GANs such as mode collapse [2] and low-support learned distributions [1], this assumption seems unrealistic. In fact, it is not too obvious how synthetic data by a generative model learned on *same training data as the teacher* can provide any additional information to real data.\", \"While the idea of the proposed evaluation metric seems interesting, I believe it is not very practical, because:\", \"1.\\tIt is computationally intensive (requires training a model from scratch on fake data)\", \"2.\\tIt relies on performance of the compression mechanism, which might also have some idiosyncrasies that prefer some features in synthetic data which do not necessarily correspond to quality of generated data.\", \"Questions/Suggestions:\", \"In addition to using held-out real data for model compression as suggested above, a useful baseline could be using standard data-augmentation techniques in model compression.\", \"What would happen if a student model is very small and cannot possibly overfit training data? 
Would using synthetic data be still useful there?\", \"I am actually confused about a claim made when presenting compression score in Section 5. The paper claims that the best compression score is 1 (training student model on real data), while the paper shows that in fact, good synthetic data should produce *better* accuracy than using real data. I would appreciate if authors can clarify this point.\"], \"overall_recommendation\": \"While the paper presents an interesting problem in model compression, I\\u2019m leaning towards rejecting the paper because of the weaknesses mentioned above. That being said, I am happy to reconsider my decision if there is any misunderstanding on my part.\", \"references\": \"[1] Arora, Sanjeev, and Yi Zhang. \\\"Do GANs actually learn the distribution? an empirical study.\\\" arXiv preprint arXiv:1706.08224 (2017).\\n[2] Goodfellow, Ian. \\\"NIPS 2016 tutorial: Generative adversarial networks.\\\" arXiv preprint arXiv:1701.00160 (2016).\\n\\n\\n-----\\n\\nUpdated score and posted a comment to author response.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting idea, some important experiments missing\", \"review\": \"I like this paper. What the authors have done is of high quality. It is well written and clear. However, quite a lot of experiments are necessary to make this paper publishable in my opinion.\", \"strenghts\": [\"The idea to use a GAN for model compression is something that many must have considered. It is good to see that someone has actually tried it and it works well.\", \"I think the compression score is definitely an interesting idea on how to compare GANs that can be of practical use in the future.\", \"The experimental results, which are currently in the paper, largely support what the authors are saying.\"], \"weaknesses\": [\"The authors don't compare how good this technique is in comparison to simple data augmentation. My suspicion is that the difference will be small. I realise, however, that the advantage of this method over data augmentation is that it is harder to do it for tabular data, for which the proposed method works well. Having said that, models for tabular data are usually quite simple in comparison to convnets, so compressing them would have less impact.\", \"The experiments on image data are done with CIFAR-10, which as of 2018 is kind of a toy data set. Moreover, I think the authors should try to push both the baselines and their technique much harder with hyperparameter tuning to understand what is the real benefit of what they are proposing. I suspect there is a lot of slack there. For comparison, Urban et al. [1] trained a two-layer fully connected network to 74% accuracy on CIFAR-10 using model compression.\", \"[1] Urban et al. Do Deep Convolutional Nets Really Need to be Deep (Or Even Convolutional)? 2016.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SyxfEn09Y7 | G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space | [
"Qi Meng",
"Shuxin Zheng",
"Huishuai Zhang",
"Wei Chen",
"Qiwei Ye",
"Zhi-Ming Ma",
"Nenghai Yu",
"Tie-Yan Liu"
] | It is well known that neural networks with rectified linear units (ReLU) activation functions are positively scale-invariant. Conventional algorithms like stochastic gradient descent optimize the neural networks in the vector space of weights, which is, however, not positively scale-invariant. This mismatch may lead to problems during the optimization process. Then, a natural question is: \emph{can we construct a new vector space that is positively scale-invariant and sufficient to represent ReLU neural networks so as to better facilitate the optimization process }? In this paper, we provide our positive answer to this question. First, we conduct a formal study on the positive scaling operators which forms a transformation group, denoted as $\mathcal{G}$. We prove that the value of a path (i.e. the product of the weights along the path) in the neural network is invariant to positive scaling and the value vector of all the paths is sufficient to represent the neural networks under mild conditions. Second, we show that one can identify some basis paths out of all the paths and prove that the linear span of their value vectors (denoted as $\mathcal{G}$-space) is an invariant space with lower dimension under the positive scaling group. Finally, we design stochastic gradient descent algorithm in $\mathcal{G}$-space (abbreviated as $\mathcal{G}$-SGD) to optimize the value vector of the basis paths of neural networks with little extra cost by leveraging back-propagation. Our experiments show that $\mathcal{G}$-SGD significantly outperforms the conventional SGD algorithm in optimizing ReLU networks on benchmark datasets. | [
"optimization",
"neural network",
"irreducible positively scale-invariant space",
"deep learning"
] | https://openreview.net/pdf?id=SyxfEn09Y7 | https://openreview.net/forum?id=SyxfEn09Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryefknZIlN",
"SygHZEM1xE",
"ryeWPub4R7",
"S1lkR4gjpX",
"ByeKq4YqTm",
"ByxLrNtqTm",
"rkejI7K5pX",
"ryesj55ZaX",
"SJgCHmagaQ",
"ryevvN85hQ",
"Hylomw-qn7",
"B1gKT5yq37"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545112537744,
1544655869184,
1542883416944,
1542288583053,
1542259857386,
1542259773567,
1542259538957,
1541675682939,
1541620549992,
1541198943130,
1541179170687,
1541171905400
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1431/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1431/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1431/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1431/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Add experiments on deeper ResNet\", \"comment\": \"We have finished the experiments on 110-layer ResNet (He, et. al. 2016) using the same training strategies with Table1 in the paper. The test error rates are shown below:\\n-------------------------------------------------------------------------- \\n CIFAR-10 CIFAR-100\\nSGD 6.83% (\\u00b10.25) 29.44%(\\u00b1 0.66)\\nG-SGD 6.49%(\\u00b10.06) 27.74%(\\u00b10.36)\\n--------------------------------------------------------------------------\\nIt shows that G-SGD clearly outperforms SGD on each dataset. The best test accuracies are achieved by G-SGD on both datasets, which indicates that G-SGD indeed helps the optimization of deep ResNet model. We have added the results in the draft and will update the paper in the next version.\"}",
"{\"metareview\": \"This paper proposes a new optimization method for ReLU networks that optimizes in a scale-invariant vector space in the hopes of facilitating learning. The proposed method is novel and is validated by some experiments on CIFAR-10 and CIFAR-100. The reviewers find the analysis of the invariance group informative but have raised questions about the computational cost of the method. These concerns were addressed by the authors in the revision. The method could be of practical interest to the community and so acceptance is recommended.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}",
"{\"title\": \"Thanks for raising the score\", \"comment\": \"Thank you for raising the score. We\\u2019re glad to see that our response has addressed your concerns.\\n\\nWe also want to investigate how G-SGD performs on deeper ResNet (such as 101 layers or more). In our mind, deeper neural networks may benefit more from the optimization in G-space. We are conducting the experiments and try our best to add the results before the rebuttal deadline. \\n\\nOne of our concern is that, to the best of our knowledge, with the traditional or similar training strategies, ResNet-101 can easily achieve about 93%+ accuracy on the CIFAR-10 dataset, but a little bit far from >96%. For example, 93.75% and 93.89% are reported by these two PyTorch-based implementations(https://github.com/kuangliu/pytorch-cifar and https://github.com/bearpaw/pytorch-classification), respectively. As for comparison, in our implementation, the test accuracy of 94.29% is gained by ResNet-34 trained by SGD and weight decay. Therefore, it will be helpful to tell us if there is any implementation or training strategy that can train ResNet-101 with 96% test accuracy.\"}",
"{\"title\": \"The latest paper version\", \"comment\": \"Dear reviewers,\", \"we_modified_our_paper_in_that\": \"1) As for the concern of computation cost, please refer to the time complexity analysis (section 11.1) and training throughput (section 12.1).\\n\\n2) We add step 4 and step 5 in Alg.1 to connect step 3 and step 4.\\n\\n3) We fix some typos especially the notations to make the paper more readable.\\n\\nWe also updated our initial response to all of you, for the sake of clearer clarifications and looping in the latest manuscript changes. We hope all these can make our paper more comprehensive and remove your corresponding concerns. Thanks!\"}",
"{\"title\": \"Thanks for your reviewing and suggestions\", \"comment\": \"Thank you for the helpful feedback.\\n\\n1. Q: Significance of experiments in Table 1.Similar training/testing behaviors in Figure 3. \\nThe performance gain by G-SGD in Table 1 is not marginal. For example, since it can eliminate the influence of positive scaling invariance across all layers of PlainNet, G-SGD can averagely improve the accuracy number by 0.8 and 5.7 on CIFAR-10 and CIFAR-100 respectively, over 5 independent runs. Moreover, Plain-34 trained by G-SGD achieves even better accuracy than ResNet-34 trained by SGD on CIFAR-10, which shows the influence of invariance on optimization in weight space as well.\\nIn our experiments, we employ the training strategies in the original ResNet paper [1] (which is designed for SGD) for both SGD and our G-SGD. This is the reason why the training/testing behaviors are similar. In the future, we will design training strategies for optimization in G space, which will further enhance its performance.\\n\\n2. Q: Evidence of low computational cost.\\nBy the update rule of G-SGD in Section 11.1 (i.e., Eqn (43) and Eqn(46)), the extra computation is the calculation of $R^t(p^j)$. It is clear that the computation overhead of our G-SGD is upper bounded by $(B+1)T$ (B is the batch size and T is the time of processing one sample, and the cost of SGD is BT). Considering that the computation cost of original SGD is $BT$, G-SGD has \\u201clittle extra cost\\u201d. We will add the complexity analysis and the figure of training throughput on our GPU server in the appendix. Please see the next version of our paper.\\n\\n3. Q: Connection between step 3 and 4 in Alg.3.\\nWe assume your comments are about Alg 1, since there is no Alg. 3 in our paper. Please let us know if not. In step 3, we calculate the gradient with respect to basis path by solving Eqn (3) in Inverse-Chain-Rule method. Then, we update basis paths according to SGD accordingly. After that, we can compute the update ratio for basis path. In step 4, we allocate the update of basis path to the update ratio of weight. \\n\\nThank you for the valuable suggestion. We will add the connection between step 3 and step 4 in the next version. \\n\\n[1] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.\"}",
"{\"title\": \"Thanks for your reviewing and suggestions\", \"comment\": \"Thank you for the valuable comments which are addressed as follows.\\n\\n1. Q: Path-SGD results for the CIFAR experiments?\\nIt is unaffordable to compute gradients with respect to all the paths, thus the authors in [1] take coordinate-independent approximation in Path-SGD (Eqn(7) in [1] can be referred). The authors design Path-SGD only for MLP[1] and RNN[2], and the Path-SGD for CNN is non-trivial. Therefore, the result of Path-SGD on ResNet and PlainNet is not provided in Table 1. We do compare with Path-SGD for MLP case in Figure 5. \\n\\n2. Q: Experimental proof of computational overhead?\\nThanks for your advice. We will add the figure of training throughput on our GPU server in the appendix. Please see the next version of our paper.\\n\\n3. Q: Code Release within one of the standard deep learning frameworks.\\nWe plan to open source the codebase of our implemented G-SGD, not only for reproducing our experimental results, but also as a toolkit which can be freely used for the community.\\n\\n4. Q: The chosen of \\u201cfree skeleton weights\\u201d? The necessity of their presence?\\nDifferent selections of free skeleton weights are mathematically equivalent and all the theoretical results will preserve, e.g., the activation status is a function of basis path if the signs of free skeleton weights are fixed.\\nActually we can project the gradient with respect to basis path back to any weights on that path, but according to weight-allocation method, we want to reduce the number of update executions to reduce the time complexity. Hence, we choose one skeleton weight to update on all-basis path (which is only composed by skeleton weights) and others are selected to be free skeleton weights.\\nAlthough there is no update on those \\u201cfree skeleton weights\\u201d, they are necessary because the \\u201cvalue of path\\u201d is composed by all weights on one path. We can delete a weight only when it always takes value \\u201c0\\u201d. Free skeleton weights are not initialized as \\u201c0\\u201d, and they indeed play a role in calculating the value of path and also in feedforward process in G-SGD. \\n\\n5. Q: High computational overhead of the gradient of path norm?\\nThe gradient of path regularizer is hard to calculate exactly and a coordinate-independent approximation is made in Path-SGD which costs (B + 1)T (B is the batch size and T is the time of processing one sample, and the cost of SGD is BT). The computation cost of our G-SGD is also (B+1)T (Please refer to Eqn(43)-Eqn(46)), which is comparable with the approximation of Path-SGD. We will clarify this comparison in the next version. \\n\\n6. Q: Is the advantage of G-SGD over SGD expected to be proportional to the invariant ratio for CNNs as well?\\nYes, we have similar observations for CNN. \\n\\n[1] Neyshabur. B, et al. Path-sgd: Path-normalized optimization in deep neural networks. NIPS 2015.\\n[2] Neyshabur. B, et al. Path-normalized optimization of recurrent neural networks with relu activations. NIPS 2016.\"}",
"{\"title\": \"Good suggestions\", \"comment\": \"Thank you for viewing the idea novel and the analysis informative.\\n\\nNew optimization algorithm in G-Space is an interesting topic. In this work, we mainly focus on introducing G-Space through the PSI property of ReLU Neural Networks and investigate the most popular SGD algorithm in G-Space. We will further investigate new optimization algorithms in G-Space.\"}",
"{\"title\": \"G-SGD can solve the scaling-invariant problem for BN networks\", \"comment\": \"It\\u2019s a good question.\\n\\nFirst, you may misunderstand the definition of positively scale-invariant in our paper. The PSI property defined in our paper is: \\u201cthe output of the ReLU network is invariant if the incoming weights of a hidden node are multiplied by a positive constant \\u201cc\\u201d and the outgoing weights are divided by \\u201cc\\u201d at the same time. \\u201d However, BN networks are only invariant to the scaling of the incoming weights, which is different from the PSI property defined in our paper.\\n\\nSecond, we still need G-SGD for BN networks. 1) From theoretical view, BN networks still encounter the problem that equivalent BN networks with different scaling weights generate different gradients. Consider that the incoming weights are multiplied by a constant \\u201cc\\u201d, the new parameterized network generates the same output as the previous one. However, their gradients on weights are not the same using SGD, which may hurt the optimization and is unstable about the unbalanced initialization. Thus, we choose to optimize the invariant variables \\u2013 the value of basis path (the multiplication of normalized weights which is normalized by its incoming weight norm (refer to section 11.2)) for this kind of scaling operators brought by BN. Then their gradients on basis paths are the same using G-SGD. 2) Experiments also show that G-SGD helps to improve the performance with BN networks, especially on PlainNet (Figure 1). It indicates that G-SGD can help to solve the scaling invariant property brought by BN.\"}",
"{\"comment\": \"If I understand right, BN (batch norm) networks are invariant to the scaling of the weights. Does that mean SGD is enough for BN networks? If so, do we still need G-SGD for BN networks?\", \"title\": \"BN networks are scale-invariant, therefore positively scale-invariant?\"}",
"{\"title\": \"good paper\", \"review\": \"The paper proposes SGD for ReLU networks. The authors focuses on positive scale invariance of ReLU which can not be incorporated by naive SGD. To overcome this issue, a positively scale-invariant space is first introduced. The authors show the SGD procedure in that space, which is based on three component techniques: skeleton method, inverse-chain rule, and weight allocation.\\n\\nThe basic idea, directly optimizing weight in scale invariant space, is reasonable and would be novel, and experiments verify the claim. Readability might be low slightly.\\n\\nAnalysis about invariance group (e.g., theorem 3.6) is interesting and informative.\\n\\nCombining with other optimization algorithms (other than simple SGD) method would be valuable.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"SGD is recast in positively scale-invariant space, showing improvements over training in weight space with low computational overhead.\", \"review\": \"Summary:\\nIn prior deep learning papers it has been observed that ReLU networks are positively scale invariant due to the non-negative homogeneity property of the max(0, x) function. The present paper proposes to change the optimization procedure so that it is done in a space where this invariance is preserved (in contrast to the weight space, where it is not). To do so, the authors define the group G of positive scaling operators, and note that the \\\"value of path\\\" (product of weights along the path) is G-invariant and together with \\\"activation status of paths\\\" allows for the definition of an equivalence class. They then build a G-space which has fewer dimensions than the weight space, and proposed g-SGD to optimize the network in this space. In g-SGD gradients are computed normally, then projected to G-space via a sparse matrix in order to update the values of paths. The weights are then updated based on a \\\"weight allocation method\\\" that involves the inverse projection.\\n\\nThe authors conduct experiments on CIFAR-10 and -100 with a ResNet-34 and a similarly deep variant of VGG, showing significant benefits from G-SGD training in all cases. They also evaluate the performance of a simple MLP on Fashion-MNIST as a function of the invariant ratio H/m.\", \"comments\": \"The paper is organized well (with technical details of the proofs delegated to the appendix), and discusses the differences in comparison prior work. While evaluations on large-scale datasets would be helpful here, the present experiments suggest that optimization in G-space indeed consistently improves results, so the proposed method seems promising.\\n\\nFor completeness, it would be great to include Path-SGD results for the CIFAR experiments in Table 1, together with runtime information to highlight the benefits of the g-SGD algorithm and provide experimental proof that the computational overhead is indeed low.\\n\\nIf the authors are hoping for a wider adoption of the method, it would be helpful for the community to have the g-SGD code released within one of the standard deep learning frameworks.\", \"questions\": [\"Does it matter which weights are chosen as \\\"free skeleton weights\\\"? If these weights never get updated in the optimization procedure, could you please comment on the intuitive interpretation of the necessity of their presence?\", \"The text states that the computational overhead of the gradient of path norm is \\\"very high\\\". The Path-SGD paper proposes a method costing (B + 1)T, where the overhead an be small for large batches. It would be good to clarify this a bit in the present text.\", \"Is the advantage of g-SGD over SGD expected to be proportional to the invariant ratio for CNNs as well?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting idea, but not convincing exps\", \"review\": \"This paper proposed a new training algorithm, G-SGD, by exploring the positively scale-invariant space for relu neural networks. The basic idea is to identify the basis paths in the path graph, and convert the weight update in SGD to the weight rescaling in G-SGD. My major concerns are as follows:\\n\\n1. Empirical significance of G-SGD: While the idea of exploring the structures of relu neural networks for training based on group theory on graphs is interesting, I do not see significant improvement over SGD. The accuracy differences in Table 1 are marginal, training/testing behaviors in Fig. 3 are very similar, and more importantly there is no evidence to support the claims \\\"with little extra cost\\\" in the abstract/Sec. 4.3 in terms of computation. Therefore, I do not see the truly contribution of the proposed method.\", \"ps\": \"After reading the revision, I am happy to see the results on computational time that support the authors' claim. However, I still have doubts on the significance of the improvement on CIFAR10 and CIFAR100, because the performance is heavily dependent on network architectures. In my experience, using resnet101 it can easily achieve >96% accuracy. So can you achieve better than this using G-SGD? The training and testing behaviors on both datasets somehow show improvement over SGD, which I take it more importantly than just those numbers. Therefore, I am glad to raise my score.\\n\\n2. In Alg. 3 I do not quite understand how to apply step 3 to step 4. The connection needs more explanation.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BygGNnCqKQ | Architecture Compression | [
"Anubhav Ashok"
] | In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder/decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture's effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition tasks such as CIFAR-10/100, FMNIST and SVHN and achieve a greater than 20x compression on CIFAR-10. | [
"compression",
"architecture search"
] | https://openreview.net/pdf?id=BygGNnCqKQ | https://openreview.net/forum?id=BygGNnCqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgNUmk-gV",
"H1ldz3JhyV",
"BJgCiR_LAX",
"r1eLUCdLCm",
"BJxj26d80Q",
"r1lO96_IAQ",
"r1gE7a1Rn7",
"Hklv4COo3X",
"BJgdDYYun7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544774476401,
1544449039893,
1543044774193,
1543044686156,
1543044531509,
1543044496339,
1541434652426,
1541275182531,
1541081440392
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1430/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1430/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1430/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors propose a scheme to learn a mapping between the discrete space of network architectures into a continuous embedding, and from the continuous embedding back into the space of network architectures. During the training phase, the models regress the number of parameters, and expected accuracy given the continuous embedding. Once trained, the model can be used for compression by first embedding the network structure and then performing gradient descent to maximize accuracy by minimizing the number of parameters. The optimized representation can then be mapped back into the discrete architecture space.\\nOverall, the main idea of this work is very interesting, and the experiments show that the method has some promise. However, as was noted by the reviewers, the paper could be significantly strengthened by performing additional experiments and analyses. As such, the AC agrees with the reviewers that the paper in its present form is not suitable for acceptance, but the authors are encouraged to revise and resubmit this work to a future venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea, but requires additional experimentation and analyses\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for the response.\\n\\nI agree with the concerns raised by the other reviewer on the experiment including reproducibility and multi-target optimization. Still, I think the point in the rebuttal is not critical enough to change my decision. That said, I do believe as long as there is enough pooling layer (which reduces feature map size), calling the statement 3.2 a theorem still looks like overstating.\"}",
"{\"title\": \"Thank you for your insightful and useful comments!\", \"comment\": \"We thank you for your insightful and detailed comments on improving the paper. We have addressed the comments and made changes where appropriate.\\n\\n>> In the specification of layers, the layer type is just appointed to an integer variable, even though in reality the layer type is a categorical variable.\\n\\n---\\nThis is an interesting idea and one that we did consider. The main reason we decided to go with a regression-based approach is that for an arbitrary task, it is unclear how many categories we would need to determine the appropriate capacity (filter size, number of filters). Under the categorical formulation, we would need to redefine the learning problem for every different task, thus complicating transfer learning. Another approach that was considered was to structure the decoder as a classification problem for variables that were categorical and regression for the other variables. However since the simpler regression approach worked, we decided to stick with that approach.\\n\\n>> For one, it would be to evaluate the proposed model in more challenging setups: evaluate on ImageNet dataset, using some of the recent architectures (e.g., ResNet, VGGnet, and so on).\\n\\n---\\nWhile the number of architectures that have to be trained (1500 trained for 5 epochs) is relatively small compared to other architecture search papers, training on large datasets like ImageNet pose a challenge due to the number of computing resources needed to run the experiment.\\nRegarding recent architectures, we note that the conv architectures used in the paper are very similar to the VGG architecture and achieve better performance with fewer parameters. Using more complex architectures such as ResNets or DenseNets is an interesting future direction which would require several architecture-specific modifications.\\n\\n>> For a finite number of edges the number of possible graphs (including the valid architecture) are finite when the input is finite and pooling reduces the feature map size. Thus, it seems that the statement in theorem 3.2 is rather trivial and it is not worth calling it a Theorem.\\n\\n---\\nEven if there exists pooling that reduces feature map size, it is possible to have an infinite number of graphs if there is no parameter constraint. An example of this is as follows: consider a network with finite input I. Let the network be I-> MaxPool -> Conv1\\u2026.->ConvN, where conv preserves feature dimensions. A new graph with a finite number of edges can be formed by adding ConvN+1. Thus, there can be an infinite number of possible graphs each having a finite size.\"}",
"{\"title\": \"Thanks for the comments and questions about the paper!\", \"comment\": \">> My main concern is the validity of the compression step, Procedure COMPRESS, in Algorithm 1. First, is only one step gradient descent applied? If it is, why not minimize the L_c until convergence?\\n\\n----\\nWe repeat the gradient descent procedure while the true compression of the model improves and the true accuracy does not decrease too much. In practice, we observed that testing models every 5 steps (with sgd, momentum=0.5, lr=0.003) works well and fewer than 10 iterations are necessary. \\n\\n>> Second, it seems that minimizing L_c cannot guarantee that both error and the number of parameters are reduced. It is possible that only one of them is reduced.\\n\\n---\\nWe weight the accuracy objective slightly lower than the parameter count objective and find that this helps to maintain accuracy while minimizing parameter count. The weight value is a hyperparameter that varies for each dataset and model.\\n\\n>> The writing of the paper needs to be improved. Some notations are not consistent with each other. For example, the loss notations in Line 19 in Algorithm 1 are different from those defined in Sec. 4.3. \\n>> There is no step size, \\\\eta in Line 20 in Algorithm 1, but there is a step size in the last equation on Page 6. \\n\\n---\\nWe have updated the notation to match those defined in section 4.3\\n\\n\\n>> It is unclear to me how the hyperparameters, such as the step size and \\\\lambda's, are chosen. \\nThe optimal hyperparameters differ for each dataset and network. While there are a few approaches to choosing hyperparameters such as Bayesian optimization or grid search, we found that in practice, the loss plots are quite informative in determining which loss to weight more. For example, if the total loss does not seem to decrease but the parameter loss is still high, we increase the weight of the parameter loss. \\n\\n---\\nWe observed that the step size for the compression step is hard to determine a priori but developed a few ways to tune it. We usually start off at a small value (0.003) and run the compression procedure for 5 steps until we arrive at a compressed network.\"}",
"{\"title\": \"Thank you for your detailed and insightful comments! (Part 1)\", \"comment\": \"We would like to thank the reviewer for their detailed review and numerous useful comments, we have addressed the comments and incorporated the changes below.\\n\\n>> This seems to preclude even basic architectural advancement like skip connections / ResNet - the authors even mention this in section 3.1, and point to experiments on resnets in section 4.4, but the words \\\"skip\\\" and \\\"resnet\\\" do not appear anywhere else in the paper. I presume from the emphasis on topological sort that this is possible, but I don't see how.\\n\\n---\\nOur original plan was indeed to add two more variables specifying the position of the layer relative to the start and end of the skip connection in the input. However, this would require more manual tuning to ensure that the dimension of the residual is the same as that of the output at the end of the skip connection. We have updated that particular sentence to make things clearer.\\n\\nGiven that this is a novel direction of research, we have chosen to start off by demonstrating results on simpler models before modifying the search space to suit custom models. We do think that incorporating a wider selection of networks is a direction indeed worth expanding upon. We have added a note about this in the conclusion. \\n\\n>> In terms of experiments, Figure 3 is very hard to interpret. The axes labellings are nearly too small to read, but it's also unclear what loss this even is - I presume this is the 'train' loss of L_d + \\\\lambda_1L_a + \\\\lambda_2L_p, but it could also be the 'compress' loss. \\n\\n---\\nWe have updated the figures to be easier to read as well as re-run the experiments with updated hyperparameters to generate plots for each of the losses separately so that it is clearer as to which objective is being minimized. We do also incorporate optimization techniques such as alternating the loss that is optimized during each iteration which could explain the non-monotonic decrease.\\n\\n>> A key point that is not really addressed is how well the continuous latent space actually captures what it should. I am extremely interested to know whether the result of 'compress', ie a new concrete architecture found by gradient descent in the latent space, actually has the number of parameters and the accuracy that the regressors predict.\\n\\n---\\nThe predicted accuracy and compression are accurate when the updated feature in latent space is close to the original. We found that after several iterations (~20), we start to see divergence. The degree of accuracy varies for each query network. An analysis will be included in the appendix.\\n\\n>> It would also be really useful to see some concrete input / output values in discrete architecture space. Presumably along the way to 20x compression of parameter count, the optimisation passes through a number of progressively smaller discrete architectures - what do these looks like? \\n\\n---\\nWith a larger step size or aggressive parameter count reduction, we observe that the number of filters decreases, more layers become closer to identity (more consecutive ReLUs, MaxPool with kernel size 1) and fewer conv layers are produced. Interestingly, when ==accuracy is maximized, the network also learns to produce higher capacity networks. 
An analysis will be included in the appendix.\\n\\n\\n>> Given that the discrete architecture encoding appears to have a fixed length of T, it's not even clear how layers would be removed. Figure 1 implies you would fill columns with zeros to delete layers, but I don't see this mentioned elsewhere in the text.\\n\\n---\\nWe naturally remove the layer when the number of filters, stride or kernel size is 0. We have updated the paper to make this clearer\\n\\n\\n>> Equation numbers would be extremely useful throughout the paper.\\n\\n--\\nWe have updated the paper with equation numbers for better readability.\\n\\n>> Notation in section 3 is unclear. If theta represents trained parameters, then surely the accuracy on a given dataset would be a deterministic value. Assuming that the distribution P_{\\\\theta}(a | A, D) is used to represent the non-determinism of SGD training, is \\\\theta supposed to represent the initialized values of the weights?\\n\\n---\\nIn our formulation, \\\\theta is a random variable representing the trained parameters where the randomness originates from the random initialization procedure and the SGD training. It is true that accuracy is deterministic given a specific sample was drawn from the distribution, but it can vary across two samples.\"}",
"{\"title\": \"Thank you for your detailed and insightful comments! (Part 2)\", \"comment\": \">> There are 3 functions denoted by 'g' defined on page 3 and they all refer to completely different things - this is unnecessarily confusing.\\n>> The formula for expected accuracy - surely this should be averaging over N different training / evaluation runs\\n>> The decoder computes a 6xT output instead of a 5xT output - what is this extra row for?\\n\\n---\\nWe have updated the paper to take these suggestions into account and changed the notation on page 3 such that o - topological ordering function, q - pooling layers and g - loss minimization function. We hope this makes the notation clearer.\\n\\n>> In the definition of \\\"ground truth parameter count\\\" p^* - presumably the standard deviation here is the standard deviation of the l vector? This formulation is a bit surprising, as convolutional layers will generally have few parameters, and final dense layers could have many. \\n\\n---\\nThe parameter count is predicted for the entire network and thus the standard deviation is that of the entire network, not per each layer. \\n\\n>> Did you consider alternative formulations like simply taking the log of the number of parameters? Having a huber loss with scale 1 for this part of the loss function was also surprising, it would be good to have some justification for this (ie, what range are the p^* values in for typical networks?)\\n\\n---\\nWe did try taking the log of the parameters but found that the precision of the parameter regressor decreased. We think this is related to the spread of parameter counts we see in the dataset and that smaller networks might have noisier predictions as a result of the log-scaling. We found empirically that a scale of 1 worked well. p^* values range from about approximately [-10, 10] for our dataset.\\n\\n>> In algorithm 1 line 4 - here you are subtracting \\\\bar{p} from num_params before dividing by standard deviation, which does not appear in the formulation above.\\n\\n---\\nIn our experiments we subtract the mean to center the parameter count, we have updated the formulation to reflect this.\\n\\n>> How were the 1500 random architectures generated?\\n>> These random architectures were then trained five times for 5 epochs - what optimizer / hyperparameters / regularization was used? \\n>> Similarly, the optimization algorithm used in the outer loop to learn the {en,de}coders/regressors is not specified.\\n\\n---\\nWe have included training details for both the architectures as well as the compressor networks in the appendix along with details for the random architecture generation.\\n\\n>> Surely the fact that the architecture is represented as a 5xT tensor, and practically there are upper limits to kernel size, stride etc beyond which an increase has no effect, already implies a finite space?\\n\\n---\\nIn this paper, we are concerned with the cardinality of the set of networks since we desire to form a mapping from architecture space to some latent space. Without a parameter constraint, there could theoretically be an infinite number of valid networks.\"}",
"{\"title\": \"Good paper, experimental validation must be improved\", \"review\": \"Even though many people have considered prunning as architecture search, it has not been explored enough so far. This paper comprises a good approach for compression of achitectures using pruning. Based on the uniqueness of the topological ordering of commonly used neural networks (feed forward, skip connection), the paper proposes a simple and easily manipulable vector (sequence) representation for a wide class of neural networks. Instead of using RNN networks, such long seqeunce representation are mapped to a continuous embedding by 1D-CNN. While training this embedding, for the purpose of compression, predictors needed for compression are jointly trained with embedding.\\nConsequently, the proposed method presents a possiblity of including many other constraints during the architecture search.\\n\\nIn the specification of layers, the layer type is just appointed to an integer variable, even though in reality the layer type is a categorical variable. This choice is ok for standard neural network layers, where effectively the choice is between a single catecorical aspect. However, for more sophisticated layer configurations, where you may need many categorical choices, this model choice will not be adequate and will likely lead to artificially biased design choices. The authors should explain the limitations of this model design and propose methods these limitations can be tackled.\\n\\nThe overall model achieves quite good result in compression. On CIFAR10 the model show good performance as compared to existing compression methods. It should be noted that other methods start with a given stucture, so their search space is more limited than this paper's approach. Specifically, compared to those methods, the search space for the proposed paper is larger because although the number of layers is fixed, the connections between layers give more freedom to the compression algorithm.\\n\\nCurrently, the number of experiments is borderline. They are enough to indicate the potentials of this approach. However, additional experiments would be welcome. For one, it would be to evaluate the proposed model in more challenging setups: evaluate on ImageNet dataset, using some of the recent architectures (e.g., ResNet, VGGnet, and so on). What is more, for more compressed architectures with better accuracy, when searching for a compressed architecture global optimization methods like Bayesian Optimization is worth to try, for instance using the recently proposed BOCK (Oh et al, ICML 2018).\\n\\nSome additional comments.\\n- For a finite number of edges the number of possible graphs (including the valid architecture) are finite when the input is finite and pooling reduces the feature map size. Thus, it seems that the statement in theorem 3.2 is rather trivial and it is not worth calling it a Theorem.\\n\\nOverall, this was an interesting paper to read and worth of acceptance, provided that the proposed method delivers also in more competitive experimental settings.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but the approach is not rigorous\", \"review\": \"The paper presents a way to compress the neural network architecture. In particular, it first extracts some characteristics for the neural network architecture and then learns two mapping functions, one from the encoded architecture characteristics to the expected accuracy and the other from the same encoded architecture characteristics to the number of parameters. In the meanwhile, the proposed approach learns the encoding and the decoding for the architecture characteristics.\", \"pros\": \"1. The idea of converting the architecture characteristics, which is discrete in nature, to continuous variables is interesting. The continuity of the architecture characteristics can help architecture search tasks.\", \"cons\": \"1. My main concern is the validity of the compression step, Procedure COMPRESS, in Algorithm 1. First, is only one step gradient descent applied? If it is, why not minimize the L_c until convergence? Second, it seems that minimizing L_c cannot guarantee that both error and the number of parameters are reduced. It is possible that only one of them is reduced. \\n2. The writing of the paper needs to be improved. Some notations are not consistent with each other. For example, the loss notations in Line 19 in Algorithm 1 are different from those defined in Sec. 4.3. \\n3. There is no step size, \\\\eta in Line 20 in Algorithm 1, but there is a step size in the last equation on Page 6. \\n4. It is unclear to me how the hyperparameters, such as the step size and \\\\lambda's, are chosen. \\n5. More experimental results are needed to support the proposed approach. \\n\\nIn summary, I think this paper is not ready to be published. \\n\\n ==== After rebuttal ====\\nThe authors' feedback clarified some of my concerns. But my main concern about why minimizing the objective function can reduce both error and the number of parameters still remains. So I changed my rating to 4 from 3.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting underlying idea, but evaluation insufficient\", \"review\": \"This paper deals with Architecture Compression, where the authors seem to learn a mapping from a discrete architecture space which includes various 1D convnets. The aim is to learn a continuous latent space, and an encoder and decoder to map both directions between the two architecture spaces. Two further regressors are trained to map from the continuous latent space to accuracy, and parameter count. By jointly training all these networks, the authors are now able to compress a given network by mapping it's discrete architecture into the latent space, then performing gradient descent towards higher accuracy and lower parameter count (according to the learned regressors).\\n\\nThe authors perform experiments on 4 standard datasets, and show that they can in some cases get a 20x reduction in parameters with negligible performance decrease. They show better Cifar10 results than a few baselines - I am not aware whether this is SOTA for that parameter budget, and the authors do not specify.\\n\\nOverall I really like the idea in this paper, the latent space is well justified, but I cannot recommend acceptance of the current manuscript. There are many notational issues which I go into below, but the key issue is experiments and reproducability.\\n\\nThe search space is not clearly defined. Current literature shows that the performance of these methods depends a lot on the search space. The manuscript does make clear that a T-layer CNN is represented as a 5XT tensor, with each column representing layer type, kernel size etc. However the connectivity is not defined at all, which implies that layers are simply sequentially stacked. This seems to preclude even basic architectural advancement like skip connections / ResNet - the authors even mention this in section 3.1, and point to experiments on resnets in section 4.4, but the words \\\"skip\\\" and \\\"resnet\\\" do not appear anywhere else in the paper. I presume from the emphasis on topological sort that this is possible, but I don't see how.\\n\\nIf this paper is simply dealing with linear chains of modules, then the mapping to a continuous representation, and accuracy regression etc would still be interesting in principle. However it does mean that essentially all the big architecture advancements post-VGG (ie inception, resnet, densenet...) are impossible to represent in this space. Most of the Architecture Search works cited do have a search space which allows the more recent advances.\\n\\nI don't see a big reason why the method could not be extended - taking the 5D per-layer representation and adding a few more dimensions to denote connectivity would seem reasonable. If not, the authors should clearly mention the limitations of their search space.\\n\\n\\nIn terms of experiments, Figure 3 is very hard to interpret. The axes labellings are nearly too small to read, but it's also unclear what loss this even is - I presume this is the 'train' loss of L_d + \\\\lambda_1L_a + \\\\lambda_2L_p, but it could also be the 'compress' loss. It also behaves very unusually - the lines all end up lower than where they started, but oscillate around a lot, making me wonder if the curves from a second set of runs would look anything alike. It's not obvious why there's not just a 'normal' monotonic decrease.\\n\\nA key point that is not really addressed is how well the continuous latent space actually captures what it should. 
I am extremely interested to know whether the result of 'compress', ie a new concrete architecture found by gradient descent in the latent space, actually has the number of parameters and the accuracy that the regressors predict. This could be added as columns in Table 1 - eg the concrete architecture for Cifar10 gets 20.33x compression and no change in accuracy, but does the regressor for the latents space predict this compression ratio / accuracy as well? If this is the case, then I feel that the latent space is clearly very informative, but it's not obvious here.\\n\\nIt would also be really useful to see some concrete input / output values in discrete architecture space. Presumably along the way to 20x compression of parameter count, the optimisation passes through a number of progressively smaller discrete architectures - what do these looks like? Is it progressively fewer layers / smaller filters / ??? Given that the discrete architecture encoding appears to have a fixed length of T, it's not even clear how layers would be removed. Figure 1 implies you would fill columns with zeros to delete layers, but I don't see this mentioned elsewhere in the text.\", \"more_minor_points\": \"Equation numbers would be extremely useful throughout the paper.\\n\\nNotation in section 3 is unclear. If theta represents trained parameters, then surely the accuracy on a given dataset would be a deterministic value. Assuming that the distribution P_{\\\\theta}(a | A, D) is used to represent the non-determinism of SGD training, is \\\\theta supposed to represent the initialised values of the weights?\\n\\nThere are 3 functions denoted by 'g' defined on page 3 and they all refer to completely different things - this is unnecessarily confusing.\\n\\nThe formula for expected accuracy - surely this should be averaging over N different training / evaluation runs, something like:\\n\\nE_{\\\\theta}[a | A, D] \\\\simto \\\\frac{1}{N} \\\\sigma_{i}^N g_{\\\\theta}(A, D, \\\\theta_i)\\n\\nThe decoder computes a 6xT output instead of a 5xT output - what is this extra row for?\\n\\nIn the definition of \\\"ground truth parameter count\\\" p^* - presumably the standard deviation here is the standard deviation of the l vector? This formulation is a bit surprising, as convolutional layers will generally have few parameters, and final dense layers could have many. Did you consider alternative formulations like simply taking the log of the number of parameters? Having a huber loss with scale 1 for this part of the loss function was also surprising, it would be good to have some justification for this (ie, what range are the p^* values in for typical networks?)\\n\\nIn algorithm 1 line 4 - here you are subtracting \\\\bar{p} from num_params before dividing by standard deviation, which does not appear in the formulation above.\", \"in_the_experiments\": \"How were the 1500 random architectures generated? I presume by sampling uniformly a lot of 5xT tensors, but this encoding is not clearly defined. x_i is defined as being in the set of integers, does this include negative numbers? What are the upper / lower limits, and is there anything to push towards standard kernel sizes like 3x3, 5x5, etc? These random architectures were then trained five times for 5 epochs - what optimizer / hyperparameters / regularization was used? 
Similarly, the optimization algorithm used in the outer loop to learn the {en,de}coders/regressors is not specified.\\n\\nI would move the lemma and theorem into the appendix - they seem quite unrelated to the overall thrust of the paper. To me, saying that an embedding is not uniquely defined, but can be learnt is not that controversial, and I don't need proofs that some architecture search space has a finite number of entries. Surely the fact that the architecture is represented as a 5xT tensor, and practically there are upper limits to kernel size, stride etc beyond which an increase has no effect, already implies a finite space? Either way, this section of the paper did not add much value from my perspective.\\n\\n\\nI want to close by encouraging the authors to resubmit after addressing the above issues, I do believe the underlying idea here is potentially very interesting.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
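
The latent-space compression loop discussed in the rebuttal of this record (gradient descent on the embedding with SGD, momentum 0.5, lr 0.003; candidates decoded and tested every 5 steps; the accuracy term weighted slightly below the parameter-count term) can be sketched roughly as below. This is a minimal illustration under stated assumptions: the `encoder`, `decoder`, `acc_reg` and `param_reg` modules and their interfaces are hypothetical stand-ins, not the authors' implementation.

```python
import torch

def compress(encoder, decoder, acc_reg, param_reg, arch,
             lam_a=0.5, lam_p=1.0, steps=10, lr=0.003, momentum=0.5):
    """Gradient descent in the learned latent space (sketch)."""
    # Embed the query network; optimize the embedding, not the weights.
    z = encoder(arch).detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr, momentum=momentum)
    for step in range(steps):
        # Maximize predicted accuracy, minimize predicted parameter count;
        # the rebuttal weights the accuracy term slightly lower (lam_a < lam_p).
        loss = -lam_a * acc_reg(z) + lam_p * param_reg(z)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if (step + 1) % 5 == 0:
            candidate = decoder(z)  # map back to a discrete architecture
            # Per the rebuttal: briefly train/evaluate `candidate` and stop
            # if true accuracy drops too much or compression stops improving.
    return decoder(z)
```

Minimizing `-lam_a * acc_reg(z) + lam_p * param_reg(z)` mirrors the compression objective L_c debated in the reviews, with the every-5-steps check standing in for the authors' early-stopping heuristic.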
|
rkgMNnC9YQ | ATTENTIVE EXPLAINABILITY FOR PATIENT TEMPORAL EMBEDDING | [
"Daby Sow",
"Mohamed Ghalwash",
"Zach Shahn",
"Sanjoy Dey",
"Moulay Draidia",
"Li-wei Lehmann"
] | Learning explainable patient temporal embeddings from observational data has mostly ignored the use of RNN architectures, which excel at capturing temporal data dependencies but at the expense of explainability. This paper addresses this problem by introducing and applying an information theoretic approach to estimate the degree of explainability of such architectures. Using a communication paradigm, we formalize metrics of explainability by estimating the amount of information that an AI model needs to convey to a human end user to explain and rationalize its outputs. A key aspect of this work is to model human prior knowledge at the receiving end and measure the lack of explainability as a deviation from human prior knowledge. We apply this paradigm to medical concept representation problems by regularizing loss functions of temporal autoencoders according to the derived explainability metrics to guide the learning process towards models producing explainable outputs. We illustrate the approach with convincing experimental results for the generation of explainable temporal embeddings for critical care patient data. | [
"explainability",
"attentive explainability",
"patient temporal",
"human prior knowledge",
"observational data",
"use",
"rnn architecture",
"temporal data dependencies",
"expense"
] | https://openreview.net/pdf?id=rkgMNnC9YQ | https://openreview.net/forum?id=rkgMNnC9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlQR4i9lE",
"HylXbhFlg4",
"HyxW_aE9AQ",
"SyeM1qV5CQ",
"ByeXN_N5AQ",
"B1lSpR6n2m",
"ByeabU2c3X",
"rygGb4XW2X"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545413835016,
1544752122689,
1543290216995,
1543289306293,
1543288874520,
1541361341298,
1541223941316,
1540596729631
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1429/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1429/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1429/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1429/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1429/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Postmortem disagreement\", \"comment\": \"Just letting you know that I had read your response but forgot to reply here. I disagree with your response to 1 and 2, which is why I kept my score.\"}",
"{\"metareview\": \"The paper proposes an approach to define an \\\"interpretable representation\\\",\\nin particular for the case of patient condition monitoring. Reviewers point\\nto several concerns, including even the definition of explainability and\\nlimited significance. The authors tried to address the concerns but reviewers\\nthink the paper is not ready for acceptance. I concur with them in rejecting it.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Thank you for your valuable comments\", \"comment\": \"Regarding issue 1) raised in your review, we would like to point out that we do not measure interpretability or explainability with a \\\"number of model parameters\\\". The use of the external observer treats the model as a blackbox. Our approach is in fact based on the premise that we do not want to inspect models to estimate explainability, let alone counting their number of parameters. Instead, we are training an external observer as described in the paper to estimate explainability.\\n\\nRegarding 2), we do not agree with the removal of the goodness of fit term. Removing it does not make much sense to us. Without this term, the observer would not be connected in any ways with the model M to be able to assess explainability of this model. Estimating L(M^o) or L(M^o|M^p) alone without any links to M does not make much sense. It is imperative to be able to make sure that the explanations provided fit the data produced by $M$. Consequently, we do not understand nor agree with the comment made in the review on the flaws found on this definition. \\n\\nRegarding 3), we have added a significant amount of details on how the softmax operator is used. The original submission was definitely omitting important details on the actual implementation of the scheme. \\n\\nRegarding 4), we have fixed these broken references and also addressed a few typos that we found in the text. \\n\\nThank you for the review.\"}",
"{\"title\": \"Thank you for the comments, much appreciated\", \"comment\": \"We have addressed your feedback by taking a full pass at the entire manuscript. We specially focused on rewriting many aspects of the methodology. For instance, we have added text explaining the use of attention mechanism and detailing how they are computed as part of the observer model.\\nMany thanks.\"}",
"{\"title\": \"Thank you for the detailed review. Your feedback is greatly appreciated.\", \"comment\": \"We have attempted to address your comments in the following way.\\nRegarding issue (1) above, we have added text in 2.4 to address your valid concern. It turns out that for the MDL, we are really interested in M^o. However, the regression restriction is really dictated by the problem that we are trying to solve. Both M and M^o belong to M_reg. \\n\\nRegarding (2), there was a typo that we fixed as you pointed (missing \\\"p\\\"). Regarding the gaussian assumption, this is pretty standard when trying to model the distribution of a regression error. It is a standard assumption for which we added a pointer in the paper to a text book on MDL. \\n\\nRegarding 3, compactness simply relates to description lengths. A more compact model will require less bits to be described. As stated in the paper, the MDL does not really mandate how the model complexity should be computed. Attention models have been suggested for explainability in the literature. We are expanding on these approaches. More complex models can certainly be looked and we plan to do this in the future. \\n\\nRegarding 5, the gist of this work is to develop metrics of explainability to: (i) be able to estimate or measure how explainable deep learning models are and (ii) to be able to force the learning of deep models towards models that are easy to explain. For (i), training an external observer separately makes sense. We could have reported results experimental results on this since the code that we have does it implicitly to estimate these metrics during the learning. Our focus in the paper is mostly on (ii), a harder problem in our view. We do not view this joint learning as \\\"cheating\\\" (it could just be a poor choice of words). We are only trying to enforce explainability constraints on the training of architectures like RNNs that are notoriously hard to explain. The joint learning with the observer allows us to do just that. There is a delicate balance between the expressiveness of the model M and explainability enforced by the observer M^o. If the black box is too easy to explain, it will not perform well on its task. Similarly, if the backbox is very complex, it will not be easy to explain. The joint learning helps us trade between these extremes, as shown by the experiments where we vary the hyper-parameters controlling this trade-off. There is a discussion on this trade-off in the experimental section. \\n\\nRegarding 6, we agree that this approach can be applied on other deep learning tasks. We have decided to use this embedding problem in this paper simply because applications of deep learning techniques to these problems in healthcare is hindered by explainability requirements. We definitely plan on applying the framework on other problems in the future. \\n\\nRegarding 7, we do agree and have taken multiple passes at it to address these issues. \\n\\nFinally, we removed Figure 4. It was a bit distractive. It was meant to show how temporal attention coefficients were distributed at the decoder side. \\n\\nMany thanks for the very constructive comments.\"}",
"{\"title\": \"Interesting problem and hypothesis, inconclusive analyses and experiments\", \"review\": \"This paper is motivated in an interesting application, namely \\\"explainable representations\\\" of patient physiology, phrased as a more general problem of patient condition monitoring. Explainability is formulated as a communication problem in line with classical expert systems (http://people.dbmi.columbia.edu/~ehs7001/Buchanan-Shortliffe-1984/MYCIN%20Book.htm).\\nInformation theoretical concepts are applied, and performance is quantified within the minimum description length (MDL) concept.\\n\\nQuality & clarity \\nWhile the patient dynamics representation problem and the communication theoretical framing is interesting , the analyses and experiments are not state of the art. \\nWhile the writing overall is clear and the motivation well-written, there are many issues with the modeling and experimental work.\\nThe choice of MDL over more probabilistic approaches (as e.g. Hsu et al 2017 for sequences) could have been better motivated. The attention mechanism could have been better explained (attention of whom and to what?) and also the prior (\\\\beta). How is the prior established - e.g. in the MIMIC case study \\nThe experimental work is carried out within a open source data set - not allowing the possibility of testing explanations against experts/users. \\n\\nOriginality \\nThe main originality is in the problem formulation. \\n\\nSignificance \\nThe importance of this work is limited as the case is not clearly defined. How are the representations to be used and what type of users is it intended to serve (expert/patients etc) \\n\\nPros and cons\\n+ interesting problem\\n\\n-modeling could be better motivated\\n-experimental platform is limited for interpretability studies\\n\\n== \\nHsu, W.N., Zhang, Y. and Glass, J., 2017. Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in neural information processing systems (pp. 1878-1889).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach, incomplete work.\", \"review\": \"Summary:\\nThe authors propose a framework for training an external observer that tries explain the behavior of a prediction function using the minimal description principle. They extend this idea by considering how a human with domain knowledge might have different expectations for the observer. The authors test this framework on a multi-variate time series medical data (MIMIC-III) to show that, under their formulation, the external observer can learn interpretable embeddings.\", \"pros\": [\"Interesting approach: Trying to develop an external observer based on information theoretic perspective.\", \"Considering the domain knowledge of the human subject can potentially be an important element when we want to use interpretable models in practice.\"], \"issues\": \"(1) In 2.4: So between M and M^O, which one is a member of M_reg? \\n(2) On a related note to issue (1): In 2.4.1, \\\"Clearly, for each i, (M(X))_i | M^O(X) follows also a Gaussian distribution: First of all, I'm not sure if that expression is supposed to be p(M(X)_i | M^O(X)) or if that was intended. But either way, I don't understand why that would follow a normal distribution. Can you clarify this along with issue (1)?\\n(3) In 2.4.2: The rationale behind using attention & compactness to estimate the complexity of M^O is weak. Can you elaborate this in the future version?\\n(4) What do each figure in Figure 4 represent?\\n(5) More of a philosophical question: the authors train M and M^O together, but it seems more appropriate to train an external observer separately. If we are going to propose a framework to train an agent that tries to explain a black box function, then training the black-box function together with the observer can be seen as cheating. It can potentially make the job of the observer easier by training the black box function to be easily explainable. It would have been okay if this was discussed in the paper, but I can't find such discussion.\\n(6) The experiments can be made much stronger by applying this approach to a specific prediction task such as mortality prediction. The current auto-encoding task doesn't seem very interesting to apply interpretation.\\n(7) Most importantly: I like the idea very much, but the paper clearly needs more work. There are broken citations and typos everywhere. I strongly suggest polishing this paper as it could be an important work in the model interpretability field.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A Definition for Interpretability based on MDL Principle\", \"review\": \"This paper proposes a definition for interpretability which is indeed the same as model simplicity using the MDL principle. It has several issues:\\n\\n1) Interpretability is not the same as simplicity or number of model parameters. For example, an MLP is thought to be more interpretable than an RNN with the same number of parameters.\\n\\n2) The definition of explainability in Eq. (5) is flawed. It should not have the second term L(M(X)|M^o, X) which is the goodness of M^o's fit. You should estimate M^o using that equation and then report L(M^o|M^p) as the complexity of the best estimate of the model (subject to e.g. linear class). Mixing accuracy of estimation of a model and its simplicity does not give you a valid explainability score. \\n\\n3) In Section 2.4.2, the softmax operator will shrink the large negative coefficients to almost zero (reduces the degrees of freedom of a vector by 1). Thus, using softmax will result in loss of information. In the linear observer case, I am not sure why the authors cannot come up with a simple solution without any transformation.\\n\\n4) Several references in the text are missing which hinders understanding of the paper.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
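
The joint training scheme debated in this record, a task model M trained together with an observer M^o whose fit error plays the role of L(M(X)|M^o, X) (a squared error under the Gaussian assumption the authors defend), can be sketched as below. All names, shapes and the weighting `lam` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class LinearObserver(torch.nn.Module):
    """Hypothetical linear observer M^o with softmax-normalized
    (attention-like) coefficients, loosely following the discussion
    of Section 2.4.2 in the reviews above."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = torch.nn.Parameter(torch.zeros(out_dim, in_dim))

    def forward(self, x):
        attn = F.softmax(self.w, dim=-1)  # each row sums to 1
        return x @ attn.t()

def joint_loss(model, observer, x, y, task_loss_fn, lam=0.1):
    # Task loss plus an explainability regularizer: under a Gaussian error
    # model, L(M(X) | M^o, X) reduces to a squared error between the
    # black-box output and the observer's linear reconstruction.
    out = model(x)
    fit = F.mse_loss(observer(x), out)
    return task_loss_fn(out, y) + lam * fit
```

Because `fit` backpropagates into both networks, the black box is nudged toward behavior the observer can reproduce, which is exactly the design choice questioned in issue (5) of the second review and defended by the authors as a trade-off between expressiveness and explainability.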
|
rylWVnR5YQ | Context Dependent Modulation of Activation Function | [
"Long Sha",
"Jonathan Schwarcz",
"Pengyu Hong"
] | We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron changes firing modes according to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities at run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks. | [
"Artificial Neural Network",
"Convolution Neural Network",
"Long Short-Term Memory",
"Activation Function",
"Neuromodulation"
] | https://openreview.net/pdf?id=rylWVnR5YQ | https://openreview.net/forum?id=rylWVnR5YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkes6Ea0JV",
"r1xhUmtURX",
"rylOJXtUC7",
"ByemMfYIRX",
"SJeiT-t8Cm",
"Skg599BQRX",
"SJgMQuLd6Q",
"H1l5sLRf6X",
"HJxlr2rg6Q",
"HJgd_ZWanX",
"BJgfNNK92m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544635586699,
1543045972252,
1543045855585,
1543045643284,
1543045570755,
1542834834340,
1542117402012,
1541756577803,
1541590072202,
1541374320490,
1541211177684
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1427/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1427/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1427/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper1427/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1427/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1427/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper adds a new level of complexity to neural networks, by modulating activation functions of a layer as a function of the previous layer activations. The method is evaluated on relatively simple vision and language tasks.\\n\\nThe idea is nice, but seems to be a special case of previously published work; and the results are not convincing. Four of five reviewers agree that the work would benefit from: improving comparisons with existing approaches, but also improving its theoretical framework, in light of competing approaches.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"the merit needs to be validated\"}",
"{\"title\": \"Thank you very much for your helpful comments!\", \"comment\": \"I.1 We used a modulation to change the shape of the activation, it is still a single computation but the effect is multiplicative instead of additive and effects an entire layer rather than a single node.\\nI.2 Our modulation was set on the convolution layer only before the activation layer. It adds a very small amount of parameters, we also did a \\u2018lite\\u2019 DenseNet version to compare our modification with fewer parameters.\\nI.3 We used \\u2018run-time\\u2019 in the manuscript trying to express that our modulation method can take the input and use that information and change the shape of the activation function on the fly. \\nII. we thank the reviewer for picking out minor errors, all suggestions taken.\"}",
"{\"title\": \"Thank you very much for your helpful comments!\", \"comment\": \"1. We did not compare performances with relevant works since we are generally not trying to compete against the existing tools (i.e. attention). Our modification focused on a different aspect of changing the slope of the activation and can be applied on top of the related works. We will explore other implementations of existing works in conjunction with modulator nodes in the future works.\\n2. Regarding the CNN model performance, in the paper, we didn\\u2019t report the best DenseNet as the baseline from the original work. After we tried more complexed model setup, we got the following new results:\\n\\nModel\\t\\t\\t\\t Top-1 Accuracy\\n----------------------------------------------------------------------------\\nDenseNet-161(k=48)\\t\\t 93.79\\nModulatedDenseNet-161(k=48)\\t\\t93.95\\n\\nWe also thank the reviewer for mentioning other recent works, we will explore the comparison in the future works.\"}",
"{\"title\": \"Thank you very much for your helpful comments!\", \"comment\": \"1. The neuroscientific inspiration came simply from looking at what neurons were capable of, as opposed to a descriptive approach for why their capabilities may be useful for specific tasks. However, why would modulation benefit a supervised learning task is a completely valid and absolutely vital question. We did not find an easy location to address this question in the paper. However, for a brief explanation, supervised learning may benefit from the increased contrast in the amplitude of the signals propagating through the network. A sigmoidal modulator should theoretically learn to spatiotemporally inhibit signals, thereby increasing the relative gain of certain signals or pathways based. This context modulation is common in the visual system via reciprocal inhibition. Regarding \\u2018Intrinsic Excitability\\u2019, it is very true that a bias can capture it if node activation is envisioned as subthreshold voltage, however, if node activation is considered analogous to firing rate, once a neuron passes a threshold, \\u2018Intrinsic Excitability\\u2019 has a multiplicative effect on the firing rate frequency.\\n2. We thank the reviewer for suggesting the possible experimental setups that not included in this paper. We will strengthen the experiment section in the final paper and explore more in future works.\"}",
"{\"title\": \"Thank you very much for your helpful comments!\", \"comment\": \"I.1. We thank the reviewer for pointing out the comparison of our modification with the attention mechanism. Since we stated in the paper that our modification focused on a different aspect which is the slope of the activation function, we didn\\u2019t include this kind of comparison. In future works, we will explore the possibility of combining both methods.\\nI.2. From our experiments, the modulated vanilla network structure can outperform the counterpart by a small margin. After the epochs showed in the chart, the performance of the modulated network became almost flat. We tried to set the type of activation functions and optimization methods the same for comparing the performance of models in our work. Also, we can explore more combinations of setups in future works.\\nII. we thank the reviewer for picking out minor errors, all suggestions taken.\"}",
"{\"title\": \"Thank you for your helpful comments!\", \"comment\": \"1. We thank the reviewer for pointing out the \\u2018Network in Network\\u2019 paper (Lin et. al. 2014), we will add the discussion with this work in our final version. In short, our approach applied the context modulation only before the activation layer, on the contrary, Lin et. al.\\u2019s method was applied to every convolutional layer; also, the modulator weights were applied to all the feature maps providing a very easy to implement light weighted modification that was solely used to change the activation function slope.\\n2. In vanilla LSTM, the input gate can control how much the input will affect the cell status, but our modification focuses on a different part which is adding a modulator to control the shape of the activation function.\\n3. We focus on the context dynamic activation function which can have a side benefit of easing the gradient issue of other activation functions e.g. tanh in LSTM.\\n3. And we will clear the discussion section to make our claim more clear.\"}",
"{\"title\": \"Restricted/simplified version of network in network by Lin et. al. without clear benefits\", \"review\": \"Paper summary:\\n\\nThis paper proposes a method to scale the activations of a layer of neurons in an ANN depending on the inputs to that layer. The scaling factor, called modulation, is computed using a separate weight matrix and activation function. It is multiplied with each neuron's activation before applying its non-linearity. The weight matrix of the modulator is learned alongside the other weights of the network by backpropagation. The authors evaluate this modulated neural unit in convolutional neural networks, densely connected CNNs and recurrent networks consisting of LSTM units. Reported improvements above the baselines are between 1% - 3%.\", \"pro\": [\"With some minor exceptions the paper is clearly written and comprehensible.\", \"Experiments seem to have been performed with due diligence.\", \"The proposed modulator is easy to implement and applicable to (almost) all network architectures.\"], \"contra\": [\"Lin et. al. (2014) proposed a network in network architecture. In this architecture the output of each neural unit is computed using a small neural network contained in it and thus arbitrary, input-dependent activation functions can be realized and learned by each neuron. The proposed neural modulation mechanism in the paper at hand is in fact a more restricted version of the network-in-network model and the authors should discuss the relationship of their proposal to this prior work.\", \"When comparing the test accuracy of CNNs in Fig. 4 the result is questionable. If training of the vanilla CNN was stopped at its best validation loss (early stopping), the difference in accuracies would have been marginal. Also the choice of hyper-parameters may significantly affect the outcome of the comparison experiments. More experiments would be necessary to prove the advantage of this model over a wide range of hyper-parameters.\"], \"minor_points\": [\"It is unclear whether the modulator weights are shared along the depth of a CNN layer, i.e. between feature maps.\", \"Page 9: \\\"Our modification enables a network to use previous activity to determine its current sensitivity to input [...]\\\" => A vanilla LSTM is already capable of doing that using its input gate.\", \"Page 9: \\\"[...] the ability to adjust the slope of an Activation Function has an immediate benefit in making the back-propagation gradient dynamic.\\\" => In fact ReLUs do not suffer from the vanishing gradient problem. Furthermore DenseNets already provide a short-path for the gradient flow by introducing skip connections.\", \"The discussion at the end adds little value and rather seems to be a motivation of the model than a discussion of the results.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea but the overall state of the paper needs improvements\", \"review\": \"Summary:\\nThis paper introduces an architectural change for basic neurons in neural network. Assuming a \\\"neuron\\\" consists of a linear combination of the input, followed by a non-linear activation function, the idea is to multiply the output of the linear combination by a \\\"modulator\\\", prior to feeding it into the activation function. The modulator is itself a non-linear function of the input. Furthermore, in the paper's implementation, the modulators share weights across the same layer. The idea is demonstrated on basic vision and NLP tasks, showing improvements over the baselines.\", \"i___on_the_substance\": \"1. Related concepts and biological inspirations\\nThe idea is analogous to attention and gating mechanisms, as the authors point out, with the clear distinction that the modulation happens _before_ the activation function. It would have been interesting to experiment a combination of modulation and attention since they do not act on the same levels. \\nAlso, the authors claim inspiration from the biological neurons, however, they do not elaborate in depth on the connections to the neuronal concepts mentioned in the introduction. \\n\\n2. The performance of the proposed approach\\nIn the first experiment, the modulated CNN at 150 epochs seems to have comparable performance with the vanilla CNN at 60 (the latter CNN starts overfitting afterwards). Why not extending the learning curve to more epochs since the modulated CNN seems on a positive slope? \\nThe other experiments show some improvements over the baselines, however more experiments are necessary for claiming generality. Especially, the baselines remain too simple and there are some well-known well-performing architectures, for both image and text processing, that the authors could compare to (cf winning architectures for imagenet for instance). They could also take these same architectures and augment them with the modulation proposed in the paper. \\nFurthermore, an ablation study is clearly missing, what about different activation functions, combination with other optimization techniques etc.?\", \"ii___on_the_form\": \"1. the paper is sometimes unclear, even though the overall narrative is sound,\\n2. wiggly red lines are still present in the caption of Figure 1 right.\\n3. Figure 6 could be greatly simplified by putting its content in the form of a table, I don't find that the rectangles and forms bring much benefit here.\\n4. Table 5 (should it not be Figure?): it is not fully clear what the lines represent and based on which input. \\n5. some typos: \\n - abstract: a biological neuron change[s]\\n - abstract: accordingly to -> according to \\n - introduction > paragraph 2 > line 11: Each target node multipl[i]es\", \"iii___conclusion\": \"The idea is interesting and some of the experiments show nice results (eg. modulated densenet-lite outperforming densenet) but the overall paper needs further improvements. In particular, the writing needs to be reworked, the experiments to be consolidated, and the link to neuronal modulation to be further investigated.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Idea with no very convincing benefits, baseline comparison to improve.\", \"review\": \"Summary: this submission proposes a modification of neural network architectures that allows the modulation of activation functions of a given layer as a function of the activations in the previous layer. The author provide different version of their approach adapted to CNN, DenseNets and LSTM, and show it outperforms a vanilla version of these algorithms.\", \"evaluation\": \"In the classical context of supervised learning tasks investigated in this submission, it is unclear to me what could be the benefit of introducing such \\u201cmodulators\\u201d, as vanilla ANNs already have the capability of modulating the excitability of their neurons. Although the results show significant, but quite limited, improvements with respect to the chosen baseline, more extensive baseline comparisons are needed.\", \"details_comments\": \"1.\\tUnderlying principles of the approach\\nIt is unclear to me why the proposed approach should bring a significant improvement to the existing architectures. First, from a neuroscientific perspective, neuromodulators allow the brain to go through different states, including arousal, sleep, and different levels of stress. While it is relatively clear that state modulation has some benefits to a living system, it is less so for an ANN focused on a supervised learning task. Why should the state change instead of focusing on the optimal way to perform the task? If the authors want to use a neuroscientific argument, I would suggest to elaborate based on the precise context of the tasks they propose to solve. \\nIn addition, as mentioned several times in the paper, neuromodulation is frequently associated to changes in cell excitability. While excitability is a concept that can be associated to multiple mechanisms, a simple way to model changes in excitability is to modify the threshold that must be reached by the membrane potential of a given neuron in order for the cell to fire. Such simple change in excitability can be easily implemented in ANNs architectures by affecting one afferent neuron in the previous layer to the modification of this firing threshold (simply adding a bias term). As a consequence, if there is any benefit to the proposed architecture, it is very likely to originate specifically from the multiplicative interactions used to implement modulation in this paper. However, approximation of such multiplicative interactions can also be implemented using multiple layers network equipped with non-linear activations. Overall, it would be good to discuss these aspects in great detail in the introduction and/or discussion of the paper, and possibly find a more convincing justification for the approach.\\n\\n2.\\tWeak baseline comparison results\\nIn the CNN experiments, modulated networks are only compared with a single vanilla counterpart equipped with ReLu. There are at least two obvious additional baseline comparison that would be useful: what if the Re-Lu activations are replaced with fixed sigmoids? And what if batch-normalization is switched on/off (I could not find whether it was used at all). Indeed it, we should exclude benefits that are simply due to the non-linearity of the sigmoid, and batch normalization also implements a form of modulation at training that may provide benefits equivalent to modulation (or on the contrary, batch norm could implement a modulation in the wrong way). 
It would be better to look at all possible combinations of these architecture choices.\\nDue to lack of details in the paper and my personal lack of expertise in LSTMs, I will not comment on baselines for that part but I assume similar modifications can be done.\\nOverall, given the weak improvements in performance, it is questionable whether this extra degree of complexity should be added to the architecture. Additionally, I could not find the precise description of the statistical tests performed. Ideally, the test, the number of samples, the exact p-value, and whether the method of correction for multiple comparison should be included each time a p-value is mentioned.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting, but no convincing results and analysis\", \"review\": \"This paper proposes a scalar modulator adding to hidden nodes before an activation function. The authors claim that it controls the sensitivity of the hidden nodes by changing the slope of activation function. The modulator is combined with a simple CNN, DesneNet, and a LSTM model, and they provided the performance improvement over the classic models.\\n\\nThe paper is clear and easy to understand. The idea is interesting. However, the experimental results are not enough and convincing to justify it. \\n\\n1) The authors cited the relevant literature, but there is no comparison with any of these related works. \\n\\n2) Does this modulator actually help for CNN and LSTM architectures? and How? Recently, there are many advanced CNN and LSTM architectures. The experiments the authors showed were with only 2 layer CNNs and 1 layer LSTM. There should be at least some comparison with an architecture that contains more layers/units and performs well. There is a DenseNet comparison, but it seems to have an error. See 4) for more details.\\n\\n3) The authors mentioned that the modulator can be used as a complement to the attention and gate mechanisms. Indeed, they are very similar. However, the benefit is unclear. More experiments need to be demonstrated among the models with the proposed modulator, attention, and gates, especially learning behavior and performance differences. \\n\\n4) The comparison in Table 2 is not convincing. \\n- The baseline is too simple. For instance on CIFAR10, a simple CNN architecture introduced much earlier (like LeNet5 or AlexNet) performs better than Vanilla CNNs or modulated CNNs.\\n- DenseNet accuracy reported in Table 2 is different from to the original paper: DenseNet (Huang et al. 2017) CIFAR10 # parameters 1.0M, accuracy 93%, but in this paper 88.9%. Even the accuracy of modulated DenseNet is 90.2% which is still far from the original DenseNet.\\nFurthermore, there are many variations of DenseNet recently e.g., SparsenNet: sparsified DenseNet with attention layer (Liu et al. 2018), # parameters 0.86M, accuracy 95.75%. Authors should check their experiments and related papers more carefully.\", \"side_note\": \"page 4, Section 3.1 \\\"The vanilla DenseNet used the structure (40 in depth and 12 in growth-rate) reported in the original DenseNet paper (Iandola et al., 2014)\\\". This DenseNet structure is from Huang et al. 2017 not from Iandola et al. 2014.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"modulating scalar applied per-neuron\", \"review\": \"The paper introduces a new twist to the activation of a particular neuron. They use a modulator which looks at the input and performs a matrix multiplication to produce a vector. That vector is then used to scale the original input before passing it through an activation function. Since this modulating scalar can look across neurons to apply a per-neuron scalar, it overcomes the problem that otherwise neurons cannot incorporate their relative activation within a layer. They apply this new addition to several different kinds of neural network architectures and several different applications and show that it can achieve better performance than some models with more parameters.\", \"strengths\": [\"This is a simple, easy-to-implement idea that could easily be incorporated into existing models and frameworks.\", \"As the authors state, adding more width to a vanilla layer stops increasing performance at a certain point. Adding more complex connections to a given layer, like this, is a good way forward to increase capacity of layers.\", \"They achieve better performance than existing baselines in a wide variety of applications.\", \"The reasons this should perform better are intuitive and the introduction is well written.\"], \"weaknesses\": [\"After identifying the problem with just summing inputs to a neuron, they evaluate the modulator value by just summing inputs in a layer. So while doing it twice computes a more complicated function, it is still a fundamentally simple computation.\", \"It is not clear from reading this whether the modulator weights are tied to the normal layer weights or not. The modulator nets have more parameters than their counterparts, so they would have to be separate, I imagine.\", \"The authors repeatedly emphasize that this is incorporating \\\"run-time\\\" information into the activation. This is true only in the sense that feedforward nets compute their output from their input, by definition at run-time. This information is no different from the tradition input to a network in any other regard, though.\", \"The p-values in the experiment section add no value to the conclusions drawn there and are not convincing.\"], \"suggested_revisions\": [\"In the abstract: \\\"A biological neuron change[s]\\\"\", \"The conclusion is too long and adds little to the paper\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1gZV30qKQ | Transfer Value or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning | [
"Xingchao Liu",
"Tongzhou Mu",
"Hao Su"
] | Transferring learned knowledge from one environment to another is an important step towards practical reinforcement learning (RL). In this paper, we investigate the problem of transfer learning across environments with different dynamics while accomplishing the same task in the continuous control domain. We start by illustrating the limitations of policy-centric methods (policy gradient, actor-critic, etc.) when transferring knowledge across environments. We then propose a general model-based value-centric (MVC) framework for continuous RL. MVC learns a dynamics approximator and a value approximator simultaneously in the source domain, and makes decisions based on both of them. We evaluate MVC against popular baselines on 5 benchmark control tasks in a training-from-scratch setting and a transfer learning setting. Our experiments demonstrate that MVC achieves performance comparable to the baselines when trained from scratch, while significantly surpassing them in the transfer setting.
| [
"Reinforcement Learning",
"Transfer Learning",
"Control",
"Value function"
] | https://openreview.net/pdf?id=H1gZV30qKQ | https://openreview.net/forum?id=H1gZV30qKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1e6qoJbgE",
"S1xrTtvxk4",
"BJxZA6Pc0m",
"Syek2TPqA7",
"SJlUFav5CQ",
"B1xSFcPta7",
"SyeV8pj_6X",
"Hyx2gsaHTm",
"SJx7zg14am",
"rJlxYy146m",
"rJgTKis-6m",
"rklNBiiZaQ",
"B1er_zja3m",
"H1lXESRY2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544776597035,
1543694780839,
1543302600986,
1543302567009,
1543302525518,
1542187644580,
1542139211786,
1541950196060,
1541824523144,
1541824376089,
1541680005324,
1541679932312,
1541415532895,
1541166378998
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1425/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1425/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1425/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1425/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1425/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper studies whether the best strategy for transfer learning in RL is to transfer value estimates or policy probabilities. The paper also presents a model-based value-centric (MVC) framework for continuous RL. The reviewers raised concerns regarding (1) the coherence of the story, (2) the novelty and importance of the MVC framework and (3) the significance of the experiments. I encourage the authors to either focus on the algorithmic aspect or the transfer learning aspect and expand on the experimental results to make them more convincing. I appreciate the changes made to improve the paper, but in its current form the paper is still below the acceptance threshold at ICLR.\", \"ps\": \"in my view one can think of value as (shifted and scaled) log of policy. Hence, it is a bit ambiguous to ask whether to transfer value or policy.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs to be imrpvoed\"}",
"{\"title\": \"Response\", \"comment\": \"I greatly appreciate including some of the requested comparisons in the appendix. The new results indicate that existing algorithms can match or even outperform MVC in both learning from scratch and in the transfer learning setup, and since MVC has limited novelty, I will stick with my earlier assessment. However, the results also indicate that methods that do not have an explicit actor (MVC and SQL) transfer better, and thus I would recommend revising the paper and making transferability the main contribution (instead of MVC), and resubmitting to a future conference.\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"Dear reviewer, we have updated our paper. Results of two new baselines on HalfCheetah-v1 have be added in Appendix F (page 16). Please check our new version.\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"Dear reviewer, we have updated our paper. To address your concern about the convergence without the true reward and transition function, we analyze how the error of model approximators influence our algorithm in Appendix A.3 (page 11-12). Please check our new version.\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"We thank the reviewers for their valuable comments. Accordingly, we have updated our paper. The specific changes go as follows.\\n\\n1. We analyze how the error of model approximators influence our algorithm in Appendix A.3 (page 11-12)\\n2. We compare with more baselines (PPO and SQL) on HalfCheetah and show the results in Appendix F (page 16). We revise the figure 2 in Section 3 to present the illustrative example in a more clear way.\\n\\nPlease check our new version.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thanks for the valuable feedback! We will address your concerns one by one.\", \"q1\": \"Lack of rigorous evaluation of transferability.\", \"a\": \"It is not necessary to assume a deterministic dynamics model. We take the deterministic assumption just because of its simplicity and performance. Additionally, people use a deterministic transition approximator in continuous control extensively [1, 2, 3].\\n\\nWe assume that the reward function would not change because it is the reward that defines the task. This aligns with the problem we study in the paper -- only the dynamics changes but the *task* keeps the same across environments.\\u00a0\\n\\n[1] Kurutach, Thanard, et al. \\\"Model-Ensemble Trust-Region Policy Optimization.\\\" arXiv preprint arXiv:1802.10592 (2018).\\n[2] Feinberg, Vladimir, et al. \\\"Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning.\\\" arXiv preprint arXiv:1803.00101 (2018).\\n[3] Pathak, Deepak, et al. \\\"Zero-shot visual imitation.\\\" International Conference on Learning Representations. 2018.\", \"q2\": \"There are some possible solutions to the drawbacks of explicit policy, like replacing the actor with greedy maximization, or other value-based methods.\", \"q3\": \"In practice, the reviewer has seen policies transfer better than values.\", \"q4\": \"The experimental evaluation is not enough to allow drawing further conclusions.\", \"q5\": \"Why is it necessary to assume a deterministic dynamics model? Why only the dynamics model can vary between the domains and not also the reward (second paragraph in Section 1)?\"}",
"{\"title\": \"reply\", \"comment\": \"Thank you for your thorough response and clarifications. This made some things clearer, and a few things could be updated in the paper. You can go above the 8 page limit (the recommended limit is 8, but the hard limit is 10) to add a conclusion.\\n\\nI still think this is an interesting line of work, but might need some more research/experiments to gain novel insights and convince the reader.\"}",
"{\"title\": \"Limited novelty and inconclusive experiments\", \"review\": \"The paper proposes a model-based value-centric (MVC) deep RL algorithm for transfer learning. The algorithm optimizes neural networks to estimate the deterministic transitions and rewards, and uses the these models to learn a value function by minimizing the Bellman residual. Policy is represented implicitly as the action that greedily maximizes the return, expressed in terms of the learned models. The experiments show some improvement on transferability over DDPG and TRPO policies.\", \"the_paper_has_two_relatively_independent_stories\": \"The title and the introduction motivates the work by discussing the transferability of policies and value functions. However, instead of rigorously evaluating transferability, the paper proposes a model-based algorithm (MVC) for learning policies for continuous actions. Novelty of the new algorithm is quite limited, as it simply uses a learned dynamics model and reward function to learn a value function. Regarding transferability, introducing MVC seem quite orthogonal, and instead, it would be better to have a clear comparison of transferability using existing methods (e.g., DDPG). If having an explicit policy network hurts transferability, then existing algorithms can be modified by replacing the actor with greedy maximization, or alternatively other value based methods that do not involve actor network (NAF, SQL, QT-Opt) could be used.\\n\\nRegarding the intuition why values transfer better, the examples given in the introduction and Section 3 are good and intuitive. However, from my experience, the limited information content of a policy is only a partial reason for poor transferability, and in practice I have seen policies to transfer, in fact, better than values. The chosen viewpoint based on information content is nice as it can be proven mathematically, but might not be the most insightful and important in practice. The experimental evaluation is not rigorous enough to allow drawing further conclusions. For example, one could compare the two approaches using a wider set of RL algorithms, include more realistic environments (ideally transfer to real-world), or have a heat map illustrating transferability w.r.t. selected parameters. Also, no comparison to the state-of-the-art methods is provided (PPO, TD3, SAC).\", \"minor_points\": [\"Please include the theorems in Section 5 (and proofs in the appendix). The intuition provided in the body is not very clear.\", \"Why is it necessary to assume a deterministic dynamics model? Why only the dynamics model can vary between the domains and not also the reward (second paragraph in Section 1)?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"(cont'd) Response to Reviewer 2\", \"comment\": \"Q6: Explanation of some terminologies\", \"a\": \"We did not include a discussion/conclusion section due to the limitation of space. Here we would like to discuss the concerns you raised.\\n\\nFirst, if the low-dimensional assumption breaks down, our method might fail. However, we envision that high-quality and low-dimensional environment models will be accessible as state abstraction techniques maturate, and our method will show even greater potential then. In fact, the investigation into compact model-based learning methods has started to be advocated by some pioneer researchers in the RL community. For example, Prof. Richard Sutton said in a recent talk at Alberta University that, 'one next big step in AI is planning with a learned model'. \\n\\nSecond, if the goal of the task changes, while the value and the reward approximator will be useless, the learned transition approximator is still useful to accelerate the learning of new task.\\n\\nFinally, we want to emphasize again very few formal studies have been done in our transfer learning setting before, and our algorithm could be a novel perspective on the problem. And in { https://openreview.net/forum?id=H1gZV30qKQ¬eId=rJgTKis-6m }, we emphasize our contributions by positioning this work in the literature.\", \"q7\": \"Questions about Property1 and Property 2.\", \"q8\": \"The question of 'task' and the change of reward.\", \"q9\": \"Discussion/ Conclusion.\", \"references\": \"[1] Gu, Shixiang, et al. \\\"Continuous deep q-learning with model-based acceleration.\\\"\\u00a0International Conference on Machine Learning. 2016.\\n[2] Lillicrap, Timothy P., et al. \\\"Continuous control with deep reinforcement learning.\\\"\\u00a0arXiv preprint arXiv:1509.02971\\u00a0(2015).\\n[3] Feinberg, Vladimir, et al. \\\"Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning.\\\"\\u00a0arXiv preprint arXiv:1803.00101\\u00a0(2018).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewers for your appreciation of our work and valuable comments. We will clarify several misunderstandings and address your concerns.\\n\\nOur work is motivated by the observation that policy-centric methods are prone to get stuck in local minima in the transfer learning setting, as a small change to the dynamics may significantly change the optimal policy. Through theoretical, algorithmic, and experimental study, we show that our proposed value-centric method has better potential to transfer knowledge across environments in the setting that only dynamics changes but tasks keep the same.\", \"q1\": \"Is there a counterexample that fails value-based but not policy-based methods?\", \"a\": \"Our advantage comes mainly from using the value function but not the model. As a comparison, even with a model, the policy still cannot be transferred efficiently. We elaborate this point as below:\", \"q2\": \"Many assumptions for the example in Sec 3.\", \"q3\": \"Are there any guarantees on convergence when the true reward and transition functions are not accessible.\", \"q4\": \"Is the phenomenon more a problem with policy gradient than a general problem? Are there other methods that would be able to transfer policies better than policy gradient methods?\", \"q5\": \"Does the main advantage come from a smooth-changing model?\", \"there_are_two_mainstream_methods_for_continuous_rl_algorithms_that_utilize_models\": \"1. Emulate the real environment by a learned model and training a model-free RL agent in the emulated environment. In this case, note that policy gradient still cannot \\\"jump\\\" across remote actions. Actually, our toy example in Sec 3 has already revealed the limitation of this method. In the experiment, policy gradient method *converges* to a suboptimal local minimum even after abundant training, and note that the agent is training with access to the groundtruth environment model, even better than the learned model.\\n2. Model predictive control, which directly solves the action sequence {a_1, a_2, ... , a_T} according to the models. This is usually not feasible practically since it needs a very accurate model on the long horizon, while the approximated transition models learned by networks are only accurate on the short horizon.\"}",
"{\"title\": \"(cont'd) Response to Reviewer 1\", \"comment\": \"Finally, we want to emphasize our contributions by positioning this work in the literature.\\n\\nWhile transfer learning in RL has been investigated in different settings, our setting (only dynamics changes but tasks keep the same across environments) has not been *formally* studied yet. Prior work such as [6, 7] mostly targets at relevant but different settings. In methodology, they either need additional knowledge of what environment parameters would change or rely on domain randomization, thus are quite sample inefficient. We believe that there must exist fundamentally different mechanisms that result in sample-efficient algorithms in our setting, as humans can adapt to environmental change quickly with neither explicit knowledge about the change nor domain randomization.\\n\\nWe provide a fundamental and new perspective to this problem. We rethink what knowledge is really needed to adapt to the new environment. Our brand-new framework is based on the recognition that the knowledge of both environment dynamics and state values are indispensable during the transfer process.\", \"we_firmly_believe_that_we_have_two_significant_contributions\": [\"Our algorithm, MVC, is the *first attempt* to address the problem in a value-centric way. MVC is not a trivial extension of value iteration since value iteration relies on ground truth dynamics and has not been applied to deal with continuous actions.\", \"MVC also suggests that \\\"directly search the action that gives the maximum value\\\" could work if both state and action spaces are continuous, a point that has not been demonstrated before. As far as we know, MVC is the first continuous RL algorithm using this paradigm, though many previous works claim this paradigm \\\"impossible\\\" [1, 2, 3].\", \"We are looking forward to further discussions. Thank you!\"], \"references\": \"[1] Gu, Shixiang, et al. \\\"Continuous deep q-learning with model-based acceleration.\\\"\\u00a0International Conference on Machine Learning. 2016.\\n[2] Buckman, Jacob, et al. \\\"Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion.\\\"\\u00a0arXiv preprint arXiv:1807.01675\\u00a0(2018).\\n[3] Feinberg, Vladimir, et al. \\\"Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning.\\\"\\u00a0arXiv preprint arXiv:1803.00101\\u00a0(2018).\\n[4] Kurutach, Thanard, et al. \\\"Model-Ensemble Trust-Region Policy Optimization.\\\"\\u00a0arXiv preprint arXiv:1802.10592\\u00a0(2018).\\n[5] Lillicrap, Timothy P., et al. \\\"Continuous control with deep reinforcement learning.\\\"\\u00a0arXiv preprint arXiv:1509.02971\\u00a0(2015).\\n[6] Rajeswaran, Aravind, et al. \\\"Epopt: Learning robust neural network policies using model ensembles.\\\"\\u00a0arXiv preprint arXiv:1610.01283\\u00a0(2016).\\n[7] Yu, Wenhao, et al. \\\"Preparing for the unknown: Learning a universal policy with online system identification.\\\"\\u00a0arXiv preprint arXiv:1702.02453(2017).\\n[8] Pathak, Deepak, et al. \\\"Zero-shot visual imitation.\\\"\\u00a0International Conference on Learning Representations. 2018.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for the valuable feedback! We would like to first address your concerns and then restate our key contributions that might not have been fully appreciated.\\n\\nOur work is motivated by the observation that policy-centric methods are prone to get stuck in local minima in the transfer learning setting, as a small change to the dynamics may significantly change the optimal policy. Through theoretical, algorithmic, and experimental study, we show that our proposed value-centric method has better potential to transfer knowledge across environments in the setting that only dynamics changes but tasks keep the same.\", \"response_to_your_concerns\": \"\", \"q1\": \"By taking the max, the proposed method has no superiority to a deterministic policy.\", \"a\": \"In an actor-critic framework, the actor (policy network) outputs a policy that leads to a high reward if the learning is successful. However, the critic module in practice will just produce a rather poor estimation of the true value for this policy, even after extensive training. This is what we mean by an 'imprecise' estimation. In fact, this phenomenon has been reported in [5] and inspired recent research over the problem [3]. We are happy to add references and further elaborate this point in the paper.\", \"q2\": \"The models are not easy to learn in the high-dimensional situation, and thus Property 2 is not easy to satisfy.\", \"q3\": \"Confusion over 'precise' and 'imprecise'.\"}",
"{\"title\": \"Overall interesting, but concerns about the key idea and the applicability of the method\", \"review\": \"The paper considers the problem of transfer in continuous-action deep RL. In particular, the authors consider the setting where the dynamics of the task change slightly, but the effect on the policy is significant. They suggest that values are better suited for transfer and suggest learning a model to obtain these values.\\n\\nOverall, there are interesting ideas here, but I am concerned about whether the proposed approach actually solves the problem the authors consider and its general applicability.\", \"the_point_about_value_functions_being_better_suited_for_transfer_than_policies_is_indeed_true_for_greedy_policies\": \"it is well-known that they are discontinuous, and small differences in value can result in large differences in policy. This point is hence relevant in continuous control, where deterministic policies are considered.\\n\\nBut I am a bit confused as to why the proposed approach is better though. Eq. (4) still takes a max w.r.t. the estimated dynamics, etc. So even if the value function is continuous, by taking the max, we get a deterministic policy which has the same problem! That is probably why the performance is quite similar to DDPG. Considering a softer policy parameterization (a continuous softmax analogue) would be more in line with the authors\\u2019 motivation.\\n\\nThe proposed method itself doesn\\u2019t seem generally practical unfortunately, as it is suggested to learn the *model* of the environment for with a high-dimensional state space and a continuous action space, and do value iteration. In other words, if Property 2 was easy to satisfy, we wouldn\\u2019t be struggling with model-based methods as much as we are! However, I do appreciate that the authors illustrate the model loss curves in their considered domains. This raises a question of when are dynamics \\u201ceasy\\u201d.\\n\\nThe theoretical justification is quite weak, since the bound in Proposition 2 is too loose to be meaningful (as the authors themselves acknowledge). One way to mitigate this would be to support it empirically, by considering a range of disturbances of the specified form, and showing the shape of the bound on a small domain. The same thing can be done for the parametric modifications considered in the experiments -- instead of considering a set of instances, consider the performance as a function of the range of disturbances to the same dynamics parameter.\", \"minor_comments\": [\"The italicization of certain keywords in the intro is confusing, in particular precise, imprecise -- these aren\\u2019t well-defined terms, and don\\u2019t make sense to me in the mentioned context. The policy function isn\\u2019t more \\u201cprecise\\u201d than the value.\", \"I suggest including the statements of the propositions in the main text\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Has potential, needs some more investigation\", \"review\": \"This paper proposes a model-based value-centric (MVC) framework for transfer learning in continuous RL problems, and an algorithm within that framework. The paper attempts to answer two questions: (1) \\\"why are current RL algorithms so inefficient in transfer learning\\\" and (2) \\\"what kind of RL algorithms could be friendly to transfer learning by nature\\\"? I think these are very interesting questions to investigate, and researchers that work on transfer learning could benefit from insights on them. However, I am not yet convinced that this paper answers these questions satisfyingly. It would be great to hear the author's thoughts on my questions below.\\n\\nThe main insight I take away from the paper is that policy gradient methods are not suitable for transfer learning compared to model-based and value-centric methods for some assumptions (the reward function not changing and the transition dynamics being deterministic). This insight and the experiments in the paper are interesting, but I am unsure if the paper as it is presented now passes the bar for ICLR.\", \"in_general_the_paper_has_two_contributions\": \"A) analysis of value-centric vs policy-centric methods\\nB) an algorithm that is more useful for transfer learning.\\n\\nRegarding A)\\nThe authors argue that policy-centric algorithms are less useful for transfer learning than value-centric methods. \\n\\nThey first illustrate this with an example in Section 3. Since this is just one example, as a reader I wonder if it would not be possible to construct an example that shows the exact opposite, where value iteration fails but policy gradient doesn't. It feels like there are many assumptions that play into the given example (the reward function not changing; the transition dynamics being deterministic; the choice of using policy gradients and value iteration). \\n\\nIn addition, the authors provide a theoretical justification in the Appendix (which I have briefly scanned) and the intuition behind it in Section 5. From what I understand, the main problem arises from the policy's output space being a Gaussian distribution, which causes the policy being able to get stuck in a local optimum. Further, the authors show (in the Appendix) that under some assumtions the value function always converges. Are there any guarantees on this when we don't have access to the true reward and transition functions (which themselves could get stuck in a local optimum)?\\n\\nWould the authors say that the phenomenon is more a problem with the algorithm (policy gradient vs value iteration) than policy-centric and value-centric methods in general? Are there other methods that would be able to transfer policies better than policy gradient methods?\\n\\nRegarding B)\\nThe author's proposed method (MVC) has three components: the value function, the dynamics model and the reward model, all of which are learned by neural networks. It seems like the main advantage comes from using a model (since that's the aspect which changes when having to transfer to an altered MDP). Does the advantage of this method over DDPG and TRPO come from the fact that the dynamics model changes smoothly, and we have an approximation to it? 
Then it is not surprising that this outperforms a policy gradient method.\", \"other_comments\": [\"Could you explain what is meant by \\\"precise\\\" and \\\"imprecise\\\" when speaking about policies or value functions?\", \"Could you explain what is meant by the algorithm being \\\"accessible\\\" (e.g., Definition 1)?\", \"Section 2.1: In Property 1, what is f? Could you make explicit why we are interested in the two properties listed? By \\\"not rigorously\\\", do you mean that those properties are based on intuition? These properties are used later in the paper and the appendix, so I wonder how strong of an assumption this is.\", \"Section 2.2: Could you explain what is meant by \\\"task\\\"? You say that within the MDP, the transition dynamics and reward functions change, but the task stays the same. However, earlier (in the introduction) you state that only the environment dynamics change. I find it confusing that \\\"the task\\\" is something hand-wavy and not part of the formal definition of the MDP. In what exact ways can the reward function be influenced by the change in the transition dynamics?\", \"Section 3: Replace \\\"obviously\\\" with \\\"hence\\\"; remove \\\"it is not hard to find that\\\". This might not be so trivial for some readers.\", \"Appendix B: Refer to Table 1 in the text.\"], \"clarity\": \"The paper is written well, but I think some assumptions and their affects should be stated more clearly and put into context. The paper misses a discussion / conclusion section. It would be great to see a discussion on some of the assumptions; e.g., what if the low dimensional assumtion breaks down? What if we assume that also the reward function can change? The authors are in a unique position to give insight into these things (even if the results from the paper do not hold after dropping some assumptions) and it would be very helpful to share these with the reader in a discussion section.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
SkgZNnR5tX | Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis | [
"Avraham Ruderman",
"Richard Everett",
"Bristy Sikder",
"Hubert Soyer",
"Jonathan Uesato",
"Ananya Kumar",
"Charlie Beattie",
"Pushmeet Kohli"
] | Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings. But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case performance? In this work, we consider worst-case analysis of agents over environment settings in order to detect whether there are directions in which agents may have failed to generalize. Specifically, we consider a 3D first-person task where agents must navigate procedurally generated mazes, and where reinforcement learning agents have recently achieved human-level average-case performance. By optimizing over the structure of mazes, we find that agents can suffer from catastrophic failures, failing to find the goal even on surprisingly simple mazes, despite their impressive average-case performance. Additionally, we find that these failures transfer between different agents and even significantly different architectures. We believe our findings highlight an important role for worst-case analysis in identifying whether there are directions in which agents have failed to generalize. Our hope is that the ability to automatically identify failures of generalization will facilitate development of more general and robust agents. To this end, we report initial results on enriching training with settings causing failure. | [
"Reinforcement learning",
"Adversarial examples",
"Navigation",
"Evaluation",
"Analysis"
] | https://openreview.net/pdf?id=SkgZNnR5tX | https://openreview.net/forum?id=SkgZNnR5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gaH4sMlN",
"BJlJ9lVN0Q",
"HylA5R1cp7",
"SJgT7Rk5a7",
"B1xuCa1cTX",
"rJxuv6y5T7",
"B1xFkeP5hX",
"ByxGf2I5nQ",
"SygK7DGMnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544889412978,
1542893703274,
1542221461920,
1542221349271,
1542221264329,
1542221151826,
1541201888582,
1541200905779,
1540658977414
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1424/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1424/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1424/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1424/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1424/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1424/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1424/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1424/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1424/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents adversarial \\\"attacks\\\" to maze generation for RL agents trained to perform 2D navigation tasks in 3D environments (DM Lab).\\n\\nThe paper is well written, and the rebuttal(s) and additional experiments (section 4) make the paper better. The approach itself is very interesting. However, there are a few limitations, and thus I am very borderline on this submission: \\n - the analysis of why and how the navigation trained models fail, is rather succinct. Analyzing what happens on the model side (not just the features of the adversarial mazes vs. training mazes) would make the paper stronger.\\n - (more importantly) Section 4: \\\"adapting the training distribution\\\" by incorporating adversarial mazes into training feels incomplete. That is a pithy as giving an adversarial attack for RL trained navigation agents would be much more complete of a contribution if at least the most obvious way to defend the attack was studied in depth. The authors themselves are honest about it and write \\\"Therefore, it is possible that many more training iterations are necessary for agents to learn to perform well in each adversarial setting.\\\" (under 4.4 / Expensive Training).\\n\\nI would invite the authors to submit this version to the workshop track, and/or to finish the work started in Section 4 and make it a strong paper.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Very interesting approach, unsure about the usefulness in the current state of experiments\"}",
"{\"title\": \"Paper Updated\", \"comment\": \"Dear Reviewers,\\n\\nThank you for the constructive feedback and suggestions. We have now updated the paper based on your comments.\\n\\nMost notably, we have updated the paper to include our experiments on adapting the training distribution using adversarial and out-of-distribution examples (see new Section 4). This addresses a majority of the questions and concerns raised by all reviewers, and follows what we mentioned in our previous comments. For example, our adversarial search procedure now finds in-distribution examples and we also present results on incorporating adversarial examples into the training distribution.\\n\\nAdditionally, we have added details on the computational requirements and robustness of our adversarial search procedure to the appendix (Appendix A.2). We have also made minor changes throughout to improve the readability of the paper.\\n\\nWe believe these updates address the comments made by the reviewers. If the reviewers are satisfied by our responses, we hope they will consider raising their scores or letting us know in what ways they think the paper should be improved before the discussion period ends.\\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Overall response to all reviewers on out-of-distribution samples and adapting the training distribution\", \"comment\": \"As adapting the training distribution was a common point raised by all reviewers, we provide this overall response for all reviewers.\\n\\nWe absolutely agree that adapting the training distribution is an important direction, and during the work for this paper we investigated several methods for incorporating out-of-distribution mazes into the training distribution.\\n\\nSpecifically, we tried two approaches related to what the reviewers suggested: \\n - Altered Training: we altered the maze generator so that any adversarial maze could be generated and seen during training, meaning our adversarial search procedure produces in-distribution adversarial mazes. We accomplished this by randomly altering the original mazes, repeatedly using the same Modify function used by our adversarial search procedure, but without selecting for worst agent performance.\\n - Adversarial Training: we incorporated a large set of adversarial mazes into the training distribution, including the 500 mazes we used in our transfer analysis experiment. This was achieved by having two sets of mazes (the default distribution and the adversarial distribution) which were then randomly sampled every episode (i.e. 50% of training episodes were on an adversarial maze).\\n\\nTo summarise, we found that -- perhaps surprisingly -- neither of these approaches significantly improved the agent\\u2019s robustness to our adversarial search procedure (i.e. numerous adversarial mazes still exist and can be found), and therefore adapting the training distribution in such a way that improves performance in the dimension we are interested in is more challenging than initially thought.\\n\\nGiven this result, we originally did not include these experiments as we felt they were too preliminary and the topic was worth a thorough investigation in subsequent work. However, given the surprise of the result, the further work we have put into these results since submission, and that all reviewers suggested trying it, we will include an additional section in the paper on \\u2018Adapting the Training Distribution\\u2019 where we will include our experimental results.\", \"we_list_the_two_key_results_here_while_we_update_the_paper\": \"- Altered Training: we found that training on altered mazes made it marginally harder for our search method to find adversarial mazes (i.e. it took several more iterations), however overall this did not fix the problem.\\n - Adversarial Training: we found that incorporating adversarial examples into the training distribution led to agents performing well on those specific examples, however this did not lead to agents performing better against our adversarial search method.\\n\\nFor Adversarial Training, it is possible that adding many more adversarial mazes into the distribution will lead to more robustness (as is the case for some image classification datasets such as MNIST and CIFAR10). However, this would require significant new techniques for finding adversarial mazes more efficiently, and is another line of inquiry we are currently pursuing.\\n\\nWe also found that there were quantifiable differences between the adversarial and non-adversarial distribution of mazes (e.g. goals in small rooms, long path from the player to the goal). Indeed, as all reviewers mentioned, this motivates adding out-of-distribution / rare mazes into the training distribution. 
However, importantly, we note that while we identified features which correlated with adversarial mazes, there is not necessarily causation. To investigate causation, we tried handcrafting mazes with the identified features of the adversarial distribution, however we were not able to consistently create mazes which were adversarial. This point of correlation versus causation is one possible reason why adapting the training distribution is more challenging than initially thought.\\n\\nWe believe the addition of this new section will provide a more complete investigation while also opening up a number of interesting research directions for future work. We believe it is likely that, similar to adversarial examples in image classification domains, figuring out how to train agents to be more robust and general is likely to take significant time and effort, and therefore span many papers and works.\"}",
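A minimal sketch of the 50/50 episode-level sampling used in the Adversarial Training variant described above; the function name and the uniform draw within each pool are assumptions about details the comment does not specify.

```python
import random

def sample_training_maze(default_mazes, adversarial_mazes, p_adversarial=0.5):
    """Mixed training distribution: with probability p_adversarial, draw the
    episode's maze from the adversarial set, otherwise from the default set."""
    pool = adversarial_mazes if random.random() < p_adversarial else default_mazes
    return random.choice(pool)
```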
"{\"title\": \"Addressing questions on perfection, modifying the training distribution, and the computational requirements of the search method\", \"comment\": \"Thank you for your detailed comments. We hope our responses below address your comments.\\n\\n> 1) Almost no machine learning model is perfect... Why is it so surprising that this is also the case for navigation models? Why would one assume they should be perfect? \\n\\nWe agree that it is not necessarily surprising that these failure cases exist, and this is not the key result we wanted to highlight as surprising. While surprise is clearly subjective, we, and others we spoke to, found the following results surprising: (1) lack of generalisation to simpler situations, (2) how extreme the failures are (not just that the algorithm is imperfect, but 100x\\u2019s reduction in performance), (3) transfer of failures between different agents/architectures (so these failures aren\\u2019t just overly-specialised to specific training runs).\\n\\n> 1) I would therefore like to see if it is possible to modify the training distribution - by adding \\u201cadversarial\\u201d mazes.\\n\\nPlease see the overall response to all reviewers for a full answer to this question.\\n\\nSummarising our full answer for your question - yes it is possible to modify the training distribution. However, in the case of adding \\u201cadversarial\\u201d mazes to the training distribution, we found that while agents learned to perform well on the specific mazes added, they did not perform better on adversarial mazes overall. It is possible that adding many more adversarial mazes into the distribution will lead to more robustness (as is the case for some image classification datasets such as MNIST and CIFAR10), however this would require significant new techniques for finding adversarial mazes more efficiently, and this is another line of inquiry we are currently pursuing.\\n\\n> 2) The modification of the maze is not constrained to be small or imperceptible. In fact, it is quite huge - the generated mazes are far from the training distribution \\u2026 . This major difference with classical adversarial examples should be clearly acknowledged.\\n\\nPlease see the overall response to all reviewers for a full answer to this question. \\n\\nTo address this concern, we have altered the maze generator so that any adversarial maze can be generated and seen during training, meaning our adversarial search procedure produces in-distribution adversarial mazes. We accomplished this by randomly altering the original mazes using the same Modify function used by our adversarial search procedure. The result of this is that the generated mazes are no longer far from the training distribution.\\n\\nWe will add an acknowledgement to the difference between our current analysis and the classical adversarial examples, and highlight how our new section on adapting the training distribution helps address this point.\\n\\n> 3) It would be interesting to know the computational requirements of the search method. \\n\\nOur search method is performed using 10 candidate mazes per iteration, each evaluated 30 times, across 20 iterations. This is a total of 6000 episodes for the entire search procedure, and all episodes within one iteration can be evaluated in parallel (i.e. 20 batches of 300 episodes). Depending on the desired confidence level and resources available, the number of evaluations can be increased or decreased. 
Indeed, we found that evaluating each maze only 10 times rather than 30 produced similar results and led to a 3x speed up. In our experiments with 30 evaluations per maze, the entire search procedure took 30 minutes to complete, and only 9 minutes on average to find an adversarial maze where the probability of the agent finding the goal was below 50%. As suggested, we will add an explicit mention of this in the paper.\"}",
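Putting the numbers above together, the greedy worst-case search can be sketched as follows; `modify` and `evaluate` are stand-ins for the environment-specific pieces, and the hill-climbing acceptance rule (only keep a candidate if it lowers the estimated success probability) is an assumption rather than a confirmed detail of the paper's procedure.

```python
def adversarial_maze_search(initial_maze, modify, evaluate,
                            n_iterations=20, n_candidates=10, n_episodes=30):
    """Greedy worst-case search over maze layouts.

    modify(maze) returns a perturbed copy of a maze; evaluate(maze) runs one
    episode and returns 1.0 if the agent reached the goal, else 0.0."""
    def success_rate(maze):
        return sum(evaluate(maze) for _ in range(n_episodes)) / n_episodes

    worst_maze = initial_maze
    worst_score = success_rate(worst_maze)
    for _ in range(n_iterations):
        candidates = [modify(worst_maze) for _ in range(n_candidates)]
        # All episodes within one iteration are independent -> parallelizable.
        scores = [success_rate(m) for m in candidates]
        best = min(range(n_candidates), key=lambda i: scores[i])
        if scores[best] < worst_score:  # keep the maze the agent fails hardest on
            worst_maze, worst_score = candidates[best], scores[best]
    return worst_maze, worst_score
```

With the stated defaults this runs 10 × 30 × 20 = 6000 episodes, matching the figure quoted above.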
"{\"title\": \"Addressing comments on \\\"surprise\\\" and out-of-distribution samples\", \"comment\": \"Thank you for your constructive comments. We\\u2019re glad you enjoyed reading the paper and hope our responses below address your concerns.\\n\\nPlease see the overall response to all reviewers for a detailed answer on adapting the training distribution which we will include in the paper. Here we will provide answers specific to the points you raised.\\n\\n> I am unconvinced that many of the observed behaviors are \\\"surprising\\\". \\n\\nWhile surprise is clearly subjective, we, and others we spoke to, found the following results surprising: (1) lack of generalisation to simpler situations, (2) how extreme the failures are (not just that the algorithm is imperfect, but 100x\\u2019s reduction in performance), (3) transfer of failures between different agents/architectures (so these failures aren\\u2019t just overly-specialised to specific training runs). Importantly, we found that these (arguably surprising) behaviours were also present after we addressed the out-of-distribution concern which we explain in the next point.\\n\\n> The procedure for adversarially optimizing the maps is creating out-of-distribution map samples.\\n\\nTo address this, we have altered the maze generator so that any adversarial maze can be generated and seen during training, meaning our adversarial search procedure produces in-distribution adversarial mazes. We accomplished this by randomly altering the original mazes using the same Modify function used by our adversarial search procedure.\\n\\nThe result of this change is that our adversarial search procedure takes several more iterations to find a maze as-adversarial as before, however it is still possible to find such mazes and there is no significant improvement in agent performance.\\n\\n> I feel that this paper is only partially complete without an investigation of how these out-of-distribution samples can be used to improve the performance of the agents.\\n\\nWe agree, and the other reviewers also mentioned this, therefore we will be adding a section on this topic to the paper. We found that using these out-of-distribution samples to improve the performance of the agents is not straightforward, and the approaches we tried (which were also suggested by various reviewers) were not sufficient for significantly improving performance. \\n\\nIn the case of adding adversarial samples to training, we found that agents learned to solve the specific adversarial samples they were trained on, but did not perform better on adversarial samples overall.\"}",
"{\"title\": \"Addressing questions on robustness, re-weighting training samples, and measures of complexity\", \"comment\": \"Thank you for your positive comments. We hope our following responses address the three questions you raised.\\n\\n> 1. The search algorithm depicted in section 2 is only able to find a local optimum in the environment space. How robust is the result given different initializations?\\n\\nGreat question. We found that our search algorithm is robust to different initialisations\\n\\nIn Figure 3 (Section 3.1), we report the average performance of 50 independent optimisation runs (i.e. 50 different initialisations). Related to your question, in 44/50 (88%) of these runs our search algorithm found an adversarial maze where the agent\\u2019s probability of finding the goal was <50% (compared to 98% on the average maze). It is also possible to improve the robustness of our method by increasing the number of candidates considered per iteration (at the cost of increased time/computational requirements).\\n\\nThe 25th, 50th, and 75th percentiles of our optimisation method were as follows:\\n - p(reaching the goal): 0.031, 0.136, 0.279\\n - number of goals reached: 0.042, 0.136, 0.368\\n\\nWe will include a mention of these in the appendix.\\n\\n> 2. It is hence natural to ask, if the procedure described in this paper can be incorporated to enhance the performance by some simple heuristics like re-weighting the training samples.\\n\\nPlease see the overall response to all reviewers for a full answer to this question.\\n\\nSummarising our full answer for your question - yes the procedure described in the paper can be used to re-weight the training samples, and we will include a section to the paper describing these experiments. However, we found that doing this is not sufficient to significantly enhance the performance of agents. Specifically, we found that re-weighting adversarial examples improved performance on those examples, but did not lead to agents improving overall.\\n\\n> 3. I would like to see more exploration in different factors that accounts for the complexity.\\n\\nThis is an interesting point and relates to our motivation for adapting the training distribution. We investigated a number of different ways for measuring the complexity of mazes, comparing the distribution of various features between adversarial and non-adversarial mazes.\\n\\nNotably, we found differences in two measures of maze complexity: (1) the shortest path distance between the player\\u2019s start location and the goal location, and (2) the complexity of the shortest path to goal defined by the shortest path divided by the straight line distance to the goal. In particular, we found that adversarial mazes have a significantly longer shortest path to the goal on average, as well as a higher path complexity. \\n\\nThese findings are in part what motivated us to adapt the training distribution, for example by re-weighting training samples which were adversarial. However, we observed minimal improvement doing this, and one of our hypotheses for this is that while the measures above are correlated with mazes being adversarial, they are not necessarily the cause (we discuss this more in our overall response to all reviewers).\\n\\nWe will add these findings to the paper.\"}",
"{\"title\": \"A simple idea with interesting results, but lacking in broader impact\", \"review\": \"The authors present a simple technique for finding \\\"worst-case\\\" maze environments that result in bad performance. The adversarial optimization procedure is a greedy procedure, which alternately perturbs maze environments and selects the maze on which the trained agent performs worst for the next iteration. The authors highlight three properties of these mazes, which show how this adversarial optimization procedure can negatively impact performance.\", \"high_level_comments\": [\"I am unconvinced that many of the observed behaviors are \\\"surprising\\\". The procedure for adversarially optimizing the maps is creating out-of-distribution map samples (this is confirmed by the authors). The function for creating maps built-in to DeepMind Lab (the tool used to generate the random maps used in this paper) has a set of rules it uses to ensure that the map follows certain criteria. Visual inspection of the 'Iteration 20' maps in Figure 2 finds that the resulting adversarial map looks fundamentally different from the 'Initial Candidate' maps. As a result, many of the features present in the adversarial maps may not exist in the initial distribution, and the lack of generalizability of Deep RL has become a relatively common talking point within the community. That being said, I agree with the authors' claims about how this sort of analysis is important (I discuss this more in my second point).\", \"In my mind, the 'discovery' of the performance on these optimized out-of-distribution samples is, in my mind, not particularly impactful on its own. The Deep RL community is already rather aware of the lack of generalization ability for agents, but are in need of tools to make the agents more robust to these sorts of examples. For comparison, there is a community which researches techniques to robustify supervised learning systems to adversarial examples (this is also mentioned by the authors in the paper). I feel that this paper is only partially complete without an investigation of how these out-of-distribution samples can be used to improve the performance of the agents. The addition of such an investigation has the potential to greatly strengthen the paper. This lack of \\\"significance\\\" is the biggest factor in my decision.\", \"The first two results sections outlining the properties of the adversarially optimized mazes were all well-written and interesting. While generally interesting, that the less-complex A2CV agent shows better generalization performance than the more-complex MERLIN agent is also not overly surprising. Yet, it remains a good study of a phenomenon I would not have thought to investigate.\"], \"minor_comments\": [\"The paper is very clear in general. It was a pleasure to read, so thank you! The introduction is particularly engaging, and I found myself nodding along while\", \"Figures are generally excellent; your figure titles are also extremely informative, so good work here.\", \"Fig 4. It might be clearer to say \\\"adversarially optimized\\\" instead of simplly \\\"optimized\\\" in the (b) caption to be clearer that it the map that is being changed here, rather than the agent. 
Also, \\\"Human Trajectories\\\" -> \\\"Human Trajectory\\\", since there is only one.\", \"I am not a fan of saying \\\"3D navigation tasks\\\" for 2.5D environments (but this has become standard, so feel free to leave this unchanged).\", \"This paper is a well-written investigation of adversarially chosen out-of-distribution samples. However, the the high-quality of this narrow investigation still only paints a partial picture of the problem the authors set out to address. At the moment, I am hesitant to recommend this paper for acceptance, due to its relatively low \\\"significance\\\"; a more thorough investigation of how these out-of-distribution samples can be used.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official review\", \"review\": \"Update:\\n\\nI appreciate the clarifications and the extension of the paper in response to the reviews. I think it made the work stronger. The results in the newly added section are interesting and actually suggest that by putting more effort into training set design/augmentation, one could further robustify the agents, possibly up to the point where they do not break at all in unnatural ways. It is a pity the authors have not pushed the work to this point (and therefore the paper is not as great as it could be), but still I think it is a good paper that can be published.\\n\\n-----\\n\\nThe paper analyzes the performance of modern reinforcement-learning-based navigation agents by searching for \\u201cadversarial\\u201d maze layouts in which the agents do not perform well. It turns out that such mazes exist, and moreover, one can find even relatively simple maze configurations that are easily solved by humans, but very challenging for the algorithms.\", \"pros\": \"1) Interesting and relevant topic: it is important not only to push for best results on benchmarks, but also understand the limitations of existing approaches.\\n2) The paper is well written\\n3) The experiments are quite thorough and convincing. I especially appreciate that it is demonstrated that there exist simple mazes that can be easily solved by humans, but not by algorithms. The analysis of transferability of \\u201cadversarial\\u201d mazes between different agents is also a plus.\", \"cons\": \"1) I am not convinced worst-case performance is the most informative way to evaluate models. Almost no machine learning model is perfect, and therefore almost for any model it would be possible to find training or validation samples on which it does not perform well. Why is it so surprising that this is also the case for navigation models? Why would one assume they should be perfect? Especially given that the generated \\u201cadversarial\\u201d mazed lie outside of the training data distribution, seemingly quite far outside. Are machine learning models ever expected to perfectly generalize outside of the training data distribution? Very roughly speaking, the key finding of the paper can be summarized as \\u201cseveral recent navigation agents have problems finding and entering small rooms of the type they never saw during training\\u201d - is this all that significant?\\n\\nTo me, the most interesting part of the paper is that the models generalize as well as they do. I would therefore like to see if it is possible to modify the training distribution - by adding \\u201cadversarial\\u201d mazes, potentially in an iterative fashion, or just by hand-designing a wider distribution of mazes - so that generalization becomes nearly perfect and the proposed search method is not anymore able to find \\u201cadversarial\\u201d mazes that are difficult for the algorithm, but easy for humans.\\n\\n2) On a related note, to me the largest difference between the mazes generated in this paper and the classical adversarial images is that the modification of the maze is not constrained to be small or imperceptible. In fact, it is quite huge - the generated mazes are far from the training distribution. This is a completely different regime. This is like training a model to classify cartoon images and then asking it to generalize to real images (or perhaps other way round). Noone would expect existing image classification models to do this. 
This major difference with classical adversarial examples should be clearly acknowledged.\\n\\n3) It would be interesting to know the computational requirements of the search method. I guess it can be estimated from the information in the paper, but would be great to mention it explicitly. (I am sorry if it is already mentioned and I missed it)\\n\\nTo conclude, I think the paper is interesting and well executed, but the presented results are very much to be expected. To me the most interesting aspect of the work is that the navigation agents generalize surprisingly well. Therefore, I believe the work would be much more useful if it focused more on how to make the agents generalize even better, especially since there is a very straightforward way to try this - by extending the training set. I am currently in the borderline mode, but would be very happy to change my evaluation if the focus of the paper is somewhat changed and additional experiments on improving generalization (or some other experiments, but making the results a bit more useful/surprising) are added.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting paper\", \"review\": \"This is an interesting paper, trying to find the adversarial cases in reinforcement learning agents. The paper discusses several different settings to investigate how generalizable the worst-case environment is across different models and conjectured that it comes from the bias in training the agents. Overall the paper is well-written and the experiments seem convincing. I have two questions regarding the presented result.\\n\\n1. The search algorithm depicted in section 2 is only able to find a local optimum in the environment space. How robust is the result given different initializations?\\n\\n2. It is briefly discussed in the paper that the failure in certain mazes might come from the structural bias in the training and the \\u201ccomplex\\u201d mazes are under-represented in the training dataset. It is hence natural to ask, if the procedure described in this paper can be incorporated to enhance the performance by some simple heuristics like re-weighting the training samples. I think some discussion on this would be beneficial for verifying the conjecture made here.\\n\\n3. The authors compared the \\u201chardness\\u201d of the mazes based on the number of walls in the maze. But it is arguably a good metric as the authors also mentioned visibility and other factors in measuring the complexity of the task. I would like to see more exploration in different factors that accounts for the complexity and maybe compare different agents to see if they are sensitive in the same set of factors. \\n\\nTo summarize, I like the idea of the paper and I think the result can be illuminating and worth some more follow-up work to understand the RL training in general.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
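The greedy worst-case search that this record's reviews and responses keep returning to (alternately perturbing candidate mazes and keeping the one on which the trained agent performs worst) can be captured in a few lines. Below is a minimal sketch of that loop; the helpers `perturb(maze)` and `success_prob(agent, maze)` are hypothetical stand-ins for whatever local modification and Monte Carlo evaluation the paper actually uses.

```python
def adversarial_maze_search(agent, initial_maze, iterations=20, n_candidates=10,
                            perturb=None, success_prob=None):
    """Greedy local search for a maze the agent solves badly.

    perturb: proposes a locally modified copy of a maze (assumed interface).
    success_prob: estimates the agent's probability of reaching the goal,
        e.g., by averaging several rollouts (assumed interface).
    """
    current = initial_maze
    current_score = success_prob(agent, current)
    for _ in range(iterations):
        # Propose several local modifications and score each candidate.
        candidates = [perturb(current) for _ in range(n_candidates)]
        scored = [(success_prob(agent, m), m) for m in candidates]
        best_score, best_maze = min(scored, key=lambda pair: pair[0])
        # Greedy descent: keep a candidate only if the agent does worse on it.
        if best_score < current_score:
            current, current_score = best_maze, best_score
    return current
```

Because each step only ever accepts a maze the agent handles worse, the procedure converges to a local optimum of the environment space, which is exactly why the robustness-to-initialisation statistics in the authors' response (44/50 runs reaching <50% success) are the relevant sanity check.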
Bkl-43C9FQ | Spherical CNNs on Unstructured Grids | [
"Chiyu Max Jiang",
"Jingwei Huang",
"Karthik Kashinath",
"Prabhat",
"Philip Marcus",
"Matthias Niessner"
] | We present an efficient convolution kernel for Convolutional Neural Networks (CNNs) on unstructured grids using parameterized differential operators while focusing on spherical signals such as panorama images or planetary signals.
To this end, we replace conventional convolution kernels with linear combinations of differential operators that are weighted by learnable parameters. Differential operators can be efficiently estimated on unstructured grids using one-ring neighbors, and learnable parameters can be optimized through standard back-propagation. As a result, we obtain extremely efficient neural networks that match or outperform state-of-the-art network architectures in terms of performance but with a significantly lower number of network parameters. We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. Overall, we present (1) a novel CNN approach on unstructured grids using parameterized differential operators for spherical signals, and (2) we show that our unique kernel parameterization allows our model to achieve the same or higher accuracy with significantly fewer network parameters. | [
"Spherical CNN",
"unstructured grid",
"panoramic",
"semantic segmentation",
"parameter efficiency"
] | https://openreview.net/pdf?id=Bkl-43C9FQ | https://openreview.net/forum?id=Bkl-43C9FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bke0OEWWg4",
"SJeO8PlFAX",
"SygAhSlF0Q",
"rJgo4QetAQ",
"H1gjxtiv6X",
"r1lhQZeq37",
"rkxqEcLqsX",
"HklBQtGu9m",
"B1xhqVz_qQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544782965650,
1543206736000,
1543206325908,
1543205683021,
1542072562744,
1541173540470,
1540151857916,
1538955549201,
1538954388344
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1422/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1422/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1422/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1422/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1422/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1422/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1422/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1422/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a simple and effective convolution kernel for CNNs on spherical data (convolution by a linear combination of differential operators). The proposed method is efficient in the number of parameters and achieves strong classification and segmentation performance in several benchmarks. The paper is generally well written but the authors should clarify the details and address reviewer comments (for example, clarity/notations of equations) in the revision.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"decision\"}",
"{\"title\": \"Re: Review\", \"comment\": \"Thank you for your thorough review and helpful comments. We will try to address your concerns and suggestions below:\\n- Details on MeshConv\\nWe have added additional references as well as details for implementation of mesh differential operators in the Appendix. Additionally, we make our code anonymously available for reproducibility. Please check the code at the link below:\", \"https\": \"//drive.google.com/open?id=1z-hy3NVQtPxNcyDsRz-LqulwqDxNqAMo\\n\\n- Coordinate-dependence of the method (singularity at the poles).\\nThe method is coordinate dependent and coordinate singularity is an actual problem during implementation of the method. However, several tricks can be implemented to mitigate this issue. First, we use a spherical mesh subdivided from a base icosahedron that does not have a vertex that is at the pole. Then, all subsequent vertices will not be exactly residing on the pole, and numerically the singularity will not occur. Second, we always mute the signal at the poles (i.e., pad with zero). In practice this work extremely well, and tends not to affect the results. The major reason for rotating the spherical MNIST to the equator is in fact due to rotational invariance, since projecting the digits to the pole will turn the gradient components into radial and azimuthal ones, rendering the filters rotationally (around upward z axis) equivariant and the overall network invariant to rotations around z axis. We added discussions about its limitations in the revised paper.\\n\\n- Steerable CNNs\\nThank you for the suggestion. We have added the reference to the corresponding section.\\n\\n- Orientable models with equivariant layers\\nIndeed equivariant convolutional operators do not prevent the network from being able to distinguish transformed versions of the same input. As per suggestion, we altered the original S2CNN network to be non-invariant by swapping the final global pooling layers with an average pool only in the gamma dimension (the extra dimension in SO(3) to for preserving equivariance), followed by a flattening operation in the spatial dimension. Furthermore, we added an additional fully-connected layer for enhanced representational power. Testing this network on MNIST dataset, we have the following findings:\\n# of params: 162946\", \"accuracy\": \"98.08\\nWhich has more parameters and lower accuracy than our proposed model. The experiment results suggest that since these equivariant operators are specifically engineered to preserve equivariance, they tend to not be the most efficient for orientable tasks that do not require equivariance.\\nAdditionally, to verify that orientability as been resolved, we compare the per-class accuracy for both the original S2CNN (rot-invariant version) and the modified S2CNN (not rot-invariant version). Below are the comparisons:\", \"digit_class\": \"0 1 2 3 4 5 6 7 8 9\\n----------------------------------------------------------------------------------------------\", \"original_s2cnn\": \"0.99, 0.99, 0.98, 0.98, 0.96, 0.96, 0.98, 0.95, 0.96, 0.86\", \"modified_s2cnn\": \"0.99, 0.99, 0.98, 0.99, 0.97, 0.99, 0.99, 0.97, 0.97, 0.98\\n\\nResults show that removing the final pooling layer drastically improves accuracy for the digit \\u201c9\\u201d due to orientability, but overall lower accuracy compared to our spherical network suggests weaker representational power.\\n\\n- Visualizations\\nWe agree that visualizing the differential operators could be helpful for the reader. 
We have a visualization of an exemplary signal in Figure 1 that illustrates the differential operators.\"}",
"{\"title\": \"Re: Simple and efficient model on spherical data, large scale experiments need more benchmarks\", \"comment\": \"Thanks for your detailed and thorough review of our paper. We will try to address your questions and suggestions below:\\n- Runtime\\nWe evaluate the runtime for our classification network and compare with the PointNet++ model which is of comparable peak performance. We report these runtimes in Appendix D. Our best performing model achieves a 5x speedup compared with PointNet++.\\n\\n- 2D3DS Baseline (add S2CNN)\\nS2CNN and SphereNet were originally designed and evaluated for classification tasks. We corresponded with the authors of S2CNN and extended upon the original S2CNN architecture for semantic segmentation. We include the S2CNN results on 2D3DS dataset in the revised Figure 4. We detailed the modified S2CNN architecture in Appendix E. The best mIoU from the modified S2CNN model is significantly lower than ours (0.2581 vs 0.3829).\\n\\n- What is the number of points used in PointNet++?\\nThe number of points we use is 8192. We are using the same code from PointNet++ for the ScanNet task, where we do the data-augmentation by rotating around the z-axis and take subregions for the learning.\\n\\n- Difference between pano image and 3D point segmentation\\nPointNet++ was initially designed for and tested on point clouds sampled from a 3D model that requires fusing multiple scans from various scan locations. Segmenting a single panorama, which is the setup in our experiment, is a much more challenging yet realistic task for engineering applications. A single view panorama poses multiple additional challenges, such as serious occlusions in the scene, noises in the depth map, and sparsity in the point cloud for objects that are far away from the viewpoint. We believe that all these problems can prevent the point-based method from achieving comparable results as in the original setup using uniformly sampled 3D points.\\n\\n- Notation lacks clarity\\nThank you for pointing out the issue in notation clarity. In this context, x and y refer to the spatial coordinates that correspond to the two spatial dimensions over which the convolution is performed. Eqn 1 through 3 states the fact that since convolution (and cross-correlation) operators are linear, traditional convolution operators can be viewed as linear combinations of the original signal convolved with the basis functions of the kernel.\\n\\n- The terminology of \\u201cMeshConv\\u201d\\nWe have added the definition of this terminology to the introduction section, before its occurrence in Fig. 1.\\n\\n- Why is this method not rotationally invariant\\nThe method is not considered to be rotation invariant because the convolution operator is coordinate-dependent (depending on how the x-y coordinate vectors are defined on the manifold). Hence, the corresponding features will change due to a rotation, and the final pooled value will be different.\"}",
"{\"title\": \"Re:Simple and effective idea.\", \"comment\": \"Thank you for your response and constructive feedback! Below are some of our comments in response to your questions and suggestions:\\n(1) Analysis of computational cost\\nWhile the model computes second order spatial derivative (Laplacian) as a basis for the convolution operator, it ultimately amounts to a linear combination of these basis for the convolution step. Hence, training only involves first order gradients with respect to these weights to train (as opposed to using the Hessian). Generally, training time is difficult to benchmark as it involves many variables (hardware, DL framework etc.). However, we evaluate the inference runtime for our classification network and compare with the PointNet++ model which is of comparable peak performance. We report these runtimes in Appendix D. Our best performing model achieves a 5x speedup compared with PointNet++.\\n(2) Intuitive justification\\nWe appreciate your feedback. We have added more intuitive explanations to our paper in Sec 1 as well as the captions of Fig. 1.\"}",
"{\"title\": \"Simple and effective idea.\", \"review\": \"Summary:\\nThe paper proposes a novel convolutional kernel for CNN on the unstructured grids (mesh). Contrary to previous works, the proposed method formulates the convolution by a linear combination of differential operators, which is parameterized by kernel weights. Such kernel is then applied on the spherical mesh representation of features, which is appropriate to handle spherical data and makes the computation of differential operators efficient. The proposed method is evaluated on multiple recognition tasks on spherical data (e.g. 3d object classification and omnidirectional semantic segmentation) and demonstrates its advantages over existing methods.\\n\\nComments/suggestions:\\nI think the paper is generally well-written and clearly delivers its key idea/advantages. However, I hope the authors can elaborate the followings:\\n\\n1) Analysis of computational cost\\nIt would be helpful to elaborate more analysis on computational cost. The proposed formulation seems to involve the second-order derivatives in the backpropagation process (due to the first-order derivatives in Eq.(4)), which can be a computational bottleneck. It will be very useful to provide analysis on computational cost together with parameter efficiency study (Figure 3 and 4).\\n\\n2) Intuitive justification\\nIt would be great if the authors provide more intuitive descriptions on Eq.(4) (and possibly elaborate captions of Figure 1); what is the intuition of using differential operators? Why is it useful to deal with unstructured grids? How does it lead to improvement over the existing techniques?\", \"conclusion\": \"Overall, I think this paper has solid contributions; the proposed MeshConv operator is simple but effective to handle spherical data; the experiment results demonstrate its advantages over existing methods on broad applications, which are convincing. I think conveying more intuitions on the proposed formulation and providing additional performance analysis will help readers to understand paper better.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Simple and efficient model on spherical data, large scale experiments need more benchmarks\", \"review\": \"This article introduces a simple yet efficient method that enables deep learning on spherical data (or 3D mesh projected onto a spherical surface), with much less parameters than the popular approaches, and also a good alternative to the regular correlation based models.\\n\\nInstead of running patches of spherical filters, the authors takes a weighted linear combination of differential operators applied on the data. The method is shown to be effective on Spherical MNIST, ModelNet, Stanford 2D-3D-S and a climate prediction dataset, reaching competitive/state-of-the-art numbers with much less parameters..\\n\\nLess parameters is nice, but the argument could be strengthened if the authors could also show impressive results in terms of runtime. Typically number of parameters is not a huge issue for today\\u2019s deep networks, but for real-time robotics to be equipped with 3D perception, runtime is a much bigger factor.\", \"i_also_think_that_the_stanford_2d_3d_s_experiments_have_some_issues\": \"UNet and FCN-8s are good baselines, but other prior work based on spherical convolution are omitted here. E.g. S2CNN and SphereNet. S2CNN has released their code so it should be benchmarked.\\n\\nAdditionally, comparison to PointNet++ could be a little unfair. \\n\\ni) What is the number of points used in PointNet++? The author reported 1000 points for ModelNet which is ok for that dataset but definitely too small for indoor scenes. The original paper used 8192 points for ScanNet indoor scenes.\\n\\nii) Point-based can have data-augmentation by taking subregions of the panoramic scene, where as sphere-based method can only take a single panoramic image. The state-of-the-art method (PointSIFT) achieves ~70 mIOU on this dataset. PointNet(++) can also achieve 40-50 mIOU. Maybe the difference is at using regular image or panoramic images, but the panoramic image is just a combination of regular images so I wouldn\\u2019t expect such a large difference.\\n\\nIn conclusion, this paper proposes a novel deep learning algorithm to handle spherical data based on differential operators. It uses much less parameters and gets impressive results. However, the large scale experiments has some weaknesses. Therefore I recommend weak accept.\\n\\n----\\nSmall issues / questions:\\n\\n- Notation lacks clarity. What are x, y in Eqn. 1? The formulation of convolution is not very clear to me, but maybe due to my lack of familiarity in this literature.\\n\\n- In Figure 1, the terminology of \\u201cMeshConv\\u201d is first introduced, which should come earlier in the text to improve clarity.\\n\\n- In the article, the author distinguished their method with S2CNN that their method is not rotation invariant. I don\\u2019t understand this part. In the architecture diagram, if average pool is applied across all spherical locations, then why is it not rotation invariant?\\n\\n===\", \"after_rebuttal\": \"I thank the authors for addressing the comments in my review. It clarifies the questions I had about on the 2D3DS dataset (panorama vs. 3D points). Overall I feel this is a good model and have solid experiments. Therefore, I raise the score to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper presents a new convolution-like operation for parameterized manifolds, and demonstrates its effectiveness on learning problems involving spherical signals. The basic idea is to define the MeshConvolution as a linear combination (with learnable coefficients) of differential operators (identity, gradient, and Laplacian). These operators can be efficiently approximated using the 1-hop neighbourhood of a vertex in the mesh.\\n\\nIn general I think this is a strong paper, because it presents a simple and intuitive idea, and shows that it works well on a range of different problems. The paper is well written and mostly easy to follow. The appendix contains a wealth of detail on network architectures and training procedures.\\n\\nWhat is not clear to me is how exactly the differential operators are computed, and how the MeshConvolution layer is implemented. The authors write that \\\"differential operators can be efficiently computed using Finite Element basis, or derived by Discrete Exterior Calculus\\\", but no references or further detail is provided. The explanation of the derivative computation is:\\n\\\"The first derivative can be obtained by first computing the per-face gradients, and then using area-weighted average to obtain per-vertex gradients. The dot product between the per-vertex gradient value and the corresponding x and y vector fields are then computed to acquire grad_x F and grad_y F.\\\"\\nWhat are per-face gradients and how are they computed? Is the signal sampled on vertices or on faces? What area is used for weighting? What is the exact formula? What vector fields are you referring to? (I presume these are the coordinate vector fields). In eq. 5, what are F_i and F_j? What is the intuition behind the cotangent formula (eq. 5), and where can I read more? etc.\\n\\nPlease provide a lot more detail here, delegating parts to an appendix if necessary. Providing code would be very helpful as well.\\n\\nA second (minor) concern I have is to do with the coordinate-dependence of the method. Because the MeshConvolution is defined in terms of (lat / lon) coordinates in a non-invariant manner, and the sphere does not admit a global chart, the method will have a singularity at the poles. This is confirmed by the fact that in the MNIST experiment, digits are rotated to the equator \\\"to prevent coordinate singularity at the poles\\\". I think that for many applications, this is not a serious problem, but it would still be nice to be transparent and mention this as a limitation of the method when comparing to related work.\\n\\nIn \\\"Steerable CNNs\\\", Cohen & Welling also used a linear combination of basis kernels, so this could be mentioned in the related work under \\\"Reparameterized Convolutional Kernel\\\".\\n\\nTo get a feel for the differential operators, it may be helpful to show the impulse response (at different positions on the sphere if it matters).\\n\\nIn experiment 4.1 as well as in the introduction, it is claimed that invariant/equivariant models cannot distinguish rotated versions of the same input, such as a 6 and a 9. Although indeed an invariant model cannot, equivariant layers do preserve the ability to discriminate transformed versions of the same input, by e.g. representing a 9 as an upside-down 6. So by replacing the final invariant pooling layer and instead using a fully connected one, it should be possible to deal with this issue in such a network. 
This should be mentioned in the text, and could be evaluated experimentally.\\n\\nIn my review I have listed several areas for improvement, but as mentioned, overall I think this is a solid paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Orientability\", \"comment\": \"Thank you for your feedback and your interest in our paper! We would like to clarify our wording of this statement. Admittedly various current equivariant architectures can be made into non-equivariant counterparts with additional enhancements such as additional feature layers. However such enhancements would render the equivariant architectures into non-equivariant ones, therefore our general statement that \\\"assumed orientation information is crucial to the predictive capability of the network (for a range of problems)\\\" is nevertheless accurate.\\n\\nAlso, as a side note only for further discussion, equivariant architectures have a particular construct to maintain equivariance (such as adding an additional dimension for SO(3) layers in S2CNN), and tend not to be most efficient for orientable tasks.\"}",
"{\"comment\": \"In the introduction you say \\\"[...] assumed orientation information is crucial to the predictive capability of the network [...] omnidirectional images, where images are naturally oriented by gravity [...]\\\".\", \"let_me_inform_you_that_there_is_a_simple_trick_to_solve_this_problem\": \"add an extra input feature map that indicates the orientation of the gravitational field.\\n\\nIndeed if the symmetry completely broken like for the example of MNIST then you better have to give up the equivariant architecture. But for tasks when the symmetry is only partially broken like planets oriented by their axis of rotation then equivariant architectures are still relevant and the axis of rotation can be given as part of the input.\", \"title\": \"Simplistic criticism against equivariant architectures\"}"
]
} |
|
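The MeshConv operator at the center of this record replaces a stencil kernel with a learned linear combination of differential operators (identity, the two gradient components dotted with the coordinate vector fields, and the Laplacian). Below is a minimal per-vertex sketch in PyTorch; the precomputed V x V operators `grad_x`, `grad_y`, and `lap` (e.g., area-weighted per-face gradients and a cotangent Laplacian) are assumed inputs, and none of the names correspond to the authors' released code.

```python
import torch
import torch.nn as nn

class MeshConvSketch(nn.Module):
    """Convolution as a learned linear combination of differential operators."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # One weight per basis operator (identity, d/dx, d/dy, Laplacian)
        # and per input/output channel pair.
        self.weight = nn.Parameter(0.01 * torch.randn(4, in_ch, out_ch))
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, feats, grad_x, grad_y, lap):
        # feats: (V, in_ch) per-vertex signal; operators are (V, V) matrices
        # (dense here for simplicity; sparse in any realistic implementation).
        basis = torch.stack([feats, grad_x @ feats, grad_y @ feats, lap @ feats])
        # Contract the basis index b and input channels i:
        # (4, V, in_ch) with (4, in_ch, out_ch) -> (V, out_ch).
        return torch.einsum('bvi,bio->vo', basis, self.weight) + self.bias
```

Parameterizing a layer this way costs 4 * in_ch * out_ch weights instead of k^2 * in_ch * out_ch for a k x k kernel, which is one plausible reading of where the parameter savings highlighted by the reviewers come from.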
HJgeEh09KQ | Boosting Robustness Certification of Neural Networks | [
"Gagandeep Singh",
"Timon Gehr",
"Markus Püschel",
"Martin Vechev"
] | We present a novel approach for the certification of neural networks against adversarial perturbations which combines scalable overapproximation methods with precise (mixed integer) linear programming. This results in significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise linear activation functions. | [
"Robustness certification",
"Adversarial Attacks",
"Abstract Interpretation",
"MILP Solvers",
"Verification of Neural Networks"
] | https://openreview.net/pdf?id=HJgeEh09KQ | https://openreview.net/forum?id=HJgeEh09KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlJ3qJOlN",
"SJxdpsmL0m",
"B1xk_jXURm",
"B1eV9q7URm",
"HyxBSKmURQ",
"H1ep3mjh2Q",
"BJeGXqR9h7",
"S1xULFL9n7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545235110654,
1543023551827,
1543023463279,
1543023244494,
1543022908940,
1541350325512,
1541233178325,
1541200206384
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1421/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1421/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1421/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1421/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1421/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1421/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1421/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1421/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresess an important problem of neural net robustness verification, and presents a novel approach outperforming state of art; author provided details rebuttals which clarified their contributions over the state of art and highlighted scalability; this work appears to be a solid and useful contribution to the field.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A novel and scalable approach to robustness analysis of neural nets\"}",
"{\"title\": \"Response to main question\", \"comment\": \"Q1. My background is more theoretical, but I'm looking for theorems here, considering the complicatedness of the neural network. All I am looking for is probably some high-level explanation. \\n\\nR1. RefineAI is a new approach for proving the robustness of neural networks: it is more precise than current incomplete methods and more scalable than current complete methods. We believe this is a difficult problem and RefineAI is a promising step forward.\", \"some_key_insights_in_the_paper\": \"\", \"insight_i\": \"expensive but precise techniques like MILP solvers can be used for refinement earlier in the analysis but do not scale for refinement of neurons in later layers. However, they do substantially improve on incomplete verifiers.\", \"insight_ii\": \"not all neurons in the network contribute equally to the output and thus we do not need to refine all neurons in a layer. For this, we present a novel heuristic which improves the scalability of our approach while maintaining sufficient precision.\"}",
"{\"title\": \"Response to main questions\", \"comment\": \"Q1. Is MILP-based refinement applicable only for the first few layers of the network?\\n\\nR1. Generally, such refinement is most effective in the initial layers: as the analysis proceeds deeper into the network, it becomes harder for the MILP solver to refine the bounds within the specified time limit of 1 second. This is due to the increase in the number of integer variables caused by the increase in the number of unstable units (as explained in the general section on unstable ReLU). \\n\\nQ2. Why is l6 = 0? I think that it is easy to figure out that max(0,x4) is at least 0.\\n\\nR2. We assume you mean l6=-0.5. The negative lower bound for x6 = ReLU(x4) is due to the Zonotope ReLU transformer shown in Figure 2 which permits negative values for the output. \\n\\nQ3. I couldn't understand your sentence \\\"Note that the encoding ...\\\". Explaining a bit more about how bounds computed in previous layers are used will be helpful.\\n\\nR3. We mean that both the set of constraints added by the LP encoding (Ehlers (2017)) and the Zonotope transformer (Figure 2) for approximating ReLU behaviour depends on the neuron bounds from the previous layers. The degree of imprecision introduced by these approximations can be reduced by propagating tighter bounds through the network. We will clarify this.\\n\\nQ4. Do you mean that your algorithm looks into the future layers of each neuron xi and adds the weights of edges in all the reachable paths from xi?\\n\\nR4. Yes. We consider all outgoing edges from xi and add the absolute values of the corresponding weights.\\n\\nQ5. Why did you reduce epsilon from 0.07 to 0.02, 0.015 and 0.015?\\n\\nR5. The 5x100 network is trained using adversarial training and is thus more robust than the other networks which were not obtained through adversarial training. Thus, we chose a higher epsilon for it compared to the other networks (please see the comment in the general section on unstable ReLU).\"}",
"{\"title\": \"Response to main questions\", \"comment\": \"Q1. The verified robustness percentage of Tjeng & Tedrake is reported but the robustness bound is not.\\n\\nR1. The epsilon considered for this experiment is reported (page 7) and it is 0.03. \\n \\nQ2. Can RefineAI handle only piecewise linear activation functions? How about other activations such as sigmoid? If so, what modifications are needed?\\n\\nR2. RefineAI provides better approximations for ReLU because it uses tighter bounds returned by MILP/LP solvers. Similarly, we can refine DeepZ approximations for sigmoid (which already exist) by using better bounds from a tighter approximation, e.g., quadratic approximation.\\n\\nQ3. How is the verification problem affected by considering the untargeted attack as in this paper vs. the targeted attack in Weng et al (2018) and Tjeng & Tedrake (2017)?\\n\\nR3. Since the targeted attack is weaker, the complete verifier from Tjeng and Tedrake runs faster and the incomplete verifier from Weng et al. proves more properties in their respective evaluation than it would if they considered untargeted attacks as considered in this paper.\\n\\nQ4. How tight are the output bounds improved by the neuron selection heuristics? \\n\\nR4. We observed that the width of the interval for the correctly classified label is up to 37% smaller with our neuron selection heuristic.\"}",
"{\"title\": \"Answers to key questions\", \"comment\": \"We thank the reviewers for their feedback.\\n\\nBelow is a summary of key points, followed by further elaboration on each point. We also provide individual replies to each reviewer.\\n\\nSummary points [short]\\n\\n1. RefineAI is more precise than state-of-the-art incomplete verifiers.\\n2. RefineAI is more scalable than existing state-of-the-art complete verifiers, including the latest: https://openreview.net/forum?id=HyGIdiRqtm based on Tjeng & Tedrake.\\n3. RefineAI is applicable to much larger networks than shown in the paper.\\n4. Effectiveness of verification methods for neural networks is primarily affected by the number of unstable ReLU units, *not* by the number of neurons.\\n5. DeepZ [1], the domain used in our paper, is publicly available [3].\\n\\nWe are happy to provide further results or explanations if requested.\\n\\nSummary points [longer]\\n\\n\\u2192 RefineAI is more precise than all state-of-the-art incomplete verifiers.\\n\\nThis is because DeepZ has the same precision as Weng et. al (2018) and Kolter and Wong (2018) while being faster (unlike RefineAI, Weng et al. cannot handle convolutional nets). Then, based on DeepZ results, Refine AI computes more precise results.\\n\\n\\u2192 RefineAI is more scalable than all state-of-the-art complete verifiers, including the latest: https://openreview.net/forum?id=HyGIdiRqtm, based on Tjeng & Tedrake.\\n\\nThis is because the above method uses Box to compute initial bounds and uses more expensive methods if required. Unfortunately, in deeper layers, Box analysis becomes too imprecise and does not help. As a result, the above approach primarily relies on LP to obtain tight bounds for formulating a MILP instance for the whole network. Determining bounds with LP for all neurons in the larger networks is prohibitive. For example, on the 9x200 network from our paper, determining bounds with LP for all neurons already takes > 20 minutes (without calling the MILP solver which is more expensive than LP) whereas DeepZ computes significantly more precise bounds than Box for deeper layers in few seconds. \\n\\nThis gives us considerably fewer candidates to refine using LP/MILP than the Box analysis provides. Note that Tjeng & Tedrake (2017) is in turn significantly faster than Reluplex.\\n\\n\\u2192 RefineAI is applicable to much larger networks than shown in the paper.\\n\\nWe evaluated RefineAI on larger publicly available networks from [3]: three MNIST convolutional networks containing 3,604 (Conv1), 4,804 (Conv2), 34,688 (Conv3) neurons and one skip net containing 71,650 neurons. We also tried a CIFAR10 convolutional network with 4,852 neurons. As in the paper, we considered epsilon values for which the precision of DeepZ drops significantly. The performance numbers below show RefineAI scales to larger networks (DiffAI is a particular defense [2]):\\n\\n Dataset Network Epsilon Adversarial training Avg. runtime(s)\\n DeepZ RefineAI\\n MNIST\\t Conv1 0.1 None 1.1 357 \\n Conv2 0.2 DiffAI 6.8 602\\n Conv3 0.2 DiffAI 7 1011 \\n Skipnet 0.13 DiffAI 163 682\\n CIFAR10 Conv 0.012 DiffAI 3.9 262\\n\\n\\u2192 The effectiveness of a verification method for neural networks is primarily affected by the number of unstable ReLU units, *not* by the number of neurons.\", \"this_is_because_the_speed_of_a_complete_verifier_and_the_precision_of_an_incomplete_verifier_are_affected_mainly_by_unstable_relu_units\": \"those which can take both + and - values. 
Indeed, the speed of the MILP solver used in both RefineAI and the method based on Tjeng & Tedrake (2017) is adversely affected by the presolve approximations for such unstable units.\\n\\nThis explains why defending a network (e.g., via DiffAI) will make any verifier scale better (including RefineAI): because defended networks have much fewer unstable units than undefended networks.\\n\\n[1] Fast and Effective Robustness Certification, NIPS\\u201918\\n[2] Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML\\u201918.\\n[3] DeepZ analysis: https://github.com/eth-sri/eran.\"}",
"{\"title\": \"Interesting ideas but not persuasive enough\", \"review\": \"This paper proposed a mixed strategy to obtain better precision on robustness verifications of feed-forward neural networks with piecewise linear activation functions.\\n\\nThe topic of robustness verification is important. The paper is well-written and the overview example is nice and helpful.\", \"the_central_idea_of_this_paper_is_simple_and_the_results_can_be_expected\": \"the authors combine several verification methods (the complete verifier MILP, the incomplete verifier LP and AI2) and thus achieve better precision compared with imcomplete verifiers while being more scalable than the complete verifiers. However, the verified networks are fairly small (1800 neurons) and it is not clear how good the performance is compared to other state-of-the-art complete/incomplete verifiers.\", \"about_experiments_questions\": \"1. The experiments compare verified robustness with AI2 and show that RefineAI can verify more than AI2 at the expense of much more computation time (Figure 3). However, the problem here is how is RefineAI or AI2 compare with other complete and incomplete verifiers as described in the second paragraph of introduction? The AI2 does not seem to have public available codes that readers can try out but for some complete and incomplete verifiers papers mentioned in the introductions, I do find some public codes available:\\n* complete verifiers\\n1. Tjeng & Tedrake (2017): github.com/vtjeng/MIPVerify.jl\\n2. SMT Katz etal (2017): https://github.com/guykatzz/ReluplexCav2017\\n\\n* incomplete verifiers\\n3. Weng etal (2018) : https://github.com/huanzhang12/CertifiedReLURobustness\\n4. Wong & Kolter (2018): http://github.com/locuslab/convex_adversarial\\n\\nHow does Refine AI proposed in this paper compare with the above four papers in terms of the verified robustness percentage on test set, the robustness bound (the epsilon in the paragraph Abstract Interpretation p.4) and the run time? The verified robustness percentage of Tjeng & Tedrake is reported but the robustness bound is not reported. Also, can Refine AI scale to other datasets?\", \"about_other_questions\": \"1. Can RefineAI handle only piece-wise linear activation functions? How about other activation functions, such as sigmoid? If so, what are the modifications to be made to handle other non-piece-wise linear activation functions? \\n\\n2. In Sec 4, the Robustness properties paragraph. \\\"The adversarial attack considered here is untargeted and therefore stronger than ...\\\". The approaches in Weng etal (2018) and Tjeng & Tedrake (2017) seem to be able to handle the untargeted robustness as well? \\n\\n3. In Sec 4, the Effect of neural selection heuristic paragraph. \\\"Although the number of images verified change by only 3 %... produces tighter output bounds...\\\". How tight the output bounds improved by the neuron selection heuristics?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but not fully evaluated\", \"review\": [\"In the paper, the authors provide a new approach for verifying the robustness of deep neural networks that combines complete yet expensive methods based on mixed integer-linear programming (MILP) and incomplete yet cheap methods based on abstract interpretation or linear-programming relaxation. Roughly speaking, the approach is to run an abstract interpreter but to refine its results at early layers of a neural network using mixed integer-linear programming and some of later layers using linear programming. The unrefined results of the abstract interpreter help these refinement steps. They help prioritize or prune the refinement of the abstract-interpretation results at neurons at a layer. Using neural networks with 3, 5, 6, 9 layers and the MNIST dataset, the authors compared their approach with AI^2, which uses only abstract interpretation. This experimental comparison shows that the approach can prove the robustness of more examples for all of these networks.\", \"I found the authors' way of combining complete techniques and incomplete techniques novel and interesting. They apply complete techniques in a prioritized manner, so that those techniques do not incur big performance penalties. However, I feel that more experimental justifications are needed. The approach in the paper applies MILP to the first few layers of a given network, without any further simplification or abstraction of the network. One possible implication of this is that this MILP-based refinement is applicable only for the first few layers of the network. Of course, prioritization and timeout of the authors help, but I am not sure that this is enough. Also, I think that more datasets and networks should be tried. The experiments in the paper with different networks for MNIST show the promise, but I feel that they are not enough.\", \"p3: Why is l6 = 0? I think that it is easy to figure out that max(0,x4) is at least 0.\", \"p4: [li,yi] for ===> [li,ui]\", \"p4: gamma_n(T^#_(x|->Ax+b)) ===> gamma_n(T^#_(x|->Ax+b)(a))\", \"p4: subseteq T^#... ===> subseteq gamma_n(T^#...)\", \"p5: phi^(k)(x^(0)_1,...,x^(k-1)_p) ===> phi^(k)(x^(0)_1,...,x^k_p)\", \"p6: I couldn't understand your sentence \\\"Note that the encoding ...\\\". Explaining a bit more about how bounds computed in previous layers are used will be helpful.\", \"p6: I find your explanation on the way to compute the second ranking with weights confusing. Do you mean that your algorithm looks into the future layers of each neuron xi and adds the weights of edges in all the reachable paths from xi?\", \"p7: Why did you reduce epsilon from 0.07 to 0.02, 0.15 and 0.017?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"a refined approach for robust verification, but experimental part could be stronger\", \"review\": \"This paper introduces a verifier that obtains improvement on both the precision of the incomplete verifiers and the scalability of the complete verifiers. The proposed approaches combines over-parameterization, mixed integer linear programming, and linear programming relaxation.\\n\\nThis paper is well written and well organized. I like the simple example exposed in section 2, which is a friendly start. However, I begun to lose track after that. As far as I can understand, the next section listed several techniques to be deployed. But I failed to see enough justification or reasoning why these techniques are important. My background is more theoretical, but I'm looking for theorems here, considering the complicatedness of neural network. All I am looking for is probably some high level explanation.\\n\\nEmpirically, the proposed approach is more robust while time consuming that the AI2 algorithm. However, the contribution and the importance of this paper still seems incremental to me. I probably have grumbled too much about the lack of reasonings. As this paper is purely empirical, which is totally fine and could be valuable and influential as well. In that case, I found the current experiment unsatisfying and would love to see more extensive experimental results.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
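A concrete way to see why the "unstable" ReLU units discussed throughout this record dominate verification cost is the standard big-M style mixed-integer encoding used by MILP verifiers (in the spirit of Tjeng & Tedrake (2017); this is the textbook formulation, not necessarily RefineAI's exact encoding). For y = max(0, x) with precomputed bounds l <= x <= u and l < 0 < u, one binary variable a is introduced:

```latex
y \ge 0, \qquad y \ge x, \qquad y \le u\,a, \qquad y \le x - l\,(1 - a), \qquad a \in \{0, 1\}.
```

Relaxing a from {0, 1} to [0, 1] and projecting out a yields the LP (triangle) relaxation y <= u(x - l)/(u - l) of Ehlers (2017), while stable units (u <= 0 or l >= 0) need no binary variable at all; this is consistent with the authors' point that networks with fewer unstable units, such as DiffAI-defended ones, are verified faster.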
rJleN20qK7 | Two-Timescale Networks for Nonlinear Value Function Approximation | [
"Wesley Chung",
"Somjit Nath",
"Ajin Joseph",
"Martha White"
] | A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. Many of the algorithms for learning values, however, are designed for linear function approximation---with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control. | [
"Reinforcement learning",
"policy evaluation",
"nonlinear function approximation"
] | https://openreview.net/pdf?id=rJleN20qK7 | https://openreview.net/forum?id=rJleN20qK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1l0yEBlxN",
"rJlwi0bOAQ",
"HygVJ4_w0Q",
"rklRWsfvCX",
"SJep0KGPAX",
"SJx8UYzwAX",
"BJeWyAYr0Q",
"ByeNn94ipm",
"B1gw_Gm96m",
"rJeJQ-ud6m",
"HkxZ4Gxf6m",
"rJlD3X8n3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1544733669634,
1543147166963,
1543107548242,
1543084805879,
1543084500842,
1543084365939,
1542983129280,
1542306476208,
1542234734671,
1542123799207,
1541698088662,
1541329838770
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1420/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1420/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1420/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1420/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1420/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1420/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a new method to approximate the nonlinear value function by estimating it as a sum of linear and nonlinear terms. The nonlinear term is updated much slower than the linear term, and the paper proposes to use a\\nfast least-square algorithm to update the linear term. Convergence results are also discussed and empirical evidence is provided.\\n\\n\\nAs reviewers have pointed out, the novelty of the paper is limited, but the ideas are interesting and could be useful for the community. I strongly recommend taking reviewers comments into account for the camera ready and also add a discussion on the relationship with the existing work.\\n \\nOverall, I think this paper is interesting and I recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work on approximation value function\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for answering my questions. I have adjusted my score accordingly.\\nI suggest you to add few sentences to clarify the novelty, as you explained to me in your response (especially about the eligibility traces and the target/convergence).\\nAlso, I would suggest to move the catcher experiments to the appendix unless you can get better results. They are interesting, but not that meaningful at the current stage. You can use the space to move some text from the appendix to the main part, such as Algorithm 1.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the clarifications. I do believe this work is important and relevant, and I have updated my score to recommend acceptance.\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for the constructive feedback and comments.\\n\\nConcerning the projection step of the parameters, this is mainly a technical requirement for the proofs. This is not a strong requirement; we can initialize the compact subset arbitrarily and gradually increase it until it encompasses the whole parameter space (this is mentioned in remark 1 of appendix B). In practice, we do not utilize projection and simply let the parameters be unbounded.\\n\\nWe have added results for the utility of optimizing the MSPBE for the other domains in the appendix. \\n\\nWe have chosen to focus on evaluating theoretically-sound algorithms for policy evaluation. Many algorithms used for learning value functions in actor-critic methods either aggregate data from multiple agents to do nonlinear TD updates or use experience replay to resample minibatches with nonlinear TD updates. Nonlinear TD (and fitted Q-iteration for nonlinear Q) does not have any convergence guarantees ---even in the batch setting, so we did not include these variants. The convergence issues are due to the combination of nonlinear function approximation and bootstrapping the value targets, which is not solved by batching.\\n\\nWe used the incremental version of LSTD which uses the Sherman-Morrison formula to do online updates on the A_inv matrix that LSTD requires (as is done in Section 3, page 10 of \\u201cLeast squares policy evaluation algorithms with linear function approximation\\u201d, Nedic and Bertsekas 2002). We have added an algorithm box in appendix D to clarify this in the paper. \\n\\nFor the control experiments, we have added results for the Levine et al. algorithm as another baseline in addition to vanilla DQN for nonimage catcher. Results for image catcher will also be added later.\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for the helpful feedback.\\n\\nWe agree that the TTN idea is straightforward---and likely already in use---but believe it is different from LS-DQN (Levine et al.). LS-DQN uses one head, and computes a fast linear update after a larger number of steps. The distinction is subtle, but important. The strategy from Levine et al. does not allow let us take advantage of the fast learning of linear methods (since it is only executed very infrequently) and further affects the learning of the features. In their paper, the FQI solution was only computed every 500000 steps (the DQN target net was updated every 10000) while TTN recomputes the weights every 10000 steps. The authors also mention that it was necessary to take only a small step in the direction of the recomputed weights so that it did not destabilize learning for the rest of the network. In TTNs, the fast linear weights do not affect the features, so we do not suffer from such stability problems.\\n\\nWe have edited the paper to expand on the theory section and provided a formal theorem statement to clarify the contribution.\\n\\nWe agree that a more thorough investigation of the control case would be beneficial in the future. Here, we are more concerned with the policy evaluation setting and provide some preliminary results for control to show the potential of the TTN approach.\\n\\nSBEED is a control algorithm which is shown to be stable with any differentiable function class (contrary to ours which is a prediction algorithm) using Nestrov\\u2019s smoothing technique (to overcome the structural limitations of the max Bellman operator) and a primal-dual formulation (to overcome the double sampling). However, it is important to note that the stability theorem provided in the SBEED paper claims only mean convergence. Ours is an almost sure convergence claim. We will however mention SBEED as related work.\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for your constructive feedback.\\n\\nWe have edited the abstract to clarify that we are indeed considering nonlinear value function approximation by using a combination of nonlinear (learned) features and a linear value function of those features. \\n\\nAlthough target networks and the TTN ideas have similar goals---improving the stability of learning nonlinear value functions---the approaches are fairly different. Target networks attempt to stabilize the learning process by fixing the TD targets for some number of steps. On the other hand, TTNs provide stability by \\u2018fixing\\u2019 (slowing down) the change in the features. These two approaches are orthogonal and can definitely be combined. Also, note that the use target networks alone does not provide any convergence guarantees with nonlinear function approximation unlike TTNs.\\n\\nTo clarify, we did not use the MSBE in any of the experiments. Indeed, the double-sampling problem makes its use infeasible in practice and we mention this under equation (6) on page 4. As such, the MSTDE was the surrogate loss of choice for most of the experiments.\\n\\nWe want to clarify that GAE (Schulman et al.) use an analog to the lambda-return with advantage functions, but they do not use eligibility traces. Instead, they use the forward view to compute the desired quantities by accumulating a batch of data. To the best of our knowledge, there are no theoretically-sound extensions of eligibility traces (ie. the backward view) for nonlinear function approximation.\\n\\nFor the control experiments, it is true that the runs of TTN---and DQN---have relatively high variance. We agree that these results definitely leave room for improvement but our goal is to give some preliminary results to show that TTNs can be a promising direction for the control setting in addition to policy evaluation. Further, such variability in control is not unique to this paper, and is a larger research question in RL.\"}",
"{\"title\": \"Backtracking on my novelty comment\", \"comment\": \"Thank you for clarifying the novelty. I have adjusted my score accordingly.\\nHowever, I urge the authors to clarify the theoretical novelty in their paper and include a sketch proof in the main paper to provide intuition as to why inclusions are necessary.\"}",
"{\"title\": \"Promising core idea\", \"review\": \"The paper introduces an algorithm (TTN) for non-linear online and on-policy value function approximation. The main novelty of the paper is to view non-linear value estimation as two separate components. One of representation learning from a non-linear mapping and one of linear value function estimation. The soundness of the approach stems from the rate at which each component is updated. The authors argue that if the non-linear component is updated at a slower rate than the linear component, the former can be viewed as fixed in the limit and what remains is a linear value function estimation problem for which several sound algorithms exist. TTN is evaluated on 4 domains and compared to several other value estimation methods as well as DQN on a control problem with two variations on the task's state space.\\n\\nI'll start off the review by stating that I find the idea and theoretical justification of separating the non-linear and linear parts of value function estimation to be quite interesting, potentially impacting RL at large. Indeed, this view promises to reconcile latest developments in deep RL with the long-lasting work on RL with linear function approximators. However, there are a few unclear aspects that do not allow one to be fully convinced that this paper lives up to the aforementioned promise.\\n\\n- For the theoretical contribution. The authors claim that the main challenge was to deal with the potentially dependent features outputted by the neural network. It is dealt with by using a projection that projects the linear parameters of the value function to a compact subset of the parameter space. Bar the appendix, there is no mention of this projection in the paper, on how this compact subset (that must include the optimal parameter) is defined and if this projection is merely a theoretical tool or if it was necessary to implement it in practice. There is a projection for the neural net weights too but I can see how for these it might not be necessary to use in practice. However, for the linear weights, as their computation potentially involves inverting ill-conditioned matrices, they can indeed blow-up relatively fast.\\n\\n- I found the experimental validation to be quite rich but not done in a systematic enough manner. For instance, the experiment \\\"utility of optimizing the MSPBE\\\" demonstrates quite nicely the importance of each component but is only performed on a single task. As the theoretical analysis does not say anything about the improvements the representation learning can have on the linear value estimation nor if the loss used for learning the representation effectively yields better features for the MSPBE minimization, this experiment is rather important and should have been performed on more than a single domain.\\n\\nSecondly, I do not find the chosen baselines to be sufficiently competitive. The authors state in Sec. 2 that nonlinear-GTD has not seen widespread use, but having this algorithm as the main competitor does not provide strong evidence that TTN will know a better fate. In the abstract, it is implied that outside of nonlinear-GTD, value function approximation methods are not sound. In approximate policy iteration algorithms such as DDPG or TRPO, there is a need in performing value estimation. It is done by essentially a fitted-Q iteration procedure which is sound. Why wasn't TTN compared to these methods? 
If it is because they are not online, why being online in the experiments of the paper is important? Showing that TTN is competitive with currently widespread methods for value estimated would have been more convincing than the comparison with nonlinear-GTD.\\n\\nThirdly, for the sake of reproducibility, as LSTD seems to be the method of choice for learning the linear part, it would have been adequate to provide an algorithm box for this version as is done for GTD2/TDC. LSTD is essentially a batch algorithm and there could be many ways to turn it into an online algorithm. With which algorithm were the results in the experimental section obtained?\\n\\nFinally, on the control task, the authors add several modifications to their algorithm which results in an algorithm that is very close to that of Levine et al., 2017. Why was not the latter a baseline for this experiment? Especially since it was included in other experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting algorithm, although similar methods and claims have been proposed recently\", \"review\": \"This paper proposes Two-Timescale Networks (TTNs), a reinforcement learning algorithm where feature representations are learned by a neural network trained on a surrogate loss function (i.e. value), and a value function is learned on top of the feature representation using a \\\"fast\\\" least-squares algorithm. The authors prove the convergence of this method using methods from two time-scale stochastic approximation.\\n\\nConvergent and stable nonlinear algorithms is an important problem in reinforcement learning, and this paper offers an interesting approach for addressing this issue. The idea of using a \\\"fast\\\" linear learner on top of a slowly changing representation is not new in RL (Levine et. al, 2017), but the authors somewhat motivate this approach by showing that it results in a stable and convergent algorithm. Thus, I view the convergence proof as the main contribution of the paper.\\n\\nThe paper is written clearly, but could benefit from more efficient use of space in the main paper. For example, I feel that the introduction and discussion in Section 3 on surrogate objectives could be considerably shortened, and a formal proof statement could be included from the appendix in Section 4, with the full proof in the appendix.\\n\\nThe experimental evaluation is detailed, and ablation tests show the value of different choices of surrogate loss for value function training, linear value function learning methods, and comparisons against other nonlinear algorithms such as DQN and Nonlinear GTD/TD/variants. A minor criticism is that it is difficult to position this work against the \\\"simpler but not sound\\\" deep RL methods, as the authors only compare to DQN on a non-standard benchmark task.\\n\\nAs additional related work, SBEED (Dai et. al, ICML 2018) also shows convergence for a nonlinear reinforcement learning algorithm (in the control setting), and quantifies the convergence rate while accounting for finite sample error. It would be good to include discussion of this work, although the proposed method and proofs are derived very differently.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A paper with a lot of potential but not well structured. I suggest to rewrite it for a journal track.\", \"review\": \"The paper proposes a two-timescale framework for learning the value function and a state representation altogether with nonlinear approximators. The authors provide proof of convergence and a good empirical evaluation.\\n\\nThe topic is very interesting and relevant to ICLR. However, I think that the paper is not ready for a publication.\\nFirst, although the paper is well written, the writing can be improved. For instance, I found already the abstract a bit confusing. There, the authors state that they \\\"provide a two-timescale network (TTN) architecture that enables LINEAR methods to be used to learn values [...] The approach facilitates use of algorithms developed for the LINEAR setting [...] We prove convergence for TTNs, with particular care given to ensure convergence of the fast LINEAR component.\\\"\\nYet, the title says NONLINEAR and in the remainder of the paper they use neural networks. \\n\\nThe major problem of the paper is, however, its organization. The novelty of the paper (the proof of convergence) is relegated to the appendix, and too much is spent in the introduction, when actually the idea of having the V-function depending on a slowly changing network is also not novel in RL. For instance, the authors say that V depends on \\\\theta and w, and that \\\\theta changes at slower pace compared to w. This recalls the use of target networks in the TD error for many actor-critic algorithms. (It is not the same thing, but there is a strong connection).\\nFurthermore, in the introduction, the authors say that eligibility traces have been used only with linear function approximators, but GAE by Schulman et al. uses the same principle (their advantage is actually the TD(\\\\lambda) error) to learn an advantage function estimator, and it became SOTA for learning the value function.\\n\\nI am also a bit skeptical about the use of MSBE in the experiment. First, in Eq 4 and 5 the authors state that using the MSTDE is easier than MSBE, then in the experiments they evaluate both. However, the MSBE error involves the square of an expectation, which should be biased. How do you compute it? \\n(Furthermore, you should spend a couple of sentences to explain the problem of this square and the double-sampling problem of Bellman residual algorithms. For someone unfamiliar with the problem, this issue could be unclear.)\\n\\nI appreciate the extensive evaluation, but its organization can also be improved, considering that some important information are, again, in the appendix.\\nFurthermore, results on control experiment are not significative and should be removed (at the current stage, at least). In the non-image version there is a lot of variance in your runs (one blue curve is really bad), while for the image version all runs are very unstable, going always up and down. \\n\\nIn conclusion, there is a lot of interesting material in this paper. Even though the novelty is not great, the proofs, analysis and evaluation make it a solid paper. However, because there is so much do discuss, I would suggest to reorganize the paper and submit directly to a journal track (the paper is already 29 pages including the appendix).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for your constructive feedback and comments. We look forward to further discussion.\\n\\nConcerning the novelty of the algorithm, we would like to emphasize that the main focus of the two-timescale network (TTN) architecture is to separate the value and feature learning processes. This is in contrast to the more popular approach of jointly learning values and features in an end-to-end manner. Splitting these learning processes is key to providing convergence guarantees and enabling the use of advances for linear policy evaluation which do not have direct extension to nonlinear function approximation, such as least-squares methods and eligibility traces. Empirically, we show that these additions can provide benefits for policy evaluation. We also validate that TTNs can still achieve competitive performance in the control setting even when the features are learned separately, unlike the architecture that is used by Levine et al. (\\u201cShallow Updates for Deep Reinforcement Learning\\u201d) where the features are learned jointly.\\n\\nWe would like to clarify our contributions in regards to the theoretical analysis.\\nThe analysis provided in the paper has two parts. In the first part (Lemma 1), we address the convergence of the feature representation algorithm with Markovian noise (contrary to the IID setting), where we employed extensions of the classical results from Borkar\\u2019s textbook.\\nIn the second part (Lemma 3), we address the value function prediction procedure, where we found that Borkar\\u2019s classical results do not apply due to the singularity of the feature covariance matrix involved. These singular matrices can occur since we do not assume that the feature-learning process produces linearly independent features, an unrealistic assumption for neural networks. This renders the analysis more complex and non-trivial. We address this issue by considering the algorithm as a multi-timescale stochastic approximation *inclusion* (contrary to a stochastic approximation recursion) and employ recent results on multi-timescale stochastic approximation inclusions from \\u201cA. Ramaswamy and S. Bhatnagar. Stochastic recursive inclusion in two timescales with an application to the lagrangian dual problem. Stochastics, 2016.\\u201d Hence the proof techniques employed are totally different and novel in their application to reinforcement learning.\\nRegarding the performance guarantees suggested by the reviewer, we agree that these would be highly desired but, currently, it is an open problem to develop sample complexity bounds for general multi-timescale stochastic approximation algorithms. The only result we are aware of applies solely to linear systems (Konda and Tsitsiklis, \\u201cConvergence rate of two-time-scale stochastic approximation\\u201d, Annals of Applied Probability, 2004). Additionally, our algorithm is a multi-timescale stochastic approximation *inclusion* which would pose an even larger challenge.\\n\\nConcerning \\u201c...the paper Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation does indeed have a closed form solution for non-linear function approximators\\u2026\\u201d\\nIndeed, Maei et al. are able to find a closed-form solution for the projection operator onto a tangent plane approximation. Without this approximation, the projection operator would not have a readily-available form. 
Our main point is that, despite this simplifying assumption, the projection operator still depends on the current parameter values, which complicates the derivation and results in a more complex algorithm (Nonlinear GTD).\"}",
"{\"title\": \"A well written paper with thorough experimental evaluation, but lacks novelty.\", \"review\": \"Summary:\\nThis paper presents a Two-Timescale Network (TTN) that enables linear methods to be used to learn values. On the slow timescale non-linear features are learned using a surrogate loss. On the fast timescale, a value function is estimated as a linear function of those features. It appears to be a single network, where one head drives the representation and the second head learns the values. They investigate multiple surrogate losses and end up using the MSTDE for its simplicity, even though it provides worse value estimates than MSPBE as detailed in their experiments. They provide convergence results - regular two-timescale stochastic approximation results from Borkar, for the two-timescale procedure and provide empirical evidence for the benefits of this method compared to other non-linear value function approximation methods.\", \"clarity_and_quality\": \"The paper is well written in general, the mathematics seems to be sound and the experimental results appear to be thorough.\", \"originality\": \"Using two different heads, one to drive the representation and the second to learn the values appears to be an architectural detail. The surrogate loss to learn the features coupled with a linear policy evaluation algorithm appear to be novel, but does not warrant, in my opinion, the novelty necessary for publication at ICLR. \\n\\nThe theoretical results appear to be a straightforward application of Borkar\\u2019s two-timescale stochastic approximation algorithm to this architecture to get convergence. This therefore, does not appear to be a novel contribution.\\n\\nYou state after equaltion (3) that non-linear function classes do not have a closed form solution. However, it seems that the paper Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation does indeed have a closed form solution for non-linear function approximators when minimizing the MSPBE (albeit making a linearity assumption, which is something your work seems to make as well).\", \"the_work_done_in_the_control_setting_appears_to_be_very_similar_to_the_experiments_performed_in_the_paper\": \"Shallow Updates for Deep Reinforcement Learning.\", \"significance\": \"Overall, I think that the paper is well written and the experimental evaluation is thorough. However, the novelty is lacking as it appears to be training using a multi-headed approach (which exists) and the convergence results appear to be a straightforward application of Borkars two-timescale proof. The novelty therefore appears to be using a surrogate loss function for training the features which does not possess the sufficient novelty in my opinion for ICLR. \\n\\nI would suggest the authors' detail why their two-timescale approach is different from that of Borkars. Or additionally add some performance guarantee to the convergence results to extend the theory. This would make for a much stronger paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJlgNh0qKQ | Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder | [
"Caio Corro",
"Ivan Titov"
] | Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings). To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing. As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters. Our method (Differentiable Perturb-and-Parse) relies on differentiable dynamic programming over stochastically perturbed edge scores. We demonstrate effectiveness of our approach with experiments on English, French and Swedish. | [
"differentiable dynamic programming",
"variational auto-encoder",
"dependency parsing",
"semi-supervised learning"
] | https://openreview.net/pdf?id=BJlgNh0qKQ | https://openreview.net/forum?id=BJlgNh0qKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkg-zVtJg4",
"r1gciHq_pQ",
"SJlN1H5up7",
"ByxR94q_p7",
"SylgfKWC27",
"HJlknmEq2X",
"rJgRXtqO3Q",
"HJebIHYwh7",
"Bkgmgia43Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1544684552921,
1542133153664,
1542132955785,
1542132885536,
1541441800510,
1541190566712,
1541085477531,
1541014856905,
1540836075357
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1419/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1419/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1419/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1419/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1419/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1419/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1419/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1419/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1419/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a method for unsupervised learning that uses a latent variable generative model for semi-supervised dependency parsing. The key learning method consists of making perturbations to the logits going into a parsing algorithm, to make it possible to sample within the variational auto-encoder framework. Significant gains are found through semi-supervised learning.\\n\\nThe largest reviewer concern was that the baselines were potentially not strong enough, as significantly better numbers have been reported in previous work, which may have a result of over-stating the perceived utility.\\n\\nOverall though it seems that the reviewers appreciated the novel solution to an important problem, and in general would like to see the paper accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel, well-founded, and interesting method. Concerns about baseline\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your comments and for finding the method novel and interesting.\\n\\nWe would like first to clarify that we are not making claiming that our method is appropriate in the high resource scenario (i.e. full in-domain English PTB parsing). However, large datasets are available only for a few languages, so the lower resource setting we study here is important and common. We use a sufficiently strong baseline (e.g., already using external word embeddings) and obtain improvements across all 3 languages. Interestingly, we observe that there are certain phenomena which our semi-supervised parser captures considerably more accurately than the baseline model (e.g., long distance dependencies and multi-word expression, see reply to R1). Very few studies have been done for semi-supervised structured prediction with neural generative models, especially for the more challenging parsing task, so we think these results are interesting.\\n\\nWe also think that our differentiable perturb-and-parse operator is interesting on its own, and has other potential applications. For example, it could be used in the context of latent structure induction, where there is no supervision (i.e. no treebank). Our sampling technique has properties which are different from those of previously proposed latent induction methods:\\n- unlike structured attention [4], we sample global structures rather than compute marginals (e.g., we preserve higher-order statistics)\\n- unlike SPIGOT [2], we can impose tree constraints directly rather than compute an approximation\\n- unlike us, [3] relies on sparse distributions so that marginalization is feasible. While sparse distributions have many interesting properties, they yield flat areas in the optimization landscape that can be difficult to escape from. \\n- unlike sampling with shift-reduce parsing models, we do not seem to have issues with bias which was argued to negatively affect its results [1].\\n\\n\\n > A performance curve with different amount of labeled and unlabeled data \\n\\nWe will do our best to include these results in a subsequent revision. Using more unlabeled data is harder for Swedish and French, as we would need to re-tokenize in the form consistent with our labeled data. \\n\\n\\n> What's the impact of perturbation?\\n\\nIn our experiments, using sampling is beneficial so that improvements are consistent across languages. For example, UAS results in French for the model that does not us sentence embeddings are as follows:\\n- supervised: 84.09\\n- semi-supervised without sampling: 84.27\\n- semi-supervised with sampling: 84.69\\n\\n\\n> What's the impact of keeping the tree constraint on dependencies during backpropagation?\\n\\nWe thought that the main motivation for dropping the constraint in previous work (e.g., SPIGOT) was efficiency. Since it does not seriously affect computation cost in our approach, we have not experimented with dropping it. \\n\\n\\n> Are sentence embedding and trees generated from two separate LSTM encoders?\\n\\nYes. There are no shared parameters in our model: the LSTM of the parser, the LSTM generating the sentence embeddings and the decoder are all separate. Introducing parameter sharing would likely be beneficial. However, our set-up is more controlled, as we can make sure that the improvements are due to modeling latent syntactic structure rather than getting better word representations (i.e. from using the multi-task learning objective). 
\\n\\n\\n\\n[1] Andrew Drozdov and Samuel Bowman, The Coadaptation Problem when Learning How and What to Compose (2nd Workshop on Representation Learning for NLP, 2017)\\n[2] Hao Peng, Sam Thomson and Noah Smith, Backpropagating through Structured Argmax using a SPIGOT (ACL 2018)\\n[3] Vlad Niculae, Andr\\u00e9 Martins and Claire Cardie, Towards Dynamic Computation Graphs via Sparse Latent Structure (EMNLP 2018)\\n[5] Yoon Kim, Carl Denton, Luong Hoang and Alexander Rush, Structured Attention Networks (ICLR 2017)\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your suggestions and the positive feedback.\\n\\n> hand-wavy explanations\\n\\nWe toned down our speculation, and incorporated your suggestions. Please let us know if you think, we could improve this further.\\n\\n> A number of important details are missing in the submitted version of the paper which the authors addressed in their reply to my public comment.\\n\\n\\nThe submission has now been updated, reflecting what we described in our public comment.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Many thanks for the positive feedback and suggestions.\\n\\n> Varying amounts of unlabeled data \\n\\nWe will do our best to include these results in a subsequent revision. Using more unlabeled data is harder for Swedish and French, as we would need to re-tokenize in the form consistent with our labeled data. \\n\\n\\n> Are there natural generalizations to multi-lingual data for example settings where supervised data is only available for languages other than the language of interest?\\n\\nThis is a very interesting direction. We hope that using \\u2018unlabeled\\u2019 and \\u2018labeled\\u2019 terms in the objective would make the multilingual model capture correspondences between surface regularities and the underlying syntax, for a given language. This should be especially helpful in the suggested one-shot learning scenario, where only unlabeled term will present for the target language. We suspect that part-of-speech tags (not currently used in our model) would be needed to facilitate learning the cross-lingual correspondences. \\n\\n\\n> I wonder also if this method would be particularly helpful in domain transfer\\n\\nYes, we would like to look into this in the future work.\\n\\n\\n> It would be interesting to see an analysis of accuracy improvementson different dependency labels.\\n\\nWe performed analysis on English, there are some interesting cases:\\n1. Multi-word expressions: the recall / precision scores of the semi-supervised model are 90.70 / 84.78 while the one of the supervised model are 75.58 / 81.25. We suspect that the reason is that MWEs are relatively infrequent.\\n2. Adverbial modifiers: we observe an increase in precision without compromising on recall: 87.32 / 87.51 versus 87.27 / 85.95.\\n3. Appositional modifiers: we also observe a significant increase for the recall in this category: 81.39 / 81.03 versus 77.49 / 80.27\\nWe included the results in the new version of the paper.\"}",
"{\"title\": \"I thought this was an excellent paper - very clear, an important problem, a useful set of techniques and results.\", \"review\": \"The paper describes a VAE-based approach to semi-supervised learning\\nof dependency parsing. The encoder in the VAE is a neural edge-factored\\nparser allowing inference using Eisner's dynamic programming algorithms.\\nThe decoder generates sentences left-to-right, at each point conditioning\\non head-modifier dependencies specified by the tree. A key technical \\nstep is to develop a method for \\\"differentiable\\\" sampling/parsing,\\nusing a modification of the dynamic program, and the Gumbel-max trick.\\n\\nI thought this was an excellent paper - very clear, an important \\nproblem, a very useful set of techniques and results. I would strongly\\nrecommend acceptance.\", \"some_comments\": \"* I do wonder how well this approach would work with orders of magnitude\\nmore unlabeled data. The amount of unlabeled data used is quite small.\\n\\n* Similarly, I wonder how well the approach works as the amount of\\nunlabeled data is decreased (or increased, for that matter). It should\\nbe possible to provide graphs showing this.\\n\\n* Are there natural generalizations to multi-lingual data, for example\\nsettings where supervised data is only available for languages other\\nthan the language of interest?\\n\\n* It would be interesting to see an analysis of accuracy improvements\\non different dependency labels. The \\\"root\\\" case is in some sense just\\none of the labels (nsubj, dobj, prep, etc.) that could be analyzed.\\n\\n* I wonder also if this method would be particularly helpful in \\ndomain transfer, for example from Wall Street Journal text to\\nWikipedia or Web data in general. The improvements could be more\\ndramatic in this case - that kind of effect has been seen with \\nELMO for example.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting application of VAEs\", \"review\": \"[Summary]\\nThis paper proposes to do semi-supervised learning , via a generative model, of an arc-factored dependency parser by using amortized variational inference. The parse tree is the latent variable, the parser is the encoder that maps a sentence to a distribution over parse-trees, and the decoder is a generative model that maps a parse tree to a distribution over sentences. \\n\\n[Pros]\\nSemi-supervised learning for dependency parsing is both important and difficult and this paper presents a novel approach using variational auto-encoders. And the semi-supervised learning method in this paper gives a small but non-zero improvement over a reasonably strong baseline. \\n\\n[Cons]\\n1. My main concern with this paper currently are the \\\"explanations\\\" provided in the paper which are quite hand-wavy. E.g. the authors state that using a KL term in semi-supervised learning is exactly opposite to the \\\"low density separation assumption\\\". And therefore they set the KL term to be zero. One has to wonder that why is the \\\"low density separation assumption\\\" so critical for dependency parsing only? VAEs have been used with a prior for semi-supervised learning before, why didn't this assumption affect those models ? \\n\\nA better explanation will have been that since the authors first trained the parser in a supervised fashion, therefore their inference network already represents a \\\"good\\\" distribution over parses, even though this distribution is specified only upto sampling but not in a mathematically closed form. Finally, setting the KL divergence between the posterior of the inference network and the prior to be zero is the same as dynamically specifying the prior to be the same as the inference network's distribution. \\n\\n2. A number of important details are missing in the submitted version of the paper which the authors addressed in their reply to my public comment.\\n\\n3. The current paper does not contain any comparison to self-training which is a natural baseline for this work. The authors replied to my comment saying that self-training requires a number of heuristics but it's not clear to me how much more difficult can these heuristics be than the tuning required for training their VAE.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for your comments and finding the idea exciting. Please find our replies to your questions.\\n\\n1. Thank you for pointing this out. We experimented with a version where the prior was the uniform distribution over all projective trees. It was not effective: downweighting or remove the KL term was yielding the best results. We realize that this prior may not be quite appropriate (linguistic trees are not samples from the uniform distribution), but given that our model is generative / not conditional (e.g., we do not condition even on PoS tags), the distribution would not be sharp anyway (even if we estimate it). This makes us sceptical about using the KL term in our semi-supervised learning: using KL with respect to a high entropy distribution forces our model to be uncertain on unlabelled sentences. This is exactly opposite of the standard \\u201clow density separation assumption\\u201d: our preference should be for models which are confident on datapoints (roughly speaking, decision boundaries should not cross datapoints). This motivated us to try another alternative (also not yielding ELBO), where instead of the KL term we used an adversarial term forcing our model to draw trees similar to linguistic ones. Unfortunately, it was not effective as well. We would clarify this extra experiments in a new version of the paper. Note that not using ELBO should not prevent us from using the term VAE: many recent VAE versions (e.g., beta-VAE) cannot be interpreted as optimizing ELBO.\\n\\n2. Again, we should have clarified this. We rely on perturb-and-map to sample a single tree from the posterior distribution. However, the MAP procedure is not differentiable, therefore we replace it with a differentiable surrogate. In our model, the weights in T do not represent probabilities neither log-probabilities but a soft-selection of arcs. GCN can be run over weighted graphs, the message passed between nodes is simply multiplied by the continuous weights. This is actually a motivation for using GCN rather than a Recursive LSTM/RNN. On the one hand, running a GCN with a matrix that represents a soft-selection of arcs (i.e. with real values) has the same computational cost than using a standard adjacency matrix (i.e. with binary elements) if we use matrix multiplication on GPU (optimization with sparse matrix multiplication is helpful on CPU, but not always on GPU). On the other hand, a recursive network over a soft-selection of arcs requires to build a n^2 set of RNN-cells that follow the dynamic programming chart where the possible inputs of a cell are multiplied by their corresponding soft-selection in T, which is expensive and not GPU-friendly. We also experimented with using straight-through estimators where GCN computation is performed over a discretizatized version of the graph, whereas the backpropagation step is done over the soft version. We did not see much of a difference in performance.\\n\\n3. Self-training is an option, though all (?) previous applications of self-training to syntactic parsing used quite a number of tricks and parameters (e.g., McClosky et al 2006; Reichart and Rappoport 2007, Yu and Bohnet 2017). Even if self-training works, we believe that our approach provides an interesting alternative, and one of very few methods for semi-supervised learning for structured prediction where improvements over a strong supervised baseline can be seen (recall that our baseline already uses external embeddings). 
What is also interesting is that the parse trees predicted by the semi-supervised model are qualitatively different from the ones produced by the supervised baseline. E.g., as we discuss in the experimental section, it predicts many more long distance dependencies than the supervised one. We speculate that this is an artefact of using the RNN+GCN decoder which does not care about short edges as they are too easy to encode by RNN, so encourages longer range dependencies. This won\\u2019t happen for self-trained parsers as self-training reinforces the predictions. Co-training is even harder to make to work than self-training, as we need to come up with two models and it would be more orthogonal to our method (we could use a co-training loss in combination with ours). Previous work suggests that co-training does not work out-of-the-box for syntactic parsing, so a meaningful baseline would be hard to construct.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"This paper proposes to do semi-supervised learning , via a generative model, of an arc-factored dependency parser by using amortized variational inference. The parse tree is the latent variable, the parser is the encoder that maps a sentence to a distribution over parse-trees, and the decoder is a generative model that maps a parse tree to a distribution over sentences. While this idea itself is exciting, a few important details are missing in the paper, that are needed to review the paper.\\n\\n1. A VAE requires a generative story for the latent variables. What exactly is the distribution of p(T|n) ? This distribution is not mentioned anywhere in the paper. More importantly Section 5 focuses entirely on the first term of the ELBO objective. What about the second term, the negative KL term, of the ELBO ? How exactly do you compute KL[q_\\u03c6(T , z|s)|p(T , z)] in equation (3) ? You mention that you use a weight of 0 for the KL term during optimization in the experiments section because you did not see any benefit from the KL term ? But what was the form of the prior that you used earlier ? \\n\\n2. As you mention Smith and Eisner (2008) showed how to frame dependency parsing as an MRF and Perturb and MAP is a method for sampling from the posterior for general MRFs. However, you are adding a further relaxation and replacing the argmax with a softmax operation ( where you set \\u03c4 = 1 in all experiments). So at the end you no-longer get true dependency trees but continuous entries in T. How exactly do you compute log p_\\u03b8( s | RELAXATION of Eisner(W + P) ) in this scenario ? How do you feed soft connections to the GCN ? Does T contain probabilities or log-probabilities in this case? \\n\\n3. You mention a number of other fairly simple methods for semi-supervised learning such as self-training and co-training in the related work section. Clearly these types of methods will be the right baseline to evaluate against since they do not use word-embeddings, or any manual feature engineering. What was the reason to not evaluate against such simple methods?\"}",
"{\"title\": \"Novel and nice method, but experiments are not strong enough\", \"review\": \"This paper proposed a variational autoencoder-based method for semi-supervised dependency parsing. Given an input sentence s, an LSTM-based encoder generates a sentence embedding z, and a NN of Kiperwasser & Goldberg (2016) generates a dependency structure T. Gradients over the tree encoder are approximated by (1) adding a perturbation matrix over the weight matrix and (2) relax dynamic programming-based parsing algorithm to a differentiable format. The decoder combines standard LSTM and Graph Convolutional Network to generate the input sentence from z and T. The authors evaluated the proposed method on three languages, using 10% of the original training data as labeled and the rest as unlabeled data.\\n\\nPros\\n1. I like the idea of this sentence->tree->sentence autoencoder for semi-supervised parsing. The authors proposed a novel and nice way to tackle key challenges in gradient computation. VAE involves marginalization over all possible dependency trees, which is computationally infeasible, and the proposed method used a Gumbel-Max trick to approximate it. The tree inference procedure involves non-differentiable structured prediction, and the authors used a peaked-softmax method to address the issue. The whole model is fully differentiable and can be thus trained end to end.\\n\\n2. The direction of semi-supervised parsing is useful and promising, not only for resource-poor languages, but also for popular languages like English. A successful research on this direction could be potentially helpful for lots of future work.\\n\\nCons, and suggestions on experiments\\nMy main concerns are around experiments. Overall I think they are not strong enough to demonstrate that this paper has sufficient contribution to semi-supervised parsing. Below are details.\\n\\n1. The current version only used 10% of original training data as labeled and the rest as unlabeled data. This makes the reported numbers way below existing state-of-the-art performance. For example, the SOTA UAS on English PTB has been >95%. Ideally, the authors should be able to train a competitive supervised parser on full training data (English or other languages), and get huge amount of unlabeled data from other sources (e.g. News) to further push up the performance. The current setting makes it hard to justify how useful the proposed method could be in practice.\\n\\n2. The best numbers from the proposed model is lower than baseline (Kipperwasser & Goldberg) on English, and only marginally better on Swedish. This probably means the supervised baseline is weak, and it's hard to tell if the gains from VAE will retain if applied to a stronger supervised.\\n\\n3. A performance curve with different amount of labeled and unlabeled data would be useful to better understand the impact of semi-supervised learning.\\n\\n4. What's the impact of perturbation? One could simply use T=Eisner(W) as approximation. Did you observe any significant benefits from sampling?\\n\\nOther questions\\n1. What's the impact of keeping the tree constraint on dependencies during backpropagation? Have you tried removing the tree constraint like previous work?\\n\\n2. Are sentence embedding and trees generated from two separate LSTM encoders? Are there any parameter sharing between the two?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Sygx4305KQ | Small steps and giant leaps: Minimal Newton solvers for Deep Learning | [
"Joao Henriques",
"Sebastien Ehrhardt",
"Samuel Albanie",
"Andrea Vedaldi"
] | We propose a fast second-order method that can be used as a drop-in replacement for current deep learning solvers. Compared to stochastic gradient descent (SGD), it only requires two additional forward-mode automatic differentiation operations per iteration, which has a computational cost comparable to two standard forward passes and is easy to implement. Our method addresses long-standing issues with current second-order solvers, which invert an approximate Hessian matrix every iteration exactly or by conjugate-gradient methods, procedures that are much slower than a SGD step. Instead, we propose to keep a single estimate of the gradient projected by the inverse Hessian matrix, and update it once per iteration with just two passes over the network. This estimate has the same size and is similar to the momentum variable that is commonly used in SGD. No estimate of the Hessian is maintained.
We first validate our method, called CurveBall, on small problems with known solutions (noisy Rosenbrock function and degenerate 2-layer linear networks), where current deep learning solvers struggle. We then train several large models on CIFAR and ImageNet, including ResNet and VGG-f networks, where we demonstrate faster convergence with no hyperparameter tuning. We also show our optimiser's generality by testing on a large set of randomly-generated architectures. | [
"deep learning"
] | https://openreview.net/pdf?id=Sygx4305KQ | https://openreview.net/forum?id=Sygx4305KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rklSUeaoYV",
"rJgU3zOvyV",
"SJgp6njLJN",
"Byxdx-dHJE",
"SygVWkE4JE",
"rJxEV074kE",
"SJx9di-XkV",
"Ske076UWJV",
"S1e706lJkN",
"rkxy10K2C7",
"HyeK5Jt3R7",
"SyemYpEiam",
"B1xyXcQ767",
"SyxRWTJxpQ",
"ryeIGPDv2m",
"ryeMkWWQn7"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1554923596981,
1544155821818,
1544105156592,
1544024303750,
1543941884048,
1543941675878,
1543867249941,
1543757094101,
1543601611191,
1543441879007,
1543438224981,
1542307195304,
1541777942769,
1541565701519,
1541007117966,
1540718809892
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"~Tim_Cooijmans1"
],
[
"ICLR.cc/2019/Conference/Paper1418/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1418/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1418/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1418/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Response to final decision\", \"comment\": \"We would like to address a few points which we consider inaccurate in the AC\\u2019s final decision:\\n\\n> \\u201cThe per-epoch improvement over SGD is fairly small\\u201d\\nWe show improvements on the order of 5% on CIFAR, and even reduce the ResNet error by a factor of 3 (from an already low 2.1% to 0.7%). We also improve on Adam by 2 to 3% in most cases. Compare this to many papers in the same venue that were published with reductions of 1% error compared to their baselines.\\n\\n> \\u201c[Improvements] probably outweighed by the factor-of-2 computational overhead, so it's likely there is no wall-clock improvement\\u201d\\nWe show that there is a wall-clock improvement in Fig. 3. The improvements in optimization compensate for the small computational overhead.\\n\\n> \\u201cSGD parameters were tuned for validation accuracy\\u201d\\nThis was only the case for the ImageNet experiments, which are 2 out of 7 experimental settings. However, we agree that it is fairer to use the same cross-validation protocol as for the other experiments. We found that SGD with the optimal learning rate still does not outperform Adam or our method, and will include this change in a future version.\\n\\n> \\u201croughly based on [...] tricks from Martens and Grosse (2015).\\u201d\\nThe core of our method is an implicit inversion of the Hessian, while the mentioned work has an explicit model of the Hessian that is kept in memory - the methods differ substantially.\"}",
"{\"metareview\": \"The proposal is a scheme for using implicit matrix-vector products to exploit curvature information for neural net optimization, roughly based on the adaptive learning rate and momentum tricks from Martens and Grosse (2015). The paper is well-written, and the proposed method seems like a reasonable thing to try.\\n\\nI don't see any critical flaws in the methods. While there was a long discussion between R1 and the authors on many detailed points, most of the points R1 raises seem very minor, and authors' response to the conceptual points seems satisfactory.\\n\\nIn terms of novelty, the method is mostly a remixing of ideas that have already appeared in the neural net optimization literature. There is sufficient novelty to justify acceptance if there were strong experimental results, but in my opinion not enough for the conceptual contributions to stand on their own.\\n\\nThere is not much evidence of a real optimization improvement. The per-epoch improvement over SGD is fairly small, and (as the reviewers point out) probably outweighed by the factor-of-2 computational overhead, so it's likely there is no wall-clock improvement. Other details of the experimental setup seem concerning; e.g., if I understand right, the SGD training curve flatlines because the SGD parameters were tuned for validation accuracy rather than training accuracy (as is reported). The only comparison to another second-order method is to K-FAC on an MNIST MLP, even though K-FAC and other methods have been applied to much larger-scale models. \\n\\nI think there's a promising idea here which could make a strong paper if the theory or experiments were further developed. But I can't recommend acceptance in its current form.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a sensible proposal, but little evidence of optimization benefit\"}",
"{\"title\": \"Relationship to Nesterov accelerated GD\", \"comment\": \"Thank you for pointing out this interesting connection. Quoting from the paper you mentioned:\\n\\n\\u201cIf CG terminated after just 1 step, HF becomes equivalent to NAG, except that it uses a special formula based on the curvature matrix for the learning rate instead of a fixed constant.\\u201d\\n\\nThe same reasoning applies to our own method, since it is a modification of the HF approach. The main difference from NAG is then the use of curvature, which Nesterov\\u2019s accelerated GD (NAG) does not use, as pointed out in the quote.\\n\\nAs a practical matter, note that NAG is not robust to perturbations and accumulates errors in the gradient oracle linearly with iterations (as discussed in a paper by Nesterov himself and co-authors, \\u201cFirst-Order Methods of Smooth Convex Optimization with Inexact Oracle\\u201d).\\n\\nNAG is a part of standard deep learning frameworks and practitioners try it from time to time. While we cannot comment on its success in RNNs with sigmoids (which Sutskever focused on), our experience with modern CNNs shows that it is \\u201chit or miss\\u201d, with no clear advantage over a similarly well-tuned momentum SGD. Its success is highly dependent on the stochasticity of the problem and many other factors (which is expected given the Nesterov paper we mentioned).\"}",
"{\"title\": \"Response to additional questions\", \"comment\": \"Thank you for the response to our points. We have answered the initial comments that you mentioned in the appropriate thread.\\n\\n> \\u201cKFAC has been used in various papers to train neural nets (see references below) so I still think the authors should provide a comparison on the larger networks.\\u201d\\n\\nAs much as we would have liked to include the improved KFAC modifications published by Grosse & Martens recently, it would represent a large departure from our training regime, which uses single GPUs. The mentioned works are all intended for multi-GPU settings. Multi-GPU synchronization of updates brings a host of confounding factors and different dynamics compared to synchronous (single-GPU) training, see Mitliagkas et al. (arXiv:1605.09774) for one example. However, we recognize that this line of research is very relevant, and we will include a caveat in the paragraph on KFAC that these improved versions exist, and should perform better in these settings.\\n\\n> \\u201cResults in Figure 2 are averaged over several runs? What does the variance look like?\\u201d\\n\\nThe results in fig. 2 are not averaged over several runs, although the results in fig. 3 (left) are (averaged over 50 random architectures), which gives an indication of the variance. In practice, the variance across optimizers on larger benchmarks is very small -- we will add additional runs to the final version to make this more concrete.\\n\\n> \\u201cCan you show the plots in Figure 2 in terms of training time? In Fig. 7, your method does not outperform others so I would like to see more empirical results.\\u201d\\n\\nYes, we can include all the plots for completeness, but the conclusions are unchanged compared to the referenced plot (which is fig. 3-right in the updated paper). Our goal was to show that it is comparable to first-order methods, despite the overhead of the second-order operations, and gains the benefit of having no hyper-parameter tuning.\\n\\nWe would also like to bring attention to the fact that the large gap between Adam and SGD on the per-iteration plots (fig. 2) is mostly erased in the wall-clock time plot (fig. 3-right), due to the additional matrix operations that each step of Adam requires. The same phenomena happens with our method, which could be improved with better engineering of the FMAD operations.\\n\\n> \\u201cHow sensitive is your approach to the batch size?\\u201d\\n\\nIt is not very sensitive to batch size, as we simply used the same batch sizes as in the original papers for all of the tested architectures. It is possible that tuning the batch size brings additional benefit but we did not exploit it.\"}",
"{\"title\": \"Detailed response to AR1 (part 2/2)\", \"comment\": \"> \\u201cYou claim to decay rho\\u201d\\n\\nWe do not claim to explicitly decay rho (one of the points we focused on was on not having complicated schedules to tune), but rather it decays naturally as a consequence of the automatic hyper-parameter adaptation (fig. 5 in updated paper).\\n\\n> \\u201cBetter than simply using CG on a trust-region model?\\u201d\\n\\nWe tried this variant (fig. 7), but the large number of inner iterations makes it less competitive.\\n\\n> \\u201cAnalysis is only performed on a quadratic\\u201d\\n\\nTheorem A.2 deals specifically with non-quadratic functions.\\n\\n> \\u201cIt appears the rate would scale with the square root of the condition number\\u201d; \\u201cconstant is not as good as Heavy-ball on a quadratic\\u201d\\n\\nUnfortunately, the rates that we derived are not as directly interpretable as SGD or momentum SGD, despite our best efforts (eq. 38). However, on a convex quadratic we do not expect our rate of convergence to be better than momentum SGD with the optimal momentum parameter. Note that using momentum GD with the optimal hyper-parameters (which require knowledge of the Hessian eigenvalues of the quadratic) already provides a square-root improvement in the condition number compared to GD. We would consider any further improvement in linear convergence in this well-explored setting to be quite a breakthrough.\\n\\n> \\u201cSub-sampling \\u2026 not discussed\\u201d; \\u201cconsider extending the proof\\u201d\\n\\nWe addressed this matter in our initial response.\\n\\n> \\u201cConsider showing the gradient norms\\u201d\\n\\nWe added this to the paper (fig. 6).\\n\\n> \\u201cMethods have not yet converged\\u201d\\n\\nIt is standard practice in deep learning comparisons to give methods a budget of epochs to converge over, due to the time-consuming nature of the experiments. We used the default numbers of epochs for which the SGD learning rate schedules were defined. Note that even when this is not the case, early-stopping is used as an effective regularization method (since the parameters vastly outnumber the samples).\\n\\n> Compare to Newton method with true Hessian\\n\\nWe added this to the paper; see the \\u201cExact Hessian\\u201d (Newton method) row in table 1 and fig. 1.\\n\\n> \\u201cWhy is BFGS in Rosenbrock but not in NN plots?\\u201d\\n\\nThis was addressed in another comment.\\n\\n> \\u201cDauphin does not rely on the Gauss-Newton approximation\\u201d\\n\\nWe cited Dauphin et al. as a reference on avoiding saddle-points with PSD surrogates for the Hessian. We did not mean to imply that they used the Gauss-Newton surrogate. We agree that this should be more clear, and corrected the text.\\n\\n> \\u201cThe title is rather bold and not necessarily precise since the stepsize of curveball is not particularly small\\u201d\\n\\nThe title is actually a reference to the different step sizes in different parameter-space directions, when optimizing an ill-conditioned function (mentioned in the 2nd paragraph of section 2.1).\\n\\n> Additional citations and other editing suggestions\\nWe would like to thank the reviewer for the suggestions, which we incorporated in the paper. Note that we cited Loizou & Richtarik (2017) as an up-to-date summary of theoretical results; nevertheless we now cite the original papers.\"}",
"{\"title\": \"Detailed response to AR1 (part 1/2)\", \"comment\": \"> \\u201cThe derivation ... is very much driven on a set of heuristics without theoretical guarantees\\u201d\\n\\nIt is common (in fact, necessary) to propose changes to existing methods based on empirical observation of their failures. It is only after proposing changes that we can prove theoretical guarantees. We would like to refer the reviewer to Theorems A.1-A.2 for such guarantees.\\n\\n> \\u201cIn the early phase, the gradient norm is likely to be large and thus z will change significantly. One might also encounter regions of high curvature\\u201d\\n\\nAlthough the analysis (theoretical and experimental) shows that the method converges anyway, we can analyze these cases in the following way.\", \"the_step_z_update_can_be_rearranged_as\": \"z <- (rho*I - beta*H)*z - beta*J. Assume that the hyper-parameters and Hessian model are correct to enable convergence (H is positive-definite, rho is close to 1 and beta < 1/||H||). Then, in the high-curvature directions of H, z will be reduced the most towards 0, compared to lower-curvature directions. So in those directions, the algorithm behaves like GD. Likewise, for a large enough gradient J and low curvature, its magnitude overwhelms the first term, so the update devolves to standard GD.\\n\\n> \\u201cThe \\\"warm start\\\" at s_{t-1} is also what yields the momentum term, what interpretation can you give to this choice?\\u201d\\n\\nThe warm-starting is what makes the algorithm directly comparable to momentum. We establish this connection in section 3, and expand on differences and similarities (CurveBall vs momentum SGD). If there is a specific unclear aspect we\\u2019ll be happy to address it.\\n\\n> \\u201cIt is rather unclear why one iteration is an appropriate number\\u201d\\n\\nIt is appropriate due to the interleaving of steps (Algorithm 1) -- it does not represent a single isolated step, but rather builds on the previous iteration. Likewise, one could ask: why does momentum SGD only update the step z once for each iteration of w?\\n\\n> \\u201cAdaptive strategy where CG with a fixed accuracy\\u201d\\n\\nThis has been done in previous works, and is very costly since it often requires dozens of steps (see fig. 7 in the appendix for a direct comparison).\\n\\n> \\u201cGradually increasing the batch-size\\u201d\\n\\nThis would introduce a schedule of batch sizes to tune, which would increase the complexity of implementation and usage.\\n\\n> \\u201cThe number of outer iterations may be a lot less for the Hessian-free method than for SGD\\u201d\\n\\nThe Hessian-free method would have to converge faster than SGD by orders of magnitude (measured in outer iterations) to compensate. While this is observed with linear models (where such methods are widely deployed), it is not necessarily so for deep networks, as verified in our experiments with CG (figure 7 in appendix). Nevertheless, we agree that this claim is overly broad, and we toned it down in the text.\\n\\n> Choice of GD over Krylov subspace methods, Lanczos\", \"there_are_several_reasons\": \"- Memory. While Krylov subspace methods have better convergence rates, they require more storage.\\n- Simplicity. The aim of this paper is to create a \\u201cminimal\\u201d solver. Gradient descent fits this criteria better than the other methods; it can be described in a single line given a gradient.\\n- Robustness to noise between iterations. 
SGD is well-understood and works well with perturbed updates; it remains to be demonstrated whether Lanczos and other methods can be made equally robust. Very recent work in this front (De Sa et al., \\u201cAccelerated Stochastic Power Iteration\\u201d, arXiv 2017) shows that much larger batches than what is acceptable for deep networks (i.e. tens of thousands) may be needed.\\n\\nd) \\u201cI\\u2019m not really sure [rho] makes sense\\u201d\\n\\nThe rho parameter allows bridging two formalisms which would not be possible otherwise. Our method can be interpreted as:\\n1) A momentum GD variant: it modifies momentum GD by introducing a single term, -beta*H.\\n2) A Hessian-free optimizer variant: by performing the changes that the reviewer just mentioned (section 3).\\nNote that the rho parameter is crucial for the first interpretation. Nonetheless, its apparent arbitrariness when viewed under the second interpretation can be resolved by setting rho=1. We tried fixing rho=1 experimentally, but it degrades performance; we added this experiment to the paper (fig. 8).\\nThe effect of rho can be interpreted in two ways. First, rho<1 gradually erases stale updates (based on old Hessian matrices) from the z buffer, which is important for a non-quadratic objective. Second, it results in the regularizer (1-rho)*||z||^2 in the quadratic model, which can be beneficial, and has small magnitude with rho close to 1.\\nFinally, the automatic hyper-parameter tuning (eq. 18) requires rho to be present, which is another practical reason for its presence.\"}",
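To make the interleaved update quoted in this response concrete, here is a minimal PyTorch sketch of one CurveBall-style step. It is an illustration under stated assumptions, not the authors' implementation: it uses the true Hessian via double backpropagation (Pearlmutter's trick) where the paper uses a positive-definite Hessian surrogate, and it fixes beta and rho instead of tuning them automatically.

```python
import torch

def curveball_step(params, zs, closure, beta=0.01, rho=0.9):
    """One interleaved step: z <- (rho*I - beta*H) z - beta*J, then w <- w + z.
    `params` are the model weights, `zs` a matching list of step buffers,
    `closure` re-evaluates the loss (all names here are illustrative)."""
    loss = closure()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Hessian-vector product H z without ever forming H.
    dot = sum((g * z).sum() for g, z in zip(grads, zs))
    Hzs = torch.autograd.grad(dot, params)
    with torch.no_grad():
        for p, z, g, Hz in zip(params, zs, grads, Hzs):
            z.mul_(rho).sub_(beta * (Hz + g))  # z <- rho*z - beta*(H z + J)
            p.add_(z)                          # w <- w + z
    return loss.item()
```

Note how a single gradient-descent step on the quadratic model replaces an inner CG loop, which is what keeps the per-iteration cost close to momentum SGD.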
"{\"comment\": \"How does curveball relate to the single-step-CG variant of HF discussed in the cited paper \\\"On the importance of initialization and momentum in deep learning\\\" (Sutskever et al 2013)? From the discussion in that paper, I wish the authors of the present paper would also include Nesterov momentum in their experiments.\", \"title\": \"What is the relationship to single-step HF and Nesterov momentum?\"}",
"{\"title\": \"Questions in first review regarding derivation of your update + experimental results\", \"comment\": \"Thank you for the clarifications.\\n\\nRegarding BGFS, you would of course use the limited memory variant. I do agree with what you said regarding stochasticity, second-order methods do indeed need to be stabilize for them to work. I think this does justify not showing results for this approach. On the other hand, KFAC has been used in various papers to train neural nets (see references below) so I still think the authors should provide a comparison on the larger networks.\", \"https\": \"//jimmylba.github.io/papers/nsync.pdf\\n\\nRegarding the hyper-parameter lambda, this seem to make the approach rather fragile as a fixed constant lambda is not able to adapt to the curvature of the objective. Dauphin et al., 2014 also used a similar approach and although I think one should use an adaptive approach (which is actually not much more expensive), I concede that this simple approach might work in practice after some parameter tuning, although I believe an adaptive method would be more suitable for non-convex functions.\\n\\nNow, I still have some unanswered questions regarding the derivation of your approach (questions #1 in my first review), can you please address my concerns there?\\n\\nSince we have turned our discussion to the empirical aspect of the paper, I have many more questions regarding this aspect:\\n1. I would like to know if the results in Figure 2 are averaged over several runs? What does the variance look like?\\n2. Can you show the plots in Figure 2 in terms of training time? In Fig. 7, your method does not outperform others so I would like to see more empirical results.\\n3. How sensitive is your approach to the batch size?\"}",
"{\"title\": \"Such comparisons are not feasible at this scale; the mentioned \\\"theory\\\" problem is actually an implementation detail\", \"comment\": \"> \\u201cIt seems we would all agree that the paper is lacking from a theoretical point of view.\\u201d\\n\\nThis is not a fair characterization of our viewpoint. Our point was that we focused much more on delivering an algorithm that is easy to implement and use by practitioners, than on tuning it to obtain theoretical guarantees (e.g. by adding variance reduction techniques). The reviewer has stated a preference for the opposite approach, which we acknowledge, and it was with this concern in mind that we included Theorems A.1-A.2 in our initial submission.\\n\\n> \\u201cWhy is BFGS in Rosenbrock but not in NN plots?\\u201d\\n\\nBFGS was not included because it does not work in these settings, a widely known fact among practitioners, which is matched by our observations, and which explains its absence from the state-of-the-art in deep learning. Concretely, the issues are:\\n- Memory. For typical problems, we simply cannot form a millions-by-millions-sized matrix. Limited-memory variants will instead require K columns, but K is still in the dozens or more. For large models, we typically cannot afford more than a factor of 2 or 3 more storage than the original parameters.\\n- Stochasticity. BFGS breaks down with noisy functions (observed in our Stochastic Rosenbrock experiments). Variants that are robust to noise exist, but they have similar or worse memory and computation requirements.\\n\\nThese are not a concern with stochastic first-order methods, which are widely used.\\n\\n> \\u201cSame question regarding K-FAC, why is it only showed for MNIST?\\u201d\\n\\nK-FAC requires non-trivial amounts of hand-tuning to work, as stated by the author on the project page. For example, they state the learning rate can take values between 10^-5 and 100, and in a later paper (Ba et al., ICLR 2017) they use an exponential learning rate decay with constants c_0, zeta, and exponential averaging of parameters over time.\\n\\nAdditionally, its memory requirements are unusually large, necessitating distributed learning across multiple machines in the mentioned paper, which makes it unwieldy.\\n\\nNevertheless, we show one comparison to K-FAC that is feasible, using the original paper\\u2019s code and problem setting (MLP autoencoder on MNIST), with hyper-parameters calibrated by the authors, in the interest of fairness.\\n\\n> \\u201cKFAC seem to be reaching a lower function value in Figure 2, please use a log scale\\u201d\\n\\nWe will consider this scale in the final version. Note that at these noise levels it will be hard to observe any meaningful difference between the algorithms. \\n\\n> \\u201cI disagree with your claim \\u201cWe believe we are the first to apply a second-order method in such a way to extremely large settings\\u201d. BFGS has been used for a long time to optimize deep neural networks, see e.g. https://arxiv.org/pdf/1311.2115.pdf that had experiments on a a twelve layer neural network.\\u201d\\n\\nWe must clarify that extremely large settings mean several layers and millions of parameters, on non-toy problems. 
There are several aspects that make the referenced paper non-comparable:\\n- The number of parameters or model architecture is not reported, other than mentioning that it has 12 layers.\\n- It is an MLP applied to 28x28 inputs, which is only applicable to the least realistic scenarios.\\n- The dataset (\\u201cCURVES\\u201d) is entirely composed of synthetic toy data.\\n\\nWe would like to contrast this scale to training a VGG-f model with over 60 million parameters on ImageNet, with 224x224 images, among our other experiments. Hence our claim.\\n\\n> \\u201cRegarding the proof of convergence of Theorem A.2, note that you either require convexity or - as you suggested -, you could rely on a trust-region approach but then the decrease is only valid for achieving a **model** decrease. You would need to implement a proper trust-region algorithm to guarantee a **function** decrease.\\u201d\\n\\nAs the reviewer has noted, this is not a problem with the proof -- which assumes a trust region, a reasonable assumption -- but with the implementation, which uses a simple mechanism for this trust region.\\n\\nAs explained in section 3 (subsection on hyper-parameter lambda), we chose the simplest trust-region adaptation because it requires only 1 additional function evaluation. We could have easily chosen another mechanism, at the cost of speed, which is important in the large-scale. We remark that the same choice was made by Martens & Grosse (2015). This choice is entirely divorced from the validity of the proposed method, and represents one of the usual trade-offs made in any practical implementation.\\n\\n> \\u201cI therefore do not think the statement in Theorem A.2 is especially relevant to the deep learning setting which you seem to be targeting in this paper.\\u201c\\n\\nOn the contrary, the extensive experiments show that the trust-region model with automatic hyper-parameter adaptation is quite accurate, otherwise the reported high performance would not have been observed.\"}",
"{\"title\": \"Authors emphasize the practical aspect of the paper but do not provide results for competing methods such as LBFGS and KFAC. The paper still lacks from a theoretical point of view.\", \"comment\": \"First, I would like to thank the authors for providing further explanations trying to clarify their view on the empirical validity of their approach. Given their answer, it seems we would all agree that the paper is lacking from a theoretical point of view.\\n \\nSince the authors seem to stress the empirical aspect of their paper, I would find it **critical** to answer my question \\u201cWhy is BFGS in Rosenbrock but not in NN plots?\\u201d. I have the same question regarding K-FAC, why is it only showed for MNIST? As a minor comment, note that KFAC seem to be reaching a lower function value in Figure 2, please use a log scale in order to improve readability.\\n\\nI disagree with your claim \\u201cWe believe we are the first to apply a second-order method in such a way to extremely large settings\\u201d. BFGS has been used for a long time to optimize deep neural networks, see e.g. https://arxiv.org/pdf/1311.2115.pdf that had experiments on a a twelve layer neural network.\\n\\nRegarding the proof of convergence of Theorem A.2, note that you either require convexity or - as you suggested -, you could rely on a trust-region approach but then the decrease is only valid for achieving a **model** decrease. You would need to implement a proper trust-region algorithm to guarantee a **function** decrease. I therefore do not think the statement in Theorem A.2 is especially relevant to the deep learning setting which you seem to be targeting in this paper.\"}",
"{\"title\": \"Short summary of response to AR1\", \"comment\": \"We thank the reviewer for the constructive comments, especially with regards to having a more complete bibliography, which we have integrated. However, we differ considerably with the reviewer in their assessment of our contribution.\\n \\nWe would like to emphasize that our specific goal and motivation for this work was the development of a practical optimization method for large-scale deep learning. \\n \\nIn this respect, our contribution pushes the boundaries of what has been done previously in similar papers, with a much greater scope and stringent protocol (e.g. no tuning of hyper-parameters on each experiment). We believe we are the first to apply a second-order method in such a way to extremely large settings, such as the VGG-f on ImageNet (over 60 million parameters and 1 million samples), as well as several other datasets, models, and large numbers of randomly-generated architectures.\\n \\nThis is not to say that we do not place great value in theoretical guarantees, and in fact we proved convergence of our algorithm in convex quadratic functions (Theorem A.1) and guaranteed descent in general non-convex functions (Theorem A.2), a much broader result that the reviewer did not mention. However, we consider that providing convergence proofs for the non-convex stochastic case (as suggested) is an unreasonable burden, both due to their much greater complexity, and because the guarantees they afford are usually mild. Instead, we only proved formally that our algorithm \\u201cdoes the right thing\\u201d (i.e. descends for reasonable functions), and the hard case (stochastic non-convex functions with millions of variables) is instead validated empirically.\\n \\nWe recognise that our work places greater reliance on careful empirical evidence than the theoretical analysis preferred by the reviewer, but we hope that they will nevertheless reconsider their assessment that it represents a useful contribution to the community targeted by this conference. \\n \\nWe will give a more detailed answer to each point in a separate comment.\"}",
"{\"title\": \"Response to AR2\", \"comment\": \"Thank you for the very thoughtful suggestions and questions.\\n\\n> Comparison to the LiSSA algorithm\\nThere are indeed very interesting connections between CurveBall and LiSSA. Despite their main update being derived from very different assumptions, we found that it is possible to manipulate it into a form that is directly comparable to ours, without a learning rate \\\\beta. Another difference is structural: LiSSA uses a nested inner loop to create a Newton-like update from scratch every iteration, like other Hessian-free methods, while our algorithm structure has no such nesting and thus has the same structure as momentum SGD (cf. Alg. 1 and Alg. 2 in the paper).\\n\\nWe updated the paper with a much more detailed exposition of these points, which we have only hinted at in this response to keep it short. It can be found in the last (large) paragraph of the related work (section 5, p. 9).\\n\\n> Page numbers on books\\nThank you, we agree that this is important; we just added them to the paper.\\n\\n> Vectors as capital letters, e.g. J(w)\\nWe share this concern, however this was used to simplify our exposition of automatic differentiation (sec. 2.2). There, the gradient J arises from the multiplication of several Jacobians, which are generally matrices, and it only happens to be a vector because of the shape of the initial projection. We could have treated Jacobians and gradients separately, but it would hamper this unifying view which we found more instructive.\\n\\n> Automatic hyper-parameters derivation\\nAlthough this can be found in the work of Martens & Grosse (2015), to make the paper self-contained we added the derivations to the appendix (section A.1), consisting of a simple minimization problem in the \\\\rho and \\\\beta scalars.\\n\\n> \\u201cPlot the evolution of \\\\beta, \\\\rho and \\\\lambda\\u201d\\nThis is an interesting aspect to analyze. We plot these quantities for two models (with and without batch normalization) in the (newly-added) Fig. 5.\\n\\nIt seems that the momentum hyper-parameter \\\\rho starts with a high value and decreases over time, with what appears to be geometric behavior. This is in line with the mentioned theory, although it is simply a result of the automatic tuning process.\\n\\nAs for the learning rate \\\\beta, it increases in an initial phase, only to decrease slowly afterwards. We can compare this to the practitioners\\u2019 manually-tuned learning rate schedules for SGD that include \\u201cburn-in\\u201d periods, which follow a similar shape (He et al., 2016). Similar schedules were also obtained by previous work on gradient-based hyper-parameter tuning (Maclaurin et al., \\u201cGradient-based Hyperparameter Optimization through Reversible Learning\\u201d, ICML 2015).\\n\\nThe trust region \\\\lambda decreases over time, but by a minute amount. The trust region adaptation is a 1D optimization problem over \\\\lambda, minimizing the difference between the ratio \\\\gamma and 1. This 1D problem has many local minima, punctuated by singularities corresponding to Hessian eigenvalues (see Wright & Nocedal (1999) fig. 4.5). Given a large enough spread of eigenvalues, it is not surprising that a minimum close to the initial \\\\lambda was found by the iterative adaptation scheme.\\n\\n> Comment on Stochastic Line Searches; damping for \\\\lambda using \\\\gamma\\nWe agree that a more satisfactory solution would be to employ the ideas of Probabilistic Line Search. 
However, it would involve reframing the optimization in Bayesian terms, which would be a large change and add significant complexity, which we tried to avoid.\\n\\nInstead, and inspired by KFAC (Martens & Grosse, 2015), we change \\\\lambda in *small* increments based on how close \\\\gamma is to 1. The argument is that, even if a particular batch gives an inaccurate estimate of \\\\gamma, in expectation it should be correct, and so most of the small \\\\lambda increments will be in the right direction (in 1D). The procedure would indeed be unstable if the increments were much less gradual.\\n\\n> Re-do experiments with Armijo-Wolfe line search; BFGS performance\\nBFGS only needs 19 function evaluations to achieve 10^-4 error on the *deterministic* Rosenbrock function, which we considered to be a reasonable result. However, the *stochastic* Rosenbrock functions are more difficult, as expected.\\n\\nThe cubic line search is part of the BFGS implementation that ships with Matlab. Following this suggestion, we also tested minFunc\\u2019s implementation of L-BFGS, which includes Armijo and Wolfe line searches. We tried several initialization schemes, as well as different line search variants, and found no improvement over the previous ones (Table 1).\\n\\n> \\u201cTry (true) Newton's method\\u201d\\nWe considered LM as the upper baseline as the Hessian isn\\u2019t necessarily definite positive, but the true Newton\\u2019s method is indeed subtly different. It achieves slightly better results overall (see updated Table 1). Note that when the Hessian matrix has negative eigenvalues, we use the absolute values instead.\\n\\n> \\u201cPlease consider rephrasing some phrases\\u201d\\nWe did; thank you for the suggestions.\"}",
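The small-increment damping adaptation described in this response can be sketched in a few lines. The multiplicative factor and the thresholds on gamma below are illustrative assumptions, not the paper's values.

```python
def adapt_lambda(lmbda, gamma, factor=1.05, low=0.5, high=1.5):
    """Nudge the trust-region damping so the ratio gamma of actual to
    model-predicted decrease stays near 1 (Levenberg-Marquardt style)."""
    if gamma < low:      # model too optimistic: increase damping
        lmbda *= factor
    elif gamma > high:   # model conservative: decrease damping
        lmbda /= factor
    return lmbda
```

Because each mini-batch gives only a noisy estimate of gamma, the increments are kept small so that they are correct in expectation, which is the stability argument made above.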
"{\"title\": \"Response to AR3\", \"comment\": \"We would like to thank the reviewer for the comments and questions.\\n\\n> \\u201cIntroducing \\\\rho parameter and solving for optimal \\\\rho, \\\\beta complicates things\\u201d\\nIt does, but it makes the proposed solver more reliable (no tuning is necessary).\\n\\nIt is possible to set the hyper-parameters manually, but this requires multiple runs (similarly to learning rate tuning for SGD), which makes it much less convenient.\\n\\nOne reason for introducing rho, in addition to the connection to momentum SGD, is that this allowed us to use the same automatic tuning strategy as Martens & Grosse (2015). This formulation depends on the update equations having both rho and beta hyper-parameters. Another intuitive reason is to slowly forget stale updates, which is the same role played by this parameter in momentum SGD. We will clarify this further in the paper.\\n\\nOur analysis for the convex quadratic case (visualized in fig. 4, appendix A) shows that the algorithm converges on a relatively large region of the (rho, beta) parameter-space. However, the best performance is achieved in a relatively narrow band, which will vary depending on the Hessian eigenvalues. Automatically solving for the optimal rho and beta removes this concern.\\n\\n> \\u201cFor ImageNet results, they show 82% accuracy after 20 epochs on full ImageNet using VGG. Is this top5 or top1 error?\\u201d\\nThis is top-1 training error; if it were top-1 validation error, it would indeed be unreasonably good.\\n\\nCounter-intuitively, SGD is well tuned -- its training error stalls, however the validation error keeps going down for a few more epochs. The learning rate annealing schedule was chosen by the authors of the VGG-f model taking this into account. This is a problem with SGD -- as it is implemented, it works both as optimizer and regularizer.\\n\\nWe show the training error in all plots in order to accurately measure improvements in optimization, without the added confusion of such regularization effects. We study and discuss the validation error separately (in the last subsection of the experiments).\\n\\nIn summary, we found that models that have an appropriate number of parameters w.r.t. the dataset size benefit from our improved optimization, while the larger models (e.g. ResNet) require additional regularization to lower the validation error.\", \"we_view_this_development_as_a_two_step_process\": \"first we create algorithms that can optimize the objective function efficiently; and once we have them, we can focus on effective regularization techniques. We believe that this strategy is more promising than developing both simultaneously.\"}",
"{\"title\": \"Well-motivated idea\", \"review\": \"Authors propose choosing direction by using a single step of gradient descent \\\"towards Newton step\\\" from an original estimate, and then taking this direction instead of original gradient. This direction is reused as a starting estimate for the next iteration of the algorithm. This can be efficiently implemented since it only relies on Hessian-vector products which are accessible in all major frameworks.\\n\\nBased on the fact that this is an easy to implement idea, clearly described, and that it seems to benefit some tasks using standard architectures, I would recommend this paper for acceptance.\", \"comments\": [\"introducing \\\\rho parameter and solving for optimal \\\\rho, \\\\beta complicates things. I'm assuming \\\\rho was needed for practical reasons, this should be explained better in the paper. (ie, what if we leave rho at 1)\", \"For ImageNet results, they show 82% accuracy after 20 epochs on full ImageNet using VGG. Is this top5 or top1 error? I'm assuming top5 since top1 would be new world record for the number of epochs needed. For top5, it seems SGD has stopped optimizing at 60% top5. Since all the current records on ImageNet are achieved with SGD (which beats Adam), this suggests that the SGD implementation is badly tuned\", \"I appreciate that CIFAR experiments were made using standard architectures, ie using networks with batch-norm which clearly benefits SGD\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good Paper, Accept\", \"review\": \"In this paper, the authors introduce a new second-order algorithm for training deep networks. The method, named CurveBall, is motivated as an inexpensive alternative to Newton-CG. At its core, the method augments the update role for SGD+M with a Hessian-vector product that can be done efficiently (Algorithm 1). While a few new hyperparameters are introduced, the authors propose ways by which they can be calibrated automatically (Equation 16) and also prove convergence for quadratic functions (Theorem A.1) and guaranteed descent (Theorem A.2). The authors also present numerical results showing improved training on common benchmarks. I enjoyed reading the paper and found the motivation and results to be convincing. I especially appreciate that the authors performed experiments on ImageNet instead of just CIFAR-10, and the differentiation modes are explained well. As such, I recommend the paper for acceptance.\", \"i_suggest_ways_in_which_the_paper_can_be_further_improved_below\": [\"In essence, the closest algorithm to CurveBall is LiSSA proposed by Agarwal et al. They use a series expansion for approximating the inverse whereas your work uses one iteration of CG. If you limit LiSSA to only one expansion, the update rule that you would get would be similar to that of CurveBall (but not exactly the same). I feel that a careful comparison to LiSSA is necessary in the paper, highlighting the algorithmic and theoretical differences. I don't see the need for any additional experiments, however.\", \"For books, such as Nocedal & Wright, please provide page numbers for each citation since the information quoted is across hundreds of pages.\", \"It's a bit non-standard to see vectors being denoted by capital letters, e.g. J(w) \\\\in R^p on Page 2. I think it's better you don't change it now, however, since that might introduce inadvertent typos.\", \"It would be good if you could expand on the details concerning the automatic determination of the hyperparameters (Equation 16). It was a bit unclear to me where those equations came from.\", \"Could you plot the evolution of \\\\beta, \\\\rho and \\\\lambda for a couple of your experiments? I am curious whether our intuition about the values aligns with what happens in reality. In Newton-CG or Levenberg-Marquardt-esque algorithms, with standard local strong convexity assumptions, the amount of damping necessary near the solution usually falls to 0. Further, in the SGD+M paper of Sutskever et al., they talked about how it was necessary to zero out the momentum at the end. It would be fascinating if such insights (or contradictory ones) were discovered by Equation 16 and the damping mechanism automatically.\", \"I'm somewhat concerned about the damping for \\\\lambda using \\\\gamma. There has been quite a lot of work recently in the area of Stochastic Line Searches which underscores the issues involving computation with noisy estimates of function values. I wonder if the randomness inherent in the computation of f(w) can throw off your estimates enough to cause convergence issues. Can you comment on this?\", \"It was a bit odd to see BFGS implemented with a cubic line search. The beneficial properties of BFGS, such as superlinear convergence and self-correction, usually work out only if you're using the Armijo-Wolfe (Strong/Weak) line search. Can you re-do those experiments with this line search? 
It is unexpected that BFGS would take O(100) iterations to converge on a two dimensional problem.\", \"In the same experiment, did you also try (true) Newton's method? Maybe we some form of damping? Given that you're proposing an approximate Newton's method, it would be a good upper baseline to have this experiment.\", \"I enjoyed reading your experimental section on random architectures, I think it is quite illuminating.\", \"Please consider rephrasing some phrases in the paper such as \\\"soon the latter\\\" (Page 1), \\\"which is known to improve optimisation\\\", (Page 7), \\\"non-deep problems\\\" (Page 9).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting research direction but the paper needs a lot more work before publication\", \"review\": \"This paper proposes an approximate second-order method with low computational cost. A common pitfall of second-order methods is the computation (and perhaps inversion) of the Hessian matrix. While this can be avoided by instead relying on Hessian-vector products as done in CG, it typically still requires several iterations. Instead, the authors suggest a simpler approach that relies on one single gradient step and a warm start strategy. The authors points out that the resulting algorithm resembles a momentum method. They also provide some simple convergence proofs on quadratics and benchmark their method to train deep neural networks.\\n\\nWhile I find the research direction interesting, the execution is rather clumsy and many details are not sufficiently motivated. Finally, there is a lot of relevant work in the optimization community that is not discussed in this paper, see detailed comments and references below.\\n\\n1) Method\\nThe derivation of the method is very much driven on a set of heuristics without theoretical guarantees. In order to derive the update of the proposed method, the authors rely on three heuristics:\\na) The first is to reuse the previous search direction z as a warm-start. The authors argue that this might be beneficial if If z does not change abruptly. In the early phase, the gradient norm is likely to be large and thus z will change significantly. One might also encounter regions of high curvature where the direction of z might change quickly from one iteration to the next.\\nThe \\\"warm start\\\" at s_{t-1} is also what yields the momentum term, what interpretation can you give to this choice?\\n\\nb) The second step interleaves the updates of z and w instead of first finding the optimum z. This amounts to just running one iteration of CG but it is rather unclear why one iteration is an appropriate number. It seems one could instead some adaptive strategy where CG with a fixed accuracy. One could potentially see if allowing larger errors at the beginning of the optimization process might still allow for the method to converge. This is for instance commonly done with the batch-size of first-order method. Gradually increasing the batch-size and therefore reducing the error as one gets close to the optimum can still yield to a converging algorithm, see e.g. \\nFriedlander, M. P., & Schmidt, M. (2012). Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3), A1380-A1405.\\n\\nc) The third step consists in replacing CG with gradient descent.\\n\\\"If CG takes N steps on average, then Algorithm 2 will be slower than SGD by a factor of at least N, which can easily be an order of magnitude\\\".\\nFirst, the number of outer iterations may be a lot less for the Hessian-free method than for SGD so this does not seem to be a valid argument. Please comment.\\nSecond, I would like to see a discussion of the convergence rate of solving (12) inexactly with krylov subspace methods. Note that Lanczos yields an accelerated rate while GD does not. So the motivation for switching to GD should be made clearer.\\n\\nd) The fourth step introduces a factor rho that decays z at each step. I\\u2019m not really sure this makes sense even heuristically. 
The full update of the algorithm developed by the authors is:\\nw_{t+1} = w_t - beta nabla f + (rho I - beta H) (w_t - w_{t-1}).\\nThe momentum term therefore gets weighted by (rho I - beta H). What is the meaning of this term? The -beta H term weights the momentum according to the curvature of the objective function. Given the lack of theoretical support for this idea, I would at least expect a practical reason backed up by some empirical evidence that this is a sensible thing to do.\\nThis is especially important given that you claim to decay rho, therefore giving more importance to the curvature term.\\nFinally, why would this be better than simply using CG on a trust-region model? (Recall that Lanczos yields an accelerated linear rate while GD does not).\\n\\n2) Convergence analysis\\na) The analysis is only performed on a quadratic while the authors clearly target non-convex functions, this should be made clear in the main text. Also see references below (comment #3) regarding a possible extension to non-convex functions.\\nb) The authors should check the range of allowed values for alpha and beta. It appears the rate would scale with the square root of the condition number, please confirm, this is an important detail. I also think that the constant is not as good as Heavy-ball on a quadratic (see e.g. http://pages.cs.wisc.edu/~brecht/cs726docs/HeavyBallLinear.pdf), please comment.\\nc) Sub-sampling of the Hessian and gradients is not discussed at all (but used in the experiments). Please add a discussion and consider extending the proof (again, see references given below).\\n\\n3) Convergence Heavy-ball\\nThe authors emphasize the similarity of their approach to Heavy-ball. They cite the results of Loizou & Richtarik 2017. Note that there are earlier results for quadratic functions, such as \\nLessard, L., Recht, B., & Packard, A. (2016). Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1), 57-95.\\nFlammarion, N., & Bach, F. (2015, June). From averaging to acceleration, there is only a step-size. In Conference on Learning Theory (pp. 658-695).\\nThe novelty of the bounds derived in Loizou & Richtarik 2017 is that they apply in stochastic settings.\\nFinally, there are results for non-convex functions such as convergence to a stationary point, see\\nZavriev, S. K., & Kostyuk, F. V. (1993). Heavy-ball method in nonconvex optimization problems. Computational Mathematics and Modeling, 4(4), 336-341.\\nAlso on page 2, \\\"Momentum GD ... can be shown to have faster convergence than GD\\\". It should be mentioned that this only holds for (strongly) convex functions!\\n\\n4) Experiments\\na) Consider showing the gradient norms. \\nb) It looks like the methods have not yet converged in Fig 2 and 3.\\nc) Second order benchmark:\\nIt would be nice to compare to a method that does not use the GN matrix but the true or subsampled Hessian, like the Trust Region/Cubic Regularization methods given below.\\nWhy is BFGS in Rosenbrock but not in NN plots?\\nd) \\\"Batch normalization (which is known to improve optimization)\\\" \\nThis statement requires a reference such as\\nTowards a Theoretical Understanding of Batch Normalization\\nKohler et al., arXiv preprint arXiv:1805.10694, 2018\\n\\n5) Related Work\\nThe related work should include Cubic Regularization and Trust Region methods since they are among the most prominent second order algorithms. Consider citing Conn et al.
2000 Trust Region, Nesterov 2006 Cubic regularization, Cartis et al. 2011 ARC.\", \"regarding_sub_sampling\": \"Kohler & Lucchi 2017: Stochastic Cubic Regularization for non-convex optimization and Xu et al.: Newton-type methods for non-convex optimization under inexact Hessian information.\\n\\n6) More comments\\n\\nPage 2\\nPolyak 1964 should be cited where momentum is discussed.\\n\\\"Perhaps the simplest algorithm to optimize Eq. 1 is Gradient Descent\\\". This is technically not correct since GD is not a global optimization algorithm. Maybe mention that you try to find a stationary point\\nrho (Eq. 2) and lambda (Eq. 4) are not defined\", \"page_4\": \"\", \"algorithm_1_and_2_and_related_equations_in_the_main_text\": \"it should be H_hat instead of H.\\n\\nBackground\\n\\u201cMomentum GD exhibits somewhat better resistance to poor scaling of the objective function\\u201d\\nTo be precise, the improvement is quadratic for convex functions. Note that Goh might not be the best reference to cite as the article focuses on quadratic functions. Consider citing the lecture notes from Nesterov.\\n\\nSection 2.2\\nThis section is perhaps a bit confusing at first as the authors discuss the general case of a multi-valued loss function. Consider moving your last comment to the beginning of the section.\\n\\nSection 2.3\\nAs a side remark, the work of Dauphin does not rely on the Gauss-Newton approximation but a different PSD matrix, this is probably worth mentioning.\", \"minor_comment\": \"The title is rather bold and not necessarily precise since the stepsize of curveball is not particularly small, e.g. in Fig 1.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r1exVhActQ | DEEP-TRIM: REVISITING L1 REGULARIZATION FOR CONNECTION PRUNING OF DEEP NETWORK | [
"Chih-Kuan Yeh",
"Ian E.H. Yen",
"Hong-You Chen",
"Chun-Pei Yang",
"Shou-De Lin",
"Pradeep Ravikumar"
] | State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones. The compression of DNN models has therefore become an active area of research recently, with \emph{connection pruning} emerging as one of the most successful strategies. A very natural approach is to prune connections of DNNs via $\ell_1$ regularization, but recent empirical investigations have suggested that this does not work as well in the context of DNN compression. In this work, we revisit this simple strategy and analyze it rigorously, to show that: (a) any \emph{stationary point} of an $\ell_1$-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization; (b) successful pruning highly relies on an accurate optimization solver, and there is a trade-off between compression speed and distortion of prediction accuracy, controlled by the strength of regularization. Our theoretical results thus suggest that $\ell_1$ pruning could be successful provided we use an accurate optimization solver. We corroborate this in our experiments, where we show that simple $\ell_1$ regularization with an Adamax-L1(cumulative) solver gives pruning ratio competitive to the state-of-the-art. | [
"L1 regularization",
"deep neural network",
"deep compression"
] | https://openreview.net/pdf?id=r1exVhActQ | https://openreview.net/forum?id=r1exVhActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkl23V07e4",
"BJlZt9ptRX",
"HkxH-cpFCQ",
"B1lBnKatR7",
"SkeePQCJTQ",
"Byevdiz3nm",
"BylbaEivn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544967347966,
1543260792616,
1543260668716,
1543260588798,
1541559128453,
1541315438666,
1541022904779
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1417/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1417/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1417/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1417/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1417/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1417/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1417/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies the properties of L1 regularization for deep neural network. It contains some interesting results, e.g. the stationary point of an l1 regularized layer has bounded number of non-zero elements. On the other hand, the majority of reviewers has concerns on that experimental supports are weak and suggests rejection. Therefore, a final rejection is proposed.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"A study on sparse properties of L1-regularization in deep neural networks, yet experimental supports seem week.\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"We thank the reviewer for the feedback and comments.\\n\\n(1) \\\"whether the theory for (5) is rigorously justified by the experiments\\\":\\n\\nWhile our theorem is designed for the layerwise objective (5), in practice for simplicity we find that directly optimize (8) yields promising results is more simple. We will show experimental results for both (5) and (8) in future revisions. Note that by optimizing (8), we achieve satisfactory results satisfying our bounds from analyzing (5) in all experiments in this work.\\n\\n(2) Regarding the bound tightness:\\n\\nWe perform experiments on Cifar 10 with Vgglike-networks with different \\\\lambda values by compressing the last 2 FC layer. \\nWe would like to point out that the bound for NNZ per-layer in this setting is 50000 * K_s, which depends on the number of supports in the stationary point.\\n\\nIf a max-margin loss is used, K_s can be close to 1, which would give us an NNZ bound around 50000, which is not far from the empirical compressed NNZ (~ 10000).\\n\\nepsilon | 1e-4 | 1e-5 | 1e-6 | 1e-7 | 1e-8 | 1e-9 | 1e-10 | 0 |\\nnnz_fc1 | 9052 | 9947 | 10046 | 10053 | 10054 | 10054 | 10054 | 262144|\\nnnz_fc2 | 4549 | 4567 | 4570 | 4570 | 4570 | 4570 | 4570 | 5120 |\\ntrain_acc | 0.9970 | 0.9974 | 0.9979 | 0.9970 | 0.9969 | 0.9972| 0.9969| 0.9970 |\\n\\n(3) regarding minor points:\\n\\nWe will fix the mistakes and typos in future revisions.\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"We thank the reviewer for the nice feedback and concerns.\\n\\n(1) the assumption of \\u201cgeneral position\\u201d:\\n\\nThe columns of V do not need to be independent to be in general position. It is sufficient if V is drawn from any continuous probability distribution. In other words, the assumption holds as long as we add a very small continuously-distributed perturbation to V. Note general position is a much weaker condition than the RIP condition used widely in sparse recovery.\\n\\n\\n(2) Theorem 1 claims the sparse inequality holds for any \\\\lambda:\\n\\nTo validate that the sparse inequality holds for any \\\\lambda, we perform experiments on Cifar 10 with Vgglike-networks with different \\\\lambda values by compressing the last 2 FC layer.\", \"the_result_is_shown_below\": \"epsilon | 1e-4 | 1e-5 | 1e-6 | 1e-7 | 1e-8 | 1e-9 | 1e-10 | 0 |\\nnnz_fc1 | 9052 | 9947 | 10046 | 10053 | 10054 | 10054 | 10054 | 262144|\\nnnz_fc2 | 4549 | 4567 | 4570 | 4570 | 4570 | 4570 | 4570 | 5120 |\\ntrain_acc | 0.9970 | 0.9974 | 0.9979 | 0.9970 | 0.9969 | 0.9972| 0.9969| 0.9970 |\\ntest_acc | 0.9271 | 0.9270 | 0.9266 | 0.9267 | 0.9264 | 0.9262| 0.9265| 0.9268 |\\n\\nWe note that we perform SGD with L1 regularizer to train the network as a pretraining step. Empirically, we find that after the L1 norm is penalized, even a very small epsilon can lead to very sparse solutions. (However, when epsilon is too small, the converging time may grow a lot.) For epsilon >= 1e-9, the nnz_fc1 becomes <= 10054 for the first training epoch. However, for epsilon = 1e-10, nnz_fc1 drops to 10054 after the second epoch.\\n\\n(2) regarding minor points:\\n\\nWe will fix the mistakes and typos in future revisions.\"}",
"{\"title\": \"Reply to AnonReviewer4\", \"comment\": \"We thank the reviewer for the feedback.\\n\\n1) About \\\"Ignoring the latest improvement in (C. Louizos et al., 2017) and (J. Achterhold et al.)\\\":\\n\\nWhile we thank the reviewer for providing us more related works, it worths noticing that pruning ratios in (C. Louizos et al., 2017), (J. Achterhold et al.) are not as strong as our compared baseline \\\"Variational Dropout\\\". For example, for LeNet on Mnist, the former have ~0.65%, while the latter (and our result) are less than 0.4%, and for VGG on CIFAR-10, the former have ~5.5%, while the latter (and our result) are less than 2%. That is, both our method and VD has better results compared to the two related works.\\n\\nNote many results provided in (C. Louizos et al., 2017), (J. Achterhold et al.) are for simultaneous pruning and quantization, while our submission focuses more on investigating the pruning effect of the simple L1 regularizer. In this work, we focus on the weight pruning ratio without quantization.\\n\\n(2) About comment \\\"Repeating the old story from other papers\\\":\\n\\nOur story focuses more on the analysis of \\\"problem\\\" instead of the \\\"algorithm\\\". In other words, we argue that different problems have different compression rate, depending on their number of supporting labels, when a simple L1-regularized pruning objective is used. The algorithm we proposed is just a tool for helping our iterates getting closer to the stationary points.\\n\\n(3) About comment \\\"quite limited novelty\\\":\\n\\nFirstly, our novelty lies more on the analysis of the pruning objective than on the algorithm. Second, it is a wrong impression that we are proposing ADAM over SGD. Our proposition for the algorithm is the \\\"L1 cumulative\\\" technique as a general extension module to modify any stochastic-gradient-based algorithms, such as SGD and ADAM, into a sparsity-inducing solver.\\n\\n(4) About comment \\\"lacking solid experiments\\\":\\n\\nThe sentence is an editorial mistake. We will strengthen our experiments in future revisions.\"}",
"{\"title\": \"Repeating the old story from other papers, quit limited novelty, lacking solid experiments\", \"review\": \"The main concerns come from the following parts:\\n\\n\\n(1) Repeating the old story from other papers:\\nA large part of math is from previous works, which seems not enough for the ICLR conference.\\nIt is very surprising that the authors totally ignore the latest improvements in neural network compression. Their approach is extremely far away from the state of the art in terms of both methodological excellence and experimental results. The authors should read through at least some of the papers I list below, differentiate their approach from these pioneer works, and properly justify their position within the literature. They also need to show a clear improvement on all these existing pieces of work. \\n\\n(2) quite limited novelty:\\nIn my opinion, the core contribution is replacing SGD with Adam.\\nFor network compression, it is common to add L1 Penalty to loss function. The main difference of this paper is change SGD to Adam, which seems not enough. \\n\\n(3) lacking solid experiments:\\nIn section Experiment, the authors claim \\\"Finally, we show the trade-off for pruning Resnet-50 on the ILSVRC dataset.\\\", but I cannot find the results. \\n\\nIs the ResNet-32 too complex for cifar-10? Of course, it can be easily pruned if the model is too much capacity for a simple dataset. Why not try the Resnet-20 first?\\n\\n[1] C. Louizos et al., Bayesian Compression for Deep Learning, NIPS, 2017\\n[2] J. Achterhold et al., Variational Network Quantization, ICLR, 2018\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"an interesting perspective on the L1 regularization of neural network\", \"review\": \"This paper discusses the effect of L1 penalization for deep neural network. In particular it shows the stationary point of an l1 regularized layer has bounded non-zero elements.\", \"the_perspective_of_the_proof_is_interesting\": \"By chain rule, the stationary point satisfies nnz(W^j) linear equations, but the subgradients of the loss function w.r.t. the logits have at most N\\\\times ks variables. If the coefficients of the linear equation are distributed in general positions, then the number of variables should not be larger than the number of equations.\\n\\nWhile I mostly like the paper, I would like to point out some possible issues:\", \"main_concerns\": \"1. the columns of V may not be independent during the optimization(training) process. In this situation, I am not quite sure if the assumption of \\u201cgeneral position\\u201d still holds. I understand that in literatures of Lasso and sparse coding it is common to assume \\u201cgeneral position\\u201d. But in those problems the coefficient matrix is not Jacobian from a learning procedure. \\n\\n2. the claim is a little bit counter intuitive: Theorem 1 claims the sparse inequality holds for any \\\\lambda. It is against the empirical observation that when lambda is extremely small, effect of the regularizer tends to be almost zero. Can authors also show this effects empirically, i.e., when the regularization coefficients decrease, the nnz does not vary much? (Maybe there is some optimization details or approximations I missed?)\", \"some_minor_notation_issues\": \"1. in theorem 1: dim(W^{(j)})=d should be dim(vec(W^{(j)}))=d\\n2. in theorem 1: Even though I understand what you are trying to say, I would suggest we describe the jacobian matrix V in details. Especially it is confusing to stack vec(X^J) (vec(W^j)) in the description.\\n3. the notations of subgradient and gradient are used without claim\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice Theoretical Insights, but Not Sure How Experiments Substantiate the Theory\", \"review\": \"The paper theoretically analyzes the sparsity property of the stationary point of layerwise l1-regularized network trimming. Experiments are conducted to show that reaching a stationary point of the optimization can help to deliver good performance. Specific comments follow.\\n\\n1. While the paper analyzes the properties of the stationary point of the layerwise objective (5), the experiments seem to be conducted based on the different joint objective (8). Experimental results of optimizing (5) seem missing. While the reviewer understands that (5) and (8) are closely related, and the theoretical insights for (5) can potentially translate to the scenario in (8), the reviewer is not sure whether the theory for (5) is rigorously justified by the experiments.\\n\\n2. It is also unclear how tight the bound provided by Theorem 1 is. Is the bound vacuous? Relevant statistics in the experiments might need to be reported to elucidate this point.\\n\\n3. It is also unclear how the trade-off in point (b) of the abstract is justified in the experiments.\", \"minor_points\": \"page 2, the definition of $X^{(j)}$, the index of $l$ and $j$ seem to be typos.\\npage 2, definition 1, the definition of the bracket need to be specified. \\npage 4, the concept of stationary point and general position can be introduced before presenting Theorem 1 to improve readability.\\npage 4, Corollary 1, should it be $nnz(\\\\hat{W})\\\\le JN k_{\\\\mathcal{S}}$?\\npage 7, Table 2, FLOPS should be FLOP? \\npage 8, is FLOP related to the time/speed needed for compression? If so, it should be specified. If not, compression runtime should also be reported.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
B1lx42A9Ym | Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning | [
"Nhat Ho",
"Tan Nguyen",
"Ankit B. Patel",
"Anima Anandkumar",
"Michael I. Jordan",
"Richard G. Baraniuk"
] | Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolutional Neural Networks (CNNs) for supervised prediction in images, we design the Neural Rendering Model (NRM), a new hierarchical probabilistic generative model whose inference calculations correspond to those in a CNN. The NRM introduces a small set of latent variables at each level of the model and enforces dependencies among all the latent variables via a conjugate prior distribution. The conjugate prior yields a new regularizer for learning based on the paths rendered in the generative model for training CNNs–the Rendering Path Normalization (RPN). We demonstrate that this regularizer improves generalization both in theory and in practice. Likelihood estimation in the NRM yields the new Max-Min cross entropy training loss, which suggests a new deep network architecture–the Max-Min network–which exceeds or matches the state-of-the-art for semi-supervised and supervised learning on SVHN, CIFAR10, and CIFAR100. | [
"neural nets",
"generative models",
"semi-supervised learning",
"cross-entropy"
] | https://openreview.net/pdf?id=B1lx42A9Ym | https://openreview.net/forum?id=B1lx42A9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryegAVM3kE",
"HylNFR3707",
"H1x7ZTn7RQ",
"SyeKt5hQCm",
"Syxml5nmC7",
"BJlc5w3X07",
"ryxX4v2mAX",
"HJl7MH27R7",
"HklCD43QCm",
"HJeZDYVuaX",
"SJlMPZ8qhX",
"HJeALQbq3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544459464372,
1542864508331,
1542864122858,
1542863488827,
1542863339186,
1542862738322,
1542862634582,
1542862091315,
1542861925984,
1542109528583,
1541198170119,
1541178198507
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1416/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1416/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1416/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1416/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduced a Neural Rendering Model, whose inference calculation corresponded to those in a CNN. It derived losses for both supervised and unsupervised learning settings. Furthermore, the paper introduced Max-Min network derived from the proposed loss, and showed strong performance on semi-supervised learning tasks.\\n\\nAll reviewers agreed this paper introduces a highly interesting research direction and could be very useful for probabilistic inference. However, all reviewers found this paper hard to follow. It was written in an overly condensed way and tried to explain several concepts within the page limit such as NRM, rendering path, max-min network. In the end, it was not able to explain key concepts sufficiently.\\n\\nI suggest the authors take a major revision on the paper writing and give a better explanation about main components of the proposed method. The reviewer also suggested splitting the paper into two conference submissions in order to explain the main ideas sufficiently under a conference page limit.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Very interesting direction but requiring major revision for readability\"}",
"{\"title\": \"An example of a 2-layer Neural Rendering Model (NRM) to clarify the model's definition\", \"comment\": \"We would like to thank the reviewer for his/her comments. In what follows we shall give an example of a 2-layer Neural Rendering Model (NRM) to clarify the definition of our model. Also, in order to simplify the notations, we will define the generation process in the vectorized form. A L-layer NRM can be generalized from this example.\\n\\nThe 2-layer NRM is a generative model, which generates images from the class templates $\\\\mu$ via a linear transformations $\\\\Lambda$. Here $\\\\mu$ is a vector depending on the class label y and $\\\\Lambda$ is a matrix depending on a set of latent variables z. Let further assume that $\\\\mu$ of size 2 x 1, and $\\\\Lambda$ is of size D x 2. Here, we assume the generated image X is of size D x 1. Images are generated from this 2-layer NRM as follows:\\n\\n1) First, we sample the class label y from a categorical distribution $Cat(\\\\pi_y)$. Given the value of y, we will select the corresponding template $\\\\mu$ from a set of predefined templates, which will be learned during the training of the model by stochastic gradient descent (SGD).\\n\\n2) Second, given y, we sample the latent variables z from the prior p(z|y), which is also a categorical distribution $Cat(\\\\pi(z|y))$. Note that z contains s and t, which are the template selecting variable and the local translation variable defined in our paper. s and t are vectors of size 2 x 1. Element in s and t correspond to pixels in $\\\\mu$. Element s(1) and s(2) in s take one of the two possible values - 0 and 1 - which selects render or not render. Element t(1) and t(2) in t take one of the four possible values - UPPER LEFT, UPPER RIGHT, LOWER LEFT, and LOWER RIGHT. If s(1) is 0, then the first column of $\\\\Lambda$, i.e. $\\\\Lambda(:,1)$, is a vector of 0\\u2019s. If s(1) is 1, then $\\\\Lambda(:,1)$ takes one of the four possible predefined values depending on the value of t(1). These four vectors are locally translated versions of each other. The same process is applied for s(2), t(2), and $\\\\Lambda(:,2)$. The generated image X is then given by:\\n\\n\\tX = $\\\\mu(1)$ x $\\\\Lambda(:,1)$ + $\\\\mu(2)$ x $\\\\Lambda(:,2)$ + pixel noise\\n\\nNote that similar to the class template $\\\\mu$, $\\\\Lambda$ will be learned during training by SGD.\\n\\nThe process above is captured by equation (1), (2), and (3) and illustrated by Figure 3 in our paper.\"}",
"{\"title\": \"Compare the NRM with the Deep Rendering Model of [Patel 2016] and show the advantage of our parametrized prior on the latent variables\", \"comment\": \"Let us take this opportunity to clarify our contributions in the Neural Rendering Model (NRM) in comparison with the Deep Rendering Model (DRM) work of [Patel 2016]. In our response below, we will also address your question regarding the advantage of our parametrized prior on the latent variables in the NRM.\\n\\n1) We introduce the dependency between latent variables in our NRM. This dependency is implicitly enforced by the joint prior of all latent variables $p(z|y)$ in the NRM (see in equation (1) in our paper). Particularly, we parametrize this joint prior such that it cannot be factorized into the product of priors of individual latent variables. As a result, the latent variables are dependent. Such dependency is missing in the DRM, and this limits the DRM\\u2019s performance on semi-supervised learning tasks. As shown in our paper, the NRM, which captures the dependency between latent variables, significantly outperforms the DRM in semi-supervised learning tasks on popular benchmarks. Furthermore, the parametric form we choose for the joint prior $p(z|y)$ yields a conjugate prior for the NRM. Thus, inference and learning in the NRM is still computationally efficient. In particular, as shown in Theorem 2.2 in our paper, during inference, due to its conjugate form, the joint prior $p(z|y)$ only adds an additional bias term b(l) into each convolutional layer l in the CNNs. During learning, compared to the DRM, the NRM only needs to learn that extra bias term b(l) at each layer.\\n\\n2) We derive the cross-entropy loss used for training CNNs with labeled data in conjunction with the architecture of the CNNs, all from maximizing the conditional log-likelihood of the NRM as shown in Theorem 2.3(a). This derivation is missing in the DRM of [Patel 2016]. In their paper, they only derive the reconstruction loss for unsupervised learning without addressing supervised learning, which CNNs are good at. From our derivation of the cross-entropy loss, we are able to provide statistical guarantees and generalization bounds of NRM and CNNs for supervised and semi-supervised learning tasks. These statistical guarantees and generalization bounds are missing in [Patel 2016] due to the fact that they cannot explain supervised learning in CNNs from their DRM.\"}",
"{\"title\": \"Motivation for using the MIN-MAX entropy\", \"comment\": \"We agree with the reviewer that we didn\\u2019t explain the Max-Min cross-entropy loss very clearly in the main text of our paper due to the page constraint. Let us try to explain the motivation for our Max-Min cross-entropy loss here. In the Appendix C.14 of our paper, we give a proof for Theorem 2.3(a) in the main text, which establishes the connection between the cross-entropy loss for training with labeled data and the conditional log-likelihood of the NRM. The equation (31) and (32) in that appendix (page 30) show how we derive the cross-entropy loss from the conditional log-likelihood of the NRM. Notice that, from equation (31) to equation (32), we lower bound the conditional log-likelihood of the NRM by maximizing over $z_{i}$ of the term $log(exp(\\\\psi_{i}(y_{i}, z_{i})))$ without considering the log sum term. This corresponds to maximizing the likelihood of the correct labels without considering the likelihood of the incorrect labels. Notice that without normalization, maximizing the likelihood of the correct labels might also increase the likelihood of the incorrect labels. Alternatively, we can lower bound the conditional log-likelihood of the NRM by minimizing over $z_{i}$ of the log sum term. This is equivalent to minimizing the likelihood of the incorrect labels and yields the MIN cross-entropy loss. Combining both cross-entropy losses from these two lower bounds yields the MAX-MIN cross-entropy loss, which tries to maximize the likelihood of the correct labels and minimize the likelihood of the incorrect labels at the same time.\"}",
"{\"title\": \"Explain why we condition the expected complete-data likelihood on the class label $y$\", \"comment\": \"The reviewer is right that one way to do unsupervised learning is to maximize the expected complete data likelihood $\\\\sum_{i} E_{z, y} p(x_i, z, y|\\\\theta)$ in which $\\\\theta$ is the parameters of the model. This is indeed what we do to learn from unlabeled data using the NRM. However, in order to simplify the notations and formulas in the main text of our paper, we condition our model on $y$ so that we can ignore the term $ln p(y)$ in our equations. In order to convert this conditional model $p(x, z|y; \\\\theta)$ into a marginal model $p(x, z, y| \\\\theta)$, we only need to add the term $ln p(y)$ back into the equations in our paper. We will amend the text to reflect this.\"}",
"{\"title\": \"Summarize the major claims of our paper\", \"comment\": \"We shall try to summarize the major claims of our paper below.\", \"contribution_1\": \"We develop a new generative model, the Neural Rendering Model (NRM), whose inference matches the architecture of a CNN. Different from the Deep Rendering Model (DRM) work of [Patel 2016], latent variables in our NRM are dependent.\", \"contribution_2\": \"We develop losses for training the CNNs with labeled and unlabeled data from maximizing the conditional log-likelihood and the expected complete-data log-likelihood of the NRM. Deriving losses for training the CNNs with labeled data is missing in the DRM work of [Patel 2016]. Given these losses, we provide consistency guarantees and generalization bounds for supervised and semi-supervised learning task in the NRM. Using the NRM, we also develop a new CNN architecture, which we term the Max-Min network. We show that the Max-Min network outperforms CNNs on supervised and semi-supervised learning tasks on popular benchmarks.\", \"contribution_3\": \"We show that NRM + Max-Min network achieves state-of-the-art empirical results for semi-supervised and supervised learning on benchmarks including CIFAR10, CIFAR100, and SVHN.\"}",
"{\"title\": \"Justify why we discuss many ideas, including the Max-Min network and the statistical guarantees, in our paper\", \"comment\": \"We would like to thank the reviewer for his/her comments. We agree with the reviewer that we include many ideas in our paper. The Neural Rendering Model (NRM) without the Max-Min network can be itself a separate paper. However, given the NRM, it would be great to demonstrate how the model can be employed to improve the CNNs. In the paper, we show how to use NRM to design losses for training CNNs with both unlabeled and labeled data. We would also like to show how to utilize NRM to modify the architecture of CNNs. This is why we try to incorporate the Max-Min network into our paper. Furthermore, given our probabilistic setting, we believe that it is important to provide statistical guarantees for the model to establish that NRM is well defined statistically.\"}",
"{\"title\": \"Examples of how to incorporate the task-specific knowledge into the NRM\", \"comment\": \"One example of task-specific knowledge discussed in the paper is learning with given labels (supervised learning) or learning without labels (unsupervised learning). As mentioned in the paper, when labeled data are available, we can design the objective loss for CNNs from the conditional log-likelihood of the NRM. Similarly, objective loss for training CNNs with unlabeled data can be derived from the expected complete-data log-likelihood of the NRM. This unsupervised learning loss can only be computed using the NRM since it is the reconstruction loss between the input images and the images reconstructed from the NRM.\\n\\nAnother example of task-specific knowledge that can be incorporated into the NRM is learning when there are outliers in the training set. Instead of using Gaussian pixel noise in equation (2) in our paper, we can allow the noise to be from a Students\\u2019 t-distribution. Then the NRM is equivalent to a mixture of Students t-distributions. It has been shown in (Bishop, 2006) that a mixture of Students t-distributions is robust to outliers. However, this idea is out of the scope of our paper, and we leave it for future work.\\n\\nC. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. ISBN 978-0387-31073-2. URL http://research.microsoft.com/en-us/um/people/cmbishop/prml/.\"}",
"{\"title\": \"Explanation for the rendering paths and the RPN terms\", \"comment\": \"We would like to thank the reviewer for pointing out that the rendering paths and the RPN are not clearly explained throughout our paper. In what follows we shall try to elucidate these two terms.\\n\\nA rendering path is a set of latent variables (s_{l}, t_{l}, y), l = 1,2,...,L, in the NRM, where s_{l} decides to render or not at particular locations in layer l, t_{l} decides how to translate the rendered image locally in layer l, and y is the class label. This corresponds to a set of the ON-OFF states of the ReLUs, the argmax values from the Max Pooling layers, and the class label y in the CNN. For example, let consider a 2 x 2 image patch at layer 1 in the CNN, taking the following values [2, -3; -4, 5]. After applying Max Pooling, our patch yields one scalar, which is 5. The argmax values from the Max Pooling is then 4 or Lower Right, which implies that the Lower Right location has the highest pixel value in our patch. 4 or Lower Right is then the value for t_{1} at the corresponding pixel in the layer 1 of the NRM. After applying ReLU on 5 yield the same value 5. The ON-OFF states of the ReLU will be 1 instead of 0. 1 is then the value for s_{1} at the corresponding pixel locations in the layer 1 of the NRM. A particular value for (s_{l}, t_{l}, y), l = 1,2,...,L, defines a rendering path in NRM.\\n\\nThe Rendering Path Normalization (RPN) term is proportional to the negative of the log prior of the most probable rendering path (see equation 8). When minimizing the negative log-likelihood of the NRM, this RPN term encourages that the rendering path estimated by the CNN during inference has higher prior compared to other rendering paths. \\n\\nWe will amend the text to include the explanation above.\"}",
"{\"title\": \"Interesting direction for probabilistic inference with CNNs and good semi-supervised learning results, but writing is difficult to follow\", \"review\": \"Summary: This paper introduces the Neural Rendering Model (NRM), a generative model in which the computations involved in inference correspond to those of a CNN forward pass. The NRM\\u2019s supervised learning objective is lower bounded by a variant of the cross-entropy objective. This objective is used to formulate a max-min network, which has a particular type of weight sharing between a standard branch with max pooling / ReLUs and a second branch with min pooling / NReLUs. The max-min objective and network show strong performance on semi-supervised learning tasks.\\n\\nPosing a CNN as inference in a generative model is an interesting direction, and could be very useful for probabilistic inference in the context of neural nets. However, the paper is rather difficult to follow and requires frequent reference to the appendix to understand the main body. Some important components (like those relating to rendering paths and RPNs) are given good intuitive explanations early on but remain a bit ambiguous throughout the paper. I would recommend improving the presentation before publication.\", \"question\": \"\\u201cwe can modify NRM to incorporate our knowledge of the tasks and datasets into the model and perform JMAP inference to achieve a new CNN architecture.\\u201c\\nI appreciate the CNN / NRM correspondence in Table 1, and see how the NRM may be modified to produce modified CNN architectures. That being said, I am not sure I understand what sorts of task-specific knowledge are being referred to here. Could you give an example of a type of knowledge that the NRM would allow you to bake into a CNN architecture, but would otherwise be difficult to incorporate?\", \"minor\": \"\\u201cAs been shown later in Section 2.2\\u2026\\u201d\\n\\n\\u201c\\u2026is part of the optimization in equation equation 6.\\u201d\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper is not self-contained. Otherwise it provides an interesting probabilistic interpretation of CNNs which allows to design semi-supervised learning algorithm leading to promising empirical results.\", \"review\": [\"pros:\", \"Interesting probabilistic interpretation of the CNNs improving work of [Patel 2016].\", \"State-of-the-art results following from the proposed probabilistic model.\"], \"cons\": [\"The regular 10 pages of the paper are not self-contained.\", \"The paper is written in overly condensed way. I found it impossible to clearly understand major claims of the paper without reading the accompanied 34 pages long appendix. Many concepts/notations used in the paper are introduced in the appendix. My assessment is done solely based on reading the 10 regular pages.\", \"The probabilistic model NMR (equ (1) and (2)) defines distribution of inputs given latent variables and the outputs, $p(x|z,y)$, as well as it defines a distribution $p(z|y)$. Hence, in principle, one could maximize $p(x)=\\\\sum_{i} E_{z} p(x_i|z,y)p(z|y)p(y)$ when learning from unsupervised data. Instead, the authors propose to learn by MINIMIZING the expectation (not clear w.r.t which variables) of $\\\\log p(x,z|y)$ (equ (7)). Although it leads to empirically nice results, I do not see a clear motivation for such objective function.\", \"The motivation for using the MIN-MAX entropy as a loss function (sec 3) is also not clear. Why it should be better than the standard cross-entropy in the statistical sense?\", \"The proposed probabilistic model NMR differs form the previous work of [Patel 2016] by introducing the prior (1) on the latent variable. Unfortunately, pros and cons of this modifications are not fully discussed. E.g. how using dependent latent variables impact complexity of the inference.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"intersting model and claims, but incomprehensible paper\", \"review\": \"The paper claims to propose a novel generative probabilistic neural network model such that its encoder (classifying an image) can be approximated by a convolutional neural network with ReLU activations and MaxPooling layers. Besides the standard parameters of the units (weights and biases), the model has two additional latent variables per unit, which decide whether and where to put the template (represented by the weights of the neuron) in the subsequent layer, when generating an image from the class. Furthermore, the authors claim to derive new learning criteria for semi-supervised learning of the model including a novel regulariser and claim to prove its consistency.\\n\\nUnfortunately, the paper is written in a way that is completely incomprehensible (for me). The accumulating ambiguities, together with its sheer length (44 pages with all supplementary appendices!), make it impossible for me to verify the model and the proofs of the claimed theorems. This begins already with definition of the model. The authors consider the latent variables as dependent and model them by a joint distribution. Its definition remains obscure, let alone the question how to marginalise over these variables when making inference. Without a thorough understanding of the model definition, it becomes impossible (for me) to follow the presentation of the learning approaches and the proofs for the theorem claims.\\n\\nIn my view, the presented material exceeds the limits of a single conference paper. A clear and concise definition of the proposed model accompanied by a concise derivation of the basic inference and learning algorithm would already make a highly interesting paper.\\n\\nConsidering the present state of the paper, I can't, unfortunately, recommend to accept it for ICLR.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyfyN30qt7 | NICE: noise injection and clamping estimation for neural network quantization | [
"Chaim Baskin",
"Natan Liss",
"Yoav Chai",
"Evgenii Zheltonozhskii",
"Eli Schwartz",
"Raja Girayes",
"Avi Mendelson",
"Alexander M.Bronstein"
] | Convolutional Neural Networks (CNN) are very popular in many fields including computer vision, speech recognition, and natural language processing, to name a few. Though deep learning leads to groundbreaking performance in these domains, the networks used are very demanding computationally and are far from real-time even on a GPU, which is not power efficient and therefore does not suit low power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerate the runtime significantly. Yet, this acceleration comes at the cost of a larger error. The NICE method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve the accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and 3-bit activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low power real-time applications. | [
"Efficient inference",
"Hardware-efficient model architectures",
"Quantization"
] | https://openreview.net/pdf?id=HyfyN30qt7 | https://openreview.net/forum?id=HyfyN30qt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlyP7ryx4",
"B1xOIQXcnm",
"BJl1eHfFnQ",
"HJeXr78fhQ",
"SyeRdDUJ3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544667990663,
1541186383965,
1541117158683,
1540674362837,
1540478838489
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1413/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1413/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1413/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1413/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1413/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper addresses an important problem, quantizing deep neural network models to reduce the cost of implementing them on hardware such as FPGAs without severely affecting task performance. The approach explored in the paper combines three ideas: (1) injecting noise into the network to simulate the effects of quantization noise, (2) a smart initialization of the parameter and activation clamping along with learning of the activation clamping using the straight-through estimator, and (3) a gradual approach to quantization. While the reviewers agreed that the problem is important, they raised concerns about the novelty of the proposed approach and the quality of the experiments. The authors did not respond to the reviewers in the discussion period, and did not revise their submission.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Addresses an important problem, but the novelty is limited and experiments may not be good enough\"}",
"{\"title\": \"Low novelty and weak experiments.\", \"review\": \"[Summary]\\nNeural network quantization is can enable many practical applications for deep learning, therefore it is an important research problem. The paper claims two contributions: 1. Injecting noise during training to make it more robust to quantization errors. 2. Clamping the parameter values in a layer as well as the activation output, where the clamping interval is some multiple of the standard deviation about the mean, and the clamping interval is updated using the Straight through estimator. The main strength of the paper lies in the empirical results where the combination of techniques employed by the authors outperforms the SOTA methods in a compute of scenarios.\\n\\n[Pros]\\nThe paper is working on an important problem area and as a technical report this work can be valuable in the industry. There is novelty in the particular combination of techniques that the authors have employed and some of the empirical results show the strength of the technique.\\n\\n[Cons]\\nthe main contribution of the paper is a careful combination of existing techniques and the associated empirical results, therefore the experiments need to be strong. I noticed some strange omissions in the results, and asked the authors for a reply via a public comment but they did not reply. Specifically,\\n\\na) On RESNET 34 the results for PACT 5,5 are not shown and JOINT and PACT on 3,3 on ResNet-34 are also not shown. Why are these results omitted?\\n\\nb) The noise+gradual training decreases performance on (layer-weight bitwidth, activation bitwidth) = (3, 3). But further experiments for table 2 where the nets do not use noise+gradual training is not shown. Currently the proposed recipe for quantizing nets does not seem to be all that better than existing methods and it hard to guess exactly what was the reason for the improved results in situations where the results were infact better. Why were these experiments omitted ?\\n\\nOverall the experimental results in the paper are weak and the novelty of the proposed methods is low.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Network quantization on ResNets\", \"review\": \"The article presents a method for quantization of deep neural networks for classification and regression, using three key parts: (i) noise injection to model the effect of quantization during forward inference, (ii) clamping with learned maximum activations to reduce the quantization bin size, and (iii) gradual quantization of blocks of the network, while previously quantized blocks remain unchanged. The method is evaluated on ImageNet, CIFAR-10, and a regression task, showing performance on-par or better than state-of-the-art methods for particular quantization bit size. Finally, the method is used for porting network onto a FPGA.\\n\\nThe paper addresses an important topic, because there is increasing interest in hardware-efficient implementations of deep neural networks. The method could be interesting for practitioners, because it does not interfere with the original training of the full-precision method, and can be applied later on.\\n\\nThe main weakness is that none of the proposed methods are entirely original, and the combination is rather ad-hoc than well-justified. For example, quantization noise has been considered in several previous articles, e.g. already in BinaryConnect (Courbariaux et al. 2015), although the novelty here is that the noise is explicitly added during the forward path. However, the choice of the Bernoulli mask with p=0.05 is not justified and might not work best for other tasks. The authors admit that gradual quantization has been proposed before, and clamping a ReLU is also not new, although here a new way to learn and initialize the clamping parameters is presented.\\n\\nThe article would be OK if the empirical results were really strong, but unfortunately they are not entirely convincing:\\n1. The classification results are only for ResNet architectures, it remains unclear whether results would hold also for other architectures.\\n2. The numerical results in Table 2 are very close to each other, and no error bars are available, so it is not possible to judge whether differences are significant. Also, the advantage of the NICE method vanishes for 3-bit models.\\n3. The results for CIFAR-10 come without any comparison.\\n4. The results for regression are only compared to a single method, which is re-implemented by the authors, and might therefore not be fully optimized. Thus there is no strong baseline to judge the results.\\n5. No results are shown for the hardware implementation.\\n\\nOverall, the paper is not a particularly interesting read for people interested in a deeper understanding of network quantization, but the method could still be valuable for applications. Is this sufficient for ICLR? Since the experimental results do not entirely convince me I will put my grade slightly below acceptance threshold.\", \"minor_points\": [\"The abstract has a pretty long introduction before it begins to tell what the contributions of the article are.\", \"Occasional grammar mistakes.\", \"Tables 1 and 2 are misplaced\"], \"pros\": [\"important topic (network quantization)\", \"good empirical results\", \"easy to apply\"], \"cons\": [\"combination of previously proposed methods\", \"no convincing justification\", \"no strong advantage over previous methods\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"promising results but presentation needs some work\", \"review\": \"The authors present a method for fine-tuning neural networks so inference can be performed in a quantized low bit data format down to 3 bits. The authors achieve this through a combination of three techniques:\\n1. Noise injection to fine-tune the weights before quantization. The effect of noise injection can model that of quantization, but rather than being stuck in a quantization bin, fine grained weight updates are still possible\\n2. A schedule that quantizes layer by layer, rather than all layers at the same time\\n3. Clipping weights and activations within a learned range to obtain finer grained bins within that range. \\n\\n\\nThe main contribution is a novel combination of mostly existing techniques. Clipping (or clamping as the authors call it) has been proposed by Zhang et al. 2018, but it's an interesting contribution to have the clipping learned directly via backpropagation with a straight-through estimator. Treating the quantization as noise has been proposed in a different form in McKinstry et al. 2018. Gradual quantization appears novel, but is also the least interesting of the techniques. Therefore, novelty on ideas/methods is somewhat limited, and the contribution is mostly the in the impressive experimental results, which appear to be outperforming previous methods. The main weaknesses are poor writing, and that some details of the implementation required to reproduce the results are missing. For example, the training schedule is not given, e.g. how many epochs to train the clean model, how many with noise, how many quantized. Details on the gradual quantization are also missing. Block based quantization is completely heuristic and not well motivated. If this is the main novel ingredient, more details on the mechanics would be needed. Is both the noise injection and the quantization done in blocks? If the motivation is in \\\"the opportunity to adapt\\u201d, then what does the adaptation look like? \\n\\nAs above, my other main issue is with the writing, there are many examples where I would suggest improvements:\\n\\nThis work could be improved greatly by copy editing for English grammar. There are many typos (including ones that can be caught by autocorrect, missing punctuation, or using similar but unrelated words, e.g. \\\"token\\\" instead of \\\"taken\\\"). The manuscript appears hastily put together and not ready for publication.\", \"the_acronym_nice_already_has_a_meaning_in_the_dl_literature\": \"Dinh, L., Krueger, D., & Bengio, Y. (2014). NICE: Non-linear Independent Components Estimation. It confusing to reuse it.\\n\\nThe term clamping is only explained on page 4 but used since the abstract. It\\u2019s used in a nonstandard way to mean \\u201cconstrained to lie within a range\\u201d which should be explained earlier. I think \\u201cclipped\\u201d would be a better term, following the related Choi et al. 2018. Clamping usually means \\\"constrained to a fixed value\\\" (not a range), so it is not a good term to use in this context. \\n\\nAre the results shown in table 2 and table 3 from a single trial or averaged across reruns? If single trial, it's misleading to have 2 figures after the decimal. Even non-quantized ResNet tends to have 0.5% or so run to run variability, which is much larger than the differences between some of the methods shown here. In fact, a lot of the results could just be due to picking a lucky random seed. 
\\n\\nComparisons are shown against methods JOINT (Jung et al), LQ-Nets (Zhang et al), FAQ (McKinstry et al). It would be helpful to present them with the same names in the \\\"related work\\\" section, and explain why they were picked out for the comparison. For someone not familiar with the literature it's hard to see why these 3 would be the obvious picks. \\n\\nReadability would increase if table 2 and 3 were moved to section 4 where they are referenced, rather than after the discussion. Fig 2 font size too small and hard to read.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"On omitted experiments and citations\", \"comment\": \"This paper is working on an important problem. However, since the main contribution of the paper is a careful combination of existing techniques and the associated empirical results, therefore the experiments need to be strong. I have noticed some strange omissions in the results.\\n\\na) On RESNET 34 the results for PACT 5,5 are not shown and JOINT and PACT on 3,3 on ResNet-34 are also not shown. Why are these results omitted?\\n\\nb) The noise+gradual training decreases performance on (layer-weight bitwidth, activation bitwidth) = (3, 3). But further experiments for table 2 where the nets do not use noise+gradual training is not shown. Currently the proposed recipe for quantizing nets does not seem to be all that better than existing methods and it hard to guess exactly what was the reason for the improved results in situations where the results were infact better. Why were these experiments omitted ?\\n\\nc) You mention in the paper that \\\"following common [sic] we don\\u2019t quantize first and last layers of the networks\\\" What is the citation for this claim? The paper by Arora et. al. that you do cite in the paper says the exact opposite. Their experiments on compression shows that later layers (including the last layer) are more compressible than the earlier layers.\"}"
]
} |
|
Byx1VnR9K7 | Trajectory VAE for multi-modal imitation | [
"Xiaoyu Lu",
"Jan Stuehmer",
"Katja Hofmann"
] | We address the problem of imitating multi-modal expert demonstrations in sequential decision making problems. In many practical applications, for example video games, behavioural demonstrations are readily available that contain multi-modal structure not captured by typical existing imitation learning approaches. For example, differences in the observed players' behaviours may be representative of different underlying playstyles.
In this paper, we use a generative model to capture different emergent playstyles in an unsupervised manner, enabling the imitation of a diverse range of distinct behaviours. We utilise a variational autoencoder to learn an embedding of the different types of expert demonstrations on the trajectory level, and jointly learn a latent representation with a policy. In experiments on a range of 2D continuous control problems representative of Minecraft environments, we empirically demonstrate that our model can capture a multi-modal structured latent space from the demonstrated behavioural trajectories. | [
"imitation learning",
"latent variable model",
"variational autoencoder",
"diverse behaviour"
] | https://openreview.net/pdf?id=Byx1VnR9K7 | https://openreview.net/forum?id=Byx1VnR9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eg0LElg4",
"H1l9q4gqAm",
"rylVt4lq0X",
"Syenv4xqR7",
"SJeeDkc227",
"SyxpLO9_2m",
"Sye9uPhBn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544730312190,
1543271569622,
1543271547784,
1543271524078,
1541345111737,
1541085269413,
1540896626326
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1412/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1412/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1412/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1412/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1412/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1412/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1412/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper considers the problem of imitating multi-modal expert demonstrations using a variational auto-encoder to embed demonstrated trajectories into a structured latent space. The problem is important, and the paper is well written. The model is shown to work well on toy examples. However, as pointed out by the reviewers, given that multi-modal has been studied before, the approach should have been compared both in theory and in practice to existing methods and baselines (e.g., InfoGAIL). Furthermore, the technical contribution is somewhat limited as it using an existing model on a new application domain.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Incremental solution, missing baselines\"}",
"{\"title\": \"Thank you for your constructive feedback.\", \"comment\": \"We propose a new trajectory-level VAE which is different compared with previous work. The model is an alternative fully probabilistic model to capture state sequence dependencies, which is easy to train simply by gradient descent and has a promising performance on a range of problems. We agree that further experiments are needed as we mentioned in the comments for R1 and R2. All the results we show for the rolling window case generate actions on-the-fly. The initial state is observed and a subsequent L actions are generated, after which the new observed state is fed into the model and so on. We will do more ablation studies to compare this with feeding the observed states directly into the policy decoder during test, and clarify the experiment set up in section 3.3.\"}",
"{\"title\": \"Thank you for your constructive feedback.\", \"comment\": \"The main contribution of the paper is to introduce a consistent trajectory-level VAE which does not need simulation during training and serves as an alternative for capturing state sequence structure. We will strengthen our paper by experiments on the real MineCraft environment, quantitative comparison with SoTA algorithms and ablation studies, as well as the extension to bootstrapping reinforcement learning.\"}",
"{\"title\": \"Thank you for your constructive feedback.\", \"comment\": \"Our model differs from Wang et al in the sense that our VAE is on the trajectory level, which enables it to better identify the latent variable that differentiates different behaviors from the whole trajectory. Co-Reyes et al also uses a trajectory-level VAE, but our work differs from theirs in that our model is fully probabilistic and consistent. Therefore no extra penalty term is needed as in Co-Reyes et al to force the state decoder to be consistent with the action decoder.\\n\\nThank you for the suggestions for the ablation studies. We will conduct a comprehensive analysis on the impact of the state decoder/policy decoder. \\n\\nWe will fix the typos, add the references and clarify experiment setup in a revision.\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents an approach to multi-modal imitation learning by using a variational auto-encoder to embed demonstrated trajectories into a structured latent space that captures the multi-modal structure. This is done through a stochastic neural network with a bi-directional LSTM and mean pooling architecture that predicts the mean and log-variance of the latent state. This is followed by a state and action/policy decoder (both LSTMs) that recursively generate trajectories from latent space samples. The entire model is trained by optimising the ELBO on a set of pre-specified expert demonstrations. At test time, samples are generated from the latent space and recursively decoded to generate state and action trajectories. The method is tested on three low-dimensional continuous control tasks and is able to learn structured latent spaces capturing the modes in the training data as well as generating good trajectory reconstructions.\\n\\nLearning from multi-modal demonstration data is an important sub-area in imitation learning. As the paper pointed out, there has been a lot of recent work in this area. A lot of the ideas in this paper are similar to those proposed in prior work -- the network for embedding the trajectory is similar to the ones from Wang et al & Co-Reyes et al with the major difference being in the structure of the action decoder (and what inputs to encoder). Also, prior work has dealt with problems that are high-dimensional (Wang et al) and has shown results when operating directly on visual data (InfoGAIL). Comparatively, the results in this paper are on toy problems. \\n\\nAs there is no direct comparison to prior work provided in the paper, it is hard to quantify how much better the proposed approach is in comparison to prior work. For example, the \\\"2D Circle Example\\\" was taken from the InfoGAIL paper. It would have been good to use that as a baseline example to compare those two methods and highlight the advantages of the proposed approach -- did it require less data? fewer environment interactions? etc. \\n\\nThe results on the Zombie Attack Scenario seem poor. Specifically, in the avoid scenario, the approach seems to fail almost half the time. It would be good if the authors spend more time on this -- again, a comparison to prior work would establish some baselines and give us a good idea of the expected performance on this scenario. The videos show a single representative example for the \\\"Attack\\\" and \\\"Avoid\\\" scenarios. More examples including failures need to be included so that the distribution of results can be captured. \\n\\nThere is little in terms of generalisation or ablation studies in the paper. For example, in the Zombie Attack Scenario one could generate data with different zombie behaviours and measure performance on held out behaviours. Similarly, as an ablation, the authors could look at directly predicting actions instead of states & actions (states could be generated through a pre-trained dynamics model).\\n\\nFigure 6. is hard to parse and could be explained better. 
No details are provided on the network architecture (number/size of the LSTM/fully connected layers), number of demonstrations used, training algorithm, hyper-parameters etc.\", \"few_typos_in_the_paper\": \"Page 6 - between the animation links 'avoiding' 'region'\\n Fig 7 caption - the zombie but are not in attacking range -> but the zombies are not in the attacking range,\", \"relevant_citations_that_can_be_added\": \"1) Hausman, K., Chebotar, Y., Schaal, S., Sukhatme, G., & Lim, J. J. (2017). Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 1235-1245).\\n2) Tamar, A., Rohanimanesh, K., Chow, Y., Vigorito, C., Goodrich, B., Kahane, M., & Pridmore, D. (2018). Imitation Learning from Visual Data with Multiple Intentions.\\n\\nOverall, I find the paper to be incremental and lacking good experimental results and comparisons. The strengths of the paper are not clear and need to be explained and evaluated well. Substantial work is needed to significantly improve the paper before it can be accepted.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
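A compact PyTorch sketch of the encoder architecture summarized in this review (bi-directional LSTM, mean pooling, Gaussian latent with reparameterization). Layer sizes and names are illustrative assumptions, not the paper's exact configuration:

    import torch
    import torch.nn as nn

    class TrajectoryEncoder(nn.Module):
        def __init__(self, in_dim, hidden, z_dim):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 2 * z_dim)

        def forward(self, traj):  # traj: (batch, T, state_dim + action_dim)
            h, _ = self.lstm(traj)
            mu, logvar = self.head(h.mean(dim=1)).chunk(2, dim=-1)  # mean pooling over time
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization trick
            return z, mu, logvar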
"{\"title\": \"Good but generic model, contribution limited\", \"review\": \"This paper proposes a VAE for modelling state-action sequences using a single latent variable rather than one per timestep. The authors empirically demonstrate that this model works on toy 2D examples and a simplified 2D Minecraft-like environment. Although I am unaware of other works that use a VAE in this setting, the model is still quite generic, thus requires further application to justify its significance. This paper is clear and well written. \\n\\nThe current contribution of this paper is limited, however it could be improved in a number of ways. The main component lacking from this paper is a meaningful comparison to other related works. Its unclear what the advantage of this model is over other models and so a thorough comparison to other sequence models would really help this paper. As mentioned in the conclusion, another direction for this work would be to bootstrap reinforcement learning. If this bootrapping was demonstrated then it would make this paper\\u2019s contribution stronger. Finally, another important direction for improvement for this paper would be to demonstrate its usefulness on more complex environments, instead of only 2D examples.\", \"pros\": [\"clear and well written\", \"model works on toy examples\"], \"cons\": [\"lack of baseline comparisons\", \"lack of contributions\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper proposes an imitation learning model able to generate trajectories based on some expert trajectories. The assumption is that observed trajectories contain multi-modal (i.e. style) information that is not naturally captured by existing methods. The authors proposed a VAE based architecture that uses a prior distribution P(z) to simultaneously generate (state-action) pairs based on a LSTM decoder (actually, one LSTM for the states and one interleaved LSTM for the actions). This decoder is learned using a classical VAE auto-encoding loss, observed trajectories being encoder through a bi-LSTM. Experiments are made on three toy examples: a simple 2d Navigation case exhibiting 3 different 'styles', a 2D circle example with also 3 different styles, and a zombie attack scenario with two different styles. The results show that the model is able to capture different clusters of trajectories.\\n\\nFirst of all, the paper does not propose a new model, but an instantiation of an existing model to a particular case. The main difference with SoTA is that the authors propose to both decode states and actions without using a simulator. The contribution of the paper is thus quite light. Moreover, it is unclear how the model can be used to get a policy corresponding to a particular mode. Can we use the learned decoders to generate actions on-the-fly in a real/simulated environment? Right now (section 3.3), actions are generated on generated states, but not on observed ones. The paper has to clarify this point since just generating trajectories seems to be a little bit useless. In general Section 3.3 lacks of details (e.g the rolling window is also unclear). Also, the model could be described a little bit more in term of architecture, particularly on the critical point about how the two decoding LSTMs are interacting. \\n\\nFrom the experimental point of view, the paper attacks very simple cases, without any comparison with state-of-the-art, and without almost any quantitative results. If Section 4.1 and 4.2 are useful to explore the ability of the model on simple cases, I would recommend the authors to merge these two sections in one smaller one, and then to focus on more realistic experiments. For example, it seems to me that the experimental setting proposed for example in [Li et al.] on driving styles could be interesting, and would allow a comparison with existing methods. Also the model proposed in [Co-Reyes et al.] could be an interesting comparison (at least, keeping the principle of this paper, without the hierarchical structure), particularly because this model is based on the use of a simulator while the proposed one is not. If a performance close to this baseline can be obtained with your model, it would be interesting for the community.\\n\\nRight now, the experimental part and the too small contribution of the paper are not enough for acceptance. 
I would suggest the authors to:\\n* better describe their contribution i.e model architecture and how the model can be used to obtain a real policy\\n* use 'stronger' use cases for the experiments, and particularly existing use cases\\n* provide a deep quantitative and qualitative comparison with SoTA\", \"pro\": [\"simple method, no need of a simulator\"], \"cons\": [\"not clear how to move from trajectory generation to a real policy\", \"small contribution\", \"too light experimental study without comparison with baselines and state of the art\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HklJV3A9Ym | Approximation capability of neural networks on sets of probability measures and tree-structured data | [
"Tomáš Pevný",
"Vojtěch Kovařík"
] | This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures.
By doing so, the work parallels more than decade-old results on the mean-map embedding of probability measures in reproducing kernel Hilbert spaces.
The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions.
The result is then extended to Cartesian products, yielding a universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer. This has important practical implications, as it enables the automatic creation of neural network architectures for processing structured data (AutoML paradigms), as demonstrated by an accompanying library for the JSON format. | [
"multi-instance learning",
"hierarchical models",
"universal approximation theorem"
] | https://openreview.net/pdf?id=HklJV3A9Ym | https://openreview.net/forum?id=HklJV3A9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJlyUaeNlN",
"B1lSnuSWAm",
"BJeq4UXeR7",
"BJekexQOTQ",
"HkeHul4P6Q",
"HkxRkOgRnX",
"SkxnuIAp3m",
"BJgiQFiNhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544977734710,
1542703276946,
1542628914005,
1542103015105,
1542041708897,
1541437414404,
1541428851927,
1540827426948
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1411/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1411/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1411/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1411/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1411/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1411/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1411/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1411/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Several reviewers thought the results were not surprising in light of existing universality results, and thought the results were of limited relevance, given that the formalization is not quite in line with real-world networks for MIL. The authors draw out some further justifications in the rebuttal. These should be reintegrated. I agree with the general criticisms regarding relevance to ICLR. Ultimately, this work may belong in a journal.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Rewrite needed to address importance of result\"}",
"{\"title\": \"Key properties should have proofs even if they aren't surprising.\", \"comment\": \"We do not dispute the novelty of the proof, yet we believe that as the number of applications of AI grows, it becomes important to prove even expected results, as the lack of a proof can help us spot the unsound constructions quicker. The proof itself is important for the field of multi-instance learning, since it has been shown in [1] that the MIL NN architecture it addresses is a considerable improvement over the prior art on a wide range (20) of problems.\\n\\n[1] Pevn\\u00fd, Tom\\u00e1\\u0161, and Petr Somol. \\\"Using neural network formalism to solve multiple-instance problems.\\\" International Symposium on Neural Networks. Springer, Cham, 2017.\"}",
"{\"title\": \"importance\", \"comment\": \"Without disagreeing with the arguments regarding novelty and bag sizes, we would like to add that for the purposes of MIL NN, being able to work with general probability measures is more general than being able to work with functions in L^p(mu) as in [1], since mu has to be fixed and this only gives measures which are absolutely continuous w.r.t. mu. We also hope that for application to MIL NN, our result should be more accessible than [1] --- while our Theorem 5 gives the approximation property for MIL NN directly, some additional effort is required before being able to apply [1] to specific scenarios (the amount of said effort being quite dependent on the readers background).\\n\\n\\n[1] Rossi, Fabrice, and Brieuc Conan-Guez. \\\"Functional multi-layer perceptron: a non-linear tool for functional data analysis.\\\" Neural networks 18.1 (2005): 45-60.\"}",
"{\"title\": \"Relevance to learning representation\", \"comment\": \"The truth is that this work has been inspired by difficulty to use neural networks on security-related problems. As has been written in the introduction, most methods (multi-layer perceptron, convolutional neural networks) assumes samples to have a fixed euclidean dimension, or (recurrent neural networks) being sequences of vectors of a fixed dimension.\\n\\nBut in many domains where you ingesting data using APIs, they comes typically in form of JSON documents (see for example https://www.threatcrowd.org/searchApi/v2/ip/report/?ip=188.40.75.132). This type of data can be elegantly processed by the proposed framework (and the accompanying library). Therefore we believe that it is relevant to ICLR.\"}",
"{\"title\": \"Justifying the usefulness\", \"comment\": \"Dear Reviewers,\\n\\nwe admit that the results aren't \\\"surprising\\\". But taking into account the recent paper [1], we believe the results are important. Ref. [1], published last year at NIPS, studies the same approach as described in our paper (previously independently proposed in [2, 3]), but justifies the construction only for a limited case of probability distributions over finite sets. Our paper fills this gap by extending the justification to probability distributions with infinite support.\\n\\nThe construction seems to be versatile, as it has been recently used in many cited papers, for example in [4] (cited 37 times) it is used within a reasoning module, in [5] (cited 155 times) it is used to learn messages in message passing algorithms for graphs, and in [6] (cited 273 times) it is used for 3D scene recognition.\\n\\nTaking the above into account, we think that the proof has its place.\\n\\n[1] Zaheer, Manzil, et al. \\\"Deep sets.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n[2] Edwards, Harrison, and Amos Storkey. \\\"Towards a neural statistician.\\\" arXiv preprint arXiv:1606.02185 (2016).\\n\\n[3] Pevny, Tomas, and Petr Somol. \\\"Using Neural Network Formalism to Solve Multiple-Instance Problems.\\\" arXiv preprint arXiv:1609.07257 (2016).\\n\\n[4] Santoro, Adam, et al. \\\"A simple neural network module for relational reasoning.\\\" Advances in neural information processing systems. 2017\\n\\n[5] Lin, Guosheng, et al. \\\"Deeply learning the messages in message passing inference.\\\" Advances in Neural Information Processing Systems. 2015.\\n\\n[6] (Qi, Charles R., et al. \\\"Pointnet: Deep learning on point sets for 3d classification and segmentation.\\\" Proc. Computer Vision and Pattern Recognition (CVPR), 2017.\"}",
"{\"title\": \"Useful result on universality. Probably not extremely relevant to ICLR\", \"review\": \"The paper investigates the approximation properties of a family of neural networks designed to address multi-instance learning (MIL) problems. The authors show that results well-known for standard one layer architectures extend to the MIL models considered. The authors focus on tree-structured domains showing that their analysis applies to these relevant settings.\\n\\nThe paper is well written and easy to follow. In particular the theoretical analysis is clear and pleasant to read. \\n\\nThe main concern is related to the relevance of the result to ICLR. As the authors themselves state, the result is not surprising given the standard universality result of one-layer neural networks (and indeed Thm. 2 heavily relies on this fact to prove the universality of MIL architectures). In this sense the current work might be more suited to a journal venue.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting but...\", \"review\": \"This paper generalizes the universal approximation theorem (usually stated for real functions on some Euclidean space) to real functions on the space of measures (at least a compact set of proba. measures).\\n\\nThis result might be interesting but not really surprising and the paper does not put any new theoretical ideas or proof techniques. The proof is actually almost identical than in the original paper of Hornik, Stinchcombe and White (89) [and not the 91 paper of Hornik as indicated in the paper], the only difference being a trick on the density of f\\\\circ h instead of just considering cos() function.\\n\\nAll in all, the contributions is interesting but really incremental\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper proposes a quite straightforward extension of standard results about universal approximation by neural networks on complex domains.\", \"review\": \"The authors study in this paper the approximation capabilities of neural networks for real valued functions on probability measure spaces (and on tree structured domains).\\n\\nThe first step of the paper consists in extending standard NN results to probability measure spaces, that is rather than having finite dimensional vectors as inputs, the NN considered here have probability measures as inputs. The extension to this case is straightforward and closely related to older extension on infinite dimensional spaces (see for instance the seminal paper of Stinchcombe https://doi.org/10.1016/S0893-6080(98)00108-7 and e.g. http://dx.doi.org/10.1016/j.neunet.2004.07.001 for an application to NN with functional inputs). Nothing quite new here.\\n\\nIn addition, and exactly as in the case of functional inputs, the real world neural networks do not implement what is covered by the theorem but only an approximation of it. This is acknowledged by the authors at the end of Section 2 but in a way that is close to hand waving. Indeed while the probability distribution point is valuable and gives interesting tools in the MIL context, the truth is that we have no reason to assume the bag sizes will grow to infinite or even will be under our control. In fact there are many situations were the bag sizes are part of the data (for instance when a text is embedded in a vector space word by word and then represented as a bag of vectors). Thus proving some form of universal approximation in the multiple instance learning context would need to take this fact into account, something that is not done at all here. \\n\\nTherefore I believe the contribution of this paper to be somewhat limited.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BJe1E2R5KX | Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees | [
"Yuping Luo",
"Huazhe Xu",
"Yuanzhi Li",
"Yuandong Tian",
"Trevor Darrell",
"Tengyu Ma"
] | Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplifications gives a variant of model-based RL algorithms, Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves state-of-the-art performance when only 1M or fewer samples are permitted on a range of continuous control benchmark tasks. | [
"model-based reinforcement learning",
"sample efficiency",
"deep reinforcement learning"
] | https://openreview.net/pdf?id=BJe1E2R5KX | https://openreview.net/forum?id=BJe1E2R5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1eCYuINlE",
"rJxCLWG_Rm",
"Hkx29sJxC7",
"rygz4okeAQ",
"H1g1w5th6X",
"r1xWIcFhTm",
"rJgrRkSDTX",
"rJlACiJXa7",
"H1e69okXam",
"r1gl8iJX6X",
"HJeBddsh37",
"SyxTcTgq3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545001093944,
1543147861953,
1542613908223,
1542613801884,
1542392406663,
1542392392825,
1542045645304,
1541762006261,
1541761941215,
1541761864330,
1541351532635,
1541176725158
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1410/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1410/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1410/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1410/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes model-based reinforcement learning algorithms that have theoretical guarantees. These methods are shown to good results on Mujuco benchmark tasks. All of the reviewers have given a reasonable score to the paper, and the paper can be accepted.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for answering my questions. I have adjusted my score accordingly.\"}",
"{\"title\": \"Response 2: Sample Complexity\", \"comment\": \"We\\u2019ve added a paragraph below Theorem 3.1 and Appendix G, which contains a finite sample complexity results. We can obtain an approximate local maximum in $O(1/\\\\epsilon)$ iterations with sample complexity (in the number of trajectories) that is linear in the number of parameters and accuracy $\\\\epsilon$ and is logarithmic in certain smoothness parameters.\\n\\nWe note that the bound doesn\\u2019t directly apply to LQR because we require the reward is bounded (which is not true in LQR because the states can blow up.) If the reward (at a single step) is bounded --- which is a reasonable assumption for many practical applications --- then our sample complexity can be better (or at least comparable) to Abbasi-Yadkori, Szepesvari in dimension (it\\u2019s not clear how AS\\u2019s bound depends on the dimension --- it\\u2019s likely exponential in dimension.) We also note that AS applies to the adaptive setting which is stronger than the episodic setting that we work with. Finally, we\\u2019d like to mention again that our paper\\u2019s primary goal is to deal with non-linear setting without explicit uncertainty quantification. In this sense, our result is much stronger than AS because our result applies to any non-linear models with bounded rewards.\"}",
"{\"title\": \"Revision\", \"comment\": \"As promised in the responses to the reviewers, we have updated the paper with the following changes:\\n\\n--- We\\u2019ve added the citations mentioned by the reviewers, and incorporated most of the clarifications in the responses in the paper. (E.g., in Appendix F.4, we discussed more on the most important hyperparameters.)\\n\\n--- We\\u2019ve added a paragraph below Theorem 3.1 and Appendix G, which contains a finite sample complexity results. We can obtain an approximate local maximum in $O(1/\\\\epsilon)$ iterations with sample complexity (in the number of trajectories) that is linear in the number of parameters and accuracy $\\\\epsilon$ and is logarithmic in certain smoothness parameters.\"}",
"{\"title\": \"Response - Part 2/2\", \"comment\": \"4) If I understood it correctly, your V-function directly depends on your model, i.e., you have V(M(s)) and you learn the model M parameters to maximize V. This means that you want to learn the model that, together with the policy, maximizes V. Am I correct? Can you comment a bit more on that? Did you try to optimize them (V and M) separately, i.e., to add a third parameter to learn (the V-function parameters)?\\n\\nYes, V-function directly depends on your model and we learn the M parameters and \\\\pi parameters to maximize V. In other words, in our current implementation, we don\\u2019t have a parameterized approximator for V, and V is computed by querying the model. It\\u2019s a fascinating idea of using a third function approximator for V and learn that as well. This is left as future work though. \\n\\n\\n5) How does you algorithm deal with environmental noise? The tasks used for the evaluation are all deterministic and I believe that this heavily simplifies the model learning. It would be interesting an evaluation on a simple problem (for example the swing-up pendulum) in the presence of noise on the observations and/or the transition function.\\n\\nThe MuJoCo locomotion environments are deterministic yet very challenging. The dynamics of such environments are very complex (e.g. the humanoid dynamics) thus this demonstrates the effectiveness of our method. Many of reinforcement learning algorithms are using these locomotion environments as testbeds. Our meta-algorithms also applies to stochastic environments. We will try to apply the algorithm to a stochastic environment empirically, and hopefully, we can add this to the revision soon.\\n\\n6) I appreciate that you provide many details about the implementation in the appendix. Can you comment a bit more? Which are the most important hyperparameters? The number of policy optimization n_policy or of model optimization n_model? You mention that you observed policy overfitting at the first iterations. Did you also experience model overfitting? Did normalizing the state help a lot? \\n\\nThe most important hyperparameters we found are n_policy and the coefficient in front of the entropy regularizer. It seems that once n_model is large enough we don\\u2019t see any significant changes. We did have a held-out set for model prediction (with the same distribution as the training set) and found out the model doesn\\u2019t overfit much. Normalizing the state helped a lot since the raw entries in the state have different magnitudes --- if we don\\u2019t normalize them, the loss will be dominated by the loss of some large entries.\"}",
"{\"title\": \"Response - Part 1/2\", \"comment\": \"We thank the reviewer for the insightful and positive comments. We address the questions below:\\n\\n1) \\u201cIn footnote 3 you state that \\\"[We note that such an assumption, though restricted, may not be very far from reality: optimistically speaking], we only need to approximate the dynamical model accurately on the trajectories of the optimal policy\\\". Why only of the optimal policy? Don't you also need an accurate dynamic model for the current policy to perform a good policy improvement step?\\u201d\\n\\nIn the most optimistic scenario, one only needs a not-so-accurate model around the trajectories of a non-optimal policy to make *some reasonable* progress. We note that it\\u2019s likely preferable to make decent progress with a non-perfect model compared to making optimal progress with a perfect model, because learning perfect models would require much more samples. \\n\\n2) A major challenge in RL is that the state distribution \\\\rho^\\\\pi changes with \\\\pi and it is usually very hard to estimate. Therefore, many algorithms assume it does not change if the policy is subject to small changes (examples are PPO and TRPO). In Eq 4.3 it seems that you also do something similar, fixing \\\\rho^\\\\pi and constraining the KL of \\\\pi (and not of the joint distribution p(s,a)). Am I correct? Can you elaborate it a bit more, building a connection with other RL methods?\\n\\nYou are correct that we constrain the changes of \\\\rho^\\\\pi. We compare with PPO and TRPO from this perspective in the remark 4.5. We summarize the key point here (please see Remark 4.5 for a longer and more technical discussion): the main advantage of MB approach to TRPO is that our constraint on the changes of \\\\rho^\\\\pi can be more relaxed than that in TRPO. Or in other words, the sensitivity of the reward approximation to the change of \\\\rho^\\\\pi is smaller in our algorithm than in TRPO. This is mostly because, in MB algorithms, the approximation error of the total reward by the imaginary total reward decreases as the model error decreases (even with a fixed change of \\\\rho^\\\\pi), whereas, in model-free algorithms, the approximation error of the total reward by the local linear approximation only depends on the change of the \\\\rho^\\\\pi. Intuitively, we build a better local approximation of the reward using the models than the linear approximation in TRPO. \\n\\n3) In Eq. 6.1 and 6.2 you minimize the H-loss, defined as the prediction error of your model. Recently, Pathak et al. used the same loss function in many papers (such as Curiosity-driven Exploration by Self-supervised Prediction) and your Eq. 6.2 looks like theirs. The practical implementation of your algorithm looks very similar to theirs too. Can you comment on that?\\n\\nThis H-loss is not a contribution of ours (e.g., as mentioned in our paper, it has been used in Nagabandi et al\\u2019\\u200e2017 for evaluation.) Our implementation differs from \\u201cCuriosity-driven Exploration by Self-supervised Prediction\\u201d in the sense that we consider the prediction after multiple steps while theirs only considers the prediction of next state (thus one-step prediction). The Zero-shot visual imitation learning paper by Pathak et al uses an auto-regressive recurrent model to predict a multi-step loss on a trajectory, which is closely related to ours. 
However, theirs differ from ours in the sense that they do not use the predicted output x_{t+1} as the input for the prediction of x_{t+2}, and so on and so forth. Thanks for pointing out the reference! We include this work in our references and discuss more in our next revision.\"}",
"{\"title\": \"Interesting theoretical framework for model-based RL and convincing results. It can be improved by adding some clarifications and connections with other RL methods.\", \"review\": \"The paper proposed a framework to design model-based RL algorithms. The framework is based on OFU and within this framework the authors develop an algorithm (a variant of SLBO) achieving SOTA performance on MuJoCo tasks.\\n\\nThe paper is very well written and the topic is important for the RL community. The authors do a good job at covering related works, the bounds are very interesting and the results quite convincing. \\n\\nQuestions/comments to the authors:\\n1) In footnote 3 you state that \\\"[...] we only need to approximate the dynamical model accurately on the trajectories of the optimal policy\\\". Why only of the optimal policy? Don't you also need an accurate dynamic model for the current policy to perform a good policy improvement step? \\n2) A major challenge in RL is that the state distribution \\\\rho^\\\\pi changes with \\\\pi and it is usually very hard to estimate. Therefore, many algorithms assume it does not change if the policy is subject to small changes (examples are PPO and TRPO). In Eq 4.3 it seems that you also do something similar, fixing \\\\rho^\\\\pi and constraining the KL of \\\\pi (and not of the joint distribution p(s,a)). Am I correct? Can you elaborate it a bit more, building a connection with other RL methods?\\n3) In Eq. 6.1 and 6.2 you minimize the H-loss, defined as the prediction error of your model. Recently, Pathak et al. used the same loss function in many papers (such as Curiosity-driven Exploration by Self-supervised Prediction) and your Eq. 6.2 looks like theirs. The practical implementation of your algorithm looks very similar to theirs too. Can you comment on that? \\n4) If I understood it correctly, your V-function directly depends on your model, i.e., you have V(M(s)) and you learn the model M parameters to maximize V. This means that you want to learn the model that, together with the policy, maximizes V. Am I correct? Can you comment a bit more on that? Did you try to optimize them (V and M) separately, i.e., to add a third parameter to learn (the V-function parameters)?\\n5) How does you algorithm deal with environmental noise? The tasks used for the evaluation are all deterministic and I believe that this heavily simplifies the model learning. It would be interesting an evaluation on a simple problem (for example the swing-up pendulum) in the presence of noise on the observations and/or the transition function.\\n6) I appreciate that you provide many details about the implementation in the appendix. Can you comment a bit more? Which are the most important hyperparameters? The number of policy optimization n_policy or of model optimization n_model? You mention that you observed policy overfitting at the first iterations. Did you also experience model overfitting? Did normalizing the state help a lot?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response - Part 2/2\", \"comment\": \"--- \\u201cPlease explain in more detail what the effects are from relaxing the assumptions for the algorithm? I assume none of the monotonic improvement results can be transferred to the algorithm?\\u201d \\n\\nWe will still have monotone improvements if the algorithm uses any of the discrepancy bounds in Section 4. The monotonicity won\\u2019t hold in the worst case, if the model is not optimized in an optimistic fashion. The worst-case scenario would be that the lower bound is quite loose for some dynamical models, and accurate for others. In this case, we would really have to be optimistic about the lower bound and choose the best models and policy. However, such situations are unlikely to occur in MuJoCo tasks since the looseness of the lower bound seems to be comparable in a neighborhood of the current model. This may be the reason why we can simplify the algorithm. \\n\\n\\n--- \\u201cCould you elaborate why the algorithm was not implemented as suggested by Section 4? Is the problem that the algorithm did not perform well or that the discrepancy measure is hard to compute?\\u201d\\n\\nWe implemented the discrepancy bound in Section 4.1 as reported in the experiments. The discrepancy measure of Section 4.2 involves the value function which requires another neural net approximators (and thus the resulting algorithm would update the model, the value, and the policy iteratively.) We have implemented this algorithm and it works fine but not as well as the reported simpler version. This may be because either the MuJoCo environments satisfy the assumption in Section 4.1 well, so that L2 norm model loss already performs great, or we have not pinned down the best ways to combine the updates of the model, value, and policy iteratively. \\n\\n--- \\u201cFor the presented algorithm, the discrepancy does not depend on the policy any more. I did not understand why the iterative optimization should be useful in this case.\\u201d \\n\\nAs briefly mentioned in one of the previous paragraphs, the key benefit of iterative optimization is from the stochasticity in the model when we optimize the imaginary value V^{\\\\pi, M} over the policy. In other words, if we were to optimize M until convergence and then optimize pi, we may optimize the lower bound better, but the algorithm doesn\\u2019t use samples in a fully stochastic way. The stochasticity dramatically reduces the overfitting (of the policy to the estimated dynamical model) in a way similar to that SGD regularizes ordinary supervised training. To some extent, since the policy optimization involves stochastic iterates from the updates of the model learning loss, the learned policy has to be robust to a family of stochastic models instead of a single one. \\n\\n--- \\u201cThe only difference between Algo 3 and Algo 2 seems to be the additional for loop. \\u2026.. Did you try Algo 3 with the same amount of Adam updates as Algo 2 (could be that I missed that). \\u201d \\u201cThe difference to a standard model-based RL algorithm is minor and the many advantages of the nice theoretical framework are lost\\u201d\\n\\nIndeed we did try Algo 3 with the same amount of Adam updates as in Algo 2, and it performs worse than the current setting. In fact, we used the optimal number of Adam updates in Algo 3. Concretely, we test the performance of Algo 3 with different hyper-parameters, including the number of Adam updates. 
Our experiments show that 200 (among 100, 200, 400, 800) is the optimal number of Adam updates in Ant and we use it for all other environments. Note that when Algo 3 uses 800 Adam updates (per outer iteration), it has the same amount of updates (per outer iteration) as in Algo 2. \\n\\nTherefore, the differences of ours from the standard MBRL algorithms, though look simple, are empirically important for the significant improvements of the performance. As we try to argue above, these differences were indeed inspired by the theory.\"}",
"{\"title\": \"Response - Part 1/2\", \"comment\": \"We thank the reviewer for the insightful review and positive comments on the theoretical framework. We address the reviewer\\u2019s comments/questions below:\\n\\n--- It seems that the reviewer thinks our empirical implementation is different from what the theory suggests: \\u201c the resulting algorithm is actually quite far away from the assumptions made for deriving the bounds\\u201d. \\n\\nWe would like first to mention/clarify that our proposed algorithm (Algorithm 1) is a meta-algorithm/framework for model-based RL. Our main goal is to develop some framework to mathematically reason about non-linear MB RL (such as how to design the model loss function.) The meta-algorithm is designed to have provable monotone convergence, even for the worst-case environments. However, in the empirical implementation, since MuJoCo tasks have nice properties (e.g., the value functions tend to be Lipschitz in states), many components of the meta-algorithm are not necessary, and thus we only need a simplification of the meta-algorithm with a simple discrepancy bound in Section 4.1. \\n\\nWe tried hard to find the simplest instantiation of our meta-algorithm for MuJoCo tasks, instead of using an artificially complicated algorithm. That doesn\\u2019t necessarily mean that other instantiations wouldn\\u2019t work. (In fact, as mentioned below, some others are promising, though not entirely successful yet. Our current implementation also mostly just serves as a proof-of-concept demonstration that some instantiations of the framework are possible and helpful.) \\nThe theoretical results in MBRL are very sparse. To some extent, we hope that our work can spark future works that either instantiate our meta-algorithm with strong and clever modifications or that improve our meta-algorithm with stronger guarantees. \\n\\nMoreover, we would like to argue that the two new empirical ingredients pointed out by the reviewer are both inspired by the theory, in our opinion and our research process. First, the technique of optimizing the policy and model iteratively in an inner policy improvement loop may sound unrelated to the theory, but actually, it was very much inspired by it: our theory suggests that we should jointly optimize the model and the policy to maximize the lower bound for the real reward by SGD, and this would have perfectly justified the iterative optimization of the policy and the model in an inner loop. Later in the experiments, we found that stopping the gradient from one occurrence of the model parameter would not hurt the performance and would speed up the code. Doing so would a priori implies that alternating updates of the model and the policy in an inner loop are less useful, but in fact, the stochasticity introduced by the SGD on model loss is still powerful to reduce overfitting, in a way similar to that SGD regularizes the ordinary supervised training. (Please see paragraph before section 6.2, or the response below to the last two questions, for slightly more detailed discussions.) Therefore, we view this technique as inspired crucially by the theory, though disguised by the simplification of our algorithm. \\n\\nAs the reviewer agreed, the use of L2 norm (instead of MSE) is inspired and justified by the theory and it also contributes significantly to the empirical improvements. \\n\\n--- \\u201cI was confused by section 4.2. 
Could you please explain why the transformation is needed and how it is used?\\u201d\\n\\nThe transformation is only to demonstrate that the norm-based model loss is not invariant to a potential hidden transformation of the state space, whereas the discrepancy bound proposed in Section 4.2 is. This is a feature of the algorithm: if somehow the algorithm is presented with states in different representation space, it will still work the same, whereas the norm-based model loss will behave differently. If one is not concerned with the representation of the states, this section indeed only provides the formal error bound of the discrepancy bound in equation 4.6.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the insightful review and positive comments on the theoretical framework. We address the reviewer\\u2019s comments/questions below:\\n\\n--- \\u201cThe framework seems to be quite general but does not include any specific example, like what non-linear dynamical model in detail could be included and will this framework cover the classical MDP setting\\u201d, \\u201cPrevious model-based work with simpler model already can have such strong guarantees, such as linear dynamic (Y. Abbasi-Yadkori and Cs. Szepesvari (2011)), MDP (Agrawal and Jia (2017)). What kind of new insights will this framework give when the model reduces to simpler one (linear model)?\\u201d\\n\\nIndeed, our framework can capture all parameterized models (including linear model or even tabular MDP); however, our focus is on non-linear models. The distinction to the previous papers is that we are the first framework that can show the monotone improvement and handle the uncertainty quantification (via a discrepancy bound) *for non-linear models*. As far as we understand, the existing papers\\u2019 techniques are difficult to extend to non-linear models. Our approach, restricted to linear models or classical MDP, would give some sensible results but wouldn\\u2019t be as strong as the existing ones, and would probably not provide much more insights. However, the strength of the paper is that it works for non-linear models and the key insight is that we don\\u2019t need explicit uncertainty quantification of the parameters in the traditional sense (instead, a discrepancy bound would suffice.)\\n\\n--- \\u201cin RL, people may be more care about the regret or sample complexity. \\u201d\\n\\nWe can actually prove a polynomial (in dimension) sample complexity bound for Algorithm 1 with very standard concentration inequality. We can prove uniform convergence results with standard machinery for the estimation of the discrepancy bounds via samples when the bound satisfies R3. Then we can show that the algorithm converges to an approximate local maximum with an error that depends on the estimation error of the discrepancy bound. Such a polynomial complexity bound will not be comparable to Y. Abbasi-Yadkori and Cs. Szepesvari (2011) when restricted to linear models, but they can work generically for non-linear models (under the assumption of Theorem 3.1.) This result is not written in the paper because we thought it\\u2019s relatively standard, but we would be more than happy to add it in the revision very soon. \\n\\n--- \\u201c1. In (3.2), what norm is considered here?\\u201d \\n\\nEquation (3.2) is a demonstration of a potential type of results we could hope for. In Section 4, we show that if the value function is L-Lipschitz with some norm, then (3.2) would be true with the same norm. In the experiments, we use the L2 norm. \\n\\n--- \\u201c2. In page 4, the authors mentioned their algorithm can be viewed as an extension of the optimism-in-face-of-uncertainty principle to non-linear parameterized setting. This is a little bit confused. How this algorithm can be viewed as OFU principle? How does it recover the result in linear setting (Y. Abbasi-Yadkori and Cs. Szepesvari (2011))?\\u201d\\n\\nThe relationship to OFU is in the very conceptual sense that we optimize the model and the policy together in an optimistic fashion as in OFU. 
However, the way to quantify the uncertainty is through the discrepancy bound but not the confidence interval as in typical OFU approaches. (But many OFU based papers, such as Jaksch et al\\u201910, implicitly uses some sort of discrepancy bound that is similar to ours in nature in their proof techniques.) \\n\\n--- \\u201c- Is there any convergence rate guarantee for this stochastic optimization? \\u201d \\u201cAnd also neural network is used for deep RL. So there is also no guarantee for the actual algorithm which is used?\\u201d\\n\\nThe concrete implementation of the algorithm doesn\\u2019t have a convergence rate guarantee yet. We don\\u2019t expect it to work for all environments, but under some assumptions of the environments, we may be able to show convergence. This is left as future work. \\n\\nWe also thank the reviewer for the suggestions to add sub-sections in Section 1 and will revise in the next revision. We will also cite the two relevant papers mentioned by the reviewers in the revision.\"}",
"{\"title\": \"Very nice theoretical framework for model-based RL and also an interesting algorithm with promising results is presented. However, there is a large mismatch between the assumptions of the theory and the assumptions made for the algorithm such that it is unclear how much theory can still be used to characterize this algorithm.\", \"review\": \"The paper presents monotonic improvement bounds for model-based reinforcement learning algorithms. Based on these bounds, a new model-based RL algorithm is presented that performs well on standard benchmarks for deep RL.\\n\\nThe paper is well written and the bounds are very interesting. The algorithm is also interesting and seems to perform well. However, there is a slight disappointment after reading the paper because the resulting algorithm is actually quite far away from the assumptions made for deriving the bounds. The 2 innovations of the algorithm are:\\n- Model and policy are optimized iteratively in an inner policy improvement loop. As far as I see it, this is independent of the presented theory. \\n- The L2 norm is used to learn the model instead of the squared L2 norm. This is inspired by the theory.\", \"more_comments_below\": [\"I was confused by section 4.2. Could you please explain why the transformation is needed and how it is used? As I understand, this is not used at all in the algorithm any more? So what is the advantage of this derivation in comparison to Eq 4.6?\", \"Please explain in more detail what the effects are from relaxing the assumptions for the algorithm? I assume none of the monotonic improvement results can be transferred to the algorithm?\", \"Could you elaborate why the algorithm was not implemented as suggested by Section 4? Is the problem that the algorithm did not perform well or that the discrepency measure is hard to compute?\", \"For the presented algorithm, the discrepency does not depend on the policy any more. I did not understand why the iterative optimization should be useful in this case.\", \"The theory suggests that we have to do a combined optimization of the lower bound. However, effectively, the algorithm optimizes the policy over V and the policy over the L2 multi-step prediction loss. The difference to a standard model-based RL algorithm is minor and the many advantages of the nice theoretical framework are lost.\", \"The only difference between Algo 3 and Algo 2 seems to be the additional for loop. As I said, its not clear to me why this should be useful as the optimization problems are independent of each other (except for the trajectories, but the model does not depend on the policy). Did you try Algo 3 with the same amount of Adam updates as Algo 2 (could be that I missed that).\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A novel framework for deep RL but needs more specific examples\", \"review\": \"This paper proposed a new class of meta-algorithm for reinforcement learning and proved the monotone improvement for a local maximum of the expected reward, which could be used in deep RL setting. The framework seems to be quite general but does not include any specific example, like what non-linear dynamical model in detail could be included and will this framework cover the classical MDP setting? In theory, the dynamical model needs to satisfy L-Lipschitz. So which dynamical model in reality could satisfy this assumption? It seems that the focus of this paper is theoretical side. But the only guarantee is the non-decreasing value function of the policy. In RL, people may be more care about the regret or sample complexity. Previous model-based work with simpler model already can have such strong guarantees, such as linear dynamic (Y. Abbasi-Yadkori and Cs. Szepesvari (2011)), MDP (Agrawal and Jia (2017)). What kind of new insights will this framework give when the model reduces to simpler one (linear model)?\\n\\nIn practical implementation, the authors designed a Stochastic Lower Bound Optimization. Is there any convergence rate guarantee for this stochastic optimization? And also neural network is used for deep RL. So there is also no guarantee for the actual algorithm which is used?\", \"minor\": \"1. In (3.2), what norm is considered here?\\n2. In page 4, the authors mentioned their algorithm can be viewed as an extension of the optimism-in-face-of-uncertainty principle to non-linear parameterized setting. This is a little bit confused. How this algorithm can be viewed as OFU principle? How does it recover the result in linear setting (Y. Abbasi-Yadkori and Cs. Szepesvari (2011))?\\n3. The organization could be more informative. For example, Section 1 has 13 paragraphs but without any subsection.\\n\\nY. Abbasi-Yadkori and Cs. Szepesvari, Regret Bounds for the Adaptive Control of Linear Quadratic Systems, COLT, 2011.\\nShipra Agrawal and Randy Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. NIPS, 2017\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
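The thread above repeatedly contrasts the squared L2 (MSE) model loss with the plain L2-norm loss, evaluated on multi-step predictions in which the model's own output for x_{t+1} is fed back in to predict x_{t+2}, and so on. Below is a minimal numpy sketch of such a multi-step L2 loss; the toy linear model, horizon, and variable names are assumptions made for illustration, not the authors' SLBO implementation.

```python
import numpy as np

def multi_step_l2_loss(model, states, actions, horizon):
    """Multi-step prediction loss with an (unsquared) L2 norm per step.

    `model(s, a)` predicts the next state. Predictions are chained, so the
    predicted state at step t becomes the input at step t + 1 (an open-loop
    rollout), matching the multi-step loss discussed in the thread above.
    `states` has shape (horizon + 1, d_s); `actions` has shape (horizon, d_a).
    """
    s_hat = states[0]
    loss = 0.0
    for t in range(horizon):
        s_hat = model(s_hat, actions[t])
        # L2 norm, not its square: the distinction the responses emphasize.
        loss += np.linalg.norm(s_hat - states[t + 1])
    return loss / horizon

# Toy linear dynamics as a stand-in for a neural network model.
rng = np.random.default_rng(1)
A, B = 0.9 * np.eye(3), rng.normal(size=(1, 3))
model = lambda s, a: s @ A + a @ B

states = rng.normal(size=(6, 3))
actions = rng.normal(size=(5, 1))
print(multi_step_l2_loss(model, states, actions, horizon=5))
```

One reading consistent with the responses above is that the unsquared norm is the theoretically motivated choice: when the value function is L-Lipschitz in some norm, the reward gap scales with the norm of the state error rather than its square, so the L2 (not MSE) loss matches the discrepancy bound.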
r1V0m3C5YQ | Coupled Recurrent Models for Polyphonic Music Composition | [
"John Thickstun",
"Zaid Harchaoui",
"Dean P. Foster",
"Sham M. Kakade"
] | This work describes a novel recurrent model for music composition, which accounts for the rich statistical structure of polyphonic music. There are many ways to factor the probability distribution over musical scores; we consider the merits of various approaches and propose a new factorization that decomposes a score into a collection of concurrent, coupled time series: "parts." The model we propose borrows ideas from both convolutional neural models and recurrent neural models; we argue that these ideas are natural for capturing music's pitch invariances, temporal structure, and polyphony.
We train generative models for homophonic and polyphonic composition on the KernScores dataset (Sapp, 2005), a collection of 2,300 musical scores comprising around 2.8 million notes and spanning from the Renaissance to the early 20th century. While evaluation of generative models is known to be hard (Theis et al., 2016), we present careful quantitative results using a unit-adjusted cross entropy metric that is independent of how we factor the distribution over scores. We also present qualitative results using a blind discrimination test.
| [
"music composition",
"music generation",
"polyphonic music modeling"
] | https://openreview.net/pdf?id=r1V0m3C5YQ | https://openreview.net/forum?id=r1V0m3C5YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1edwt0xg4",
"H1lqh2AHAX",
"rkgk2iABA7",
"BkldQoArAm",
"Hyl5P90rCX",
"rkxg76-an7",
"rJexNSvj3X",
"BJghCWtqh7",
"BygOalnNnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544771936283,
1543003314181,
1543003047158,
1543002912392,
1543002722370,
1541377304067,
1541268776016,
1541210579809,
1540829375797
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1409/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1409/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1409/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1409/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1409/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1409/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1409/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1409/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1409/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes novel recurrent models for polyphonic music composition and demonstrates the approach with qualitative and quantitative evaluations as well as samples. The technical parts in the original write-up were not very clear, as noted by multiple reviewers. During the review period, the presentation was improved. Unfortunately the reviewer scores are mixed, and are on the lower side, mainly because of the lack of clarity and quality of the results.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"recurrent models for polyphonic music composition, quality seems to be the issue\"}",
"{\"title\": \"re: many comments\", \"comment\": \"Thank you for your extensive comments, and in particular for drawing our attention to the Johnson paper. Our relative pitch weight-sharing is the same idea as Johnson\\u2019s tied parallel networks, and we have made sure to recognize this in the new revision of the paper.\\n\\nWe\\u2019ve made an effort to clean up many of your specific comments regarding clarity (see our new top-level post for a summary of changes to the paper) and we also hope that the new Appendix B gives a more holistic response to your requests for clarification. Specifically regarding the models in Table 5: all models in Table 5 address the single-part (homophonic) task, so there is no global model. All models in Table 3 follow the coupled (standard) run architectures described in Equations (2), (3), and (4).\", \"regarding_piano_scores\": \"we hope Appendix B provides some clarification, in particular subsection B.4. A piano score like the one in Figure 5 can be handle as two homophonic parts (bass staff and treble staff). The example given in Appendix A is a particularly complicated case where the lines split into 4-part polyphony; our point is that KernScores gives us labels to decompose these more complicated cases into homophonic parts that can be modeled using coupled architectures.\\n\\nRegarding the \\u201cunusualness\\u201d of the melodies, we point out that about half our corpus consists of Renaissance music. This may account for some of the difference in model output compared to models trained on corpora consisting entirely of music from the canonical classical era (17th-19th centuries).\\n\\nWe have added single-part and two-part scores on the demos page, with the caveat that there are no single-part scores in our dataset and very few two-part scores, so these scores are somewhat out-of-sample.\", \"regarding_data_augmentation\": \"we found in our experiments that, for the relative pitch models, pitch-shifting data augmentation doesn\\u2019t improve log-loss. Likewise, given weight-sharing for parts, shuffling the order of the parts doesn\\u2019t improve log-loss.\\n\\nFor the listening test, excerpts were chosen at random locations in the score. The fact that participants struggle to distinguish between training data and model outputs puts at least a lower bound on the quality of generated output. But we agree that listening to these excerpts without context can often make very little sense. On the other hand, we make no claim to model long-term dependencies in music, so presenting listeners with long clips doesn't elicit informative feedback. We are open to ideas about better ways to evaluate these models.\"}",
"{\"title\": \"re: writing and piano roll representation\", \"comment\": \"Thank you for your feedback. We have revised the paper to clarify the notational and definitional issues you identified.\", \"regarding_the_number_of_bits_needed_for_a_piano_roll\": \"we have included further discussion of this point in Appendix B. We draw your attention in particular to Figures 5 and 7, which clarify the need for a second bit to distinguish between the onset of a note and continuation of a note from an earlier frame.\"}",
"{\"title\": \"re: qualitative evaluation\", \"comment\": \"Seven out of our twenty study participants self-identified as musically educated. Conditioned on that group, we found the following results:\", \"clip_length\": \"10, 20, 30, 40, 50\", \"average\": \"4.9, 6.0, 6.4, 6.9, 7.0\\n\\nSo there was no significant distinction in results based on musical education. \\n\\nOne thing to keep in mind is that a substantial fraction of our corpus is Renaissance music, which even well-educated classical musicians may be less familiar with. We informed participants of the scope of the training data prior to the listening test, but the bias towards Renaissance patterns in both the training data and model output could make classical music education less informative for discrimination.\"}",
"{\"title\": \"revision summary\", \"comment\": \"We have updated the paper. This comment summarizes the substantial changes.\\n\\n(*) We have included an additional Appendix B that precisely describes several popular factorizations of the distribution over scores, including the one used in this paper (Appendix B.5).\\n\\n(*) Some clarifications of the cross-entropy metric in Section 3.\\n\\n(*) Multiple revisions in Section 4 to clarify notation.\\n\\n(*) Clarification of the distinction between a monophonic and homophonic prediction task in Section 4.\\n\\n(*) Revision to Table 3: renaming \\u201cbottom\\u201d and \\u201ctop\\u201d to \\u201cpart\\u201d and \\u201cglobal\\u201d respectively, along with clarification in the caption regarding the meaning of these terms.\"}",
"{\"title\": \"well-written paper\", \"review\": \"The paper is well written and presented, giving a good literature review and clearly explaining the design decisions and trade-offs. The paper proposes a novel factorisation approach and uses recurrent networks.\\n\\nThe evaluation is both quantitative and qualitative. The qualitative experiment is interesting, but there is no information given about the level of musical training the participants had. You would expect very different results from music students compared to the general public. How did you control for musical ability/ understanding?\\n\\nThe paper has a refreshing honesty in its critical evaluation of the results, highlighting fundamental problems in this field.\\n\\nOverall, while I am not an expert in musical composition and machine learning, the paper is clear, and appears to be advancing the art in a reliable fashion.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Contains a good overview and extensive simulations. Unfortunately poor technical writing.\", \"review\": \"Composing polyphonic music is a hard computational problem. \\nThis paper views the problem as modelling a probability distribution \\nover musical scores that is parametrized using convolutional and recurrent \\nnetworks. Emphasis is given to careful evaluation, both quantitatively and qualitatively. The technical parts are quite poorly written.\\n\\nThe introduction is quite well written and it is easy to follow. It provides a good review that is nicely balanced between older and recent literature. \\n\\nUnfortunately, at the technical parts, the paper starts to suffer due to sloppy notation. The cross entropy definition is missing important details. What does S exactly denote? Are you referring to a binary piano roll or some abstract vector valued process? This leaves a lot of guess work to the reader. \\nEven the footnote makes it evident that the authors may have a different mental picture -- I would argue that a piano roll does not need two bits. Take a binary matrix: Roll(note=n, time=t) = 1 (=0) when note n is present (absent) at time t. \\n\\nI also think the term factorization is sometimes used freely as a synonym for representation in last paragraphs of 4 and first two paragraphs of 5 -- I find this misleading without proper definitions.\\n\\nThe models, which are central to the message of the paper, are not described clearly. Please\\ndefine function a(\\\\cdot) in (2), (3), (4), : this maybe possibly a typesetting issue (and a is highly likely a sigmoid) but what does x_p W_hp x x_pt etc stand for? Various contractions? You have only defined the tensor as x_tpn. Even there, the proposed encoding is difficult to follow -- using different names for different ranges of the same index (n and d) seems to be avoiding important details and calling for trouble. Why not just introduce an order 4 tensor and represent everything in the product space as every note must have a duration? \\n\\nWhile the paper includes some interesting ideas about representation of relative pitch, the poor technical writing makes it not suitable to ICLR and hard to judge/interpret the extensive simulation results.\", \"minor\": \"For tensors, 'rank-3' is not correct use, please use order-3 here if you are referring to the number of dimensions of the multiway array. \\n\\nWhat is a non-linear sampling scheme? Please be more precise.\", \"the_allan_williams_citation_and_year_is_broken\": \"Moray Allan and Christopher K. I. Williams. Harmonising Chorales by Probabilistic Inference. Advances in Neural Information Processing Systems 17, 2005.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"sections almost made sense, output almost sounded good, ... (\\\"4: Ok but not good enough\\\")\", \"review\": \"PROs\\n-seemingly reasonable approach to polyphonic music generation: figuring out a way to splitting the parts, share parameters appropriately, measuring entropy per time, all make sense\\n-the resulting outputs tend to have very short-term harmonic coherence (e.g. often a \\u2018standard chord\\u2019 with some resolving suspensions, etc), with individual parts often making very small stepwise motion (i.e. reasonable local voice leading)\\n-extensive comparison of architectural variations\\n-positive results from listening experiments\\n\\nCONs\\n-musical outputs are *not* clearly better than some of the polyphonic systems described; despite the often small melodic steps, the individual lines are quite random sounding; this is perhaps a direct result of the short history\\n-I do not hear the rhythmic complexity that is described in the introduction\\n-the work by Johnson (2015) (ref. provided below) should be looked at and listened to; it too uses coupled networks, albeit in a different way but with a related motivation, and has rhythmic and polyphonic complexity and sounds quite good (better, in my opinion) \\n-some unclear sections (fixable, especially with an appendix; more detail below)\\n-despite the extensive architectural comparisons, I was not always clear about rationale behind certain choices, eg. if using recurrent nets, why not try LSTM or GRU? (more questions below)\\n-would like to have heard the listening tests; or at least read more about how samples were selected (again, perhaps in an appendix and additional sample files)\\n\\n quality, clarity, originality and significance of this work, including a list of its pros and cons (max 200000 characters).\\n\\nQuality -- In this work, various good/reasonable choices are made. The quality of the actual output is fine. It is comparable to-- and to my ears not better than-- existing polyphonic systems such as the ones below (links to sample audio are provided here):\\n\\n-Bachbot - https://soundcloud.com/bachbot (Liang et al 2017)\\n- tied parallel nets - http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/ (Johnson 2015, ref below)\\n-performanceRNN - https://magenta.tensorflow.org/performance-rnn - (Simon & Oore 2017)\\n..others as well..\\n\\n\\nClarity -- Some of the writing is \\\"locally\\\" clear, but one large, poorly-organized section makes the whole thing confusing (details below). It is very helpful that the authors subsequently added a comment with a link to some sample scores; without that, it had been utterly impossible to evaluate the quality. There are a few points that could be better clarified:\\n\\t-p5\\u201da multi-hot vector of notes N\\u201d. It sounds like N will be used to denote note-numbers, but in fact it seems like N is the total number of notes, i.e. the length of the vector, right? What value of N is used?\\n-p5 \\u201ca one-hot vector of durations D\\u201d. It sounds like D will be used to denote durations, but actually I think D is the length of the 1-hot vector encoding durations right? What value of D is used, and what durations do the elements of this vector represent?\\n-similarly, does T represent the size of the history? 
This should really be clarified.\\n\\t-p5 Polyphonic models.\\n\\t\\t-Eq (2), (3), (4): Presumably the h\\u2019s are the hidden activation layers?\\n\\t\\t-the networks here correspond to the blue circles in Fig 1, right? If so, make the relationship clear and explicit \\n\\t\\t-Note that most variables in most equations are left undefined \\n\\t\\t-actually defining the W\\u2019s in Eq(2-4) would allow the authors to refer to the W\\u2019s later (e.g. in Section 5.2) when describing weight-sharing ideas. Otherwise, it\\u2019s all rather confusing. For example, the authors could write, \\u201cThus, we can set W_p1 = W_p2 = W_p3 = W_p4\\u201d (or whatever is appropriate). \\n\\t-Generally, I found that pages 5-7 describe many ideas, and some of them are individually fairly clearly described, but it is not always clear when one idea is beginning, and one idea is ending, and which ideas can be combined or not. On my first readings, I thought that I was basically following it, until I got to Table 5, which then convinced me that I was in fact *not* quite following it. For example, I had been certain that all the networks described are recurrent (perhaps due to Fig1?), but then it turned out that many are in fact *not* recurrent, which made a lot more sense given the continual reference to the history and the length of the model\\u2019s Markov window etc. But the reader should not have had to deduce this. For example, one could write, \\n\\t\\u201cWe will consider 3 types of architectures: convolutional, recurrent, .... In each architecture, we will have [...] modules, and we will try a variety of combinations of these modules. The modules/components are as follows:\\u201d. It\\u2019s a bit prosaic, but it can really help the reader. \\n-Appendices, presented well, could be immensely helpful in clarifying the exact architectures; obviously not all 22 architectures from Table 5 need to be shown, but at least a few of them shown explicitly would help clarify. For example, in Fig1, the purple boxes seem to represent notes (according to the caption), but do they actually represent networks? If they really do represent notes, then how can \\u201cnotes\\u201d receive inputs from both the part-networks and the global network? Also, I was not entirely clear on the relationship of the architecture of the individual nets (for the parts) to that of the global integrating network. E.g. for experiment #20, the part-net is an RNN (with how many layers? with regular or LSTM cells?) followed by a log-linear predictor (with one hidden layer of 300 units, right? or are there multiple layers sometimes?), but then what is the global network? Why does the longest part-history vector appear to have length 10 based on Table 5, but according to Table 3 the best-performing history length was 20? Though, I am not sure the meaning of the \\u201cbottom/top\\u201d column was explained anywhere, so maybe I am completely misunderstanding that aspect of the table? Etc.\\n-Many piano scores do not easily deconstruct into clean 4-part polyphony; the example in Appendix A is an exception. It was not clear to me how piano scores were handled during training. \\n-Terminology: it is not entirely clear to me why one section is entitled \\u201chomophonic models\\u201d, instead of just \\u201cmonophonic models\\u201d. Homophonic music usually involves a melody line that is supported by other voices, i.e. a sort of asymmetry in the part-wise structure.
Here, the outputs are quite the opposite of that: the voices are independent, they generally function well together harmonically, and there is usually no sense of one voice containing a melody. If there\\u2019s some reason to call it homophonic, that would be fine, but otherwise it doesn\\u2019t really serve to clarify anything. However, the authors do say that the homophonic composition tasks are a \\u201cminor generalization of classic monophonic composition tasks\\u201d, so this suggests to me that there is something here that I am not quite understanding.\\n\\nThe last sentence of Section 5.3 is very confusing-- I don\\u2019t understand what lin_n is, or 1_n is, or how to read the corresponding entries of the table. The first part of the paragraph is fairly clear.\", \"table_4\": \"\\u201cThe first row\\u201d actually seems like it is referring to the second row. I know what the authors mean, but it is unnecessarily confusing to refer to it in this way. One might as well refer to \\u201cthe zeroth row\\u201d as listing the duration of the clip :)\", \"the_experimental_evaluation\": \"I would like to hear some of the paired samples that were played for subjects. Were classical score excerpts chosen starting at random locations in the score, or at the beginning of the score? It is known that listening to a 10-second excerpt without context can sometimes not make sense. I would be curious to see the false positives versus the false negatives. Nevertheless, I certainly appreciate the authors\\u2019 warning to interpret the listening results with caution.\\n\\nOriginality & Significance -- So far, based both on the techniques and the output, I am not entirely convinced of the originality or significance of this particular system. The authors refer to \\u201crhythmically simple polyphonic scores\\u201d such as Bachbot, but I cannot see what is rhythmically fundamentally more sophisticated about the scores being generated by the present system. One nice characteristic of the present system is the true and audible independence of the voices.\\n\\nOne of the contributions appears to be the construction of models that explicitly leverage, with shared weights, some of the patterns that occur in different \\u201cplaces\\u201d (pitch-wise and temporally) in music. This is both very reasonable, and also not an entirely novel idea; see for example the excellent work by Daniel Johnson, \\u201cGenerating Polyphonic Music Using Tied Parallel Networks\\u201d (paper published 2017, first shared online, as far as I know, in 2015: links to all materials available at http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/ )\\nAnother now common (and non-exclusive) way to handle some of this is by augmenting the data with transposition. It seems that the authors are not doing this here. Why not? It usually helps. \\n\\nAnother contribution appears to be the use of a per-time measure of loss. This is reasonable, and I believe others have done this as well. I certainly appreciated the explicit justification for it, however.\\n\\nNote that the idea of using a vector to indicate metric subdivision was also used in (Johnson 2015).\\n\\nPlaying through some of the scores, it is clear that the melodies themselves are often quite unusual (check user studies), but the voices do stay closely connected harmonically, which is what gives the system a certain aural coherence.
I would be interested to hear (and look at) what is generated in two-part harmony, and even what is generated-- as a sort of baseline-- with just a single part.\", \"i_encourage_the_authors_to_look_at_and_listen_to_the_work_by_johnson\": \"-listening samples: http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/\\n-associated publication: http://www.hexahedria.com/files/2017generatingpolyphonic.pdf\\n\\nOverall, I think that the problem of generating rhythmically and polyphonically complex music is a good one, the approaches seem to generally be reasonable, although they do not appear to be particularly novel, and the musical results are not particularly impressive. The architectural choices are not always clearly presented.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Demos\", \"comment\": \"We've sampled some scores from the model described in the paper and released them anonymously here:\", \"http\": \"//ec2-18-219-197-207.us-east-2.compute.amazonaws.com/\\n\\nCode for loading the KernScores dataset discussed in the paper will be made available once this submission is de-anonymized.\"}"
]
} |
|
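The exchange above turns on whether a piano roll needs one bit or two per (note, time) cell: a single presence bit cannot distinguish the onset of a note from the continuation of a note held from an earlier frame. Below is a small numpy sketch of the two-bit encoding the authors point to; the note tuples and frame grid are invented purely for illustration.

```python
import numpy as np

def two_bit_piano_roll(notes, n_pitches, n_frames):
    """Encode (pitch, onset_frame, duration) tuples as a (2, pitch, time) roll.

    Channel 0 marks frames where a note is sounding; channel 1 marks onset
    frames. With channel 0 alone, two consecutive notes on the same pitch are
    indistinguishable from one long held note -- the ambiguity debated above.
    """
    roll = np.zeros((2, n_pitches, n_frames), dtype=np.uint8)
    for pitch, onset, duration in notes:
        roll[0, pitch, onset:onset + duration] = 1  # sustain bit
        roll[1, pitch, onset] = 1                   # onset bit
    return roll

# Two eighth notes on pitch 60 versus one quarter note on pitch 62:
# identical sustain rows, different onset rows.
notes = [(60, 0, 2), (60, 2, 2), (62, 0, 4)]
roll = two_bit_piano_roll(notes, n_pitches=128, n_frames=4)
print(roll[0, 60], roll[1, 60])  # [1 1 1 1] [1 0 1 0]
print(roll[0, 62], roll[1, 62])  # [1 1 1 1] [1 0 0 0]
```

The single-bit matrix the reviewer proposes corresponds to channel 0 alone; the authors' point is that the onset channel is what keeps re-struck notes recoverable from the encoding.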
HkzRQhR9YX | Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling | [
"Josue Nassar",
"Scott Linderman",
"Monica Bugallo",
"Il Memming Park"
] | Many real-world systems studied are governed by complex, nonlinear dynamics. By modeling these dynamics, we can gain insight into how these systems work, make predictions about how they will behave, and develop strategies for controlling them. While there are many methods for modeling nonlinear dynamical systems, existing techniques face a trade-off between offering interpretable descriptions and making accurate predictions. Here, we develop a class of models that aims to achieve both simultaneously, smoothly interpolating between simple descriptions and more complex, yet also more accurate models. Our probabilistic model achieves this multi-scale property through a hierarchy of locally linear dynamics that jointly approximate global nonlinear dynamics. We call it the tree-structured recurrent switching linear dynamical system. To fit this model, we present a fully-Bayesian sampling procedure using Polya-Gamma data augmentation to allow for fast and conjugate Gibbs sampling. Through a variety of synthetic and real examples, we show how these models outperform existing methods in both interpretability and predictive capability. | [
"machine learning",
"bayesian statistics",
"dynamical systems"
] | https://openreview.net/pdf?id=HkzRQhR9YX | https://openreview.net/forum?id=HkzRQhR9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lUtwLVgE",
"Hyx_x1UsyN",
"S1emGeRIyE",
"r1eVWiHqRQ",
"B1eKkCndA7",
"HJeZ36huRX",
"ryxq_63dRm",
"rke9PnnOCQ",
"rygVmgaR3Q",
"SyetRg5a3Q",
"ryxsWdw927"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545000830187,
1544408815619,
1544114187186,
1543293691976,
1543192032818,
1543191976678,
1543191922048,
1543191650465,
1541488667806,
1541411024880,
1541203970551
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1408/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1408/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1408/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1408/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1408/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1408/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a recurrent tree-structured linear dynamical system to model the dynamics of a complex nonlinear dynamical system. All reviewers agree that the paper is interesting and useful, and is likely to have an impact in the community. Some of the doubts that reviewers had were resolved after the rebuttal period.\\n\\nOverall, this is a good paper, and I recommend an acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper on modeling nonlinear dynamical system.\"}",
"{\"title\": \"Response to authors' rebuttal\", \"comment\": \"Thanks to the authors for the detailed and sufficient contents added to the appendix. I am satisfied with the new proof provided by the author and am willing to support it to be accepted. My score to the paper is also updated accordingly.\"}",
"{\"title\": \"Response to Authors' Revision\", \"comment\": \"Thank you for the significant updates and detailed clarification. The revision has sufficiently addressed all my concerns. I have upgraded my score and am willing to support the acceptance of this paper.\"}",
"{\"title\": \"Updates sufficiently address my concerns\", \"comment\": \"Thank you for the response and the significant updates to the paper. The rebuttal sufficiently addresses most of my concerns. Although, I still have some concerns about scalability, they are not a showstopper for me and I am willing to support the acceptance of this revised paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for her/his insightful review. We note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used in rSLDS. As stated in the response to AnonReviewer1 above, we can recover sequential stick-breaking from tree-structured stick-breaking by enforcing the left node at each level in the tree to be a leaf node. We have amended the manuscript to make this connection explicit.\\n\\nSince tree-structured stick-breaking is a strict generalization of sequential stick-breaking, the expressive power of TrSLDS theoretically subsumes that of rSLDS. The question reduces to a comparison of tree structures; in our experiments, a comparison of right-branching trees to balanced binary trees. We emphasize this by including two new examples in the appendix in which the true dynamics and the entire latent structure are known; the first being the \\u201csynthetic Nascar\\u201d example used in Linderman et al. (2017), where the true model follows a right-branching tree, as in the standard rSLDS, to emphasize that we can effectively learn these dynamics with a tree-structured model. The second example is a twist on the synthetic Nascar where the underlying model is a TrSLDS and where we test both rSLDS and TrSLDS. In this example (Fig. 7), rSLDS fails due to the sequential nature of stick-breaking that cannot adequately capture the locally-linear dynamics.\\n\\n>>Experiment Section\\nWe thank the reviewer for pointing out the missing information in the experiments section and have amended the manuscript with corrections. As stated above, we have included two more examples in the appendix to highlight not only the expressive power of TrSLDS, but also to show that the sampler is indeed mixing well. Concerning the real data experiment, we have amended the manuscript with more results from the analysis. The orientations were chosen to resemble a tree where orientations 140 and 150 have the same parent; the same is true for orientations 230 and 240. Thanks to the multi-scale nature of TrSLDS, the method is able to learn this relation and assigns the two groups to different subtrees. It then refines the dynamics by focusing on each of these two groups separately. \\n\\n>>Scalability\\nThe computational complexity of TrSLDS is of the same order as rSLDS; for specifics please refer to our response to AnonReviewer 1.\\n\\nTo address the concerns regarding the samplers mixing speed as a function of number of discrete states, we fit a TrSLDS with K = 2, 4, 8 discrete states, keeping the amount of data used to train the model fixed, and plotted the log joint density as function of samples and included it in the appendix. From the plots, the sampler seems to converge to a mode of the posterior after about 150-250 samples for each of the various numbers of discrete states. Due to the nature of Gibbs sampling, we are limited to batch updates to each of the conditional posteriors. While scalability has not been an issue in our experiments, we will explore stochastic variational approaches in future work.\\n\\n>>Minor\\nWe thank the reviewer for pointing out these minor mistakes and have corrected them in the amended manuscript.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for her/his insightful review and for bringing up the prior work done by Stanculescu et al. 2014. In Stanculescu et al. 2014., they propose adding a layer to factorized SLDS where the top-level discrete latent variables determine the conditional distribution of z_t, with no dependence on x_{t-1}. While the tree-structured stick-breaking used in TrSLDS is also a hierarchy of discrete latent variables, the model proposed in Stanculescu et al. 2014., has no hierarchy of dynamics, preventing it from obtaining a multi-scale view of the dynamics. Stanculescu et al. (2014) also reference preceding work on hierarchical SLDS by Zoeter & Heskes (2003), the only example they found in the literature. In Zoeter & Heskes (2003), the authors construct a tree of SLDSs where an SLDS with K possible discrete states is first fit. An SLDS with M discrete states is then fit to each of the K clusters of points. This process continues iteratively, building a hierarchical collection of SLDSs that allow for a multi-scale, low-dimensional representation of the observed data. While similar in spirit to TrSLDS, there are key differences between the two models.\\nFirst, it is through the tree-structured prior that TrSLDS obtains a multi-scale view of the dynamics, thus we only need to fit one instantiation of TrSLDS; in contrast, they fit a separate SLDS for each node in the tree, which is computationally expensive. There is also no explicit probabilistic connection between the dynamics of a parent and child in Zoeter & Heskes (2003). We also note that TrSLDS aims to learn a multi-scale view of the dynamics while Zoeter & Heskes (2003) focuses on smoothing, that is, they aim to learn a multi-scale view of the latent states corresponding to data but not suitable for forecasting. We have amended the manuscript to include a section discussing prior and related work.\\n\\n>>Technical Soundness\\nThe rSLDS and the TrSLDS share the same linear time complexity for sampling the discrete and continuous states, and both models learn K-1 hyperplanes to weakly partition the space. Specifically, both models incur: an O(TK) cost for sampling the discrete states, which increases to O(TK^2) if we allow Markovian dependencies between discrete states; an O(TD^3) cost (D is the continuous state dimension) for sampling the continuous states, just like in a linear dynamical system; and an O(KD^3) cost for sampling the hyperplanes. The only additional cost of the TrSLDS stems from the hierarchical prior on state dynamics. Unlike the rSLDS, we impose a tree-structured prior on the dynamics to encourage similar dynamics between nearby nodes in the tree. Rather than sampling K dynamics parameters, we need to sample 2K-1. Since they are all related via a tree-structured Gaussian graphical model, the cost of an exact sample is O(KD^3) just as in the rSLDS, with the only difference being a constant factor of about 2. Thus, we obtain a multi-scale view of the underlying system with a negligible effect on the computational complexity. We have amended the manuscript to make this clear.\\n\\nWe also note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used by rSLDS. We can recover sequential stick-breaking from tree-structured stick-breaking by enforcing the left node at each level in the tree to be a leaf node. 
Our experiments only considered balanced binary trees for simplicity, but an interesting avenue of future work is to learn the tree structure, perhaps through additional MCMC. Learning such discrete representations is highly non-trivial and demands further investigation outside this submission. We have amended the manuscript to make this connection explicit.\\n\\n>>Empirical Results\\nThe Lorenz attractor in experiment 2 was also used as a benchmark for the rSLDS (Linderman et al., 2017; Fig. 4). The only difference is that Linderman et al. generated binary observations with a Bernoulli GLM emission model. For completeness, we ran TrSLDS on the synthetic nascar example used to test rSLDS in Linderman et al. (2017) to see if we could recover the dynamics and the discrete latent state assignments and included the results in the appendix. We also note that we included another example in the appendix where the data are generated from an alternative version of the synthetic nascar example from Linderman et al. (2017), where the underlying model is a TrSLDS, and compared both TrSLDS and rSLDS.\"}",
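The stick-breaking comparison in the responses above is concrete enough to sketch. Below is a minimal illustration, under the assumption of a balanced binary tree, of how tree-structured stick-breaking turns logistic gates at internal nodes into leaf probabilities; a right-branching tree (every left child a leaf) recovers sequential stick-breaking. The names (`tree_leaf_probs`, `R`, `r`) are illustrative, not the paper's notation.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def tree_leaf_probs(x, R, r, depth):
    """Leaf probabilities for a balanced binary tree with `depth` gate levels.

    Internal nodes are stored in heap order (children of node n at 2n+1
    and 2n+2). At node n, a logistic gate sigmoid(R[n] @ x + r[n]) routes
    mass to the left subtree and its complement to the right. Each leaf's
    probability is the product of gate values along its root-to-leaf path,
    so the leaf probabilities sum to one by construction.
    """
    assert R.shape[0] >= 2 ** depth - 1  # one hyperplane per internal node
    probs = np.ones(2 ** depth)
    for leaf in range(2 ** depth):
        node, lo, hi = 0, 0, 2 ** depth
        for _ in range(depth):
            mid = (lo + hi) // 2
            p_left = sigmoid(R[node] @ x + r[node])
            if leaf < mid:                      # leaf lies in the left subtree
                probs[leaf] *= p_left
                node, hi = 2 * node + 1, mid
            else:                               # leaf lies in the right subtree
                probs[leaf] *= 1.0 - p_left
                node, lo = 2 * node + 2, mid
    return probs

rng = np.random.default_rng(0)
depth, dim = 2, 2                               # 4 leaves, 3 internal nodes
R, r = rng.normal(size=(3, dim)), rng.normal(size=3)
p = tree_leaf_probs(rng.normal(size=dim), R, r, depth)
assert np.isclose(p.sum(), 1.0)
```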
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for her/his insightful review. We note that we added a section to the Appendix comparing the computational complexity of TrSLDS and rSLDS in which it states that the computational complexity of TrSLDS is of the same order as rSLDS; for specifics please refer to our response to AnonReviewer 1. Thus, we obtain a multi-scale view of the underlying system with a negligible effect on the computational complexity. We also note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used by rSLDS; we have amended the manuscript to make this connection explicit.\\n\\nFor both synthetic experiments, the predictive power of TrSLDS and rSLDS (as well as SLDS and LDS) was compared using k-step R^2 (Figs 2 & 3). In both synthetic experiments, the predictive power of TrSLDS is at least as much rSLDS. \\n\\nTo better highlight the differences between TrSLDS and rSLDS, we added two more examples in the appendix. The first example is the synthetic NASCAR from (https://arxiv.org/pdf/1610.08466.pdf) where the underlying model is indeed an rSLDS (Fig. 6). The space is partitioned into 4 sections using sequential stick-breaking, where the trajectories trace out oval tracks similar to a NASCAR track. TrSLDS was fit to see if it could recover the dynamics even though it relies on tree-structured stick-breaking. From Fig. 6, it is evident that TrSLDS can recover the dynamics and obtain a multi-scale view. The second example is a twist on the TrSLDS, where the underlying model is a TrSLDS i.e. the space is partitioned using tree-structured stick-breaking (Fig. 7). We ran TrSLDS and rSLDS and compared their predictive performance using k-step R^2. From Fig. 7, we can see that rSLDS could not adequately learn the vector field due to its reliance on sequential stick-breaking. This provides empirical evidence that the expressive power of TrSLDS subsumes that of rSLDS which was stated in our response to AnonReviewer 1.\"}",
"{\"title\": \"Summary of revisions\", \"comment\": \"We thank all the reviewers for their suggestions and have amended the manuscript accordingly. Here is a summary of the changes:\\n1) Added a paragraph discussing prior work on hierarchical extensions of SLDS.\\n2) Added section describing the Polya-Gamma data augmentation scheme.\\n3) Redid the experiments to better highlight the multi-scale nature of our algorithm. (Figs 2, 3, 4)\\n4) Added a section in the appendix describing how to handle Bernoulli observations using Polya-Gamma data augmentation scheme.\\n5) Added a section in the appendix providing details on the message-passing used in the sampling.\\n6) To better highlight the differences between rSLDS and TrSLDS, two new experiments have been added to the appendix including a benchmark experiment from the original rSLDS paper. (Figs. 6 & 7)\\n7) Added a section in the appendix discussing the computational complexity of fitting the model. We also show empirically how the time till convergence of the MCMC sampler changes as a function of discrete latent states by fitting three TrSLDS of varying number of leaf nodes and plot the log of the joint density. (Fig. 5)\"}",
"{\"title\": \"a tree extension to previous work rSLDS\", \"review\": \"This paper introduces a probabilistic model to model nonlinear dynamic systems with multiple granularities. The nonlinearity is achieved by using multiple local linear approximations. The method is an extension to rSLDS (recurrent switching linear dynamical systems), which in turn is an extension to SLDS.\", \"pros\": \"1. Introducing the tree structure is a neat way of extending the existing rSLDS model to multiscale scenarios. \\n2. The paper is written clearly. The background is well illustrated and the idea rises naturally from there. The paper is also solid in the part describing the model.\", \"con\": \"1. from the rSLDS paper (https://arxiv.org/pdf/1610.08466.pdf), the authors there was experimenting with some settings similar to those used in this paper. However, I am not able to find some explicit comparison between the TrSLDS and rSLDS in this work. I think it should be needed since TrSLDS itself is derived out from rSLDS, it would be good to show explicitly the advantage of the new model.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"This is an interesting paper that seems to make a non-trivial contribution. The paper is however not positioned against existing work in hierarchical SLDS and is also somewhat sloppy in the experiment. I currently give this paper a borderline rating but would increase the scores if my concerns above are addressed satisfactorily.\", \"review\": \"PAPER SUMMARY:\\n\\nThis paper introduces a probabilistic generative framework to model linear dynamical systems at multiple levels of resolution, where the entire complex, nonlinear dynamics is approximated via a hierarchy of local regimes of linear dynamics -- the global dynamic is then characterized as a switching process that switches between linear regimes in the hierarchy.\\n\\nNOVELTY & SIGNIFICANCE:\\n\\nThe key contributions of this paper are (a) the use of tree-structured stick breaking to partition the entire dynamic space into a hierarchy of linear regimes; (b) the design of a hierarchical prior that is compatible to the tree structure; and (c) the developed Bayesian inference framework for it in Section 4.\\n\\nBy exploiting the tree-structured stick breaking process (Adams et al., 2010), the proposed framework is able to partition the entire dynamic space into a hierarchy of switching linear regimes.\\n\\nThis allows the dynamic to be queried at multiple levels of resolution. This appears to be the key difference between the proposed framework and the previous work of (Linderman et al., 2017) on recurrent switching dynamical systems that partition the dynamic space sequentially at the same level of resolution.\\n\\nThis seems like a non-trivial extension to the previous work of (Linderman et al., 2017) & I tend to consider this a novel contribution. That said, the paper was also not positioned against existing literature on hierarchical switching linear dynamic systems (see below) & I find it hard to evaluate the significance of the proposed framework (which explains the borderline rating)\\n\\n\\\"A Hierarchical Switching Linear Dynamical System applied to the detection of sepsis in neonatal condition monitoring\\\", Ioan Stanculescu, Christopher K. I. Williams and Yvonne Freer. In Proceeding of the 30th Conference on Uncertainty in AI (UAI-14), pages 752-761\\n\\nCould the authors please discuss the differences between the proposed work & (at least) the above?\", \"technical_soundness\": \"The technical exposition makes sense to me. Please also discuss the processing complexity of the resulting TrSLDS framework. In exchange for the improved performance, how much slower TrSLDS is as compared\\nto rSLDS? I am interested to see this demonstrated in the empirical studies.\", \"clarity\": \"The paper is clearly written.\", \"empirical_results\": \"The experiments look interesting and are very extensive on both test domains. However, I do not understand why the authors decided not to compare with rSLDS using its benchmark? \\n\\nI find this somewhat sloppy and hope the authors would clarify this too. \\n\\n****\", \"post_rebuttal_update\": \"The authors have made significant revision to their work, which sufficiently addressed all my concerns. I have upgraded my score accordingly and I am willing to support the acceptance of this paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting extension to switching linear dynamical systems; scalability concerns and issues with experiments.\", \"review\": \"The authors develop a tree structured extension to the recently proposed recurrent switching linear dynamical systems. Like switching linear dynamical systems (sLDS) the proposed models capture non-linear dynamics by switching between a collection of linear regimes. However, unlike SLDS, the transition between the regimes is a function of a latent tree as well as the preceding continuous latent state. Experiments on synthetic data as well as neural spike train data are presented to demonstrate the utility of the model.\\n\\nThe paper is clearly written and easy to read. The tree structured model (TrSLDS) is a sensible extension to rSLDS. While one wouldn\\u2019t expect TrSLDS to necessarily fit the data any better than rSLDS, the potential for recovering multi-scale, possibly more interpretable decompositions of the dynamic process is compelling. \\n\\nWhile the authors do provide some evidence of being able to recover such multi-scale structures, overall the experiments are underwhelming and somewhat sloppy. First, to understand whether the sampler is mixing well, it would be nice to include an experiment where the true dynamics and the entire latent structure (including the discrete states) are known, and then to examine how well this ground-truth structure is recovered. Second, for the results presented in section 5, how many iterations was the sampler run for? In the figures, what is being visualized?, the last sample?, the MAP sample? or something else? I am not sure what to make of the real data experiment in section 5.3. Wouldn\\u2019t rSLDS produce nearly identical results? What is TrSLDS buying us in this scenario? Do the higher levels of the tree capture interesting low resolution dynamics that are not shown for some reason? \\n\\nMy other big concern is scalability. To use larger number of discrete states one would need deeper (or wider if the binary requirement is relaxed) trees. How well does the sampler scale with the number of discrete states? How long did the sampler take for the various 4-state results presented in the paper?\", \"minor\": \"a) There is a missing citation in the first para fo Section 5. \\nb) Details of message passing claimed to be in the supplement are missing.\\n\\n============\\nThere are interesting ideas in this paper. However, experimental section could better highlight the benefits afforded by the model and scalability concerns need to be addressed.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryM07h0cYX | Reinforced Pipeline Optimization: Behaving Optimally with Non-Differentiabilities | [
"Aijun Bai",
"Dongdong Chen",
"Gang Hua",
"Lu Yuan"
] | Many machine learning systems are implemented as pipelines. A pipeline is essentially a chain/network of information processing units. As information flows in and out and gradients vice versa, ideally, a pipeline can be trained end-to-end via backpropagation provided with the right supervision and loss function. However, this is usually impossible in practice, because either the loss function itself may be non-differentiable, or there may exist some non-differentiable units. One popular way to superficially resolve this issue is to separate a pipeline into a set of differentiable sub-pipelines and train them with isolated loss functions. Yet, from a decision-theoretical point of view, this is equivalent to making myopic decisions using ad hoc heuristics along the pipeline while ignoring the real utility, which prevents the pipeline from behaving optimally. In this paper, we show that by converting a pipeline into a stochastic counterpart, it can then be trained end-to-end in the presence of non-differentiable parts. Thus, the resulting pipeline is optimal under certain conditions with respect to any criterion attached to it. In experiments, we apply the proposed approach - reinforced pipeline optimization - to Faster R-CNN, a state-of-the-art object detection pipeline, and obtain empirically near-optimal object detectors consistent with its base design in terms of mean average precision. | [
"Pipeline Optimization",
"Reinforcement Learning",
"Stochastic Computation Graph",
"Faster R-CNN"
] | https://openreview.net/pdf?id=ryM07h0cYX | https://openreview.net/forum?id=ryM07h0cYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkl-cPJ-gE",
"HyeZwJEcnQ",
"Syx6N8zqnm",
"Syxl8c0D2m"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544775561371,
1541189464636,
1541183028912,
1541036615576
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1407/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1407/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1407/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1407/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The work proposes a method for smoothing a non-differentiable machine learning pipeline (such as the Faster-RCNN detector) using policy gradient. Unfortunately, the reviewers identified a number of critical issues, including no significant improvement beyond existing works. The authors did not provide a rebuttal for these critical issues.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"No rebuttal submitted\"}",
"{\"title\": \"paper review\", \"review\": \"The authors use RPO (Shulman et al, 2015) to transform non-differentiable operations in Faster R-CNN such as NNS, RoIPool, mAP to stochastic but differentiable operations. They cast Faster R-CNN as a SCG which can be trained end-to-end. They show results on VOC 2007.\", \"pros\": \"(+) The idea of casting a non-differentiable pipeline into a stochastic one is very reasonable\\n(+) This idea is showcased for a hard task, rather than toy examples, thus making it more realistic and exciting\", \"cons\": \"(-) Results are rather underwhelming\\n(-) Important properties of the final approach, such as complexity (time, memory, FLOPs) are not mentioned at all\\n\\nWhile the idea the authors present seems reasonable and is showcased for a hard problem, such as object detection and on a well-designed system such as Faster R-CNN, the results are rather underwhelming. The proposed approach does not show any significant gains on top of the original pipeline (for ResNet101 the reported gains are < 0.2%). These small gains come at the expense of a more complicated definition and training procedure. The added complexity is not mentioned by the authors, such as time, memory requirements and FLOPs. In addition, the VOC2007 benchmark is rather outdated and much smaller than others. It would be nice to see similar results on COCO, which is larger and more challenging. \\n\\nSimilar efforts in this direction, namely making various modules of the Faster R-CNN pipeline differentiable, have shown little gains as well. For example, Dai at al., CVPR 2016, convert RoIPool into RoIWarp (following STN, Jaderberg et al) that allows for differentiation with respect to the box coordinates.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Unclear about the extent of the contribution\", \"review\": [\"The paper proposes a method for converting a non-differentiable machine learning pipeline into a stochastic, differentiable, pipeline that can be trained end-to-end with gradient descent approaches.\", \"Clarity: The language in the paper is very clear and easy to follow. The paper is lacking in clarity only when discussing some results/concepts from previous work (see detailed comments below).\", \"Quality: Overall the paper is in good shape, aside from some concerns which I will describe further.\", \"Originality: The originality is not very clear because it seems that a lot of ideas are borrowed from Schulman et al. (2015) (i.e. the concept of stochastic computation graph and how to compute the gradient) and from Rao et, al (2018) (i.e. sampling bounding boxes in some stages of the pipeline). To be fair to the authors, I am not very familiar with the two papers mentioned above, which makes this hard to judge. However, I think this paper could have explained more clearly which part exactly is a novelty of this paper, and where it separates from the rest.\", \"Significance: The concept of converting a non-differentiable pipeline to a differentiable version is indeed very useful and widely applicable, but the experimental section did not convince me that this particular method indeed works: the results show a very small improvement (0.7-2%) on a single system (Faster R-CNN), that has already been pretrained (so not clear if this method can learn from scratch).\"], \"pros\": \"1)\\tOverall the paper is well written.\\n2)\\tThe algorithm shown in Figure 4 nicely summarizes the whole algorithm.\\n3)\\tI particularly liked the part of Section 3 where it is shown the equivalence between the optimal parameters for the non-differentiable pipeline and the optimal parameters for the differentiable version.\\n4)\\tFigure 5 with detailed results is useful.\", \"cons\": \"5)\\tThe way the paper is written, it is not clear where the contribution of this paper separates from existing work, mainly Schulman et al. (2015). I believe the idea of going around non-differentiability via minimizing a surrogate loss (i.e. your equation (2) introduced by Schulman et al. (2015)) is already known. I\\u2019m not sure exactly where this work diverges from that.\\n6)\\tThe contribution of this paper is posed as a general framework for turning an arbitrary non-differentiable pipeline into a similar differentiable and stochastic version. However, the experimental section does not convince me that: \\n a)\\tit is general \\u2013 because it is applied only on the Faster R-CNN problem. \\n b)\\tthat it can learn from scratch \\u2013 it is only applied after the base method has been pre-trained. There are no experiments where you train a network from scratch with this new differentiable pipeline. If the reason is that ResNets are hard to train from scratch, then you can always try your pipeline on a smaller problem, even a synthetic dataset, just to prove that it works.\\n c)\\tthat the improvement is significant from the baseline method \\u2013 the results section show only a 1-2% increase in mAP, and only for the smaller networks (on larger ResNet models the gain is less than 1%, and the standard deviation is getting larger).\", \"detailed_comments\": \"7)\\tYou only cite the work of Schulman et al. (2015) at the beginning of section 2.1. While moving to section 2.2, I initially got the wrong impression that this us your contribution. 
Please state clearly where this comes from.\\n8)\\tIt is not explained well why the new gradient can be estimated as in equation (2). I spent quite some time trying to figure out where that comes from (particularly the log part), only to realize that the explanation is probably in the original work (at the time when I thought this was your contribution). Please point the readers to it.\", \"final_remarks\": \"Overall this paper introduces some interesting ideas. My main concerns were: (1) the originality, and (2) the results are not convincing. Perhaps concern (1) can be easily clarified by the authors, but for concern (2) it might be useful to show new results (training from scratch, other architectures to prove generality), as well as give arguments as to why the 1-2% gain in mAP is significant.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
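For context on item 8: the identity behind "the log part" of a gradient estimator like equation (2), standard since REINFORCE and used by Schulman et al. (2015), is the score-function (log-derivative) trick. A sketch of the derivation:

```latex
\nabla_\theta \, \mathbb{E}_{x \sim p_\theta}\!\left[ f(x) \right]
  = \int f(x)\, \nabla_\theta p_\theta(x)\, dx
  = \int f(x)\, p_\theta(x)\, \nabla_\theta \log p_\theta(x)\, dx
  = \mathbb{E}_{x \sim p_\theta}\!\left[ f(x)\, \nabla_\theta \log p_\theta(x) \right]
```

using \nabla_\theta p_\theta(x) = p_\theta(x) \nabla_\theta \log p_\theta(x). A Monte Carlo estimate of the final expectation equals the gradient of the surrogate loss f(x) \log p_\theta(x) with f treated as a constant, which is where the log-probability term in such estimators comes from.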
"{\"title\": \"Lacking meaningful baselines and some claims are dubious\", \"review\": [\"Pros:\", \"Improving joint training of non-differentiable pipelines is a meaningful and relevant problem\", \"Using the stochastic computation graph structure to smooth a pipeline in a structured way is a plausible idea\"], \"cons\": \"+ The main result of the paper concerning sufficient conditions for optimality of the method seems dubious\\n+ It is not obvious why this method would outperform simple baselines, and baselines for joint training were tried\\n+ The notation seems unnecessarily bloated and overly formal\\n+ The exposition spends too much time on prior work, too little on the contribution, and the description of the contribution is confusing\\n\\nThe submission describes a method for smoothing a non-differentiable machine learning pipeline (such as the Faster-RCNN detector), so that gradient-based methods may be applied to jointly train all the parameters of the pipeline. In particular, the proposal involves recasting the pipeline as a stochastic computation graph (SCG), adding stochastic nodes to this graph, and then using REINFORCE-style policy gradients to perform parameter learning on the SCG. It is claimed that under certain conditions, the optimal parameters of the resulting SCG are also optimal for the original pipeline. The method is applied to optimizing the parameters of Faster-RCNN.\\n\\nI think making non-differentiable pipelines differentiable is an intuitively appealing concept. A lot of important, practical machine learning systems fall into this category, so devising a nice way to do global parameter optimization for such systems could potentially have significant impact. In general, we can\\u2019t hope to make much meaningful progress on the problem of optimizing general nonlinear, differentiable functions, but it is plausible that a method that targets key non-differentiable components for smoothing\\u2014such as this paper\\u2014could outperform a generic black-box optimizer. So, I think the basic idea here is plausible and addresses an important problem.\\n\\nUnfortunately, I think this work loses sight of that high-level goal: to me, the key question is whether the proposed approach outperforms any other simple method for global parameter optimization in the presence of nonlinearities and nondifferentiability. The paper fails to answer this question because no baselines for global parameter optimization were tried. We can just treat the pipeline as a black box mapping parameters to training set performance, and so any black-box optimization method can be applied to this problem. It is not clear that the proposed method would outperform an arbitrary black box optimization method such as simulated annealing, Nelder-Mead, cross-entropy method, etc.\\n\\nI think there are also much simpler methods in a similar vein to the proposed method that might also perform just as well as the proposal. One key conceptual issue here is that reducing the problem to a reinforcement learning problem, as the submission does, is not much of a reduction at all. First, if the goal is to do global parameter optimization, then we don\\u2019t really have to smooth the pipeline itself: we can just smooth the black box mapping parameters to performance, and then optimize that with SGD. 
There are many ways to do this--if we want to use policy gradient, we can just express the problem as something in this form:\\n\\nmin_\\\\phi E_{\\\\theta ~ q_\\\\phi} C(\\\\theta)\\n\\nwhere C is the black-box mapping parameters \\\\theta to a performance index (such as mean AP), q_\\\\phi is a distribution over parameters (e.g., Gaussian), and \\\\phi are the distribution parameters (e.g., mean, covariance of the Gaussian). We can then optimize this using REINFORCE policy gradients.\\n\\nIf we want to really smooth the pipeline itself, then it is also easy to do this by devising a suitable MDP and then applying REINFORCE with the usual MDP structure. We simply identify the state s_t at time t with the output of the t\\u2019th pipeline stage, introduce a new \\u2018action\\u2019 variable a_t representing a \\u2019stochastified output\\u2019, and trivial dynamics (P(s_{t+1} | s_t, a_t) = \\\\delta(s_{t+1} - a_t)). If the policy is a Gaussian (P(a_t | s_t) = N(a_t; s_t, \\\\Sigma)), then this is similar to relaxing the constraint that one stage\\u2019s output is equal to the input of the next stage, and somehow quadratically penalizing their difference. In fact, there is a neural network training method based explicitly on this penalization view [A], and it would make yet another great baseline to try.\\n\\nIn fact, the proposed method is essentially similar to what I have just described, but it is unfortunately described in an overcomplicated way that obscures the true nature of the method. I think the whole SCG framework is overkill here. Too much of the paper is spent just rehashing the SCG framework, and the very heavy notation again just obscures the essential character of the method.\\n\\nIf there were, as the paper claims, some interesting condition under which the method produces solutions that are optimal under the original pipeline, that would be remarkable and interesting. However, I have serious doubts about this part of the paper. The key problem is the statement that \\u201cIt follows that c(k_c, DEPS_c - k_c) = c(\\u2026) + z_c\\u201d. The paper seems to be claiming that if E z = 0, then c(k + z) = c(k) + z, which can\\u2019t possibly be true in general. \\n\\nThe heavy and opaque notation makes it very difficult to understand this section. Perhaps it would help to consider a very simple example. Suppose we want to minimize E_{x ~ q} c(y(x)) (where x ~ q means x is distributed as q). We can introduce only one new stochastic node (k = y + z), between y and c. Clearly c(y + z) is not generally equal to c(y) + z, even if E z = 0.\\n\\nIn summary, I think the submission needs a lot of work on multiple axes before it can make a significant impact. The most important issues are a complete lack of relevant baselines and the dubious claims about sufficient conditions for optimality. The idea could have merit, but it needs to be carefully compared and motivated with respect to existing work (such as [A]) as well as the simple baselines I have mentioned. The presentation also needs to be revised to find the simplest expression of the method and to focus on the interesting parts.\\n\\n[A] Taylor, Gavin, et al. \\\"Training neural networks without gradients: A scalable admm approach.\\\" International Conference on Machine Learning. 2016.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
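The black-box baseline this review proposes is easy to make concrete. Below is a minimal sketch of min_\phi E_{\theta ~ q_\phi} C(\theta) with a diagonal Gaussian q_\phi and REINFORCE gradients; the toy cost, hyperparameters, and function names are illustrative and not from the submission.

```python
import numpy as np

def reinforce_blackbox(C, dim, steps=2000, pop=64, lr=1e-2, seed=0):
    """Minimize E_{theta ~ N(mu, diag(exp(2*log_std)))}[C(theta)] by REINFORCE.

    C is any black-box map from parameters to a scalar cost; no gradient
    of C is needed. Subtracting the population-mean cost as a baseline
    reduces the variance of the score-function estimator.
    """
    rng = np.random.default_rng(seed)
    mu, log_std = np.zeros(dim), np.zeros(dim)
    for _ in range(steps):
        std = np.exp(log_std)
        eps = rng.normal(size=(pop, dim))
        thetas = mu + std * eps                    # samples from q_phi
        costs = np.array([C(t) for t in thetas])
        adv = costs - costs.mean()                 # baseline-subtracted costs
        # grad log N(theta; mu, std^2) is eps/std w.r.t. mu, eps^2 - 1 w.r.t. log_std
        g_mu = (adv[:, None] * (eps / std)).mean(axis=0)
        g_ls = (adv[:, None] * (eps ** 2 - 1.0)).mean(axis=0)
        mu -= lr * g_mu
        log_std -= lr * g_ls
    return mu

# A nondifferentiable toy cost: rounding destroys the gradient but not REINFORCE.
best = reinforce_blackbox(lambda t: np.round((t - 3.0) ** 2).sum(), dim=2)
```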
]
} |
|
HJeRm3Aqt7 | GenEval: A Benchmark Suite for Evaluating Generative Models | [
"Anton Bakhtin",
"Arthur Szlam",
"Marc'Aurelio Ranzato"
] | Generative models are important for several practical applications, from low-level image processing tasks to model-based planning in robotics. More generally,
the study of generative models is motivated by the long-standing endeavor to model uncertainty and to discover structure by leveraging unlabeled data.
Unfortunately, the lack of an ultimate task of interest has hindered progress in the field, as there is no established way to
compare models and, oftentimes, evaluation is based on mere visual inspection of samples drawn from such models.
In this work, we aim at addressing this problem by introducing a new benchmark evaluation suite, dubbed \textit{GenEval}.
GenEval hosts a large array of distributions capturing many important
properties of real datasets, yet in a controlled setting, such as lower intrinsic dimensionality, multi-modality, compositionality,
independence and causal structure. Any model can be easily plugged in for evaluation, provided it can generate samples.
Our extensive evaluation suggests that different models have different strengths, and that GenEval is a great tool to gain insights about how models and metrics work.
We offer GenEval to the community~\footnote{Available at: \textit{coming soon}.} and believe that this benchmark will facilitate comparison and development of
new generative models. | [
"generative models",
"GAN",
"VAE",
"Real NVP"
] | https://openreview.net/pdf?id=HJeRm3Aqt7 | https://openreview.net/forum?id=HJeRm3Aqt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkghsjrZgV",
"BJe5bbvFyN",
"H1ejlDZKJN",
"HJe6ONKE14",
"S1x64itm1V",
"S1esM6qiCX",
"H1xrM48i0m",
"ByeYmJdF0X",
"SkgfBIGYCm",
"H1xNzHZFRQ",
"BkepyS-FAX",
"SkltHnlFRm",
"BkefNpuA2Q",
"H1gCUtMchQ",
"B1xEK15_nX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544801187618,
1544282370257,
1544259314593,
1543963764582,
1543899956910,
1543380243095,
1543361549201,
1543237408630,
1543214650096,
1543210251729,
1543210212765,
1543208000916,
1541471530509,
1541183830152,
1541083004175
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1406/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1406/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper introduces a benchmark suite providing a series of synthetic distributions and metrics for the evaluation of generative models. While providing such a tool-kit is interesting and helpful and it extends existing approaches for evaluating generative models on simple distributions, it seems not to allow for very different additional conclusions or insights.This limits the paper's significance. Adding more problems and metrics to the benchmark suite would make it more convincing.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Intersting benchmark suite that could be extended.\"}",
"{\"title\": \"more complex distributions\", \"comment\": \"Note that we do have \\\"image-like\\\" distributions in the set (the \\\"shifted bumps\\\" distribution). Moreover, all the distributions we show results for have parameterized difficulty: for example, with the shifted bumps, it is the size of the image (equivalently, the number of bumps), the random scalings to the height and width of the bump, and the amount of ambient noise. It is trivial to adjust the parameters to make the problem harder if we want.\\n\\nWhen we use a mixture of Gaussians, the difficulty depends on the number of components, and the covariance structure of the components. We show in Figure 1 the effect of changing these. \\n\\nWhen we build a product of mixtures of Gaussians, the complexity depends on the number of mixture components c per product component, and the number p of product components. We show two settings for these in Figure 6. \\n\\nFinally, note that it is also trivial with the toolbox to compose distributions to get something as complicated as you want. You want a product of mixtures of shifted bumps and low-d manifolds? its easy, just a line of code. The requirement we have is that any distribution we add we have full control over: we should be able to compute its log-likelihood, sample from it, etc.; but by composing the building blocks one can make enormously complicated distributions over which we have full control. We don't display the results of these things because we don't consider them informative per amount of space we are allowed. \\n\\nSo, in short: the tool makes it easy to control the difficulty of the problem. The tool does have image-like distributions. We show results on complexities that we consider informative, but this is not because a limit of the tool, but rather because a limit on the pages in a submission.\"}",
"{\"title\": \"The value of simple examples\", \"comment\": \"I completely agree on the value of simple distributions. Something is fundamentally wrong or at least very illustrative if a method demonstrates poor performance on a simple distribution. It's crucial for simple distributions to be included. I think the value of a benchmark suite is to strike this balance between being focused and being somewhat comprehensive. It also helps if it can be connected to previous results on more complex distributions. I think for a benchmark suite to be really valuable it has to some examples of both.\"}",
"{\"title\": \"thanks for the response\", \"comment\": \"w.r.t. \\\"Many of these models only sense on more complex distributions\\\": Choosing synthetic distributions with easily understood properties is part of the contribution of this work. Our approach allows analysis of which models can approximate distributions with which kinds of properties.\\n\\n While we agree with the statement that on simpler distributions, the neural models may be overkill, we think the idea that a model should *only* \\\"work\\\" when the problem is hard (and hard to measure success on) to be a cause for concern. We are careful in this work to not suggest that simple models that can do well on these distributions are better models (see our discussion section). On the other hand, we do see that the neural models often fail to successfully approximate simple distributions. One may surmise that even though these models can generate interesting and convincing samples in more complex settings (e.g. with convolutional networks and image data), they are probably not approximating the distribution there either (corroborating several other works suggesting this). \\n\\n Note that if nothing else, decoupling the influence of the inductive biases of the model architecture and the modeling protocol is important.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your thoughtful reply. I have updated my review to take into account your response. One clarification is that I think you have a wonderful variety of generative models, but I wish there were more distributions. Many of these models only sense on more complex distributions and it's worth creating some larger synthetic ones to capture some of what's going on larger settings.\"}",
"{\"title\": \"failure of optimization/training protocol\", \"comment\": \"We mean that for a distribution P, we can write down the neural generator (i.e. explicitly write down the values of the weights) that would map Gaussian noise to P with low distortion in our metrics. However, when we train a GAN, for example, with the generator hyperparameters (e.g. number of layers, nonlinearity type, and hidden dimensions) of the hand-designed model (and a sweep over discriminator parameters), the training fails to find a generator that is low distortion. Thus we know that the reason for the failure to find the generator is because of the optimization/optimization protocol, and not because a good generator is not realizable by the neural nets we use.\"}",
"{\"title\": \"Further questions\", \"comment\": \"I am a bit confused here. Why did the protocol/optimization fail to find the correct parameters from the sweep if the best choice of hyper-parameters is in the sweep space? I assume the authors were referring to optimize towards different metrics.\"}",
"{\"title\": \"Thanks for updates\", \"comment\": \"Thanks for your response and changes in the revision; they've helped. I agree about the dimensionality issues with Gaussian-kernel MMD but also think that it will definitely help to add in a revision anyway as you've said.\\n\\nA minor note, though, that your revision has introduced several typos and grammatical errors (and also your .bib entry for Zaheer et al. is formatted incorrectly); make sure to give it a proofread in a further revision.\"}",
"{\"title\": \"rebuttal:\", \"comment\": \"\\u201cThe only models mentioned are Gaussians and Mixture of Gaussians. Section 4 mentions VAEs and GANs but those are not generally seen as particular generative models.\\u201d :\\nThis is not true. We show results using Gaussians, Mixtures of Gaussians, Kernel Density Estimators, VAEs, several flavors of GANs, and Real NVPs (see sec. 4). \\nWhile one can make an argument that a GAN is not a generative model by some definitions, all the other models are generative models by standard definitions. Furthermore, we take pains to give a definition of \\u201cgenerative model\\u201d as used in the paper (see third paragraph in the introduction). \\n \\n \\u201cI would like to see described a wider variety of models, including possibly models with discrete latent variables as much recent literatue is currently exploring.\\u201d: \\nBesides the fact that we do in fact have a wide variety of deep and non-deep models, note that we include mixtures of Gaussians, and KDE\\u2019s, both of which have discrete latent variables.\\n\\n\\u201c\\u201cOften for evaluation now, many papers use a Deep Gaussian model trained to model MNIST digits. I worry that insights drawn from the synthetic examples won't transfer when the models are applied to real-world tasks.\\u201d\\nThe point of this work is to work in a controlled setting. Since we know the properties of the ground truth data distribution, we can leverage these properties to much more accurately estimate model fitting using our various distortion metrics. See last paragraph of sec 7 and sec. 8 for a summary of the findings and contributions. In short, this work is meant to be complementary to prior attempts at comparing generative models using natural images.\"}",
"{\"title\": \"response to \\\"other issues\\\"\", \"comment\": \"##\\n\\n\\\" - In Section 1, the authors argued that \\\"we deliberately ... domain specific neural architectures\\\". Then in footnote 4, they mentioned that ....\\\": \\n\\nThe \\\"specific neural architectures\\\" that can generate the distributions *were* in the search space of the parameter sweeps/optimizations; that is the point of the comment. We explicitly wrote down the weights of a network (with a choice of hyper-parameters from the sweep) that was a (near) solution to the given modeling problem. The goal of this exercise was to show that the modeling failures were not because it is impossible to model the distributions with the architectures that were in the sweep, but because the protocol/optimization failed to find the correct parameters.\\n\\n##\\n\\n\\\"The authors only used ...\\\" :\\n\\nThe distributions should be learnable with 10K training points. They either have very low-dim structure (2 or 3 intrinsic dim), a small number of very well defined clusters (<50), or independence structure that factorizes the distribution into several (simpler) distributions with these structures (e.g. the product of mixtures is 3 clusters per independent factor, with <8 factors). The empirical results suggest that these *are* mostly learnable at this numbers of samples, even if not learned by every scheme. Furthermore, sample complexity is a reasonable thing to care about- in our opinion restricting to the case where one has access to unlimited samples is unrealistic.\"}",
"{\"title\": \"explanation regarding the pros and cons of the existing models and the efficacy of the proposed metrics *is* substantially discussed in the paper\", \"comment\": \"\\\"are not well known\\\": We still see many papers suggesting that GANs learn a generator that accurately samples from a generic distribution. Just showing that this is not empirically true across a wide range of distributions and GAN variants, and with huge parameter sweeps (a grid search with about 20000 different combinations of hyper-parameters) is valuable: even if it is well known by some, corroboration of these results at this scale is useful.\\n\\n##\\n\\n\\\"Pros and Cons\\\": we make the following conclusions, amongst others:\", \"1\": \"RealNVP works generally better than the other neural models we tested, although it also fails to model distributions as simple as mixture of Gaussians with relatively large number of components and tight covariance (see the 3d paragraph in the discussion section, and also each of the paragraphs in section 7 describing the tables).\", \"2\": \"Mixture of Gaussians and KDE are baselines that can be hard to beat in these simple settings, but can also fail catastrophically, e.g. for MOG when the number of clusters in the data is much higher than in the mixture (figure 1 and 2), and for KDE when distributions have product structure (figure 4 and the discussion of it in section 7)\", \"3\": \"different distortion statistics offer complementary insights into the performance of various models. In particular, while 2S is generally more trustworthy since its oracle performance is closer to 0, OT can offer a different perspective (see 3d paragraph in section 7 describing figure 1). This is true even when OT \\\"fails\\\" (see VAE working better than oracle hinting at its denoising effect). Specifically, OT is less trustworthy when the true distribution has an intrinsically high dimension, and fills space.\", \"4\": \"The GAN variants do not generally seem to improve over vanilla GAN, except in cluster coverage in the product of mixtures distribution (discussed in the 2nd paragraph of the discussion section).\", \"5\": \"VAE's can collapse to a manifold, and fail at capturing \\\"noise\\\" dimensions (see 4th paragraph in section 7 or last paragraph in the discussion section 8)\\n\\nOur paper documents an analysis tool designed to answer the question \\\"Can your model learn to sample some simple distributions?\\\". We absolutely think it will help drive model design, because if you think, for example, that your new optimization method is making GANs better, you can test that. On the other hand, we think explaining the nice properties of GANs, or elucidating \\\"the reason ... nice generative properties of GANs ...\\\" is beyond the scope of this work. As is making specific recommendations on how to improve GANs is outside the scope of the work. These are interesting topics, but they are not the topics of this paper.\\n\\n##\\n\\nw.r.t. [1] and [2]: our work can be considered a far more comprehensive version of [1]; and in particular, [1] does not really offers any insights that help drive model design, except to show that some forms of GAN seem to struggle with simple 1d distributions. [2] is demonstrating the results of a *particular* test to see if a generative model learns the input distribution, applies it to the setting of image-generating models, and concludes that GANs that produce high quality images are not learning the distribution. 
That work does suggest a method for improving model design for GANs, namely increasing the discriminator capacity; but in our work we see that this does not seem to be enough. Again, our work is a much more comprehensive suite of tests, together with the results of those tests on a set of popular generative modeling protocols. We thank the reviewer for reminding us of these; we cite them in the related work.\"}",
"{\"title\": \"Thanks for helpful review\", \"comment\": \"We thank the referee for their careful review. Your effort has substantially improved the paper.\", \"before_we_respond_to_specific_criticisms\": \"to the best of our knowledge, estimating the distance between continuous, high-dimensional distributions from samples in a computationally tractable manner is not a solved problem. We actually agree with many the issues you raise with the metrics; but we feel it is better to do what we can with what we have rather than wait for the solution to this hard problem. Already there is a lot we can say about the generative models we study even with these imperfect tools. Moreover, we *do* put the oracle scores in each table so a reader can understand when the measurements should be regarded with caution.\\n\\nResponses to specific criticisms, in order of the review:\\n\\n\\\"their followup, Kurach et al.\\\": we have added a citation. Note that this is concurrent work.\\n\\n\\\"You've confused some notions here\\\": We have replaced the inaccurate language in the revision, we thank the reviewer for the comment. \\n\\n\\\"Incidentally, this is exactly the example used in Arora et al.\\\": we have added a citation to this example.\\n\\n Note that in many of the examples where the distribution is not near a very low dimensional manifold (which is makes the estimation easier in terms of the number of samples), we use a factorized OT, projecting onto known independent sets of coordinates. This is less powerful than the full OT, because two distributions that have the same marginals would have distance 0, but is easier to estimate. Furthermore, we do in the text discuss the empirical limitations of OT in our particular setting, see the third paragraph in section 7.\\n\\nYour scheme you call \\\"Two-Sample Test,\\\" ... ': We have changed this to \\\"nearest-neighbor two-sample statistic.\\n\\n\\\"The strong change in performance for KDE is somewhat hard to interpret, but maybe has something to do with the connection between KDE and NN-based methods?\\\": we added a footnote explaining this, see the bottom of page 8.\\n\\n\\n\\\"There is at least one score in common use for this kind of evaluation with easy-to-compare estimators... \\\" We will add a flavor of MMD to the final revision, and run all the experiments with it as a primary distortion measure. We apologize for being unable to include it in this revision, but it will definitely be there.\\n\\n A kernelized MMD with a Gaussian-type kernel is still going to be affected by the curse of dimensionality, as even though one might have better control over how the estimator converges, the metric itself becomes blurrier as a function of dimension. However, we agree with the reviewer that the theoretical properties of MMD make it appealing and appropriate for this work, and not having MMD was a serious omission. Again, this will be rectified for the final revision.\\n\\n\\n\\\"Why is Pedregosa et al. (2011) cited \\\": we use this reference because we use this code\\n\\n\\n-\\\"Mode coverage and related scores ...\\\": Indeed, this can fail exactly as the reviewer suggests. Note however that a spherical Gaussian is going to have blocks of independent coordinates- it would do well on any reasonable independence test. The point is that we consider this measurement to be not useful if the results using other metrics are very bad.\"}",
"{\"title\": \"This paper proposes a series of metrics and generative models to evaluate different approximate inference frameworks.\", \"review\": \"Updated to reflect author response:\\n\\nThis paper proposes a series of metrics to use with a collection of generative models to evaluate different approximate inference frameworks. The generative models are designed to be synthetic and not specialized to a particular task. The paper is clearly written and the motivation is very clear.\\n\\nWhile there has been work like Forestdb to maintain a collection of generative models, I don't believe\\nthere has been work to evaluate how they perform on a series of metrics. There would be great utility\\nin having a less ad-hoc way to evaluate inference algorithms.\\n\\nWhile the idea is sound, the work still feels a bit incomplete. The only distributions used in the experimental section seem to be Gaussians and Mixture of Gaussians. Many more families of distributions are mentioned in Section 3, and it would have been nice to show some evaluation of them considering the code is already there. In addition to distributions mentioned in Section 3, it would help if there were a few larger dimensional distributions. Often for evaluation now, many papers\\nuse a Deep Gaussian model trained to model MNIST digits. I worry that insights drawn from\\nthe synthetic examples won't transfer when the models are applied to real-world tasks.\\n\\nI would like to see described a wider variety of models, including possibly more models with\\ndiscrete latent variables as much recent literature is currently exploring.\\n\\nThe paper is a bit confusing in how it discusses distributions and models. Distributions form the ground truth we compare different trained models to. It would been more clear for me if the explanation with supplemented with some notation to describe who will compare draws from the true data distributions to samples from each of the trained generative models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Unsatisfactory results on a very important topic\", \"review\": \"This work aims at addressing the generative model evaluation problem by introducing a new benchmark evaluation suite which hosts a large array of distributions capturing different properties. The authors evaluated different generative models including VAE and various variants of the GANs on the benchmark, but the current presentation leaves the details in the dark.\\n\\nThe proposed benchmark and the accompanied metrics should provide additional insights about those generative models that are not well known and help drive improvement to model design, similar to [1] and [2]. But the presentation of the work, especially the experiment section, only gives abundant number of results without detailed explanation regarding the pros and cons of the existing models, the efficacy of the proposed metrics, or the reason behind some nice generative properties of GANs that are not able to learn the distribution well.\", \"other_issues\": \"- In Section 1, the authors argued that \\\"we deliberately avoid convolutional networks on images with the aim of decoupling the benefits of various modeling paradigms from domain specific neural architectures\\\". Then in footnote 4, they mentioned that \\\"constructed by hand neural generators that well approximate these distributions\\\" which suggests the importance of the domain specific neural architectures. It would be nicer to see how much the \\\"specific\\\" neural architectures help and how different metrics favor different architectures.\\n- The authors only used 10K training points and 1K test samples, which seems small especially for multivariate distributions. This could have impacts on the quality of the learned models, especially the neural ones.\\n\\n[1] M. Zaheer, C.-L. Li, B. Poczos, and R. Salakhutdinov. GAN connoisseur: can GANs learn simple 1D parametric distributions? NIPS Workshop on Deep Learning: Bridging Theory and Practice 2017.\\n[2] S. Arora, and Y. Zhang. Do GANs actually learn the distributions? An empirical study. arXiv:1706.08224.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good but with one glaring flaw\", \"review\": \"Overall, this is a thorough attempt at a system for evaluating various generative models on synthetic problems vaguely representative of the kinds of problems claimed to be covered by GANs. I think the approach and the conclusions drawn are mostly reasonable, with one major caveat discussed shortly.\\n\\nI also think it would help in a revision to add evaluations of more recent successors to RealNVP, such as MAF (NIPS 2017, https://arxiv.org/abs/1705.07057 ), Glow ( https://arxiv.org/abs/1807.03039 ), and (although of course this paper came out concurrently with your submission) the promising FFJORD ( https://openreview.net/forum?id=rJxgknCcK7 ). The scale of comparison of GAN variants is also much smaller than that of Lucic et al. or their followup, Kurach et al. ( https://arxiv.org/abs/1807.04720 ), which is not cited here (and should be).\\n\\nBut primarily, I think there are some serious concerns with your choice of metrics that make the results as they are difficult to interpret.\\n\\n\\n\\\"Note that OT is not a distance in the standard mathematical sense, as for instance the 'distance' between two sets of points sampled from the same distribution is not zero.\\\" -- You've confused some notions here. The Wasserstein-1 distance, which is a scalar times the variant of OT you use here, absolutely is a proper distance metric between distributions: W(P, Q) is a metric. But when you compute the OT distance between *samples*, OT(S, T) with S ~ P and T ~ Q, you're equivalently computing the distance W(\\\\hat{P}, \\\\hat{Q}) between the empirical distributions of the samples, \\\\hat{P} = 1/N \\\\sum_i \\\\delta_{S_i} and the similar \\\\hat{Q}, which of course are not the same thing as the source distributions themselves. You can, though, view OT(S, T) as an *estimator* of W(P, Q); the distance between *distributions* is what we actually care about.\\n\\nIt is well-known that these empirical distributions of samples \\\\hat{P} converge to the true distribution P (in the Wasserstein sense, W(P, \\\\hat{P})) exponentially slowly in the dimension, which is what your example about high-dimensional distributions demonstrates. Incidentally, this is exactly the example used in Arora et al. (ICML 2017, https://arxiv.org/abs/1703.00573 ). This means that, viewed as an estimator of the true distance between distributions, the empirical-distribution OT estimator is strongly biased. Thus it becomes very difficult to tell what the true OT value is at any sample size, and moreover this amount of bias might differ for different distribution pairs even at the same sample size, so *comparing* OT estimates at a fixed sample size is a tricky business. For example, in your Figure 2, when the \\\"oracle\\\" score is significantly more than zero, you know that all of your estimates are very strongly biased. There is not, as far as I know, any strong reason to suspect that this amount of bias should be comparable for different distribution pairs, making any conclusions drawn from these numbers suspect.\\n\\n\\nYour scheme you call \\\"Two-Sample Test,\\\" first, should have a more specific name. Two-sample testing is an extremely broad field, with instances including the classical Kolmogorov-Smirnov test and t tests, the popular-in-ML kernel MMD-based tests, and even Wasserstein-based tests (e.g. https://arxiv.org/abs/1509.02237 ). Previous applications of these tests in GANs and generative models include Bounliphone et al. 
(ICLR 2016, https://arxiv.org/abs/1511.04581 ), Lopez-Paz and Oquab (2016 - which you cite without a venue but which was at ICLR 2017), Sutherland et al. (ICLR 2017, https://arxiv.org/abs/1611.04488 ), Huang et al. (2018), and more, using a variety of schemes. Your name for this should include \"nearest neighbor\" or something along those lines to avoid confusion.\\n\\nAlso, you call this an \"extension of the original formulation,\" but in the common case where n(x) is more often right than wrong, your v is exactly \\hat t - 1 of Lopez-Paz and Oquab; see their (2). If it's usually wrong, then v = 1 - \\hat t; only when the signs differ per class does it significantly differ from theirs, and in any case I don't see a real motivation to put the absolute values for each class separately rather than just taking |\\hat t - 1/2|.\\n\\nMoreover, it's kind of crazy to term your v statistic a two-sample *test* -- you have nothing in there about its sampling distribution, which is key to hypothesis testing to obtain e.g. a p-value. (Maybe the variance of v is very different between different distributions; this is likely the case. In any case the variance will probably become extremely large as the dimension increases.) Comparing this score is thus difficult, but in any case calling it a \"test\" is potentially very misleading. You could, though, estimate the variance as described by Lopez-Paz and Oquab to construct a test.", \"also\": \"you can imagine the statistic v(S, T) as an estimator of the distance between distributions given as\\n D(P, Q) = |1/2 - \\int ( 1 if p(x) > q(x), 0 o.w.) p(x) dx|\\n + |1/2 - \\int (-1 if p(x) > q(x), 0 o.w.) q(x) dx|.\\nBut v(S, T) is, like for the OT distance, a biased estimator of this distance, whose bias will get worse with the dimension. Thus, like with the OT, it's hard to meaningfully compare v(S, T) as an attempt to compare *distributions* based on D, which is what we actually care about. Here the oracle score does not show strong bias: assuming a reasonable number of samples, when P = Q the v estimator is always going to be approximately 0. But this doesn't mean that other estimators aren't strongly biased, and indeed this is exactly what your Appendix C shows. The strong change in performance for KDE is somewhat hard to interpret, but maybe has something to do with the connection between KDE and NN-based methods?\\n\\n\\nYour log-likelihood score is an unbiased and asymptotically normal estimate of the true distribution score (the cross-entropy), so it's easy to compare. But it accounts only for a very small portion of comparing distributions.\", \"there_is_at_least_one_score_in_common_use_for_this_kind_of_evaluation_with_easy_to_compare_estimators\": \"the squared MMD. It has an easy-to-compute unbiased and asymptotically normal estimator, so it's easy to get confidence intervals for the true value between distributions at any sample size, making comparing the numbers based on a reasonable number of samples easy. There's also a well-developed theory for how to construct p-values for a test if you want those; Bounliphone et al. above even developed a relative test to compare the MMDs of two models accounting for the correlations due to using the same \"target\" set, though if you use separate target sets (because you can easily sample more points from your synthetic distribution) then it's simpler. 
The choice of kernel does matter, but I think the median-heuristic Gaussian kernel would be a very reasonable score to add to your repertoire, and for particular distributions you also might be able to pick a better kernel (e.g. based on the causal factors when those exist). See also Binkowski et al. (ICLR 2018, https://arxiv.org/abs/1801.01401 ) for a detailed discussion of these issues in comparison to the FID score.\\n\\nUsing a metric whose estimation can be understood, and whose estimators can be reliably compared, is I think vital to any evaluation process. This also prevents issues like when RealNVP outperforms the oracle, which should be impossible with any proper evaluation metric.\", \"minor_points\": [\"Why is Pedregosa et al. (2011) cited for fitting multivariate Gaussians by maximum likelihood? This is something that doesn't need a citation, especially not to scikit-learn, which doesn't even (I don't think) contain an implementation of fitting Gaussians beyond (np.mean(X, axis=0), np.cov(X, rowvar=False)).\", \"Mode coverage and related scores: this is based on assigning sample points to their single most likely clusters? I'd imagine that sometimes a model will output points far from any cluster, in which case the cluster that happens to be closest might happen to be the most likely, but it's strange to really count that point as part of that cluster for these scores. Or similarly, a point might be relatively evenly spaced between two clusters, in which case the assignment could be fairly arbitrary, again making these scores a little strange.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
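For concreteness, here is a minimal sketch of the kind of estimator I have in mind: the standard unbiased estimator of the squared MMD with a Gaussian kernel (the bandwidth rule shown is one common variant of the median heuristic; this is illustrative code, not taken from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd2_unbiased(X, Y):
    # Unbiased estimator of squared MMD between samples X (n, d) and Y (m, d),
    # using a Gaussian kernel exp(-||x - y||^2 / sigma2) with sigma2 set to
    # the median of the nonzero pairwise squared distances.
    D2 = cdist(np.vstack([X, Y]), np.vstack([X, Y]), "sqeuclidean")
    sigma2 = np.median(D2[D2 > 0])
    K = np.exp(-D2 / sigma2)
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    np.fill_diagonal(Kxx, 0.0)  # drop i == j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1))
            + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())
```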
]
} |
|
B1e0X3C9tQ | Diagnosing and Enhancing VAE Models | [
"Bin Dai",
"David Wipf"
] | Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. The code for our model is available at \url{https://github.com/daib13/TwoStageVAE}. | [
"variational autoencoder",
"generative models"
] | https://openreview.net/pdf?id=B1e0X3C9tQ | https://openreview.net/forum?id=B1e0X3C9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Sylhd5aZY4",
"BkxHhaKIOE",
"ryljrRXION",
"BylKX5QUdV",
"Ske8IIF7xE",
"rygm8MFA0X",
"BJxjvk7nCm",
"H1xarJXnCm",
"S1exXym2Cm",
"Bye1byCKCX",
"Sye7L4k1AQ",
"ryg--PcTTQ",
"SyxF01pF6X",
"Bkletw5FT7",
"ByeNgv9KTQ",
"H1xC7LAvaQ",
"S1low-VDpX",
"B1xK2VH4T7",
"ByebcVS4TX",
"BJxGPNr4TX",
"BJlRhQrETQ",
"BJ-97B4pQ",
"B1eNhzHVTX",
"Bye1QOek6m",
"Bkg4rgm93Q",
"HJxVsagYnm"
],
"note_type": [
"official_comment",
"comment",
"comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1554270836312,
1553534380600,
1553509955212,
1553508897159,
1544947278153,
1543569995013,
1543413602741,
1543413573428,
1543413527740,
1543261943377,
1542546506891,
1542461177109,
1542209488862,
1542199160181,
1542199020154,
1542084133777,
1542041955373,
1541850289171,
1541850248937,
1541850202007,
1541850038257,
1541849992516,
1541849771829,
1541502998974,
1541185595671,
1541111195696
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1405/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1405/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1405/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1405/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Clarification\", \"comment\": \"Sorry we responded to this last week, but just noticed that the response was located after your original comment (not the latest) and seemingly not accessible to everyone. Anyway, the answer to your question is as follows:\\n \\nWhen we compute FID scores, we first save the generated images to a folder using ``scipy.misc.imsave'' and then read the images to calculate the required inception feature. This is slightly different than directly using the output of the decoder because imsave will automatically rescale images to use the full (0, 255) range. This rescaling actually makes little difference to the FID scores on most datasets; however, for some reason CIFAR-10 data seems to be more sensitive, even though perceptually the generated images look the same. In general though, FID score computations are potentially sensitive to seemingly inconsequential factors, such as whether the tensorflow or pytorch inception network is used, so it is important to use a consistent methodology across different methods.\\n\\nIn the present case, to obtain results consistent with ours, generated images should be normalized to use the full (0,255) range as is done when using imsave under default settings. Although it is difficult to know for sure, given the common use of the imsave function, it is likely that many other works handle FID computations in the same way. We will update our github shortly to make these details more explicit.\"}",
"{\"comment\": \"I'm still getting a massive discrepancy between the FID scores reported in the paper and the FID scores I'm seeing after running the code.\\n\\nFor reference this is the command I'm using to train the model:\\n\\npython demo.py --dataset cifar10 --network-structure Infogan --epochs 1000 --epochs2 2000 --lr-epochs 300 --lr-epochs2 600 --batch-size 100\\n\\nFID scores are 100.58 and 94.93 for the VAE and 2-stage VAE models respectively. Any clarification would be appreciated, thanks.\", \"title\": \"After retraining...\"}",
"{\"comment\": \"Allow me to answer my own question: The default number of trained epochs for the code is 400 while in the paper the reported epochs trained for CIFAR-10 is 1000. I will delete my comment and retrain.\", \"title\": \"Oh\"}",
"{\"comment\": \"Sorry for the late comment. I am trying to reproduce your results using your code at https://github.com/daib13/TwoStageVAE\\n\\nI set the network structure to \\\"Infogan\\\", as that is what you reported using for the results in Table 1 of your paper. In your code it seems that a learnable gamma is the default setting and is not configurable. After training on CIFAR-10, I observed FID scores of 101.55 and 100.01 for the VAE and 2-stage VAE models, respectively. These are far from the mean values of 76.7 and 72.9 reported in Table 1.\\n\\nI was just wondering if you could clarify that \\\"Infogan\\\" is the correct setting for the network structure to reproduce these results? Thanks\", \"title\": \"Clarification on reported FID values\"}",
"{\"metareview\": \"The reviewers acknowledge the value of the careful analysis of Gaussian encoder/decoder VAE presented in the paper. The proposed algorithm shows impressive FID scores that are comparable to those obtained by state of the art GANs. The paper will be a valuable addition to the ICLR program.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting analysis of Gaussian VAEs, a simple VAE training approach that results in impressive sample quality\"}",
"{\"title\": \"Update regarding empirical evaluations\", \"comment\": [\"After our original submission, we have continued investigating a wider variety of generative models and evaluation metrics for broader research purposes. We summarize a few updates here that are relevant to our submission:\", \"As a highly-relevant benchmark, we have obtained additional FID scores for all of the GAN models trained using suggested hyperparameter settings (from original authors), as opposed to the scores we originally reported from (Lucic et al., 2018) that were based on a large-scale, dataset-dependent hyperparameter search. When averaged across all four datasets (i.e., MNIST, Fashion, CIFAR10, CelebA), all GAN models trained with suggested settings had a mean FID score above 45. In contrast, with hyperparameters optimized across 100 different settings independently for each dataset as in (Lucic et al., 2018), the mean GAN FID scores are all within the range 31-45. As a point of reference, our proposed 2-Stage VAE model with no tuning whatsoever (the same default settings across all datasets) has a mean FID below 40, which is significantly better than all of the GANs operating with analogous fixed/suggested settings, and well within the range of the heavily-optimized GANs. And all other existing VAE baselines we have tried (including additional ones computed since our original submission), are considerably above this range.\", \"In our original submission we also included results from a model labeled 2-Stage VAE*, where we coarsely optimized the hyperparameter kappa (the dimension of the latent representation). However, upon further reflection we have decided that it is probably better to remove this variant for two reasons. First, although the optimized GAN models involved searching over values from 7 hyperparameter categories (see the supplementary file from the latest NeurIPS 2018 version of (Lucic et al., 2018)), varying kappa was apparently not considered. Therefore it is somewhat of an apples-and-oranges comparison between our 2-Stage VAE* and the optimized GANs. Secondly, we have recently noticed that PyTorch and TensorFlow implementations of FID scores are sometimes a bit different (this appears to be the result of different underlying Inception models upon which the FID score is based). This discrepancy is inconsequential for our 2-Stage VAE model and associated baselines, but for 2-Stage VAE* the mean improvement differs by 4 depending on the FID implementation (this could be in part because optimizing over FID scores may exacerbate implementation differences). Regardless, this issue highlights the importance of using a consistent FID implementation across all models (a seemingly under-appreciated issue in the literature).\", \"Although normalizing flows have been frequently reported to improve log-likelihood values in VAE models, this type of encoder enhancement has not as of yet been shown to improve FID scores (at least in the literature we are aware of). Of course log-likelihood values are not a good indicator of generated sample quality at measured by FID (Theis et al., ICLR 2016), so improving one need not correlate with improving the other. Even so, per the suggestion of AnonReviewer1, we have conducted experiments using VAE models with normalizing flows (Rezende and Mohamed, ICML 2015) as an additional baseline. 
Thus far, we have not found any instances where the addition of flows improves the FID score within the standardized/neutral testing framework from (Lucic et al., 2018), and sometimes the flows can actually make the FID worse. Still there are numerous different flow-based models, and further investigation is warranted to examine whether or not some versions could indeed help in certain scenarios.\", \"Finally, we have also performed evaluations using the new Kernel Inception Distance (KID) quantitative metric of sample quality. This metric was proposed in (Binkowski et al., ICLR 2018) and serves as an alternative to FID. Note that we cannot evaluate all of the GAN baselines using the KID score; only the authors of (Lucic et al., 2018) could easily do this given the huge number of trained models involved that are not publicly available, and the need to retrain selected models multiple times to produce new average scores at optimal hyperparameter settings. However, we can at least compare our trained 2-Stage VAE model to other VAE baselines. In this regard we have found that the same improvement patterns reported in our original submission with respect to FID are preserved when we apply KID instead, providing further confidence in our approach.\"]}",
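As background, KID is the unbiased squared MMD between Inception features under a cubic polynomial kernel. A minimal sketch of the estimator (our own illustration for exposition, not the exact evaluation code we used; in practice the estimate is usually averaged over several random subsets of the samples):

```python
import numpy as np

def kid_mmd2(X, Y):
    # Unbiased squared MMD with the kernel k(x, y) = (x.y / d + 1)^3 from
    # Binkowski et al. (2018), where X (n, d) and Y (m, d) hold Inception
    # features for real and generated samples respectively.
    d = X.shape[1]
    Kxx = (X @ X.T / d + 1.0) ** 3
    Kyy = (Y @ Y.T / d + 1.0) ** 3
    Kxy = (X @ Y.T / d + 1.0) ** 3
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)  # drop i == j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1))
            + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())
```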
"{\"title\": \"High-Level Response\", \"comment\": \"Thank you for reading our earlier response carefully and showing continued interest in understanding the details. Just to clarify though, we are not arguing that joint training is unhelpful in other types of hierarchical generative models (such as in the references the reviewer mentioned, where we agree it can be advantageous). Rather, our analysis merely suggests that within the narrow context of our particular 2-stage VAE structure, joint training is unlikely to be beneficial. But the underlying reason for this is not actually a mystery. Although admittedly counterintuitive at first, the inadequacy of joint training is exactly what is predicted by the theory (the same core analyses that inspired our non-obvious approach to begin with). Furthermore, this prediction can be empirically tested, which we have done in multiple ways. For example, we have tried fusing the respective encoders and decoders from the first and second stages to train what amounts to a slightly more complex single VAE model. We have also tried merging the two stages including the associated penalty terms. In both cases, joint training does not help at all, with performance no better than the first stage VAE (which contains the vast majority of parameters).\"}",
"{\"title\": \"Detailed Explanation -- Part I\", \"comment\": \"To help provide a clearer explanation of this phenomena, we revisit the two criteria required for producing good samples from a generative model built upon an autoencoder structure (like a VAE). Per the analysis from reference (Makhzani et al., 2016) and elsewhere, these criteria are: (i) small reconstruction error when passing through the encoder-decoder networks, and (ii) an aggregate posterior q(z) that is close to some known distribution like N(0,I) that is easy to sample from. As mentioned in a previous response, the latter criteria ensures that we have access to a tractable distribution from which we can easily generate random input samples that, when passed through the learned decoder, will be converted to output samples resembling the training data.\\n\\nThe two stages of our proposed VAE model can be motivated in one-to-one correspondence with these two criteria. In brief, the first VAE stage addresses criteria (i) by pushing both the encoder and decoder variances towards zero such that accurate reconstruction is possible. However, the detailed analysis from Sections 2 and 3 of our submission suggests that as these variances go towards zero to achieve this goal, the reconstruction cost dominates the overall VAE objective because the ambient space is higher-dimensional than the latent space where the KL penalty resides. The consequence is that, although criteria (i) is satisfied, the aggregate posterior q(z) need not be close to N(0,I) (this is predicted by theory and explicitly confirmed by experiments, e.g., see Figure 1, rightmost plot). This then implies that if we take samples from N(0,I) and pass them through the learned decoder, the result will not closely resemble samples from the training data.\\n\\nOf course if we had a way to directly sample from q(z), we would not need to use N(0,I), since by design of any autoencoder-structured generative model samples from q(z) passed through the decoder will represent the training data (assuming the reconstruction criteria has been satisfied as mentioned above). Therefore, the second VAE stage of our proposal can be viewed as addressing criteria (ii) by learning a tractable approximation of q(z) that we can actually sample from intead of N(0,I). This estimate of q(z) is formed from a special, independent VAE structure explicitly designed such that the ambient and latent spaces have the same dimension allowing us to apply Theorem 1, which guarantees that a good approximation can be found when reconstruction and KL terms are in some sense properly balanced. Therefore, we now have access to a tractable process for producing samples from q(z), even though q(z) need not be close to N(0,I). Per the notation of our submission on page 6, bullet point 3, sampling u from N(0,I) and then z from p(z|u) is a close approximation to sampling z from q(z). This z can then be passed to the first-stage decoder to produce the desired data x.\"}",
"{\"title\": \"Detailed Explanation -- Part II\", \"comment\": \"Returning to the original question, how might joint training of the first and second VAE stages interfere with this process? The problem lies in the dominant influence of the reconstruction term from the first VAE stage. As the decoder variance goes to zero (as needed for perfect reconstruction), this term can be pushed towards minus infinity at an increasingly fast rate. If trained jointly, the extra degrees-of-freedom from the second-stage VAE parameters will be distracted from their original intended purpose of modeling q(z). Instead they will largely be used to push the dominant reconstruction term even further towards minus infinity (with increasing marginal gains) at the expense of working to address criteria (ii) which has only a modest effect on the overall cost.\\n\\nAnother way to think about this is to consider the following illustrative scenario. Suppose we have a 2-stage VAE model that produces a reconstruction error that is infinitesimally close to zero, but provides a poor estimate of q(z). Because the reconstruction term becomes increasingly dominant when close to zero per the analysis from Section 3, during joint training *all* parameters, including those from the second stage, will focus on pushing the reconstruction error even closer to zero, rather than improving the estimate of q(z). But from a practical standpoint generating realistic samples this is unhelpful, because it is far better to improve the estimate of q(z) than to make the reconstruction error infinitesimally closer to zero. This is why separate training is so critical, because it isolates the second-stage and forces it to address criteria (ii), rather than needlessly focusing on infinitesimal changes to the reconstruction term from criteria (i) that makes no perceptual difference to generated samples. And indeed, when we do train jointly, although the reconstruction errors are quite small as expected, the more pivotal FID scores measuring sample quality are bad precisely because q(z) has been neglected.\\n\\nRegardless, we realize that there are many subtleties involved here, and hope that the above comments provide helpful clarification and background.\"}",
"{\"title\": \"problem with question 3\", \"comment\": \"Thanks for the detailed reply. The answer for question 3 still bothers me. The authors state that the joint training of the two stage have no benefit for the model. This does not make sense, and the reason cannot convince me. There are many popular hierarchical generative models, i.e. DBM[1], DBN[2], GBN[3], which have an enhanced performance in joint training. I think the authors should find out the reason for the failed joint training.\\n\\n[1] Salakhutdinov R, Larochelle H. Efficient learning of deep Boltzmann machines[C]//Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010: 693-700.\\n[2] Hinton G E. Deep belief networks[J]. Scholarpedia, 2009, 4(5): 5947.\\n[3] Zhou M, Cong Y, Chen B. Gamma Belief Networks[J]. arXiv preprint arXiv:1512.03081, 2015.\"}",
"{\"comment\": \"Thanks for the reply. I think this is a great exposition of the differences and the paper will be strengthened by making some of these points in the revision.\", \"title\": \"Thanks\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for the continued engaging dialogue, and we can try to further clarify what we believe to be critical differentiating factors. First, you mentioned that the link between our method and (Tomczak and Welling, 2018) is that we both consider issues caused by the mismatch between the aggregate posterior q(z) and the prior p(z). But whether explicitly stated or not, essentially all methods based on an autoencoder structure share this exact same link on some level, so this is not any particular indication of close kinship in and of itself. And if this mismatch is ignored, then samples drawn from p(z) and passed through the decoder are highly unlikely to follow the true ground-truth distribution (see for example (Makhzani et al., 2016) mentioned in our submission).\\n\\nBeyond this though, the means by which we deal with this central, shared issue are fundamentally different. In our case, we exploit provable conditions whereby an independent second-stage VAE can effectively learn and sample from the unknown q(z) produced by a first stage VAE, and additionally, we provide direct empirical evidence supporting this theory (e.g., see Figure 1, righthand plot). Hence it no longer matters that p(z) and q(z) are not the same since we can just sample from the latter using the second-stage VAE. Even though this approach may seem counter-intuitive at first glance, an accurate model can in fact be learned (provably so in certain situations), and our state-of-the-art results for a VAE model relative to GANs (the very first such demonstration in the literature) provide further strong corroborating evidence.\\n\\nIn contrast, (Tomczak and Welling, 2018) choose to parameterize p(z) in such a way that the additional flexibility can provide simpler pathways for pushing p(z) and q(z) closer together. This is certainly an interesting idea, but it is significantly different from ours. But of course we agree that the ultimate purpose is the same: to have access to a known distribution with which to initiate passing samples through the decoder, a common goal shared by all autoencoder-structured models, including ours and many others like (Makhzani et al., 2016), where an adversarial loss is used to push p(z) and q(z) together. What ultimately distinguishes these methods is, to a large degree, the specific way in which this goal is addressed. We have no reservations about including additional discussion of (Tomczak and Welling, 2018), and these broader points in a revised version of our paper.\"}",
"{\"comment\": \"Thanks for your reply. I appreciate that there are certainly differences between the two, including in their original motivations, and I certainly not trying to imply your work is just a rehashing of theirs. I should point out that I am in no way associated with that paper so I have no ulterior motive to try and promote it or similar.\\n\\nHowever, I think the link between the two is a lot stronger than something to do with hierarchical priors and so I disagree with your suggestion above. The link is that both consider issues caused by the mismatch between the aggregate posterior q(z) and the prior p(z). In your work, you learn a second network to generate samples from q(z) and thus in turn p(x|z)q(z). In their formulation, they instead replace p(z) with q(z), therefore generating samples from exactly the same model as yours, at least in theory. In practice, they have to make approximations because q(z) is not directly available. Consequently, the two approaches are intimately linked to one another, the key methodological differences, in my opinion, being that in your case you only approximate q(z) after training and you use a different method to approximate q(z). There is a bit of a trade-off here, your method for approximating q(z) is almost certainly better, but this better approximation prevents you using it during training, which is likely to lead to a worse model being learned.\\n\\nConsequently, I think the link is a lot stronger than you are suggesting above, and thus this is an essential piece of related work to be considering.\", \"title\": \"Link is much stronger and more subtle than this\"}",
"{\"title\": \"Response to the Vamp prior reference suggestion\", \"comment\": \"Thank you for the reference to (Tomczak and Welling, 2018), which proposes a nice two-stage hierarchical prior to replace the parameter-free standardized Gaussian N(0,I) that is commonly used with VAE models. Note that multiple stages of latent variables have been aggregated in the context of VAE-like models going back to (Rezende et al., 2014). However, beyond the common use of two sets/stages of latent variables, our approach bares relatively little similarity to (Tomczak and Welling, 2018) or other multi-stage alternatives. For example, the underlying theoretical design principles/analysis, aggregate energy function parameterizations, and training strategies are not at all the same. Likewise, the empirical validation is completely different and incomparable as well; (Tomczak and Welling, 2018) focuses on demonstrating improved log-likelihood scores, while we concentrate exclusively on improving the quality of generated samples as explicitly quantified by FID scores. And we stress that these two evaluation criteria can be almost completely unrelated to one another in many circumstances (see for example, Theis et al., \\\"A Note on the Evaluation of Generative Models,\\\" ICLR 2016). And as a final point of differentiation, (Tomczak and Welling, 2018) tests only on small black-and-white images and includes no comparisons against GANs, while we include testing with larger color images like CelebA and directly compare against GANs in a neutral setting. Regardless, (Tomczak and Welling, 2018) still represents a compelling contribution, and space permitting, we can try to provide broader context in a revision.\"}",
"{\"title\": \"Response to the question\", \"comment\": \"Thanks for your interest in our work. Regarding the situation when gamma -> 0, the VAE will not actually default to a regular AE. Note that we can multiply both reconstruction and regularization terms (eqs. (8) and (9)) by gamma and then examine the limit as gamma becomes small; however, this does not allow us to discount all of the regularization factors even though they may be converging to zero as well. The convergence details turn out to be critical here.\\n\\nTo see this, consider the following simplified regularized regression problem which reflects the core underlying issue. Assume that we would like to solve\\n\\nmin_w (1/gamma)||y - A w||^2 + ||w||^2,\\n\\nwhere 1/gamma is a trade-off parameter, y is a known observation vector, A is an overcomplete matrix (full rank, with more columns than rows), and w represents unknown coefficients we would like to compute. If gamma -> 0, then any optimal solution must be in the feasible region where y = A w, meaning zero reconstruction error. Therefore, when gamma -> 0 solving this problem becomes formally equivalent to solving the constrained problem\\n\\nmin_w ||w||^2 subject to y = A w.\\n\\nOf course we could equally well consider multiplying both sides of the original objective by gamma, producing\\n\\nmin_w ||y - A w||^2 + gamma ||w||^2.\\n\\nThis shouldn't change the optimal w since we have just multiplied by a constant independent of w. But if gamma -> 0, then technically speaking, the regularization factor gamma ||w||^2 becomes arbitrarily small; however, this does not mean that we can simply ignore it because there are an infinite number of solutions whereby the data factor ||y - A w||^2 equals zero, i.e., a fixed, minimizing constant. The direct implication is that\\n\\nlimit gamma -> 0 arg min_w ||y - A w||^2 + gamma ||w||^2 \\\\neq arg min_w ||y - A w||^2,\\n\\nwhere the righthand side is just the objective obtained when gamma = 0, and it has an infinite number of minimizers unlike the lefthand side. In general, the regularization factor ||w||^2 will always have an influence in choosing which solution, out of the infinite number satisfying y = A w, is optimal, and the minimizing argument will again provably be the same as from the constrained problem above. This notion is well-established in the regularized regression literature, and generalizes to generic problems composed of data-fitting and regularization terms where the former in isolation has multiple equivalent minima. Returning to the VAE, if extra unneeded latent dimensions are present, then there will be an infinite number of latent representations capable of producing perfect reconstructions. The lingering KL regularization terms then determine which is optimal, per our analysis in Section 3 of the paper.\\n\\nAdditionally, in terms of adding small isotropic noise to observations x, the results will actually not be much different. This is because in practice, gamma need not converge to exactly zero, but only a small value near zero. This allows the model to slightly expand around the manifold and still apply high probability to the data. If the noise level is within such a modest expansion, then the behavior is more-or-less the same as if a low-dimensional manifold were present. 
Of course if added noise or other deviations from the manifold are too large, then obviously using additional dimensions to model the data may be required.\\n\\nFinally, with regard to your other question, we have also considered training a second-stage VAE on top of a regular autoencoder. This structure is discussed in footnote 5 on page 7.\"}",
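A quick numerical check of this limit (a self-contained sketch we provide here for illustration, not an excerpt from the paper): the ridge solution of the rescaled problem converges to the minimum-norm solution of the constrained problem as gamma -> 0.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))     # overcomplete: 6 unknowns, 3 equations
y = rng.standard_normal(3)

w_min_norm = np.linalg.pinv(A) @ y  # argmin ||w||^2 subject to y = A w

for gamma in (1e-1, 1e-4, 1e-8):
    # Closed-form minimizer of ||y - A w||^2 + gamma ||w||^2
    w = np.linalg.solve(A.T @ A + gamma * np.eye(6), A.T @ y)
    print(gamma, np.linalg.norm(w - w_min_norm))
# The gap shrinks with gamma: the vanishing regularizer still selects the
# minimum-norm solution among the infinitely many zero-error solutions.
```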
"{\"comment\": \"It is an interesting and refreshing paper. I have a question regarding the analysis on Eq. (9). When \\\\gamma->0, the coefficient (1/\\\\gamma) of the reconstruction term of Eq. (9) will approach infinity, which results in a loss function that is similar to that of a plain AE. To see that, we can multiply Eqs. (8) and (9) by \\\\gamma, then the coefficient of the reconstruction term becomes 1, while that of Eq. (9) approaches 0. Note that \\\\gamma\\\\log(\\\\gamma)->0 when \\\\gamma->0. So I don't see why \\\\hat{r} will be pushed to as small as possible. Intuitively, if we add some small (e.g., stddev=0.01) isotropic gaussian noise to x, we wouldn't expect the resulting model to be significantly different, while the analysis seems to suggest that \\\\hat{r} will suddenly increase from r to \\\\kappa (assuming \\\\kappa<d), since the manifold of the noisy x is d-dimensional. Moreover, it would be interesting to see if adding a second stage VAE on top of a plain AE can lead to similar performance gain.\", \"title\": \"A question regarding the analysis on Eq. (9) and the 2-stage VAE\"}",
"{\"comment\": \"The two-stage process you introduce seems very closely related to using a Vamp prior (https://arxiv.org/abs/1705.07120), wherein one effectively tries to replace the original prior with the aggregate posterior q(z) (though this is not achieved exactly for computational reasons). Obviously, there are some differences, but this seems like a natural baseline that should probably be compared to and at the very least a paper that should be being cited and discussed.\", \"title\": \"Relationship with the Vamp prior\"}",
"{\"title\": \"Response to AnonReviewer1 -- Part 1\", \"comment\": \"Thanks for providing detailed comments regarding our manuscript, including constructive ideas on how to improve the presentation and clarify the context. We address each main point in turn.\\n\\n\\n- Reviewer Comment: Limitation of Gaussian assumptions for likelihoods and approximate posteriors\", \"our_response\": \"In the introduction, we state that the most commonly adopted distributional assumption is that the encoder and decoder are Gaussian. This claim was based on an informal survey of numerous recent papers involving VAE models applied to continuous data (e.g., images, etc.). However, we completely agree that VAEs can also be successfully applied to discrete data types like language models, where these Gaussian assumptions can be more problematic. Although all of our theoretical developments are clearly framed in the context of continuous data on a manifold, we are happy to revise the introduction to better explain this issue up front. And of course the whole point of our paper is rigorously showing that even with seemingly restrictive Gaussian assumptions, highly non-Gaussian continuous distributions can nonetheless be accurately modeled.\\n\\nAlso, just to clarify one lingering point: although the decoder p(x|z) is defined to be Gaussian, it does not follow that the associated posterior p(z|x) will necessarily be Gaussian as well. In fact this will usually not be the case when using deep models and parameters in general position. However, the VAE can still push the KL divergence between p(z|x) and q(z|x) to zero even when the latter is constrained to be Gaussian as long as there exists at least some specially matched encoder-decoder parameterizations capable of pushing them together everywhere except on a space of measure zero. This was left as an open problem under general conditions in the most highly-cited VAE tutorial (Doersch, 2016), and is what we demonstrate in Section 2.\"}",
"{\"title\": \"Response to AnonReviewer1 -- Part 2\", \"comment\": [\"Reviewer Comment: Approximation error arising from finite samples not addressed; missing references to advances in approximate inference\"], \"our_response\": \"In an ideal world we would obviously like to have optimal finite sample approximations that closely reflect practical testing scenarios. But such a bar is impossibly high at this point. Overall, we believe the value of theoretical inquiry into asymptotic regimes (i.e., population data rather than finite samples) cannot be dismissed out of hand, especially when simplifying assumptions of some sort are absolutely essential in making any reasonable progress. Even so, the true test of any theoretical contribution is the degree to which it leads to useful, empirically-testable predictions about behavior in real-world settings. In the present context, our theory makes the seemingly counter-intuitive prediction that a simple two-stage VAE could circumvent existing problems and produce realistic samples. We then tested this idea via the neutral DNN architecture and comprehensive experimental design from reference (Lucic et al., 2018) and it immediately worked. It is also critical to emphasize that these experiments were designed by others to evaluate top-performing GAN models with respect to generated sample quality, they were not developed to favor our approach in any way via some carefully tuned architecture or setting. Therefore, regardless of whether or not our theory involves asymptotic assumptions, it made testable, non-obvious predictions that were confirmed in a real-world practical environment, providing the very first VAE-based architecture that is quantitatively competitive with GANs in generating novel samples (at least with continuous data like images). We strongly believe that this is the hallmark of a significant contribution.\\n\\nThe reviewer also mentions that we may be unfamiliar with certain recent advances in approximate Bayesian inference, but no references were provided. Which papers in particular is the reviewer referring to? We are quite open to hearing about relevant work that we may have missed; however, presently we are unaware of any overlooked references that might serve to discount the message of our paper. Note that there is an extensive recent literature developing more sophisticated VAE inference networks using normalizing flows and related. However, to the best of our knowledge, none of these works contain quantitative evaluations of generated sample quality (our focus here), and many (possibly most) do not even contain visualizations of images generated by the model. Please see reference (van den Berg et al., \\\"Sylvester Normalizing Flows for Variational Inference,\\\" UAI 2018) for the latest representative example we have found. Of course our point here is not to disparage insightful papers of this type that provide significant advances in approximate inference. Rather we are merely arguing that they seem to be somewhat out of the scope of our present submission, especially given the limited space for broader discussions. But we can try to squeeze in more references and background perspective of this nature if the reviewer feels it could be helpful.\"}",
"{\"title\": \"Response to AnonReviewer1 -- Part 3\", \"comment\": [\"Reviewer Comment: No comparisons against VAE models with more flexible approximate posteriors such as those produced via normalizing flows\"], \"our_response\": \"We sincerely appreciate the effort in finding typos and checking the proofs. We have corrected each of the cases the reviewer uncovered. This will certainly be of benefit to future readers. Additionally, r can never be greater than d, because r is the manifold dimension within the ambient space of dimension d.\"}",
"{\"title\": \"Response to AnonReviewer3 -- Part 1\", \"comment\": \"Thanks for providing feedback regarding our submission and indicating specific points of uncertainty. We provide detailed answers to each question as follows:\\n\\n\\n1.\\tReviewer Comment: Why do the second-stage VAE latent variables more closely resemble N(0,I), and how does this ensure that the generated samples are realistic, especially if the dimension of the latent space is high?\", \"our_response\": \"These issues are addressed in Section 4 of our paper, building on foundational properties of VAE models and our theory from Sections 2 and 3, but we can provide some additional background details here. First, it can be helpful to check reference (Makhzani et al., 2016) which defines the aggregate posterior q(z) = \\\\int q(z|x)p_gt(x)dx, where q(z|x) serves as the encoder and p_gt(x) is the ground-truth data density. The basic idea behind generative models framed upon an autoencoder structure (VAE or otherwise) is that two criteria are required for producing good samples: (i) small reconstruction error when passing through the encoder-decoder networks, and (ii) an aggregate posterior q(z) that is close to some known distribution like N(0,I) that is easy to sample from. Without the latter criteria, we have no tractable way of generating random inputs to the learned decoder that will produce realistic samples resembling the training data distribution.\\n\\nIn the context of our paper and VAE models, we argue that the first-stage VAE provides small reconstruction errors using a minimal number of latent dimensions (if parameterized properly with a trainable decoder variance), but not necessarily an aggregate posterior q(z) that is close to N(0,I). This is because the basic VAE cost function is heavily biased towards finding low-dimensional manifolds upon which the data resides at the expense of learning the correct distribution within this manifold, which also prevents the aggregate posterior from nearing N(0,I). However, although the VAE may partially fail in this regard, it nonetheless provides a useful mapping to a lower-dimensional space in such a way that we can apply Theorem 1 from our work. In this lower dimensional space we treat q(z) \\\\neq N(0,I) as a revised ground-truth data distribution p_gt(z), and train a new VAE with latent variables u. Based on Theorem 1, in this restricted setting there will exist at least some parameterizations of the new encoder q(u|z) and decoder p(z|u) such that perfect reconstructions are possible, p_gt(z) is fully recovered, and KL[ q(u|z) || p(u|z) ] -> 0. If this all occurs, then we have the new second-stage aggregate posterior\\n\\nq(u) = \\\\int q(u|z)p_gt(z)dz = \\\\int p(u|z)p_gt(z)dz = \\\\int p_gt(z|u)p(u)dz = p(u) = N(0,I)\\n\\nas desired. For practical deployment, we then only need sample u from N(0,I), then z from p(z|u), and finally x from p(x|z). Note also that if the latent dimension of z is higher than actually needed, the first-stage VAE decoder is effectively capable of blocking/pruning the extra dimensions as discussed in Section 3. This will not guarantee high quality samples, but it is adequate for preparing the data from the aggregate posterior q(z) to satisfy Theorem 1, which can then be leveraged by the second-stage VAE as mentioned above and in our paper.\"}",
"{\"title\": \"Response to AnonReviewer3 -- Part 2\", \"comment\": \"2.\\tReviewer Comment: The adversarial autoencoder is also proposed to solve the latent space problem, by comparison, what is the advantage of this paper?\", \"our_response\": \"The true latent manifold dimension r is unknown in all of our experiments since we are using real-world data. However, for the dimension of the VAE latent code, we chose kappa = 64 for all experiments, except for the 2-Stage VAE* model results, where we used 32 for MNIST and Fashion-MNIST, 192 for CIFAR-10, and 256 for CelebA. Note that these values were not carefully tuned and need not be exact per the arguments responding to reviewer comment 4 above. We just tried a single smaller value for the simpler data (MNIST and FashionMNIST), and a couple larger values for the more complex ones (CIFAR-10 and CelebA).\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We appreciate the detailed and positive comments, which truly reflect many of the essential contributions of our work. Likewise to the best of our knowledge, the FID scores we report are indeed the first to close the gap between GANs and non-adversarial AE-based methods as the reviewer points out. Regarding the small comments concluding the review, we answer as follows:\\n\\n\\n- Reviewer Comment: Is the code / checkpoints going to be available anytime soon?\", \"our_response\": \"Thanks for catching each of these and also checking the proofs carefully. We have fixed each typo/suggestion in a revised version.\"}",
"{\"title\": \"The paper establishes a new state-of-art FID scores among auto-encoder based generative models with solid theoretical insights supporting the empirical results\", \"review\": \"The paper provides a number of novel interesting theoretical results on \\\"vanilla\\\" Gaussian Variational Auto-Encoders (VAEs) (sections 1, 2, and 3), which are then used to build a new algorithm called \\\"2 stage VAEs\\\" (Section 4). The resulting algorithm is as stable as VAEs to train (it is free of any sort of adversarial training, it comes with a little overhead in terms of extra parameters), while achieving a quality of samples which is *very impressive* for an Auto-Encoder (AE) based generative modeling techniques (Section 5). In particular, the method achieves FID score 24 on the CelebA dataset which is on par with the best GAN-based models as reported in [1], thus sufficiently reducing the gap between the generative quality of the GAN-based and AE-based models reported in the literature.\", \"main_theoretical_contributions\": \"1. In some cases the variational bound of Gaussian VAEs can get tight (Theorem 1).\\nIn the context of vanilla Gaussian VAEs (Gaussian prior, encoders, and decoders) the authors show that if (a) the intrinsic data dimensionality r is equal to the data space dimensionality d and (b) the latent space dimensionality k is not smaller than r then there is a sequence of encoder-decoder pairs achieving the global minimum of the VAE objective and simultaneously (a) zeroing the variational gap and (b) precisely matching the true data distribution. In other words, in this setting the variational bound and the Gaussian model does not prevent the true data distribution from being recovered.\\n\\n2. In other cases Gaussian VAEs may not recover the actual distribution, but they will recover the real manifold (Theorems 2, 3, 4 and discussions on page 5).\\nIn case when r < d, that is when the data distribution is supported on a low dimensional smooth manifold in the input space, things are quite different. The authors show that there are still sequences of encoder-decoder pairs which achieves the global minimum of the VAE objective. However, this time only *some* of these sequences converge to the model which is in a way indistinguishable from the true data distribution (and thus again Gaussian VAEs do not fundamentally prevent the true distribution from being recovered). Nevertheless, all sequences mentioned above recover the true data manifold in that (a) the optimal encoder learns to use r dimensional linear subspace in the latent space to encode the inputs in a lossless and noise-free way, while filling the remaining k - r dimensions with a white Gaussian noise and (b) the decoder learns to ignore the k - r noisy dimensions and use the r \\\"informative\\\" dimensions to produce the outputs perfectly landing on the true data manifold.\", \"main_algorithmic_contributions\": \"(0) A simple 2 stage algorithm, where first a vanilla Gaussian VAE is trained on the input dataset and second a separate vanilla Gaussian VAE is trained to match the aggregate posterior obtained after the first stage. The authors support this algorithm with a reasonable theoretical argument based on theoretical insights listed above (see end of page 6 - beginning of page 7). The algorithm achieves state-of-art FID scores across several data sets among AE based models existing in the literature.\", \"review_summary\": \"I would like to say that this paper was a breath of fresh air to me. 
I really liked how the authors make a strong point that *it is not the Gaussian assumptions that harm the performance of VAEs*, in contrast to what is usually believed in the field nowadays. Also, I think *the reported FID scores alone may be considered as a significant enough contribution*, because to my knowledge this is the first paper significantly closing the gap between the generative quality of GAN-based models and non-adversarial AE-based methods. \\n\\n***************\\n*** Couple of comments and typos:\\n***************\\n(0) Are the code / checkpoints going to be available anytime soon?\\n(1) I would mention [2] which in a way used a very similar approach, where the aggregate posterior of the implicit generative model was modeled with a separate implicit generative model. Of course, the two approaches are very different ([2] used adversarial training to match the aggregate posterior); however, I believe the paper is worth mentioning.\\n(2) In light of the discussion on page 6 as well as some of the conclusions regarding commonly reported blurriness of the VAE models, the results of Section 4.1 of [3] look quite relevant. \\n(3) It would be nice to specify the dimensionality of the Sz matrix in definition 1.\\n(4) Line after Eq. 3: I think it should be $\\int p_gt(x) \\log p_\\theta(x) dx$?\\n(5) Eq 4: p_\\theta(x|x)\\n(6) Page 4: \\\"... mass to most all measurable...\\\".\\n(7) Eq 34. Is it sqrt(\\gamma_t) or just \\gamma_t?\\n(8) Line after Eq 40. Why exactly is D(u^*) finite?\\n\\nI only checked the proofs of Theorems 1 and 2 in detail and those looked correct. \\n\\n[1] Lucic et al., 2018.\\n[2] Zhao et al., Adversarially regularized autoencoders, 2017, http://proceedings.mlr.press/v80/zhao18b.html\\n[3] Bousquet et al., From optimal transport to generative modeling: the VEGAN cookbook. 2017, https://arxiv.org/abs/1705.07642\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Two-stage VAE method to generate high-quality samples and avoid blurriness\", \"review\": \"This paper proposed a two-stage VAE method to generate high-quality samples and avoid blurriness. It is accomplished by utilizing a VAE structure on the observation and latent variable separately. The paper exploited a collection of interesting properties of VAE and point out the problem existed in the generative process of VAE. I have several concerns about the paper:\\n\\n1.\\tIt is necessary to explain why the second-stage VAE can have its latent variable more closely resemble N(u|0,I). Even if the latent variable closely resemble N(u|0,I), How does it make sure the generated images are realistic? I admit that the VAE model can reconstruct realistic data based on its inferred latent variable, however, when given a random sample from N(u|0,I), the generated images are not good, which is true when the dimension of the latent space is high. I still can\\u2019t understand why a second-stage VAE can relief this problem.\\n2.\\tThe adversarial auto-encoder is also proposed to solve the latent space problem, by comparison, what is the advantage of this paper?\\n3.\\tWhy do you set the model as two separate stages? Will it enhance the performance if we train theses two-stages all together?\\n4.\\tThe proofs for the theory 2 and 3 are under the assumption that the manifold dimension of the observation is r, while in reality it is difficult to obtain this r, do these theories applicable if we choose a value for the dimension of the latent space that is smaller than the real manifold dimension of the observation? How will it affect the performance of the proposed method?\\n5.\\tThe value of r and k in each experiment should be specified.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Careful analysis of Gaussian VAEs yields valuable insights and training procedure\", \"review\": \"Overview:\\nI thank the authors for their interesting and detailed work in this paper. I believe it has the potential to provide strong value to the community interested in using VAEs with an explicit and simple parameterization of the approximate posterior and likelihood as Gaussian. Gaussianity can be appropriate in many cases where no sequential or discrete structure needs to be induced in the model. I find the mathematical arguments interesting and enlightening. However, the authors somewhat mischaracterize the scope of applicability of VAE models in contemporary machine learning, and don't show familiarity with the broad literature around VAEs outside of this case (that is, where a Gaussian model of the output would be manifestly inappropriate). Since the core of the paper is valuable and salvageable from a clarity standpoint, my comments below are geared towards what changes the authors may make to move this paper into the \\\"pass\\\" category.\", \"pros\": [\"Mathematical insights are well reasoned and interesting. Based on the insight from the analysis in the supplementary materials, the authors propose a two-stage VAE which separate learning the a parsimonious representation of the low-dimensional (lower than the ambient dimension of the input space), and the training a second VAE to learn the unknown approximate posterior. The two-stage training procedure is both theoretically motivated and appears to enhance the output quality of VAEs w.r.t. FID score, making them rival GAN architectures on this metric.\"], \"cons\": [\"The title and general tone of the paper is too broad: it is only VAE models with Gaussian approximate posteriors and likelihoods. This is hardly the norm for most applications, contrary to the claims of the authors. VAEs are commonly used for discrete random variables, for example. Many cases where VAEs are applied cannot use a Gaussian assumption for the likelihood, which is the key requirement for the proofs in the supplement to be valid (then, the true posterior is also Gaussian, and the KL divergence between that and the approximate posterior can be driven to zero during optimization--clearly a Gaussian approximate posterior will never have zero KL divergence with a non-Gaussian true posterior).\", \"None of the proofs consider the approximation error garnered by only having access to empirical samples through a sample of the ground truth population. (The ground-truth distribution must be defined with respect to the population rather just the dataset in hand, otherwise we lose all generalizability from a model.) Moreover, the proofs hold asymptotically. Generalization bounds and error from finite time approximations are very pertinent issues and these are ignored by the presented analyses. Such concerns have motivated many of the recent developments in approximate posterior distributions. Overall, the paper contains little evidence of familiarity with the recent advances in approximate Bayesian inference that have occurred over the past two years.\", \"A central claim of the paper is that the two-stage VAE obviates the need for highly adaptive approximate posteriors. However, no comparison against those models is done in the paper. How does a two-stage VAE compare against one with, e.g., a normalizing flow approximate posterior? 
I acknowledge that the purpose of the paper was to argue for the Gaussianity assumption as less stringent than previously believed, but all of the mathematical arguments take place in an imagined world with infinite time and unbounded access to the population distribution. This is not really the domain of interest in modern computational statistics / machine learning, where issues of generalization and computational efficiency are paramount.\", \"While the mathematical insights are well developed, the specifics of the algorithm used to implement the two-stage VAE are a little opaque. Ancestral sampling now takes place using latent samples from a second VAE. An algorithm box is badly needed for reproducibility.\", \"Recommendations / Typos\", \"I noted a few typos and omissions that need correction.\", \"Generally, the mathematical proofs in section 7 of the supplement are clear. At the top of page 11, though, the paragraph correctly begins by stating that the composition of invertible functions is invertible, but fails to establish that G is also invertible. Clearly it is so by construction, but the explicit reasons should be stated (as a prior sentence promises), and so I assume this is an accidental omission.\", \"The title of Section 8.1 has a typo: clearly it is the negative log of p_{theta_t} (x) which approaches its infimum rather than p_{theta_t} (x) approaching negative infinity.\", \"Equation (4): the true posterior has an x as its argument instead of the latent z.\", \"Missing parenthesis under Case 2 and wrong indentation. This analysis also seems to be cut off. Is the case r > d relevant here?\", \"EDIT: I have read the authors' detailed response. It has clarified a few key issues, and convinced me of the paper's value to the community for publication in its present (slightly edited according to the reviewers' feedback) form. I would like to see this published and discussed at ICLR and have revised my score accordingly.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
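The two-stage recipe summarized in the reviews above is compact enough to sketch in code. Below is a minimal, hypothetical PyTorch illustration; the module names, dimensions, and hyperparameters are assumptions for exposition, not the authors' released code. Stage one fits a vanilla Gaussian VAE to the data, stage two fits a second Gaussian VAE to the stage-one latent codes, and sampling proceeds ancestrally through both decoders.

```python
# Minimal sketch of the two-stage VAE pipeline (illustrative only).
import torch
import torch.nn as nn

class GaussianVAE(nn.Module):
    """Vanilla Gaussian VAE with small MLP encoder/decoder."""
    def __init__(self, x_dim, z_dim, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

    def loss(self, x):
        z, mu, logvar = self.encode(x)
        rec = ((self.dec(z) - x) ** 2).sum(dim=1).mean()  # Gaussian NLL (up to const.)
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(dim=1).mean()
        return rec + kl

def train(vae, data, steps=1000, lr=1e-3):
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        vae.loss(data[torch.randint(len(data), (128,))]).backward()
        opt.step()

# Stage 1: fit the data. Stage 2: fit the stage-1 latent codes.
x = torch.randn(10000, 32)                      # placeholder dataset
vae1, vae2 = GaussianVAE(32, 8), GaussianVAE(8, 8)
train(vae1, x)
with torch.no_grad():
    z1 = vae1.encode(x)[0]                      # stage-1 latent codes
train(vae2, z1)

# Ancestral sampling: u ~ N(0, I) -> stage-2 decoder -> stage-1 decoder.
with torch.no_grad():
    samples = vae1.dec(vae2.dec(torch.randn(16, 8)))
```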
|
HylTXn0qYX | Efficiently testing local optimality and escaping saddles for ReLU networks | [
"Chulhee Yun",
"Suvrit Sra",
"Ali Jadbabaie"
] | We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points on the nondifferentiability of the ReLU divides the parameter space into at most 2^M regions, which makes analysis difficult. By exploiting polyhedral geometry, we reduce the total computation down to one convex quadratic program (QP) for each hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QP. For the last QP, we show that our specific problem can be solved efficiently, in spite of nonconvexity. In the benign case, we solve one equality constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. Our experiments show that either benign case or bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases. | [
"local optimality",
"second-order stationary point",
"escaping saddle points",
"nondifferentiability",
"ReLU",
"empirical risk"
] | https://openreview.net/pdf?id=HylTXn0qYX | https://openreview.net/forum?id=HylTXn0qYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryl2V8tAy4",
"BJx4YOG6C7",
"rylPNDchRX",
"H1giRmc30Q",
"HJlqYuFqAX",
"SklaadxYRm",
"H1xBzsAQAQ",
"rJxE090Q0X",
"SyxnsqC7RQ",
"SyxXdq07Rm",
"rJl-Gq0mAQ",
"rkl6k5D7am",
"rkeveQE02m",
"SklEPann2X",
"SkxeZJD5nQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544619571563,
1543477371709,
1543444270887,
1543443410865,
1543309441595,
1543207108568,
1542871820636,
1542871755883,
1542871716382,
1542871659086,
1542871560662,
1541794277318,
1541452527220,
1541356892374,
1541201656109
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1404/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1404/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new method for verifying whether a given point of a two layer ReLU network is a local minima or a second order stationary point and checks for descent directions. All reviewers agree that the algorithm is based on number of new techniques involving both convex and non-convex QPs, and is novel. The method proposed in the paper has significant limitations as the method is not robust to handle approximate stationary points. Given these limitations, there is a disagreement between reviewers about the significance of the result . While I share the same concerns as R4, I agree with R3 and believe that the new ideas in the paper will inspire future work to extend the proposed method towards addressing these limitations. Hence I suggest acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Clarification\", \"comment\": \"Sorry for the confusion. What worries me most is not a practical implementation.\\n\\nFrom a theoretical point of view, the current version can only test if a point is a real SOSP. Thus this is only a qualitative result. I expect a theoretical machine learning paper in ICLR/ICML/NIPS/COLT to have at least some quantitative analysis on the error (even it has a large polynomial or exponential dependency), i.e., robust analysis for the problem studied in this paper.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your positive feedback and support! As you pointed out, analyzing and implementing a robust version of the algorithm will require a significant amount of additional effort. We hope to tackle this in the future.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your response. Allow us to summarize our point again: we are providing the foundation for a practical check of local optimality that can be eventually turned into a useful algorithm in practice. Our paper and our review response express these results and contributions, acknowledging current limitations honestly and without any overclaims.\\n\\nIn particular, our work is a theoretical contribution, and implementing (plus analyzing) a robust version requires much more additional effort, well beyond the current paper; to be fair, no non-trivial theory is developed in one day, and the amount of work that went into laying the foundations for a future robust analysis is fairly substantial already, in our opinion.\\n\\nWe are disappointed by Reviewer 4\\u2019s decision to deduct their rating from 5 to 3. We believe that a rating of \\u201cclear rejection,\\u201d just because the paper does not build the whole tower already, is a bit harsh, as it ignores the importance of the foundations established herein. We would like to ask the reviewers to take this aspect into account in deciding their final ratings\\u2014ultimately, because we believe that the paper is well on-topic, and will spur follow up work.\"}",
"{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your response!\\nI have updated my review. I encourage author(s) to add the robust analysis and submit to the next top machine learning conference.\"}",
"{\"title\": \"Response acknowledged\", \"comment\": \"I read the authors' response. Given the shared concerns by all reviewers about robustness and applicability, I am not quite as positive as I was before, but I still support the paper. The authors seem well-aware of the shortcomings of the work, which would require a major new work to address. I think this work is an interesting stepping-stone, and shows original thought.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for their efforts in reviewing our paper. We will address the concerns by the reviewer below:\\n\\n1. Due to multiple reviewers raising a similar point, we addressed this issue in a separate comment above. Please refer to item (2) of the comment.\\n\\n2. For the discussion on the robustness of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\\n\\n3. Thank you for the suggestion! Since our analysis is specialized for ReLU neural networks, it would be a good idea to place Lemma 2 in Section 2.1. We will update the paper in our next revision.\\n\\nAs for \\u201cminor comments\\u201d: Thank you for pointing out the typos. We will fix those issues as we revise our paper.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We appreciate the reviewer for their time and thoughtful comments. Below, we will provide answers to the reviewer\\u2019s concerns.\\n\\n1) For the discussion on the precision of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\\n\\n2) It is true that the set of nondifferentiable points has measure zero. On the other hand, please note that a nondifferentiable point can have multiple boundary data points, i.e., x_i\\u2019s that satisfy [W_1 x_i + b_1]_k = 0 for some k (input to the k-th hidden node is zero for this x_i). Also, such nondifferentiable points with many boundary data points lie on the intersection of subsets of the parameter space, where each subset corresponds to one boundary data point x_i and contains the parameter values satisfying [W_1 x_i + b_1]_k = 0.\\nOur experiments were run for an extended period of time with exponentially decaying step size, to get as close to the exact nondifferentiable point (potentially local minimum) as possible. And then, we counted the number of \\u201capproximate\\u201d boundary data points, i.e., x_i\\u2019s that satisfied abs( [W_1 x_i + b_1]_k ) < 1e-5 for some k. In our experimental settings, it turns out that gradient descent pushes the parameters to a point with multiple boundary data points (i.e., M is large), but there are usually very few flat extreme rays (i.e., L is small).\\n\\n3a) At our current stage of results, we are not claiming that our algorithm is useful in practice. By the experimental results, we are claiming the following: 1) given that M can be large in practice, our analysis of nondifferentiable points is meaningful, 2) L is usually very small in our experiments, so testing local minimality at nondifferentiable points can be tractable.\\n\\n3b) Our analysis for now is limited to one-hidden-layer networks. For deeper networks, perturbation on the first layer may affect later layers, so the extension to deep networks is beyond the scope of this paper. For now, we leave this extension as future work.\\n\\n3c) Due to multiple reviewers with similar concerns, we addressed this issue in a separate comment above. Please refer to item (2) of the comment.\\n\\n3d) Your observation is true, because other activation functions do not have nondifferentiable points. In such cases, we can directly compute the gradient and Hessian, so the second-order stationarity test is straightforward. However, ReLU is one of the most popular activation functions, and it inevitably introduces nondifferentiable points in the empirical risk, which are difficult to analyze. The goal of our paper is to shed light on a better understanding of such nonsmooth points.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the time and effort invested in reviewing our paper. Below, we will address the comments point by point:\\n\\n1) For the discussion on ideal conditions of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\\n \\n2) We agree that we do not have a good theoretical bound on L, so in the worst case we might suffer exponential running time. Due to the complex nature of the loss surface of empirical risk minimization problems, providing tight theoretical bound for L might be very difficult, so we instead provide some empirical evidence showing L is usually small. We leave the theory side as future work.\\n\\n3) Indeed, the computational cost of calculating exact (sub)differentials and Hessians grow proportionally with the number of data points m. It seems difficult to obtain a stochastic version unless we add assumptions on the distribution of data points. If we can develop a robust version of the algorithm as mentioned in item 1), then with some distributional assumptions on data, we expect that we can get some high probability results for a stochastic version.\\nHowever, even without the stochastic version, we expect that (a numerical implementation of) our algorithm will be used only for testing local optimality almost at the end of training, not every iteration. Thus, its computational cost will not be too big.\\n\\nThank you very much for pointing out those typos. $[N(x_i)]_k$ is originally meant to be $[W_1 x_i + b_1]_k$. We will fix these typos in the next revision.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you very much for your feedback. We are glad that you enjoyed reading our paper. We list our answers to your comments, by their numbering:\\n\\n(1) Yes, we agree that there is certainly room for improvement; we will make our best efforts in revising the paper accordingly.\\n\\n(2) For the discussion on the robustness of the algorithm in general, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the general comments.\\nRegarding the specific concern of testing if a directional derivative is zero, we believe that the reviewer is talking about testing the existence of flat extreme rays. In our experiments, to count the number of \\u201capproximate\\u201d flat extreme rays, we used our lemma A.1 that gives conditions for existence of flat extreme rays, and tested if these conditions are approximately satisfied. For more details, please refer to the end of the 2nd paragraph of Section 4.\\n\\n(3) The main purpose of our numerical experiments is to provide an empirical evidence of how many boundary data points (M) and flat extreme rays (L) we can have, because these quantities are difficult to estimate/bound theoretically. Our experiments show that, in our settings, there can be nonsmooth local minima with large M (implying that our analysis on nonsmooth points is meaningful) but L is usually surprisingly small.\"}",
"{\"title\": \"General remarks on commonly-raised concerns from reviewers\", \"comment\": \"Dear reviewers,\\n\\nWe truly appreciate the time and effort put in reviewing our paper, and we thank you all for your thoughtful comments and suggestions.\\n\\nThere were some concerns common to multiple reviewers, so we address them in a separate comment here.\\n\\n(1) Regarding precision / robustness / ideal conditions issues\\nAll reviewers raised concerns about numerical applicability of our algorithm. We would like to emphasize that this paper is a theoretical contribution seeking to understand the nondifferentiable points of the empirical risk surface of ReLU networks. As noted in the introduction, our understanding of nonsmooth points of empirical risk is limited, but in some cases, nonsmooth points can be precisely the \\u201cinteresting points\\u201d that we care about. In this paper, we are theoretically/empirically showing that testing local optimality/second order stationarity at nondifferentiable points can be tractable (if the number of flat extreme rays is small), by exploiting the geometric structure of empirical risk.\\nBut we fully agree (as also noted in Section 1 in the Remarks and in Section 5) that creating a numerically robust version of this algorithm that works for \\u201cclose-to-nondifferential\\u201d points and approximate SOSPs will be needed before our theoretical work can attain its true practical significance---this goal requires a fairly substantial amount of effort (both theory and practice) and we hope to tackle it in the future.\\n\\n(2) Regarding practical usefulness of the algorithm \\nReviewers 1 and 4 raised concerns whether this algorithm is really meaningful in practice, given that SGD already performs well enough without our algorithm. It is true that in practice, SGD easily achieves near-zero empirical risk most of the time. However, please note that the solutions that we obtain at the end of training are not necessarily global or even local minima, because in practice we don\\u2019t have optimality tests / certificates during training.\\nIn contrast, one of the most important beneficial features of convex optimization is existence of an optimality test (e.g., norm of the gradient is smaller than a certain threshold) for termination, which gives us a certificate of optimality. One of our motivations is that deep learning may also benefit from such optimality tests. Our analysis and experimental results suggest that even in the nonconvex and nonsmooth case of ReLU networks, it is sometimes not too difficult to get such a certificate of local optimality (we remind the readers that in general detecting local optimality for nonconvex problems is \\u201cNP-Hard\\u201d).\\nWith a proper numerical implementation of our algorithm (although we leave it for future work), one can run a first-order method until it gets stuck near a point, and run our algorithm to test for optimality/second-order stationarity. If the point is an (approximate) SOSP, we can terminate without further computation time over many epochs; if the point has a descent direction, our algorithm will return a descent direction and we can continue on optimizing. Note that the descent direction may come from the second-order information; our algorithm even allows us to escape nonsmooth second-order saddle points.\\n\\nWe address the remaining points individually.\"}",
"{\"title\": \"Review\", \"review\": \"Updates:\\nAuthor(s) acknowledged that they cannot get a robust analysis. Furthermore, the optimality test also requires a robust analysis. Therefore, I believe the current version is still incomplete so I changed my score. I encourage author(s) to add the robust analysis and submit to the next top machine learning conference.\\n\\n-------------------------------------------\", \"paper_summary\": \"This paper gives a new algorithm to check whether a given point is a (generalized) second-order stationary point if not, it can return a strict descent direction even at this point the objective function (empirical risks of two-layer ReLU or Leaky-ReLU networks) is not differentiable.\\nThe main challenge comes from the non-differentiability of ReLU. While testing a second-order stationary point is easy, because of the non-differentiability, one needs to test 2^M regions in the ReLU case. This paper exploits the special structure of two-layer ReLU network and shows it suffices to check only the extreme rays of the polyhedral cones which are the feasible sets of these 2^M linear programs.\", \"comments\": \"1. About Motivation. While checking the optimality on a non-differentiable point is a mathematically interesting problem, it has little use in deep learning. In practice, SGD often finds a global minimum easily of ReLU-activated deep neural networks [1].\\n2. This algorithm can only test if a point is a real SOSP. In practice, we can only hope to get an approximate SOSP. I expect a robust analysis, i.e., can we check whether it is a (\\\\epsilon,\\\\delta) SOSP?\\n3. About writing: g(z,\\\\eta) and H(z,\\\\eta) appear in Section 1 and Section 2, and they are used to define generalized SOSP. However, their formal definitions are in Lemma 2. I suggest give the formal definitions in Section 1 or Section 2 and give more intuitions on their formulas.\", \"minor_comments\": \"1. Many typos in references, e.g., cnn -> CNN.\\n2. Page 4: Big-Oh -> Big O.\\n\\n\\n\\nOverall I think this paper presents some interesting ideas but I am unsatisfied with the issues above. I am happy to see the authors\\u2019 response, and I may modify my score. \\n\\n\\n[1] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"This paper proposes an efficient method to test whether a point is a local minimum in a 1-hidden-layer ReLU network. If the point is not a local minimum, the algorithm also returns a direction for descending the value of the loss function.\\n\\nThe tests include a first-order stationary point test (FOSP), and a second-order stationary point test (SOSP). As these test can be written as QPs, the core challenge is that if there are M boundary points in the dataset, i.e., data points on a non-differentiable region of the ReLU function, then the FOSP test requires 2^M tests of extreme rays -- each boundary partition the whole space into at least two parts. This paper observes that since the feasible sets are pointed polyhedral cones. Therefore checking only these extreme rays suffices. This results in an efficient test with only 2M tests. \\n\\nLastly, the paper performs experiments on synthetic data. It turns out there are surprisingly many boundary points.\", \"comments\": \"This paper proposes an interesting method of testing whether a given point is a local minimum or not in a ReLU network. The technique is non-trivial and requires some key observation to make it computationally efficient. However, I have the following concerns:\\n1) such a test may need very high numeric precision. For instance, you cannot make sure whether a floating point number is strictly greater than 0 or not. The small error may critically affect the property of a point. \\n2) boundary points of a ReLU network should have measure 0 (correct me if not). The finding in the experiment shows surprisingly many boundary points. This is counter-intuitive. Is it because of numeric issues? You might misclassify non-boundary points.\\n3) Usefulness. \\n a. The paper claims that such a test would be very useful in practice. However, they cannot even perform an experiment on real datasets. \\n b. Such a method only works for one-hidden layer network. It is not clear deeper network admit similar structure. \\n c. Practical training of neural-network usually trains the network using SGD, which always obtain a solution with a non-zero gradient. In this sense, there is no need for such a testing. \\n d. It seems like it is much easier to perform a test with different activation function, e.g., sigmoid.\\n \\nIf the authors can address these concerns convincingly, I would be happy to change the rating.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Algorithm for testing local optimality in ReLU networks\", \"review\": \"Summary:\\nThis work proposes a theoretical algorithm for checking local optimality and escaping saddles when training two-layer ReLU networks. The proposed \\\"checking algorithm\\\" involves solving convex and non-convex quadratic programs (QP) which can be done in polynomial time. The paper is well organized and technically correct with detailed proofs.\", \"comments\": \"1) Applicability issue: the conditions required by the proposed checking algorithm are too ideal, making it difficult to apply in practical applications. For example, the first step of the proposed algorithm is to check whether 0 belongs to the subdifferential. In practice, the iterates may get very close to a stationary point, but arriving to a stationary point might be too time-consuming and unrealistic. If the problem is smooth, then the gradient is expected to be small so that one can easily relax this first order optimality condition by allowing a small gradient. However, since here the problem is nonsmooth, in general the subgradient could be still very large even when the iterate is very close to a stationary point. Therefore, one would need to relax the ideal conditions in the proposed algorithm to make it more applicable.\\n\\n2) Another concern is that the efficiency of the proposed method relies too much on the empirical result that the number of flat extreme ray is small. The computational complexities for the test of the local optimality is exponentially depending on the number of flat extreme rays. Thus to guarantee a high efficiency of the proposed test algorithm and to make the main theory sound, it is important to provide a theoretical bound on this number. Without appropriate theoretical guarantees on the upper-bound of this number, it is not persuasive to claim that the proposed theoretical algorithm is of high efficiency.\\n \\n3) The computational complexity is proportional to the number of training data points which could be huge. Is it possible to have a stochastic version?\", \"typos\": \"1) On page 2, under Section 2, ``$h(t):=$\\\" should be ``$h(x):=$\\\"\\n\\n2) In section 2.1, at the end of the paragraph \\\"Bisection by boundary data points\\\": change $b_1$ by $\\\\delta_1$ in ``$\\\\Delta_1x_i+b_1$\\\".\\n\\n3) On page 4, when defining B_k, change x by x_i. \\n\\n4) On page 5, above Lemma 1, when defining C_k, N(x_i) is not well defined.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Refreshing ideas, clever algorithm, unclear impact\", \"review\": \"The paper proposes a method to check if a given point is a stationary point or not (if not, it provides a descent direction), and then classify stationary points as either local min or second-order stationary. The method works for a specific non-differentiable loss. In the worst case, there can be exponentially many flat directions to check (2^L), but usually this is no the case.\\n\\nOverall, I'm impressed. The analysis seems solid, and a lot of clever ideas are used to get around issues (such as exponential number of regions, and non-convex QPs that cannot be solved by the S-procedure or simple tricks). A wide-variety of techniques are used: non-smooth analysis, recent analysis of non-convex QPs, copositive optimization.\\n\\nThe writing is clear and makes most arguments easy to follow.\", \"there_are_some_limitations\": \"(1) the technical details are hard to follow, and most are in a lengthy appendix, which I did not check\\n\\n(2) there was no discussion of robustness. If I find a direction eta for which the directional derivative is zero, what do you mean by \\\"zero\\\"? This is implemented on a computer, so we don't really expect to find a directional derivative that is exactly zero. I would have liked to see some discussions with epsilons, and give me a guarantee of an epsilon-SOSP or some kind of notion. In the experiments, this isn't discussed (though another issue is touched on a little bit: you wanted to find real stationary points to test, but you don't have exactly stationary points, but rather can get arbitrarily close). To make this practical, I think you need a robust theory.\\n\\n(3) The numerical simulations mainly provided some evidence that there are usually not too many flat directions, but don't convince us that this is a useful technique on a real problem. The discussion about possible loss functions at the end was a bit opaque. Furthermore, if you can't find a dataset/loss, then why is this technique useful?\\n\\nThe paper is interesting and novel enough that despite the limitations, I am supportive of publishing it. It introduces new ideas that I find refreshing. The technique many not ever make it into the state-of-the-art algorithms, but I think the paper has intellectual value regardless of practical value.\\n\\nIn short, quality = high, clarity=high, originality=very high, and significance=hard-to-predict\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
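One building block mentioned in the abstract above is solving an equality-constrained QP with projected gradient descent. The snippet below is a generic textbook sketch of that routine on a convex toy instance; the variable names and the positive-semidefiniteness assumption on Q are ours, and this is not the paper's exact algorithm (which also handles a specific nonconvex QP).

```python
# Minimal sketch: projected gradient descent for an equality-constrained QP
#     minimize 0.5 * x^T Q x + b^T x   subject to   C x = d.
# Illustrative only; Q is assumed PSD here and C is assumed full row rank.
import numpy as np

def project_affine(x, C, d, CCt_inv):
    """Euclidean projection onto the affine set {x : Cx = d}."""
    return x - C.T @ (CCt_inv @ (C @ x - d))

def pgd_eq_qp(Q, b, C, d, steps=500):
    step = 1.0 / np.linalg.norm(Q, 2)            # 1/L step for the smooth part
    CCt_inv = np.linalg.inv(C @ C.T)
    x = project_affine(np.zeros(len(b)), C, d, CCt_inv)  # feasible start
    for _ in range(steps):
        grad = Q @ x + b
        x = project_affine(x - step * grad, C, d, CCt_inv)
    return x

# Toy instance: random strictly convex quadratic, one equality constraint.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T + np.eye(5)
b = rng.standard_normal(5)
C = rng.standard_normal((1, 5))
d = np.array([1.0])
x_star = pgd_eq_qp(Q, b, C, d)
print(C @ x_star, 0.5 * x_star @ Q @ x_star + b @ x_star)  # feasibility, objective
```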
|
HJE6X305Fm | Don't let your Discriminator be fooled | [
"Brady Zhou",
"Philipp Krähenbühl"
] | Generative Adversarial Networks are one of the leading tools in generative modeling, image editing and content creation.
However, they are hard to train as they require a delicate balancing act between two deep networks fighting a never ending duel. Some of the most promising adversarial models today minimize a Wasserstein objective. It is smoother and more stable to optimize. In this paper, we show that the Wasserstein distance is just one out of a large family of objective functions that yield these properties. By making the discriminator of a GAN robust to adversarial attacks we can turn any GAN objective into a smooth and stable loss. We experimentally show that any GAN objective, including Wasserstein GANs, benefit from adversarial robustness both quantitatively and qualitatively. The training additionally becomes more robust to suboptimal choices of hyperparameters, model architectures, or objective functions. | [
"GAN",
"generative models",
"computer vision"
] | https://openreview.net/pdf?id=HJE6X305Fm | https://openreview.net/forum?id=HJE6X305Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1li3Y4-gV",
"SJlbkwLcRm",
"BkggGxgM6X",
"H1eRJegGaQ",
"HJlipkeGp7",
"SygU5RN927",
"Hkg6Ixk93Q",
"rklFwIBK2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544796595168,
1543296728807,
1541697543595,
1541697509806,
1541697474560,
1541193357886,
1541169236792,
1541129824682
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1403/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1403/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1403/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1403/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1403/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1403/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1403/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1403/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper provides a simple method for regularising and robustifying GAN training. Always appreciated contribution to GANs. :-)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good practical approach to stabilise GAN training\"}",
"{\"title\": \"Changing rating to 6\", \"comment\": \"Thanks for the detailed feedback. Some of my issues are addressed in the feedback and it would be better to clarify them in the revised paper. Now I change my rating to 6. The reason why I cannot give 7 is the missing analysis of robust D leading to better G.\"}",
"{\"title\": \"RE: robustness regularization improves GANs training\", \"comment\": \"We thank the reviewer for the insightful feedback. We\\u2019re glad the reviewer liked the paper. We will make the code public upon acceptance.\"}",
"{\"title\": \"RE: Good paper with good results\", \"comment\": \"We thank the reviewer for the feedback. We are glad you liked the paper.\"}",
"{\"title\": \"RE: A novel idea, but lack of motivation and intuition\", \"comment\": \"We thank the reviewer for her/his time and the constructive feedback. We are glad that the reviewer sees our contribution as a novel idea. We address the main concerns below.\\n\\n> this paper is not highly motivated and lacks (of) intuition. I can hardly understand why the robustness can stabilize the training of GAN.\\n\\nAs the reviewer rightly pointed out the connection between smoothness of the objective and ease (or stability) of training is an empirical one.\\n\\u201cThe toy example in Sec. 4.2 shows that it can regularize the Discriminator to provide a meaningful gradient to Generator, but no theoretical analysis is provided\\u201d\\nWe are not the first paper to establish this empirical connection, WGAN and follow-up work already established this on a wide range of generative tasks. However, we are the first to point out that this smoothness / robustness is not a property of WGAN, but rather the regularization used to optimize the discriminator in any GAN.\\n\\nIn addition, we show empirically in Table 2 that robustness leads to much more stable training (and better generation performance) than theoretically motivated stability results such as Instance Noise. However, the reviewer is right that there is no theoretical connection, which is an important avenue of future work, but beyond the scope of this paper.\\n\\n\\n> the theoretical analysis is inconsistent with the experimental settings. Theorem 4.3 holds true when f is non-positive, but WGAN\\u2019s loss function can be positive and this paper does not give any details about this part.\\n\\nThe reviewer is right that Theorem 4.3 only applies to the JS (GAN) and LS (LSGAN) objectives for which the regularization works best. Theorem 4.3 does not say anything about linear (WGAN) objectives. For WGAN style objectives the original WGAN paper showed robustness results under slightly different conditions. We tried to extend our results to linear objectives, but did not yet succeed (we could not proof or disproof Theorem 4.3 for linear objectives).\\nWe still included WGAN results in the results to establish the empirical connection between regularization and robustness. While we can not prove robustness for linear objectives, it still holds in practice.\\nHowever, if the reviewer finds this distracting and confusing we are happy to edit or remove parts of the experimental section to make it more consistent.\\n\\n> in Sec. 4.2, I can hardly distinguish the difference between robust loss, robust discriminator and regularized objectives.\\n\\nBoth robust loss and discriminator pose a hard constraint on the discriminator (either before or after the loss function). These hard constraints are difficult to optimize (see WGAN), but easy to analyze. The regularized objective is easy to optimize as a regularization (soft constraint) between two generative distributions (original and perturbed). Theorem 4.3 shows that the regularized objective can be reduced to hard constraints for the JS and LS objectives, and thus benefits for all the analysis of the hard constraints.\\n\\nWe will update the paper to better highlight this difference.\\n\\n> Typos and notation\\nWe thank the reviewer for pointing the typos and notational inconsistencies out, and will fix them in the next iteration.\"}",
"{\"title\": \"robustness regularization improves GANs training\", \"review\": [\"The paper proposed a systematic way of training GANs with robustness regularization terms. Using the proposed method, training GANs is smoother and\", \"pros\", \"The paper is solving an important problem of training GANs in a robust manner. The idea of designing regularization terms is also explored in other domains of computer vision research, and it's nice to see the its power in training GANs.\", \"The paper provides detailed proofs and analysis of the approach, and visualizations of the regularization term help people to understand the ideas.\", \"The presentation of the approach makes sense, and experimental results using several different GANs methods and competing regularization methods are extensive and good in general\", \"cons\", \"I didn't find major issues of the paper. I think code in the paper should be made public as it could potentially be very useful for training GANs in general.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good paper with good results\", \"review\": \"The main idea that this paper presents is that making a discriminator robust to adversarial perturbations the GAN objective can be made smooth which results in better results both visually and in terms of FID. In addition to the proposed adversarial regularisation the authors also propose a much stronger regularisation called robust feature matching which uses the features of the second last layer of the discriminator. I find the ideas presented in this paper interesting and novel.\\nThe authors' claims are supported with sufficient theory and several experiments that prove their claims. The presented results show consistent improvements in terms of FID and actually some of the improvements reported are impressive\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A novel idea, but lack of motivation and intuition\", \"review\": \"## Overview\\n\\nThis paper proposes a new way to stabilize the training process of GAN by regularizing the Discriminator to be robust to adversarial examples. Specifically, this paper proves that a discriminator which is robust to adversarial attacks also leads to a robust minimax objects. Authors provide theoretical analysis about the how the robustness of the Discriminator affects the properties of the objective function, and the proposed regularization term provides an efficient and effective way to regularize the discriminator to be robust. However, it does not build connection between the robustness of the Discriminator and why it can provide meaningful gradient to the Generator. Experimental results demonstrate the effectiveness of the proposed method. This paper is easy to understand.\\n\\n\\n## Drawbacks\\nThere are some problems in this paper. First, this paper is not highly motivated and lacks of intuition. I can hardly understand why the robustness can stabilize the training of GAN. Will it solve the problem of gradient vanishing problem or speed up the convergence of GAN? The toy example in Sec. 4.2 shows that it can regularize the Discriminator to provide a meaningful gradient to Generator, but no theoretical analysis is provided. The main gap between them is that the smoothness of D around the generated data points does not imply the effectiveness of gradients. Second, the theoretical analysis is inconsistent with the experimental settings. Theorem 4.3 holds true when f is non-positive, but WGAN\\u2019s loss function can be positive and this paper does not give any details about this part. Third, in Sec. 4.2, I can hardly distinguish the difference between robust loss, robust discriminator and regularized objectives.\\n\\nBesides, there are lots of typos in this paper. In Sec 3, Generative Adversarial Networks part, the notations of x and z are quiet confusing. In Definition 3.2, d which measures the distance between network outputs is not appeared above.\\n\\n## Summarization\\nGenerally, this paper provides a novel way to stabilize the training of GAN. However, it does not illustrate its motivation clearly and no insight is provided.\\n\\n## After rebuttal\\nSome of the issues are addressed. So I change my rating to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
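The regularizer debated in these reviews amounts to penalizing the discriminator for changing its output under small adversarial perturbations of its input. A minimal sketch of that idea follows, using a one-step FGSM-style attack; the function name, the squared-difference penalty, and all hyperparameters are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of an adversarial-robustness penalty on a GAN discriminator:
# perturb the input with one FGSM-style ascent step on D's output, then
# penalize how much D's output changes. Illustrative only.
import torch

def robustness_penalty(D, x, eps=0.01):
    x = x.detach().requires_grad_(True)
    out = D(x)
    # One ascent step on D's output w.r.t. the input gives the perturbation.
    grad, = torch.autograd.grad(out.sum(), x, create_graph=True)
    x_adv = x + eps * grad.sign()
    return ((D(x_adv) - out) ** 2).mean()

# Tiny runnable demo with a linear "discriminator".
D = torch.nn.Sequential(torch.nn.Linear(8, 1))
print(robustness_penalty(D, torch.randn(4, 8)))

# Usage inside a discriminator update (x_real from data, x_fake from G):
# d_loss = gan_loss(D(x_real), D(x_fake)) \
#        + lam * (robustness_penalty(D, x_real) + robustness_penalty(D, x_fake))
```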
|
ryza73R9tQ | Machine Translation With Weakly Paired Bilingual Documents | [
"Lijun Wu",
"Jinhua Zhu",
"Di He",
"Fei Gao",
"Xu Tan",
"Tao Qin",
"Tie-Yan Liu"
] | Neural machine translation, which achieves near human-level performance in some languages, strongly relies on the availability of large amounts of parallel sentences, which hinders its applicability to low-resource language pairs. Recent works explore the possibility of unsupervised machine translation with monolingual data only, leading to much lower accuracy compared with the supervised one. Observing that weakly paired bilingual documents are much easier to collect than bilingual sentences, e.g., from Wikipedia, news websites or books, in this paper, we investigate the training of translation models with weakly paired bilingual documents. Our approach contains two components/steps. First, we provide a simple approach to mine implicitly bilingual sentence pairs from document pairs which can then be used as supervised signals for training. Second, we leverage the topic consistency of two weakly paired documents and learn the sentence-to-sentence translation by constraining the word distribution-level alignments. We evaluate our proposed method on weakly paired documents from Wikipedia on four tasks, the widely used WMT16 German$\leftrightarrow$English and WMT13 Spanish$\leftrightarrow$English tasks, and obtain $24.1$/$30.3$ and $28.0$/$27.6$ BLEU points separately, outperforming
state-of-the-art unsupervised results by more than 5 BLEU points and reducing the gap between unsupervised translation and supervised translation up to 50\%. | [
"Natural Language Processing",
"Machine Translation",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=ryza73R9tQ | https://openreview.net/forum?id=ryza73R9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkxaAYMxgE",
"H1epCemnRm",
"S1eeOBNcC7",
"rJxrJT2FA7",
"B1xhXRDKCm",
"BJxgL3mY0Q",
"Syx1cFNOAQ",
"ryxnh_NOCQ",
"Hyl08hvHCQ",
"SyxSMI_G0X",
"r1xbcS_zCX",
"Hyl0GH_M0Q",
"HJlNJaHMC7",
"r1lcsUNfAQ",
"SJxtPwIypm",
"Hyl7Mj1a2X",
"HkgVsbHch7",
"r1lZazqO3m",
"r1l5gBF4qm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544722901402,
1543413973360,
1543288167751,
1543257308815,
1543237155560,
1543220296250,
1543158150874,
1543157939878,
1542974550370,
1542780428774,
1542780297182,
1542780182282,
1542769884047,
1542764194447,
1541527392937,
1541368587022,
1541194139732,
1541083833007,
1538721009650
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1402/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1402/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1402/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1402/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1402/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1402/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1402/AnonReviewer2"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new method to mine sentence from Wikipedia and use them to train an MT system, and also a topic-based loss function. In particular, the first contribution, which is the main aspect of the proposal is effective, outperforming methods for fully unsupervised learning.\\n\\nThe main concern with the proposed method, or at least it's description in the paper, is that it isn't framed appropriately with respect to previous work on mining parallel sentences from comparable corpora such as Wikipedia. Based on interaction in the reviews, I feel that things are now framed a bit better, and there are additional baselines, but still the explanation in the paper isn't framed with respect to this previous work, and also the baselines are not competitive, despite previous work reporting very nice results for these previous methods.\\n\\nI feel like this could be a very nice paper at some point if it's re-written with the appropriate references to previous work, and experimental results where the baselines are done appropriately. Thus at this time I'm not recommending that the paper be accepted, but encourage the authors to re-submit a revised version in the future.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Method is likely useful, but paper needs to be re-framed in light of previous work.\"}",
"{\"title\": \"Response to Reviewer 2 and Area Chair\", \"comment\": \"Dear Reviewer 2 and Area Chair:\\n\\nThanks for the comments. We will definitely revise our paper to add more related work and comparisons. We also want to make discussions about the following points. \\n\\n1. For the experimental setting\\n1). We fully agree that the method should be tested on the setting you mentioned, which has no parallel data at all. However, please note that in order to verify the performance of a translation model, we need some **ground truth** in-domain sentence pairs for evaluation (Bible is out-of-domain), e.g, the most standard and widely used WMT and IWSLT test data. However, once we have a **ground truth test set**, there always exists corresponding training data, which are bilingual sentences. We think that finding a setting that satisfies 1. A significant number of documents. 2. No-parallel sentences anywhere. 3. Can be professionally evaluated is almost impossible. \\nIn order to fairly compare with previous work and study the effectiveness of the proposed method, we follow all previous works to use these WMT translation test data for evaluation. Note that the previous work [4] also use En-De, even En-Fr to test their unsupervised models. \\n\\n2. Regarding the contribution\\n1). We notice that almost all concerns are around the sentence mining approach. However, we kindly point out that sentence mining is only a component of our proposed method. We also leverage information in weakly paired documents and the ablation study shows that our document loss can improve the model performance. \\n2). Our work and previous works on unsupervised translation use Wiki data (e.g., [3,4]), which is shown to be very effective. [3] uses Wiki data to learn cross-lingual embeddings (which is also used in our work) and [4] use the cross-lingual embedding as a warm start to initialize the parameters used in translation models. However, [3] aims at solving a simpler task while [4] simply uses the Wiki data trained embedding as *parameter initialization*. Our method can be considered as taking one step further compared to the most recent previous works by better leveraging documents to learn sentence-level translation models.\\n\\n3. On recent experiments on sentence mining approaches\\nSentence pair mining approaches rely on a bilingual dictionary either from other resources or learned from supervised bilingual data, such as [1]. [2] (the paper Area Chair mentioned) also stated that they used parallel corpora to initialize their EM lexical, and they found that initialization with \\u201cvery-non-parallel corpora\\u201d performs terrible (you can check the figure 3 in [2]). \\nWe followed AC\\u2019s suggestion to get bilingual dictionary based on titles from cross-lingual Wikipedia language links, and did some experiments. We evaluate the generated bilingual dictionary using tools from https://github.com/facebookresearch/MUSE, which is a benchmark task to measure the quality of a learned dictionary. The results are as follows:\\nSource -> Target\\tDict from Wiki (Top 1 Accuracy)\\n En->Es\\t 0.177\\n Es->En\\t 0.185\\n En->De\\t 0.129\\n De->En\\t 0.149\\n\\nAs you can see, the Top 1 accuracy of the dictionary is very poor, which shows the quality of the dictionary is not good. This is reasonable because titles of Wikipedia pages are usually entities, while a useful dictionary is always beyond entities. 
Our approach is based on calculating similarities between sentence embeddings, but we can also test for **word pair mining** using the word embeddings, and the accuracy is more than 70% for these tasks. Therefore, it is obvious that the approach we use is much better than the title dictionary.\\n\\n[1] Munteanu, Dragos Stefan, and Daniel Marcu. \\\"Improving machine translation performance by exploiting non-parallel corpora.\\\" Computational Linguistics 31.4 (2005): 477-504.\\n[2] Fung, Pascale, and Percy Cheung. \\\"Mining very-non-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and EM.\\\" EMNLP 2004.\\n[3] Alexis Conneau, Guillaume Lample, Marc\u2019Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou. Word translation without parallel data. ICLR 2018.\\n[4] Guillaume Lample, Ludovic Denoyer, and Marc\u2019Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. ICLR 2018.\"}",
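The top-1 dictionary accuracy reported in the table above is straightforward to compute once embeddings are in hand. The sketch below is a simplified nearest-neighbour version (the MUSE toolkit the authors cite also offers CSLS retrieval, which this skips); all names and data are illustrative assumptions.

```python
# Minimal sketch of top-1 dictionary-accuracy evaluation: for each source
# word, retrieve the nearest-neighbour target word under cosine similarity
# and check it against a gold dictionary. Illustrative only.
import numpy as np

def top1_accuracy(src_vecs, tgt_vecs, tgt_words, gold):
    """src_vecs: {word: vector}; tgt_vecs: matrix aligned with tgt_words;
    gold: {source word: set of correct translations}."""
    T = tgt_vecs / (np.linalg.norm(tgt_vecs, axis=1, keepdims=True) + 1e-8)
    hits = 0
    for word, vec in src_vecs.items():
        v = vec / (np.linalg.norm(vec) + 1e-8)
        pred = tgt_words[int((T @ v).argmax())]   # nearest neighbour
        hits += pred in gold.get(word, set())
    return hits / max(len(src_vecs), 1)

# Tiny toy check.
src = {"hund": np.array([1.0, 0.0])}
tgt = np.array([[1.0, 0.0], [0.0, 1.0]])
print(top1_accuracy(src, tgt, ["dog", "cat"], {"hund": {"dog"}}))  # 1.0
```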
"{\"title\": \"thank you for the response\", \"comment\": \"I have read your response as well as the other comments. Thank you for taking the time to respond to my questions.\\n\\nI think the Authors should revise their paper to a) include the references mentioned by the other reviewers and AC and b) strengthen their empirical validation. \\n\\nI agree with the Authors that this contribution is different from past work because it is in the fully unsupervised setting. However, I also think that the AC made a good point in saying that for all language pairs for which there is good and abundant alignment between Wikipedia pages, there is also likely to be some parallel data, which diminishes the practical impact of this work.\\nI also do not agree with the Authors definition of \\\"low-resource\\\" language pair. En-De is not low resource by any standard definition, as anybody can easily find abundant amount of parallel sentences. It is fine to simulate unsupervised MT in these high resource languages, but it is not fine to call these examples of low resource languages. For instance, En-Ur has instead little parallel data in domains of interest (e.g., news). Proving your method on such languages is the ultimate test. \\n\\nDespite all of this, I still think that this work has merit and is good enough to be presented at the conference (assuming references are added and properly contrasted in the revision), as it can sparkle discussion and promote research in this space which may make this effort more practical.\"}",
"{\"title\": \"Re: More Results on More Language Pairs\", \"comment\": \"Thank you for providing some preliminary results on a more challenging language pair, and for addressing the questions in my review. The setup you explore in this new experiment is particularly interesting because you only have tens of thousands of weakly paired documents, as compared to millions of documents.\"}",
"{\"title\": \"Lots of existing work does not rely on sentence pairs\", \"comment\": \"Hello,\\n\\nThank you for the follow-up comment. However, there are many works that do not rely on bilingual sentence pairs, rather using translation lexicons only. One example of such a method is below, but a quick search should be able to reveal others:\", \"http\": \"//www.aclweb.org/anthology/W04-3208\\n\\nThese translation lexicons can be easily extracted from, for example, Wikipedia language links. Given that Wikipedia (or other similar document pairs) is a resource requirement for the proposed method, it seems that the use of much stronger baselines would be possible, and in fact necessary to stress the utility of the proposed method.\\n\\nAs an auxiliary note, it's not clear to me how realistic it is to have a high-quality document collection such as Wikipedia, but not have *any* parallel data that can be used to train a classifier such as the one used by Munteanu and Marcu. The reason why this method is widely used is because it works well and is less reliant on heuristics such as the ones in the paper above. Are there any languages in the world where we have a significantly sized Wikipedia or similar document collection, but don't have any sentence aligned parallel data whatsoever, even from sources such as the Bible, which is translated into 2500 languages? If there are, that would be a convincing argument for the utility of this method.\"}",
"{\"title\": \"Response to Reviewers and Area Chair\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nThanks again for your great reviews and comments. According to your suggestions, we have made more discussions and experiments and updated the paper. We list the main points as below.\\n\\n1. We have provided preliminary experimental results on additional language pairs according to the suggestions from Reviewer 1 and Reviewer 2. According to the current results, our method is better than the baselines and we will keep tracking the status.\\n2. We have discussed the difference between our method and existing methods on sentence mining to address the concerns from Reviewer 3 and Area Chair with empirical comparisons. \\n3. We have updated our paper and added more discussions about related works according to the suggestions from Reviewer 3. \\n\\nWe hope our responses can help address your concerns and questions.\"}",
"{\"title\": \"More Results on More Language Pairs\", \"comment\": \"Dear Reviewer 2:\\n\\nDue to time limitation, we just provide some preliminary experimental results on En-Ro task. The results also show that the translation quality of our proposed method is better than that of the baseline. As the model performance is still growing, we will keep training and report the number when the optimization converges.\", \"conducting_this_experiment_requires_a_relatively_long_time_mainly_due_to_the_following_reasons\": \"1. The cleaned Wikipedia dump file does not contain the internal link between the pages in different languages. We need to crawl the Wikipedia page online, match the pages between different languages and map such relationship back to the cleaned parsed Wikipedia contexts (https://en.wikipedia.org/wiki/Wikipedia:Database_download). Such a process is time-consuming.\\n2. According to our experience, training an unsupervised NMT baseline model as well as our model needs more than **two weeks** using 4 GPUs to get a reasonable number. \\nNote that step 1 and 2 have dependencies.\\n\\nTo address the concerns from the reviewers on the adaptability of our proposed method, we quickly conducted experiments with **a small number of paired documents** (tens of thousands), **a small number of mined sentences** (several thousands) extracted from step 1. Then we train the model using such data as well as monolingual data for *one week* directly using the configuration of En-De/En-Es with no hyperparameter tuning. \\n\\nTo be fair enough, we compare our method with the unsupervised baseline with the same training time. For direction En->Ro, the BLEU score of the unsupervised baseline trained for one week achieves about 10.33 and we achieve 12.60 using our method. For direction Ro-En, we achieve BLEU score 15.98 while the baseline is 12.63. \\nWe believe with more data in step 1 and longer training time in step 2, our proposed method will have more improvements. The current experimental results already show that our method has great potential and is robust to handle more language pairs.\"}",
"{\"title\": \"More Results on More Language Pairs\", \"comment\": \"Dear Reviewer 1,\\nDue to time limitation, we just provide some preliminary experimental results on En-Ro task. The results also show that the translation quality of our proposed method is better than that of the baseline. As the model performance is still growing, we will keep training and report the number when the optimization converges.\", \"conducting_this_experiment_requires_a_relatively_long_time_mainly_due_to_the_following_reasons\": \"1. The cleaned Wikipedia dump file does not contain the internal link between the pages in different languages. We need to crawl the Wikipedia page online, match the pages between different languages and map such relationship back to the cleaned parsed Wikipedia contexts (https://en.wikipedia.org/wiki/Wikipedia:Database_download). Such a process is time-consuming.\\n2. According to our experience, training an unsupervised NMT baseline model as well as our model needs more than **two weeks** using 4 GPUs to get a reasonable number. \\nNote that step 1 and 2 have dependencies.\\n\\nTo address the concerns from the reviewers on the adaptability of our proposed method, we quickly conducted experiments with **a small number of paired documents** (tens of thousands), **a small number of mined sentences** (several thousands) extracted from step 1. Then we train the model using such data as well as monolingual data for *one week* directly using the configuration of En-De/En-Es with no hyperparameter tuning. \\n\\nTo be fair enough, we compare our method with the unsupervised baseline with the same training time. For direction En->Ro, the BLEU score of the unsupervised baseline trained for one week achieves about 10.33 and we achieve 12.60 using our method. For direction Ro-En, we achieve BLEU score 15.98 while the baseline is 12.63. \\nWe believe with more data in step 1 and longer training time in step 2, our proposed method will have more improvements. The current experimental results already show that our method has great potential and is robust to handle more language pairs.\"}",
"{\"title\": \"Response from Authors\", \"comment\": \"Thanks for your comment and sorry for the late response.\\n\\n1. Regarding the document data quality and beta (for document loss) impacts\\nAs we have no ground truth of the paired data quality on Wiki, we simply checked the quality of our extracted data according to a well-trained supervised translation model. In particular, as we have extracted a set of paired sentences, we checked the BLEU score by comparing the translation from a well-trained supervised model and our mined sentence pairs. The results are in the below table. \\n\\t En-De(c1=0.7)\\t De-En(c1=0.7)\\nC2=0.0\\t 26.86\\t 28.33\\nC2=0.1\\t 30.68\\t 32.10\\nC2=0.2\\t 33.40\\t 34.38\\nAs you can see, the quality of our extracted sentence pairs is good and the reasonable BLEU scores show that the data will be good to use in NMT model training. Since these sentence pairs are extracted from the weakly paired documents, therefore we think such paired documents can provide valuable information for the model training. For the document loss, we vary the value of beta in the experiments, and we found if we use a very large beta, the KL-divergence loss will contribute much and dominant other loss terms, the trained model in such setting is a little worse than setting beta = 0.05 as used in our experiments. \\n\\n2. Regarding to the low-resource setting\\nWe are conducting more experiments on En-Ro and En-Tr to test our proposed methods. We will report the number once the experiments are finished. However, we still want to make it clear in machine translation, low-resource tasks are usually referred to as learning a translation model with a small set of (or no) supervised bilingual sentence pairs [1,2,3]. From this perspective, the task, method, and experiment in our paper are focused on low-resource problems indeed, as we use no human-labeled bilingual sentence pairs but only document-level information or automatically mined related sentence pairs (maybe not exact or perfect translations). We show that given no bilingual translation pairs but weakly paired documents (from Wiki), we can learn a translation model which is much better than the state-of-the-art unsupervised translation methods. \\n\\n[1] Universal Neural Machine Translation for Extremely Low Resource Languages\\uff0c Jiatao Gu\\u2020\\u2217 Hany Hassan\\u2021 Jacob Devlin\\u2217 Victor O.K. Li\\u2020NAACL 2018\\n[2] Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora, Sree Harsha Ramesh and Krishna Prasad Sankaranarayanan, NAACL, workshop 2018 \\n[3] Neural machine translation for low-resource languages, Robert Ostling \\u00a8 and Jorg Tiedemann. 2017.\"}",
"{\"title\": \"Response from Authors\", \"comment\": \"Dear area chair, thanks for your comment!\\nFirst, as discussed in our response to Reviewer 3, existing works rely on bilingual sentence pairs to train a model for parallel sentence mining. Note that we focus on the unsupervised setting, where there is no bilingual sentence pair and so existing methods cannot be applied. \\nSecond, we also tried to use an unsupervised neural machine translation model to rank/select sentence pairs from comparable corpora as in [1-4]. As described in our response to Reviewer 3, according to extended experiments and case studies, we find that the data quality of the sentence pairs selected by such unsupervised model is not that good and the final trained translation model is worse than ours. \\nAs a summary, we find by leveraging the recent techniques (the cross-lingual word embedding + unsupervised sentence representation), the selected sentences are much better. We believe our findings are important to the field of unsupervised learning and unsupervised machine translation.\\n\\n[1] Adafre S F, De Rijke M. Finding similar sentences across multiple languages in Wikipedia[C]//Proceedings of the Workshop on NEW TEXT Wikis and blogs and other dynamic text sources. 2006.\\n[2] Yasuda K, Sumita E. Method for building sentence-aligned corpus from wikipedia[C]//2008 AAAI Workshop on Wikipedia and Artificial Intelligence (WikiAI08). 2008: 263-268.\\n[3] Smith J R, Quirk C, Toutanova K. Extracting parallel sentences from comparable corpora using document-level alignment[C]//Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2010: 403-411.\\n[4] Munteanu, Dragos Stefan, and Daniel Marcu. \\\"Improving machine translation performance by exploiting non-parallel corpora.\\\" Computational Linguistics 31.4 (2005): 477-504. \\\", 2006.\"}",
"{\"title\": \"Rebuttal from Authors - Part 2\", \"comment\": \"3. Regarding why to remove the first principal component of the embedding and p(w)\\nWe follow the unsupervised sentence representation approach from [5,6] to remove the first principal component of sentence embedding but not word embedding, as mentioned in the 4th paragraph of Section 3.1. Intuitively, from empirical observations, the embeddings of many sentences share a large **common vector**. Removing the first principal component from the sentence embeddings make them more diverse and expressive in the embedding space, and thus the resulted embeddings are shown to be more effective [5,6]. \\np(w) is the unigram probability (in the entire corpus) of the word w. We will make it clearer. \\n\\n4. Regarding the notion of topic distribution, normalization, and citations\\nWe have added citations about the word distribution in the new paper version. We are just trying to describe that the topics between the source document and target document should be similar if they talk about the same event, and thus they should use similar words. We can change the term **topic distribution** to **word distribution** if you think it is essential and important. \\nApparently, we did the normalization over the target vocabulary as we use KL-divergence loss function. \\n\\n5. More experiments on alpha and beta\\nWe made more analysis on the model trained with different alpha and beta on En-De data, and listed the numbers in the below table. We found that the value of alpha is robust to the model performance. \\nAlpha\\t0.5\\t0.8\\t1.0\\t1.2\\t1.5\\nEn-De\\t23.6\\t24.0\\t24.2\\t24.1\\t24.1\\nDe-En\\t29.8\\t30.1\\t30.3\\t30.2\\t30.1\\nWe found larger values for beta will make model worse if the KL-divergence contributes much in the loss function. beta=0.05 is the best configuration we have found. \\n\\n[1] Adafre S F, De Rijke M. Finding similar sentences across multiple languages in Wikipedia[C]//Proceedings of the Workshop on NEW TEXT Wikis and blogs and other dynamic text sources. 2006.\\n[2] Yasuda K, Sumita E. Method for building sentence-aligned corpus from wikipedia[C]//2008 AAAI Workshop on Wikipedia and Artificial Intelligence (WikiAI08). 2008: 263-268.\\n[3] Smith J R, Quirk C, Toutanova K. Extracting parallel sentences from comparable corpora using document-level alignment[C]//Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2010: 403-411.\\n[4] Munteanu, Dragos Stefan, and Daniel Marcu. \\\"Improving machine translation performance by exploiting non-parallel corpora.\\\" Computational Linguistics 31.4 (2005): 477-504. \\\", 2006\\n[5] Arora, Sanjeev, Yingyu Liang, and Tengyu Ma. \\\"A simple but tough-to-beat baseline for sentence embeddings.\\\" ICLR-2017.\\n[6] Mu, Jiaqi, Suma Bhat, and Pramod Viswanath. \\\"All-but-the-top: Simple and effective postprocessing for word representations.\\\" ICLR-2018.\\n[7] Phrase-Based & Neural Unsupervised Machine Translation. EMNLP-2018.\"}",
"{\"title\": \"Rebuttal from Authors - Part 1\", \"comment\": \"We thank Reviewer 3 for the reviews and comments! Here are our responses to the concerns.\\n\\n1. Regarding the related work\\nThanks for the reference. We are indeed aware of the related work on selecting sentence pairs from monolingual corpora. We did try some methods and found them do not work well as the scenario of related works is far different from ours. \\nThe methods in [1-4] rely on bilingual sentences to train a model and use this model to select sentence pairs. For example, [1-2] use an MT system to obtain a rough translation of a given page in one language into another and then uses word overlap or BLEU score between sentences as measures. [3-4] develop a ranking model/binary classifier to learn how likely a sentence in the target language is the translation of a sentence in the source language using parallel corpora. However, in our setting, we don\\u2019t have any bilingual sentences pair available. That is being said, we have no bilingual sentence pairs to train such a model to further select new data pairs.\\nIn order to work similarly to the previous works in the unsupervised setting, the most related model for selecting pairs is the unsupervised machine translation model. We did try to use an unsupervised translation model for sentence pair selection at the very early stage of the work. We first trained an unsupervised model followed [7] and then use the model outputs to evaluate each sentence pairs between two linked documents. We have conducted the following experiments:\\n(a). Similarly to [1,2], for each sentence x, we generate the translation results using the unsupervised NMT model, select the most similar sentence to the translation results (in terms of BLEU), and use such data pairs for NMT training. \\n(b). To build up a scoring function as used in [3-4], we use the model-output probability as the scoring function. We select sentence pair (x, y) with larger translation probabilities p(y|x) and use such data pairs for NMT training.\\nAs the **unsupervised translation model** is not good enough, the selected sentence pairs are not reasonable as shown in the below table. We hypothesize this is due to that as some sentences in one Wiki pages are similar (e.g., a few words differ from each other), then 1. the BLEU(or sentence-level BLEU) score is very sensitive to evaluate such sentences. 2. the likelihood on similar sentences are not that trustable. \\nFurthermore, we found training an NMT model using such poor data does not work well. On WMT De-En task, we have the following results: The BLEU score of model trained in (a) can only reach 22.4. The best model trained in (b) can achieve only 19.8 in terms of BLEU score. Both show that the trained NMT models are not good as expected. \\nAs a summary, we find by leveraging the recent techniques (the cross-lingual word embedding + unsupervised sentence representation), the selected sentences are much better. We believe our findings are important to the field of unsupervised learning and unsupervised machine translation. We will include those discussions in our paper and clarify the differences between our work and previous works. 
\\n\\nEnglish\\t|| Selected German sentence by unsupervised translation model\\t|| Selected German sentence by our method\\nShe was one of the pioneers of Greek surrealism .\\t|| Inzwischen ist sie Mitglied der Kommunistischen Partei geworden .\\t|| Zun\\u00e4chst z\\u00e4hlt sie zu den Pionieren des griechischen Surrealismus .\\nThe film premiered at the 2014 Zurich Film Festival .\\t|| In Deutschland startete der Film am 10. September 2015 .\\t|| Er hatte seine Premiere am 26. September 2014 beim Zurich Film Festival .\\nThe eastern part is leafy and park-like .\\t|| Au\\u00dferdem befindet sich hier ein Kinderspielplatz .\\t|| Der \\u00f6stliche Teil ist begr\\u00fcnt und park\\u00e4hnlich gestaltet .\\nMost of the remaining convicts were then relocated to Port Arthur .\\t|| Insgesamt wurden in der Strafkolonie 1200 H\\u00e4ftlinge verwahrt .\\t|| Die verbliebenen H\\u00e4ftlinge wurden schrittweise ins Lager nach Port Arthur verlegt .\\n\\n2. Regarding more experiments on different percentages of data pairs\\nWe are afraid that you might have missed some parts of our paper. We have tested the performance with different percentages of implicitly aligned data, obtained with different choices of the thresholds, to understand the sensitivity to the data size (Section 4.4). As can be seen in Section 4.4, by setting different thresholds we select from 60k to 250k data pairs. We think these results answer the question you mentioned. The different data sizes indeed have an impact on model performance, but all experimental results show that our model is better than the baselines (i.e., comparing the numbers in Figure 1 with the baselines in Table 2).\"}",
"{\"title\": \"Rebuttal from Authors\", \"comment\": \"Thanks for your reviews and comments!\\n\\n1. Regarding the empirical validation\\nThanks for the suggestions! We are conducting more experiments to low-resource language pairs. However, we want to make it clear that in machine translation, the low-resource tasks are usually referred to as learning a translation model with a small set of (or no) supervised bilingual sentence pairs [1,2,3]. From this perspective, the task, method, and experiment (En-De, De-En, En-Es, Es-En) in our paper are focused on low-resource problems indeed, as we use no human-labeled bilingual sentence pairs but only use document-level information or automatically mined related sentence pairs (maybe not exact or perfect translations).\\nOur method shows that given no bilingual translation pairs but weakly paired documents (from Wiki, news websites or books), we can learn a translation model which is much better than the state-of-the-art unsupervised translation methods. As you suggested, we are currently working on the En-Ro and En-Tr language pairs, but since the document aligning process is costly, we are afraid that we may not give a result by the end of the rebuttal phase. We will report the number once the experiments are finished.\\n\\n2. On the quality of retrieved sentences\\nActually, we have no *ground truth* for the retrieved sentence pairs, so we just use a well-trained translation model from huge bilingual data and check whether the retrieved sentences are *similar* to the translated sentences in terms of BLEU score in the below table.\\n\\t En-De (c1=0.7)\\t De-En (c1=0.7)\\nC2=0.0\\t 26.86\\t 28.33\\nC2=0.1\\t 30.68\\t 32.10\\nC2=0.2\\t 33.40\\t 34.38\\n\\nAs we expected, the sentence pairs we mined with more strict thresholds are more *similar* to the supervised model outputs. Besides, the reasonable BLEU scores show that the sentence pairs we extracted will be good to use in NMT model training.\\n\\n3. Regarding the notion of P\\nSorry to make you feel confused. P is generally used as a notation of **probability**, but not for any specific parametric function. For example, in Eqn. 7, the first item P(w^Y; d^Y_i) refers to the empirical distribution of word w in language Y, and the second item P(w^Y; d^X_i, \\\\theta) refers to the distribution of word w translated from document X_i with parameter \\\\theta. We will modify the related equations to make them clearer. \\n\\n4. Regarding the topic distribution implementation\\nYour understanding is correct. In our experiments, during training, we generate the translated documents online to compute the topic distribution loss over document pairs, and the gradient signal is computed over a mini batch of document pairs.\\n\\n5. Regarding sentence pairs in pure monolingual data\\nYes. In fact, we have tried this before the submission as it is a natural way to generalize our method to wider settings. We have tried to select sentence pairs over 50M monolingual WMT En-De dataset. According to our manual check, the quality of the sentence pairs we mined from the original monolingual dataset is not good. That\\u2019s why we mine the sentences from the weakly paired documents in our work. \\n\\n[1] Universal Neural Machine Translation for Extremely Low Resource Languages\\uff0c Jiatao Gu\\u2020\\u2217 Hany Hassan\\u2021 Jacob Devlin\\u2217 Victor O.K. 
Li. NAACL 2018\\n[2] Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora, Sree Harsha Ramesh and Krishna Prasad Sankaranarayanan, NAACL workshop 2018\\n[3] Neural machine translation for low-resource languages, Robert \\u00d6stling and J\\u00f6rg Tiedemann, 2017.\"}",
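To make the document-loss discussion in points 3-4 above concrete, here is one way such a KL term over the target vocabulary could be computed. Averaging the per-position softmaxes into a document-level word distribution is our illustrative assumption, since the rebuttal does not pin down the exact aggregation:

```python
import torch
import torch.nn.functional as F

def topic_kl_loss(target_token_ids, model_logits, vocab_size, eps=1e-12):
    """KL( P_emp(w; d^Y) || P_model(w; d^X, theta) ) over the vocabulary.

    target_token_ids: 1-D LongTensor with all token ids of the target document.
    model_logits: (num_positions, vocab_size) logits produced while
        translating the paired source document.
    """
    counts = torch.bincount(target_token_ids, minlength=vocab_size).float()
    p_emp = counts / counts.sum()                      # empirical word distribution
    p_model = F.softmax(model_logits, dim=-1).mean(0)  # normalized over the vocab
    mask = p_emp > 0                                   # 0 * log 0 = 0 by convention
    return (p_emp[mask] * (p_emp[mask].log() - p_model[mask].clamp_min(eps).log())).sum()
```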
"{\"title\": \"Rebuttal from Authors\", \"comment\": \"We thank Reviewer 1 for the reviews and comments! Here are our responses to the concerns.\\n\\n1. Regarding low-resource tasks\\nWe want to make it clear that in machine translation, the low-resource tasks are usually referred to as learning a translation model with a small set of (or no) supervised bilingual sentence pairs [1,2,3]. From this perspective, the task, method, and experiment (En-De, De-En, En-Es, Es-En) in our paper are focused on low-resource problems indeed, as we use no human-labeled bilingual sentence pairs but just use document-level information or automatically mined related sentence pairs (which may be not exact or perfect translations).\\nOur method shows that given no bilingual translation pairs but weakly paired documents (e.g., from Wiki), we can learn a translation model which is much better than the state-of-the-art unsupervised translation methods.\\n\\n2. Regarding why to remove the first principal component of the embedding \\nWe follow the unsupervised sentence representation approach from [4,5] to remove the first principal component of sentence embedding but not word embedding, as mentioned in 4th paragraph of Section 3.1. Intuitively, from empirical observations, the embeddings of many sentences share a large **common vector**. Removing the first principal component from the sentence embeddings make them more diverse and expressive in the embedding space, and thus the resulted embeddings are shown to be more effective [4,5]. \\n\\n3. Regarding the supervised baseline\\nAll the supervised models are trained on the widely acknowledged WMT bilingual dataset using Transformer [6], which is considered to be a standard baseline model of NMT tasks [7]. For our learned models and the baseline models, we do follow the common practice and use sub-word tokens (Byte Pair Encoding (BPE) approach) as in [6]. We have mentioned this in 2nd paragraph of Section 4.2.\\n\\n4. Regarding data statistics \\nFor the number of sentences in the weakly paired documents, there are 4,285,607 English sentences and 4,266,178 German sentences in English-German language pair, 2,679,278 English sentences and 2,547,358 Spanish sentences in English-Spanish language pair. Therefore, we extract a reasonable proportion of sentences from the weakly paired documents to train the model. \\n\\n[1] Universal Neural Machine Translation for Extremely Low Resource Languages\\uff0c Jiatao Gu\\u2020\\u2217 Hany Hassan\\u2021 Jacob Devlin\\u2217 Victor O.K. Li\\u2020NAACL 2018\\n[2] Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora, Sree Harsha Ramesh and Krishna Prasad Sankaranarayanan, NAACL, workshop 2018\\n[3] Neural machine translation for low-resource languages, Robert Ostling \\u00a8 and Jorg Tiedemann, 2017.\\n[4] Arora, Sanjeev, Yingyu Liang, and Tengyu Ma. \\\"A simple but tough-to-beat baseline for sentence embeddings.\\\" ICLR-2017.\\n[5] Mu, Jiaqi, Suma Bhat, and Pramod Viswanath. \\\"All-but-the-top: Simple and effective postprocessing for word representations.\\\" ICLR-2018.\\n[6] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" NIPS-2017.\\n[7] Phrase-Based & Neural Unsupervised Machine Translation. EMNLP-2018.\"}",
"{\"title\": \"Please Clarify Theoretical and Empirical Advantages over Previous Work\", \"comment\": \"As noted by Reviewer 3, the extraction of parallel sentences from comparable corpora has been covered extensively in the previous literature. While the method presented here is unquestionably useful, there is only a single reference to a paper from 2017, despite the fact that similar methods have existed and been widely studied since at least 2006. In order for me to recommend the paper for acceptance, I would like to see a comparison, theoretical and empirical, to the prominent previous works in the field of parallel sentence mining from comparable corpora, starting with the method cited by Reviewer 3 and also covering more recent work.\"}",
"{\"title\": \"Nice BLEU score improvements over existing work but will it generalise to low-resource language pairs?\", \"review\": [\"This paper proposes a method to train a machine translation system using weakly paired bilingual documents from Wikipedia. A pair of sentences from a weak document pair are used as training data if their cosine similarity exceeds c1, and the similarity between this sentence pair is c2 greater than any other pair in the documents, under sentence representations formed from word embeddings trained with MUSE. The neural translation model learns to translate from language X to Y, and from Y to X using the same encoder and decoder parameters, but the decoder is aware of the intended target language given an embedding of the intended language. The model is also trained to minimise the KL divergence between the distribution of terms in the target language document and the distribution of terms in the current model output. The model also uses the denoising autoencoding and reconstruction objectives of Lample et al. (2017). The results show improvements over the Lample et al. (2017) and that performance is heavily dependent on the number of sentences extracted from the weakly aligned documents.\", \"Positives\", \"Large improvement over previous attempts at unsupervised MT for the En-De language pair.\", \"Informative ablation study in Section 4.4 of the relative contribution of each part of the overall objective function (Eq 9).\", \"Negatives\", \"The introduction gave the impression that this method would be applied to low-resource language pairs but it was applied to two high-resource language pairs. Because you have not evaluated on a low-resource language pair, it's not clear how your proposed method would generalise to a low-resource setting.\", \"Questions\", \"Can you give some intuition for why you remove the first principal component from the word embeddings in Equations 1 - 3?\", \"Are the Supervised results in Table 2 actually a fair reflection of a reasonable NMT model trained with sub-word representations and back translated data?\", \"What is the total number of sentences in the weakly paired documents in Table 1? It would be useful to know the proportion of sentences you managed to extract to train your models.\", \"Comments\", \"Koehn et al. (2003) is not an example of any kind of neural network architecture.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"the claimed \\\"new direction\\\" has been explored before.\", \"review\": \"The major issue in this paper is that the \\\"new direction\\\" in this paper has been explored before [1]. Therefore the introduction needs to be rewritten with arguing the difference between existing methods.\\n\\nThe proposed method highly relies on the percentage of implicitly aligned data. I suggest the author do more experiments on different data set with a significant difference in this \\\"percentage\\\". Otherwise, we have no idea about the performance's sensitivity to the different datasets. \\n\\nMore detailed explanations are needed. For example, what do you mean by \\\"p(w) as the estimated frequency\\\"? Why do we need to remove the first principal components?\\n\\nSection 3.2 title is \\\" aligning topic distribution\\\" but actually it is doing word distribution alignment.\\n\\nDo you do normalization for P(w^Y;d_i^X,\\\\theta) in eq.6 which is defined on the entire vocab's distribution?\\n\\nI think the measurement of the alignment accuracy and more experiments with different settings of \\\\alpha and \\\\beta are needed.\\n\\nCitation needed for \\\"Second, many previous works suggest that the word distribution ...\\\"\\n\\n[1] Munteanu et al, \\\"Improving Machine Translation Performance by Exploiting Non-Parallel Corpora\\\", 2006\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"nice contribution\", \"review\": \"Summary\\nThe authors propose a relatively simple approach to mine noisy parallel sentences which are useful to greatly improve performance of purely unsupervised MT algorithms.\\nThe method consists of a) mining documents that refer to the same topic, b) extracting from these documents parallel sentences, c) training the usual unsup MT pipeline with two additional losses, one that encourages good translation of the extracted parallel sentences and another one forcing the distribution of words to match at the document level.\", \"novelty\": \"the approach is novel.\", \"clarity\": \"the paper is clearly written.\", \"empirical_validation\": \"The empirical validation is solid but limited. The authors could further strengthen it by testing on low-resource language pairs (En-Ro, En-Ur).\\nIt would also be useful to report more stats about the retrieved sentences in tab. 1 (average length compared to ground truth, BLEU using as reference the translation of a SoA supervised MT method, etc.)\\n\\nQuestions\\n1) Sec. 3.2 is the least clear of the paper. The notation of eq. 7 is quite unclear because of the overloading (e.g., P refers to both the model and the empirical distribution).\", \"i_am_also_unclear_about_this_constraint_about_matching_the_topic_distribution\": \"as far as I understood, the model gets only one gradient signal for the whole document. I find then surprising that the authors managed to get any significant improvement by adding this term.\\nRelated to this term, how is it computed? Are documents translated on the fly as training proceeds? Could the authors provide more details?\\n\\n2) Have the authors considered matching sentences to any other sentence in the monolingual corpus as opposed to sentences in the comparable document?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"comment\": \"Are there only success cases in your ablation study with BLEU being hampered by removing the topic loss part of the objective? The filtering indicates that non-comparable articles pass off as weakly paired documents - which can often lead to a wrong signal. Do you have any cases (success/failure) indicating the same? I see alpha as 1 and beta as 0.05 balances this to some extent.\\n\\nCould you detail the stats further to include a bit of word level stats on the pair of documents somehow to see if these are comparable articles or not? \\n\\nI'm particularly curious about how this would scale to the low resource setting, where the noisy loss signals becomes more prominent. How likely am I to get similar results?\\n\\nThanks,\", \"title\": \"On Topic Distribution Loss\"}"
]
} |
|
rkxhX209FX | An Active Learning Framework for Efficient Robust Policy Search | [
"Sai Kiran Narayanaswami",
"Nandan Sudarsanam",
"Balaraman Ravindran"
] | Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories which reflect the differences in various possible environments, and then selecting some subset of these to learn robust policies, such as the ones that result in the worst performance. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose so as to collect only as much data as necessary to select such a subset. We apply this framework to an existing method, namely EPOpt, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective to the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning. | [
"Deep Reinforcement Learning"
] | https://openreview.net/pdf?id=rkxhX209FX | https://openreview.net/forum?id=rkxhX209FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1eWHdJZxN",
"ryewCa4XR7",
"H1lQr2V7R7",
"HygiHo4mCm",
"Syl3kt4QCm",
"BJgA2D7WTm",
"BJeCUHYThX",
"rJejEde9n7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544775736767,
1542831566958,
1542831162760,
1542830914549,
1542830307631,
1541646261778,
1541408086396,
1541175347472
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1398/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1398/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1398/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1398/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1398/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1398/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1398/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1398/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses sample-efficient robust policy search borrowing ideas from active learning. The reviews raised important concerns regarding (1) the complexity of the proposed technique, which combines many separate pieces and (2) the significance of experimental results. The empirical setup adopted is not standard in RL, and a clear comparison against EPOpt is lacking. I appreciate the changes made to address the comment, and I encourage the authors to continue improving the paper by simplifying the model and including a few baseline comparisons in the experiments.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper can be improved\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you for your review. We have made several changes to the paper as we have described in our official comment. Our response is based on this new version.\\n\\nModel-Ensemble Trust-Region Policy Optimization (ME-TRPO):\\nThis paper presents a Model Based RL method to solve any given RL task, with the highlight being that it uses an ensemble of DNNs to successfully train dynamics models for Model Based RL. The contributions of ME-TRPO do not include learning policies robust to changes in the *environment*, and it is therefore not for the same purpose as EPOpt. There is an element of dealing with model uncertainty that is shared with EPOpt, but this comes not from the environment itself but rather due to learning dynamics from finite samples. Therefore, we don\\u2019t believe a comparison with ME-TRPO is warranted. Nevertheless, we have mentioned these connections along with Ensemble CIO, a similar work.\", \"model_agnostic_meta_learning\": \"We wish to learn a robust policy that works \\u201dout of the box\\u201d on a given test environment. This method on the other hand involves an adaptation phase for meta-learning for each new test task, which is not the paradigm we want. It is therefore not pertinent in the discussion on connections to Multi-Task Learning.\\n\\n\\nActive learning vs. Explore/Exploit, Finding bottom epsilon percentile region:\\nActive learning is concerned with improving and speeding up learning by making use of the ability to choose the data to learn from. The learning task in question is to perform regression from the space of env. model parameters to the performance of the policy, and it is possible to query the performance (noisily) at arbitrary parameters. The role of the active learner in our framework then boils down to (judiciously) using this ability to obtain a good fit while reducing uncertainty across its input space. So, while this involves an element of balancing exploration and exploitation, achieving an optimal trade-off is in no way the aim of the active learner.\\n\\nEven our use of the bandit in conjunction with EPOpt is to learn about the performance across the space of parameters (which amounts to getting a reliable estimate for the weights for its input features), and *not* for identifying the point leading to the worst performance. Although, for simplicity, we have not used active learning oriented formulations like (Soare et al. (2013)), our Thompson Sampling implementation still has good guarantees on learning accurate weights (see (Abeille & Lazaric (2017) on which it is based). With the added expressiveness of polynomial input transformations, we expect it to be able to identify accurately the bottom epsilon percentile region. Also, for the purpose of using the fit to sample from that region, it doesn\\u2019t matter if the fit is not very accurate in other regions.\\n\\nNow, it is entirely possible that the performance profile is too complex to fit accurately. However, as we have shown by successfully learning robust policies on even a 2-D model ensemble, this is not a problem in practice. Furthermore, even EPOpt will require too much data to correctly identify the required region given a sufficiently complex profile, and will still not be completely reliable.\", \"efficiency_gains\": \"The statement in Section 4 calculates as an illustrative example the reduction in data requirement from EPOpt under a particular value of trajectory allowances for each method. 
Nowhere do we say that it is possible to always achieve this much reduction. The experiments then go on to perform these calculations for the situations we have actually tested and establish the level of efficiency gain that we can successfully achieve. Thus, we do not see how our statements are loose in any sense. On the point of theoretical analysis, please see our answer to AnonReviewer3.\\n\\nFramework development:\\nThere is an overarching idea behind our framework: the performance depends on the model parameters, and this dependence can be used to inform decisions on which parameters to train on. We have justified that active learning is the tool to use for this purpose, and also the way it is used. Although we have not studied new objectives (which we also think would not contribute towards our goals), we have still provided a concrete realization of the framework using an existing one (CVaR), which leads to a sensible algorithm. All of this makes \\u201cnot principled\\u201d an unjust criticism.\\n\\nExperiments:\\nOur experiments successfully establish the robustness and performance of the learned policies and validate the reduction in data usage. Further, we show evidence that the bandit active learner indeed works as it should and plays its part correctly. Quite clearly, this array of experiments is not preliminary.\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you for your review. We have made several changes to the paper as we have described in our official comment. Our response is based on this new version.\\n\\nOn the point of theoretical analysis, please see our answer to AnonReviewer3.\\n\\nDue to space constraints, we have discussed only the concepts that are immediately relevant to the contributions of the paper. Going into details such as Deep RL algorithms like TRPO would take up too much space.\\n\\nWe also believe our references on the topics we discuss are exhaustive enough. The survey by Settles (2010) covers work on active learning extensively. We have cited the works that define two kinds of Linear Bandit Algorithms in Section 3.3. Further, we cite several works on bandits that specifically deal with active learning in Section 4. In Section 5, we refer to many prominent MTL papers that are relevant to our discussions apart from (Sharma et. al., 2018). Also see our response to AnonReviewer1 on why approaches like \\u201cModel-Agnostic Meta-Learning\\u201d are not relevant.\", \"we_also_have_very_good_reasons_to_think_our_paper_is_not_incremental_in_all_dimensions\": [\"First and foremost, we have achieved a huge reduction in data usage from previous work on Robust RL and experimentally verified the soundness of the framework.\", \"We have developed a novel framework to use active learning for the problem of Robust Policy Search and provided a concrete realization of the framework which leads to a sensible algorithm (EffAcTS-EPOpt). We have justified the use of active learning in this setting, and brought out the need for developments on that front.\", \"In our discussions on MTL, we not only compare and contrast with (Sharma et al., 2018), we also clearly bring out how that area of research is tied to this problem through parameterized MTL problems, and how Robust RL can benefit from the application of MTL methods.\"]}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you for your review. We have made several changes to the paper as we have described in our official comment. Our response is based on this new version.\\n\\n> The biggest concern is that this paper tackles this problem with a combination with existing techniques, leaving many questions unanswered.\\n\\nWe request elaboration on what questions are unanswered, and why combining existing techniques is particularly a problem.\\n\\n> more theoretical-grounded perspective would make the paper much stronger.\\n\\nProviding theoretical grounding for the effectiveness of our framework in general would be as difficult as doing so for Active Learning itself, that is quantifying how efficiently the Active learner can learn about the output (policy performance) across its input space (the space of model parameters). There hasn\\u2019t been a lot of work on providing performance bounds for Active Learning in general (ref. the survey by Settles, 2010). There is however a good amount of analysis that has gone into the specific case we have implemented, namely Linear Bandits with Thompson Sampling. Pl. see (Abeille & Lazaric (2017)) which we have cited for bounds on the deviation of the bandit\\u2019s learned weights from their true values.\\n\\n> As the Thompson sampling algorithm of LSB draw samples from the distribution of MDP parameters that leads to the wost performance, why not directly use it for policy search?\\n\\nIn EffAcTS-EPOpt, the idea is to use the LSB to come up with an approximate sample from the bottom epsilon percentile of parameters according to the source distribution (for the CVaR objective). However, the trajectories generated while the LSB is learning cannot be used for this for two reasons: first, it would have chosen a number of parameters from outside the required region while exploring, and second, it moves towards the absolute worst performance (and so is biased towards that value, even though it learns well about the entire parameter space). That is why we have to have an additional stage of sampling using the performance function fit by the LSB. We illustrate this in Appendix A2 that we have added.\\n\\n> However, the uncertainty of prediction is not used by the proposed algorithm.\\n\\nThe approach we have taken in our implementation of the framework is to assign the task of uncertainty reduction to the active learner itself (here the LSB), and assume that the learned return profile has low enough uncertainty that we can use it directly for the subsequent steps. We concur that this is a very valid point and that it might be necessary to employ methods that use the uncertainty of predictions to improve the quality of model parameter selection in situations such as very high dimensional model ensembles.\\n\\n> The proposed method will not outperform if the number of trajectories used for updating policy is the same, as the surrogate model can never be as good as the real model. It would be nice to explicitly demonstrate the runtime and performance trade-off.\\n\\nA point to be noted is that even EPOpt does not have the true performance profile and also performs approximate sampling from the bottom epsilon percentile (i.e it is also not guaranteed to produce samples from that region alone). Therefore there is no reason to say that this method will not outperform EPOpt, and as we have shown, the performance of EffAcTS-EPOpt is very close to or better than EPOpt. 
It samples far fewer trajectories to attain this, and so quite clearly wins in the runtime/performance trade-off.\"}",
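To illustrate the role of the LSB discussed in this exchange, here is a minimal Thompson-sampling linear bandit that learns a return profile over a 1-D simulator parameter. The polynomial degree, the candidate grid, the priors, and the `get_return` interface are all illustrative assumptions rather than the paper's actual configuration:

```python
import numpy as np

def poly_features(theta, degree=3):
    return np.array([theta ** k for k in range(degree + 1)])

def fit_return_profile(get_return, n_rounds=100, lam=1.0, sigma2=1.0):
    """Thompson sampling for a Bayesian linear bandit over theta in [0, 1].

    get_return(theta) runs the current policy in the simulator with model
    parameter theta and returns a (noisy) trajectory return.
    """
    d = poly_features(0.0).shape[0]
    A = lam * np.eye(d)   # posterior precision
    b = np.zeros(d)       # precision-weighted mean
    grid = np.linspace(0.0, 1.0, 101)
    for _ in range(n_rounds):
        mean = np.linalg.solve(A, b)
        w = np.random.multivariate_normal(mean, sigma2 * np.linalg.inv(A))
        # the bandit's "reward" is poor policy performance, so query the
        # parameter whose sampled predicted return is lowest
        theta = grid[int(np.argmin([w @ poly_features(t) for t in grid]))]
        phi = poly_features(theta)
        A += np.outer(phi, phi)
        b += phi * get_return(theta)
    w_hat = np.linalg.solve(A, b)
    return lambda t: w_hat @ poly_features(t)  # fitted performance profile
```

Consistent with the rebuttal above, the exploratory queries themselves would not be reused as training data; only the fitted profile would be, to (approximately) sample parameters from the bottom-epsilon percentile.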
"{\"title\": \"Updates on the paper\", \"comment\": [\"We have revised our paper with the following changes:\", \"Reorganized parts of the description of the EffAcTS framework (section 4) for clarity. We believe that this has been a source of confusion and we make these edits to better express the way in which we use Active Learning in EffAcTS, as well as the usage of the bandit in EffAcTS-EPOpt.\", \"Added an appendix \\u201cVisualizing the Bandit Learner in EffAcTS-EPOpt\\u201d where we visualize and comment on the parameter choices of the bandit algorithm in the learning phase and when outputting trajectories.\", \"Added a discussion of work on model based RL ([1] and [2]) in the related work (Section 2).\", \"Errata:\", \"Expanded TRPO (Trust Region Policy Optimization) and added inline citations.\", \"Changed argument of GetTrajectory (in LEARN, Algorithm 1) to \\\\theta_i, from \\\\pi_\\\\theta_i for consistency with the\", \"definition of GetTrajectory.\", \"Added some references that were missed earlier.\", \"Corrected citation formatting throughout.\", \"We agree that presenting the results for EPOpt alongside ours will improve the presentation, and therefore will add it in the camera ready version. As we have pointed out in the paper, we are using the same environments and therefore the results presented in the EPOpt paper can be used for comparison in the meantime.\", \"--------------\", \"[1] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. In International Conference on Learning Representations, 2018.\", \"[2] I. Mordatch, K. Lowrey, and E. Todorov. Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)\"]}",
"{\"title\": \"Introducing active learning to robust policy search for efficient sampling.\", \"review\": \"This paper introduced an active learning mechanism on top of robust policy search in RL for better sampling efficiency. The authors proposed EffAcTS active learning framework and combined it with policy search method EPOpt. Theoretical analysis of active learning efficiency was not investigated. Simulation experiments were done on Hopper and Half Cheetah, 5 runs for each parameter setting.\\n\\nThe paper is well written and easy to follow. The authors quickly went through several key topics (active learning, linear bandits, multi-task, etc.) without too many details. However, there is a huge lack of key references in these topics. It would be better to notice that they were not introduced together with DRL.\\n\\nOverall, it is a nice paper with incremental contributions on every dimension the authors claimed (e.g. comparing to Sharma et al., 2018).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper tries to tackle the sampling efficiency of RL with building a probabilistic surrogate model.\", \"review\": \"This paper targets at a particular type of robust policy search, where a simulation environment exists with explicit tuning parameters, which is referred to as the model parameters of a Markov decision process. The task of robust policy search is to learn a policy robust to all the parameters of the simulator, so that it can potentially give robust performance in real environment. The previous work handles this problem by sampling many trajectories and only learning from the trajectories, in which the current policy produces the worst performance. This approach effectively focus the policy search on the worst case performance, but is highly inefficient as most of the sampled trajectories are discarded. This paper proposes to improve the sampling efficiency by building a surrogate model predicting the return of the current policy given a MDP parameter. The surrogate model is used to select the MDP parameters leading to the worsts performance, so that the policy search can directly sample and learn from the selected MDP parameters without discarding any trajectories.\\n\\nThis paper tries to tackle the sampling efficiency of RL with building a probabilistic surrogate model. This is a promising direction. The biggest concern is that this paper tackles this problem with a combination with existing techniques, leaving many questions unanswered. Presenting the paper in a more theoretical-grounded perspective would make the paper much stronger.\\n\\nThis paper uses a linear stochastic bandits (LSB) method to build a surrogate model of the return of the current policy and fits the surrogate model into the EPOpt framework by sampling from the worst performing parameters according to the surrogate model. As the Thompson sampling algorithm of LSB draw samples from the distribution of MDP parameters that leads to the wost performance, why not directly use it for policy search?\\n\\nThe surrogate model is expected not to give accurate prediction everywhere due to the limited number of data but produces uncertainty of its prediction as an dictator. However, the uncertainty of prediction is not used by the proposed algorithm.\\n\\nThe presentation of the experiment section needs to be improved. The performance of the baseline needs to be explicitly presented, otherwise it is hard to compare. The proposed method will not outperform if the number of trajectories used for updating policy is the same, as the surrogate model can never be as good as the real model. It would be nice to explicitly demonstrate the runtime and performance trade-off.\", \"minor_issues\": \"1. What does TRPO stand for?\\n2. When referring to the paper instead of the authors, the citation format needs to be (authors year) instead of authors (year).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"need further improvement\", \"review\": \"Summary: This paper proposes an integration of active learning for multi-task learning with policy search. This integration is built on an existing framework, EPOpt, which each time samples a set of models and a set of trajectories for each model. Only trajectories with the bottom \\\\epsilon percentile returns will be used to update the multi-task policy. This paper proposes a way to improve the sample-efficiency so that fewer trajectories will be sampled and fewer trajectories will be loss.\\n\\nIn general, the paper presentation is easy to follow. The idea is well motivated of why an active learning integration is needed. The related work is a bit too narrow, e.g. work [1] on the same approach like EPOpt or meta-learning (for model adaptation) [2] (and others more on this topic)\\n\\n[1] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-Ensemble Trust-Region Policy Opti\\nmization. In ICLR, 2018.\\n\\n[2] C. Finn, P. Abbeel, and S. Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In ICML, 2017.\\n\\nIn overall, I have major concerns regarding to the proposed framework.\\n- Active learning is a method that is in general known to be an optimal trade-off between exploration vs. exploitation in finding a global optimal solution. That means, the proposed use of linear stochastic bandits is trying to find an optimal arm \\\\theta^* (the worst trajectory) that gives the highest reward (the lowest return). In my opinion, integrating this idea naively into EPOpt to sample a set of trajectories would only aim to find the worst trajectory among all trajectories from all models. This is clearly not enough to say \\\"finding ALL the WORSE regions among trajectory space\\\" to improve the policy. Therefore, a new way of integration or a new objective should be used in order to make a principled framework. \\n\\n- The statement over sample-efficiency gain vs. EPOpt in Section 4 is too loose which is not based on any detailed analysis or further theoretical results.\\n\\n- The experiment results are not well presented: there is no results for EPOpt in Fig. 1;\", \"minor_comments\": \"- Algorithm 1: argument of GetTrajectory (in LEARN) should be \\\\theta_i, instead of \\\\pi_\\\\theta_i?.\\n\\n\\nIn conclusion, the proposed framework is not yet principled. Experiment results are too preliminary and not well presented.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJgnmhA5KQ | Diverse Machine Translation with a Single Multinomial Latent Variable | [
"Tianxiao Shen",
"Myle Ott",
"Michael Auli",
"Marc’Aurelio Ranzato"
] | There are many ways to translate a sentence into another language. Explicit modeling of such uncertainty may enable better model fitting to the data, and it may enable users to express a preference for how to translate a piece of content. Latent variable models are a natural way to represent uncertainty. Prior work investigated the use of multivariate continuous and discrete latent variables, but their interpretation and use for generating a diverse set of hypotheses have been elusive. In this work, we drastically simplify the model, using just a single multinomial latent variable. The resulting mixture of experts model can be trained efficiently via hard-EM and can generate a diverse set of hypotheses by parallel greedy decoding. We perform extensive experiments on three WMT benchmark datasets that have multiple human references, and we show that our model provides a better trade-off between quality and diversity of generations compared to all baseline methods.\footnote{Code to reproduce this work is available at: anonymized URL.} | [
"machine translation",
"latent variable models",
"diverse decoding"
] | https://openreview.net/pdf?id=BJgnmhA5KQ | https://openreview.net/forum?id=BJgnmhA5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xSxgLEx4",
"r1xTHmYBkE",
"BklSfmKSJE",
"BkeOw7YMyN",
"HyeWvWIc6X",
"SkxgQb8cpm",
"H1gWie8cTQ",
"H1lEMxL9TQ",
"Bye_FWf3nX",
"H1x1dumo3Q",
"Byxs0v_937"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544998893484,
1544028996931,
1544028941124,
1543832416280,
1542246744807,
1542246680343,
1542246553450,
1542246411761,
1541312896342,
1541253223357,
1541208018996
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1397/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1397/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1397/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1397/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": [\"a simple method\", \"producing diverse translation is an important problem\", \"technical contribution is limited / work is incremental\", \"R1 finds writing not precise and claims not supported, also discussion of related work is considered weak by R3\", \"claims of modeling uncertainty are not well supported\", \"There is no consensus among reviewers. R4 provides detailed arguments why (at the very least) certain aspects of presentations are misleading (e.g., claiming that a uniform prior promotes diversity). R1 is also negative, his main concerns are limited contribution and he also questions the task (from their perspective producing diverse translation is not a valid task; I would disagree with this). R2 likes the paper and believes it is interesting, simple to use and the paper should be accepted. R3 is more lukewarm.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"writing needs to be improved / contribution limited\"}",
"{\"title\": \"To AnonReviewer4\", \"comment\": \"We thank the reviewer for the valuable feedback. We realize how the current wording in the paper may have been misleading. Below, we will clarify our approach and try to better support and motivate our modeling choices. We will also update our model description accordingly in the next revision.\\n\\nFirst, we provide an alternative interpretation of our lower bound to better illustrate how our objective rewards diversity, and then draw an analogy to K-means to further motivate our approach.\\n\\np(y|x) = sum_z p(z|x)p(y|x,z) > 1/K max_z p(y|x,z) \\nWe agree with the reviewer that this lower bound may be loose. However, the idea is that as we maximize the lower bound, we're maximizing the marginal likelihood p(y|x) minus the gap g=1/K (sum_z p(y|x,z) - max_z p(y|x,z)). We want to learn a model that achieves both large marginal likelihood and small gap, and our formulation of the objective p(y|x)-g=1/K max_z p(y|x,z) enables efficient optimization of such quantity. Our model does NOT assume that p(y|x,z) must be large for only one value of z (we acknowledge that the current wording in the paper is imprecise about this and we will correct it); rather, we are optimizing for this by incorporating g into our loss. g rewards diversification and achieves its minimum when p(y|x,z) is large only for one value of z. Of course, optimizing p(y|x) -g does not guarantee that the individual terms will be optimized as this is not convex, but our loss does reward for g being small.\\n\\nAnother interpretation of our model comes from an analogy to K-mean. The proposed model can be simplified to its core by 1) removing the conditioning on x and 2) assuming that the output space of y is in R^d. In this setting, the soft-mixture of experts reduces to a mixture of Gaussians (MoG). If we replace the prior with a uniform and replace the marginalization with a minimization, as we propose in our model, one recovers K-means. Admittedly, MoGs and K-means are known to possibly fail; after all, the corresponding log-likelihood loss is not convex in this case. For example, it is possible that clusters \\u201cdie\\u201d and it is also possible for poor initialization to yield poor diversification of clusters. Nevertheless, K-means is effective in practice and often produces useful clusterings of the data. Our model is an instance of on-line K-means but in the conditional case and when the output space is the space of word sequences. We find that this model is less prone to collapse and works consistently better than Soft-MoE and VAE on three different language pairs using state-of-the-art architectures and large scale datasets.\\n\\nFinally, we'd like to emphasize that compared to vanilla K-means in R^d, there are more subtle factors here that are important for the model to be optimized well. For example, we found that implementation details such as the amount of parameter sharing among experts and how dropout is applied can dramatically influence how a model is going to use its latent variable.\\n\\nOverall, this paper contributes a) a simple (K-means like) model that works on a variety of datasets and outperforms existing approaches by quite some margin, b) a novel evaluation protocol assessing both quality and diversity of generations, and c) code to reproduce all of the empirical findings. We truly believe these are valuable contributions to the community working on text generation, and our model is an important baseline for future work. 
\\n\\nWe hope through the above discussions the reviewer can appreciate the merits of this work. We will improve clarity and make the suggested changes, and we are always available for further discussions.\"}",
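To make the hard-EM objective in the response above concrete, here is a minimal, self-contained PyTorch sketch of one training step for the K-means-like simplification the authors describe (a shared encoder, K linear expert heads, and a Gaussian likelihood up to a constant). It is an illustrative toy under those assumptions, not the authors' NMT implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for K experts p(y|x,z): a shared encoder plus K linear heads.
K, d = 4, 16
encoder = nn.Linear(d, d)
heads = nn.ModuleList([nn.Linear(d, d) for _ in range(K)])
opt = torch.optim.SGD(list(encoder.parameters()) + list(heads.parameters()), lr=0.1)

def nll(k, x, y):
    # Gaussian negative log-likelihood up to a constant: squared error of head k.
    return ((heads[k](encoder(x)) - y) ** 2).mean()

def hard_moe_step(x, y):
    with torch.no_grad():  # "E-step": K cheap forward passes, no grad storage
        z_star = min(range(K), key=lambda k: nll(k, x, y).item())
    opt.zero_grad()
    loss = nll(z_star, x, y)  # "M-step": backprop only through the argmin expert
    loss.backward()           # loss = min_z -log p(y|x,z), up to constants
    opt.step()
    return z_star

x, y = torch.randn(8, d), torch.randn(8, d)
print(hard_moe_step(x, y))
```

Note that in this sketch the shared encoder receives gradient on every step even when a head is rarely selected, which is the property the parameter-sharing discussion appeals to.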
"{\"title\": \"To AnonReviewer4 (continued)\", \"comment\": \"Answers to specific questions:\\n1. Modeling uncertainty.\\nBy uncertainty in the output distribution we mean the fact that there are multiple plausible translations of the same sentence (hence, uncertainty in what to predict). \\nThe baseline model leaves all the uncertainty in the decoder distribution p(y|x) and it is hard to search for multiple modes with this. Instead, we introduce a latent variable to capture some of the uncertainty. If successfully learned, p(y|x,z) is going to have less uncertainty and we can better explore multiple modes of p(y|x) by first sampling/enumerating z and then greedily decoding p(y|x,z). Our objective and training algorithm are designed for this purpose, as discussed above.\\n\\n2. which gating\\n It's the gating function p(z|x;theta) of Soft-MoE (Sec. 3).\\nWe will clarify in the revision.\\n\\n3. \\\"While they showed improvements due to the regularization effect of the Monte Carlo gradient estimate\\u201d\\nThe VAE objective is L(theta,phi;x)=E_{q(z|x;phi)}[-log p(x|z;theta)] + D_{KL}(q(z|x;phi)||p(z)), the expected reconstruction error plus a KL regularizer of the posterior. The latter is directly calculated, and the former can be optimized via the reparameterization trick and Monte Carlo gradient estimate (Kingma and Welling, 2013). Applying these to variational NMT (we need to replace x with y and condition on x, since VAE is a model of p(x) and here we're modeling p(y|x)), it collapses and the posterior is always the same as the prior so their KL divergence is 0 (Bowman et al., 2016). Comparing the remaining term E_{z~N(0,I)}[-log p(x|z;theta)] (or -log p(y|x,z;theta)) with traditional loss -log p(x;theta) (or -log p(y|x;theta)) that doesn't involve a latent variable, the MC gradient estimate for the former may bring some regularization effect. So, we are referring to the fact that although latent variables in variational NMT may not be used, they are still useful in the sense that at training time sampling noise may have a regularization effect in the overall model. We briefly mentioned this in the introduction as it's not the main part of this paper.\\n\\n4. \\u201cif you aim to have p(y|x,z) high for a single latent variable at a time, you are implicitly saying that every x has at most (or rather exactly) K translations with non-negligible probability. Is that sensible? \\u201d\\nGoing back to K-means, a priori there is no right way to pick K; the same applies in our case. This is admittedly a crude modeling assumption; however, our experiments show that this model does work better than models that are equipped with richer internal representation (like variational NMT) but more prone to degeneracies that ignore the latent variable. We leave to future work exploring more effective modeling choices.\"}",
"{\"title\": \"Good direction, though problematic assumptions\", \"review\": \"# Summary of model\\n\\nThe paper proposes a mixture model formulation of NMT where the mixing coefficients are uniform and fixed. The authors then proceed to derive a lowerbound on the marginal likelihood \\n\\np(y|x) = \\\\sum_z p(z)p(y|x,z) > 1/K \\\\max_z p(y|x,z)\\n\\nby picking the component z for which the joint likelihood is maximised. With a uniform p(z) this clearly selects the z for which the conditional p(y|x,z) is maximum. I use strictly greater here because p(z) > 0 and p(y|x,z) > 0 for every z.\\n\\nThe loss L(\\\\theta|x,y) for an observation (x,y) is \\\\min_z - \\\\log p(y|z,x; \\\\theta)\\nwhose gradient with respect to NN parameters (theta) is \\\\grad_theta \\\\log p(y|z,x; \\\\theta) for the component z that minimises the negative log-conditional and 0 for every other component, thus while this requires K forward passes (to solve \\\\min_z), it only takes 1 backwards pass.\\n\\n# Discussion\\n\\nI appreciate model-based (as opposed to search-based) attempts to improve diversity for generation tasks such as MT. Latent variable modelling aims at a more explicit account of the generative procedure, namely, the joint distribution, which can potentially disentangle and explain different modes of the marginal. Thus from that point of view, this paper points to an exciting direction. That said, in my view, the assumptions behind the proposed approach are not justifiable and some of the claims are simply not appropriate. Below I try to support this view.\\n\\nA stepping stone of this model is that p(y|x,z) must be \\\"large for only one value of z\\\" (as authors put it), and authors *assume* that will be the case. \\n\\nWhile the bound in equation (2) holds, whether or not p(y|z,x) turns out to be \\\"large for only one value of z\\\", it will be a very loose bound unless that happens. \\n\\nThe key point is that one cannot *assume* it to be the case. One could perhaps *promote* it to be the case, but there's no aspect of the model formulation (or objective) that promotes such behaviour.\\n\\nBackpropagating through whichever component happens to assign the largest likelihood does not guarantee (nor encourages) the other conditionals to *independently* end up going to zero. \\n\\nGiven the level of parameter sharing, I'd even consider the possibility that the exact opposite happens. As authors put it themselves \\n\\n\\\"Instead, by sharing parameters, even unpopular experts receive some gradients throughout training.\\\"\\n\\nIt's true they do, but they are being updated on the basis of the unilateral opinion of the selected component about the likelihood of the data.\\n\\nNote that the true posterior p(z|x,y) is exactly proportional to the likelihood, as the prior is *uniform and fixed*:\\n p(z|x,y) \\\\propto 1/K p(y|x,z) \\\\propto p(y|x,z)\\nThis means that the authors expect the likelihood to do component allocation on its own. That is, the conditionals p(y|x,z=1), ..., p(y|x,z=K) must somehow coordinate themselves in making good use of the latent components. Without any mechanism to promote \\\"competition\\\" (in the parlance of Jacobs et al 1991), I don't see how this can work.\\n\\nAlso, the paper claims to model uncertainty, if I take the posterior to fulfil this claim, then I'm just left with a likelihood (again, due to uniform prior). 
In any case, a notion of uncertainty here would be conditioned on a point estimate of the network's parameters and should thus be worded carefully.\\n\\n# Clarifications\\n\\n1. \\\"we aim to explicitly model uncertainty during training\\\" can you make a case for where that happens in your model?\\n\\n2. \\\"prevents the gating from training well and the latent variable embeddings from specializing\\\" which gating?\\n\\n3. \\\"While they showed improvements due to the regularization effect of the Monte Carlo gradient estimate\\u201d. I find it strange to talk about the \\u201cregularisation effect\\u201d of a gradient estimate, perhaps you can be a bit more precise here? Or perhaps you are referring to some specific component of the objective function whose gradient we are estimating via MC and perhaps that component may have some regularisation effect.\\n\\n4. if you aim to have p(y|x,z) high for a single latent variable at a time, you are implicitly saying that every x has at most (or rather exactly) K translations with non-negligible probability. Is that sensible? \\n\\n# Pros/Cons\\n\\nPros\\n\\n* simple: the approach presented here requires no significant changes to otherwise standard architectures, it instead concentrates in a change of objective and training algorithm.\\n* assessment of variability in translation: this paper proposes to use BLEU and a corpus of multiple references in an interesting (potentially novel) way. \\n\\nCons\\n\\n* problematic assumptions: e.g. posterior will turn out sparse without any explicit way to promote such behaviour\\n* unrealistic claims: e.g. modelling uncertainty\\n* imprecise use of technical language: some technical terms are not used in their strictly technical sense (e.g. uncertainty, degeneracy), some explanations employ loosely defined jargons (e.g. regularisation effect of the gradient estimate)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
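The reviewer's point that the exact posterior reduces to a normalized likelihood under a uniform, fixed prior is easy to see numerically; the sketch below (with made-up log-likelihoods) also shows the one-hot assignment that hard-EM substitutes for it.

```python
import numpy as np

# Under p(z) = 1/K the exact posterior is the normalized likelihood:
# p(z|x,y) \propto p(y|x,z). Hard-EM replaces it with a one-hot argmax.
log_lik = np.array([-12.3, -15.1, -14.0, -20.7])  # hypothetical log p(y|x,z)

soft = np.exp(log_lik - log_lik.max())
soft /= soft.sum()                 # soft-EM responsibilities == true posterior

hard = np.zeros_like(soft)
hard[log_lik.argmax()] = 1.0       # hard-EM assignment used by the paper
print(soft.round(3), hard)
```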
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the feedback and comments.\\n\\n\\u201cwhether it is sufficient to feed the latent variable embedding only once\\u201d\\nIn addition to feeding the latent variable embedding once as the beginning-of-sentence (<bos>) token, we also tried other model architectures, including adding or concatenating the latent variable embedding with the input word embeddings at each time step, and injecting it into each decoder layer. We found that they have similar results, therefore we adopt the simplest replacing <bos> strategy. We hypothesize that the latent variable's effect does not dilute across long output sequences because the decoder in Transformer has a self-attention mechanism. When generating each word it can always attend to the latent variable directly without being affected by distance.\\n\\n\\u201cEffect of initialization\\u201d\\nIn our experiments, we randomly initialize all weights following the reference Transformer implementation, and our findings are quite consistent across random seeds. As the reviewer conjectures, the particular choice of parameter sharing between experts is key to avoid the common failure mode of one expert taking over all the others, as their parameters are always updated even when they are not selected for a particular input example. \\n\\n\\u201cWhat is the training time and computational resource requirements? Are multiple DGX-1s running in parallel required to train the model?\\u201d\\nTraining the Hard-MoE model with 10 states takes about twice as long as training the baseline without latent variable. We train the big WMT En-De model with 10 latents in ~3.5 hours on 128 GPUs. An equivalent model can be trained on a single machine with 8 GPUs in ~1.5 days (due to faster inter-GPU communication), or with 1 GPU in about a week.\\n\\n\\u201cStructure captured by the latent variable?\\u201d\\nThe reviewer is correct that in the proposed approach there is no constraint on what each latent value represents.\\nAs a result the different translation styles captured by each latent value are often mixed and have no clear structure. From Sec. 5.5 and Table 4 we can see that z=1 captures past tense, \\u201cthat\\u201d and \\u201cper cent\\u201d, while z=3 captures present tense, \\u201cthis\\u201d and \\u201c%\\u201d, for instance. We are working towards adding a similar analysis on the En-De dataset, following your suggestion. \\nIn general, we would like to investigate models with richer and more structured latent representations in future work.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"\\u201cgenerating the diverse translations for machine translation problem may not so important and piratically in actual scenarios\\u201d\\nMT systems are approaching human level performance on several language pairs (see recent results of WMT competitions, for instance). Improving translation quality remains a big challenge on low resource languages, but not as much on high resource languages such as those considered in this work. Instead, providing the user with a diverse set of translations that capture different translation styles (e.g., formal/informal, literal/not literal) and information asymmetry between different languages (e.g., when translating from a language without tense to a language that requires tense specification) may become the next important feature of MT systems.\\nMore generally, the study of how to model uncertainty is key to the advance of AI systems in non-deterministic settings and for prediction tasks that are inherently multi-modal. The MT task serves as a good test bed application for this. See the first two paragraphs of the introduction.\\n\\n\\u201c1) The only modification for this work is to make the soft probability of p(z|x) to be 1/K.\\u201d\\nThis statement is not correct. As we explain in Sec. 3.1, in addition to the change in prior we also introduce a new training procedure that includes inference of the latent variable and selectively backpropagating through only the optimal latent variable assignment.\\n\\n\\u201c2) ... the beam original beam search is good enough from the results in Table 1\\u201d\\nThis statement is not correct. Comparing the \\u201cpairwise BLEU\\u201d (lower numbers indicate more diversity) and \\u201c#refs covered\\u201d columns in Table 1, we show that beam search is about 20 (pairwise) BLEU points less diverse than our approach! The qualitative results in Table 3 further illustrate the significant improvement in diversity of our approach compared to beam search.\\n\\n\\u201c3) In table 2, what means k=0 for the BLEU score?\\u201d\\nIn this experiment, we use different models to generate 2 hypotheses and compute the BLEU score of the first generated hypothesis (k=0) and the second generated hypothesis (k=1) w.r.t. the reference respectively. We have changed the notation to make this clearer, see new table 2.\\n\\n\\u201c4) I want to indicate that the purpose of VAE approach related to this work is to increase the model performance...\\u201d\\nA general discussion about the purpose of latent variable models is beyond the scope of this rebuttal. \\nIn short, latent variables are one major and principled approach to describe uncertainty of distributions, and VAE is a density estimation framework equipped with latent variables. The conditional distribution we aim to model has uncertainty (as there are several plausible ways to translate a source sentence). Such uncertainty is partly captured by the output distribution of the decoder and partly modeled by the stochastic latent variables.\\nThe potential advantage of VAEs may stem from the implicit regularization induced by the use of latent variables at training time (which is what the reviewer seems to be referring to) or from the better modeling of the underlying uncertainty (which is what we are studying in this paper). \\n \\n5) Relation to \\u201cSequence to Sequence Mixture Model for Diverse Machine Translation\\u201d\\nThis paper is concurrent to ours and was not available at submission time. 
They also consider the same problem and propose a similar approach. The major differences are:\\na) we select only one expert while they use all of them. This means that their model is much more expensive in terms of memory and computation at training time; in fact for larger number of latent states they also propose to select one latent value but they do so by sampling as opposed to via minimization.\\nb) the parameterization is different.\\nc) we study the collapses of latent variable models and provide insights on how to prevent these failures.\\nd) their evaluation is limited to the small IWSLT dataset and to small baseline models, while we use the much bigger WMT datasets, state-of-the-art baselines and we leverage multiple references for each source sentence in our evaluation. Our paper proposes metrics and reports a more in depth analysis of diversity both quantitatively and qualitatively. \\nWe have added a reference to this paper in the revised version. Thank you for the suggestion.\"}",
"{\"title\": \"Response to AnonReviewer3 (continued)\", \"comment\": \"\\u201cBy putting the model in evaluation mode during minimization we also speed up training and reduce memory consumption, since the K forward passes have no gradient computation or storage.\\u201d\\nLearning a Soft-MoE model with K experts requires K-1 times more work in both the forward and backward pass compared to a baseline (single expert) model. In contrast, our Hard-MoE model requires K times more work in the forward pass (to choose z), but the backwards pass is the same work as the baseline (see Algorithm 1). Moreover, the K forward passes can be very efficiently parallelized since they do not require storing any intermediate values for backpropagation and therefore require less GPU memory (e.g., using the torch.no_grad option in PyTorch).\\nWith K=10, the training time of our model is roughly twice that of a baseline model on the same hardware. At test time we can generate K hypotheses from each value of the latent variable in parallel via greedy decoding.\\n\\n\\u201cSmall/big value of K\\u201d\\nIn this paper we aim at generating order 10 hypotheses, as we think that's a reasonable amount to present to users in practical applications, while still capturing the most significant diversity. This also matches the number of references we have available for evaluation. We leave to future work scaling to a much larger number of states and properly evaluating in that setting. Note that although competing approaches like VHRED or Variational NMT have the potential to model a larger variety of hypotheses by introducing continuous latent variables, they actually fail to use the latent variable and cannot generate different hypotheses from different values of z. It seems that the benefits of those approaches are mostly due to their implicit regularization (due to the addition of noise in the latent space), more than better modeling ability.\\n\\n\\u201cParameter sharing\\u201d\\nWe share all parameters among the experts, except that each expert has a unique beginning-of-sentence embedding in the decoder (see paragraph 2, Sec. 3.2). We also tried other parameterizations, such as to add or concatenate the latent variable's embedding with the input word embeddings at each time step, and to inject it into each decoder layer, but found similar results. Therefore we adopt the simplest approach described above.\\n\\n\\u201c#ref covered\\u201d\\nWe could divide all human translations into half and half, use one set as reference and the other as hypothesis to compute the coverage number. However this approach doesn't make full use of all human references, and different divisions lead to different numbers. Besides, \\u201c#ref covered\\u201d serves the same purpose as \\u201cpairwise BLEU\\u201d, both of which measure diversity. We therefore consider \\u201cpairwise BLEU\\u201d as a more direct metric for comparing both the diversity of human references and the diversity of system hypotheses. \\n\\u201cVariational NMT\\u201d\\nVariational NMT fails to use the latent variable on the IWSLT dataset. It degenerates to the baseline NMT model and cannot generate different hypotheses (see table 2) unless we use diverse decoding strategies (sampling, beam, diverse beam). 
Therefore we did not test it on the larger WMT datasets.\\n\\n\\u201cMissing citation\\u201d\\nThe beam search diversification heuristic proposed by Li and Jurafsky (2016) is outperformed by diverse beam search (Vijayakumar et al., 2016), which is a baseline in our paper. It's indeed relevant and we have added a reference to it in our updated version.\"}",
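The test-time procedure described above (one greedy hypothesis per latent value, batched across z) can be sketched as follows; `model.encode`/`model.decode` and `bos_ids` are assumed interfaces for illustration, not any particular toolkit's real API.

```python
import torch

def decode_all_latents(model, src, K, max_len, bos_ids, eos_id):
    # One batched greedy decode per latent value z = 0..K-1.
    enc = model.encode(src)                    # (1, S, d) for one source sentence
    enc = enc.repeat(K, 1, 1)                  # share the encoding across the K rows
    ys = torch.tensor(bos_ids).unsqueeze(1)    # (K, 1), one latent-specific <bos> per z
    for _ in range(max_len):
        logits = model.decode(enc, ys)[:, -1]  # (K, V) next-token scores
        nxt = logits.argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, nxt], dim=1)
        if (ys == eos_id).any(dim=1).all():    # stop when every row has emitted <eos>
            break
    return ys                                  # K diverse hypotheses in one batched pass
```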
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the feedback and comments. We address each of them in turn:\\n\\nThe VHRED model for dialogue (Serban et al., 2017) has a Gaussian latent variable for each utterance given the context and is trained with the VAE objective. Applying this model to machine translation is equivalent to having a latent variable for the target sentence given the source--- which is exactly the variational machine translation baseline (Zhang et al., 2016) we compare to (see Table 2). Our method is not limited to MT and can be applied to other text generation tasks such as dialogue and image captioning. In this work we choose MT because this is an application where the importance of modeling uncertainty has been underestimated so far. People often think that source and target sentences should be semantically equivalent, but neglect the different translation styles (e.g., formal/informal, literal/not literal) and information asymmetry between different languages. For example, Chinese has no tense, while English requires tense specification, and our experiments show that the latent variable captures this phenomenon (Sec. 5.5). More examples include grammatical gender, honorifics, etc. We believe it's important to provide such a variety of translations, which is an unresolved problem in current MT systems. Moreover, compared to dialogue and other applications, MT has a more widely established metric, BLEU, which enables us to do systematic evaluation against a set of reference translations.\\n\\nThe reviewer is right that when computing average oracle BLEU among human references, each reference is evaluated against other N-1 references, while each system hypothesis is evaluated against N references, which could put the score for human at a disadvantage. We have added a note about this in the updated version.\\nWe also conducted a leave-one-out \\u201caverage oracle BLEU\\u201d evaluation for system hypotheses, i.e. each hypothesis is evaluated against all N-1 subset of N references, followed by averaging. This is equivalent to pair each hypothesis to its best matching reference N-1 times, and its second best once. On the WMT\\u201917 Zh-En dataset for which we only have 3 human references (and therefore, the two evaluations will differ the most), the results are:\\n against all refs (as in the paper) leave-one-out\\n Sampling 17.7 15.9\\n Beam 30.7 27.8\\n Diverse beam 28.6 25.8\\n Hard-MoE 28.9 26.1\\n Human 33.6\\nWe can see that the relative rankings between the models stay the same, this different evaluation just slightly change the absolute value of this metric. Therefore, all our conclusions hold the same.\"}",
"{\"title\": \"Interesting but somewhat incremental approach; related work a bit weak\", \"review\": \"The authors aim to increase diversity in machine translation using a multinomial latent variable that captures uncertainty in the target sentence. Modeling uncertainty with latent variables is of course relatively common in ML, and this work has similarities with latent variables models for MT [Zhang et al., 2016] and for other generation tasks such as dialogue [Serban et al., 2017; etc.]. The key difference is that the authors here use a Mixture of Expert (MoE) approach while most relevant prior works use variational approaches. Experiments show improvements in diversity over variational NMT [Zhang et al., 2016] and decoding-time approaches (e.g., diversity constraints [Vijayakumar et al., 2016]).\\n\\nOverall, the proposed approach (hard-MoE) is well motivated and the experimental results are relatively promising. I think the authors did a good job analyzing and justifying their approach against the soft version of their model (i.e., soft-MoE causes experts to \\u201cdie\\u201d during training) and variational alternatives (i.e., variational approaches often have failure modes where the latent variable is effectively ignored.) \\n\\nHowever, I find related work a bit weak because the problem of producing diverse output has been a much bigger focus in tasks other than MT, such as dialogue and image captioning. The paper glosses over related approaches on these tasks, but the need to model uncertainty for these other tasks is much bigger since source and target are usually not semantically equivalent. So it would have been nice to see argumentative (or even empirical) comparisons with popular models such as VHRED for dialogue [Serban et al., 2017], as many of these models are not intrinsic to either MT or dialogue (the only aspect specific to dialogue in VHRED is context, but it can be set to empty and thus VHRED could have been used as a baseline in the paper.) It would be interesting to compare the work against Serban et al. [2017]\\u2019s justification for using a latent variable, which is quite different (see their bit on \\u201cshallow generation\\u201d, and the idea that their latent variable encapsulates \\u201cthe high-level semantic content of of the output\\u201d). \\n\\nOne technical caveat is that there appears to be some inconsistency in the comparison between human and systems in Table 1. If N is the number of references, then systems are evaluated on N references while the human \\u201csystem\\u201d on only N-1 because of leave-one-out. While this difference might have less of an impact on \\u201caverage oracle BLEU\\u201d than standard BLEU, having one less reference might still penalize the human \\u201csystem\\u201d, and this might partially explain why \\u201cbeam search\\u2019s average oracle BLEU is fairly close to human\\u2019s average oracle BLEU\\u201d. The right thing to do would be to evaluate both human and all systems in a leave-one-out approach (i.e., let references [r1 \\u2026 rN] and systems [s1 \\u2026 sM], then evaluate each element of [s1 \\u2026 sM r1] on references [r2 \\u2026 rN], etc.). 
In that manner, all the \\u201csystems\\u201d including human are consistently evaluated on *exactly* the same references.\", \"minor_comments\": \"\\u201cBy putting the model in evaluation mode during minimization we also speed up training and reduce memory consumption, since the K forward passes have no gradient computation or storage.\\u201d In other words, does this mean the algorithm is easy to *parallelize* because sharing parameters is often what kills the effectiveness of parallelized SGD and variants? If so, \\u201cparallelizing\\u201d is key word to mention here otherwise I don\\u2019t see how we can speed that up by increasing K.\", \"figure_2\": \"performance drops with K approaching 20. What happens with K=50 or 100 or more? This is a bit of a concern because (1) larger K could require a massive amount parallelization and (2) competing approaches such as VHRED can handle latent variables with higher capacities.\", \"practical_considerations_subsection_is_too_vague\": \"parameter sharing is not formally/mathematically explained and the work could be hard to reproduce exactly (as there are often different ways to share parameters).\\n\\nWhy no \\u201c#ref covered\\u201d for human in Table 1, and why no comparison with Variational NMT? Zhang et al [2016] is the most talked about competing model, so it should probably be evaluated on both settings.\", \"missed_reference\": \"Mutual Information and Diverse Decoding Improve Neural Machine Translation.\\nJiwei Li, Dan Jurafsky. https://arxiv.org/abs/1601.00372\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for Diverse MT with a Single Multinomial Latent Variable\", \"review\": \"This paper studies the diverse text generation problem, specifically on machine translation problem. The authors use a simple method, which just using a single multinomial latent variable compared with previous approaches that using multi latent variables. They named the approach: Hard-MoE. They use parallel greedy decoding to generate the diverse translations and the experiments on three WMT datasets show the approach make a trade-off between diversity and quality.\\nIn general, I think generating the diverse translations for machine translation problem may not so important and piratically in actual scenarios. In fact, how to generate fluent and correct translations is more important. \\n\\nFor the details, there are some problems. 1) The only modification for this work is to make the soft probability of p(z|x) to be 1/K. The others are several experimental studies. To be an formal ICLR paper, this may not be interesting enough to draw my attention. 2) In case of the results, though the authors claimed they achieved better trade-off between diversity and quality, in my opinion, the beam original beam search is good enough from the results in Table 1. 3) In table 2, what means k=0 for the BLEU score? 4) I want to indicate that the purpose of VAE approach related to this work is to increase the model performance w.r.t. the BLEU score instead of the diversity, same as the original MoE method. 5) There are some related works to this work, but their methods are also very effective in terms of the BLEU score, e.g., the author can check this one in EMNLP this year: \\u201cSequence to Sequence Mixture Model for Diverse Machine Translation\\u201d. Authors may need a more discussion between those works and this work.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting contribution\", \"review\": \"This paper proposes a sequence to sequence model augmented with a multinomial latent variable. This variable can be used to generate multiple candidate translations during decoding. This approach is simpler than previous work using continuous latent variables or modifying beam search to encourage diversity, obtaining more diverse translations with a smaller drop in translation accuracy.\", \"strengths\": [\"Simple model that succeeds in achieving its goal of generating diverse translations.\", \"Provides insights into training models with categorical latent variables.\"], \"weaknesses\": \"- More insight into what the latent variable is learning to represent would strengthen the paper. \\n\\nWhile the model is simple, its simplicity has significant strengths: In contrast to more complex latent space, the latent variable assignments can be enumerated explicitly, which enables it to be used to control the generation and compare outputs. The simplicity of the model will force the latent variable towards capturing diversity - modelling uncertainty in how to express the output rather than uncertainty in the content. \\n\\nOne question about the model architecture is just whether it is sufficient to feed the latent variable embedding only once, as it effect might be diluted across long output sequences (as opposed to, say, feeding the latent variable at each time step). \\n\\nThe paper provides some interesting insights, such as the need to do hard EM-style training and turning off dropout when inferring the best latent variable assignment during training, to avoid mode collapse. \\n\\nWhat is the effect of initialization? This often has a large impact in EM-style training, and could also lead to mode collapse, though in this case the restricted parameterization might prevent that. \\n\\nWhat is the training time and computational resource requirements? Are multiple DGX-1s running in parallel required to train the model?\\n\\nWhat is not clear enough from the paper is what kind of structure the latent variables learn to capture. In particular this model is not biassed towards any explicit notion of the kind of diversity one would like to learn. While there is some qualitative analysis, further analysis would strengthen the paper. \\n\\nOverall this is a very interesting contributions that offer useful insights into designing controllable sequence generation models.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkxn7nR5KX | Incremental Few-Shot Learning with Attention Attractor Networks | [
"Mengye Ren",
"Renjie Liao",
"Ethan Fetaya",
"Richard S. Zemel"
] | Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many real applications, it is often desirable to have the flexibility of learning additional concepts, without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall performance of both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of the attractor network regularizer. We demonstrate that the learned attractor network can recognize novel classes while remembering old classes without the need to review the original training set, outperforming baselines that do not rely on an iterative optimization process. | [
"meta-learning",
"few-shot learning",
"incremental learning"
] | https://openreview.net/pdf?id=rkxn7nR5KX | https://openreview.net/forum?id=rkxn7nR5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bke5fit-gN",
"H1eAV3aK1E",
"HJxkPjEY1V",
"B1g66Xr9Cm",
"BylWUetOAQ",
"rkgRfJ576X",
"S1xaiCK76X",
"SJe0IAYmTX",
"BJxKuIEahm",
"Sye3gf7ahX",
"B1lJPA1hhQ",
"BJlnllqsn7",
"Hyeike3c3X",
"Skgbo1iQ37",
"B1ghUyjmn7",
"HylWwmBZh7",
"BklxABhTi7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1544817426054,
1544309814211,
1544272727226,
1543291844946,
1543176265293,
1541803797635,
1541803684631,
1541803606349,
1541387888828,
1541382644316,
1541303895069,
1541279732333,
1541222371244,
1540759449487,
1540759380000,
1540604760829,
1540371911790
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1396/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1396/AnonReviewer1"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an approach for incremental learning of new classes using meta-learning.\", \"strengths\": \"The framework is interesting. The reviewers agree that the paper is well-written and clear. The experiments include comparisons to prior work, and the ablation studies are useful for judging the performance of the method.\", \"weaknesses\": \"The paper does not provide significant insights over Gidaris & Komodakis '18. Reviewer 1 was also concerned that the motivation for RBP is not entirely clear.\\nOverall, the reviewers found that the strengths did not outweigh the weaknesses. Hence, I recommend reject.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}",
"{\"title\": \"Thanks for the rebuttal. But the response is kinda poor.\", \"comment\": \"Dear authors,\\n\\nI read through your feedback and have a quick scan of other reviews. Thanks for your update. I really appreciate. I would keep the score unchanged (5: weak reject).\\n\\n[Novelty]: I don't view the differences you mentioned in my response, alongside with that in other two reviewers (training base data used in CVPR vs not used in yours, etc) as a \\\"very big difference\\\" to call it novelty.\\n\\n[Experiments]: the question as to why not use other regularizers other than the proposed one is not answered. \\\"Starting from the first line in Section 3.3, \\\"since there is no closed-form of the regularizer in Eqn (13)\\\", E needs BPTT or the introduced recurrent BP. This part is simply a re-adaption of other algorithms. A very simple question is, how about use other regularizers to replace Eqn (13)?\\\"\\n\\n[Applying RBP]: What I mean is that your work is: attention with incremental learning + RBP, right? The CVPR work is kinda the first thing only (attention with incremental learning). How about you apply the RBP idea into the CVPR work and see if there is some improvement. They open-sourced the code (not sure tf or pytorch or others) and do need some time to incorporate into their work though.\\n\\n[Motivation of RBP and Organization of the paper] I noticed you changed the related work part about the comparison with CVPR paper. This is good.\\n\\nAs regard to the motivation of the proposed regularization, you mentioned, in the new manuscript, that\\n\\\"Since in our settings the model cannot see base class data in each few-shot learning episode, different from\\nHariharan & Girshick (2017); Wang et al. (2018), it is challenging to jointly classify both base and\\nnovel categories using a vanilla logistic regression. Towards this end, we propose to add a learned\\nregularizer, which is learned by differentiating through few-shot learning iterations. \\\"\", \"so_here_is_how_the_logic_goes_as_fas_as_i_understand\": \"challenging to classify both base and novel classes -> a new regularizer is needed -> adopt the RBP (which is not a proposed method but rather you applied existing one).\\n\\nWhy is a regularizer needed if learning many classes is challenging? I don't see a strong motivation here.\\n\\n----\", \"sum_up\": \"1. As a research paper, novelty is quite limited in the paper even though authors try to explain the difference with previous method.\\n\\n2. Motivation of RBP part is not clear. This one should be the highlight of your paper and I don't see such a change in the paper organization after rebuttal.\\n\\n3. Some part (figures, notations, experiments) are not clear. Probably you have already revised them. I don't check this in details though. This negative impression is witnessed from other comments in this forum.\"}",
"{\"title\": \"More insights may help to improve the novelty\", \"comment\": \"Thanks for the clarification. There is no misunderstanding during the original review.\\n\\nThe meta-learning happens in the second stage of the framework, and it involves the data in the first stage, as the query Q_{a+b} contains old classes, quoted below\\n\\n\\\"During meta-learning, \\u0012E are updated to minimize an expected loss of the query set\\nQa+b which contains both old and new classes, averaging over all few-shot learning episodes.\\\"\\n\\nSo an interesting extension would be to study whether it is also possible to use only the learned feature extractor and classifier from the first stage.\\n\\nThis framework is quite interesting but it appears incremental given Gidaris & Komodakis, CVPR'18, though several modifications are proposed. An interesting extension would be to provide new insights into the framework, say, on how it attends to old classes, when it would fail, and on the assumption of the relatedness of old data (D_a) and new data (D_b).\"}",
"{\"title\": \"Added experiments using T=200\", \"comment\": \"We would like to once again thank the reviewer for the insightful comments. As promised, we have added experiments with T=200 for BPTT baselines (which is 10x longer than the RBP steps). Similar to what we have seen in T=100, we observe a large degradation of performance when solving until convergence. This shows that the introduced RBP algorithm is a modular way to learn energy functions that are less sensitive to how they are minimized in the forward computation.\"}",
"{\"title\": \"Key difference from low-shot learning papers\", \"comment\": \"Once again we thank the reviewer for pointing out the related work on low-shot learning (CVPR18 & ICCV17). We are closely studying them and planning to incorporate their dataset into our experiments. However, we would like to re-emphasize that, one of the key differences between \\\"low-shot learning\\\" and our \\\"incremental few-shot learning\\\" is that \\\"low-shot learning\\\" has access to the training data of base classes during the few-shot learning stage, whereas our \\\"incremental few-shot learning\\\" does not. This makes our problem setup much more challenging, and also more practical since the model does not need to carry the full training data with it. We hope that this addresses the potential misunderstanding.\"}",
"{\"title\": \"Author Response\", \"comment\": [\"Thank you for the review. We are currently revising the paper and will incorporate your helpful suggestions (to add discussion to CVPR work, add BPTT with larger T, and highlight the RBP algorithm).\", \"Novelty: First, we would like to address the novelty issue. Although both our work and the CVPR paper uses attention mechanism, the two methods are actually very different. The new weights in their method are based on Prototypical Networks, i.e. simply averaging the embedding. We however optimize the weights on the new task and backprop through the optimization, which is a challenging step in learning our model. The attention mechanism is also formulated differently. Whereas the attended content is used as multiplicative gating in their work, we used it as an additive energy term in the overall objective function to optimize.\", \"Applying RBP to the CVPR work: The CVPR work is based on a Prototypical Network which computes weights for the novel classes in a single layer, and regular backpropagation is sufficient. Since there is no iterative optimization involved, we do not see anything that allows us to apply RBP to the CVPR work, or any need for it.\", \"Motivation of RBP: Since we have an iterative optimization procedure in the model, directly differentiating through the procedure is not straightforward. Also, as shown in the experiment, regular backprop through time does not learn a stable objective function. Prior work focus on the case where there is a closed form solution (Bertinetto et al. 2018), where RBP allows us to backprop through any converging optimization layers, which is more general.\", \"Best WD: Yes, in that experiment, no weight decay is needed (although adding a small amount of WD does not hurt the performance). In the other experiments, we found a small amount of weight decay (1E-5) helped.\", \"BPTT with larger T: Thank you for the suggestion. We are currently adding more experiments that use a larger T for BPTT and will update the paper with the latest results.\"]}",
"{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for the comments and pointing out related work. We are revising our paper and adding the discussion of these and other relevant papers. In response to one of the public comments, we have compared our approach to these two papers:\\n\\nThe ICCV 2017 paper proposes the SGM loss, which makes the learned classifier from the few-shot examples have a smaller gradient value when learning on all examples. The CVPR 2018 paper proposes the prototypical matching networks, a combination of prototypical network and matching network. The paper also adds hallucination, which generates new examples.\\n\\nIn contrast to these approaches, we directly learn a logistic regression classifier during the few-shot episode, which is very simple and straightforward. Although vanilla logistic regression has been shown to be worse in these prior work (since the logistic regression cannot see old data), we found that it can be improved significantly by differentiating through the few-shot learning iterations, taking into account the additional regularizer..\\n\\n- Uniform samples: We also would like to emphasize that, in the learning of novel classes, the base class data is *not* available, thus making the problem very challenging. Therefore, the proposed \\u201cnaive baseline\\u201d which samples a mini-batch uniformly over novel and base classes, will not be comparable to the new approach introduced in the paper, which does not rely on reviewing the old data.\\n\\n- Early stopping: Since we are learning an objective function that needs to be solved until convergence. Stopping early is possible but that relies on an external validation set, which might not be available since we do not have access to the old data when learning the novel classes.\\n\\nLastly, the reviewer is right that there is a trade-off between learning novel and remembering old classes. Getting better results on the novel class is is indeed possible but has the undesired effect, of catastrophic forgetting. In our setting of incremental few-shot learning the goal is to have the best performance on *both base and novel classes*. Hence we focus on the \\\\delta bar metric, and our method has a clear win on this crucial metric.\"}",
"{\"title\": \"Author Response\", \"comment\": [\"Thank you for the review. We would like first explain the novelty aspect of our paper.\", \"Novelty: Although both our work and the CVPR paper use an attention mechanism, the two methods are actually very different. The new weights in their method are based on Prototypical Nets, i.e. simply averaging the embedding. We however optimize the weights on the new task and backprop through the optimization, which is a challenging step in learning our model. The attention mechanism is also formulated differently. Whereas the attended content is used as multiplicative gating in their work, we used it as an additive energy term in the overall objective function to optimize.\", \"Secondly, there seem to be a couple crucial misunderstandings in the review. We will revise our paper to make sure that our points are clearly stated.\", \"Learning of novel classes needs old data: We are afraid that there might be a big misunderstanding. The whole incremental few-shot learning problem is set up so that reviewing old data is *not* allowed. Otherwise the problem can be very trivial to solve: just sample some old data and new data and train jointly. We believe that learning novel classes *without* reviewing old data is an important and challenging problem, especially learning it iteratively, since many models will run into catastrophic forgetting. We have shown that while BPTT does not perform well in this scenario, the proposed meta-learning algorithm can solve it.\", \"Learning of novel classes involves relearning U_k. During learning of novel classes, U_k is fixed and *not* re-learned. U_k is learned during the meta-learning stage, where the novel classes are subsampled from the training set classes (Train_B set). Also the size of U_k is the same as a fully connected softmax layer, which is quite small compared to all the parameters of a deep CNN model.\"]}",
"{\"title\": \"Response\", \"comment\": \"Hi, $U_k$ are learned as slow weights in the meta-training. Thanks!\"}",
"{\"comment\": \"Hi, I have the following confusion: How is the attractor $U_k$ for the base class which is stored in the knowledge base generated? Apologies if I missed something important.\\n\\nLooking forward to your reply. Thank you.\", \"title\": \"Question about the details\"}",
"{\"title\": \"The problem of incremental few-shot learning is interesting and the presented meta-learning method seems to be effective, but the novelty is limited.\", \"review\": \"This work addresses incremental few-shot learning that learns novel classes without forgetting old classes, which is interesting and different from conventional few-shot learning that considers only the few-shot learning task of interest. This problem is also related closely to the important problem of life-long learning.\\n\\nThis work presents an interesting framework based on meta-learning by learning to learn how to attend to the old classes using an attention mechanism. Experimental results also show improvement over two related works on incremental few-shot learning. The writing is quite clear. Some concerns, especially its novelty, are listed below. \\n\\n1. The novelty appears to be limited. The presented framework looks quite similar to the recent work \\n\\nSpyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. CVPR'18\", \"that_addresses_the_same_problem_in_a_similar_manner\": \"1) learn a base feature extractor and classifier; and then 2) attend to old classes also via meta-learning and attention mechanism.\\nAs mentioned by the authors, \\\"The main difference to this work is that we use an iterative optimization to compute W_b\\\". More discussions on the iterative optimization and why it matters may be helpful.\\n\\nAnother related work is \\\"Deep Meta-Learning: Learning to Learn in the Concept Space\\\", Arxiv'18, that also relies on an external base classes for few-shot learning. Similar to the proposed research, it also learns a feature extractor and a classifier from the base classes, which are used to regularize the learning of novel classes, in an end-to-end meta-learning manner. Extending it for the incremental setting seems natural. \\n\\n2. To learn a few novel classes, all U_k on old classes are relearned, which seems quite time-consuming with a large vocabulary of base classes.\\n\\n3. To learn a few novel classes, old data on base classes are still required, which seems different from how humans learn -- humans learn novel concepts solely from a few examples without forgetting old concepts, without requiring examples on old concepts.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Import discussions missing\", \"review\": \"This paper proposes a novel few-shot learning method that achieves better overall accuracies on base and novel classes. The key idea is to regularize the learning of novel classes such that base classes are not forgotten.\\n\\nI mainly have the following two concerns. \\n\\n-In Table 2, I observe that performance on novel classess is actually not improved. The main improvement lies in overall accuracy. As numbers of training samples between base and novel classes are not balanced, there must be some trade-off between obtaining better performance on base or novel classes. For instance, stopping early when training on novel classes would result in high base accuracy but low novel accuracy. Fine-tuning on novel classes for more iterations would lead to high novel accuracy but low base accuracy. Such trade-off can be also controlled by simply over-sampling novel or base classes. I would suggest the authors to study more on understanding this trade-off. In addition, another naive baseline is to train a softmax classifier at the second stage on both base and novel class training samples and sample mini-batch by uniformly sampling over novel and base classes. \\n\\n-The following two papers extensively studied the problem of achieving better overall accuracies on base and novel classes. Including comparison and discussion with those two papers will enhance this paper further. \\nLow-Shot Learning from Imaginary Data\\nlow-shot visual recognition by shrinking and hallucinating features\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Limited novelty and unclear motivation\", \"review\": \"The paper addresses the incremental few-shot learning problem where a model starts with base network and then introduces the novel classes, building a connection between novel and base classes via an attention module.\", \"strengths\": [\"clear writing.\", \"the experiments are compared with related work and the ablation studies can verify the effectiveness of the proposed (or \\\"introduced\\\" would be a precise term) recurrent BP.\"], \"weakness\": \"- [Novelty]\\nThe paper title is called attention attractor network, which shares very relevance to previous CVPR work (Gidaris & Komodakis, 2018). So the first thing I was looking for is the clear description of the difference between these two. Unfortunately, in related work, authors mention the CVPR work without stating the difference (last few lines in Section 2). As such, I don't see much novelty in the paper compared with previous work. Eqn. (7)-(10) explicitly describes the attention formula. What's the distinction from the CVPR work?\\n\\n- [Motivation of the regularizer using Recurrent BP is not clear]\\nThe use of recurrent BP is probably the most distinction from previous work. However, I don't see a clear description on why such a technique is necessary.\\n\\nStarting from the first line in Section 3.3, \\\"since there is no closed-form of the regularizer in Eqn (13)\\\", E needs BPTT or the introduced recurrent BP. This part is simply a re-adaption of other algorithms. A very simple question is, how about use other regularizers to replace Eqn (13)? \\n\\n- [Some experiments missing]\\nThe experiments section 4.6 uses a case of None and \\\"best WD\\\" to address some of my concerns. This is good. Does the \\\"gamma random\\\" indicates only E is used without the ||W||^2? why the best WD for one-shot is zero? This implies the model is best for applying no weight decay?\\n\\nWhat's the effect of using the recurrent BP technique to the CVPR work? Is there some similar improvement? If yes, then the paper makes some contribution by the regularization. If not, what's the reason?\\n\\nHow about using the truncated BPTT with a larger T?\\n\\nIn general, I think the recurrent BP part should be the highlight of the paper and yet authors fail to spread such a spirit in the abstract or title. And there are some experiments missed as I mentioned above.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the comment\", \"comment\": \"1) Thank you for your comments. We will add the discussion in our next version of the paper. Note that in our paper we compared to LwoF, which has better performance than the two papers mentioned above. We are planning to add experiments using the dataset proposed by Bharath & Girshick for more thorough comparison.\\n\\nThe ICCV 2017 paper proposes the SGM loss, which makes the learned classifier from the few-shot examples have a smaller gradient value when learning on all examples.\\n\\nThe CVPR 2018 paper proposes the prototypical matching networks, a combination of prototypical network and matching network. The paper also adds hallucination, which generates new examples.\\n\\nDifferent from these approaches, we directly learn a logistic regression classifier during the few-shot episode, which is very simple and straightforward. Although it has been shown to be worse in these prior work, we found that it can be improved significantly by backprop through the few-shot learning iterations to learn additional regularizer terms.\\n\\n\\n2) We think you might be mixing back-propagation through time (BPTT) commonly used to train recurrent neural networks with recurrent back-propagation (RBP). We are not trying to replace the SGD algorithm, but just proposing to use RBP to take the gradients. Typically, when training RNNs, people use backpropagation through time (BPTT), which unrolls the computation graph and takes the gradient. RBP is a different way of taking gradients, if the recurrent process converges to a fixed point. Here we found RBP is a better tool for learning the energy functions.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the questions.\\n\\n(1) h_tilde is the original hidden representation, and we augment it with an extra dimension with value=1.\\n\\n(2) When jointly testing base and novel classes, we quantify the drop in performance in each category, relative to testing the base and novel classes separately as follows:\\n\\nIf the Acc_A is base accuracy, Acc_B is few-shot accuracy, and Acc_joint is joint accuracy. Within Acc_joint, Acc_A\\u2019 is the base accuracy when tested jointly, and Acc_B\\u2019 is the few-shot accuracy when tested jointly. Then, \\\\delta_bar is computed as:\\n\\n\\\\delta_bar = \\u00bd (Acc_A\\u2019 - Acc_A) + \\u00bd (Acc_B\\u2019 - Acc_B)\\n\\n(3) The iterative process corresponds to Line 5 in Alg. 1, where it solves the L_S loss. M-loop is the backpropagation of gradients of the loop.\"}",
"{\"title\": \"Some clarification\", \"comment\": \"Hi,\\n\\nI have some confusions in the paper.\\n\\n(1) what's h_tilde in Eqn. (1)?\\n(2) how \\\\delta_bar is computed in Table 1? for example, \\\"LwoF (our implementation)\\\", is it supposed to be (56.97 + 52.37)/2 - 74.58 = -19.91?\\n(3) Fig.1, the iterative process corresponds to the M-loop in Alg. 1? If so, it seems that the M-loop deals with L_Q, which is the query set, the \\\"iterative solver\\\" in Fig. 1 deals with support set only.\"}",
"{\"comment\": \"The idea of incremental few-shot learning in this paper is quite interesting. After reading the paper, I have two questions detailed in the below.\\n\\nQ1. The same problem has been proposed and studied in two recent papers \\u201cLow-shot Visual Recognition by Shrinking and Hallucinating Features (ICCV 2017)\\u201d and \\u201cLow-Shot Learning from Imaginary Data (CVPR 2018)\\u201d. They address the same problem of classifying novel classes with a few labeled examples based on identifying a set of base classes. They also categorize the classes as \\u201cbase\\u201d and \\u201cnovel\\u201d class as this paper does. Since I did not find any discussion about these two papers in this paper, can you provide some comments about their differences?\\n\\nQ2. In this paper, you use recurrent back-propagation as an optimizer, but most previous few-shot learning methods use SGD. Recurrent back-propagation is widely used in NLP because of the sequential nature of texts. However, an image is rarely treated as a sequence. Is there any particular reason for using recurrent back-propagation? Or did you see any critical advantages of using recurrent back-propagation rather than using SGD?\\n\\nI am looking forward to your reply. Thanks a lot!\", \"title\": \"Interesting work, questions about problem setting and optimizer\"}"
]
} |
|
Skz3Q2CcFX | Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae | [
"Piero Molino",
"Yang Wang",
"Jiawei Zhang"
] | Embeddings are a fundamental component of many modern machine learning and natural language processing models.
Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models.
The state of the art in analyzing embeddings consists of projecting them onto two-dimensional planes without any interpretable semantics associated with the axes of the projection, which makes detailed analyses and comparisons among multiple sets of embeddings challenging.
In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology.
This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces.
We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases. | [
"visualization",
"embeddings",
"representations",
"t-sne",
"natural",
"language",
"processing",
"machine",
"learning",
"algebra"
] | https://openreview.net/pdf?id=Skz3Q2CcFX | https://openreview.net/forum?id=Skz3Q2CcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BklwuGtBxN",
"S1lEgPp70m",
"Hyl64g_mRQ",
"B1lGiFoDaQ",
"ryeGFtiDT7",
"S1eFYDjDaQ",
"Hyl7dLiDaX",
"rJgQde81pQ",
"S1g6_qV937",
"rJxIh6zc3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545077359304,
1542866667952,
1542844468925,
1542072730296,
1542072697553,
1542072193429,
1542071915133,
1541525611442,
1541192309301,
1541184942078
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1395/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1395/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1395/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1395/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1395/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1395/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1395/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1395/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1395/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1395/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Several visualizations are shown in this paper but it is unclear if they are novel.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"All reviewers agree that paper is not strong enough\"}",
"{\"title\": \"rebuttal for each of the suggested visualizations\", \"comment\": \"Thank you for providing this additional references, we'll add them as relevant literature.\\n\\nRegarding the first one (Figure 4 of Bolukbasi et Al.) it surely is relevant, but as you can see the axis are the difference of two embeddings in two different embedding spaces. This makes the plot difficult to interpret as it shows only how different, for each word represented by a dot in the scatter plot it is more biased in one or the other dataset / embedding method, but the reader has no idea of how much the word is biased to begin with. In contrast, in our comparison view, you can clearly se the position of the word in both datasets and a line is drawn between them so that the reader can understand both the absolute degree of bias of each word in each dataset and how much the dataset differ in terms of bias.\\n\\nRegarding the second one, it doesn't seem to be a published paper, but rather a report that a student put on their website, so it doesn't seem like something we would be able to cite. In any case, as much as that work is relevant, in all the figures the author provide a side by side view of the embeddings projection in the two datasets, which makes it really hard to compare the relative location, and how much delta there is between the two locations of the same embedding in the different embedding spaces. Thus we believe that our comparison view provides a better visualization.\\n\\nFinally, regarding Heimerl et al., we already describe the difference in the related section, but we can add that our approach has several advantages compared to that tool: first the user can define any algebraic formula as axes rather than just averages of embeddings, and fo instance the fine grained analysis plots and the polysemy plot we provide in the paper would not be obtainable; second: as you can define averages as axes, our approach subsume their, being more general; third we also provide the comparison view for cases where the user wants to compare different embedding spaces and the polar view where the goal requires more than two axes, both things are not provided by Heimerl et al.\\n\\nFinally, as a general remark, we are proposing to use this simple idea as a general methodology, while most of the plots you refer to (a part from Heimerl's tool) are spurious specific plots obtained in specific circumstances, so the fact that someone used a similar plot to what is obtainable with our methodology is just an additional example use case that strengthens even more the usefulness of the methodology.\"}",
"{\"title\": \"don't agree with some response.\", \"comment\": \"I don't agree with author on their novelty with the response. For instance, Figure 4 in https://arxiv.org/pdf/1607.06520v1.pdf conveys similar ideas of using the proposed approach. [Indeed the same plot as Figure 1 in this paper]\\n\\nActually, if just simply google serach, you will hit this following paper:\", \"https\": \"//web.stanford.edu/class/cs224n/reports/6835575.pdf\\nwith the exact plots as the author may argue. \\n\\nFurther, as the author mentioned in literature,\", \"http\": \"//embvis.flovis.net/s/scatter.html\\nwhere you can get the exact plots with single concept or average of multiple concept to project.\"}",
"{\"title\": \"rebuttal part 1\", \"comment\": \"Thank you for your comments and please see the inlined answers regarding your concerns.\\n\\n>>> However, the main technical contribution of this paper is otherwise not clear - the methodology section covers a very broad set of techniques but doesn't provide a clear picture of what is novel;\\n\\nWe axplcitly outlined the contribution in the revised manuscript.\\nWhat we described in the methodology is how to map an analysis task in terms of the items to visualize and the dimensions to visualize them on (again, defined as explicit formulae). To the best of our knowledge, there was no precedent work and ours is entirely novel.\\n\\n>>> furthermore, it makes a strong assumption about linear structure in the embedding space that may not hold. (It's worth noting that t-SNE does not make this assumption.)\\n\\nThe proposed methodology doesn't make any assumption on the structure of the embedding space itself, regardless of whether it is linear or not. What we proposed is to slice the space with a hyperplane that is semantically meaningful with respect to the analysis goal of the user. The slicing can is linear (high-dimensional manifold in case of polar view) but we don\\u2019t make assumptions embedding space.\\n\\n>>> The visualization strategies presented don't appear to be particularly novel. In particular, projection onto a linear subspace defined by particular attributes was done in the original word2vec and GloVe papers for the analogy task.\\n\\nWe find it difficult to relate with this comment and would like to ask the reviewer to kindly provide more details on specific papers and figures.\\n\\nIn the original word2vec paper (Distributed Representations of Words and Phrases and their Compositionality) there is one 2d PCA projection, while in the original GloVe paper (GloVe: Global Vectors for Word Representation) there is no visualization or projection. If the reviewer were referring to the examples on the GloVe website (https://nlp.stanford.edu/projects/glove/) those are again 2d PCA projections with no interpretable semantics along the axes. Or, if Figure 2 in Linguistic Regularities in Continuous Space Word Representations was referenced, that is a cartoon image.\\n\\n>>> There's also a lot of other literature on interpreting deeper models using locally-linear predictors, see for example LIME (Ribeiro et al. 2016) or TCAV (Kim at el. 2018).\\n\\nWe are aware of this line of work using local descriptors for interpreting deep models, but find it less relevant to the analysis and visualization of embeddings. It would be helpful if the reviewer could help clarify the concerns as local interpretability is an approach applicable to various problem domains, but we don\\u2019t see how it is related to to our proposal as we are not learning any predictor.\"}",
"{\"title\": \"rebuttal part 2\", \"comment\": \">>> Evaluations are exclusively qualitative, which is disappointing because there are quantitative ways of evaluating a projection - for example, how well do the reduced dimensions predict a particular attribute relative to the entire vector.\\n\\nYes, there are quantitative measures for comparing different algorithmic ways to perform dimensionality reduction. However, what we proposed is not an algorithm to perform dimensionality reduction, but a methodology to support users to encode intentions (concepts she cares about) explicitly to the axes of the projection. Such intention is different case by case in analytical tasks. The attributes the user cares about are explicitly encoded in the algebraic formulae she decides to use, so using the reduced dimensions to predict the attributes is meaningless.\\n\\nWe made it mode clear in the first paragraph of the evaluation section that we are not trying to find an optimal dimensionality reduction technique, but comapring an approach to keep users in the loop:\\n\\u201cThe goal is to find out if and how visualizations using user-defined semantically meaningful algebraic formulae as their axes help users achieve their analysis goals.\\nWhat we are not testing for is the quality of projection itself, as in PCA AND t-SNE the projection axes are obtained algorithmically, while in our case they are explicitly defined by the user.\\u201d\\n\\n>>> Five-axis polar plots can pack in more information than a 2-dimensional plot in some ways, but quickly become cluttered. The authors might consider using heatmaps or bar plots, as are commonly used elsewhere in the literature (e.g. for visualizing activation maps or attention vectors).\\n\\nWe explicitly explained in the 3rd paragraph that the polar view is to be preferred in the case where the analysis goal needs more than 2 dimensions of variability, but where the number of items is limited for the reason that several items would make the visualization cluttered. The problem with bar plots and heatmaps is that themselves they don't scale either to a big number of axes of elements and they rely on hue (in the case of heatmaps) and difference in size (in the case of barplots) to provide the information, while the polar view relies on several visual variables at the same time (position, dimension, hue, shape) for the same task, making it a better choice.\\n\\n>>> User study is hard to evaluate. What were the specific formulae used in the comparison? Did subjects just see a list of nearest-neighbors, or did they see the 2D projection? If the latter, I'd imagine it would be easy to tell which was the t-SNE plot, since most researchers are familiar with how these look.\\n\\nThe tasks and the formulae we used were [banana & strawberry, google & microsoft, nerd & geek, book & magazine, 110392 & 95212, 387862 & 42956, 278209 & 230444, 162363 & 307542], the numbers are the same terms but obfuscated, as described in the manuscript. We added them in the description of the experiment.\\n\\nAs we are evaluating visualizations, the users are shown visualizations of the 2d projections as described in the second paragraph of the user study section.\\nThey are told which visualizations are t-SNE and which use explicit axes as they are required to express a preference at the end (which is reported). 
Knowing which plot they were looking at was also needed in order to provide an interpretation to the axes (in the t-SNE they are meaningless, without labels, in the explicit case the axes report the formula used to obtain them in order to be interpretable).\\nCan you elaborate on why do you believe this is a problem?\"}",
"{\"title\": \"rebuttal\", \"comment\": \"Thank you for your comments and please see the inlined answers regarding your concerns.\\n\\n>>> To the best of my understanding the paper proposes some methodological ideas for visualizing and analyzing representations. The paper is unclear mainly because it is a bit difficult to pinpoint the contribution and its audience. What would help me better understand and potentially raise my rating is an analysis of a classical model on a known dataset as a case study and some interesting findings would help make it more exciting and give the readers more incentives to try this out. Like train an AlexNet and VGG imagenet model and show that the embeddings are better aligned with the wordnet taxonomy in one of the other. This should be possible with their approach if i understand it correctly.\\n\\nWe did perform an analysis of a classical model (GloVe) on a known dataset (Gigaword) and reported several case studies in the Case Studies section reporting interesting findings:\\nthe presence of gender bias in different datasets and how it changes over the different datasets, \\nthe characterization of fine-grained differences between extremely close vectors, showing how the embeddings encode polysemy by showing a plane on which multiple senses of words are separable. \\nIt seems to us we already did what the reviewer was asking us to do, but just on a purely linguistic dataset.\\n\\nAlso, our work is not related with wordnet taxonomies. It would be appreciated if the reviewer could kindly point us to how we should proceed in aligning the embeddings and the wordnet taxonomies, as it is not clear to us and seems unrelated to our work.\\n\\n>>> pros:\\n\\t- visualization and analysis is a very exciting and important topic in machine learning\\n\\t- this is clearly useful if it worked\\n\\nThe combination of case studies and user study that we reported in the paper suggest that it actually works.\\n\\n>>> cons:\\n - not sure what the contribution claim for the paper is since these types of plots existed already in the literature (is it a popularization claim ?)\\n\\nWe find it hard to derive actionable items from this comment. We ask the reviewer to kindly provide references where plots using explicit formulae as axes are used. To the best of our knowledge, we are the first to propose such a methodology.\"}",
"{\"title\": \"No previous visualization actually used this simple idea\", \"comment\": \"Thank you for your comments and please see the inlined answers regarding your concerns.\\n\\n>>> - on algebraic formulae (AF): it would be good to clarify the def of AF explicitly. Rules/extention/axes are not very clear and mathematically consistent in section 3. Would projection idea be applied to arbitrary AFs?\\n\\nYes, we allow users to create AFs by compositing vector math operators (e.g., add, sub, mul) to be applied on vectors (referenced by their label in the data, i.e. the vector(apple) is obtained by using the keyword \\u201capple\\u201d in the formula). For instance, \\u201c(he + him) /2\\u201d, resolves he and him with their respective vectors, sums them and then divides the resulting vector by two. We added the following sentence to the manuscrip in the methodology sectiont:\\nAlgebraic formulae are a composition vector math operators (e.g., add, sub, mul) to be applied on vectors (referenced by their label in the data, i.e. the vector of ``apple'' is obtained by using using the keyword ``apple'' in the formula)..\\n\\n>>> - while the idea being simple, I am not quite confident about the novelty. For example for the de-bias application, Bolukbasi et al. had already did the same plot along the he-she axis. Similar plots on the polysemous word embedding can be found in Arora et al., 2017, etc. \\n\\nWe acknowledge that those works are relevant, but argue that the information provided in their plots and the nature of their plots is different from our proposal. \\n\\nIn Bolukbasi et al., the x-axis is the difference between he and she, while the y-axis is a learned direction encoding neutrality (through SVD). As such, only one of the axes is explicitly defined by an algebraic formula. The example in Bolukbasi et al. indeed is a subset the methodology that we are proposing, which is more generic and can be applied widely.\\n\\nRegarding Arora et al. the task may be similar, but the plots are obtained through isometric mapping rather than explicitly-defined semantically-meaningful subspaces. That is, axes in their plots are not interpretable, which makes their approach more closer to PCA or t-SNE projections and has little in common with our proposal.\\n\\n>>> - The user study with n=10 are typically less reliable for any p-value evaluation.\\n\\nWhile we agree more test subjects usually lead to (perceptually) better statistical confidences, the ANOVA and t-test results in our study already suggested significant difference (p values being magnitudes smaller than 0.01, as described in the result analysis section).\"}",
"{\"title\": \"ideas seems a common practice in various prior visualization tasks\", \"review\": [\"Paper presented a new and simple method to visualize the embedding space geometry rather than standard t-SNE or PCA. The key is to carefully select items to be visualized/embed and interpretable dimensions. A few case study and user study were conducted to show the benefit of the proposed approach.\", \"on algebraic formulae (AF): it would be good to clarify the def of AF explicitly. Rules/extention/axes are not very clear and mathematically consistent in section 3. Would projection idea be applied to arbitrary AFs?\", \"while the idea being simple, I am not quite confident about the novelty. For example for the de-bias application, Bolukbasi et al. had already did the same plot along the he-she axis. Similar plots on the polysemous word embedding can be found in Arora et al., 2017, etc.\", \"The user study with n=10 are typically less reliable for any p-value evaluation.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"The idea of analyzing embedding spaces in a non-parametric (example-based) way is well-motivated. However, the main technical contribution of this paper is otherwise not clear - the methodology section covers a very broad set of techniques but doesn't provide a clear picture of what is novel; furthermore, it makes a strong assumption about linear structure in the embedding space that may not hold. (It's worth noting that t-SNE does not make this assumption.)\\n\\nThe visualization strategies presented don't appear to be particularly novel. In particular, projection onto a linear subspace defined by particular attributes was done in the original word2vec and GloVe papers for the analogy task. There's also a lot of other literature on interpreting deeper models using locally-linear predictors, see for example LIME (Ribeiro et al. 2016) or TCAV (Kim at el. 2018).\\n\\nEvaluations are exclusively qualitative, which is disappointing because there are quantitative ways of evaluating a projection - for example, how well do the reduced dimensions predict a particular attribute relative to the entire vector. Five-axis polar plots can pack in more information than a 2-dimensional plot in some ways, but quickly become cluttered. The authors might consider using heatmaps or bar plots, as are commonly used elsewhere in the literature (e.g. for visualizing activation maps or attention vectors).\\n\\nUser study is hard to evaluate. What were the specific formulae used in the comparison? Did subjects just see a list of nearest-neighbors, or did they see the 2D projection? If the latter, I'd imagine it would be easy to tell which was the t-SNE plot, since most researchers are familiar with how these look.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting but not clear how useful\", \"review\": \"To the best of my understanding the paper proposes some methodological ideas for visualizing and analyzing representations.\\nThe paper is unclear mainly because it is a bit difficult to pinpoint the contribution and its audience. What would help me better understand and potentially raise my rating is an analysis of a classical model on a known dataset as a case study and some interesting findings would help make it more exciting and give the readers more incentives to try this out. Like train an AlexNet and VGG imagenet model and show that the embeddings are better aligned with the wordnet taxonomy in one of the other. This should be possible with their approach if i understand it correctly.\", \"pros\": [\"visualization and analysis is a very exciting and important topic in machine learning\", \"this is clearly useful if it worked\"], \"cons\": [\"not sure what the contribution claim for the paper is since these types of plots existed already in the literature (is it a popularization claim ?)\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
r1gnQ20qYX | Pearl: Prototype lEArning via Rule Lists | [
"Tianfan Fu*",
"Tian Gao*",
"Cao Xiao*",
"Tengfei Ma*",
"Jimeng Sun"
] | Deep neural networks have demonstrated promising prediction and classification performance on many healthcare applications. However, the interpretability of those models is often lacking. On the other hand, classical interpretable models such as rule lists or decision trees do not lead to the same level of accuracy as deep neural networks and can often be too complex to interpret (due to the potentially large depth of rule lists). In this work, we present PEARL, Prototype lEArning via Rule Lists, which iteratively uses rule lists to guide a neural network to learn representative data prototypes. The resulting prototype neural network provides accurate prediction, and the prediction can be easily explained by the prototypes and their guiding rule lists. Thanks to the predictive power of neural networks, the rule lists from prototypes are more concise and hence provide better interpretability. On two real-world electronic healthcare records (EHR) datasets, PEARL consistently outperforms all baselines across both datasets, especially achieving performance improvement over conventional rule learning by up to 28% and over prototype learning by up to 3%. Experimental results also show the resulting interpretation of PEARL is simpler than that of standard rule learning. | [
"rule list learning",
"prototype learning",
"interpretability",
"healthcare"
] | https://openreview.net/pdf?id=r1gnQ20qYX | https://openreview.net/forum?id=r1gnQ20qYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1ljxjoQeN",
"H1e5vPB9RQ",
"BJlcTrHc0m",
"r1elcZBcRX",
"S1lAzkr9Am",
"Syg5kyhS6X",
"SJx0XuLc3Q",
"HJgb6MBq2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544956658927,
1543292769910,
1543292353867,
1543291272269,
1543290646470,
1541943010347,
1541199910246,
1541194424775
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1394/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1394/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1394/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1394/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1394/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1394/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1394/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1394/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents an approach that combines rule lists with prototype-based neural models to learn accurate models that are also interpretable (both due to rules and the prototypes). This combination is quite novel, the reviewers and the AC are unaware of prior work that has combined them, and find it potentially impactful. The experiments on the healthcare application were appreciated, and it is clear that the proposed approach produces accurate models, with much fewer rules than existing rule learning approaches.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) there are substantial presentation issues, including the details of the approach, (2) unclear what the differences are from existing approaches, in particular, the benefits, and (3) The evaluation lacked in several important aspects, including user study on interpretability, and choice of benchmarks.\\n\\nThe authors provided a revision to their paper that addresses some of the presentation issues in notation, and incorporates some of the evaluation considerations as appendices into the paper. However, the reviewer scores are unchanged since most of the presentation and evaluation concerns remain, requiring significant modifications to be addressed.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Presentation and Evaluation concerns remain\"}",
"{\"title\": \"Summary of major improvements\", \"comment\": \"We thank the reviewers for the constructive comments and suggestions. We have substantially revised the paper, in particularly\\n1) adding a precise definition of interpretability, \\n2) clarifying system design, \\n3) improving the presentation of the paper, \\nto respond to all the question raised. Hopefully the revised paper has clarified some confusion and addressed all the concerns from reviewers. Our detailed responses to the comments from each reviewer are listed below.\"}",
"{\"title\": \"Improvement on evaluation, design choices and presentation\", \"comment\": \"1. AUC significance\", \"our_response\": \"We thank the reviewer for the suggestion. We have revised our experiment to uses cosine similarity in prototype layer instead of Euclidean distance, as shown in Eq 6. The experimental results are correspondingly updated. The results show no big difference compared with previous results.\\n\\n\\n9. We have addressed paper presentation, fixed notation, added relevant citations, and clarified line-by-line comments in the revision, and here are some response to the major ones:\\n1) p_j and h(x) are vectors of the same predefined dimension, which is a hyperparameter. \\n2) Equation 6: the distance metric is computed in the learned representation space, rather than the original space. The representation space that h(X) lies in should be more discriminative. \\n3) Eq. 1: prototype set P is determined given rule list R and learned representation h(X) hence there is no new parameter. We have modified Equation 1 to reflect so and clarified this point below Equation 1. \\n4) Case Study 4.3: this rule is applied only if all conditions are true. We have clarified it in the writing. \\n5) Fig. 2: we have redrawn the figure to show a better classifier per suggestion.\"}",
"{\"title\": \"adding definition of interpretability\", \"comment\": \"1. the definition of interpretability\", \"our_response\": \"We have revised the paper, with major updates in Section 2 and 3 to introduce a complete definition of interpretability and an overview of PEARL along some intuition of its components choices. We thank the reviewers for the suggestion.\"}",
"{\"title\": \"big modification on presentation, method and experiment\", \"comment\": \"1. rule list definition\", \"our_response\": \"we have added a few synthetic dataset results in the appendix for comparison. Overall, our method has comparable performance as deep models, but more interpretable, and better performance than conventional rule lists.\"}",
"{\"title\": \"Interesting idea but needs significant improvement in terms of presentation and design of method and experiments\", \"review\": \"Summary:\\nThis paper presents a new interpretable prediction framework which combines rule based learning, prototype learning, and NNs. The method is particularly applicable to longitudinal data. While the idea of bringing together rules, prototypes, and NNs is definitely novel, the method itself has some unclear design choices. Furthermore, the experiments seem pretty rudimentary and the presentation can be significantly improved.\", \"detailed_comments\": \"1. In Section 2, the authors seem to define rule list as a set of independent if-then rules. Please note that rule lists have an \\\"else if\\\" clause which creates a dependency between the rules. Please refer to \\\"Interpretable decision sets\\\" by Lakkaraju et. al. for understanding the differences between rule lists and rule sets. \\n2. Section 3.1 is quite confusing. It would be good to give an intuition as to how the various pieces are being combined and in why it makes sense to combine them in this way. The data reweighting process seems a bit adhoc to me. What other choices for reweighting were considered?\\n3. I would strongly encourage the authors to carry out at least a simple user study before claiming that the proposed method is more interpretable than existing rule lists. Adding both prototypes and rules, in fact, adds to the cognitive burden of an end user - it would be interesting to see when and how having both prototypes and rules will help an end user.\", \"pros\": \"1. First approach to combine NNs, rule learning, prototype learning\\n2. Provides an interpretable method for predictions on longitudinal medical data\\n3. Experimental results seem to suggest that the proposed approach is resulting in accurate and interpretable models.\", \"cons\": \"1. The various pieces in the method (rule learning, prototype, NNs, data reweighting) seem to be somewhat haphazardly connected. Section 3.1 does not give me a good idea about how the different pieces are resulting in an accurate and interpretable model\\n2. The paper makes claims such as \\\"Experimental results also show the resulting interpretation\\nof PEARL is simpler than the standard rule learning.\\\" without actually doing any significant user studies. Furthermore, any other synthetic data experiments which could demonstrate the various facets of accuracy-interpretability tradeoffs are missing\\n3. The presentation of the paper is quite unclear. See detailed comments above.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interpretability insufficiently defined\", \"review\": \"This paper aims at tackling the lack of interpretability of deep learning models, which is especially problematic in a healthcare setting --the focus of this research paper. Specifically, the authors propose Prototype lEArning via Rule Lists (PEARL), which combines rule learning and prototype learning to achieve more accurate classification and better predictive power than either method independently and which the authors claim makes the task of interpretability simpler.\\nThe authors present an interesting and novel architecture in PEARL. Combining the two approaches of rule lists and prototype learning. However, my main concern with the paper and with the architecture in general is the lack of clarity upfront regarding what the authors perceive as the criteria for interpretability. This seems to be one of the chief aims of the paper, however, the authors don\\u2019t reach this point until Section 4 of the paper. Given that this is one of the main strengths of the paper as proposed by the authors, this needs to be given more prominence and also needs to be made more explicit what the authors mean by this. The authors define interpretability as measured by the number of rules and number of protoypes identified by a particular model, without, providing an argument, justification, or a citation of previous work which justifies these criterion. Especially since this is one of the main points of the paper, this needs to be better argued and the authors should either elaborate on this point, or restrain on making claims that these models are more interpretable.\\nThe model architecture of Section 3.1 was quite obscure both from the intuitive and implementation level. It\\u2019s not clear how the different modules (prototype learning, rule lists) link together in practice, nor how these come together to create an interpretable model.\\nGenerally, the paper is quite poorly structured and there were several grammatical errors which made the paper quite hard to follow. Although the problems articulated are important, the paper did not do sufficient justice to addressing these problems.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Needs more thorough evaluation, more justification of design choices, and improvement in presentation clarity\", \"review\": \"Review Summary\\n--------------\\nThe paper presents a combination of rule lists, prototypes, and deep representation learning to fit classifiers that are said to be simultaneously \\\"accurate\\\" and \\\"interpretable\\\". While the topic is interesting and the direction seems novel, I don't think the work is quite polished or competitive enough to be accepted without significant revision. The major issues include non-competitive evaluation of what \\\"interpretability\\\" means, ROC AUC numbers that are indistinguishable from standard deep learning (RCNN) pipelines that use many fewer parameters, and many unjustified choices inside the method itself. The paper itself could also benefit from revision to improve flow and introduce technical ideas to be more accessible to readers.\\n\\n\\nPaper Summary\\n-------------\\nThe paper presents a new method called \\\"PEARL\\\" (Prototype Learning via Rule Lists), which produces a rule list, a set of prototypes, and a deep feed-forward neural network that can embed any input data into a low-dimensional feature space. The primary intended application is classifying subjects into a finite set of possible disorders given longitudinal electronic health records with categorical features observed at T irregular time intervals. \\n\\nThe paper suggests learning a representation for each subject's data by feeding the EHR time series into a recurrent convolutional NN. The input data is a 2 x T array, with one row representing observed data and second row giving time delay between successive observations. The vector output of an initial convolutional RNN is then fed into a highway network to produce a final vector denoted \\\"h\\\". \\n\\nGiven an encoder to produce feature vectors, and a fixed rule list learned from data itself, the paper suggests obtaining a prototype for each rule by computing the average vector of all data that matches the given rule. The quality of these prototypes and related neural networks (for computing features and predicting labels from features) is then assessed via their loss function in Eq. 1: a weighted combination of how well the prototypes match the learned embeddings (distance to closest prototype) and how well the classifier predicts labels. The core idea is that the embedding is learned to classify well while creating a latent space that looks like the prototypes of the rule list.\\n\\nAfter training an embedding and NN classifier on a fixed rule list, it seems the data is reweighted according to some heuristic procedure to obtain better properties, then a new rule list is trained and the process repeats again. (I admit the reweight procedure's purpose was never clear to me).\\n\\nExperiments are done on a proprietary heart failure EHR dataset and on a subset of MIMIC data. 
\\n\\nStrengths\\n---------\\n* Seems original: I'm unaware of any other method connecting rule lists AND prototypes AND NNs\\n* Neat applications to healthcare\\n\\nLimitations\\n-----------\\n* Interpretability evaluation seems weak: no human subject experiments, no quantiative metrics, unclear if rule-lists shown is an apples-to-apples comparison\\n* Prototypes themselves never evaluated \\n* Many design choices inside method not justified with experiments -- why highway networks + RCNNs?\\n\\nMajor Issues with Method\\n------------------------\\n\\n## M1: Not clear that AUC difference between PEARL and baselines is significant\\n\\nThe major issue is that the presented approach does not seem significantly different in predictive performance than the baseline Recurrent CNN. Comparing ROC AUC, we have PEARL's 0.688 to RCNN'S 0.682 with stddev of 0.009 on the proprietary heart failure dataset, and PEARL's 0.769 to RCNN's 0.766 with stddev of 0.009. When AUCs match this closely, I struggle to believe one model is definitively better, especially given that the RCNN has 2x *fewer* parameters (8.4k to 18.4k). \\n\\nIf the counterargument is that the resulting \\\"deep model\\\" is not \\\"interpretable\\\", one should at least compare to a post-processing step where the decision boundary of the RCNN is the reference to which a rule list or decision tree is trained.\\n\\n## M2: Interpretability evaluation not clear.\\n\\nIsn't the maximum number of rules set in advance? \\n\\nAdditionally, prototypes are a key part of this work, but the learned prototypes are not evaluated at all in any figure (except to track avg. distance from prototype while training). If prototypes are so central to this work, I would like to see a formal evaluation of whether the learned prototypes are indeed better (in terms of distance, or inspection of values by an expert, or something else) than alternatives like Li et al.\\n\\n## M3: Missing a good synthetic/small dataset experiment\\n\\nNeither of the presented data tasks is particularly easy to understand for non-experts. I'd suggest creating an additional experiment where the audience of ML readers is likely to easily grasp whether a set of rule lists is \\\"good\\\" for the problem at hand... maybe create your own synthetic task or a UCI dataset or something, or even use the stop-and-frisk crime dataset from the Angelino et al. 2018 paper. Then you can compare against just a few relevant baselines (rule lists only or prototypes only). I think a better illustrative experiment will help readers grasp differences between methods. \\n\\n## M4: How crucial is feature selection?\\n\\nIn each iteration, Algo. 1 performs feature selection before learning rules. Are any other baselines (trees, rule lists) allowed feature selection before the classifier is learned? What would happen to PEARL without feature selection? What method is used for selection? (A search of the document only has 'feature selection' occur once, in the Alg. itself, so it seems explanation is missing).\\n\\n## M5: Why are multiple algorithm iterations needed?\\n\\nWon't steps 3 and 4 of Alg. 1 result in the same rules every time? It's not clear then why on subsequent iterations the algorithm would improve. Perhaps it's just the reweighting of data that causes these steps to change?\\n\\nMinor issues\\n------------\\n\\n## Loss function notation confusing\\n\\nDoesn't the rule list classifier s_R take the data itself X? Not the learned embedding h(X)? Please fix or clarify Eq. 1. 
I think you might clarify notation by just writing yhat(h(X)) if you mean the predicted label of some example as done by your NNs. Using \\\"R\\\" makes folks think the rule list is involved.\\n\\n## Not clear why per-example reweighting is required\\n\\nNone of the experiments assess why per-example reweighting (lines 6-9 of Algo. 1) is required. Readers would like to see a comparison of performance with and without this step.\\n\\n## Not clear or justified when \\\"averaged\\\" prototypes are acceptable\\n\\nAre your \\\"averaged\\\" prototypes guaranteed to satisfy the rule they represent? Is taking the average of vectors that match a rule always guaranteed to also match the rule? I don't think this is necessarily true. Consider a rule that says \\\"if x[0] == 0 or x[1] == 0, then ___\\\". Suppose the only matching vectors are x_A = [0 1] and x_B = [1 0]. The average vector is [0.5 0.5] which doesn't work.\\n\\n## Several different measures of distance used without careful justification \\n\\nWhy use two different distances -- Euclidean distance to assess distance to prototypes for prototype assignment, and then cosine similarity when deciding which examples to upweight or downweight? Why not just use Euclidean distance for both (appropriately transformed to a similarity)?\\n\\nComments on Presentation\\n------------------------\\nOverall I think every section of the paper needs significant revision to improve a reader's ability to understand main ideas. Notation could be introduced slowly (explain purpose and dimension of every variable), assumptions could be clearly stated (e.g. each individual rule can have ANDs but not ORs), and design choices justified. You might try the test of giving the paper to a colleague and having them explain back the ideas of each section to you... currently I do not believe this version passes this test.\\n\\nThe introduction claims that \\\"clinicians are often unwilling to accept algorithm recommendations without clarity as to the underlying reasoning\\\", but I would be careful in blindly asserting this without evidence. For a nice argument about avoiding blind assumptions about what doctor's will and won't accept, see Lipton's 2017 paper \\\"The Doctor Just Won't Accept That\\\" (https://arxiv.org/abs/1711.08037)\\n\\nAdditionally, the authors should clarify more precisely what definition of interpretability is needed for their applications. Is it simplicity? Is it conceptual alignment with known medical facts? Is it the ability to transparently list the rules in plain English?\\n\\nLine-by-line details\\n--------------------\\n\\n## Sec. 2\\n\\nWhen introducing p_j, should clarify this this is one prototype vector of many.\\n\\nWhen defining p_j = f_j(X), can you clarify what dimensionality p_j has? Is it always the same size as each example's data vector x_i?\\n\\n\\n## Sec. 3\\n\\nFig. 2: I don't find this figure very easy-to-understand. It's clear that after embedding raw features to a new space, the learned rules are *different*, but it's not clear they are *better*. None of the illustrated rules perfectly segments the different colors, for example. I guess the point is all the red dots are within one rule? But they aren't alone (there are blue and orange dots too), so it's still not clear this would be a better classifier.\\n\\nFor EHR datasets, are you assuming that events are always categorical? And that outcomes \\\"y\\\" are always discrete (one-of-L) variables? Or could y be real-valued?\\n\\nEq. 
1: You should make notation clearly indicate which terms depend on \\\\theta. Currently it seems that nothing is a function of \\\\theta.\\n\\nEq. 1: Do you also find the prototype set P that minimizes this objective? Or is there another way to obtain P given parameters \\\\theta? This is confusing just from reading the eqn.\\n\\nWhat size is the learned representation h(X)? Is it a vector?\\n\\nEq. 6: Do you really need a \\\"network\\\" to compute the distance to each of the K prototypes? Can't you just compute these distances directly?\\n\\n## Sec 4\\n\\n\\\"Mac OS 1.4\\\" : Do you mean Mac OS version 10.4? Not clear this is relevant.\\n\\n4.3 Case Study: How do I read these rules? Is this rule applied only if ALL conditions are true? or if any individual one is true (\\\"or\\\")? This is unclear.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1xhQhRcK7 | Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures | [
"Jonathan Uesato*",
"Ananya Kumar*",
"Csaba Szepesvari*",
"Tom Erez",
"Avraham Ruderman",
"Keith Anderson",
"Krishnamurthy (Dj) Dvijotham",
"Nicolas Heess",
"Pushmeet Kohli"
] | This paper addresses the problem of evaluating learning systems in safety-critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios in which learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failure rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days. | [
"agent evaluation",
"adversarial examples",
"robustness",
"safety",
"reinforcement learning"
] | https://openreview.net/pdf?id=B1xhQhRcK7 | https://openreview.net/forum?id=B1xhQhRcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryeijQ7GeN",
"r1ggQ1XjCX",
"BJex-sbiRm",
"H1l_8M3gRm",
"r1lgl9Iz6X",
"r1xLqdIM6X",
"BJxm6c4GaQ",
"Byxi9c4fpm",
"H1lea0SWpQ",
"ByeinoLypQ",
"B1lWFTj03X",
"H1guoXzK37",
"BJl92-Odhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544856483163,
1543347992092,
1543342839693,
1542664784127,
1541724647669,
1541724301747,
1541716666560,
1541716627466,
1541656247917,
1541528499089,
1541483896785,
1541116832009,
1541075378061
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1393/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1393/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1393/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1393/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"* Strengths\\n\\nThe paper addresses a timely topic, and reviewers generally agreed that the approach is reasonable and the experiments are convincing. Reviewers raised a number of specific concerns (which could be addressed in a revised version or future work), described below.\\n\\n* Weaknesses\\n\\nSome reviewers were concerned the baselines are weak. Several reviewers were concerned that relying on failures observed during training could create issues by narrowing the proposal distribution (Reviewer 3 characterizes this in a particularly precise manner). In addition, there was a general feeling that more steps are needed before the method can be used in practice (but this could be said of most research).\\n\\n* Recommendation\\n\\nAll reviewers agreed that the paper should be accepted, although there was also consensus that the paper would benefit from stronger baselines and more close attention to issues that could be caused by an overly narrow proposal distribution. The authors should consider addressing or commenting on these issues in the final version.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"reasonable approach, convincing experiments, important topic\"}",
"{\"title\": \"Real-world applicability\", \"comment\": \"Thanks for clarifying your concerns.\", \"we_understand_the_high_level_question_raised_here_to_be\": \"\\u201cwhen should practitioners deploying a system in the real world test this system with the FPP rather than VMC\\u201d? In short, the answer is *always*.\\n\\nFirst, for risk estimation, by mixing the FPP and VMC estimates, we can guarantee that we never do worse than VMC by over a small constant factor, even when the FPP does not generalize at all, while preserving orders of magnitudes improvement when the FPP generalizes. See the discussion on statistical efficiency in our response to R3, or the paper by Neufeld et al in our citations, for details.\\n\\nSecond, in the real world, practitioners have limited evaluation budgets - self-driving car companies can\\u2019t test the car for millions of miles before every code update. When we deploy ML in safety critical environments, existing methods will often find 0 failures under limited evaluation, even if the system is unsafe. We don\\u2019t claim that our method will work well in all such situations. But even if it finds failures in some safety critical domains, preventing the deployment of some unsafe systems is a very positive impact. Our experiments suggest that there are widely used domains where our method works. Further, conceptually, the proposed continuation method for learning the FPP seems much stronger to us than all existing methods and baselines we are aware of.\\n\\nIn the revised section 3.3, we gave detailed explanations for when the method would (not) be better than existing approaches. To add on, if the test agent fails in a subset of ways (at least some of) the training agents do, our method will work well. If all the training agents do well in a particular scenario, but the test agent fails, then we will do no better (but no worse) than existing methods at detecting such failures.\\n\\nFinally, we've also provided a number of tips for practitioners using our method in the revised section 4.4, for example using a DND/Bayesian Neural Network to prioritize sampling uncertain states.\"}",
"{\"title\": \"Appreciate the clarifications\", \"comment\": \"Thank the authors for clarifying some of the details.\\n\\nRegarding my concern about the practical performance of the proposed approach, I was not referring to the experiments but rather the use cases in the real world. As FPP depends on the generalization of the binary classification neural network, it is hard to give a confidence interval about its prediction. Furthermore, the distribution of the training data for this binary neural network may be significantly mismatched with that of the test given that the catastrophic failures are rare in nature. \\n\\nI do not have any concrete baselines in mind. I find it hard to justify the proposed approach by experimental comparisons only. Reiterating my earlier comments, it is better to lay out the conditions under which the proposed method would (not) work. The authors' current answer is somehow too obvious to be useful for practitioners: \\\"if the neural network severely underestimates the failure probability of a large fraction of failure cases\\\".\"}",
"{\"title\": \"Paper Updated\", \"comment\": \"Dear Reviewers,\\n\\nThank you for the constructive feedback. All reviews expressed that we were formulating and tackling a significant problem, and that the experimental results were compelling. There were also positive comments about the soundness and novelty of the approach. We hope our work leads to an increased focus on robustness and adversarial examples in RL (and in general beyond norm-ball perturbations).\\n\\nWe have updated our paper to incorporate reviewer feedback. In particular, we added a paragraph at the end of section 4.1 to explain why classical baselines would not work in our context. We added section 4.4 to discuss practical considerations: lower bounds on statistical efficiency, as well as heuristics we use to robustify our method. We have revamped the exposition in section 3.3 to explain one of the key novelties of our approach: the continuation approach to learning FPPs. The other novelties were motivating an important, unaddressed problem, and the extension of the importance sampling framework to include stochasticity.\\n\\nWe believe these address the reviewer comments on statistical efficiency, baselines, and novelty. If the responses satisfy the reviewers, we hope they will consider raising their scores, or letting us know in what ways they think the paper should be improved. \\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Clarifying other details\", \"comment\": \"> What is the certainty equivalence approach? A reference would be helpful and improve the presentation quality of the paper.\\n\\nThe certainty equivalence approach is described on page 3. The term has a long history in economics and control, going back to work by Stephen Turnovsky. We will add a reference:\\nStephen Turnovsky. Optimal Stabilization Policies for Stochastic Linear Systems: The Case of Correlated Multiplicative and Additive disturbances. Review of Economic Studies 1976. 43 (1): 191\\u201394.\\n\\n> What is exactly the $\\\\theta_t$ in Section 3.3? What is the dimension of this vector in the experiments? What quantities should be encoded in this vector in practice? \\n\\nIn general, theta_t should contain any features which provide useful information about the failure probabilities of the policy, and are easy to condition on. In our experiments, theta_t encodes the training iteration, and the amount of noise applied to the policy (details in old appendix D.1, moved to E.1 in the upcoming version), so two dimensions. More features may improve performance, but this was just the simple thing we tried, and since the improvement was already so drastic, it didn\\u2019t seem there was much point pushing further.\"}",
"{\"title\": \"Addressing main concerns regarding practical performance and baselines\", \"comment\": \"> Overall, this paper addresses a practically significant problem and has proposed reasonable approaches. While I still have concerns about the practical performance of the proposed methods, this work along the right track in my opinion.\\n\\nThank you for the positive comments, and helpful feedback. Could you please explain what concerns you have about the practical performance of the proposed methods? How can we address these? We believe our approach is a large improvement over baselines, both in theory, and as supported by our experiments.\\n\\n> The reviewer is not familiar with this domain, but the baseline, naive search, seems like straightforward and very weak. Are there any other methods for the same problem in the literature?\\n\\nWe assume you are talking about failure search, and not failure rate estimation? In our original paper, we did compare our method with an additional baseline: a prioritized replay baseline. This does significantly better than naive search, but significantly worse than our proposed method. \\n\\nWe seem to be the first to tackle this problem. The setting is sufficiently different from classical settings, so classical baselines would not work, as we explain in our response to R2. We\\u2019d be happy to compare to additional baselines though - are there are any other baselines you would suggest we include?\\n\\n> I am still concerned about the fact that the FPP depends on the generalization of the binary classification neural network, although the authors tried to give intuitive examples and discussions. Nonetheless, I understand the difficulty. Could the authors give some conditions under which the approach would fail? Any alternative approaches to the binary neural network? What is a good principle to design the network architecture? \\n\\nThe main point we hope to convey is that approaches beyond VMC are crucial, and using an optimized adversary is a good idea in safety-critical settings. We can guarantee that we never do worse than VMC by over a small constant factor (see the discussion on statistical efficiency in our response to R3 for details). However, as you point out, details can influence how much improvement we observe in practice. These details can be application specific, and is not the focus of our paper, but we expand on some of these details below.\\n\\nOur approach would not help if the neural network severely underestimates the failure probability of a large fraction of failure cases. This could occur for initial states that are very different from all the initial states we have seen during training. We could mitigate this issue: (1) In the humanoid domain, we use a differentiable neural dictionary. The DND outputs higher failure probabilities for points very far from those seen during training. (2) Since we train on weaker agents, we tend to overestimate the failure probabilities. In general, a guiding principle is to output higher failure probabilities for examples we are uncertain about.\\n\\nWe included architectural details in Appendix D.1, but will move the key ideas to the main paper in the next update. Does this address your concerns? We are happy to provide more details if that helps.\"}",
"{\"title\": \"Other details\", \"comment\": \"> I think the method accomplishes what it sets out to do. However, as the paper notes, creating robust agents will require a combination of methodologies, of which this testing approach is only a part.\\n\\nAgreed, this an exciting direction for future work. We believe our work is essential for this goal - if we cannot test whether an agent is robust or not, we cannot hope to develop robust agents. Note that in section 4.3 we use the FPP in a simple way to identify more robust agents. We hope future work extends on this - one way is to learn the FPP online with the policy and apply it for adversarial training. This could yield large improvements in sample efficiency - if the FPP is 100x faster at failure search, the agent gets useful examples 100x as often.\\n\\n> I would suggest incorporating some of the descriptions of the models and methods in Appendix D into the main paper.\\n\\n\\nWe\\u2019ve edited down the length of the paper, which allows to move some important details to the main paper. We\\u2019ll mention some details regarding the training + architecture of the failure probability predictor in the next update. Are there any specific details you would suggest we include?\\n\\n> Sec 4.2: How are the confidence bounds for the results calculated?\\n> What are the \\\"true\\\" failure probabilities in the experiments?\\n\\nThe ground truth failure probabilities are obtained by running the VMC estimator for 5e6 episodes on Driving and 2e7 episodes on Humanoid. Right now, this is mentioned in the footnote at the bottom of page 7, with additional details in the appendix. Thanks for raising this - we\\u2019ve definitely tried to make these details as clear as possible, but also realize there\\u2019s a lot of such details, and may still be unclear. Please let us know if the writing could be clearer.\\n\\nThe confidence bands in Figure 1 represent 2 standard errors. Each plot is generated by running the estimators many times, and plotting the probability of an unreliable estimate. We use a conservative estimate for standard errors, where if p^ is the empirical mean over n trials for the probability parameter for a Bernoulli RV, SE(p^) = sqrt(max(p^, 0.1) * (1-p^) / n). The max is just to avoid overly narrow confidence bands when p^ is very close to 0 (i.e. when none of the estimates from the estimator are unreliable).\\n\\n> Sec 4.3: There is a reference to non-existant \\\"Appendix X\\\"\\n\\nThanks, fixed.\"}",
"{\"title\": \"Addressing main concerns and novelty\", \"comment\": \"Thank you for the review and suggestions. We first address what we understand to be the main concerns in your review:\\n\\nWe believe there are two sources of novelty. (1) A long-term goal is robust RL agents. Testing agents when rewards are highly sparse is on the critical path to this goal. To our knowledge, this problem has gone unaddressed. Thus, one novelty is considering a practical and important class of rare event estimation problems. (2) Our setting is fairly different from classical settings. By exploiting its structure, we provide an effective approach, whereas prior approaches simply would not work.\\n\\n> Small amount of novelty; primarily an application of established techniques\\n> The specific novelty of the approach seems to be fitting the proposal distribution to failures observed during training. \\n\\nWe believe there are several novel ideas in our approach which are missing in this summary. These novelties aren\\u2019t just small changes - we don\\u2019t see how existing approaches could handle our setting (failure search and risk estimation, with binary failure signals) without them. Admittedly, we emphasized importance over novelty in writing the paper, and will edit for clarity.\\n\\nThe main novelty in the continuation approach is to learn the proposal distribution from a family of related, but weaker, agents. Our method goes beyond simply fitting a function to data. Fitting a proposal distribution to failures observed for the final agent would not work well. For example, in Humanoid, the final agent fails once every 110k episodes, and was trained for 300k episodes. If we run existing methods like the cross-entropy method on the final agent, we would need significantly more than 300k episodes of data to get a good proposal distribution. \\n\\nAnother novel aspect is our extension of the standard importance sampling setup to include stochasticity. While this seems very fundamental, we are not aware of this in prior work. To reflect the practicalities of RL tasks, we separate controllable randomness (observed initial conditions) from unobservable, uncontrollable randomness (environment and agent randomness, or unobserved initial conditions). We show this changes the form of the minimum-variance proposal distribution (Proposition 3.2). Additionally, in our setup, the initial state distribution is arbitrary and unknown.\\n\\n> I wonder if learning the proposal distribution based on failures observed during training presents a risk of narrowing the range of possible failures being considered.\\n\\nThis is a good observation. In our humanoid experiments, we safeguard against this using a differentiable neural dictionary (Appendix D.1, moved to E.1 in the latest revision). This encourages higher failure probabilities for initial conditions far from those seen during training. Also see our response to R3 regarding statistical efficiency.\"}",
"{\"title\": \"Statistical efficiency and other technical concerns\", \"comment\": \"Thank you for taking the time to write very thoughtful comments.\\n\\n> \\\"I believe that this paper addresses an important problem in a novel manner (as far as I can tell) and the experiments are quite convincing.\\\"\\n\\nIt sounds like we\\u2019re on the same page regarding the importance of the problem, novelty, and experimental sections. You raised some really good points about the technical section, which we discuss below.\\n\\n> \\\"The main negative point is that I believe that the proposed method has some flaws which may actually decrease statistical efficiency in some cases... It seems to me that a weak point of the method is that it may also severely reduce the efficiency compared to a standard MC method.\\\"\\n\\nTheoretically, we can ensure that our method never does more than 2x worse than standard MC.\\n\\n(1) Here\\u2019s an intuitive approach for limiting slowdown by a constant factor. We can run both standard MC and our estimator in parallel. If standard MC finds at least a few failures, we can use standard MC. If not, we can use our method. This incurs a slow-down of 2x in the worst case, while remaining orders of magnitudes better in safety critical domains such as the ones we test. Neufeld et al, which we mention in our related works, give even better guarantees when combining stochastic estimators.\\n\\n(2) Moreover, any method for variance reduction or choosing proposal distributions can be worse in certain cases. This is true for cross-entropy method, subset simulation, control variates, baselines, to name a few. Yet these methods are used in practice, with great success. Requiring 0 slowdown may be too demanding -- we suspect an analogue of the no free lunch theorem might hold -- but we can limit slowdown by a constant factor. We will make all this more clear in the manuscript.\\n\\nIn practice, we employ safeguards to protect us from the issues you describe. (1) For the humanoid experiment we used a Differentiable Neural Dictionary described in Appendix E.1 (Pritzel at al, 2017), this was in D.1 in the original version. A DND is a kNN classifier in feature space, but uses a learned pseudo-count to output higher failure probabilities when the query point is far from training points. Intuitively, the DND model outputs higher failure probabilities for points on which it is uncertain, related to UCB. (2) We trained the FPP on weaker agents. So our method typically over-estimates failure probabilities. (3) Even so, if f underestimates the probability of failure at several points x, it will still typically converge much faster than standard MC. If all x are underestimated by at most a factor of k, then our method slows down on the order of sqrt(k). We show experimentally that our method does orders of magnitude better so this slowdown is not bad.\\n\\n> \\\"The proposed method relies on the ability to initialize the system in any desired state. However, on a physical system, where finding failure cases is particularly important, this is usually not possible. It would be interesting if the paper would discuss how the proposed approach would be used on such real systems.\\\"\\n\\nOur method actually does not initialize the system at arbitrary states. We only assume that the initial state x is sampled from some (unknown) distribution. Further, the initial system state only needs to be partially observable and the unobserved details can be absorbed into Z. 
We will make this more clear in the paper - does this address your concern?\\n\\n> \\\"On page 6, in the first paragraph, the state is called s instead of x as before. Furthermore, the arguments of f are switched.\\\"\\n\\nThanks for spotting this, we will fix this.\\n\\nReferences (also cited in the original paper):\\nJames Neufeld, Andras Gyorgy, Csaba Szepesvari, Dale Schuurmans. Adaptive Monte Carlo via Bandit Allocation. In ICML 2014.\\nAlexander Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. In ICML 2017.\"}",
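The 2x worst-case mixing argument in point (1) above can be sketched in a few lines; this is our illustrative toy version, not the authors' implementation, and `vmc_sample`/`is_sample` are hypothetical black-box callables:

```python
def mixed_failure_estimate(vmc_sample, is_sample, budget, min_failures=5):
    # Split the evaluation budget between vanilla Monte Carlo and the
    # learned importance-sampling estimator; keep the VMC estimate only
    # when it sees enough failures to be reliable on its own. Worst case
    # is a 2x slowdown relative to spending everything on the better one.
    vmc = [vmc_sample() for _ in range(budget // 2)]  # raw 0/1 failure indicators
    iws = [is_sample() for _ in range(budget // 2)]   # unbiased importance-weighted indicators
    if sum(vmc) >= min_failures:
        return sum(vmc) / len(vmc)
    return sum(iws) / len(iws)
```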
"{\"title\": \"Clarification regarding Proposition 3.2\", \"comment\": \"Thank you for the specific feedback and helpful comments. We wanted to quickly clarify the correctness of Proposition 3.2, since it seemed to be a major point in your review.\\n\\n> \\\"It seems to me that Proposition 3.2 is wrong. In the proof it is written E[U^2] = E[W^2 c(X,Z)], which is wrong since U^2 = W^2 c^2(X,Z). This means that the proposal distribution Q_f* is not in fact the optimal proposal distribution. This is problematic because the entire approach is justified using this argument.\\\"\\n\\nWe believe the proof is correct, but this point is indeed subtle, and we\\u2019ll clarify it in the paper. In our case c(X, Z) is a Bernoulli random variable. So c^2(X, Z) = c(X, Z), as c(\\u00b7, \\u00b7) is either 0 or 1 and in both cases the square is the identity. This means E[U^2] = E[W^2 c^2(X,Z)] = E[W^2 c(X,Z)]. In the case where c represents an arbitrary distribution, the optimal proposal distribution is more difficult to compute and is a worthwhile question for future work. \\n\\nWe also note that the standard analysis of the optimal proposal distribution under importance sampling does not account for unobserved stochasticity, which we model in Z. This is why the optimal proposal distribution we derive (for Bernoulli random variables) differs from the standard case.\\n\\nPlease let us know if this addresses your concern.\"}",
"{\"title\": \"Effective application of an importance sampling framework to testing RL agent policies for rare failures\", \"review\": \"The overall approach is technically sound, and the experiments demonstrate a significant savings in sampling compared to naive random sampling. The specific novelty of the approach seems to be fitting the proposal distribution to failures observed during training. \\n\\nI think the method accomplishes what it sets out to do. However, as the paper notes, creating robust agents will require a combination of methodologies, of which this testing approach is only a part. \\n\\nI wonder if learning the proposal distribution based on failures observed during training presents a risk of narrowing the range of possible failures being considered. Of course identifying any failure is valuable, but by biasing the search toward failures that are similar to failures observed in training, might we be decreasing the likelihood of discovering failures that are substantially different from those seen during training? One could imagine that if the agent has not explored some regions of the state space, we would actually like to sample test examples from the unexplored states, which becomes less likely if we preferentially sample in states that were encountered in training.\\n\\nThe paper is well-written with good coverage of related literature. I would suggest incorporating some of the descriptions of the models and methods in Appendix D into the main paper.\\n\\nComments / Questions:\\n* Sec 4.2: How are the confidence bounds for the results calculated?\\n* What are the \\\"true\\\" failure probabilities in the experiments?\\n* Sec 4.3: There is a reference to non-existant \\\"Appendix X\\\"\", \"pros\": [\"Overall approach is sound and achieves its objectives\"], \"cons\": [\"Small amount of novelty; primarily an application of established techniques\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Timely topic, reasonable approach, and good experimental results\", \"review\": \"This paper proposed an adversarial approach to identifying catastrophic failure cases in reinforcement learning. It is a timely topic and may have practical significance. The proposed approach is built on importance sampling for the failure search and function fitting for estimating the failure probabilities. Experiments on two simulated environments show significant gain of the proposed approaches over naive search.\\n\\nThe reviewer is not familiar with this domain, but the baseline, naive search, seems like straightforward and very weak. Are there any other methods for the same problem in the literature? The authors may consider to contrast to them in the experiments. \\n\\nWhat is the certainty equivalence approach? A reference would be helpful and improve the presentation quality of the paper.\\n\\nWhat is exactly the $\\\\theta_t$ in Section 3.3? What is the dimension of this vector in the experiments? What quantities should be encoded in this vector in practice? \\n\\nI am still concerned about the fact that the FPP depends on the generalization of the binary classification neural network, although the authors tried to give intuitive examples and discussions. Nonetheless, I understand the difficulty. Could the authors give some conditions under which the approach would fail? Any alternative approaches to the binary neural network? What is a good principle to design the network architecture? \\n\\nOverall, this paper addresses a practically significant problem and has proposed reasonable approaches. While I still have concerns about the practical performance of the proposed methods, this work along the right track in my opinion.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Relevant, convincing experiments with a potential weak point in the method\", \"review\": \"PAPER SUMMARY\\n-------------\\n\\nThe paper proposes a method for evaluating the failure probability of a learned agent, which is important in safety critical domains. \\n\\nUsing plain Monte Carlo for this evaluation can be too expensive, since discovering a failure probability of epsilon requires on the order of 1/epsilon samples. Therefore the authors propose an adversarial approach, which focuses on scenarios which are difficult for the agent, while still yielding unbiased estimates of failure probabilities. \\n\\nThe key idea of the proposed approach is to learn a failure probability predictor (FPP). This function attempts to predict at which initial states the system will fail. This function is then used in an importance sampling scheme to sample the regions with higher failure probability more often, which leads to higher statistical efficiency.\\nFinding the FPP is itself a problem which is just as hard as the original problem of estimating the overall failure probability. However, the FPP can be trained using data from different agents, not just the final agent to be evaluated (for instance the data from agent training, containing typically many failure cases). The approach hinges on the assumption that these agents tend to fail in the same states as the final agent, but with higher probability. \\n\\nThe paper shows that the proposed method finds failure cases orders of magnitude faster than standard MC in simulated driving as well as a simulated humanoid task. Since the proposed approach uses data acquired during the training of the agent, it has more information at its disposal than standard MC. However, the paper shows that the proposed method is also orders of magnitudes more efficient than a naive approach using the failure cases during training.\\n\\n\\nREVIEW SUMMARY\\n--------------\\n\\nI believe that this paper addresses an important problem in a novel manner (as far as I can tell) and the experiments are quite convincing.\\nThe main negative point is that I believe that the proposed method has some flaws which may actually decrease statistical efficiency in some cases (please see details below).\\n\\n\\nDETAILED COMMENTS\\n-----------------\\n\\n- It seems to me that a weak point of the method is that it may also severly reduce the efficiency compared to a standard MC method. If the function f underestimates the probability of failure at certain x, it would take a very long time to correct itself because these points would hardly ever be evaluated. It seems that the paper heuristically addresses this to some extent using the exponent alpha of the function. However, I think there should be a more in-depth discussion of this issue. An upper-confidence-bound type of algorithm may be a principled way of addressing this problem.\\n\\n- The proposed method relies on the ability to initialize the system in any desired state. However, on a physical system, where finding failure cases is particularly important, this is usually not possible. It would be interesting if the paper would discuss how the proposed approach would be used on such real systems.\\n\\n- On page 6, in the first paragraph, the state is called s instead of x as before. Furthermore, the arguments of f are switched.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SyGjQ30qFX | TopicGAN: Unsupervised Text Generation from Explainable Latent Topics | [
"Yau-Shian Wang",
"Yun-Nung Chen",
"Hung-Yi Lee"
] | Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied because the obtained discrete representations can benefit unsupervised learning. However, the performance of learning discrete representations of textual data with deep generative models has not been widely explored. In addition, although generative adversarial networks (GAN) have shown impressive results in many areas such as image generation, they are notoriously difficult to train for text generation. In this work, we propose TopicGAN, a two-step text generative model, which is able to solve those two important problems simultaneously. In the first step, it discovers the latent topics and produces bag-of-words according to the latent topics. In the second step, it generates text from the produced bag-of-words. In our experiments, we show our model can discover meaningful discrete latent topics of texts in an unsupervised fashion and generate high quality natural language from the discovered latent topics. | [
"unsupervised learning",
"topic model",
"text generation"
] | https://openreview.net/pdf?id=SyGjQ30qFX | https://openreview.net/forum?id=SyGjQ30qFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byg_im0lxN",
"BkguegXjR7",
"S1erKnbi0X",
"B1eCntlsRQ",
"B1gtuhMg6m",
"SJlsDbO627",
"H1l8tzxq3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770464267,
1543348208119,
1543343228918,
1543338421569,
1541577840878,
1541402978590,
1541173886003
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1392/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1392/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1392/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1392/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1392/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1392/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1392/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes TopicGAN, a generative adversarial approach to topic modeling and text generation. TopicGAN operates in two steps: it first generates latent topics and produces bag-of-words corresponding to those latent topics. In the second step, the model generates text conditioning on those topic words.\", \"pros\": \"It combines the strength of topic models (interpretable topics that are learned unsupervised) with GAN for text generation.\", \"cons\": \"\", \"there_are_three_major_concerns_raised_by_reviewers\": \"(1) clarity, (2) relatively thin experimental results, and (3) novelty. Of these, the first two were the main concerns. In particular, R1 and R2 raised concerns about insufficient component-wise evaluation (e.g., text classification from topic models) and insufficient GAN-based baselines. Also, the topic model part of TopicGAN seems somewhat underdeveloped in that the model assumes a single topic per document, which is a relatively strong simplifying assumption compared to most other topic models (R1, R3). The technical novelty is not extremely strong in that the proposed model combines existing components together. But this alone would have not been a deal breaker if the empirical results were rigorous and strong.\", \"verdict\": \"Reject. Many technical details require clarification and experiments lack sufficient comparisons against prior art.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"technical details require clarification and experiments lack sufficient comparisons\"}",
"{\"title\": \"Thank you for your valuable review\", \"comment\": \"(1)More details and writing:\\nIn the revised version of paper, we have provided more details and clearer explanations of our model in revised version Section 3.3. We have also rewritten many parts of the article to make the paper easier to understand.\\n\\n(2)Baseline:\\nBecause we use GAN to do the same fine tuning method of our proposed TopicGAN and our baseline VAE+WGAN. Therefore, we consider it a proper baseline. Please notice that the baseline and other related works can only generate text conditioned on noise, while our generation task is more difficult that we conditioned not only on noise but also on discovered latent topics.\\n\\n(3)Evaluation metric:\\nWe used perplexity of generated text as our standard metrics. That is because we studied some previous generation papers(e.g.: https://arxiv.org/abs/1801.07736) and find some them using perplexity as their evaluation metric. To evaluate whether our topic model can discover meaningful topics, we also report topic coherence score in revised paper Table.3.\\n\\n(4)\\nThe reason that we said our method can be combined with any other text generation method using GAN is that we can use any other GAN to jointly train our whole text generator G, where G is composed of BOW generator $G_{bow}$ and sequence generator $G_{seq}$.\", \"minor_comments\": \"You are right. It should be equation 1 instead. We have revised this mistake.\"}",
"{\"title\": \"Thank you for your thorough review.\", \"comment\": \"\", \"writing\": \"We have rewritten many parts of the article to make the paper easier to understand. In addition, some not convincing explanations mentioned in the review are also revised.\\n\\n(1)How to select topic words:\\nOur topic classification model is a V*K matrix M, where V is the word number and K is the number of latent topics.\\nFor each column of M is a topic distribution of words which is similar to conventional topic models such as LDA.\\nThe value of M[i][j] represents the importance of i-th word to j-th topic. Therefore, we were able to select the top few words with highest weight within each topic as topic words. We have included those details in the revised version (Section4.2).\\n\\n(2)Why word sequence generator is included in the paper:\\nThe goal of our work not only aims to train a high quality topic model, but also aims to generate high quality text using GAN by two steps generation. Using GAN for language generating is notorious for extremely difficult to train because it needs to (1) generate meaningful context with (2) correct grammar simultaneously. However, in our work, we try to separate this two core part of language generation and make the generation process easier. \\n\\n(3)Some detail:\\nFor all experiments(including baseline models), we set the topic number same as the class number. For example, in 20 News Groups, the class number is 20, and thus we set the topic number to 20. We use online LDA with different hyperparameters adjusted to get the better result on each dataset.\\n\\n(4)Why BOW vocabulary size is smaller:\\nThe size of the bag-of-words vocabulary is smaller because we hope during bag-of-words generation our model can focus on more important and general words. With smaller vocabulary size of bag-of-words, the result of unsupervised learning is better.\\n\\n(5)Cross-references:\\nWe have rewritten some methodology part and make it clearer.\\n\\n(6)Typo:\\nWe have revised this typo.\", \"part2\": \"(1)Assuming documents are generated from one single main topic:\\nIn our experiments, we conduct unsupervised document classification, in which the documents have only one single class. Therefore, for those unsupervised classification experiments, assuming each documents coming from a single main topic is a more appropriate assumption, which allows our model to learn more distinct topics. In addition, as the length of our training documents is short, it\\u2019s hard to break the short text into several topics, which is one of the possible reason that makes LDA works not well on short text.\\n\\n(2)Proposed model ignores the word counts:\\nAlthough the model ignores the word counts, it still performs well in unsupervised document classification and topic coherence score.\\n\\n(3)Topic coherence score is reported:\\nWe have evaluated the topic coherence score and reported the score on revised paper Table 3. Our method outperformed baseline method on all datasets, which implies the effectiveness of our proposed topic model. We believe LDA worked properly as the topic coherence scores and unsupervised classification accuracy were in reasonable range.\\nWe think LDA is the most famous conventional topic model, could you list the state-of-the-art conventional topic model that should be compared?\"}",
"{\"title\": \"Thank you for your valuable review\", \"comment\": \"(1)Writing:\\nWe have rewritten many parts of the article to make the paper easier to understand. In addition, some not convincing explanations mentioned in the review are also revised.\\n\\n(2)Assuming documents are generated from one single main topic:\\nIn our experiments, we conduct unsupervised document classification, in which the documents have only one single class. Therefore, for those unsupervised classification experiments, assuming each documents coming from a single main topic is a more appropriate assumption, which allows our model to learn more distinct topics. In addition, as the length of our training documents is short, it\\u2019s hard to break the short text into several topics, which is one of the possible reason that makes LDA works not well on short text.\\n\\nHowever, we acknowledge that for long documents, it's more appropriate to assume they come from the mixture of topics. In fact, it's feasible for our method to generate documents from several topics because info-GAN allows us to decide the distribution of the predicted code. We are conducting experiments on using several topics to generate longer documents and the current result seems better than generating from one single main topic.\\n\\n(3)Novelty:\\nThe novelty of our work is that (a) as far as we know, there is no previous work which tries to use GAN to achieve topic modeling, which is a worth exploring direction. (b) Some extra tricks for Info-GAN training (c)Two steps generation of text may also be a better and easier method for generating text.\\n\\n(4)Evaluation:\\nWe have evaluated the topic coherence score and reported the score on revised paper Table 3. Our method outperformed baseline method on all datasets, which implies the effectiveness of our proposed topic model.\\nWhen conducting human evaluation to evaluate the quality of sentences, we asked 17 annotators to compare 13 sets of sentences generated by different methods.\"}",
"{\"title\": \"This paper proposes a generative adversarial approach to topic modeling. While the idea is fine, the paper has several limitations.\", \"review\": \"This paper proposes TopicGAN, a generative adversarial approach to topic modeling and text generation. The model basically combines two steps: first to generate words (bag-of-words) for a topic, then second to generate the sequence of the words.\\n\\nWhile the idea is interesting, there are several important limitations. First, the paper is difficult to understand, and some of the explanations are not convincing. For example, in section 4.1.1, it says \\\"... our method assumes that the documents are produced from a single topic ... Our assumption aligns well with human intuition that most documents are generated from a single main topic.\\\" This goes very much against the common assumption of a generative topic model, such as LDA, which the model compares against. I don't mean to argue either way, but if the paper presents a viewpoint which is quite different from the commonly accepted viewpoint (within the specific research field), then there needs to be a much deeper explanation, ideally with concrete evidence to support it. Another sentence from the same paragraph states that their \\\"model outperforms LDA because LDA is a statistical model, while our generator is a deep generative model.\\\" This argument also seems flawed and without concrete evidence. There are other parts in the paper where the logic seems strange and without evidence, and they make it difficult to understand and accept the major claims of the paper.\\n\\nSecond, the model does not offer much novelty. It seems that the two-stage model simply puts the two pieces, a GAN-style generator and an LSTM sequence model together. Perhaps I am not understanding the model, but the model description was also not clear nor easy to understand with respect to its novelty.\\n\\nThird, the evaluation is somewhat weak. There are two main evaluations tasks: text classification and text generation. For the first task, classification is not the main purpose of topic models, and while text classification _is_ used in many topic modeling papers, it is almost always accompanied by other evaluation metrics such as held-out perplexity and topic coherence. This is because the main purpose of topic modeling is to actually infer the topics (per-topic word distribution and per-document topic distribution) and model the corpus. Thus I feel it is not a fair evaluation to just compare the models using text classification tasks. The second evaluation task of text generation is not explained enough. For the human evaluation, who were the annotators, and how were they trained? How many people annotated each output, and what was the inter-rater agreement? How many sentences were evaluated, and how were they chosen? Without these details, it is difficult to judge whether this evaluation was valid.\\n\\nLastly, the results are mediocre. Besides the classification task, the others do not show significant improvements over the baseline models. Perplexity (table 3) shows similar results for DBPedia and worse results (than WGAN-gp) for Gigaword. 
Table 4 shows slightly better results for \\\"Preference\\\" for TopicGAN with joint training, but \\\"Accuracy\\\" is measured only for the proposed model and not the baseline model.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"This paper presents a topic model based on adversarial training.\", \"review\": \"This paper presents a topic model based on adversarial training. Specifically, the paper adopts the framework of InfoGAN to generates the bag-of-words of a document and the latent codes in InfoGAN correspond to the latent topics in topic modelling. In addition to the above framework, to make the model work better, several add-ons are also proposed, combining autoencoder, loss clipping, and a generative model to generate text sequences based on the bag-of-words.\", \"my_comments_are_as_follows\": \"1. There are several issues of this paper on clarity:\\n\\n(1) The first major one for me is that the authors did not give any details on how to interpret the latent code (i.e. the topics here) with the top words. In conventional topic models, usually a topic is a distribution of words, so that top words can be selected by their weights. But I did not see something similar in the proposed model.\\n\\n(2) Another major one is why the word sequence generator is introduced in the proposed model. I did not see the contribution of this part to the whole model as a topic model, although the joint training shows the marginal performance gain on text generation.\\n\\n(3) Some of the experiment settings are not provided, for example, the number of topics, the value of \\\\alpha and \\\\lambda in the proposed model, the hyperparameters of LDA, which are crucial for the results.\\n\\n(4) Why is the size of the bag-of-words vocabulary set to be 3K whereas that of the word generation vocabulary set to be 15K?\", \"minor_issues\": \"(5) In the related work of InfoGAN, there are a lot of cross-references to the following sections, before they are properly introduced.\\n\\n(6) Typo of \\\"Accurcay\\\" in Table 4(a).\\n\\n2. Using adversarial training for topic models seems to be an interesting idea. There is not much work in this line and this paper proposes a model that seems to be working. But it seems to be that the proposed model has several issues as follows:\\n\\n(1) Each document seems to have only one topic, which can be an impractical setting for long documents.\\n\\n(2) The proposed model ignores the word counts, which can be important for topic modelling.\\n\\n(3) I did not see a major improvement of the proposed model over others, given that the only numerical result reported is classification accuracy and the state-of-the-art conventional topic models are not compared. This also leads to my concern about the experiments. I would expect more comparisons than classification accuracy, such as topic coherence and perplexity (for topic modelling) and with more advanced conventional models. From the low values of the accuracy on 20NG, I am wondering if LDA is working properly.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"not convincing\", \"review\": \"This paper proposes a new framework for topic modeling, which consists of two main steps: generating bag of words for topics and then using RNN to decode a sequence text.\", \"pros\": \"The author draws lessons from the infoGAN and designed a creative object function with reconstruction loss and categorical loss. As a result, this paper achieved impressive outcome for topic modeling tasks.\", \"comments\": \"1. High-level language is used to describe how to train two parts of the model, which is not technically clear. It would be better describe the algorithms in more details by listing steps for your algorithm in the section 3.3.\\n\\n2. For text generation experiments, why didn\\u2019t you compare your model with any other related model such as SeqGAN or TextGAN? It is not so convincing to just use VAE+Wgan-gp as a baseline model.\\n\\n3. For qualitative analysis part, you just listed some of your generated sentences for proving the fluency and relevance. Why didn\\u2019t you use some standard metrics for evaluating the quality of the text? I cannot judge the quality of your model through these randomly selected sentences.\\n\\n4. As you mentioned in this paper \\u201cyour model can be easily combined with any current text generation models\\u201d, have you done any experiments for demonstrating the original text generation model will get better performance after applying your framework?\", \"minor_comments\": \"1. On page 2 and page 4, you mentioned \\u201cthe third term in (2)\\u201d. According to my understanding, this should be equation 1 instead.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Sklsm20ctX | Competitive experience replay | [
"Hao Liu",
"Alexander Trott",
"Richard Socher",
"Caiming Xiong"
] | Deep learning has achieved remarkable successes in solving challenging reinforcement learning (RL) problems when a dense reward function is provided. However, in sparse reward environments it still often suffers from the need to carefully shape the reward function to guide policy optimization. This limits the applicability of RL in the real world since both reinforcement learning expertise and domain-specific knowledge are required. It is therefore of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped, sparse reward signals. We propose a novel method called competitive experience replay, which efficiently supplements a sparse reward by placing learning in the context of an exploration competition between a pair of agents. Our method complements the recently proposed hindsight experience replay (HER) by inducing an automatic exploratory curriculum. We evaluate our approach on the tasks of reaching various goal locations in an ant maze and manipulating objects with a robotic arm. Each task provides only binary rewards indicating whether or not the goal is achieved. Our method asymmetrically augments these sparse rewards for a pair of agents each learning the same task, creating a competitive game designed to drive exploration. Extensive experiments demonstrate that this method leads to faster convergence and improved task performance. | [
"reinforcement learning",
"sparse reward",
"goal-based learning"
] | https://openreview.net/pdf?id=Sklsm20ctX | https://openreview.net/forum?id=Sklsm20ctX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxUtBU-xN",
"ryeSkJO1xV",
"BkgA6ZKc14",
"Bkltsc1uC7",
"BkedtqyuCX",
"r1lUvqk_Rm",
"rkxn75k_AX",
"BJefLzCXaX",
"SygdrZr53Q",
"BygCiBx9nm",
"HJeEbx9t27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544803709897,
1544679132579,
1544356293822,
1543137952531,
1543137920295,
1543137885796,
1543137828510,
1541821001986,
1541194048412,
1541174694481,
1541148668281
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1391/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1391/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1391/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1391/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1391/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1391/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1391/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1391/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1391/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1391/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1391/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a new method to improve exploration in sparse reward problems, by having two agents competing with each other to generate shaping reward that relies on how novel a newly visited state is.\\n\\nThe idea is nice and simple, and the results are promising. The authors implemented more baselines suggested in initial reviews, which was also helpful. On the other hand, the approach appears somewhat ad hoc. It is not always clear why (and when) the method works, although some intuitions are given. One reviewer gave a nice suggestion of obtaining further insights by running experiments in less complex environments. Overall, this work is an interesting contribution.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Nice idea with good empirical results, but ad hoc approach\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you very much for your reply!\\nThe following are the necessary details to get HER+ICM to work on those tasks:\\nWe adopt code accompanying the paper 'Large-Scale Study of Curiosity-Driven Learning' to implement HER+ICM for fair comparison with HER+CER. An intrinsic reward is computed via ICM for each transition during sampling samples from environment, for each episode sampled from replay buffer HER is utilized to relabel and recompute extrinsic reward in each transition, then each transition has a weighted sum of intrinsic reward and extrinsic reward, finally we sample random transitions in these episodes for gradient computation. The weight between extrinsic reward and intrinsic reward is tuned between 0.1, 0.3, 0.6, 0.9 for each task.\\nIn order to get ICM to work on continuous control tasks, we discretize the action space into 5 bins per dimension for maze environment, and we discretize the action space into 10 bins per dimension for robotic control environment. We use 256x256 neural network to implement forward network and inverse network, and 128x128 neural network for embedding network, we use 3 layers neural network with hidden size 256 for policy and critic network, the same as in HER + CER. Buffer size, batchsize and number of parallel MPI cores are kept the same as HER + CER. To get HER + ICM to work on the four tasks, we additionally tune the ratio between HER replays and regular replays in from 2,3,4,5,6, and tune the learning rate of policy network and critic network and ICM between 0.001, 0.0003, 0.0001 for each task.\\nWe will add this detail in the experiment sections of the paper in next revision, and will be happy to add further clarifications if the reviewer requests it.\"}",
"{\"title\": \"Please add details of HER + ICM\", \"comment\": \"I'm glad to see that HER + CER was indeed better than HER + ICM, in terms of stability and faster learning.\\nHowever, I do not find any implementation details of HER + ICM. Could you describe how it was done and how did you tune the hyperparamaters for this method? Basically the question is how can I be convinced that the baseline is tuned properly -- as much as you have tuned for your method. Please include these details in your next revision of the paper.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful and constructive comments. Following the reviewer\\u2019s suggestions, we have re-written portions of the paper to hopefully provide better insights into why CER improves performance. In addition, we have taken care to improve the quality of the figures to improve readability.\\nSince the policy does not include any awareness of the other agent, no fixed correspondence between rollouts is required and the randomness offers a better sampling. The replay buffer size may come in to play if it induces comparison between rollouts from very different stages of training. Even so, the agents are intentionally different and MADDPG helps to make the learning problem more stationary. As such, we expect our results to be fairly robust to the size of the replay buffer.\\nUnfortunately, we are unable to ground this method or the results within such a theoretical lens at the time being. However, we would hope to explore that possibility more before our paper is finalized or in future work.\\nThe most likely explanation for B being worse than A is simply that, during the initial stages of training, the parameters of B are periodically re-initialized. We find that this practice improves stability during training and the consistency of Agent A\\u2019s final performance. However, there is likely also an effect due to the asymmetric reward structure, such that the reward received by Agent A is more helpful for learning the underlying task itself. In anticipation of other readers having the same question, we have made these details clearer in the paper.\\nRegarding the state comparison, we apply the same criteria used to measure goal completion. So, only a small portion of the whole state is used.\\nWe agree that Section 4.2 is more appropriate as a section of the appendix. We have rearranged the paper accordingly.\\nBased on the results earlier in the paper, the empirical best application of CER is in combination with HER. Our intention with Figure 4 was simply to provide a more exhaustive demonstration of the benefit gained by CER when added to the previous best method (HER). As discussed in the paper, we do not wish for HER and CER to be viewed as alternative methods but instead complementary methods. We use earlier figures to report the results of CER without HER.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their constructive and thoughtful feedback. Our updated version of the paper includes a number of changes designed to improve readability and consistency with respect to plots and notation. In addition, we have moved Figure 3 and its accompanying text to the appendix, since those results are more related to practical considerations that we wanted readers to be aware of in case they cared to implement our methods. We have also taken care to clarify those results.\\nWhile we would ideally be able to provide an exhaustive comparison between our method and other policy optimization algorithms, such a comparison would likely be difficult to interpret within the scope of this work. Since all the algorithmic variations we consider are based on DDPG, directly comparing the results from each variant is straightforward. Along the same lines, reward relabeling is most straightforward in the application of off-policy algorithms. Figuring out the best way to perform adversarial reward relabeling for on-policy algorithms, such as PPO, is not something we can adequately address within the scope of our current work. However, to demonstrate that the challenge is not due simply to inadequacies of DDPG, we did attempt training with PPO as a baseline (without any relabeling). The results of PPO by itself are essentially the same as the results of DDPG by itself: it doesn\\u2019t work. The combination of sparse reward and difficult exploration are enough to make both baselines fail. To make this point available to the reader, we have included these results in Figure 1, where we also show vanilla DDPG.\\nWe thank the reviewer for pointing out the confusion in how we compare HER and CER. We have re-written portions of the paper to make this clearer. The point we wish to communicate is simply that HER and CER likely address different challenges within this domain of RL and are easily combined. Given that the two methods are more powerful in tandem, they do seem to interact in practice, so we agree that describing them as \\u201corthogonal\\u201d may be confusing. We have included additional analyses to provide some intuition on how CER interacts with HER.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank the reviewer for their very thoughtful, constructive, and encouraging feedback. Within the timeframe allowed for rebuttals, we have not had a chance to apply a more reductionist/theoretical approach. We hope to address such questions in future work.\\nFrom our observations, CER is very stable if implemented correctly (as is HER by itself). This is quickly obvious when considering the variability in performance when using curiosity instead of CER (see new baselines in the revised paper). However, implementation matters. We found that our results were most consistent and strong when we periodically reset the parameters of agent B during the initial stages of training. In addition, we observe benefits in this regard from using a large batch size. We have made these details clearer in the revised paper, but found it necessary to include them in the appendix for space considerations.\\nWhile HER helps the agent learn to reach arbitrary goals amongst the states it is capable of reaching, CER incentivizes the agent to encounter hard to reach states. The combination of these 2 strategies are powerful in the settings we have explored. However, one can imagine task/goal structures where generalizing from arbitrary goals is difficult to the point that CER does not help to explore useful directions. From a more technical perspective, CER may be difficult to apply when it is not straightforward to define reward based on the proximity of two substates.\\nWe have taken care to revise the figure annotations and notation to address the issues included in the review. In addition, we have included a new analysis based on the reviewer\\u2019s suggestion where we visualize the state distribution of agents trained on a U-maze during training. \\nOne likely strength of the reward shaping induced by CER comes from the fact that the only guaranteed reward is that related to the task itself. While the behavior of one agent could create an exploit for the other agent, we have never observed that to dominate learning such that policies oscillate between globally suboptimal behaviors. That said, it is possible that we have missed some examples of that kind of failure case and/or that the results might suffer if the balance of each reward type is off. From our empirical results, however, it does not seem that such degenerate behavior is an issue, and we did not have to carefully tune rewards.\\nWe have not tried scheduling CER until a certain point. Indeed, that may unmask some improvements if done correctly. However, it would require learning a new Q-function since we would effectively be switching to a single-player game with a new reward function. How to do that without inviting instability is not trivial and we did not get the chance to address that challenge.\\nFollowing the reviewer\\u2019s suggestions, we have made it clearer when the two variations of CER are to be used and how that may affect the inherent limitations of CER.\\nWe did not experiment with a larger pool of agents. Using >2 agents while keeping the reward formulation the same (i.e. treating CER as a competition between 2 agents) would give the opportunity to relabel transitions using a more diverse set of agent B\\u2019s. Speculatively, richer sampling of competitor strategies could make CER more effective. However, this approach may exacerbate nonstationarities during learning. 
In any case, it is not immediately obvious where/how to modify CER to best support additional agents.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your constructive review. The review has pointed out that, since CER should not be viewed as an alternative to HER, it is incomplete to treat HER as a suitable baseline. We agree with this assessment and have made sure to include results with HER+ICM (intrinsic curiosity module [1][2]). To compare CER with ICM, we discretized the action space into 5 bins per dimension for maze environments, and for robotic control environments we discretized the action space into 10 bins per dimension. We adapt ICM on top of HER for fair comparison. Experimental results show that CER is better than ICM in terms of faster learning and higher performance. Furthermore, ICM introduces a source of variability that CER manages to sidestep.\\nFollowing the reviewer\\u2019s comments, we have selected more conservative language for motivating our method. We cannot provide a guarantee that our method allows the agent to generalize to task-relevant goals. Instead, we attempt to formulate a competition that incentivizes exploration and leverage the fact that the dynamics of this competition effectively yield an automatic curriculum of exploration. Our intuition is that, when used with HER, CER may help to discover arbitrary goals that more readily generalize to task-relevant goals.\"}",
"{\"title\": \"Interesting idea; lack of comparisons with current methods.\", \"review\": \"The author proposes to use a competitive multi-agent setting for encouraging exploration.\\n\\nI very much agree with most of previous reviewers, and their constructive suggestions. However, I find a major issue with this paper is the lack of baseline comparisons. The paper shows that CER + HER > HER ~ CER. I do not think CER should be compared to HER at all. CER to me attacks the exploration problem in a very different way than HER. It is not trying to \\\"reuse\\\" experience, which is the core in HER; instead, it uses 2 agents and their competition for encouraging visiting new states. This method should be compared to method that encourages exploration via some form of intrinsic motivation. There are methods proposed in the past, such as [1]/[2] that uses intrinsic motivation/curiosity driven prediction error to encourage exploration. Note that these methods are also compatible with HER. I'd suggest comparing CER with one of these methods (if not all) both with and without HER.\", \"minor\": \"In the beginning paragraph of 3.1, the paper states: \\n\\\"\\nWhile the re-labelling strategy introduced by HER provides useful rewards for training a goal-conditioned\\npolicy, it assumes that learning from arbitrary goals will generalize to the actual task goals. As such,\\nexploration remains a fundamental challenge for goal-directed RL with sparse reward. We propose a relabelling\\nstrategy designed to overcome this challenge.\\n\\\"\\nI think overcoming this particular challenge is a bit overstating. The method proposed in this paper is not guaranteed to address the \\\"fundamental challenge\\\" either --- i.e., why can you assume that learning from arbitrary goals that results from the dynamics of two agents will generalize to the actual task goals?\\n\\nI will change my rating accordingly if there are more meaningful comparisons made in the rebuttal.\\n\\n[1] Curiosity-driven Exploration by Self-supervised Prediction, Pathak et. al.\\n[2] Large-Scale Study of Curiosity-Driven Learning. Burda et. al.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The authors propose a new method for learning from sparse rewards in model-free reinforcement learning settings. This is a challenging and important problem in model-free RL, mainly due to the lack of effective exploration. They propose a new way of densifying the reward by encouraging a pair of agents to explore different states (using competitive self-play) while trying to learn the same task. One of the agents (A) receives a penalty for visiting states that the other agent (B) also visits, while B is rewarded for visiting states found by A. They evaluate their method on a few tasks with continuous action spaces such as ant navigation in a maze and object manipulation by a simulated robotic arm. Their method shows faster convergence (in some cases) and better performance than comparable algorithms.\", \"strengths\": \"Attempts to solve a long-standing problem in model-free RL (effective exploration in sparse reward environments)\\nClear writing and structure, easy to understand (except for some minor details)\\nNovel, intuitive, and simple method building on ideas from previous works\\nGood empirical results (better than state of the art, in terms of performance) on some challenging tasks\", \"weaknesses\": \"Not very clear why (and when) the method works -- more insight from experiments in less complex environments or some theoretical analysis would be helpful\\nIt would also be useful to better understand the conditions under which we can expect this to bring significant gains and when we can expect this to fail (or not help more than other methods) \\nNot clear how stable (to train) and robust (to different environment dynamics) the method is\\n\\n\\nMain Comments / Questions:\\nThe paper makes the claim that their technique \\u201cautomatically generates a curriculum of exploration\\u201d which seems to be based more on intuition rather than clear experiments or analysis. I would suggest to either avoid making such claims or include stronger evidence for that. For example, you could consider visualizing the visited states by A and B (for a fixed goal and initial state) at different training epochs. Other such experiments and analysis would be very helpful.\\nIt is known that certain reward shaping approaches can have negative consequences and lead to undesired behaviors (Ng et al., 1999; Clark & Amodei, 2016). Why can we expect that this particular type of reward shaping doesn\\u2019t have such side effects? Can it be the case that due to this adversarial reward structure, A learns a policy that takes it to some bad states from which it will be difficult to recover or that A & B get stuck in a cyclic behavior? Have you observed such behaviors in any of your experiments?\\nDo you train the agents with using the shaped reward (from the exploration competition between A and B) for the entire training duration? Have you tried to continue training from sparse reward only (e.g. after the effect ratio has stabilized)? One problem I see with this approach is the fact that you never directly optimize the true sparse reward of the tasks, so in the late stages of training your performance might suffer because the agent A is still trying to explore different parts of the state space. \\nCan you comment on how stable this method is to train (given its adversarial nature) and what potential tricks can help in practice (except for the discussion on batch size)?\\nPlease make clear the way you are generating the result plots (i.e. 
is A evaluated on the full task with sparse reward and initial goal distribution with no relabelling?).\\nIn Algorithm 1, can you include the initialization of the goals for A and B? Does B receive identical goals as A?\\nIt would also be helpful to more clearly state the limitations and advantages of this method compared to other algorithms designed for more efficient exploration (e.g. the need for a resettable environment for int-CER but not for ind-CER etc.).\\n\\n\\nMinor Comments / Questions:\\nYou might consider including more references in the Related Work section that initializing from different state distributions such as Hosu & Rebedea (2016), Zhu et al. (2016), and Kakade & Langford (2002), and perhaps more papers tackling the exploration problem. \\nCan you provide some intuition on why int-CER performs better than ind-CER (on most tasks) and why in Figure 1, HER + int-CER takes longer to converge than the other methods on the S maze?\\nIn Figure 4, why are you not including ind-CER (without HER)?\\nHave you considered training a pool of agents with self-play (for the competitive exploration) instead of two agents? Is there any intuition on expecting one or the other to perform better?\", \"plots\": \"What is the x-axis of the plots? Number of samples, episodes, epochs? Please label it.\\nPlease be explicit about the variance shown in the plots. Is that the std?\\nIt would be helpful if to have larger numbers on the xy-axes. It is difficult to read when on paper.\\nCan you explain how you smoothed the curves -- whether before or after taking the average and perhaps include the min and max as well. I believe this could go in the Appendix.\", \"notation\": \"I don\\u2019t understand the need for calling the reward r_g instead of r. I believe this introduces confusion since the framework already has r taking as argument the goal g (eq. 1) while the g in the subscript doesn\\u2019t seem to refer to a particular g but rather to a general fact (that this is a reward for a goal-oriented task with sparse reward, where the goals are a subset of the states) (eq. 4)\\nPlease use a consistent notation for Q. In sections 2.1 and 2.2, at times you use Q(s,a,g), Q(a,s,g) or Q(s,a).\", \"typos\": \"Page 6, last paragraph of section 4.1: Interestingly, even the \\u2026 , is enough to support \\u2026\\nPage 7, last paragraph of section 4.3: Interestingly, \\u2026 adversely affects both ...\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"To address the sparse reward problems, the authors propose a relabeling strategy called Competitive Experience Reply (CER). This strategy relabels states, and places learning in the context of an exploration competition between a pair of agents. The experiments support some parts of authors\\u2019 claim well. However, the experiments are insufficient.\", \"review\": \"The authors propose a states relabeling strategy (CER) to encourage exploration in RL algorithms by organizing a competitive game between a pair of agents.\\nTo verify their strategy, they extend MADDPG as their framework. Then, they compare the performance of agents trained with HER, and both variants of CER, and both variants of CER with HER. The experiments show that CER can improve the performance of HER with faster converge and higher accuracy.\\n\\nMy major concerns are as follows.\\n1.\\tThe authors may want to conduct more experiments to compare CER with other state-of-the-art methods such as PPO[1]. As illustrated in Figure 1, the performance of HER is better than that of CER. The authors may want to analyze whether CER strategy alone could properly address the sparse reward problems, and why CER strategy can improve HER. The authors have mentioned that CER is \\u201corthogonal\\u201d to HER. I suggest authors provide more discussions on this statement. \\n2.\\tThe authors may want to improve the readability of this paper. \\nFor example, in Figure 1, the authors may want to clarify the meanings of the axes and the plots. \\nThe results shown in Figure 3 are confusing. How can the authors come to the conclusion that the optimal con\\ufb01guration requires balancing the batch sizes used for the two agents? \\nTo better illustrate the framework of CER, the authors may want to show its flow chart.\\n3.\\tThere are some typos. For example, in Section 2.1, the authors use T(s\\u2019|s,a) without index t; in Section 2.2, the authors use both Q(a,s,g) and Q(s,a,g). \\nThere is something wrong with the format of the reference (\\u201cTim Salimans and Richard Chen \\u2026 demonstration/, 2018.\\u201d) in the bottom of page 10.\\n\\n[1] Schulman J, Wolski F, Dhariwal P, et al. Proximal Policy Optimization Algorithms[J]. 2017.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"clear simple idea and good results\", \"review\": \"The paper is well written and easy to read. Exploration is one of the fundamental problems in RL, and the idea of using two agents for better exploration is interesting and novel. However, an explanation of the intuition behind the method would be useful. The experimental results show that the method works well in complex tasks. Since states are compared to each other in L2 distance, the method might not generalize to other domains where L2 distance is not a good distance metric.\", \"pros\": [\"well written\", \"a simple and novel idea tackling a hard problem\", \"good results on hard tasks\"], \"cons\": [\"an explanation of why the method should work is missing\", \"plot text is too small (what is the unit of X-axis?)\"], \"questions\": [\"what is the intuition behind the method?\", \"during training, randomly sampled two states are compared. why it is a good idea? how the replay buffer size will affect it?\", \"since it is a two-player game, is there anything you can say about its Nash equilibrium?\", \"why A is better than B at the task?\", \"when comparing states, are whole raw observations (including velocity etc.) used?\", \"section 4.2 doesn't seem to be that relevant or helpful. is it really necessary?\", \"fig 4 is missing CER alone results? why is that? it doesn't work by itself on those tasks?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
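The record above debates a competitive reward-relabeling rule: agent A is penalized for visiting states that agent B also reaches, B is rewarded for reaching states A visited, and (per the last review) states are compared by L2 distance. The sketch below is a minimal, hypothetical rendering of that rule in numpy; the function and variable names, the array shapes, and the eps/bonus threshold scheme are all assumptions made for illustration, not the authors' implementation.

import numpy as np

def cer_relabel(rewards_a, states_a, rewards_b, states_b, eps=0.5, bonus=1.0):
    # rewards_*: (N,) float arrays; states_*: (N, d) float arrays of visited states.
    # Pairwise L2 distances between the states visited by the two agents.
    dists = np.linalg.norm(states_a[:, None, :] - states_b[None, :, :], axis=-1)
    # A is penalized wherever it came within eps of a state B also reached;
    # B is rewarded for reaching states that A visited.
    a_near_b = dists.min(axis=1) < eps
    b_near_a = dists.min(axis=0) < eps
    return rewards_a - bonus * a_near_b, rewards_b + bonus * b_near_a

In the method as reviewed, such relabeled terms would sit on top of the sparse task reward inside the replay buffer of an off-policy learner (e.g., DDPG combined with HER), which is why the rebuttals above note that on-policy baselines such as PPO are harder to combine with the relabeling.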
BJej72AqF7 | A Max-Affine Spline Perspective of Recurrent Neural Networks | [
"Zichao Wang",
"Randall Balestriero",
"Richard Baraniuk"
] | We develop a framework for understanding and improving recurrent neural networks (RNNs) using max-affine spline operators (MASOs). We prove that RNNs using piecewise affine and convex nonlinearities can be written as a simple piecewise affine spline operator. The resulting representation provides several new perspectives for analyzing RNNs, three of which we study in this paper. First, we show that an RNN internally partitions the input space during training and that it builds up the partition through time. Second, we show that the affine slope parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input. Third, by carefully examining the MASO RNN affine mapping, we prove that using a random initial hidden state corresponds to an explicit L2 regularization of the affine parameters, which can mollify exploding gradients and improve generalization. Extensive experiments on several datasets of various modalities demonstrate and validate each of the above conclusions. In particular, using a random initial hidden state elevates simple RNNs to near state-of-the-art performers on these datasets. | [
"RNN",
"max-affine spline operators"
] | https://openreview.net/pdf?id=BJej72AqF7 | https://openreview.net/forum?id=BJej72AqF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkxLwipIl4",
"SkePFUZVam",
"S1eTiBWVpm",
"HyeMiQWEaQ",
"B1eby2B5n7",
"B1e0FnM93X",
"r1e_41DDhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545161566051,
1541834366567,
1541834149381,
1541833626260,
1541196760813,
1541184646084,
1541005104161
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1390/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1390/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1390/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1390/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1390/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1390/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1390/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"While the reformulation of RNNs is not practical as it is missing sigmoids and tanhs that are common in LSTMs it does provide an interesting analysis of traditional RNNs and a technique that's novel for many in the ICLR community.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Reasonable strong theory paper\"}",
"{\"title\": \"We have made significant improvements to notations.\", \"comment\": \"We thank the reviewer for their careful reading and constructive suggestions. We agree that the MASO framework sheds new light on the inner workings of RNNs. We have made significant simplifications and revisions to the mathematical notation, particularly in Sections 1.1, 1.2, and 2, that should address most of your concerns. Below we respond to your specific questions.\\n\\na) We removed the exponents \\\\ell in Section 1.2. The reason for not using S in the remainder of the paper is that each operation in an RNN cell is a MASO S, e.g., we could have written an RNN cell operation as z_t = S_cell ( x , z_{t-1} ) = S_sigma ( S_W * x + S_z_{t-1} * z_{t-1} + b ), but this would make the notation a bit more confusing. Therefore, we only use the notation S to introduce the definition of MASO, and omit it in the remainder of the paper. \\n\\nb) Implicitly, yes, Q is dependent on the affine parameters A and B of the MASO and the region in which the input x belongs. Here is a bit more detail on Q: given the parameters A and B and the input x, the MASO calculates the output through the internal maximization mechanism of the max-affine splines (see Eqs. 4 and 5 of the updated paper). This process infers (for each output dimension k) the region $r_k$ in which the input x belongs to, and adapts the rows of A and entries of B (of the affine mapping) accordingly. This process is highlighted in the paper through the tensor Q, in which the region inferred by the max-affine splines are stored as one-hot vectors. Stacking these region selection vectors for all output dimensions (all max-affine splines) row-wise, we obtain the partition section matrix Q. We have added a discussion about Q in Section 1.2 and revised Section 3 to make the explanation much cleaner.\\n\\nc) We have included the notation for A_\\\\sigma in Proposition 1.\\n\\nd) We agree that the bracket notation is nonideal. The notation A[z]z is intended to indicate that the matrix A depends on the value of z (actually the partition region into which z falls). In an attempt to clarify the notation, in our revised paper, we use brackets strictly to denote matrix/vector value selection or concatenation. For example, [x]_k denotes the value of the k-th entry of the vector x, and [x_1, \\u2026, x_n] denotes the concatenation of the vectors x_1, \\u2026, x_n. Accordingly, we have omitted the input-dependency of the affine parameters. Instead, we make a note on page 3 to remind the reader that all affine parameters are input-dependent even though they are not explicitly written as such.\\n\\ne) We have added a footnote in the statement of Proposition 1 to reflect that \\\\sigma is assumed to be piecewise affine and convex.\\n\\nf) Yes, we have unified our notation and now both a layer of an RNN and the overall RNN are referred to as a \\u201cpiecewise affine spline operator\\u201d in their MASO formulation.\\n\\ng) \\u201cf\\u201d here denotes the RNN function, where the input is the concatenated input sequence and the output is the concatenated hidden states at the last layer. We have removed \\u201cf\\u201d in Theorem 2 to make it cleaner.\\n\\nh) We have made significant revisions and simplifications to the notation that hopefully improve the flow of the paper. 
Since Section 4 is an important section that contains the matched filterbank view of an RNN, we have kept this section in the main text.\\n\\ni) We have added a short overview of our contributions, including the noisy initial hidden state, in the second paragraph of the Introduction.\\n\\nPlease let us know if the above address your concerns and if you have further inquiries.\"}",
"{\"title\": \"Addressing concerns on limited applications of the MASO perspective of RNNs\", \"comment\": \"We thank the reviewer for their constructive comments and suggested edits. We address each of them below.\\n\\n1) Lack of application of the MASO formulation: \\nIn addition to improving the performance of RNNs using our suggestion of a noisy initial hidden state, our paper provides two additional insights/applications: (i) visualizing the progression through time of the RNN MASO input space partitioning and (ii) interpreting an RNN as a template matching machine (matched filterbank). These two applications are detailed in Sections 3 and 4; they provide new ways to visualize and interpret RNNs that complement related prior work on RNN visualization and interpretation. \\n\\nFuture research directions and applications include the following, which have been added to the Conclusions of the paper (see Section 6). We can study whether enforcing an orthogonality constraint on the slope parameter A improves RNN performance, similar to what has been observed in [1] for deep feedforward networks. We can use the recently developed random matrix theory of deep learning [2] to analyze the affine slope parameter A (e.g., study how the distribution of its singular values changes during training) to analyze the implicit regularization that the optimizer performs when training RNNs. \\n\\n2) Limitation of the analysis to convex activation functions: \\nFirst, we acknowledge that focusing on piecewise affine and convex nonlinearities in RNNs might be limiting, since more elaborate models like LSTM and GRU use sigmoid and hyperbolic tangent activations. Nevertheless, having a solid understanding piecewise affine and convex nonlinearities in RNNs will guide subsequent theoretical development on other nonlinearities used in RNNs. Moreover, ReLU RNNs have recently gained considerable attention due to their simplicity, competitive performance, and ability to combat the exploding gradient problem provided they are parametrized and initialized properly. We have added a concise discussion in the third paragraph of the Introduction about ReLU RNNs to provide additional motivation for our work. In a future work direction, we expect that we can extend our convex/affine analysis to non-convex nonlinearities like the sigmoid and hyperbolic tangent by leveraging the development of the recent paper [3], which extend the MASO framework to more general nonlinearities. This development, however, is beyond the scope (and available space) of the current paper.\\n\\nPlease let us know if the above address your concerns and if you have further inquiries. \\n\\n[1] Mad Max: Affine Spline Insights into Deep Learning (Balestriero and Baraniuk, 2018), https://arxiv.org/abs/1805.06576\\n[2] Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning (Martin and Mahoney, 2018), https://arxiv.org/abs/1810.01075\\n[3] From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference (Balestriero and Baraniuk, 2018), https://arxiv.org/abs/1810.09274\"}",
"{\"title\": \"Addressing concerns on experiments and datasets\", \"comment\": \"We thank the reviewer for their constructive comments. First, all the typos have been corrected in the updated manuscript. Second, we have made significant simplifications to the mathematical notation in Sections 1 and 2 that improve the clarity of presentation. We address the remaining concerns below.\\n \\n1) Regarding the seemingly insufficient experimental evaluation:\", \"we_actually_evaluated_the_use_of_the_noise_in_the_initial_hidden_state_on_not_one_but_four_datasets_of_four_different_modalities\": \"simulated toy data (artificial), MNIST (imagery), SST-2 (text), and bird detection (audio). Our goal (which was achieved) was to demonstrate that, for simple RNNs, injecting noise into initial hidden state improves performance for all four modalities. We present additional successful experimental results in Appendix F of the Supplementary Material; we could not include these in the main text due to space limitations. Our experiments on these four datasets/modalities provide strong evidence on the utility of the noisy initial hidden state. Additional results/visualizations of the input space partitioning and matched filtering are included in Appendices D and E, respectively.\\n\\nFor the exploratory experiments (last part in Section 5.2), we have added experimental results on MNIST and permuted MNIST datasets using a one-layer GRU that similarly demonstrates the potential gain in classification accuracy when using noisy initial hidden state in more complex models where nonlinearities are no longer piecewise affine and convex. \\n\\n2) Regarding the Bird Detection Dataset being not well benchmarked: \\nThis dataset is, in fact, well benchmarked; perhaps we failed to make it clear in the main text. Please see this website for the task description (http://machine-listening.eecs.qmul.ac.uk/bird-audio-detection-challenge) and this website for a list of benchmarks (http://c4dm.eecs.qmul.ac.uk/events/badchallenge_results). We have included the link to the benchmarks in the main text; see the new footnote on page 8. \\n\\nPlease let us know if the above address your concerns or if you have further inquiries.\"}",
"{\"title\": \"Good quality paper, experimentation can be improved\", \"review\": \"The paper rewrites equations of Elman RNN in terms of so-called max-affine spline operators. Paper claims that this reformulation allows better analysis and, in particular, gives an insight to use initial state noise to regularize hidden states and fight exploding gradients problem.\\n\\nThe paper seems to be theoretically sound. The experiment with sequential MNIST looks very promising, thought it would be great to check this method on other datasets (perhaps, toy data) to check that this is not a fluke. The bird audio dataset is not well benchmarked in the literature. The paper could make much stronger claim with more extensive experimentation.\", \"some_typos\": [\"p3: an simple -> a simple\", \"Figure 2 caption is not finished\", \"p5 last paragraph: extra full stop\", \"Fig 3: correct and negative probably switched around\", \"p7: in regularize -> in regularization\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting view point, but with limited applications\", \"review\": \"In this paper, the authors provide a novel approach towards understanding\\nRNNs using max-affine spline operators (MASO). Specifically, they rewrite RNNs\\nwith piecewise affine and convex activations MASOs and provide some\\nexplanation to the use of noisy initial hidden state. \\n\\nThe paper can be improved in presentation. More high level explanation should\\nbe given on MASOs and why this new view of RNN is better. \\n\\nTo best of my knowledge, this is the first paper that related RNNs with MASOs\\nand provides insights on this re-formulation. However, the authors failed to\\nfind more useful applications of this new formulation other than finding that\\nnoisy initial hidden state helps in regularization. Also, the re-formulation\\nis restricted to piecewise affine and convex activation functions (Relu and\\nleaky-Relu). \\n\\nIn general, I think this is an original work providing interesting viewing\\npoint, but could be further improved if the authors find more applications of\\nthe MASO form.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very promising paper, in particular regarding applications, yet I found that heavy notation not so well explained made it hard to read\", \"review\": \"This paper builds upon recent work by Balestriero and Baraniuk (ICML 2018) that concern max-affine spline opertaor (MASO) interpretation of a substantial class of deep networks. In the new paper a special focus is put on Recurrent Neural Networks (RNNs), and it is highlighted based on theoretical considerations leveraging the MASO and numerical experiments that in the case of a piecewise affine and convex activation function, using noise in initial hidden state acts as regularization.\\nOverall I was impressed by the volume of contributions presented throughout the paper and also I very muched like the light shed on important classes of models that turn out to be not as black box as they could seem. My enthouasiasm was somehow tempered when discovering that the MASO modelling here was in fact a special case of Balestriero and Baraniuk (ICML 2018), but it seems that despite this the specific contribution is well motivated and justified, especially regarding application results. Yet, the other thing that has annoyed me and is causing me to only moderately champion the paper so far is that I found the notation heavy, not always well introduced nor explained, and while I believe that the authors have a clear understanding of things, it appears to me that the the opening sections 1 and 2 lack notation and/or conceptual clarity, making the paper hard to accept without additional care. To take a few examples:\\na) In equation (3), the exponent (\\\\ell) in A and B is not discussed. On a different level, the term \\\"S\\\" is used here but doesn't seem to be employed much in next instances of MASOs...why? \\nb) In equation (4), sure you can write a max as a sum with an approxiate indicator (modulo unicity I guess) but then what is called Q^{(\\\\ell)} here becomes a function of A^{(\\\\ell)}, B^{(\\\\ell)}, z^{(\\\\ell-1)}...?\\nc) In proposition 1, the notation A_sigma is not introduced. Of course, there is a notation table later but this would help (to preserve the flow and sometimes clarify things) to introduce notations upon first usage...\\nd) Still in prop 1, braket notation not so easy to grasp. What is A[z]z? \\ne) Still in prop 1, recall that sigma is assumed piecewise-linear and convex? \\nf) In th1, abusive to say that the layer \\\"is\\\" a mapping, isn't it? \\ng) In Theorem 2, what is f? A generic term for a deterministic function? \\nAlso, below the Theorem, \\\"affine\\\" or \\\"piecewise affine\\\"? \\nh) I found section 4 somehow disconnected and flow-breaking. Put in appendix and use space to better explain the rest? \\ni) Section 5 is a strong and original bit, it seems. Should be put more to the fore in abstract/intro/conclusion?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
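To make the discussion of the selection matrix Q in the BJej72AqF7 responses above concrete: a MASO computes, for each output unit k, a max over R affine mappings, and the per-unit argmax indices form the one-hot partition-selection matrix Q. Below is a minimal numpy sketch under assumed shapes (A: K x R x D slopes, B: K x R offsets, float arrays); it illustrates the mechanism only and is not the authors' code.

import numpy as np

def maso_forward(A, B, x):
    # A: (K, R, D), B: (K, R), x: (D,). Returns z: (K,) and one-hot Q: (K, R).
    scores = np.einsum('krd,d->kr', A, x) + B   # value of every affine piece
    r_star = scores.argmax(axis=1)              # region inferred per output unit
    Q = np.zeros_like(B)                        # partition-selection matrix
    Q[np.arange(B.shape[0]), r_star] = 1.0      # one-hot region choice per unit
    z = scores.max(axis=1)                      # max-affine spline output
    return z, Q

# ReLU is the special case R = 2 with one affine piece pinned to zero:
# A = np.stack([W, np.zeros_like(W)], axis=1), B = np.stack([b, np.zeros_like(b)], axis=1)
# gives z = np.maximum(W @ x + b, 0), matching the paper's piecewise affine, convex setting.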
Hyls7h05FQ | A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax | [
"Fenfei Guo",
"Mohit Iyyer",
"Leah Findlater",
"Jordan Boyd-Graber"
] | We present a differentiable multi-prototype word representation model that disentangles senses of polysemous words and produces meaningful sense-specific embeddings without external resources. It jointly learns how to disambiguate senses given local context and how to represent senses using hard attention. Unlike previous multi-prototype models, our model approximates discrete sense selection in a differentiable manner via a modified Gumbel softmax. We also propose a novel human evaluation task that quantitatively measures (1) how meaningful the learned sense groups are to humans and (2) how well the model is able to disambiguate senses given a context sentence. Our model outperforms competing approaches on both human evaluations and multiple word similarity tasks. | [
"unsupervised representation learning",
"sense embedding",
"word sense disambiguation",
"human evaluation"
] | https://openreview.net/pdf?id=Hyls7h05FQ | https://openreview.net/forum?id=Hyls7h05FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xd0pDxe4",
"SJlEps7zkN",
"S1exdsXGkE",
"rygvBxcJyN",
"ryl5e-HnRX",
"Hkxssn3oRX",
"r1eqzQYw07",
"SkgpFMYv0m",
"r1xGmzKvAQ",
"r1xVV1VR2m",
"BkgDZsDThQ",
"BygDQWP537"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544744399711,
1543809980072,
1543809895995,
1543639103216,
1543422193920,
1543388322681,
1543111442119,
1543111301183,
1543111194497,
1541451563936,
1541401343481,
1541202207248
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1389/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1389/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1389/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1389/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1389/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1389/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"\", \"pros\": [\"High quality evaluation across different benchmarks, plus human eval\", \"The paper is well written (though one could quibble about the motivation for the method, see Cons)\"], \"cons\": [\"The approach is incremental, the main contribution is replacing marginalization or RL with G-S. G-S has already been studied in the context of VAEs with categorical latent variables, i.e. very similar models.\", \"The main technical novelty is varying amount of added noise (i.e. downscaling Gumbel noise). In principle, the Gumbel relaxation is not needed here as exact marginalization can be done (as) effectively. Unlike the standard strategy used to make discrete r.v. tractable in complex models, samples from G-S are not used in this work to weight input to the 'decoder' (thus avoiding expensive marginalization) but to weight terms corresponding to reconstruction from individual latent states (in constract, e.g., to SkimRNN of Seo et al (ICLR 2018)). Presumably adding noise to softmax helps to force sharpness on the posteriors (~ argmax in previous work) and stochasticity may also help exploration.\", \"(Given the above, \\\"to preserve differentiability and circumvent the difficulties in training with reinforcement learning, we apply the reparameterization trick with Gumbel softmax\\\" seems slightly misleading)\", \"With contextualized embeddings, which are sense-disambiguated given the context, learning discrete senses (which are anyway only coarse approximations of reality) is less practically important\", \"Two reviewers are somewhat lukewarm (weak accept) about the paper (limited novelty), whereas one reviewer is considerably more positive. I do not believe that the reviews diverge in any factual information though.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"word sense induction with Gumbel-Softmax\"}",
"{\"title\": \"Address the commenter's question 1\", \"comment\": \"We thank the commenter for their interest in our paper and the valuable comments! We address the concerns as follows:\\n\\n1) Model assumptions\\n\\nIf we understand correctly, the commenter worries that the implied assumption w->s->c (first select a sense s from word w, then generate the context c) of our model is flawed, because they think w and c are not independent given s. They provide an example to show when w and c should not be independent given s. They suggest that we should model \\u201cthe collocation of senses\\u201d (Qiu et al. 2016) instead of \\u201cthe collocation of words and sense\\u201d like Li & Jurafsky (2015) to address this concern. \\n\\nFirst, we\\u2019d like to argue that no model assumption is perfect (https://en.wikipedia.org/wiki/All_models_are_wrong), for example, both Bag-of-Word model and LDA are flawed language model, it doesn\\u2019t mean that they\\u2019re not useful. Without mentioning what consequences our model assumptions would cause in training, testing, or evaluation, the commenter's concern seems to be purely cognitive/theoretical. Nevertheless, we're happy to discuss with the commenter about it\\n\\nWe think that the commenter may have misunderstood a few points and the assumption w->s->c is not a concern. The conditional independence of w and c given sense could hold based on our assumption (we discuss using the commenter\\u2019s example in 1.2 below). \\n\\n\\nMoreover, the evaluations demonstrate that our model is able to learn distinguishable senses and succeeds in both similarity-based and human evaluations. We agree that the model approach of Qiu et al. (2016) also makes sense, but it is not necessarily better than ours, as shown in our evaluation where ours outperform theirs (Table 1).\", \"in_detail\": \"1.1 The commenter may have an inaccurate understanding about our premise.\\n\\nWe\\u2019d like to reassure the commenter that our model does not share senses among words, like *all* multi-prototype models. Also, the premise is not to learn \\u201csame\\u201d embeddings for words that have the same sense. It is to learn similar embeddings for certain senses of words that are synonyms (share similar contexts). Two words may be synonyms, but they won\\u2019t have identical senses. \\n\\n1.2 The commenter worries \\u201cc is not independent of w given s\\u201d, if we understand correctly, they think \\u201cnot sharing senses\\u201d could be a solution but not quite. \\n\\nHowever, no matter the senses are shared across words or not, we can have c independent of w given s. \\n \\nA. When not sharing senses (our case),\\n\\n P(w_i | s_i^k) = 1, i.e., given a specific sense of a word, the surface word type is fixed, therefore P(w, c|s) = P(w|s)*p(c|s) = P(c|s).\\n\\n>>>\\nFor example, \\\"guy\\\" and \\\"man\\\" are synonym, but one is more casual and the other is more formal. Hence, despite the same sense, different realization would cause the contexts to be more (or less) formal.\\n<<<\\n\\nIn this example, \\u201cguy\\u201d and \\u201cman\\u201d won\\u2019t be associated to the same sense vector. One sense of \\u201cguy\\u201d s_{guy}^i and one sense of \\u201cman\\u201d s_{man}^j may be similar but are two separate embeddings, which would have distinct (perhaps close) P(c|s_{guy}^i) and P(c| s_{man}^j). Ideally, P(c|s_{guy}^i) would result in more casual expression while P(c| s_{man}^j) results in formal expression. 
Therefore, this case won\\u2019t result in the commenter\\u2019s concern.\\n\\n B. When senses are shared among words\\n\\nIn the commenter's example: it is possible that senses are divided to s_{man_casual}, s_{man_formal} and s_others, once we observe s_{man_casual}, we know that the context should be casual rather than formal no matter it\\u2019s generated by \\u201cman\\u201d or \\u201cguy\\u201d. \\n\\nSurely the surface word type eventually affect its context, in the way that \\u201cguy\\u201d may have a higher probability to generate s_{man_casual} while \\u201cman\\u201d may have a higher probability to generate s_{man_formal}.\\n\\nAlthough \\u201cman\\u201d and \\u201cguy\\u201d are synonyms, they will have different distribution: P(s_i| guy) and P(s_i|man).\\n\\nIdeally, \\nP(s_{man_casual}|guy) > P(s_{man_formal}|guy),\\n and \\nP(s_{man_casual}|man) < P(s_{man_formal}|man). \\n\\nIf they have the same distribution, it means that the usage of \\u201cguy\\u201d and \\u201cman\\u201d are identical, which contradicts the commenter\\u2019s statement \\u201cone is more casual and the other is more formal\\u201d.\"}",
"{\"title\": \"Address the commenter's question 2, 3 and 4\", \"comment\": \"2) Approximation in Eq. (5)\\n\\nWe add the approximation notation because we estimate the sense disambiguation distribution with local context instead of using the global word sense distribution (Tian et al. 2014). We assume that we can disambiguate senses based on local context. The same assumption is adopted by Neelakantan et al. (2014), Li and Jurafsky (2015), Lee and Chen (2017), etc. Moreover, if we consider the original Skip-Gram, each word-context pair counted is in a fixed context window, the local context is given, the left side is also implicitly conditioned on the local context within the window. We will revise the notation to make this clear.\\n\\n\\n3) Parameters\\n\\nThanks for pointing out this. We do not claim in the paper that learning two sets of parameters is our contribution. In fact, several previous works apply the same parameter setting, such as Tian et al. (2014), Neelakantan et al. (2014) [the additional centers are computed with context parameters] and Li and Jurafsky (2015). Despite the same parameter setting, similar to previous work, our novelty lies in *different* mechanisms to disambiguate and select senses. We highlight the # of parameters in responses mainly to differentiate our work from Lee and Chen (2017), mentioning the parameter setting is also the basis in discussing sense disambiguation and sense selection mechanism. \\n\\n\\n4) Analysis of estimator\\n\\n>>>\\nIf the gumbel softmax estimation is an important novelty to this paper, the analysis of the estimator should be shown instead of the end-to-end performance. \\n<<<\\n\\nWe thank the commenter for this insight! We agree that Gumbel Softmax (GS) and RL approach have their advantages/disadvantages. However, the focus of our paper is not a direct comparison of GS and RL but rather to develop an efficient but effective sense embedding model and to answer \\u201cwhat are good sense embeddings\\u201d. The goal is to learn sense vectors that both capture semantics and are distinguishable to human. Thus, with limited space, following previous work, we focus on evaluating the quality of the learned embeddings and propose a new human evaluation method to exam the ability of the proposed model in learning human distinguishable senses. Moreover, in order to show the benefit of the proposed scaled GS, we perform the ablation study by comparing SASI, GASI and GASI-\\\\beta.\"}",
"{\"comment\": \"This is a nice work that extends current works to gumbel softmax and provides new experiment settings. However, I have some concerns based on my understanding. I hope that they can be answered by the author.\\n\\n1. First of all, I have a methodological concern about the central assumption of the method Eq. (5). If I understand it correctly, the authors do not mention the conditional independence assumption (w -> s -> c) but instead using it directly. It suggests that the word w_i and the context words are conditionally independent given the underlying sense s^i_k. Isn't it true that the concrete realization of a sense (i.e., a word) drastically affect the context words? For example, \\\"guy\\\" and \\\"man\\\" are synonym, but one is more casual and the other is more formal. Hence, despite the same sense, different realization would cause the contexts to be more (or less) formal. One potential hope for this concern is that the sense of different words are not shared in your models, but given the premise is to learn embeddings of words such that the words with the same sense would have \\\"the same\\\" sense embeddings, such modeling framework does not solve this problem. In contrast, the recent trend to model the collocation of senses (Qiu et al., 2016, Lee & Chen, 2017) seems to avoid this problem. Why do the authors pursue an older approach (akin to (Li & Jurafsky, 2015)) that models the collocation of words and sense?\\n\\n2. Also, the second \\\"approximation\\\" in Eq. (5) is very weird, how can p(s^i_k|w_i, \\\\tilde{c}_i) be similar to p(s^i_k|w_i)? In the non-parametric sense, the author change the conditional table of (# of sense, # of words) for p(s^i_k|w_i) to (# of sense, # of words, # of contexts) for p(s^i_k|w_i, \\\\tilde{c}_i), while the # of contexts are exponential to the # of words. It suggests a exponential difference in the complexity for the \\\"approximation\\\". (Sorry for that I cannot follow the remaining part of the methodology quite well because the remaining methods highly depend on the above assumptions.)\\n\\n3. Why do the authors think modeling *two* sets of parameters (words and senses) as a novelty? Isn't it simply the conventional design as (Li & Jurafsky, 2015) that is different from more recent approaches that models a purely sense-based framework (Qiu et al., 2016, Lee & Chen, 2017)?\\n\\n4. The gumbel softmax is a nice estimator that enables differentiability, but suffer from an biased gradient. The RL approach used by (Lee & Chen, 2017) provides unbiased gradient but suffer from large variance. Both approaches have their advantages and disadvantages, so I'm interested in seeing an ablation study that analyze the difference of gradient estimator in this task in terms of the impact of gradient estimation on the learning process. If the gumbel softmax estimation is an important novelty to this paper, the analysis of the estimator should be shown instead of the end-to-end performance.\", \"title\": \"Some questions\"}",
"{\"title\": \"Will release the code and data soon\", \"comment\": \"We thank the reviewer for the new comments! We'll elaborate on the differences between our model with prior works with a little more detail in the next revision after the decision been made. And we'll release our code and data for the human evaluation as soon as possible.\"}",
"{\"title\": \"Complete experiments and analysis\", \"comment\": \"After seeing the author responses, my score is updated.\\n\\nThe revised paper includes more experiments (almost all I can think of) and detailed analysis.\\nThe difference between the proposed model and the prior work can be better elaborated for explicitly pointing out the novelty.\\nAlso, the improved performance seems convincing for the proposed model, and it will be better to see the published code for encouraging researchers to easily follow up this direction.\\n\\nTo sum up, this is a good paper that can motivate the following research in the related field.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for taking time to read our paper and the useful suggestions! We address the reviewer\\u2019s concerns and suggestions as follows:\\n\\n1) Additional evaluation with recently released WiC dataset\\n>>>\\nFinally, I suggest the authors remove the word-level similarity task and try the recently released Word in Context (WiC) dataset\\n<<<\\nWe thank the reviewer for their suggestion! We add an evaluation on the recently released WiC dataset in the revision. We focus the evaluation on the sense selection module of the model and classify the senses in an unsupervised fashion. Our model achieves the highest accuracy among competing models (Table 2), except for DeConf which is a supervised sense model that annotates senses on the same lexical resource (WordNet) that was used to build WiC. \\n\\nWe believe that the word similarity tasks demonstrate that each sense-specific embedding learned by our model captures good semantics in addition to better sense disambiguation ability. The high quality of each sense embeddings demonstrates the benefits of using Gumbel softmax. Therefore we decide to keep this evaluation in the revision, but it is more meaningful alongside the WiC results.\\n\\n2) Novelty\\n\\n>>>\\nThe only difference between the proposed one and Lee and Chen (2017) is Gumbel softmax instead of reinforcement learning between sense selection and representation learning modules.\\n<<<\\nWe appreciate that the reviewer noticed the similarity between our models with MUSE by Lee and Chen (2017), as both try to improve the sense selection module with hard attention. However, the overall structure of our model (Figure 1) is quite different, in addition to using Gumbel Softmax (GS) instead of RL for hard attention, we\\u2019d like to explain the differences on two key aspects:\", \"model_structure_and_parameters\": \"MUSE learns *four* sets of parameters: sense representations for target words U, collocation context representations V, and two additional matrix P, Q to estimate sense selection distribution for both target words and contexts (both target and context have multiple senses).\\nIn contrast, ours learns *two* sets of parameters (Section 3.1): sense representations for target words S and global context representations C. We use C to disambiguate senses of target words S instead of using additional parameters like in MUSE, which reduces the number of total parameters in our model. Furthermore, we update S and C in both the sense selection and context prediction modules, as these two modules are \\u201csymmetric\\u201d (predicting senses by context and predict context by word sense) and both help to capture the semantics in words. Moreover, similar to Neelakantan et al. (2014), we do not disambiguate senses for context words (one global vector per context word) to further reduce the parameter size.\", \"optimization_function\": \"We use the (scaled) GS instead of straight-through (scaled) GS to have a stronger error signal (update not only the senses that are chosen but also the ones that are not). To use a distribution instead of a one-hot selection and reduce the computational cost by negative sampling, we optimize the lower bound of the original negative sampling Skip-Gram objective with marginalization and Jensen's Inequality. Using straight-through (scaled) GS learns worse sense embeddings (lower word similarity score and human-model consistency) than (scaled) GS. 
Due to space limitations, we didn\\u2019t include the comparison in the paper. RL methods are similar to ST-GS since they also make a hard selection each time and update the selected senses but not others.\\n\\n>>>\\nthe idea from the proposed model is similar to Li and Jurafsky (2015), because the sense selection is not one-hot but a distribution.\\n<<<\\nLi and Jurafsky (2015) sample one-hot senses during the training with Chinese Restaurant Process (CRP) and model the CRP with a distribution; while we directly use the distribution and implement the standard skip-gram objective with marginalization over senses. \\n\\n3) Error analysis \\n\\n>>>\\nMoreover, there is no error analysis about the result on the proposed contextual word sense selection task, which may shed more light on the strength and weakness of the model. \\n<<<\\nWe appreciate the reviewer\\u2019s suggestion! We add the error analysis on the crowdsourced contextual word sense selection task in the revision (Section 6.2 Error Analysis).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for taking time to read our paper and the useful suggestions on improving our writing! We address the specific points from the reviewer as follows:\\n\\n>>>\\nIf \\\\beta=0, then we get SASI, right? How well does this perform on the non-contextual word similarity task? Also, on the crowdsourced evaluation? \\n<<<\\nWe thank the reviewer for this suggestion! We\\u2019ve added the results in Table 3 in the revision. SASI generally performs poorly on the word similarity tasks, so we focus our comparison between our main model GASI-beta with the baseline models given limited space. \\n\\n>>>\\nThe motivation for the hard attention/Gumbel softmax is to learn sense representations that are distinguishable. But do the experiments test this? \\n<<<\\n\\nOur crowdsourced contextual sense selection task evaluates this property. The raters need to distinguish between the learned senses in order to make a selection (Section 6.2, sense disambiguation and interpretability). We also add more detail to these experiments in the additional error analysis in the revision. \\n\\n>>>\\nThere's something strange about Eq 6. \\u2026\\u2026, While the motivation for the right hand side makes sense, the notation could use work.\\n<<<\\nWe address the notation issue in the revision.\\n\\n>>>\\nThe description of how the number of senses is pruned in section 3.1 seems to be a bit of a non sequitur.\\n<<<\\nWe thank the reviewer\\u2019s suggestion. Since it\\u2019s not the focus of our paper, in our revision we move the descriptions of pruning to the appendix.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for taking time to read our paper and the useful suggestions! We address the suggestions from the reviewer as follows:\\n\\n1\\uff09dynamic number of senses and evaluation on downstream applications. \\n\\nWe thank the reviewer for these two suggestions! The simplest way to model words that have more than 3 senses is to initialize all words with more senses and prune aggressively; we set K=3 mainly for purpose of comparison. We think both implementing a dynamic number of senses (e.g., by setting a threshold to split senses) and evaluating on end tasks are great ideas; given the limited space, we\\u2019ll address these in future work.\\n\\n2\\uff09benefits of differentiability\\n\\nIn addition to updating the sense selection module and context prediction module at the same time, full differentiability allows updates to flow to all senses, not only the ones chosen by the attention, which results in stronger error signals and better sense selection ability. While approximating hard attention still guarantees that the model will focus on specific senses so that each sense captures good semantics (Table 3) and is interpretable to humans (Section 6), the Gumbel-softmax trick helps to guarantee both with the original objective, and we don\\u2019t need additional parameters for the policy network in RL.\\n\\n>>>\\nFor example, the contrast with Lee and Chen (2017) seems to be only that of differentiability. \\n<<<\\nWe also contrast the sense selection module with Lee and Chen (2017) in the related work and Section 3.1. The overall structure of the two models are actually different, Lee and Chen (2017) learn *four* set of parameters while we learn *two*. Given limited space, we don\\u2019t elaborate further in our paper, but we discuss this with more details in our response to Reviewer 3. \\n\\n3) negative sampling and lower bound\\n\\nThe negative sampling (NS) is for reducing the computational cost. To still optimize our original objective while implementing NS, we deduce the lower bound by Jensen\\u2019s Inequality.\"}",
"{\"title\": \"Neat idea applying Gumbel-softmax to multi sense embeddings\", \"review\": \"The paper presents a method for deriving multi sense word embeddings. The key idea behind this method is to learn a sense embedding tensor using a skip-gram style training objective. The objective defines the probability of contexts marginalised over latent sense embeddings. The paper uses Gumbel-softmax reparametrization trick to approximate sampling from the discrete sense distributions. The method also uses a separate hyperparameter to help scale the dot product appropriately.\", \"strengths\": \"1. The technique is a well-motivated solution for a hard problem that builds on the skip-gram model for learning word embeddings.\\n2. A new manual evaluation approach for comparing sense induction approaches.\\n3. The empirical advance while relatively modest appears to be significant since the technique seems to yield better results than multiple baselines across a range of tasks.\", \"suggestions\": \"1. The number of senses is fixed to three. This is a bit arbitrary, even though it is following some precedence. I like the information in the appendix that shows how to handle cases when there are duplicate senses induced for words that dont have many senses. It would be useful to know how to handle the cases where a word can have more than three senses. Given that the authors have a way of pruning duplicate senses, it would have been interesting to try a few basic methods that select the number of senses per word dynamically. \\n\\n2. The evaluation includes word similarity task and crowdsourcing for sense intrusion and sense selection. These provide a measure of intrinsic quality of the sense based embeddings. However, as Li and Jurafsky (2015) point out, typically applications use more powerful models that use a wide context. It is not clear how these improvements to sense embeddings will translate in these settings. It would have been useful to have at least one or two end applications to illustrate this. \\n\\n\\n3. Given that the empirical gains are not quite consistent, I would encourage the authors to specifically argue why this particular method should be favoured over other existing methods. The related work discussion merely highlights methodological differences. For example, the contrast with Lee and Chen (2017) seems to be only that of differentiability. Is the claim that differentiability is desirable because this allows for fine tuning in applications? If this is the case then it will be nice to have this verified. \\n\\n4. The lower bound on the log likelihood objective is good but what are we supposed to take away from it? Is it that there is an interpretation that allows us to get away with negative sampling? \\n\\nOverall I like the paper. It presents an application of the Gumbel-softmax trick for sense embeddings induction and shows some empirical evidence for the usefulness of this idea, including some manual evaluation. \\n\\nI think the evaluation could be strengthened with some end applications and much crisper arguments on why the method is preferable over other methods that achieve comparable performance.\", \"references\": \"[Li and Jurafsky., EMNLP 2015] Do Multi-Sense Embeddings Improve Natural Language Understanding?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper, promising results\", \"review\": [\"Summary\", \"This paper extends the skipgram model using one vector per sense of a word. Based on this, the paper proposes two models for training sense embeddings: One where the word senses are marginalized out with attention over the senses, and the second where only the sense with highest value of attention contributes to the loss. For the latter case, the paper uses a variant of Gumbel softmax for training. The paper shows evaluations on benchmark datasets that shows that the Gumbel softmax based method is competitive or better than other methods. Via a crowdsourced evaluation, the paper shows that the method also produces human interpretable clusters.\", \"Review\", \"This paper is generally well written and presents a plausible solution for the problem of discovering senses in an unsupervised fashion.\", \"If \\\\beta=0, then we get SASI, right? How well does this perform on the non-contextual word similarity task? Also, on the crowd sourced evaluation? The motivation for the hard attention/Gumbel softmax is to learn sense representations that are distinguishable. But do the experiments test this?\", \"There's something strange about Eq 6. If I understand this correctly, \\\\tilde{c_i} is the context and c_j^i is the j^th context word. Then P(c_j^i | w, \\\\tilde{c_i}) should be 1 because the context is given, right? While the motivation for the right hand side makes sense, the notation could use work.\", \"The description of how the number of senses is pruned in section 3.1 seems to be a bit of a non sequitur. It is not clear whether this is used in the experiments and if so, how it compares. The appendix gives more details, but it seems a bit out of place even then because the evaluations don't seem to use it.\", \"Minor comments\", \"There are some places where the writing could be cleaned up.\", \"Eq 16 changes the notation for the sense embeddings and the context words from earlier, say Eq 12.\", \"Parenthetical citations would be more appropriate in some places Eg: above Eq 3, in footnote 3\", \"Page 6, above 6.2: Figure-Figure?\", \"Page 9, Agreement paragraph: hight -> highest\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper points out an important evaluation perspective, but the model architecture is incremental (limited novelty).\", \"review\": \"This paper proposes GASI to disambiguate different sense identities and learn sense representations given contextual information.\\nThe main idea is to use scaled Gumbel softmax as the sense selection method instead of soft or hard attention, which is the novelty and contribution of this paper.\\nIn addition, the authors proposed a new evaluation task, contextual word sense selection, which can be used to quantitatively evaluate the semantic meaningfulness of sense embeddings.\\nThe proposed model achieves comparable performance on traditional word/sense intrinsic evaluation and word intrusion test as previous models, while it outperforms baselines on the proposed contextual word sense selection task.\\n\\nWhile the scaled Gumbel softmax is the claimed novelty, it is more like an extension of the original MUSE model (Lee and Chen, 2017), which proposed the sense selection and representation learning modules for learning sense-level embeddings.\\nThe only difference between the proposed one and Lee and Chen (2017) is Gumbel softmax instead of reinforcement learning between sense selection and representation learning modules.\\nTherefore, the idea from the proposed model is similar to Li and Jurafsky (2015), because the sense selection is not one-hot but a distribution.\\nThe novelty of this paper is limited because the model is relatively incremental.\\n\\nFrom my perspective, the more influential contribution is that this paper points out the importance of evaluating sense selection capability, which is ignored by most prior work.\\nTherefore, I expect to see more detailed evaluation on the selection module of the model. \\nAlso, because the task of this paper is multi-sense embeddings, the traditional word similarity (without contexts) task seems unnecessary. \\nMoreover, there is no error analysis about the result on the proposed contextual word sense selection task, which may shed more light on the strength and weakness of the model. \\nFinally, I suggest the authors remove the word-level similarity task and try the recently released Word in Context (WiC) dataset, which is a binary classification task that determines whether the meaning of a word is different given two contexts.\\nIt would be better to see that GASI performs well on this task given its better sense selection module.\\n\\nOverall, the contribution is somewhat incremental and the evaluation/discussion should focus more on the sense selection module. \\nConsidering the issues mentioned above, I will expect better quality for an ICLR paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkgiX2Aqtm | PIE: Pseudo-Invertible Encoder | [
"Jan Jetze Beitler",
"Ivan Sosnovik",
"Arnold Smeulders"
] | We consider the problem of information compression from high dimensional data. Whereas many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression. We introduce a new class of likelihood-based autoencoders with a pseudo-bijective architecture, which we call Pseudo Invertible Encoders. We provide a theoretical explanation of their principles. We evaluate the Gaussian Pseudo Invertible Encoder on MNIST, where our model outperforms WAE and VAE in sharpness of the generated images. | [
"Invertible Mappings",
"Bijectives",
"Dimensionality reduction",
"Autoencoder"
] | https://openreview.net/pdf?id=SkgiX2Aqtm | https://openreview.net/forum?id=SkgiX2Aqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylwAFovLB",
"HylkOySLLB",
"rJlas_tnzN",
"S1xGzSv2G4",
"Hyxvo6Osf4",
"B1gy49uiMN",
"rJgFlgSKGE",
"HkgfHR4Yz4",
"H1lir6NKMV",
"r1grhsEFGE",
"H1lFblNFMN",
"SJlNGufOfN",
"HkxnemR6k4",
"SJl6C6od0X",
"Hye21tiOC7",
"HJgQj2qd0Q",
"BJg5ZZkp3Q",
"rJxyHtAn37",
"SkgeFa7c37",
"S1e7sluJjX",
"S1xo1vQAqX"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1568287182725,
1568194406785,
1547634853436,
1547625737814,
1547566495383,
1547565606613,
1547419633507,
1547419193601,
1547418946577,
1547418541331,
1547415553051,
1547343883559,
1544573683913,
1543187924757,
1543186660102,
1543183515185,
1541366018204,
1541364023260,
1541188984441,
1539436698571,
1539352290547
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1388/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1388/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1388/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1388/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1388/Authors"
],
[
"~Robin_Tibor_Schirrmeister1"
]
],
"structured_content_str": [
"{\"title\": \"Yes\", \"comment\": \"If it is invertible, then yes. But the details depend on the exact method you use.\"}",
"{\"comment\": \"My question is in general about invertible autoencoders. If we have an invertible autoencoder, does that mean we only need to train the encoder (and not the decoder) since the reconstruction can be obtained by inverting the encoder?\", \"title\": \"A general question about invertible auto-encoders\"}",
"{\"title\": \"g(z)=0 alternatives!\", \"comment\": \"You can choose g(z) to be a neural network or a pre-fixed analytical function.\\n\\nBy the way, feel free to contact us directly via email\"}",
"{\"comment\": \"setting g(z)=0 means PIE is doing denoising variational pseudo inverse\\nis there better alternative other than setting g(z)=0?\", \"title\": \"g(z)=0 alternatives?\"}",
"{\"title\": \"Good!\", \"comment\": \"You are welcome!\"}",
"{\"comment\": \"I tested on CIFAR-10, it works like denoising:\", \"in_early_training\": \"the recovered (inverted feature) images have black parts (the residual, torch.zeros_like(r), not fully trained)\", \"in_later_epochs\": \"replacing residual output with torch.zeros_like(r), then invert back to image, does image completion (black parts before are filled with some pixels), not 100% accurate, but plausible! hence \\\"pseudo inverse\\\"\\n\\nThank you for your time to explain!\", \"title\": \"It works!\"}",
"{\"title\": \"eps is a fixed constant!\", \"comment\": \"eps is a constant during optimization.\\ndim(g(z)) = dim(r), \\nso g(z) = torch.zeros_like(r)\"}",
"{\"comment\": \"So eps = constant?\\nAlso since g(z)=0, is it g(z)=torch.zeros_like(z) or g(z)=z-z\\nThanks again!\", \"title\": \"So eps is a fixed constant?\"}",
"{\"title\": \"eps!\", \"comment\": \"eps is a scalar hyper-parameter of the method.\\nIs it introduced in Eq. 10\\nIts role is discussed after Eq. 17\\nIn experiment 5.1 we demonstrate how it affects the process of the encoding.\"}",
"{\"comment\": \"how do I find\\neps?\", \"title\": \"eps and g(z)?\"}",
"{\"title\": \"Implementation!\", \"comment\": \"Hello\\n\\n>> How do I implement r mapping to Normal?\\nIn order to train PIE, one maximises the function in Eq. 13\\nSo r ~ N(g(z), eps^2) if |r - g(z)|_2^2 = 0\\n\\n>> What is function g(z) to parameterise mu, is it just a Linear layer from d dim to D-d dim?\\ng(z) could be any differentiable function from d to D-d\\nIn our experiments we use g(z) = 0\\n\\n>> If g(z) is Linear layer with input_dim =d and output_dim=D-d ,\\n>> is the objective to minimize is r_loss = 0.5* -(g(z).mean() - 0.001)).mean()\\nNo. It is r_loss = - ((r - g(z))**2).sum() / (2 * eps**2)\"}",
"{\"comment\": \"Original bijective net is\\nF(x)=z\\nF_inv(z)=x\", \"pie_net_is\": \"F(x)=[z;r] where r~Normal(mu=g(z), sd= <<1 )\\nF_inv(F(x)) = [z;g(z)]\\n\\nHow do I implement r mapping to Normal?\\n\\nWhat is function g(z) to parameterise mu, is it just a Linear layer from d dim to D-d dim?\\n\\nIf g(z) is Linear layer with input_dim =d and output_dim=D-d ,\\nis the objective to minimize is r_loss = 0.5* -(g(z).mean() - 0.001)).mean()\\nThanks in advance\", \"title\": \"Implementation?\"}",
"{\"metareview\": \"The presented approach demonstrates an invertible architecture for auto-encoding, which demonstrates improvements in performance relative to VAE and WAE's on MNIST.\", \"pros\": [\"R3: The idea of pseudo-inversion is interesting.\", \"R3: Manuscript is clear.\"], \"cons\": [\"R1,2,3: Additional experiments needed on CIFAR, ImageNet, others.\", \"R1: Presentation unclear. Authors have not made any apparent attempt to improve the clarity of the manuscript, though they make their point that the method allows dimensionality reduction in their response.\", \"R1, R2: Main advantages not clear.\", \"R3: Text could be compressed further to allow room for additional experiments.\", \"Reviewers lean reject, and authors have not updated experiments. Authors are encouraged to continue to improve the work.\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"New approach for an Invertible Architecture Autoencoder shows promise, but experiments incomplete.\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thank you for your detailed review!\\n\\nOn the one hand, flow-based models are tractable by design. They allow for data manipulation and their flexibility is mainly limited by the computational resources. However, these models do not allow one to compress the input data.\\n\\nOn the other hand, autoencoders provide a method for significant compression of the data.\\nNevertheless, the training of an autoencoder requires the minimization of a reconstruction error as one of the terms in a total loss function. \\nThe reconstruction error must be specified beforehand and is dependent on the stated requirements of the task.\\n\\nWhen the task is not defined in advance but the compression of the data is required, \\nthe above-listed methods cannot be used. \\n\\nThe proposed model (PIE) is tractable by design. Moreover, it allows for data compression without specifying the reconstruction error function. The most relevant components are learned from the data.\\n\\nThank you for the recommended papers. \\nWe will consider them during the next revision of the paper.\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for your review!\\n\\nFirst of all, we would like to emphasize the fact the PIE is an autoencoder and allows for dimensionality reduction.\\nWe refer to PIE as an Autoencoder as it performs the encoding of the data automatically. \\nIn contrast to previously published paper on invertible models, PIE allows for the compression of the data \\nand chooses the main nonlinear components of the input.\\nThe direct comparison to flow-based models such as Real NVP and NICE is not relevant in sense of compression, \\nas these models transform the distributions of variables while preserving the dimensionality. \\nIn section 3.4 we discuss the relation to flow-based models with multiscale architecture \\nand demonstrate that such models may be viewed as one of the configurations of PIEs.\\n\\nIn our paper, we start from a necessity of the dimensionality reduction and derive a\\ngeneral method for achieving this by using invertible models. The models studied in the experiments are chosen to be simple in order to demonstrate the difference between vanilla methods.\\nAs always, the proposed model could be used as a backbone for a more complicated setup, \\nbut it is out of the scope of the current paper.\\n\\nWe agree that the experimental part is limited. We will compare our model to a wider class of competitors during the next revision. Thank you for the recommended models to compare with.\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for your review!\\n\\nWe have changed Fig. 6 (b) so now it is clear. \\nIt is g(z) instead of 0. Fig. 6 (b) exactly matches Eq. 20.\\n \\nWe have also fixed the typos in the text.\\nWe agree that the experimental part is limited. \\nWe will conduct the experiments on other datasets during the next revision.\"}",
"{\"title\": \"Review\", \"review\": \"PIE extend NICE and Real NVP into situations which require having a smaller dimensionality of the latent variable (d) compared to the dimensionality of the observed variable (D), i.e. d < D. This is done by learning an extension function g(z) from R^d to R^{D-d} and then using the change of variables formula on x and [z, g(z)]. To model probabilistically the deterministic function g(z) is replaced by Normal distribution with mean g(z) and a small variance.\\n\\nPIE is used to build deep generative models and trained on the MNIST dataset. The authors show that the models learnt via PIE produce sharper samples than VAEs and Wasserstein autoencoders (WAEs). No comparison to real NVP is made, which should be the main baseline of comparison to answer the question of \\\"what is the advantage of having d < D?\\\". Further MNIST is no longer a good enough benchmark to evaluate deep generative models. Most representative work in this literature use CIFAR-10, downsampled Imagenet, or Imagenet at 256x256.\", \"this_work_falls_short_of_the_standards_of_iclr_in_a_few_ways\": \"1. The presentation is unclear. The explanation of the extension-restriction idea is overly complicated. Further, the paper does very little to properly contextualize this work in the literature. Real NVP and flow-based models are mentioned but the proposed technique is not compared to it. The authors say they \\\"introduce new class of likelihood-based Auto-Encoders\\\", but this is false as far as I understood. The technique is not even an autoencoder since a separate decoder is not trained, and is obtained by exactly inverting the encoder as in real NVP.\\n\\n2. The experiments are weak. The samples shown are of poor quality, and on a very simplisitic dataset (MNIST). The authors compare with vanilla VAEs, but ignore more recent improvements to VAE such as VAE-IAF, flow-based models, and also autoregressive models. A heuristic is used to measure sharpness and only used to compare against VAE and WAE. Since all these models allow likelihood evaluation, likelihoods should also have been compared.\\n\\n3. The technique itself is a small change over real NVP and it's not clear whether this change brings any improvements or provides any insights about generative modeling.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting model without thorough evaluations\", \"review\": \"In this paper, an invertible encoding method is proposed for learning latent representations and deep generators via inverting the encoder. The proposed method can be seen as an autoencoder without the need to learn the decoder. This can be computed by inverting the encoder. To the best of my knowledge the proposed method is novel and its building blocks are described adequately.\", \"my_main_questions_are_the_following\": \"What is the main advantage of this model? Does it make the problem of deep generative model learning tractable? If so, under what conditions?\\n\\nDiscussion of prior art and relevant methods is limited in the paper and it can be extended. The authors may want to consider discussing relevant work on invertible autoencoders (e.g., https://arxiv.org/pdf/1802.06869.pdf) and methods like https://openreview.net/pdf?id=ryj38zWRb which can be seen as symmetric to the proposed one in the sense that an encoderless autoencoder is learnt. \\n\\nThe experimental evaluation is limited. The authors should consider to compare their method with other relevant models such as those mentioned above as well as GANs and their variants. Experiments on other more complex real-world data (e.g., faces) are also needed in order to prove the merits of the proposed model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A nice paper but needs stronger experiment results\", \"review\": \"General:\\nIn general, this is a well-written paper and I feel pleasant to read the paper. The paper proposed a model named Pseudo Invertible Autoencoder(PIE) which combines invertible architecture and inference model.\", \"strength\": \"1. The explanation of the paper is very clear and consistent.\\n2. The idea is interesting. A lot of papers related to the inverse problem focus on perfect invertibility, but the author(s) emphasize the importance of invertible compression and relate PIE to the inference model.\", \"possible_improvements\": \"1. The experiments could have been more convincing: 1) The only competitors are VAE and WAE. 2)The only data set has been tested was MNIST data set. There are many great works mentioned in the paper and those works should also be compared in a way.\\n2. The content could be more compact so that more experiments can be shown to support the paper. It seems to me there is too much explanation to previous works in the paper. \\n3. The paper has 9 pages which exceed the suggestion a little bit.\\n4. I am not sure if the author(s) checked the grammar of the paper carefully. I found quite few typos in the paper. Page 3: 'Rather then' should be 'Rather than' and 'As we are interested' should be as 'As we are interested in'; Page 4: 'Can me' should be 'Can be'; Page 6: 'Better then' should be 'Better than'; Fig.6 (b): Should it be '0' or 'g(z)'?\", \"conclusion\": \"This is a good and clean paper in general. It explains the related work and presents PIE with necessary details. My biggest concern is that empirical validation(experiment) is poor. As a conclusion, I tend to vote for weak rejection.\", \"minor_suggestion\": \"Refer to the conference instead of arXiv if the paper was already published.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your comment!\\n\\n1) In Fig. 3 the green lines indicate the aggregation of the variables in objective function. \\nFor examples, the blue nodes of the 2nd layer which are discarded (r_k) \\nand the variables which will be further processed (z_k) are aggregated in a conditional distribution\\np(r_k| z_k) = \\\\delta(r_k - g_k(z_k)). \\nWe will try to change the scheme in order to avoid confusion.\\n\\n2.1) Why 0?\\nIn experiments with Gaussian PIE we used g(z) = 0. \\nTherefore, x = [z, 0] as it is indicated in Eq. 20. \\nWe will change Fig. 6 (b) in order to exactly match Eq. 20.\\n\\n2.2) To sample or not to sample?\\nIn our experiment we demonstrate the behaviour of function G, defined in Eq. 4.\\nThe operation of extension of the function is deterministic. \\n\\nWe conducted the experiments, where we used sampling from N(g(z), eps^2 I). \\nThe obtained images were visually close to those depicted in the current version of the paper.\\n\\n3) Thank you for providing us with interesting and useful papers. \\nWe will consider them during the next revision of the paper.\"}",
"{\"comment\": \"Hi, quite interesting paper!\\nFirst, I have some questions for understanding it correctly.\\n\\n1) In figure 3 c), what is the meaning of the green lines? From the text, I assumed g_k(z_k) should only be computed from those dimensions that will be processed further, correct? So in the case of the second layer, g_k(z_k) only from those 4 dimensions that will be processed further? So why are there green lines from all eight nodes of the second layer to the one node at the bottom? And what should this one node symbolize? r_k? Comparison between r_k and g_k(z_k)?\\n\\n2) What exactly is used in the generation of data/in the inversion for the values of the r's? In Figure 6 b) it looks as if you put in 0s? Shouldn't you put in g(z)? Or even sample from the gaussian with standard deviation eps_0 centered at g(z)?\", \"also_wanted_to_mention_two_possibly_related_works_for_your_consideration\": \"In our paper \\\"Training Generative Reversible Networks\\\" https://arxiv.org/abs/1806.01610 , we also use a reduced latent dimensionality, however without a rigorous mathematical motivation. In the parallel ICLR submission \\\"Analyzing Inverse Problems with Invertible Neural Network\\\" (https://openreview.net/forum?id=rJed6j0cKX) the authors experiment with a different kind of partitioning of the latent space.\", \"title\": \"Understanding Questions and related work\"}"
]
} |
|
rJ4qXnCqFX | Probabilistic Knowledge Graph Embeddings | [
"Farnood Salehi",
"Robert Bamler",
"Stephan Mandt"
] | We develop a probabilistic extension of state-of-the-art embedding models for link prediction in relational knowledge graphs. Knowledge graphs are collections of relational facts, where each fact states that a certain relation holds between two entities, such as people, places, or objects. We argue that knowledge graphs should be treated within a Bayesian framework because even large knowledge graphs typically contain only a few facts per entity, leading effectively to a small data problem where parameter uncertainty matters. We introduce a probabilistic reinterpretation of the DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) models and employ variational inference to estimate a lower bound on the marginal likelihood of the data. We find that the main benefit of the Bayesian approach is that it allows for efficient, gradient based optimization over hyperparameters, which would lead to divergences in a non-Bayesian treatment. Models with such learned hyperparameters improve over the state-of-the-art by a significant margin, as we demonstrate on several benchmarks. | [
"knowledge graph",
"variational inference",
"probabilistic models",
"representation learning"
] | https://openreview.net/pdf?id=rJ4qXnCqFX | https://openreview.net/forum?id=rJ4qXnCqFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgJWm8WlE",
"SkesolC-Am",
"SyxIO5TW0Q",
"r1l78tTb0m",
"rylFNAMQaQ",
"S1lKg0Iq3m",
"SyxbX4AFnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544803062941,
1542738083233,
1542736493997,
1542736202753,
1541774896869,
1541201393180,
1541166104959
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1387/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1387/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1387/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1387/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1387/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1387/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1387/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a Bayesian extension to existing knowledge base embedding methods (like DistMult and ComplEx), which is applied for for hyperparameter learning. While using Bayesian inference for for hyperparameter tuning for embedding methods is not generally novel, it has not been used in the context of knowledge graph modelling before. The paper could be strengthened by comparing the method to other strategies of hyperparameter selection to prove the significance of the advantage brought by the method.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Intersting idea but significance is not fully clear\"}",
"{\"title\": \"Thank you for your detailed comments!\", \"comment\": \"We respond to each of your questions below:\\n\\n> Sec 3 can be regarded as a knowledge base extension of [Bayesian treatments to\\n> embedding methods] with a different likelihood [...].\\n\\nWe agree with the reviewer\\u2019s summary of our theoretical contributions. In addition, we would like to stress our experimental contribution. Our method sets a new state of the art on a very competitive benchmark, while at the same time significantly reducing the cost for hyperparameter tuning.\\n\\n> [...] However, [...] they still use MAP estimations with tuned hyperparameters\\n> instead of variational distribution directly. This does not support the\\n> parameter uncertainty argument [...]. The hyperparameter tuning argument is\\n> well-supported by both theoretical analysis and experiments.\\n\\nWe will clarify the role of uncertainty. The final prediction does indeed not take uncertainty into account, as we discuss in the conclusions. The proposed hyperparameter optimization algorithm, however, only works because of a Bayesian treatment of the latent embedding vectors. As we mention in the paragraph above the algorithm box, a gradient based hyperparameter optimization that ignores posterior uncertainty would lead to divergent solutions.\\n\\nSpecifically, minimizing the loss function $L$ simultaneously over model parameters and hyperparameters would send the hyperparameters to infinity. This minimizes the loss if the model parameters are strictly zero, which is possible in a point estimated model but not when we attribute a nonzero uncertainty to each parameter.\\n\\n> [...] the parameter uncertainty issue hasn\\u2019t been well verified (Figure 3\\n> demonstrates the advantages of hyperparameter tuning instead of uncertainty\\n> in parameters).\\n\\nDue to space limitations, we defer the quantification of uncertainty to the appendix. Figure 4 shows the posterior uncertainty as a function of the frequency of an entity (or relation). It shows a clear correlation between infrequent entities and high posterior uncertainty.\\n\\n> Since the Bayesian treatment is general, such an improvement [as shown for\\n> the ComplEx and DistMult models] should [also] be found for other knowledge\\n> base embedding methods.\\n\\nIn our experiments, we focused on the DistMult model because of its simplicity, and on the ComplEx model because it is the current state of the art and because it is a particular hard benchmark for a new hyperparameter tuning method as a lot of effort has already been invested into tuning its hyperparameters, see [Kadlec et al., 2017] and [Lacroix et al., 2018].\\n\\nWe agree that the proposed method is more general and that it should be easily applicable to other knowledge graph embedding models. We think that the main advantage of our method will be to speed up the evaluation cycle when designing new models. Current records on the link prediction task are held by tensor factorization models that may seem surprisingly primitive given the difficulty of the task. Yet, without our method, it would be difficult to prove that a more involved model performs better in practice because one would have to compete with the large amount of expensive hyperparameter tuning that has already gone into existing models.\\n\\n> Time complexity is not analyzed (since Algorithm 1 requires re-train the models).\\n\\nOur method scales linearly in the number of hyperparameters. 
By comparison, the standard approach to hyperparameter optimization, i.e., grid search using a validation set, scales exponentially with the number of hyperparameters. It would be infeasible to perform grid search over individual hyperparameters for each entity and relation. \\n\\nWhen comparing our method to training without hyperparameter optimization, we obtain the following: the first and the last of the three training cycles in our method are the usual MAP estimation of the parameters. The second training cycle (variational EM) took less time than these two steps (due to the preinitialization).\\n\\n> Algorithm 1 is a one-EM-step approximation for optimizing the ELBO. [...]\\n> For example, what will happens if running line 4-10 for multiple times?\\n\\nRunning lines 4-10 multiple times would mean that we would alternate between variational EM (lines 4-9) and re-training with the learned hyperparameters (line 10). This would be wasteful since both parts of the algorithm optimize over the model parameters, just under different approximations (variational approximation for lines 4-9 vs. point estimate approximation for line 10). Returning to variational EM (lines 4-9) after executing line 10 would cause the algorithm to forget anything that it has learned on line 10. By contrast, the opposite direction preserves information. Lines 4-9 also optimize over the hyperparameters $\\\\lambda$, which are then used (and left unchanged) on line 10. This is why it is important to do variational EM (lines 4-9) first, before one re-trains with the learned hyperparameters (line 10).\"}",
"{\"title\": \"Thank you for your valuable feedback\", \"comment\": \"We respond to each question below:\\n\\n> Question: why is the ELBO (Eq. 6) not used anywhere in Algorithm 1?\\n\\nThank you for this question. In fact, line 7 in Algorithm 1 does contain an unbiased estimator of the ELBO (we will clarify this notation). We decided to write out the ELBO in terms of the loss function L in the algorithm box so as to stress that it is easy to retrofit the EM algorithm into an existing implementation of a model that just minimizes a loss L. \\n\\nIn detail, the ELBO (Eq. 6) is given by the expected log joint distribution minus the expected log variational distribution. The latter can be calculated analytically and leads to the $\\\\log \\\\sigma$ terms on line 7 (the entropy of q). We obtain an unbiased estimator of the (negative) expected log joint by injecting Gaussian noise with standard deviation $\\\\sigma$ into the loss function L.\\n\\n> The model does not mention a number of significantly more accurate models\\n> proposed in the literature, such as [1].\\n> [1] https://arxiv.org/abs/1707.01476\\n\\nWe present comparisons of our experimental results to the best baseline that we could find in the literature. The mentioned paper [1] reports results on the same four benchmark datasets that we also use (compare Tables 3 and 4 in [1] to Tables 1 and 2 in our paper). Only for one dataset (WN18), results reported in [1] are comparable to the baselines used in our paper. On all three other datasets, the baselines used in our paper perform substantially better (>5 percentage point improvement for Hits@10).\\n\\n> Furthermore, it seems to me that the point of the whole paper is finding\\n> efficient ways of estimating the hyperparameters efficiently. In that sense,\\n> [...] there are other methods [to estimating hyperparameters efficiently]\\n> that were not considered [...]\\n> [2] https://github.com/hyperopt/hyperopt\\n> [3] https://arxiv.org/abs/1703.04782\\n\\nThank you for pointing out these references. The library in [2] is a general purpose optimization library that takes an objective function and tries to find its maximum. It is specialized for searches over parameter spaces of nontrivial shape. This is not necessary in our hyperparameter optimization problem since the search space is a simple real-valued vector space. The documentation for [2] suggests that the library may in the future support Bayesian optimization. Bayesian optimization is, in principle, an alternative method for hyperparameter optimization, but it does not scale well to a large number of hyperparameters as it requires retraining the full model after every change to any hyperparameter. In contrast, variational EM optimizes hyperparameters and model parameters concurrently.\\n\\nThe work by [3] is orthogonal to our work. It proposes a learning algorithm for the hyperparameters of the optimizer (e.g., the learning rate) to improve the convergence rate of gradient descent. In contrast, our contribution optimizes over hyperparameters of the model, such as regularizers.\"}",
"{\"title\": \"Thank you very much for your insightful comments\", \"comment\": \"We respond to each of your questions below:\\n\\n> [...] While overall, the proposed method seems to indicate small improvement\\n> upon a very strong baseline, in some cases it\\u2019s very close [...]\\n\\nThank you for raising the point of a challenging baseline comparison. While the comparison is arguably close for the ComplEx model, improvements on the DistMult model are much more pronounced. Also, note that our proposed hyperparameter optimization method is much more efficient than the traditional grid search approach. We obtained our results from a single run of our method, whereas the baseline relied on the extensive hyperparameter search reported in [Lacroix et al., 2018]. We hope that our fast hyperparameter tuning method will speed up research on new knowledge base embedding models.\\n\\n> How do different initial hyperparameter values affect final performance?\\n\\nThe choice of hyperparameters in the first training phase (the \\\"pre-training\\\") is only used to find good initializations for the embedding vectors in the second phase (the variational EM). We ran experiments both with uniform initial $\\\\lambda$ and with initial $\\\\lambda$ proportional to the frequency and obtained indistinguishable performance at the end.\\n\\n> The authors claim that the improvement is most notable for entities with fewer\\n> training points, however, this is only investigated by using a balanced MRR [...]\\n\\nThank you for pointing this out. We will revise the paper with the following more carefully formulated claim: the role of uncertainty is most important for entities with few training points. This is confirmed by Figure 4 in the appendix, which plots the posterior uncertainty as a function of frequency.\\n\\n> Parameter uncertainty is not further handled in the paper, the final approach\\n> is a point estimate [...].\\n\\nWe will clarify the role of uncertainty in the proposed hyperparameter optimization strategy. The final prediction does indeed not take uncertainty into account, as we discuss in the conclusions. The proposed hyperparameter optimization algorithm, however, only works because of a Bayesian treatment of the latent embedding vectors. As we mention in the paragraph above the algorithm box, a gradient based hyperparameter optimization that ignores posterior uncertainty would lead to divergent solutions.\\n\\nSpecifically, minimizing the loss function $L$ simultaneously over model parameters and hyperparameters would send the hyperparameters to infinity. This minimizes the loss if the model parameters (=embedding vectors) are strictly zero, which is possible in a point estimated model but not in a model that attributes a nonzero uncertainty to each embedding vector. Technically, the optimization cannot diverge because the ELBO is bounded, see discussion below Eq. 6.\\n\\n> The author\\u2019s hypothesis is that a more flexible posterior approximation could\\n> solve this issue. No concrete numbers or further analysis are provided.\\n\\nWe will revise the conclusions and provide the following more concrete proposals. A more flexible posterior approximation opens up many new avenues of research. One option is to explicitly introduce a more flexible posterior approximation via a model based variational distribution. The challenge here is a trade-off between efficient inference and expressiveness of the variational model. 
Alternatively, the bound on the marginal likelihood can be improved by importance weighting, similar to [https://arxiv.org/abs/1808.09034]. This implicitly fits a more involved variational distribution (see Theorem 1 in that preprint).\\n\\n> How much additional computation is needed to achieve the reported results?\\n\\nOur approach is much cheaper than the standard approach for hyperparameter optimization, i.e., grid search using a validation set. The computational cost of grid search scales exponentially with the number of hyperparameters, and it would be infeasible to perform grid search over individual hyperparameters for each entity and relation. Our proposed method scales only linearly in the number of hyperparameters.\\n\\nWhen comparing our method to training without hyperparameter optimization, we obtain the following: the first and the last training cycles in our method are the usual MAP estimation of the parameters. The second training cycle (variational EM) took less time than these two steps (due to the preinitialization).\\n\\n> Would it be possible to group the entities in bins by frequencies\\n> (say 6-10 bins) [...] and run grid search over just 6-10 hyperparameters [...]?\\n\\nIt is possible, however, with two caveats. First, the optimal hyperparameter is not a strict function of the frequency, as can be seen by the spread in vertical direction of the data points in Figure 3a (note the log scale). Second, as grid search scales exponentially in the number of hyperparameters, even a grid search over only 6-10 hyperparameters would be much more expensive than our method, which involves only three training cycles.\"}",
"{\"title\": \"Probabilistic Knowledge Graph Embeddings - Review\", \"review\": \"Summary:\\nThe paper presents a probabilistic treatment of knowledge graph embeddings, motivating it in parameter uncertainty estimation and easier hyperparameter optimisation. The authors present density-based DistMult and ComplEx variants, where the posterior parameter distributions for entity and relation embeddings are approximated by diagonal Gaussians q_\\\\gamma. Variational EM is used to infer the variational parameters \\\\gamma as well as the per-entity/per-relation precision (\\\\lambda) hyperparameters. The training process proposed by the authors consists of three phases: (1) pretraining a MAP estimate that\\u2019s used as initial means of the posterior approximating Gaussians, (2) variational EM (see above) to find better hyperparameters and (3) another MAP training phase that uses the updated per-entity/per-relation hyperparameters. Finally, experimental results indicate a slight improvement in MRR and HITS@10 across FB and WN datasets.\", \"originality\": \"To the best of the reviewer\\u2019s knowledge, the presented approach is novel for knowledge graph embeddings.\", \"discussion\": \"While the task is relevant, it is unclear how significant the improvements are. While overall, the proposed method seems to indicate small improvement upon a very strong baseline, in some cases it\\u2019s very close (96.2 vs 96.4 HITS@10 on WN18, 36.4 vs 36\\n.5 MRR on FB15K237), or worse (85.8 vs 85.4 MRR on FB15K). \\nIt is unclear how adequate some details in the experimental setup are for verifying the main hyperparameter optimization claim. In particular, what is \\u201ca reasonable choice of hyperparameters\\u201d in the first training phase? From figure 3b it seems the initial lambda\\u2019s are set proportionately to the frequency, as in the baseline. Are the initial hyperparameter values in EM set the same as the hyperparameter values used for MAP in the reported results? If the claim is to optimize hyperparameters, shouldn\\u2019t their initial values be set as uninformed as possible? How do different initial hyperparameter values affect final performance?\\nThe authors claim that the improvement is most notable for entities with fewer training points, however, this is only investigated by using a balanced MRR, where the results are again very close, the same (WN18) or worse (FB15K) for ComplEx. Wouldn\\u2019t it be clearer to perform a separate evaluation only considering low-frequency entities to verify this claim?\\nParameter uncertainty is not further handled in the paper, the final approach is a point estimate, which discards the uncertainties obtained by VI. Authors mention (last paragraph of Sec. 4) that for a large embedding dimension, bayesian predictions are worse, while for small dimension, they are better. The author\\u2019s hypothesis is that a more flexible posterior approximation could solve this issue. No concrete numbers or further analysis are provided.\", \"clarity_and_presentation\": \"The result tables should be merged and formatted better. \\nFigures need some work (Fig. 2 looks poorly scaled, all figures should be in vector format for scalability, typos in Fig. 
1)\", \"questions\": [\"How much additional computation is needed to achieve the reported results?\", \"Would it be possible to group the entities in bins by frequencies (say 6-10 bins) and assign each bin a hyperparameter, and run grid search over just 6-10 hyperparameters, and then interpolate between the bins to set hyperparameters per entity as a function of its frequency?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Sound method, but the scope is limited to hyperparameter tuning, and no comparisons with other methods are provided\", \"review\": [\"In this paper, authors propose a probabilistic extension of classic Neural Link Prediction models, such as DistMult and ComplEx. The underlying assumption is that the entity embeddings and the relation embeddings are sampled from a prior Multivariate Normal distribution, whose (hyper-)parameters can be estimated via maximum likelihood. In this paper, authors use Variational Inference (VI) for approximating the posterior distribution over the embeddings, and use Stochastic VI for maximising the Evidence Lower BOund (ELBO) while scaling to large datasets. In Sect. 3, authors introduce the generative process, and show how MAP estimation of the embedding matrices can recover the original models. In Sect. 4, authors start from the intractable marginal likelihood over the data (Eq. 5) for deriving the corresponding ELBO (Eq. 6), which is defined over:\", \"The \\\"hyperparameters\\\" gamma, which define the parameters of the prior Multivariate Normal distribution over the embeddings, and\", \"The parameters gamma of the variational distributions.\"], \"question\": \"why the ELBO (Eq. 6) is not used anywhere in Algorithm 1?\\n\\nThe model does not mention a number of significantly more accurate models proposed in the literature, such as [1].\\n\\nFurthermore, it seems to me that the point of the whole paper is finding efficient ways of estimating the hyperparameters efficiently. In that sense, there are other methods that were not considered, either simple (e.g. random sampling or black-box optimization techniques [2]) or more complex (e.g. hypergradient descent [3]).\\n\\n[1] https://arxiv.org/abs/1707.01476\\n[2] https://github.com/hyperopt/hyperopt\\n[3] https://arxiv.org/abs/1703.04782\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An adaptive hyperparameter tuning method for knowledge graph embeddings\", \"review\": \"This paper proposes a Bayesian extension to knowledge base embedding methods, which can be used for hyperparameter learning. My rating is based on following aspects.\\n\\nNovelty. \\nApplying Bayesian treatment to embedding methods for uncertainty modelling and hyperparameter tuning is not new (examples include PMF [1] and Bayesian PMF [2]), and Sec 3 can be regarded as a knowledge base extension of them with a different likelihood (MF considers user-item pairs while knowledge base considers head-edge-tile triplets). However, it seems that there is little work considering the hyperparameter tuning problems for knowledge base embeddings.\\n\\nQuality & Clarity.\\nThis paper makes two arguments. 1. Small data problems exist, and needs parameter uncertainty; 2. Bayesian treatment allows efficient optimization over hyperparameters. However, as mentioned in Sec 4 and Sec 5, they still use MAP estimations with tuned hyperparameters instead of variational distribution directly. This does not support the parameter uncertainty argument (since there is no uncertainty in parameters of the final model, i.e., those re-trained in line 10 of algorithm 1). More analysis, both theoretically and experimentally, is needed to address this argument. The hyperparameter tuning argument is well-supported by both theoretical analysis and experiments. \\n\\nMy questions are mainly about experiments. Overally, I think current experiments cannot support the claims well and further experiments are needed.\\n1.\\tAs mentioned above, the parameter uncertainty issue hasn\\u2019t been well verified (Figure 3 demonstrates the advantages of hyperparameter tunning instead of uncertainty in parameters).\\n2.\\tTable 1 & 2 demonstrates that hyperparameter tunning using algorithm 1 introduces performance improvement on ComplEx and DistMult. Since the Bayesian treatment is general, such an improvement should be found for other knowledge base embedding methods. \\n3.\\tTime complexity is not analyzed (since Algorithm 1 requires re-train the models).\\n4.\\tAlgorithm 1 is a one-EM-step approximation for optimizing the ELBO. How well such a algorithm approximates the optimal solution of ELBO. For example, what will happens if running line 4-10 for multiple times? Does the performance increase or decrease?\\n\\n[1] Salakhutdinov and Minh, Probabilistic Matrix Factorization, NIPS 2007.\\n[2] Salakhutdinov and Minh, Bayesian Probabilistic Matrix Factorization using Markov Chain Monte Carlo, ICML 2008.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ByGq7hRqKX | Cross-Task Knowledge Transfer for Visually-Grounded Navigation | [
"Devendra Singh Chaplot",
"Lisa Lee",
"Ruslan Salakhutdinov",
"Devi Parikh",
"Dhruv Batra"
] | Recent efforts on training visual navigation agents conditioned on language using deep reinforcement learning have been successful in learning policies for two different tasks: learning to follow navigational instructions and embodied question answering. In this paper, we aim to learn a multitask model capable of jointly learning both tasks, and transferring knowledge of words and their grounding in visual objects across tasks. The proposed model uses a novel Dual-Attention unit to disentangle the knowledge of words in the textual representations and visual objects in the visual representations, and align them with each other. This disentangled task-invariant alignment of representations facilitates grounding and knowledge transfer across both tasks. We show that the proposed model outperforms a range of baselines on both tasks in simulated 3D environments. We also show that this disentanglement of representations makes our model modular, interpretable, and allows for zero-shot transfer to instructions containing new words by leveraging object detectors. | [
"tasks",
"knowledge transfer",
"knowledge",
"words",
"visual objects",
"model",
"navigation",
"navigation recent efforts",
"visual navigation agents",
"language"
] | https://openreview.net/pdf?id=ByGq7hRqKX | https://openreview.net/forum?id=ByGq7hRqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryeVIZUNgV",
"S1giw29tk4",
"Syexze6aCX",
"B1x9nUP5RX",
"r1ekwQDcRQ",
"H1ezHTU9CQ",
"r1xCIjSA2Q",
"H1efLyrc3X",
"BJxzPorY3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544999244040,
1544297571093,
1543520264486,
1543300786289,
1543299927216,
1543298361549,
1541458774287,
1541193546294,
1541131097826
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1386/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1386/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1386/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1386/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1386/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1386/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1386/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1386/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1386/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors have proposed a language+vision 'dual' attention architecture, trained in a multitask setting across SGN and EQA in vizDoom, to allow for knowledge grounding. The paper is interesting to read. The complex architecture is very clearly described and motivated, and the knowledge grounding problem is ambitious and relevant. However, the actual proposed solution does not make a novel contribution and the reviewers were unconvinced that the approach would be at all scalable to natural language or more complex tasks. In addition, the question was raised as to whether the 'knowledge grounding' claims by the authors are actually much more shallow associations of color and shape that are beneficial in cluttered environments.\\nThis is a borderline case, but the AC agrees that the paper falls a bit short of its goals.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Hi authors \\u2014\\u00a0thanks for your answers and updates to the paper. While the gated attention mechanism designed in this paper seems to yield nice interpretable representations (thanks for Fig 9!), I still can't see how this gating mechanism can scale to anything like natural language \\u2014 take the more complex sentences in the Embodied QA dataset [1], for example \\u2014 without major revisions. As such, it's not clear why we should take the Dual GA mechanism to be an important milestone on the way to bigger EQA accomplishments.\\n\\nRe #3: No-Aux seems to converge on this task, true \\u2014 but the representations in figure 9 suggest that convergence is nowhere near optimal. I'm worried the model with the No-Aux constraint will also not scale as-is.\\nI understand that, while learning in this task with model-free RL, there is little an agent can do to learn a word but to accidentally bump into the relevant object or accidentally predict the correct answer. This seems like a fundamental direction for improvement, either by model iteration or novel task design, since *human children* clearly learn much more than the model-free exploration in the current task can support.\\n\\nThanks for adding figure 13 \\u2014 but please note in the caption that these are not an arbitrary \\\"3 models\\\", but the 3 *best* models. (How many training trajectories look like these 3 models? How many don't converge?)\\n\\nDue to the above concerns about model scalability and the framing of the task, I'm keeping my rating as-is.\\n\\n[1]: https://embodiedqa.org/\"}",
"{\"title\": \"List of revisions, looking forward to further discussion\", \"comment\": \"For the convenience of the reviewers and the Area Chair, we would like to list the revisions made to the submission after the reviews:\\n\\n- Addition of visualization figures and videos: \\nAs requested by Reviewer 1, we added policy execution videos with visualization of convolutional output channels and spatial attention at the following link:\", \"https\": \"//sites.google.com/view/emml\\nWe also added Figure 9 containing visualization of the convolutional outputs channels corresponding to different words. Both the videos and figure indicate that our proposed model learns to detect objects and attributes corresponding to the word in relevant convolutional output channels even when it is not trained with auxiliary labels. \\n\\n- Addition of training curves with different seeds:\\nAs requested by Reviewer 1, we added training curves for top 3 models across 3 training runs with different seeds both with and without auxiliary tasks for Easy environment in Figure 13 in the appendix C. The performance of our proposed model is fairly consistent across different runs. The variance across different runs is higher in the NoAux setting as compared to the Aux setting. We also removed the shaded region in other training curves to avoid confusion.\\n\\n- Addition of references to missing prior work:\\nIn the original submission, we discussed prior work on visually-grounded navigation tasks. We thank Reviewers 2 and 3 for pointing us to relevant prior work on multimodal learning in static settings which do not involve navigation or reinforcement learning. We added references to prior methods which use similar attention mechanisms for Visual Question Answering [1,2,3] and grounding audio to vision [4]. We also added a reference to prior work which explored the use of object recognition as an auxiliary task for Visual Question Answering [3]. In contrast to these works, we propose a new non-trivial multitask learning problem of multimodal tasks with different action spaces, and define a novel scenario of testing cross-task knowledge transfer. We also create appropriate datasets, RL environments, and train-test splits for evaluating the above. Even though the individual attention mechanisms are similar to ones used in prior work, we propose a unique combination of these attention mechanisms to be used with specific kind of representations in order to achieve cross-task knowledge transfer.\\n\\n- Additional experiment on using text as subtitles:\\nAs requested by Reviewer 3, we ran an experiment where we superimposed the text in the top part of the image and trained a unimodal RL network with the same convolutional network and recurrent policy module as in our proposed model using PPO. The test performance of the model was similar to the Image only baseline (0.21 SGN accuracy, and 0.10 EQA accuracy for Easy Aux setting). This highlights the need for multimodal learning and dual-attention.\\n\\n- Minor changes:\\nRemoved the claim of zero-shot transfer in the extension to new words using object detectors.\\nRevised Figure 10 to correctly reflect the actions.\\nRevised Table 3 to specify out of vocabulary words.\\n\\nIf there are any follow-up questions or additional queries, we will be happy to provide additional details. 
We look forward to actively participating in the follow-up discussion.\\n\\n[1] Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering, Huijuan Xu, Kate Saenko\\n[2] Compositional Attention Networks for Machine Reasoning, Drew A. Hudson, Christopher D. Manning\\n[3] Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks, Tanmay Gupta, Kevin Shih, Saurabh Singh, Derek Hoiem\\n[4] The Sound of Pixels, Hang Zhao, Chuang Gan, Andrew Rouditchenko, Carl Vondrick, Josh McDermott, Antonio Torralba\"}",
"{\"title\": \"Discussion regarding novelty, adding missing references to prior works\", \"comment\": \"Thank you for providing critical feedback. We would like to address the concern of novelty in the presented work. First, irrespective of the method we believe the problem introduced in the paper is novel. It introduces not only a new non-trivial multitask learning problem of multimodal tasks with different action spaces, but also defines a novel scenario of testing cross-task knowledge transfer. We also create appropriate datasets, RL environments, and train-test splits for evaluating the above.\\n\\nIn terms of the method, we agree that the mechanism of Gated-attention or use of auxiliary tasks is not novel. However, the purpose of the paper is not to introduce a new attention mechanism or auxiliary task, but to tackle the problem of embodied multimodal multitask learning and achieve cross-task knowledge transfer. For this purpose, we proposed a unique combination of these attention mechanisms to be used with specific kind of representations (GA-BoW followed by SA followed by GA-Sent), which we believe is novel. This unique combination leads to alignment of textual and visual representations with each other and the answer space and helps achieve cross-task knowledge transfer. We also provide justification behind the choice of this unique combination. The ablation tests indicate that these attention mechanisms by themselves or a trivial combination of the attention mechanisms are both not sufficient to achieve cross-task knowledge transfer. \\n\\nThanks for pointing us to the relevant prior work in [1], [2] and [3]. We agree that the attention mechanisms in the Dual-Attention unit are similar to the attention mechanisms used in these works and we have made revisions in the paper to add these references. However, these prior works address VQA, which does not involve navigation as in EQA. In our tasks, the action space includes navigational actions as well as answer words, and we reuse the same architecture for both semantic goal navigation and EQA to enable cross-task knowledge transfer.\\n\\n[1] Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering, Huijuan Xu, Kate Saenko\\n[2] Drew A. Hudson, Christopher D. Manning, Compositional Attention Networks for Machine Reasoning\\n[3] Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks, Tanmay Gupta, Kevin Shih, Saurabh Singh, Derek Hoiem\"}",
"{\"title\": \"New visualization figures and videos showing alignment of words with corresponding convolutional output channels, discussion regarding need for auxiliary task and use of single word attributes\", \"comment\": \"Thank you for providing critical feedback.\\n\\n1a. Regarding the zero-shot transfer claim:\\nWe used the term 'zero-shot' to imply that we do not use any trajectories for training the policy after transfer. So it is zero-shot transfer for the policy, but we agree that some static samples are required for training the object detectors. We have revised the manuscript to remove the claim of zero-shot transfer.\\n\\n1b. \\u201cWhy is this evaluated only for SGN and not for EQA?\\u201d:\\nThis is because we need additional information for EQA (e.g., whether the new word corresponds to an object type or attribute), which cannot be obtained from an object/attribute detector.\\n\\n2a. Use of single word objects and attributes:\\nWe agree that the proposed model only handles objects and attributes described by single words. One possible way to scale the proposed architecture to more naturalistic datasets with multi-word objects and attributes is to replace the Bag-of-Words representation by a \\u2018Bag-of-Concepts\\u2019 representation where a separate module is trained to extract \\u2018concepts\\u2019 from language input. \\u2018Skull key\\u2019 or \\u2018metallic gray\\u2019 can be classified as concepts. The gated-attention can then work on concepts rather than words.\\nWhile our model might be able to handle multi-word object names and attributes using a \\u2018Bag-of-Concepts\\u2019 representation, more complex architectures are certainly required to handle other complexities of language such as negation, conjunctions, prepositions and so on. In future, we plan to look into learning to compose dual-attention units in a recursive manner to capture the relationships between words and phrases in order to handle these complexities. The fact that all the baselines fail to generalize better than a naive Text-only model emphasizes the difficulty of the task even with single word objects and attributes.\\n\\n2b. Visualization of convolutional outputs:\\nThanks for bringing up this interesting point. We added visualizations of the convolutional network outputs in Figure 9 in Appendix A for both Aux and No Aux models. We visualize the output corresponding to 7 words for the same frame along with the auxiliary task labels. As expected, the Aux model predictions are very close to the auxiliary task labels. More interestingly, the convolutional outputs of the No Aux model show that words and objects/properties in the images have been properly aligned even when the model is not trained with any auxiliary task labels. We also uploaded some visualization videos showing convolutional output channels corresponding to different words as well as spatial attention at the following link:\", \"https\": \"//sites.google.com/view/emml\\n\\n3. Need for auxiliary objectives:\\nThe most important benefit of using auxiliary tasks is the improvement in sample efficiency. Although the No Aux model can perform as well as Aux model given enough training data, the sample efficiency of Aux models is much better than No Aux models as shown by the training curves in Figure 7 (Aux) and Figure 12 (No Aux) in the appendix. 
The only way to learn new objects without auxiliary tasks is to randomly bump into objects or predict an answer, receive rewards, and make a connection between the visual and textual modalities and the rewards. Auxiliary tasks provide additional supervision which helps in training the model much faster. As we scale embodied RL agents to learn thousands of objects and attributes, it might not be possible to train such models without auxiliary tasks, as the trial-and-error method of learning new objects and attributes might require samples beyond current computing capabilities. The sample inefficiency is partly an optimization issue (not just with PPO but with any RL algorithm in general), but the task itself is also very challenging with just sparse rewards.\\n\\nWhile auxiliary tasks improve the sample efficiency, we show that the proposed model works well even without the auxiliary tasks. This is useful for scenarios where we do not have the auxiliary labels for all or a subset of the objects and attributes.\\n\\nResponse to minor notes\\n4a. We revised Figure 10 to correctly reflect the actions.\\n4b. The shaded region does not represent the variance across multiple runs. Each point in the curves in Figures 7 and 8 is calculated using a weighted running average across time (Gaussian smoothing); the shaded region represents the amount of smoothing applied at each point in a single run. We removed the shaded region in all relevant figures to avoid this confusion. We also added training curves for the top 3 models across 3 training runs with different seeds, both with and without auxiliary tasks, for the Easy environment in Figure 13 in Appendix C. The performance of our proposed model is consistent across different runs. The variance across different runs is higher in the NoAux setting as compared to the Aux setting.\\n4c. The out-of-vocabulary words are 'red' and 'pillar', as mentioned in the text. We made revisions to specify this in Table 3 as well.\"}",
"{\"title\": \"Regarding the need for multimodal learning and dual-attention\", \"comment\": \"Thank you for providing critical feedback.\", \"regarding_the_need_for_multimodal_learning_and_dual_attention\": \"It is true that the visual and textual modalities can be fused by using subtitles. However, the alignment of knowledge between the words in the subtitle, visual objects, and the answer space still remains challenging even in this setting. We believe that the dual attention unit is still essential in this setting, as a \\\"DeepMind Atari\\\" type of RL solution would still overfit to the training set of instructions and questions (similar to all the baselines used in our paper), and would not generalize to instructions or questions with unseen words. To verify this, we ran an experiment, where we superimposed the text in the top part of the image and trained a unimodal RL network with the same convolutional network and recurrent policy module as in our proposed model using PPO. The test performance of the model was similar to the Image only baseline (0.21 SGN accuracy, and 0.10 EQA accuracy for Easy Aux setting). We believe that this modified unimodal setting involves all the challenges of our setting and an additional challenge of learning to extract text from the image.\\n\\n\\u201cIn addition, there are studies (https://arxiv.org/abs/1804.03160) where sound and video are , in unsupervised manner, correlated together.\\u201d\\n\\nThank you for pointing us to this relevant work. We have made revisions to discuss this work with respect to our model.\"}",
"{\"title\": \"The manuscript is clearly written and adds to the state of the art in multidomain machine learning. Recommend as a poster.\", \"review\": \"The system is explained thoroughly, and with the help of nice looking graphics the network architecture and its function is clearly described. The paper validates the results against baselines and shows clearly the benefit of double domain learning. The paper is carefully written and follows the steps required for good scientific work.\\n\\nPersonally, I do not find this particularly original, even with the addition of the zero-shot learning component. \\n\\nAs a side note, the task here does not seem to need a multitask solution. Adding the text input as subtitles to the video gives essentially the same information that is used in the setup. The resulting inclusion of text could utilise the image attention models in a similar manner as the GRU is used in the manuscript for the text. In this case the problem stated in the could be mapped to a \\\"DeepMind Atari\\\" type of RL solution, with text as a natural component, but added as visual clue to the game play. Hence, I am not convinced that the dual attention unit is essential to the performance the system.\\n\\nIn addition, there are studies (https://arxiv.org/abs/1804.03160) where sound and video are , in unsupervised manner, correlated together. This contains analogous dual attention structure as the manuscript describes, but without reinforcement learning component.\\n\\nI would recommend this as a poster.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Sensible model, but missing some important justification / visualization / error analysis\", \"review\": \"The authors propose a multitask model using a novel \\u201cdual-attention\\u201d unit for embodied question answering (EQA) and semantic goal navigation (SGN) in the virtual game environment ViZDoom. They outperform a number of baseline models originally developed for EQA and SGN (but trained and evaluated in this multitask paradigm).\\n\\nComments and questions on the model and evaluation follow.\\n\\n1. Zero-shot transfer claim:\\n1a. This is not really zero-shot transfer, is it? You need to derive object detectors for the meanings of the novel words (\\u201cred\\u201d and \\u201cpillar\\u201d from the example in the paper). It seems like this behavior is supported directly in the structure of the model, which is great \\u2014 but I don\\u2019t think it can be called \\u201czero-shot\\u201d inference. Let me know if I\\u2019ve misunderstood!\\n1b. Why is this evaluated only for SGN and not for EQA?\\n\\n2. Dual attention module:\\n2a. The gated attention model only makes sense for inputs in which objects or properties (the things picked out by convolutional filters) are cued by single words. Are there examples in the dataset where this constraint hold (e.g. negated properties like \\u201cnot red\\u201d)? How does the model do? (How do you expect to scale this model to more naturalistic datasets with this strong constraint?)\\n2b. A critical claim of the paper is that the model learns to \\u201calign the words in both the tasks and transfer knowledge across tasks.\\u201d (Earlier in the paper, the claim is that \\u201cThis forces the convolutional network to encode all the information required with respect to a certain word in the corresponding output channel.\\u201d) I was expecting you would show some gated-attention visualizations (not spatial-attention visualizations, which are downstream) to back up this claim. Can you show me visualizations of the gated-attention weights (especially when trained on the No-Aux task) which demonstrate that words and objects/properties in the images have been properly aligned? Show that e.g. the filter at index i only picks out objects/properties cued by word i?\\n\\n3. Auxiliary objective: it seems like this objective solves most of the language understanding problem relevant in this task. Can you motivate why it is necessary? What is missing in the No-Aux condition, exactly? Is it just an issue with PPO optimization? Can you do error analysis on No-Aux to motivate the use of the Aux task?\\n\\n4. Minor notes:\\n4a. In appendix A, the action is labeled \\u201cTurn Left\\u201d but the frames seem to suggest that the agent is turning right.\\n4b. How are the shaded regions estimated in figs. 7, 8? They are barely visible \\u2014 are your models indeed that consistent across training runs? (This isn\\u2019t what I\\u2019d expect from an RL model! This is true even for No-Aux..?)\\n4c. Can you make it clear (via bolding or coloring, perhaps) which words are out-of-vocabulary in Table 3? (I assume \\u201clargest\\u201d and \\u201csmallest\\u201d aren\\u2019t OOV, for example?)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Promising results in cross-task transfer. Missing references to prior works\", \"review\": \"This work proposes to train an RL-based agent to simultaneously learn Embodied Question Answering and Semantic Goal Navigation on the ViZDoom dataset. The proposed model incorporates visual attention over the input frames, and also further supervises the attention mechanism by incorporating an auxiliary task for detecting objects and attributes.\", \"pros\": \"-Paper was easy to follow and well motivated\\n-Design choices were extensively tested via ablation\\n-Results demonstrate successful transfer between SGN, EQA, and the auxiliary detection task\", \"cons\": \"-With the exception of the 2nd round of feature gating in equation (3), I fail to see how the proposed gating -> spatial attention scheme is any different from the common inner-product based spatial attention used in a large number of prior works, including [1], [2], and [3] and many more.\\n-The use of attribute and object recognition as an auxiliary task for zero-shot transfer has been previously explored in [3]\\n\\n\\nOverall, while I like the results demonstrating successful inductive transfer across tasks, I did not find the ideas presented in this work to be sufficiently novel or new.\\n\\n[1] Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering, Huijuan Xu, Kate Saenko\\n[2] Drew A. Hudson, Christopher D. Manning, Compositional Attention Networks for Machine Reasoning\\n[3] Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks, Tanmay Gupta, Kevin Shih, Saurabh Singh, Derek Hoiem\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
Syf9Q209YQ | Manifold regularization with GANs for semi-supervised learning | [
"Bruno Lecouat",
"Chuan-Sheng Foo",
"Houssam Zenati",
"Vijay Chandrasekhar"
] | Generative Adversarial Networks are powerful generative models that can model the manifold of natural images. We leverage this property to perform manifold regularization by approximating a variant of the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the semi-supervised feature-matching GAN we achieve state-of-the-art results for semi-supervised learning on CIFAR-10 benchmarks when few labels are used, with a method that is significantly easier to implement than competing methods. We find that manifold regularization improves the quality of generated images, and is affected by the quality of the GAN used to approximate the regularizer. | [
"semi-supervised learning",
"generative adversarial networks",
"manifold regularization"
] | https://openreview.net/pdf?id=Syf9Q209YQ | https://openreview.net/forum?id=Syf9Q209YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxtrZEmeN",
"H1elMgJc0m",
"H1g061y90Q",
"Syl4_k19Rm",
"BJlYMJJ507",
"Byl2kX8c2Q",
"rkepvb7c2Q",
"BklhkXfqn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544925504540,
1543266311582,
1543266245970,
1543266156227,
1543266065093,
1541198563778,
1541185893398,
1541182180405
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1385/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1385/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1385/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1385/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1385/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1385/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1385/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1385/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a method to perform manifold regularization for semi-supervised learning using GANs. Although the SSL results in the paper are competitive with existing methods, R1 and R3 are concerned about the novelty of the work in the light of recent manifold regularization SSL papers with GANs, a point that the AC agrees with. Given the borderline reviews and limited novelty of the core method, the paper just falls short of the acceptance threshold for ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper: reasonably good SSL results but limited novelty\"}",
"{\"title\": \"Thanks for the feedback - we've updated baselines\", \"comment\": \"Thank you very much for your constructive comments.\\n\\nFirst, with respect to baselines, we have updated the results tables to include the additional baselines mentioned, as well as runs for VAT(+EntMin) with lower numbers of labels on CIFAR-10. After updating these baselines, we note that our method still achieves state-of-the-art performance in the regime where 1000 and 2000 labels are used for training on CIFAR-10, with and without data augmentation. We have also updated the text to tone down the claims.\\n\\nIn addition, we note that the highest performance in many of the mentioned baselines (and with VAT) are obtained with a combination of multiple approaches. When our method is compared head-to-head against the proposed method in the mentioned papers, it is competitive and sometimes outperforms them, for instance, in experiments on CIFAR-10 with 4000 labels\", \"with_augmentation\": \"Adversarial Dropout [1] (11.32) vs ours (11.79 +/- 0.25)\", \"without_augmentation\": \"Improved GAN + SNTG [2] (14.93) vs ours (14.34 +/- 0.17)\\n\\nDefining the best combination of techniques to achieve the highest performance is an interesting direction of future work; our preliminary experiments combining Mean Teacher with manifold regularization have shown some improvements and we will include the results in the final version of the paper.\\n\\nSecond, with respect to novelty, we would like to re-iterate our contributions since they may not have been clear. First, while manifold regularization has been explored in (Kumar et al 2017) and (Qi et al 2018), we proposed an efficient and effective approximation of manifold regularization that is far easier to compute than the involved method in (Kumar et al 2017). Moreover, we point out issues with the standard finite difference approximation to the Jacobian regularization and propose a solution to this problem by ignoring the magnitude of the gradient and using only the direction information. Moreover, we showed manifold regularization provides significant improvements to image quality and linked it to gradient penalties used for stabilizing GAN training, which were not shown by (Qi et al 2018). \\n\\nWe did try to use spectral normalization but did not observe any gains for semi-supervised learning.\\n\\nFinally we would like to emphasize the conceptual differences between our method and other smoothing methods like spectral normalization - such methods perform isotropic regularization, whilst ours performs anisotropic smoothing along the manifold directions of generated data-points. We showed through experiments using (isotropic) ambient regularization that anisotropic regularization is more beneficial in the case of semi-supervised learning.\"}",
"{\"title\": \"Thanks for the feedback - we've addressed your questions and clarified novelty of our approach.\", \"comment\": \"Thank you for your constructive comments. We are glad that you found our experiments extensive and that our approach provides significant improvements.\\n\\nIn response to your comment that \\\"similar ideas have been undertaken (e.g., Mescheder et al 2018), but in different contexts\\\" we would like to take this opportunity to clarify the novelty of our approach.\\n\\nFirst, with regards to (Mescheder et al 2018), our method is not simply the application of existing gradient penalties (GPs) in the context of semi-supervised learning. Our approach is conceptually different since the regularizer proposed by (Mescheder et al 2018) is an (isotropic) ambient regularizer in the input space, whereas the regularizer we used performs (anisotropic) smoothing on the manifold parametrized by the latent generative model. We believe we are the first to show the benefits of anisotropic Jacobian regularizers in the context of semi-supervised learning. \\n\\nMoreover, an important contribution of our work is the efficient computation of such gradient penalties in the context of semi-supervised learning. Current application of such penalties uses the exact Jacobian which is especially computationally expensive in the case of semi-supervised learning as it is now a tensor (one matrix per class in the case of Improved GAN), which quickly becomes intractable with large numbers of classes. We proposed and demonstrated the effectiveness of an efficient (non-obvious) approximation of the Jacobian-based regularizer which significantly accelerates training.\\n\\nWe provide responses to further questions/comments below:\", \"q\": \"\\\"It should be stated that bold values in tables do not represent best results (as it is usually the case) but rather results for the proposed approach.\\\"\", \"a\": \"We have revised the tables such that bold values represent the best results for clarity.\"}",
"{\"title\": \"Thanks for the feedback - we've addressed your questions/comments\", \"comment\": \"Thank you for your encouraging comments especially with regards to novelty and thoroughness of our experiments.\\n\\nWe have addressed the minor issues you highlighted; answers to your questions are also provided below:\", \"q\": \"\\\"What are the differences of the 6 pictures in Figure A7? Iterations?\\\"\", \"a\": \"These are the results from 6 different runs.\"}",
"{\"title\": \"Updated baselines, generated images and fixed various issues\", \"comment\": [\"Dear Reviewers,\", \"Thank you for your reviews and constructive feedback. We have included some new figures and updated tables in our paper as a result of the feedback.\", \"Briefly, the changes we made are as follows:\", \"We updated results tables with additional baselines.\", \"We moved the table for results including data augmentation into the main text to show a comparison with newer baselines.\", \"We included generated images with and without manifold regularization or ambient regularization (SVHN and CIFAR-10) in the Appendix (Figure A5).\", \"Due to the lack of space, we moved the tangent images figure in appendix (Figure A7).\", \"We have addressed other minor comments.\", \"We would be happy to address any further questions or concerns.\"]}",
"{\"title\": \"Interesting Approach with Good Results\", \"review\": \"Review for MANIFOLD REGULARIZATION WITH GANS FOR SEMISUPERVISED LEARNING\", \"summary\": \"The paper proposed to incorporate a manifold regularization penalty to the GAN to adapt to semi-supervised learning. They approximate this penalty empirically by calculating stochastic finite differences of the generator\\u2019s latent variables. \\nThe paper does a good job of motivating the additional regularization penalty and their approximation to it with a series of experiments and intuitive explanations. The experiment results are very through and overall promising. The paper is presented in a clear manner with only minor issues. \\nNovelty/Significance:\\nThe authors\\u2019 add a manifold regularization penalty to GAN discriminator\\u2019s loss function. While this is a simple and seemingly obvious approach, it had to be done by someone. Thus while I don\\u2019t think their algorithm is super novel, it is significant and thus novel enough. Additionally, the authors\\u2019 use of gradients of the generator as an approximation for the manifold penalty is a clever.\\nQuestions/Clarity:\\nIt would be helpful to note in the description of Table 3 what is better (higher/lower). Also Table 3 seems to have standard deviations missing in Supervised DCGANs and Improved GAN for 4000 labels. And is there an explanation on why there isn\\u2019t an improvement in the FID score of SVHN for 1000 labels?\\nWhat is the first line of Table 4? Is it supposed to be combined with the second? If not, then it is missing results. And is the Pi model missing results or can it not be run on too few labels? If it can\\u2019t be run, it would be helpful to state this.\\nOn page 11, \\u201cin Figure A2\\u201d the first word needs to be capitalized. \\nIn Figure A1, why is there a dark point at one point in the inner circle? What makes the gradient super high there?\\nWhat are the differences of the 6 pictures in Figure A7? Iterations?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Borderline: Manifold Regularization with GANS for SEMI-Supervised Learning\", \"review\": \"This paper builds upon the assumption that GANs successfully approximate the data manifold, and uses this assumption to regularize semi-supervised learning process.\\nThe proposed regularization strategy enforces that a discriminator or a given classifier should be invariant to small perturbations on the data manifold z. It is empirically shown that naively enforcing such a constraint by randomly adding noise to z could lead to under-smoothing or over-smoothing in some cases which can harm the final classification performance. Consequently, the proposed regularization technique takes a step of tunable size in the direction of the manifold gradient, which has the effect of smoothing along the direction of the gradient while ignoring its norm.\\n \\nExtensive experiments have been conducted, showing that the proposed approach\\noutperforms or is comparable with recent state-of-the-art approaches on cifar 10, especially in presence of fewer labelled data points. On SVHN however, the proposed approach fails in comparison with (Kumar et al 2017) but performs better than other approaches.\\n\\nFurthermore, it has been shown that adding the proposed manifold regularization technique to the training of GAN greatly improves the image quality of generated images (in terms of FID scores and inception scores). Also, by combining the proposed regularizer with a classical supervised classifier (via pre-training a GAN and using it for regularization) decreases classification error by 2 to 3%.\\n \\nFinally, it has also been shown that after training a GAN using the manifold regularization, the algorithm is able to produce similar images giving a low enough perturbation of the data manifold z.\\n \\nOverall, this paper is well written and show significant improvements especially for image generation. However, the novelty is rather limited as similar ideas have been undertaken (e.g., Mescheder et al 2018), but in different contexts. The paper would be improved if the following points are taken into account:\\n \\nA comparison with Graph Convolutional Network based techniques seems appropriate (e.g. Kipf and Welling 2017).\\nHow do the FID/Inception improvements compare to (Mescheder et al 2018)?\\nIt would be interesting to discuss why the FID score for SVHN gets worse in presence of 1000 labels.\\nAlthough there is a clear improvement in FID scores for Cifar10. It would be informative to show the generated images w/ and w/o manifold regularization.\\nMore analysis should be provided on why (Kumar et al 2017) perform so well on SVHN.\\nIt should be stated that bold values in tables do not represent best results (as it is usually the case) but rather results for the proposed approach.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A simple but interesting idea, but the results are not very significant and some baselines are missing.\", \"review\": \"The paper tackles the problem of semi-supervised classification using GAN-based models. They proposed a manifold regularization by approximating the Laplacian norm using the stochastic finite difference. The motivation is that making the classifier invariant to perturbations along the manifold is more reasonable than random perturbations. The idea is to use GAN to learn the manifold. The difficulty is that (the gradient of) Laplacian norm is impractical to compute for DNNs. They stated that another approximation of the manifold gradient, i.e. adding Gaussian noise \\\\delta to z directly (||f(z) - f(g(z+\\\\delta))||_F) has some drawbacks when the magnitude of noise is too large or too small. The authors proposed another improved gradient approximation by first computing the normalized manifold gradient \\\\bar r(z) and then adding a tunable magnitude of \\\\bar r(z) to g(z), i.e., ||f(z) - f(g(z) +\\\\epsilon \\\\bar r(z) )||_F. Since several previous works Kumar et al. (2017) and Qi et al. (2018) also applied the idea of manifold regularization into GAN, the authors pointed out several advantages of their new regularization.\", \"pros\": [\"The paper is clearly written and easy to follow. It gives some intuitive explanations of why their method works.\", \"The idea is simple and easy to implement based on a standard GAN.\", \"The authors conduct various experiments to show the interaction of the regularization and the generator.\"], \"cons\": [\"For semi-supervised classification, the paper did not report the best results in other baselines. E.g., in Table 1 and 2, the best result of VAT (Miyato et al., 2017) is VAT+Ent, 13.15 for CIFAR-10 (4000 labels) and 4.28 for SVHN (1000 labels). The performance of the proposed method is worse than the previous work but they claimed \\\"state-of-the-art\\\" results. The paper also misses several powerful baselines of semi-supervised learning, e.g. [1,2]. The experimental results are not very convincing because many importance baselines are neglected.\", \"The paper does not have a significant novel contribution, but rather extends GANs (improved-GAN mostly) with a manifold regularization, which has been explored in many other works Kumar et al. (2017) and Qi et al. (2018).\", \"I'm wondering whether other smoothness regularizations can achieve the same effect when applied to semi-supervised learning, e.g. spectral normalization[3]. It would be better to compare with them.\"], \"references\": \"[1] Adversarial Dropout for Supervised and Semi-Supervised Learning, AAAI 2018\\n[2] Smooth Neighbors on Teacher Graphs for Semi-supervised Learning, CVPR 2018\\n[3] Spectral Normalization for Generative Adversarial Networks, ICLR 2018\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |